
ISYE 6644 Lecture Transcript

THIS IS NOT COMPLETE AND NEEDS YOUR COLLABORATIVE EFFORTS!
Module 5 is new and needs attention.

Knowledge Check questions have been moved here:


ISYE6644 Self-Assessment Questions

Important notes on adding content --


Thank you to everyone who has added and cleaned text and imported images into this transcript document.
There are a few best practices you should keep in mind -- read this first! While you are contributing:
Replace errors and poor phrasing entirely. If you come across transcription errors or
clunkily phrased language, change them. You don’t have to -- and should not -- employ
strikethrough, brackets, highlighting, comments, or any other annotation. Just be bold and
make the changes.

Don’t duplicate or replace text with images. The document is beginning to respond
sluggishly because of its size. Instead of adding screenshots of text, reproduce it using native
formatting (bolding, bullets, underlines, etc.). Also, replacing text with images makes it
unsearchable. The big exception to this is mathematical formulas, which are much easier to
understand in proper notation.

Crop images before adding them. Google Docs holds on to the entire image file, which again
adds to file size and sluggishness.

Import from PowerPoint where possible. Not only are images crisper than from screenshots of
the lecture videos, but you can resize images and delete elements that might obscure the graphic
you are importing.

Use superscript and subscript as warranted. Press [Control] (or [Command] on a Mac) + [Period] to
toggle superscript and [Control] + [Comma] to toggle subscript.

Thanks again for all of your help!

About This Document
This document was originally created in the summer of 2019 and is maintained
collaboratively through the efforts of ISYE 6644 students, using transcripts and screenshots
from the video lectures. You are strongly encouraged to improve the formatting and layout,
add or adjust images, bold key words, and even condense copy.

It is expected that sections may be added, removed, or modified -- in which case, again, please
take the liberty of adjusting this document to match.

Some tips on formatting: the Weeks (e.g., Week 1), Modules (e.g., 1. Introduction), and
Sub-Modules (e.g., 1.3 (C): What is Modeling?) are formatted respectively as Heading 1,
Heading 2, and Heading 3. These can be adjusted in the format menu, or by pressing Ctrl +
1, Ctrl + 2, and Ctrl + 3, respectively, while the cursor is on the line you wish to change.

Pressing Ctrl + , will toggle subscript text on and off.

Important notes on adding content -- 1


About This Document 3
Module 1 4
Lesson 1: Getting to Know You 4
Prerequisites 4
Suggested Resources 4
Grading (Updated Fall 2021) 5
Programming Requirements 5
Lesson 2: Syllabus 6
Lesson 3: Whirlwind Tour 10
Model types and definitions 10
Knowledge Check 1.3 14
Lesson 4: Whirlwind Tour – History 14
Early Simulation 14
Computer Simulation 15
Knowledge Check 1.4 17
Lesson 5: What Can We Do For You? 17
The Flaw of Averages 18
Other Applications of Simulation 19
Hypothesis Testing 21

Knowledge Check 1.5 22
Lesson 6: Some Baby Examples 22
Example 1: The Birthday Problem 22
Example 2: Estimating pi 24
Knowledge Check 1.6 27
Lesson 7: More Baby Examples 27
Knowledge Check 1.7 34
Lesson 8: Generating Randomness 34
Knowledge Check 1.8 37
Lesson 9: Simulation Output Analysis 37
Knowledge Check 1.9 40
Module 2 41
Slide Decks 41
OPTIONAL: Lesson 1: Calculus Primer 41
OPTIONAL: Lesson 2: Saved By Zero! Solving Equations 46
OPTIONAL: Lesson 3: Integration 49
OPTIONAL: Lesson 4: Integration Computer Exercises 52
Lesson 5: Probability Basics 54
Lesson 6: Simulating Random Variables 59
Lesson 7: Great Expectations 63
Lesson 8: Functions of a Random Variable 68
Lesson 9: Jointly Distributed Random Variables 74
OPTIONAL: Lesson 10: Conditional Distributions/Expectation 79
Lesson 11: Covariance and Correlation 85
Lesson 12: Probability Distributions 90
Lesson 13: Limit Theorems 95
OPTIONAL: Lesson 14: Introduction to Estimation 97
OPTIONAL: Lesson 15: Maximum Likelihood Estimation 102
OPTIONAL: Lesson 16: Confidence Intervals 105
Module 3 108
Module 3 Slide Decks: 108
Lesson 1: Stepping Through a Differential Equation 108
Lesson 2: Monte Carlo Integration 112
Lesson 3: Monte Carlo Integration Demo 115
Lesson 4: Making Some Pi 117
Lesson 5: A Single-Server Queue 120
Lesson 6: An (s,S) Inventory System 126
Lesson 7: An (s,S) Inventory System Demo 130
Lesson 8: Simulating Random Variables 131

Lesson 9: Simulating Random Variables Demo 135
Lesson 10: Spreadsheet Simulation 136
Module 4 140
Slide Decks: 140
Lesson 1: Steps in a Simulation Study 140
Lesson 2: Some Useful Definitions 143
Lesson 3: Time-Advance Mechanisms 146
Lesson 4: Two Modeling Approaches 152
Lesson 5: Simulation Languages 157
Module 5 160
Lesson 1: Introduction 160
Lesson 2: Process Interaction Review 162
Lesson 3.1: Let's Meet Arena 163
Lesson 3.2: Let's Meet Arena Demo 165
Lesson 4.1: Basic Process 166
Lesson 4.2: Basic Process Demo 167
Lesson 5.1: Create Process Dispose Modules 168
Lesson 5.2: Create Process Dispose Modules Demo 170
Lesson 6.1: Details on the Process Module 172
Lesson 6.2: Details on the Process Module Demo 175
Lesson 7.1: Resource Schedule Queue Spreadsheets 177
Lesson 7.2: Resource Schedule Queue Spreadsheets Demo 179
Lesson 8.1: The Decide Module 182
Lesson 8.2: The Decide Module Demo 183
Lesson 9.1: The Assign Module 185
Lesson 9.2: The Assign Module Demo 187
Lesson 10.1: Attribute Variable Entry Spreadsheet 189
Lesson 10.2: Attribute Variable Entry Spreadsheet Demo 190
Lesson 11.1: Arena Internal Variables 192
Lesson 11.2: Arena Internal Variables Demo 193
Lesson 12.1: Displaying Variables Graph Results 194
Lesson 12.2: Displaying Variables Graph Results Demo 194
Lesson 13.1: Run Batch Separate Record Modules 196
Lesson 13.2: Run Batch Separate Record Modules Demo 198
Lesson 14.1: Run Setup Control 199
Lesson 14.2: Run Setup Control Demo 201
Lesson 15.1: Simple Two Channel Manufacturing Example 203
Lesson 15.2: Simple Two Channel Manufacturing Example Demo 206
Lesson 16.1: Fake Customers 208

Lesson 16.2: Fake Customers Demo 209
Lesson 17.1: Advanced Process Template 211
Lesson 17.2: Advanced Process Template Demo 212
Lesson 18.1: Resource Failures and Maintenance 213
Lesson 18.2: Resource Failures and Maintenance Demo 215
Lesson 19.1: The Blocks Template 216
Lesson 19.2: The Blocks Template Demo 217
Lesson 20.1: The Joys Of Sets 219
Lesson 20.2: The Joys Of Sets Demo 222
Lesson 21: Description of Call Center Example 223
Lesson 22.1: Call Center 226
Lesson 22.2: Call Center Demo 227
Lesson 23.1: An Inventory System 231
Lesson 23.2: An Inventory System Demo 232
Lesson 24.1: One Line Versus Two Lines 235
Lesson 24.2: One Line Versus Two Lines Demo 236
Lesson 25.1: Crazy Re-Entrant Queue 237
Lesson 25.2: Crazy Re-Entrant Queue Demo 238
Lesson 26.1: SMARTS and Rockwell Files 239
Lesson 26.2: SMARTS and Rockwell Files Demo 240
Lesson 27.1: A Manufacturing System 240
Lesson 27.2: A Manufacturing System Demo 242

Module 1

Lesson 1: Getting to Know You


In this first lesson, Getting to Know You, we'll go over the course info and objectives:

1. To identify simulation models and recognize simulation studies, and to figure out how to use them.
2. To illustrate the organization of simulation languages, including modeling with Arena, which is a
comprehensive simulation package that has very, very nice animation capabilities, almost like a computer
game.
3. To concentrate on certain statistical aspects of simulation, including input analysis, random variate
generation, output analysis, and what are known as variance reduction techniques.

Prerequisites

You should know some probability and statistics at the level of our ISyE 2027 and 2028 courses, and should
have stochastic processes in your background. You should also be familiar with some programming language,
and maybe even a spreadsheet package. But don't panic: the course is designed to be as self-contained as
possible.

Suggested Resources

“Simulation Modeling and Analysis” (5th edition) by A. M. Law. The 5th edition is from 2015, but any of the
earlier editions will actually serve pretty much just as well.

“Simulation with Arena” (6th edition) by Kelton, Sadowski, and Zupick, which concentrates primarily on the
simulation modeling language called Arena.

The Arena software, which you can get from the website listed below. It is a very nice product. It has some
limitations because it is the student version, but for all the work that we'll do in this class, it is sufficient.

Grading (Updated Fall 2021)

We're going to have three tests. Midterms 1 & 2 are 25% each and the Final carries 30%.

The remaining 20% consists of homework, a project, and whether I like you.

Homework will be assigned after every module, so we'll have at least 10 assignments in all by the time the
course is over. As I said before, I'm going to provide extensive course notes on the website. Now, that does not
mean that you can just go around and print out the notes and skip class. You really should pay attention to
what we go over, because sometimes I give out little clues that might not be in the course notes themselves.
Always a good idea to watch everything so you don't miss anything.

Programming Requirements

The course will involve a lot of programming, in the Arena simulation language and also in some spreadsheet
packages. But you do have some flexibility. In any case, you can expect to use the following general types of
languages:

● A spreadsheet package. Everybody knows Excel these days. I'm going to have some spreadsheet
add-ons that we'll talk about when we need them. They'll be widely available and not too hard to use.
● A real programming language, maybe Matlab or Python, or whatever your favorite language happens to
be. I'm not going to mandate the language that you use, but you should have some familiarity with
programming.
● A simulation language like Arena. If for some reason you don't want to use Arena, that is actually no
problem, but you'll be expected to do the corresponding assignments in whatever your favorite language
happens to be.

Lesson 2: Syllabus
Generally speaking, the course will have certain lessons that emphasize math and statistical issues, and then
certain lessons that are mostly modeling and programming of a huge variety of different systems. We kinda
break them up into those two pieces.

First, I will give some introductory material on the history of simulation, followed by calculus, probability, and
statistics boot camps. This is primarily in Law's book, chapter four. You can actually skip this if you're confident
about your abilities in these subject areas.

Second, hand simulations and spreadsheet simulations where we'll do some elementary simulations by hand
or using a spreadsheet package.

Third, general high-level modeling concepts. What is a simulation? What kinds of ways can you go about
modeling different systems? That material is covered in Law's book, chapters one and two.

Fourth, a short discussion on verification and validation, used to determine whether the simulation is doing
what you think it ought to be doing. (Law devotes an entire chapter to that topic in his chapter five.)

Then I'll start work in the Arena simulation language as a large aside. I'll go over some Arena basics in chapter
four of the Kelton et al book (though I will sneak in some Arena examples before this). We will do a generic call
center in Arena, and then look at some inventory modeling techniques also in chapter five of Kelton et al.

Then we'll go over a manufacturing center which is very, very interesting because it corresponds to very, very,
very general examples that don't apply just to manufacturing centers. Then we'll talk about entity transfers in
Arena. What that means is how do things move around in Arena? How do you get from place to place?

And finally, we'll do some advanced material in Arena, a whole potpourri of different topics that I found useful in
the past. Then, back to the math stats probability aspects of the course, we'll talk about random number
generation.

How do you generate randomness on a computer? Now believe it or not, randomness is not a thing that you
get for free on a computer. In fact, all randomness is fake. It is a big lie that they're telling you. But what I'm
going to do is I'm going to show you how to generate randomness that looks random, not quite random, but it
looks random to you and me, and everybody accepts it. This is in Law, chapter seven. And then I'm going to
take that work and generalize it to random variate generation. So this randomness can be used to generate all
sorts of interesting things, normal random variables, Bernoulli random variables, things that you heard about in
your prob and stats classes.
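As a small preview of that lesson, here is a minimal sketch of how "fake" randomness is typically produced: a linear congruential generator. The multiplier 16807 and modulus 2³¹ − 1 below are the classic "minimal standard" parameters -- treat this as an illustration, not as the exact generator any particular package uses.

```python
def lcg(seed, n):
    """Generate n pseudo-random uniforms in (0, 1) using a linear
    congruential generator: X_{i+1} = a * X_i mod m."""
    a, m = 16807, 2**31 - 1   # classic "minimal standard" parameters
    x = seed
    out = []
    for _ in range(n):
        x = (a * x) % m       # deterministic recursion -- nothing truly random!
        out.append(x / m)     # scale the integer state to (0, 1)
    return out

print(lcg(12345, 5))
```

Starting from the same seed always reproduces the same stream, which is exactly why simulation experiments built on pseudo-random numbers are repeatable.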

I'll also talk about things like multivariate random variables. If one dimension is good, two dimensions are even
better. So I can generate people's heights and weights simultaneously, things that are correlated with each
other.

After that, I'll generalize even more and talk about random processes like time series -- for instance, a
simulation of unemployment rates which are correlated from month to month, things like that. And as a special
case, we'll look at certain financial models, things like stock prices, option prices, very, very, very interesting
models that you can use. You can generate almost anything with a simulation. We'll show how to do some of
those in this module. So that turns out to be Law, chapter eight. That is a fairly substantial chapter.

Finally, I'll look at a number of additional statistical issues. Input analysis -- what that means is, what random
variables should you use to drive the simulation? If the random variables you use to run a simulation aren't
very good, then the whole simulation is no good. That is called garbage in, garbage out. So what you have to
do is model the input random variables to the simulation correctly, and we'll talk about how you do that. Again,
another chapter of Law, chapter six, is devoted to that. What goes in must come out, and so we'll also
look at so-called output analysis -- in other words, analyzing the output coming from the simulation. It turns out
that even if you've taken a statistics course, everything they told you in that class was pretty much a lie when it
comes to simulation, and you'll see why. The output from simulations is surprisingly difficult to analyze, and you
have to be very careful about that.

One of my favorite topics is comparing systems. One of the reasons you would run a simulation is to see if one
system is better than another, or if one system is the best among many. We'll devote an entire module to that.
And finally, we'll have time to do variance reduction and other cool things. Variance reduction is basically,
well, how do you run the simulation really, really, really efficiently? And there's a lot of nice little tricks
associated with that. So that comes to the end of the syllabus. The summary for today was that we've chatted
about the syllabus, as I promised. And next time, I'm finally going to get into some real simulation by beginning
our whirlwind tour of the subject.

Lesson 3: Whirlwind Tour
We're going to start this tour with general modeling issues and why would we even consider using computer
simulation, you know, what is all the fuss about?

Model types and definitions

Models are high-level representations of the operation of a real-world process or system. A discrete model is
one that changes only every once in a while, at discrete points in time. For instance, a model of customer flow
in a store would simulate a customer appearing, waiting in line, getting served, and leaving -- but between
those events, nothing in the model is going on. The system changes only at those discrete points in time:
somebody shows up, somebody leaves. In contrast, a continuous model changes constantly, like the weather.
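The "changes only at discrete points in time" idea can be sketched in a few lines of Python: a single-server FIFO queue where the only events are arrivals and service completions, tracked through Lindley's recursion for customer waiting times. The exponential arrival and service rates below are arbitrary illustration values; this kind of queue is covered properly later in the course.

```python
import random

random.seed(1)

def waits(n, arrival_rate=1.0, service_rate=1.25):
    """Waiting time in line for each of n customers in a single-server
    FIFO queue, via Lindley's recursion:
        W_{i+1} = max(0, W_i + S_i - A_{i+1}),
    where S_i is customer i's service time and A_{i+1} is the
    interarrival time before customer i+1."""
    w = [0.0]                                 # the first customer never waits
    for _ in range(n - 1):
        s = random.expovariate(service_rate)  # service time of current customer
        a = random.expovariate(arrival_rate)  # time until the next arrival
        w.append(max(0.0, w[-1] + s - a))     # state jumps only at these events
    return w

w = waits(10000)
print(sum(w) / len(w))   # average wait in line
```

Nothing happens in the model between events -- each loop iteration jumps directly from one customer to the next, which is exactly what makes the model discrete.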

We'll also look at stochastic models versus deterministic models. I like probability and statistics -- that is
stochastic. I'm not really as much into deterministic-type things, although we will show a couple of deterministic
models. Deterministic models are really boring, because nothing random is happening there. It is a little bit of a
joke, but stochastic models are much sexier.

Then I'll also look at dynamic versus static models. A dynamic model changes. A static model is the same thing
over and over and over again. So again, dynamic models are much more interesting to me. So with that in
mind, I'll say most of the time we're going to be looking at discrete, stochastic and dynamic models, not quite
all the time, but most.

Now, how can you solve a model? Generally speaking, at a high level, there are three ways to do this. You
could use analytical methods. An analytical method is like solving an equation -- you get an exact answer. The
integral of x dx is x²/2 + C: that is an answer, a nice equation. If you're lucky, you'll be able to come up with an
analytical solution to your particular model. Usually those are pretty easy, and I'll give you an example on the
next slide.

If an analytical method doesn't work, maybe that is when you go to a numerical method. A numerical method
might be used to evaluate an integral like that of e^(−x²). It turns out that integral doesn't have a closed form,
and you might need to use numerical integration methods, or something of that ilk, to solve those kinds of
problems. Generally speaking, numerical methods work well; you can use those when you can't
use analytical methods.
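To make that concrete, here is a minimal trapezoid-rule sketch for the integral of e^(−x²). The limits ±10 and the subinterval count are arbitrary illustration choices; the true value of this integral over the whole real line is √π, which gives us something to check against.

```python
import math

def trapezoid(f, a, b, n=10_000):
    """Approximate the integral of f over [a, b] with the trapezoid rule."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))       # endpoints get half weight
    for i in range(1, n):
        total += f(a + i * h)         # interior points get full weight
    return total * h

# exp(-x^2) has no elementary antiderivative, but numerically it is easy;
# [-10, 10] captures essentially all of the mass under the curve
approx = trapezoid(lambda x: math.exp(-x * x), -10, 10)
print(approx, math.sqrt(math.pi))
```

A few thousand evaluations already agree with √π to many decimal places -- this is why numerical methods are "pretty good" when they apply.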

Now what happens if you can't use a numerical method? Well, then you may have to resort to simulation.
Look at that -- it is so important we colored it in blue. Simulation methods are way more general than the other
two, and they can usually help you solve models that are too complicated for analytical or numerical methods.

Here are some examples of models. I can toss a stone off of a cliff, and you can model the position of that
stone via the usual physics equations that you learned back when you were a freshman. Those are analytical
models -- the equations exist and you just look them up. Very, very simple.

Now, modeling the weather might be a little too tough for exact analytical methods, so you might have to use
numerical methods -- in fact, in this case, you may need to numerically solve a series of hundreds of
thousands of partial differential equations. That is how the weather is sometimes modeled; certainly there are
no closed-form answers for tough things like that. On the other hand, if I add a little bit of randomness into that
weather problem, all of a sudden even the numerical methods might not work. I can make a simple model very
difficult by adding randomness in a certain way, and at that point you're kind of forced to use simulation. I'm
going to give you plenty of examples in the next few lessons -- simulation has just got so many uses, you can't
believe it. Anything that you can't do with analytical or numerical methods, simulation might be a very good
alternative for.

So what is simulation? It is just the imitation of some real-world, or even imaginary, process over time. The trick
involves generating some kind of history of the system; you then use that history to get data on the system and
draw inferences concerning how it operates. Even if the system isn't real, you can set up a conceptual system,
simulate that, and make guesses about its future performance.

Simulation, it turns out, is one of the top three industrial engineering / operations research / management
science technologies. I think the other two are probably statistics and engineering economy, but simulation is
the best one -- take my word for it. It is used by academics and practitioners on a huge variety of theoretical
and applied problems. Most people use simulation on real-world, applied problems, but people use simulation
all the time on theory-oriented problems as well, and we'll talk about those in the course of the course. In any
case, it is an indispensable problem-solving methodology that everybody could use quite successfully.

So, what is simulation good for? Well, you can use simulations to describe and analyze real system behavior,
or even conceptual system behavior, as I mentioned before. You can ask what-if questions about these types
of systems -- like, what if I added another server, how would that improve my queue waiting time? Simulation
helps a great deal in system design and optimization. Some aircraft companies use simulation before they
even think about building a new aircraft.

Plus, I can simulate almost anything. I can do customer-based systems like manufacturing systems, supply
chains, and health systems -- countless applications -- and I can also simulate systems that don't really have
customers as we would think of them. I could simulate stock prices and stock option prices; people use
simulation for that all the time. And there's plenty of reasons to simulate.

You might be interested in whether or not a particular system can accomplish its goals. Are you going to be
able to get an order out on time? Will the system capacity be enough to get all the materials through the
system in a timely way? What if the current system doesn't accomplish the goals? What do you do? Do you
add another server? Do you add buffer space? What if the system needs just a little bit of improvement? What
things can you tweak to improve the system a bit?

You can also use simulation to create a game plan. What action should you take next? You can base different
strategies on the outcomes of simulations of those different strategies. What if you have a known problem in
your system, like a bottleneck? Well, you can simulate what would happen if you added another server at the
bottleneck, but maybe the bottleneck moves downstream. The simulation will help you identify things like that.

You can resolve disputes. Suppose two people at the company are having a fight over different strategies that
you could adopt for a particular problem. Simulation might help you figure out which strategy is actually better.
You could also use simulation to sell an idea. Simulation is so easy to use, at least for certain problems, and it
also has the potential to be very pretty and very easy for people to understand, so you can use it to sell your
ideas.

In fact, simulation has a lot of advantages. You can study models that are far too complicated for analytical or
numerical treatments -- we've talked about this already. You can study very detailed relations that might be lost
in those kinds of treatments. For instance, an analytical treatment might be able to give you the exact expected
waiting time in a queue, but it doesn't necessarily tell you what percentage of customers are going to have to
wait a certain amount of time, or what percentage of customers get mad and leave the system. The simulation
can give you many, many more details than just an “answer”.

You can also use simulations as the basis for experimental studies of systems. You can run the simulation
beforehand, and it'll give you an idea of how many runs you'd have to take, maybe with real data. The
simulation is very useful for that. You can use simulation to check results that other people have obtained by
other methods. So if your pal has done a little queueing theory study and says, "I believe we need four servers
at this station," then you can run a simulation of that station and see if four servers is actually enough.
Hopefully, you don't embarrass your friend.

You could also use simulation to reduce design blunders. There are countless examples where, if they had
simulated the darn system beforehand, they would have seen, "Oh no, a bottleneck here," or, "We should
have put more space there." Simulation is a very easy method for avoiding those kinds of issues, and like
we've said before, it is a really nice demo method.

And finally, it turns out that sometimes, if you're a little bit lucky, simulation is very easy to do and it is actually
kind of fun. It is like a video game almost.

Now, I have to be honest, there are also some disadvantages. I hope the advantages outweigh the
disadvantages, but let's go over a few of those. Sometimes simulation is not so easy -- I mean, it is not a
panacea. Sometimes you actually have to roll up your sleeves and do a lot of programming.

Sometimes it is very difficult to collect the data that you need from the simulations. You've gotta be a little
careful; simulation is often a little harder than you might think. Sometimes it is very time-consuming and very
costly, because all programming is -- that is just a matter of course.

Now, the more subtle thing is that simulations don't give you “the answer”. They typically give you random
output, just like you would get if you were looking at data. The data that you get is random, in a sense, so
simulations don't give you the answer. What you have to do is take whatever answer the simulation gives you
and attach a statement of confidence to it -- like, "I think the average queueing time in the system is five
minutes, plus or minus 30 seconds." You always have to give that plus-or-minus. I mean, we're really
statisticians here, and you have to be careful about that. We'll talk about simulation output a lot later on, but
you have to be careful. Remember I told you, everything they taught you in stats class was a lie -- and that
really does hold.
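As a tiny preview of what that "plus or minus" means, here is a sketch of a t-based confidence interval computed from independent simulation replications. The replication values and the 95% critical value for 9 degrees of freedom (2.262) are illustration numbers, not data from any real model.

```python
import math
import statistics

def t_confidence_interval(data, t_crit):
    """Two-sided confidence interval for the mean of i.i.d. replication
    outputs: xbar +/- t * s / sqrt(n)."""
    n = len(data)
    xbar = statistics.mean(data)
    s = statistics.stdev(data)            # sample standard deviation
    half = t_crit * s / math.sqrt(n)      # the "plus or minus" half-width
    return xbar - half, xbar + half

# made-up average waiting times (minutes) from 10 independent runs
waits = [4.9, 5.3, 4.7, 5.1, 5.6, 4.8, 5.0, 5.2, 4.6, 5.4]
lo, hi = t_confidence_interval(waits, t_crit=2.262)
print(lo, hi)   # e.g., "about 5.06 minutes, plus or minus 0.23"
```

The interval is only trustworthy if the replications really are independent and roughly normal -- which is exactly why simulation output analysis needs the care discussed later in the course.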

In fact, simulation is a great problem-solving technique, but for certain problems, better methods may exist. If
I'm just tossing a stone off a cliff, and I'm assuming it is not raining and there are no air-resistance or friction
issues, then your equation from physics class might be just fine, so you won't need simulation. But I've found
that a lot of times, simulation is in fact the best method. So, here is the summary of what we just did: I finally
started this little tour of the nature of simulation and simulation models, and next time, I'm going to start talking
about the history of simulation -- I'll give this historical, or hysterical, presentation.

Knowledge Check 1.3

1. We are interested in modeling the arrival and service process at the local McBurger Queen burger joint.
Customers come in every once in a while, stand in line, eventually get served, and off they go. Generally
speaking, what kind of model are we talking about here? (More than one answer below may be right.)

A. Discrete
B. Continuous
C. Stochastic
D. Deterministic

2. Which of the following can be regarded as advantages of simulation? (More than one answer below may be
right.)

A. Simulation enables you to study models too complicated for analytical or numerical treatment.
B. Simulations can serve as very pretty demos that even University of Georgia graduates can understand.
C. Simulation can be used to study detailed relations that might be lost in an analytical or numerical
treatment.
D. Simulations are often tedious and time-consuming to produce.

Lesson 4: Whirlwind Tour – History
Simulation has really come into its own in the last 50 or 60 years with the rise of computers. But it has been
around a very long time -- at least a couple hundred years.

Early Simulation

In 1777, Count Buffon drew some parallel lines in the ground, threw a needle onto
the ground, and counted up the number of times that the needle intersected one of
the lines. From there, he was able to back out an estimate for pi. And the
more times he performed the experiment, generally speaking, the better the
estimate of pi. While not the best way to estimate pi, it serves as an early example
of how simulations could be performed -- sequentially, at least.
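Buffon's experiment is easy to replicate on a computer. With needle length equal to the line spacing, a needle crosses a line with probability 2/pi, so pi can be backed out as 2n divided by the number of crossings. The needle count below is an arbitrary choice.

```python
import math
import random

random.seed(2023)

def buffon_pi(n):
    """Estimate pi by dropping n unit-length needles on unit-spaced lines."""
    crossings = 0
    for _ in range(n):
        y = random.uniform(0.0, 0.5)             # needle center's distance to nearest line
        theta = random.uniform(0.0, math.pi / 2) # acute angle between needle and lines
        if y <= 0.5 * math.sin(theta):           # the half-needle reaches the line
            crossings += 1
    return 2 * n / crossings

print(buffon_pi(1_000_000))
```

More needles give a better estimate, just as Buffon observed -- though convergence is slow, with the error shrinking like 1/sqrt(n).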

In the early 1900s, William Gosset was working as a statistician at the Guinness
Brewery in Ireland. Guinness didn't want the world to know that it was employing a
statistician, because that would give away the proprietary information that they
were doing certain types of quality control, but it approved of Gosset anonymously
publishing some of his results in the statistical literature. Calling himself “Student,” Gosset published some of
the results he obtained at Guinness on what is now known as the Student t distribution. In order to obtain
Student's t distribution, Gosset took several hundred samples of data, tossed all the samples in
a hat, and then picked them out randomly. (The data were the index-finger lengths of British prisoners.)
We actually still use the Student t distribution today in every stats class.

Computer Simulation

The first use of computer simulation occurred after World War Two, when mathematicians Stan Ulam and
Johnny von Neumann simulated thermonuclear chain reactions in the context of the development of the
hydrogen bomb. The 1950s and 1960s also brought a lot of industrial applications of simulation, particularly in
the context of manufacturing and certain queueing models.

People also started to develop simulation languages. This made it easier to write programs that could handle
generic problems in manufacturing and queueing. Eventually, these were developed into easy-to-use modeling
tools for these generic models. They had nice graphics. You could simulate things more quickly. One early
language, SIMSCRIPT, was developed by Harry Markowitz, who also won a Nobel prize for some of the work
he did in optimizing financial portfolios.

In the 1960s, people actually started putting forth very rigorous theoretical work having to do with simulation --
namely, the development of very precise and efficient computational algorithms, along with some very nice
probabilistic and statistical methods to analyze simulation output and input. The theory started developing in a
very, very strong way. We'll be looking at that as the course progresses.

Simulation began, for our purposes, with manufacturing problems. Simulation was the technique of choice
because these manufacturing problems are too hard to do analytically or numerically. In particular, simulation is
very easy to use for calculating the movement of parts and how those parts interact with system components
like machines and other types of servers. Obviously, the simulation can be used to evaluate how the parts flow
through the system. Simulation can also be used to examine conflicting demands for resources. If you've got
millions of parts flying through the system, each one demanding particular machines or people to work on it,
those demands are going to cause conflicts, and the simulation can be used to see where those conflicts are
occurring -- and maybe even what to do about them. In particular, when asking what to do about the conflicts,
maybe you have two or three possible solutions, and you can study those contemplated solutions or changes
before you actually implement them. Again, you can avoid lots and lots of design blunders by simulating
beforehand.

Some typical questions that might arise, especially in the context of manufacturing:

● What is the throughput going to be?
● How many parts can you get through the system?
● How many people can get through your hospital during an emergency situation?
● How can we change the throughput?

Simulation is an easy way to figure out how we can implement changes. Particularly in manufacturing systems, you're going to encounter bottlenecks frequently. Well, where are they? And if you do something about the bottlenecks, does that cause bottlenecks to occur in other places? Simulation will answer that lickety-split.

One thing that I'm very interested in: suppose that you have to decide among five or six possible designs. You can use simulation over and over again to figure out which will be the best design most of the time. And so, this is a form of optimization, and simulation is very helpful with that. If you're looking at a giant network, you might want to know what the reliability of the network is. How can you get stuff from the beginning to the end, in a reliable way, without machines failing so as to prevent you from getting from A to B? So reliability is very important. Simulation can check into that.

And finally, what is the impact of breakdowns? If a machine breaks down, what are you going to do about it? Should you have redundancy? How long is it going to take to get the system back up to speed, etc.? So the summary of this lesson is that, well, you can kinda take your seat belts off. We went through a brief history lesson. Kinda fun. Simulation has been around a long time. It's really come into its own in the last few decades.
And the next lesson, I'm going to actually look at very specific applications of simulation. You'll see that it is
very, very wide ranging and a lot of them are really interesting.

Knowledge Check 1.4

Who is William Gosset?

A. He invented the t distribution that is used ubiquitously in statistics.
B. He invented the s distribution that is used ubiquitously in statistics.
C. He invented tea.
D. He invented the word "ubiquitous".
E. He is the brother of Louis Gossett Jr., best known for his fine acting in many films, including An Officer and a Gentleman.

2. YES or NO? Has anyone closely related to the field of computer simulation ever won a Nobel Prize?

A. Yes
B. No

Lesson 5: What Can We Do For You?
Simulation can be used for everything. If you can describe it, you can simulate it. And we'll look at all these
different applications coming up.

Near the beginning of my career, this is what I used to think simulation was:

In the past fifteen years, I have used simulation in a number of different applications. In manufacturing, for
instance, I and my colleagues have simulated queuing problems in automobile and carpet production facilities.
How does material flow through these plants? I really like doing queueing problems, and you'll see when we do
the simulation package later on in the course, the language that we use, Arena, is very very easy to work on
queueing problems with. So for instance, one of the sort of bellwether examples is call center analysis. How
many servers do you need in a call center in order for calls to proceed through in a timely way?

Another wonderful example is modeling a fast food drive-thru. We have a particular restaurant here near
Georgia Tech where the line at the fast food drive-thru can only be about four or five cars before the line starts
extending onto a busy street. So we simulated different strategies on getting customers through that line. What
happens if you have some of the employees come out and take orders manually in the car as opposed to
going through and talking to the machine? And this particular fast food outlet, which is a chicken-related outlet
here in Atlanta, has really done a good job. They simulate all of their restaurants to this day. We simulated
another fast food drive-thru with a slightly different intent.

You've noticed that in this next item, I've linked that up with a call center. Let's suppose some very large fast
food restaurant with many many restaurants around the country wants to have all their fast food handled
through one giant call center. So you drive up to the little machine and you talk to the person and that person
isn't in the store itself. That person is in Chicago with a hundred of their best friends. And so we wanted to
know how many of the outlets did you need in order for this to be an efficient way of doing things? And it turns
out, we needed about 150 restaurants before this became efficient.

We've also done some work in airport security lines. I'm sure everybody encounters those. We only did the
good things in the security line, the bad things that cause you to wait a long time, they were probably done by
somebody else, not us.

Let's look at a generic supply chain. These are just typical examples. A supply chain might consist of a set of
suppliers and then feed into manufacturers and then distributors and then retailers and then customers, depending on the level of the chain. And there may be many of each and they might be interacting with each other. It is
just a giant queueing network. So any node can connect to any other node here just like a network. In fact, it
looks like a flowchart:

This particular chart goes from left to right, you could have these arrows go all over the place. But it is basically
a flow chart.

The Flaw of Averages

When I was a child, a famous professor and CEO of a supply chain company actually asked why would
anybody want to simulate a supply chain? And it turns out that there's lots of reasons. Because back then, in
your supply chain, you would just operate with means as opposed to probability distribution. So you'd always
say, well, on the average, it takes five days to get from A to B in the supply chain. But averages aren't very good sometimes. In fact, there is this thing called The Flaw of Averages: if you use averages too much, your results are going to be unrealistic, because averages don't quite take variability into account. That is why simulation is such an important tool. In fact, many supply chain tools these days have simulation capability. So
we can use simulation to determine how much value-added a particular forecasting application provides in the supply chain. So if we have a certain set of forecasts, how are they going to affect what is going on in the supply chain? Simulation is a good way to characterize that. You can also use simulation to analyze the effects of randomness or model errors. This is a big deal in a supply chain. Is your supply chain solution robust? What is the best solution? And we'll talk more about that as the course progresses.
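To make the Flaw of Averages concrete, here is a minimal sketch with made-up numbers (not from the lecture): lead times average 5 days, and a shipment counts as late after 6 days. Plugging in the average says nobody is ever late; simulating an assumed exponential lead-time distribution with the same mean says otherwise.

```python
import random

# Toy illustration of the Flaw of Averages (numbers are made up).
# Lead times average 5 days; a shipment is "late" if it takes more
# than 6 days.
mean_lead_time = 5.0
threshold = 6.0

# Using only the average: 5 < 6, so the model says "never late."
late_using_average = mean_lead_time > threshold   # False

# Simulating a lead-time distribution (exponential, for the sake of
# argument) with the same mean tells a very different story:
rng = random.Random(1)
n = 100_000
late_count = sum(rng.expovariate(1.0 / mean_lead_time) > threshold
                 for _ in range(n))
late_fraction = late_count / n   # roughly 0.30, since P(X > 6) = e^(-6/5)
```

The point is that the expected behavior of a system is generally not the behavior of the system at its expected inputs, which is exactly why we simulate distributions instead of plugging in means.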

Other Applications of Simulation

Simulation is huge in financial analysis these days. If you can do portfolio analysis and options pricing, often
with simulation, Wall Street has a job for you. If you go to your financial advisor, they brag about the portfolio analysis that they do via simulation. They try to use fancy words, but they'll say something like, "well, we do five hundred Monte Carlo simulations of our proposed portfolio allocation strategy for you." And you're supposed to be impressed with that. Now you'll be able to do it yourself.

We also do a lot of traffic simulation. A lot of the time, traffic flow is a little more complicated than you might
think. We've discovered situations in which you add a lane and it causes more congestion.

Related to that is airspace simulation. How long does it take aircraft to get between two different destinations?
That depends: what if it is really congested at Hartsfield-Jackson, or there is a thunderstorm in Atlanta
requiring traffic holds on some of the planes coming in from, say, New York?

There are also many healthcare applications for simulation. A classic example is modeling patient flow in a
hospital. How many rooms should you allocate for different purposes in a hospital? How do you optimize doctor
and nurse scheduling? You never know how many people are going to show up in the emergency room on
Friday night. And if you haven't done good scheduling, maybe you're just using this Flaw of Averages, you put
in the average number of doctors that you need, you could be setting yourself up for a big problem if there is a
fairly major fire or if there is a car crash that you hadn't anticipated.

Simulation is also used for procurement of supplies, that has to do with inventory. I've used simulation a little bit
for purposes of disease surveillance. How do you know that a disease is actually in the process of occurring?
How does a disease go through a population? Is it a virulent disease like 1918 influenza, or something less intense, like swine flu was a few years ago?

Let me now talk about a specific health systems application, namely in surveillance. You can use simulation to monitor certain time series; that is basically what surveillance is. Is something happening in your time series? Has the unemployment rate spiked for some reason? Or has a disease started to occur for some reason? You look at a time series of data and you see what happens. Is anything interesting going on? The idea here is to predict issues as or before they happen. Is the influenza outbreak starting to occur now?

You may have read in the paper that Google is very good at predicting these types of things, sometimes in a
scary way. They'll look at some of the purchases that you've made at a store, and they'll be able to determine
you're pregnant. And this is maybe before you've told your family. So these predictive analytics are one of the
hottest topics now in operations research and operations management and computer science. What I'm most interested in is whether a disease is starting out and is in the process of becoming an outbreak. Is something occurring that is out of the ordinary? And what is nice about this, as I alluded to, is that you can take advantage of these
huge data sets that are available now. Google, for instance, can look at everything that they have that all their
users are doing. It is just amazing what is out there.

Hypothesis Testing

I'm going to give a really strange surveillance application here. And some people may know this particular
person:

Who is Mr. Handsome? He is Dr. Harold Shipman, a British serial killer who used morphine and heroin
overdoses to kill patients, mostly elderly women. He was caught after he carelessly revised a patient's will and
left all her assets to himself. It turns out he doctored the records to show that the patients had actually needed morphine, but the software recorded the dates of the modifications, which were backdated, so investigators were able to tell that something funny was going on.

What does this have to do with simulation? It turns out, surveillance uses what are known as sequential statistical hypothesis tests. The null hypothesis, which we usually call H0, might be something like "no disease" or, in this case, "no murder". So that is a null hypothesis, and you have to come up with ample data to disprove it. People are innocent until proven guilty.
That is the only time that you would reject the null hypothesis. It turns out, though, that for complicated problems, the test statistics that you use to determine whether or not the null hypothesis is true might have very difficult, complicated statistical distributions, even if the null hypothesis is true ("even under H0," as we would say). These test statistics might not be normal or exponential or the things that you learned in your baby stats course.
So the nice thing is that I can use Monte Carlo simulation to approximate the probability distributions of these very difficult statistics. Simulation can be used for lots and lots of stuff; this is one application. And then, when I
take my sample and I compare it to what this distribution is supposed to be, if the sample doesn't look like it is
coming from the distribution, then I reject H0. And this is one of the ways that they were able to catch this guy.
So simulation can be used for everything.

So here is a summary of this lesson. We looked at a wide variety of practical simulation applications. I went a little crazy on some of them. In the next lesson, I'm going to give some really, really easy examples of the actual use of simulation in very simplistic settings.

Knowledge Check 1.5

Which of the following are areas where simulation has found substantial application? (More than one answer
below may be correct.)

A. Inventory and Supply Chain Analysis
B. Financial Analysis
C. Manufacturing
D. Health Systems
E. Transportation Systems

Why might simulation be a good tool to analyze supply chains? (More than one answer below may be correct.)

A. Supply chains are always deterministic systems.
B. Supply chains often have complicated network structures, making exact analysis difficult.
C. Supply chains are stochastic, with random travel times, lead times, and order patterns.
D. Supply chain simulations can be programmed in a matter of minutes.

Lesson 6: Some Baby Examples
In this lesson, I'm going to be continuing the whirlwind tour by showing you some really baby, easy, little
examples of how simulations run. I'll provide the software so you'll be able to carry them out yourself.

Example 1: The Birthday Problem

How many people do you need to have in a room in order to have at least a 50% chance that at least two of
them are going to be having the same birthday?

Let's assume there are 365 days in the year, and that everyone has an equal chance of being born on any
particular day. So, January 1st, probability of one over 365. April 22nd, which is my birthday, one over 365.
How many people do you need to have in a room in order to have at least a 50% chance that at least two of
them are going to be having the same birthday?

A. 9
B. 23
C. 42
D. 183

183 is what I originally thought when I first heard this problem. In fact, if you have 183 people in the room, there's a 99.999% chance that at least two people will have the same birthday. In fact, with 42 people in the room, you have better than a 90% chance.

23 is the right answer. Let's actually simulate this time. 47475 is what is known as a random number seed, an integer that I pick out of my head, just to get the party started. And you'll see some interesting properties of this seed. And then, I've got a little calendar here that I made. And every time I click off a person entering the room, I generate their birthday randomly, with probability of one out of 365. It turns out, in this made-up example, I only needed 24 people before I finally got a match.

Let's run it again with a different seed, 12345. With each click, the program generates a new birthdate. February 5th, November 1st, December 26th, May 4th... and we got a match with the 19th person, on April 8th. It wasn't 23, the statistical 50/50 point, because every time I run the simulation, I'm going to get a different answer. This particular time, it was 19.

Let's run this again, still using the seed 12345. Look at that, it was February 5th, just like before. Isn't that a
coincidence? Click, click, click, click, click, click, click. Oh my gosh. The simulation ended again, on the 19th
birthday. And the reason is, since I started with the same seed, I'm going to get the same answer. Now,
simulations are supposed to be dealing with random numbers. In fact, the numbers are not random, as this
demonstrates. They just look random to you and me, which is good enough, most of the time. But if I start with
the same seed, I get the same answer. It turns out, this is a good thing, and we'll talk about that in great and
tedious detail later on.

Let's do another seed. October 23rd, January 24th, oh, there is December 26th again, but it is a new run.
March 15th, July 10th… click, click, click. 17 this time. And if I change the seed one more time, we'll do one
more run... look at that, it only took six tries before I got a match. You can try this problem out at home -- and,
in fact, in the homework assignment, I'm going to have you do that.
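If you'd like to try the birthday experiment before the homework, here is a minimal sketch (using Python's built-in generator as a stand-in for the demo's; the seeds are just examples). It shows both behaviors from the demo: the same seed reproduces the same run, and repeating many runs estimates the probability of a match among 23 people.

```python
import random

def people_until_match(rng):
    """Add random birthdays one at a time; return how many people had
    entered the room when the first shared birthday appeared."""
    seen = set()
    count = 0
    while True:
        count += 1
        birthday = rng.randrange(365)   # each of 365 days equally likely
        if birthday in seen:
            return count
        seen.add(birthday)

# Same seed, same sequence of birthdays, same answer:
assert people_until_match(random.Random(12345)) == \
       people_until_match(random.Random(12345))

# Repeat many runs to estimate P(match among 23 people):
rng = random.Random(47475)
trials = 20_000
hits = sum(people_until_match(rng) <= 23 for _ in range(trials))
print(hits / trials)   # should be close to 0.507
```

Changing either seed changes the individual runs, but the estimated probability settles near the same value, which is the whole point of replication.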

Example 2: Estimating pi

I'm going to use Monte Carlo simulation to estimate pi, 3.14159. And the idea here is very simple, and it is
related to the 1777 Buffon's needle problem. What is the area of a unit square? Well, by definition, it is one,
you'll see in the next picture. I'm going to inscribe a circle inside that unit square. You will see, in the picture, that the area of the inscribed circle is pi·r^2 = pi·(1/2)^2 = pi/4. Therefore, if I toss a random dart at the square, the probability that it hits that inscribed circle will be the ratio of the areas, pi/4 divided by one,
which is, of course, pi over four. What I'm going to do then is I'm going to throw lots and lots and lots of darts.
And I'm going to count out the proportion of darts that land in that circle. The proportion should approach pi
over four by what is known as the law of large numbers, so if I throw a million darts into that square, about pi
over four times a million are going to land in the circle. So I take that proportion, and multiply it by four, which
will give me the estimate of pi.

The screen will have a random number seed, up here just like before. I'll choose whatever number I want. The
number of points will be whatever number of points I want to throw, how many darts I want to throw. And the
screen will encompass all of these dart throws. So, see, right there, I threw a dart right there, it landed in the
circle. That counts for pi. Oops, out here, the dart that I threw missed the circle. Now, meanwhile, on the right-hand side, I'm going to keep a running tally of how my
estimator for pi is going as I throw more and more darts. So, as I move from the left to right, I'm going to have
more and more darts, and you'll see that the estimator for pi is approaching 3.14159. What a surprise. At the beginning, every time I throw a dart, whether or not it hits the circle, it has a big effect, so the estimator is really bouncing around here. These first darts all hit the circle, which pushed the proportion closer to one, so that the estimate for pi was closer to four. Then we had a few that missed, and eventually the estimator for pi settled down to the correct answer.

I'm going to simulate 100 points, and let's just see what happens. Some of them made it, some of them just
missed. But I count up the proportion, I multiply it by four, and it looks like my answer was 3.160.

Now, let's run 1,000 simulations. I'll change my seed, even though I don't really have to. I'm changing it a little
bit. And since I'm doing more points, it'll change the answer. And let's start. Now I'm doing 1,000 points. And
you can see, oh, look at this. It started out kind of badly, but as I did 1,000 points, my answer, my estimator for
pi, was 3.140, which is all of a sudden much closer.

Let's see what happens if I simulate 100,000. The answer ended up 3.137. And actually, that is interesting. The
answer was a little bit worse than the run with only 1,000, but that just happens due to random error. And that
is why you run simulations. The more you run, the better your answer will be.
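Here is a minimal sketch of the dart-throwing estimator (using Python's built-in generator rather than the demo's; the seed is arbitrary): throw n darts at the unit square, count the fraction that land in the inscribed circle, and multiply by four.

```python
import random

def estimate_pi(n, seed):
    """Estimate pi by throwing n random darts at the unit square and
    counting hits inside the inscribed circle of radius 1/2 centered
    at (1/2, 1/2)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:
            hits += 1
    return 4 * hits / n

# More darts give better answers on average:
for n in (100, 1_000, 100_000):
    print(n, estimate_pi(n, seed=12345))
```

As in the demo, any particular run can wobble (100,000 darts can come out slightly worse than 1,000), but the error shrinks on the order of one over the square root of n.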

Now, we'll do a fun little calculus example. And you'll see that you've actually seen this kind of stuff before, but
this is with a little twist. So what I'm going to do, I'm going to use simulation to integrate sine of pi x from zero to
one. And this is a very, very simple problem. You learned how to do it way back in 12th grade or first year of
college.

And the way you did it back before you learned that sine integrates to cosine, or negative cosine, actually, the
way you did it before you learned how to do actual integration is that you added up a bunch of rectangles, that
is how Leibniz did it in the 1600s. So you approximated sine of pi x by a bunch of rectangles, and you added
up those rectangles. You let the width of the rectangles get skinnier and skinnier, and then you added them all
up.

So, what we're going to do is I'm going to sample n random rectangles between zero and one. They're going to
have height f of x, sine of pi x, and they're going to have width one over n. But instead of being right next to
each other, like Leibniz did it back in the 1600s, we're going to have these rectangles centered randomly along the x-axis between zero and one. And we're going to do the same exact thing, except we're going to have random rectangles, as opposed to rectangles right next to each other. And we're going to add up the areas, I'm going to make n really, really big, and I'm going to get the answer. The answer is two over pi,
believe it or not. But we'll get that approximately.

So let's see what the demo is going to look like. We'll start with this screen. And what I've done here is that I've
made 64 rectangles. So I'm randomly selecting 64 rectangles. You can't see this here very easily, but there are
64 rectangles of width one over 64, and they all have the correct heights. Wherever the center is, that is their
height. And I'm going to add up all the areas of those rectangles, and there is 64 of 'em. Here is the seed,
567893. And it turns out, my estimator for the area from zero to one is 0.5886. And I happen to know the real
answer, because I did okay in calculus class, the real answer is two over pi, which is 0.6366. So, eh, not the
best answer, the best estimator ever. 0.5886. But that is only because we sampled 64 rectangles. We'll see in
the demo what happens when I simulate more rectangles. I'm going to get a better answer.

So, now, I'm going to do something crazy, and only simulate four rectangles, so you can really explicitly see
what we have here. So look at that, see, four rectangles. That one is centered there, see there, and they're not
very adjacent to each other, but there are four rectangles. Each of them has a width of one over four. And I add up the areas, and I get, it is a little hard to see, 0.5790. Not the best answer ever. Let's now do 64 rectangles. I'm
not going to change the seed. I don't really need to here. Estimate, see, this looks a lot more like what we had
before, and now, the answer is 0.6694. That is a little bit better, but it is in the other direction. And, finally, I'm
saving the best for last. I happen to know that I can sample up to 1,024 in this software before it bags out on
me, so let's do 1,024 rectangles. These are really skinny rectangles. And here is what we get. Look at that,
there is 1,024 rectangles there. A little bit hard to see. And there is the answer, 0.6374, really close to 0.6366,
so I'm really happy about that. And, in fact, Monte Carlo integration is a very, very nice methodology to use in
several applications.
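The random-rectangle scheme above amounts to averaging f at n random points, since each rectangle contributes area f(x)·(1/n). Here is a minimal sketch (the seed is arbitrary):

```python
import math
import random

def mc_integrate(f, n, seed):
    """Estimate the integral of f over [0, 1] by summing the areas of
    n randomly centered rectangles of width 1/n and height f(x) --
    equivalently, averaging f at n random points."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

f = lambda x: math.sin(math.pi * x)
exact = 2 / math.pi                      # about 0.6366
for n in (4, 64, 1024, 100_000):
    print(n, round(mc_integrate(f, n, seed=567893), 4))
```

With small n the estimate bounces around the true value 2/pi, in either direction, and settles down as n grows, just as in the demo.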

Knowledge Check 1.6

Suppose there are 40 random people in a room. What is the probability that at least two of them will have the
same birthday?

A. Close to 0
B. A bit less than ½
C. Almost exactly ½
D. Somewhat greater than 1/2

Inscribe a circle in a unit square and toss 1000 random darts at the square. Suppose that 800 of those darts
land in the circle. Using the technology developed in this lesson, what is the resulting estimate for pi?

A. -3.2
B. 2.8
C. 3.0
D. 3.2
E. 4.0

Answers: D, D

Lesson 7: More Baby Examples
I'm going to show you what happens when you use a bad random number generator. So far, I've hinted that these random number generators aren't really random at all, but they appear to you and me to give out random numbers. And I stick to that. However, there have been cases in the past where people have developed random number generators that do not appear to give random numbers, and amazingly, they've still been used. We'll get into more details on that when we talk about random number generation later on in the course, but I'm just going to give you an example of what you can come up with.

I'm going to simulate, in a simple way, people's heights versus weights. What you're going to be seeing is sort of a two-dimensional normal distribution, with most observations in the middle and some in the tails on the outside. One dimension will correspond to height, and the other will correspond to weight. I'll show you what it looks like in a second. Do the observations look random?

In one case, I'm going to use a good random number generator, and they will look random. In another case,
well, let's see. This is what is called the Box-Muller method. We'll talk about this again in detail later on in the
course. I'm going to show you exactly where it comes from and how it is used. So what we see here is this blob in the middle, and these are heights versus weights. Let's pretend the heights are on the x-axis and the weights are on the y-axis. (Let's also pretend that heights and weights are not correlated, which is very unrealistic.)

Right over here, this is a big, tall, heavyish guy, maybe it's like Shaquille O'Neal. And down here is a shorter
very light person, maybe that's a little kid. But these people are in the tails, obviously. See, the tails have many fewer observations, and most of the action is taking place in the middle here, just like you would expect from a
normal distribution. And as before, I'm going to keep track of the number of points that I simulate as well as the
seed. And I'm going to have both a good generator and a bad generator and we'll see what comes up here.
I've got a slightly different graphic. But you'll have access to this software as well.

So what I'm going to do, let's generate, I don't know, 100 points. I'll use this random number seed, one, two, three, four, and I just click the toss button here. And there we go. You can see that we have this cloud of points that's generated over the x-y axes. Most of this stuff again is taking place in the middle. There's a little bit on the outside. Let's do 1,000 observations instead. Click, and again you can see most of the stuff is taking place in the middle, and you get some of this tail action going on. Now, what I'm also going to do is I'm also going to
generate these random numbers, these heights and weights, using a bad generator. This one is called the RANDU generator. I'm going to change the seed just for a certain reason here. This generator has provably bad performance characteristics, but let's take a look. Click. Okay, actually, yeah, not so bad it turns out. Again, most of the stuff is taking place in the middle. We have some more people on the outside, but I think that's just random chance.

So why am I having a cow over this RANDU generator? Well, it turns out, a generator must work well for any seed that you can think of. And I happen to know that this seed, one, two, three, five, wasn't going to give me very bad performance. I also happen to know that if I were to pick a power of two as my seed, you would hope that would still do okay. What happens if I pick a power of two as my seed? Here's what you get. Whoa, this is extremely beautiful. But it's not at all random. And so, you would never want to use a generator like this, one that gives observations that are beautiful but provably non-random. And this has to do with the fact that RANDU is just not a good generator. It's got some provably bad properties. And we'll talk about those later on in the course. This is just a preview of what's coming up.
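For the curious, here is a minimal sketch of the classic RANDU generator and of the Box-Muller transform used in the heights-and-weights demo (my own implementation, not the demo's code). RANDU's fatal flaw is easy to verify: every output satisfies x(k+2) = 6·x(k+1) - 9·x(k) (mod 2^31), which is why consecutive triples fall on just 15 planes in the unit cube.

```python
import math

def randu(seed, n):
    """The infamous RANDU generator: x_{k+1} = 65539 * x_k mod 2^31.
    Returns a list of n raw integer outputs."""
    xs, x = [], seed
    for _ in range(n):
        x = (65539 * x) % 2**31
        xs.append(x)
    return xs

def box_muller(u1, u2):
    """Turn two uniforms in (0, 1) into two independent standard
    normals -- the transform behind the heights-and-weights demo."""
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

xs = randu(seed=1, n=10_000)

# RANDU's provably bad structure: every overlapping triple satisfies
# x_{k+2} = 6*x_{k+1} - 9*x_k (mod 2^31), because 65539^2 mod 2^31
# equals 6*65539 - 9.
assert all((xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % 2**31 == 0
           for k in range(len(xs) - 2))

# Normals generated from RANDU uniforms look fine one at a time, but
# the lattice structure lurks in higher dimensions:
us = [x / 2**31 for x in xs]
z1, z2 = box_muller(us[0], us[1])
```

This is exactly why a generator has to be tested in more than one dimension: each RANDU output looks uniform on its own, while the triples are confined to a handful of planes.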

Now we'll take a look at a queueing problem. These are always a lot of fun actually. And I'm going to walk you
through a spreadsheet. Then we'll cover queues in much, much more detail later on. These are really fun,
though. So, queues are us. Suppose we go to McWendy's, which is a popular burger joint, and I encounter a single-server queue. There's one line and there's one server at the front of the line, and people go first in, first out: customers show up, they wait in line, and they get served first in, first out. This is the simplest possible queueing model, and, in fact, there are analytical solutions to models like this. But that's okay. This is a simulation course, so we'll do simulation. What happens as the arrival rate approaches the service rate? That's something that simulation can answer very easily. Does the line get pretty long? Do the hamburgers start to taste better? Well, let's see.

So what the purpose of the simulation is, in this case, it'll allow us to analyze this very simple queueing model.
And you're going to be pretty surprised by the amount of stuff that the simulation gives us automatically. And,
of course, like I said before, you can analyze these things by numerical and exact methods, but simulation
actually gives us more information in a lot of senses.

Before I show you the screen that the demo's on, I just wanted you to notice that queueing and queueoid have
a lot of vowels in a row. I don't know any other words that have so many. Here's what the typical demo screen
looks like. And I'm not actually going to run a live demo of this, but we can run through the screen here.

So, first of all, I would specify an interarrival mean. And what that means is customers show up about every four minutes, plus or minus. The M/M/1 notation means this is a so-called memoryless or Markov or exponential distribution, and so that means about every four minutes a guy shows up. The service mean of three means that after about three minutes, a guy can get served. So, in other words, the server is a little bit faster than the customers showing up. This is a moderately congested queueing system. So the first M: exponential, or Markovian, interarrival times. The second M: exponential service times. And the 1: one server. You can see that this is a moderately congested system because the ratio three over four, 0.75, is starting to get near one. If this thing got to be 0.999, you'd be in a major traffic jam.

Anyway, let's go over and see how we would simulate this. And what we're going to do is, we're going to
eventually do this by hand. And then we'll do it in Arena. We'll do it with all sorts of things. We even have it in my spreadsheet package, which you'll get to play with. So there's lots of ways you can simulate. We'll do it by hand
now. The I in this column refers to the customer number showing up in the queue. Arrive means the arrival
time. The start means when do they start getting served? Service is the service time. Leave is the leave time.
When does he complete service? And wait is the amount of time he waits. These graphics simply represent the
current queue length as a function of time. And the server utilization. It turns out in this example, the server's
probably going to be utilized 75, 80% of the time, and the queue length bounces from zero to up to four or five
and back down.

The first guy, he shows up at time zero. He showed up when the store opened. He didn't have to wait at all. His
starting time for getting served was zero; he got served immediately. His service time, it turns out, was seven. Now, I admit, that's not a service time equal to the mean of three, but seven could happen. If the mean of three gets plugged into the algorithm, you could get a seven. You could get a two, you could get a one. On average it's
three, but this time we got a seven. And then since he starts getting served at time zero, and the service time is
seven, he leaves at time seven. Good for him, and this lucky guy didn't have to wait at all. Meanwhile, the
second guy shows up at time two. Now, on average, the time between arrivals is four, but he showed up two
minutes after the first guy, perfectly allowable. The problem for this guy is that somebody's getting served
already and the first person doesn't get out of service until time seven. So customer two who arrived at time
two has to wait until time seven to leave. So that poor guy has to wait seven minus two, he waits five minutes.
Now, finally, after he waits a little while and he gets served at time seven, his service time itself is three
minutes. So he requires three minutes of service, and so he leaves after seven plus three, he leaves at time
10. So isn't that nice? Very simple.

Meanwhile, customer three shows up at time five. Uh-oh, customer one is getting served until time seven.
Customer two is getting served until time 10. So this poor person has to wait in line for a while. In fact, he has
to wait until time 10, which is five minutes. When he finally gets served at time 10, his service time is five
more minutes. And so, he leaves at time 15. See how this works? You can write this up in a spreadsheet very
easily. You can see the respective service times for the next customers, and these give you the actual waiting
times.
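
This bookkeeping is easy to code up. Here's a minimal Python sketch of the same hand simulation, using the arrival and service times from the example; the start/leave/wait logic is the standard first-in-first-out, single-server recursion described above.

```python
# Hand-simulation bookkeeping for a single-server FIFO queue.
# Arrival times [0, 2, 5] and service times [7, 3, 5] are the first
# three customers from the lecture example.

def simulate_queue(arrivals, services):
    """Return (start, leave, wait) lists for a FIFO single-server queue."""
    starts, leaves, waits = [], [], []
    prev_leave = 0
    for a, s in zip(arrivals, services):
        start = max(a, prev_leave)   # serve when the server is free AND the customer has arrived
        leave = start + s
        starts.append(start)
        leaves.append(leave)
        waits.append(start - a)      # time spent waiting in line
        prev_leave = leave
    return starts, leaves, waits

starts, leaves, waits = simulate_queue([0, 2, 5], [7, 3, 5])
print(starts)  # [0, 7, 10]
print(leaves)  # [7, 10, 15]
print(waits)   # [0, 5, 5]
```

The printed values match the walkthrough: customer two waits five minutes and leaves at time 10, and customer three waits five minutes and leaves at time 15.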

So the first customer waited zero; the second, five; the third, five. The fourth, fifth and sixth customers didn't
have to wait at all, good for them. The seventh customer waited one minute. Et cetera, et cetera, et cetera. Well, look
at this, we had a busy period here where a lot of people had to wait. But this is very nice. This is what the
simulation tells you. Very, very nice. What's nice about this simulation and all simulation products is that they
give you summary statistics. So this is wonderful. This output analysis summary statistic page says that after
time 93, we had a throughput of 21 customers. The average number of customers in the system during this run
was 2.3 customers. The average time in the system was 10.19 minutes. The server was being utilized 93% of
the time. The average length of the queue was 1.37. The average customer waiting time in the queue was 6.05. Absolutely
wonderful. It gives you all the output that you would ever want from the simulation. Very, very nice, and it's all
automatic. You don't have to do the programming yourself.

The next example that I'll give has to do with stock market follies, and I just made these numbers up. But you
can see that there's a tremendous application in the stock market. So I'm going to simulate a small portfolio of
various stocks. I'll simulate them over I think it's a five year period. And the stocks are going to change, the
prices are going to change, obviously, randomly from year to year. And the stock portfolio will have
components that have various volatilities. Some sectors are more highly volatile than others. And what you can
do, I won't do this in this particular simulation, but you can consider different mixes for the portfolio that take
into account the anticipated rate of return and the different portfolio volatilities, and if you wanted to, you could
optimize over these types of portfolios and make a lot of money. That's how Markowitz got his Nobel Prize.

And what I'll show you now is a very simple spreadsheet application that I just made up with six different
categories of stocks. Here's what it looks like. I've got an energy
sector. Pharmaceuticals, entertainment, insurance, banking and computer technology. And, again, I made
these numbers up. Energy, apparently, it goes up by about 5% a year on average. Pharmaceuticals up 6% a
year. Oh, look at this, computer technology is just racing. Up 18% a year on average. The standard deviations,
though, leave something to be desired. So even though energy appreciates at about 5% a year on average,
unfortunately, the standard deviation is 30%. So this is not something that you can take lightly. And look at
computer technology. The standard deviation is 50% there, which is crazy. The entertainment industry and insurance
industry, those are a little more well behaved. You know, insurance is just boring, right? 5% a year, what a
boring industry.

So here's what I'm going to do. I'm going to start out with $5,000 in each of the components for a $30,000
portfolio. Or if you want to make it interesting, let's pretend each is $5 million, which is a little bit more
important, so I have a $30 million portfolio. And I'm just simulating what happens from year to year. Energy
goes up by about 5% per year, but look, it went way down the first year to 2701. Then it went back up, stayed
the same, went back down, and went back up over the five years. So that's unfortunate; I didn't do so well there.
And it had to do with that volatility. Same kind of thing happened over here with computer technology. It went
way down in the first years and started catching up a little bit in year five. And you can see that some of the
stocks did better than others. And by the end of five years, we made 3,000, $4,000, or three or four million,
depending on how you count it.

Let me give a demo now, and I'll show you how the volatility actually works. So here's a live demo that we'll do.
And you can see I've set this up in Excel. Here are the same numbers that we had on the screen. And you can
see that, you know, it looks like I did pretty well. After five years, I got up to $41,000 or $42,000. If I hit F9, basically, the
simulation changes its random numbers. And you can see I get a different outcome. Here it's what? $37,000.
Do it again. You can see it changes from run to run. Every time I hit the Return button. But look at this,
sometimes I lose money. That's how it is when you have these highly volatile portfolios. Sometimes I make
money. Sometimes, whoa, this time I made $96,000. I did great. Of course, that doesn't happen all the time. So
what you would be expected to do to make sense out of this thing is that maybe you simulate the portfolio
1,000 times or 10,000 times and you get a histogram of that number right there. How does that change over
the 10,000 runs? And depending on the volatility, rate of return, and weights, you can design a portfolio that
tends to do better or is more conservative than others. That's what portfolio analysis is, and simulation is a big
help with that.
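
Here's a minimal Python sketch of the same idea: simulate the portfolio many times and look at the distribution of the final value. The sector means and standard deviations below are illustrative stand-ins, not the exact numbers from the slide.

```python
import random

# Monte Carlo sketch of the portfolio example: each sector starts at
# $5,000 and its value is multiplied by (1 + annual return) each year,
# where the annual return is normal with that sector's mean and
# standard deviation.  These parameter values are illustrative only.
SECTORS = {                    # (mean annual return, st. dev. of return)
    "energy":        (0.05, 0.30),
    "pharma":        (0.06, 0.25),
    "entertainment": (0.04, 0.15),
    "insurance":     (0.05, 0.10),
    "banking":       (0.07, 0.20),
    "computers":     (0.18, 0.50),
}

def final_value(years=5, start=5000.0, rng=random):
    """Simulate one 5-year run of the whole portfolio and return its end value."""
    total = 0.0
    for mu, sigma in SECTORS.values():
        v = start
        for _ in range(years):
            v *= 1.0 + rng.normalvariate(mu, sigma)  # one random year
        total += v
    return total

random.seed(6644)
# Like hitting F9 10,000 times -- now we can histogram the results.
results = sorted(final_value() for _ in range(10_000))
print("median 5-year portfolio value:", results[len(results) // 2])
```

Re-running `final_value()` once is the F9 demo; collecting 10,000 runs is the histogram idea from the lecture.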

The last thing I'll talk about in this lesson is taking a random walk. This is a terrific topic which we will talk about
again in gross and tedious detail later on in the class. It's a wonderful topic. It's used all over the place. And I'm
just going to walk you through what it does. And I won't give a demo just yet; I'll save that for later on. So, what
we're going to do is, I'm going to be a drunk guy. And I'm going to take a step up or
down. And the amount of space that I move up or down is normally distributed. And I'm going to do that every
minute or every hour. So I'm going to walk up or down. Each step length is normally distributed. And I do this
every time unit. And I want to know where am I after a certain number of time units. So that's called a random
walk.
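
Here's a minimal Python sketch of such a random walk; the mean-zero, standard-deviation-one step distribution is just an illustrative assumption.

```python
import random

# Sketch of the random walk described above: starting at 0, each time
# unit we move up or down by a normally distributed amount.
def random_walk(n_steps, rng=random):
    """Return the list of positions after 0, 1, ..., n_steps steps."""
    position = 0.0
    path = [position]
    for _ in range(n_steps):
        position += rng.normalvariate(0.0, 1.0)  # one drunken step
        path.append(position)
    return path

random.seed(1)
path = random_walk(100)
print("position after 100 steps:", path[-1])
```

Plotting `path` against the step index gives exactly the jagged, stock-price-looking picture from the lecture.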

And this has applications in financial analysis, hypothesis testing, all over the place. It turns out that this
random walk converges into what's called Brownian motion, which will come up all the time in this course. Just
all the time. It turns out, Einstein and Black and Scholes won Nobel Prizes for research involving Brownian
motion. This is what it looks like. So, every time it goes up a little bit, right there, that's my little step up, up up
up, down down down down down. Up up up, you can see me. I'm a drunk person walking up and down. And
does this not look like a stock price? So this type of thing has applications in stock pricing, portfolio analysis,
option pricing, absolutely wonderful topic areas. And we'll have much, much more to say about
that as the course progresses. Here's a summary of what we did just now. We ran some simulations on
additional easy examples. All of them involve randomness. And finally, next time, I'm going to show you how to
generate that randomness on a computer. And this will lead to a bunch of interesting issues that will pervade the
entire course.

Knowledge Check 1.7

1. TRUE or FALSE? All random number generators perform pretty much the same.

A. True
B. False

2. Suppose customers to a barber shop show up at times 4 and 11. Moreover, suppose that it takes the barber
12 minutes to serve customer 1 and then 14 minutes to serve customer 2. When does customer 2 leave the
barber?

A. 18
B. 25
C. 30
D. 40

Lesson 8: Generating Randomness
I'm going to continue the whirlwind tour with a discussion on the generation of randomness. How do you do
that on a computer? The algorithms that generate randomness are not random at all. I've already alluded to
that. But to you and me, they appear to be random. So, we'll see how that's possible.

Randomness. Well, we need random variables to run the simulation. You need interarrival times. You need
service times. So, the trick that we'll learn about is that you generate uniform zero one pseudo-random
numbers. Pseudo just means pretend random, not really random. Uniform zero ones are just random numbers
between zero and one. These pseudo-random numbers, PRN's, are generated via deterministic algorithms. So
in fact they're not random, they just seem to be. Now these uniforms are then fed into an equation. You start
with these uniforms and I can get any other distribution I want. I can get exponential interarrival times. I can get
normal heights. But I need to start out with these uniforms.

So, let's talk about that. Uniform zero ones. Again, they come from a deterministic algorithm. The nicest one
that I can think of is one called a linear congruential generator. That's a mouthful. But here's, in a few words,
here's what it does. You're not expected to remember this as this point 'cause we'll go over this later on. But
you start out with a seed. We've discussed this before.

Let's make it an integer seed, just like we wrote down in all those examples. Let's call it x naught. So that's my first
seed. I'm going to use that to generate my next number. Let's call that x i. X i, for i equals one, is equal to some
constant a times x i minus one (that would be x zero in this case, if i equals one), mod m. So mod is the
modulus function. This is basically the remainder function. So if I have seven mod four, seven divided by four
has remainder three. That's what the mod function would be for seven mod four. We'll talk about that more later
on if you've forgotten that. So a and m are carefully chosen. M is usually a really big prime number and a is
some other big number. But it turns out, if you use this algorithm x i equals a times the previous, x i minus one
mod m, that often works out to give you kind of nice looking random integers. To change that from a random
integer to a random number, PRN pseudo random number between zero and one, I just start with my x i, divide
by m, which is the biggest possible integer that I can get from this algorithm. And then that's guaranteed to give
me a number between zero and one. So we'll say that u sub i equals x i divided by m.

Here's a pretend example. Let's start with x naught equals four. I'm just making this up for no reason. And then,
that means that x i, I'll set that equal to five times x i minus one mod seven. And let's see what I get. X one is
equal to five times x naught, which equals 20 mod seven. And if I divide 20 by seven, let's see, that gives me two
remainder six, so 20 mod seven is six. Using the same manipulations, x two equals two, x three equals three, x
four equals one, x five equals five, et cetera. And all these numbers are very easy to compute; you just plug
into the mod equation. Then, if I want to get my uniforms, I divide by seven, which is the modulus. So u one
equals x one over m, which is six over seven. U two equals two over seven, et cetera, et cetera. The problem is, you can
look at these things and you can tell the numbers don't look so random. But that's just because we start off with
such a small pretend example where m is equal to 7.
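
This toy generator is easy to code up. Here's a short Python sketch that reproduces exactly the numbers from the example:

```python
def lcg(a, m, seed):
    """Generator yielding pseudo-random integers x_i = a * x_{i-1} mod m."""
    x = seed
    while True:
        x = (a * x) % m
        yield x

# Toy example from the lecture: x_0 = 4, x_i = 5 * x_{i-1} mod 7.
gen = lcg(a=5, m=7, seed=4)
xs = [next(gen) for _ in range(5)]
print(xs)                   # [6, 2, 3, 1, 5]
print([x / 7 for x in xs])  # the PRNs: 6/7, 2/7, 3/7, 1/7, 5/7
```

The same function with `a=16807` and `m=2**31 - 1` gives the "real" generator discussed next.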

A real example involves the following linear congruential generator. X i equals 16807 times x i minus one mod
two to the 31st minus one. Now that number is quite a bit bigger than seven. That's about two billion. Turns out,
this thing actually works pretty well and we'll be using it later on. Again set u i equal to x i over m. That'll give
me a number between zero and one. This generator, well, it used to be used in a number of simulation
languages. It's not anymore. It's got nice properties including the fact that it has what's called a large cycle
time. It doesn't repeat for a while. But there are better generators out there, which we'll talk about later on.

Now how do I go from these uniforms to generating other random variables. I'm not going to go through the
theory here, but it always amounts to applying some transformation. So, for instance, if I start out with that
uniform, I apply the following transformation, negative one over lambda log of u i. If I apply that transformation
to the u i's, believe it or not, I get an exponential random variable. Now that's true by what's called the inverse
transform method. And I can use this for many, many important probability distributions. Now, it turns out there
are other more sophisticated methods available. For instance, Box-Muller, which we've already seen. That's
used for the normal distribution. And we'll talk about those as the course proceeds.
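
Here's a minimal Python sketch of the inverse transform idea for the exponential distribution; the rate lambda = 0.25 (mean 4, like the interarrival times earlier) is just a sample choice, and the built-in uniform generator stands in for the LCG.

```python
import math
import random

# Inverse transform sketch: if U ~ Uniform(0,1), then
# X = -(1/lambda) * ln(U) is Exponential(lambda).
def exponential(lam, rng=random):
    u = rng.random()            # a Uniform(0,1) pseudo-random number
    return -math.log(u) / lam

# Sanity check: the sample mean should be close to 1/lambda.
random.seed(6644)
lam = 0.25                      # e.g., mean interarrival time of 4 minutes
xs = [exponential(lam) for _ in range(100_000)]
print(sum(xs) / len(xs))        # should be near 4
```

Feeding the same transformation other functions of U gives other distributions; that's the inverse transform method in a nutshell.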

Here's a summary. We showed how to generate what appears to be randomness on a computer. And in the
next lesson, we'll show how random input means random output. And that spells trouble with a capital t. What
do you do about that? You can't quite use the baby stat stuff that you learned when you took that course a
long, long time ago. So what do you end up doing about it?

Knowledge Check 1.8

1. Suppose we are using the (terrible) pseudo-random number generator [MATH] mod(8), with starting value
("seed") [MATH]. Find the second PRN, [MATH].

A. 0
B. 1/8
C. 3/8
D. 3

2. Suppose that we generate a pseudo-random number U = 0.728. Use this to generate an Exponential
(lambda = 3) random variate.

A. -0.106
B. 0.106
C. -0.952
D. 0.952

Lesson 9: Simulation Output Analysis
The last stop on our whirlwind simulation tour will be a discussion on Simulation Output Analysis which
happens to be my favorite topic. Here's the lesson overview. Last time I talked a little bit about how we can
generate randomness so as to run the simulation. Now, unfortunately, random input to the simulation means
random output, and that requires very, very careful analysis.

What can we do about that? The bottom line is that everything they taught you in your Baby Stats class was a
big fat lie. And the problem is that simulation output is never, ever independent or normal. So we're going to
need some new methods, and we'll hint at those today. So, analyzing randomness. Simulation output is nasty.
Let's look at the simplest reasonable example I can think of.

Let's consider some consecutive customer waiting times in line at McWendy's. So first of all, the waiting times
for consecutive customers are not normally distributed. Usually they're skewed. First of all, of course, they can
never be negative, right? You can't have a negative waiting time. But usually what happens is most people
kinda wait about a certain amount of time, but there's a long right tail. It's certainly not symmetrically distributed, and
it's therefore certainly not normally distributed. Also, the waiting times are not identically distributed because
patterns change during the day. There might be a big long line of people at McWendy's during breakfast time
or lunchtime, or dinnertime, but in those times in between the customer arrival patterns are completely
different, so the waiting times might be like zero during those slow times.

The worst thing, though, is that waiting times are not independent of each other. And if you're waiting in line a
long time, then the poor guy next to you is probably also waiting a long time, so your waiting time is
correlated with his. So what I've just shown is that waiting times are not i.i.d normal, independent identically
distributed normal random variables. This is a problem. So what that means is that you are not allowed to
analyze simulation output data by the usual Baby Stats methods. Really bad, you get into a lot of trouble if you
do that. So it turns out there's going to be two cases that we can consider with respect to simulation output
analysis. We have a terminating simulation where you're interested in short-term behavior. You're going to use
one method called the method of independent replications. Examples of terminating simulations are like you go
to a bank and you're only interested in the behavior over the course of one day. What's the average customer
waiting time in a bank over the course of a day? So you're probably going to end up simulating individual days
many times, but it's just one day. Another example might be the average number of infected people during a
pandemic. The pandemic is over, there's nothing long term about it, it only lasts for a month or so, a couple
months, how many people got infected? So these are terminating, short run simulations.

There's also a steady-state simulation which is kind of a long run simulation. Here, instead of these end the
simulation after one day kinds of things, I'm interested in really long-term behavior. Let's run an assembly line
24/7. That's the kind of thing I'm interested in. Or a Markov chain, if you remember stochastic processes, if you
don't, don't worry about it, a Markov chain that's run for a long time. These are the kinds of things that we care
about with steady-state simulation and, in that case, you attack the problem by something called the method of
batch means or other methods. Let's look at terminating simulations. These are usually analyzed by what's
called the method of independent replications.

I'll tell you about that in a little bit, but I don't expect you to remember it until we get to this topic much later in
the course. What the method does is that it makes independent runs or replications of the simulation model,
each run is conducted under identical conditions. Then you look at the sample means from each of these runs,
each of these replications, and you pretend that the sample means are approximately i.i.d., independent identically
distributed normal random variables. There's a lot of evidence for the truth of that assumption, so go ahead and
make that assumption. And then, use classical baby statistics on that i.i.d sample of replication means, not on
the original observations, on these means that you get from each of the replications. Then you can use the
classical statistical techniques on those. You still have to be careful, but it works.
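
Here's a minimal Python sketch of the method of independent replications. The `run_one_day` function is a hypothetical stand-in for "simulate one day at the bank and return the average waiting time"; the i.i.d. exponential waits inside it are purely illustrative.

```python
import math
import random
import statistics

# Sketch of the method of independent replications for a terminating
# simulation.  run_one_day() is a hypothetical stand-in for a real
# one-day simulation model.
def run_one_day(rng):
    waits = [rng.expovariate(1.0) for _ in range(200)]  # fake daily waits
    return statistics.mean(waits)                       # one replication mean

random.seed(6644)
reps = [run_one_day(random) for _ in range(30)]         # 30 independent "days"

# Treat the 30 replication means as approximately i.i.d. normal, and
# form a classical confidence interval around their grand mean.
xbar = statistics.mean(reps)
s = statistics.stdev(reps)
half = 2.045 * s / math.sqrt(len(reps))   # t quantile for 29 d.f. is about 2.045
print(f"95% CI for mean daily wait: {xbar:.3f} +/- {half:.3f}")
```

Note that the classical statistics are applied to the replication means, never to the raw (correlated, non-normal) waiting times.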

In steady-state simulation, and again I don't expect you to remember this at least until we get to the
discussion, this output analysis topic much later in the course, first you have to deal with the initialization bias.
And what that is, the stuff that happens right at the beginning of simulation is not indicative of steady-state. Like
if you open up the store, nobody's going to have to wait right at the beginning unless there's a giant sale going
on. Nobody's going to be waiting in line. Whereas long-run behavior, people are more apt to be in line, at least
on average over the long term. So I have to deal with the bias of the simulation at the beginning of the run.
Usually what people do is they warm up the simulation before they start collecting the data, the steady-state
data. If you don't do this, it can mess up your subsequent statistical analysis.

Now it turns out there are lots of ways to deal with steady-state data. I mentioned this method of batch means.
There's also other funny sounding things like overlapping batch means or spectral analysis, standardized time
series, regeneration. We'll learn those terms later on.

Batch means is the one that most people use. What's that? Well, in batch means you make one giant run
versus a number of shorter independent replications, 'cause it's steady-state, so you make one long run. You
warm up the simulation before collecting data just like I warned you on the previous slide. You chop these
remaining observations, after you warm it up from the one long run, into contiguous adjacent batches. So you
batch the observations, that's why they call it batch means, then you take the sample mean from each batch.
And using various central limit theorems and things that we'll talk about later, you can assume that those
sample means are again i.i.d normal and bang, off you go, classical statistics on these batch means. So again
we'll talk about this in much more detail later on. This is just a tour, a three-hour tour, like Gilligan's Island.
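
Here's a minimal Python sketch of the batch means mechanics. The "simulation output" below is an illustrative i.i.d. stand-in (real steady-state output would be correlated), so treat this only as a picture of the warm-up / batching / confidence-interval steps.

```python
import math
import random
import statistics

# Sketch of the method of batch means for a steady-state simulation:
# one long run, discard a warm-up period, chop the rest into
# contiguous batches, and do classical statistics on the batch means.
random.seed(6644)
run = [random.expovariate(1.0) for _ in range(10_500)]  # stand-in output

warmup = 500
data = run[warmup:]                  # throw away the initialization bias
b, n_batches = 1000, 10              # batch size and number of batches
batch_means = [statistics.mean(data[i * b:(i + 1) * b])
               for i in range(n_batches)]

# Treat the batch means as approximately i.i.d. normal.
xbar = statistics.mean(batch_means)
s = statistics.stdev(batch_means)
half = 2.262 * s / math.sqrt(n_batches)   # t quantile for 9 d.f. is about 2.262
print(f"95% CI for the steady-state mean: {xbar:.3f} +/- {half:.3f}")
```

The only structural difference from independent replications is that the "samples" come from one long, warmed-up run instead of many short ones.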

Amazingly, we are done with the module, and we talked about some very troublesome simulation output just
now, so we're done with module one. And I want you to get ready for this next module two which will be a little
bit of a review on calculus, probability, and statistics, but I'm going to throw in some simulation so you'll be able
to see some applications right away. Module two is meant to be self-contained, so if you kinda forgot some of
your calc and prob and stats, go through that and it'll get you up to speed. It will also give you a little bit of a
simulation preview. We'll have a couple of homework problems involving simulation.

Knowledge Check 1.9

1. TRUE or FALSE? Simulation outputs such as consecutive customer waiting times are almost always
independent and identically distributed normal random variables.

A. True
B. False

2. Let's simulate a bank that closes at 4:30 p.m. What kind of simulation approach would you take?

A. Steady-state simulation
B. Terminating simulation
C. Arnold Schwarzenegger simulation
D. I'm from The University of Georgia. What is simulation? And what is bank?

Module 2

Slide Decks

Lessons 1 - 6:
https://prod-edxapp.edx-cdn.org/assets/courseware/v1/e9b545305903de292f13869f3baa5ac5/asset-v1:GTx+ISYE6644x+2T2019+type@asset+block/ISYE_6644_Module_2__Lessons_1-6_PRODUCTION_VERSION.pdf

Lessons 7 - 10:
https://prod-edxapp.edx-cdn.org/assets/courseware/v1/df2046c26c52eaae13133071faed6606/asset-v1:GTx+ISYE6644x+2T2019+type@asset+block/ISYE_6644_Module_2_-_Lessons_7-10_PRODUCTION_VERSION.pdf

Lessons 11 - 16:
https://prod-edxapp.edx-cdn.org/assets/courseware/v1/2a15ecc1986991912afa1dc2533fa35c/asset-v1:GTx+ISYE6644x+2T2019+type@asset+block/ISYE_6644_Module_2_-_Lessons_11-16_PRODUCTION_VERSION.pdf

OPTIONAL: Lesson 1: Calculus Primer


Well, now we're about to begin module two, which is basically a series of boot camps on calculus, probability,
and statistics. We're done with the easy stuff, we've done a little tour, now we're going to get to do a little math,
but hopefully it's mostly a review for you. So here's the lesson overview for now. In the last module I gave a
very high-level overview of what simulation is, that was our tour, and this time the module is going to present
these various boot camps, and I'm going to start right now with a calculus lesson. Now here's the dirty little
secret, in fact there's nothing here that you haven't taken before if you've taken the prerequisites, so don't
worry too much, just go with the flow and let's see what we remember. So we'll start our calculus primer now.
The goal here is just to give a brief review of these little tidbits that we'll be using later on throughout the entire
course. First of all, let's go through a couple of elementary definitions that you might remember from a long
time ago. Let's suppose that f of x is a function that maps different values of x from a domain to a range, and
you can represent this thing in shorthand, f goes from X to Y, you've probably seen that before. Here's an
example. If f of x equals x squared, a little parabola, then this function takes x-values from the real line R, that's
some more notation, and it maps it to the nonnegative portion of the real line, we'll call that R plus. So let's do a
little definition here. We'll say that a function f of x is continuous if for any x naught and x in the domain we
have that the limit as x goes to x naught of f of x equals f of x naught. Now that limit just denotes the familiar
limit that we're all used to even in high school, and we'll assume that f of x is going to exist for all the x in the
domain.

So here's a simple example. Let's let f of x equal three x squared; now, that's just a quadratic function;
that thing is obviously continuous for all x. Now the function f of x equals integer x, that's the round down
function, some people call it the floor function, that thing's got what's called a jump discontinuity at any integer,
so for instance 3.4 equals three, 3.5 equals three, et cetera et cetera, then it jumps up to four when we get to
4.0. Here's another definition, we needed continuity for this one, if f of x is continuous, then the derivative,
which we denote as d dx f of x, or sometimes we call it f prime of x, that's the limit as h goes to zero of the
following: f of x plus h, minus f of x, over h. Now that can also be interpreted as a slope because f of x plus h
minus f of x, that's kinda the rise and divided by h, that's the run, over the range x to x plus h, so that can be
interpreted as a slope, or an instantaneous slope at any point x. Now we'll assume that it exists and it's
well-defined for any given x, but that thing is the derivative.

Here are some old friends. So when you take the
derivative of a polynomial term, x to the k, you may remember that that derivative is k times x to the k minus
one. If you don't remember, go back to your calculus book lickety split. One nice derivative is e to the x, e is
that exponential constant but e to the x, the derivative of that stays the same, e to the x. It's kind of a neat
property but you can go back to your calculus book and see why. Sine of x, the derivative of that is cosine of x,
and sort of complimentary to that, cosine of x, the derivative of that is minus sine of x, that's just something that
you have to remember. The natural log of x, the derivative of that, is one over x, we'll be using that in a little
while when we integrate. And finally, this last old friend, if for some reason you want the derivative of
arctangent of x, well that's one over one plus x squared, and believe it or not we'll be using that a couple times
in the course.

So, here are some well-known properties of derivatives, and I'll just use these on and off; I just
wanted to remind you of them. First of all, derivatives have this linearity property. The derivative of a times f of
x plus b, well, that's just equal to a times the derivative of f of x, plus the derivative of b; that's the derivative of a
constant, and that's zero, so it goes away. If you've got the sum of two functions, then the derivative of the
sum is the sum of the derivative. So f of x plus g of x, the derivative of that, is the derivative of f plus the
derivative of g. Very nice. Here's what's called the product rule, the derivative of the product is this slightly
complicated thing, f prime of x times g, plus f of x times g prime. So that's just the product rule, you have to
remember it, that'll be useful later on when we do integration by parts. Now we've got this quotient rule, this is
an interesting one, the derivative of the quotient is this mess on the right hand side, so the derivative of f of x
over g of x is g times f prime minus f times g prime over g squared. That's a little bit hard to remember, I'll go
over a mnemonic in a second. Finally, the chain rule, I'll also give a reference on that, that's the derivative of a
more complicated function. So if we want the derivative of f of another function g, so we're kind of doing a
composition of functions, the derivative of f of g of x, that's equal to the derivative of f evaluated at g, now you
take the inside, kind of do this in a chain, times g prime. So f g quantity prime equals f prime of g times g prime.
It's kind of like a chain.

So here are a couple of references; the first one is a mnemonic for the quotient rule. So if
you consider f of x as Hi because it's on top, it's in the numerator, and g of x as Ho, cause it's in the
denominator, then the derivative of the quotient can be expressed as Ho dee Hi minus Hi dee Ho over Ho Ho.
Think about that, but it's an easy way to remember it. And then the chain rule, there's a reference that you can
go look up on YouTube.

Here's an example, 'cause you know, we went through all these equations; time to do a
little example, here's an easy one I could think of, let's let f of x equal x squared and we'll set g of x equal to
natural log of x, let's just see what some of these things end up equaling when we do these derivative rules.
First of all let's do the product rule, so what's the derivative of f of x times g of x? Well remember, we said that
this is going to be the f of x times g prime plus f prime times g of x. And let's see if that turns out to be the case.
So the first equality is just a, I've reexpressed the derivative of f times g as the derivative of x squared times
natural log of x, so according to my product rule it's, well we'll do f prime of x times g, so f prime is two x, g is
natural log of x, plus f of x times g prime, this looks a little tricky because all of a sudden we just have an x, but
it actually makes sense. What's g prime? The derivative of natural log of x is one over x, and so that cancels
out partially with the x squared and that leaves me with the x and so that's how we get that answer, very very
easy. How about the quotient rule? F of x over g of x, prime, so that's the derivative of x squared over natural
log of x, well you do Ho dee Hi minus Hi dee Ho over Ho Ho, Ho Ho in the denominator is very easy, that's just
natural log squared of x, that's easy. So Ho dee Hi minus Hi dee Ho, well let's do the dee Hi part first, that's two
x, Ho is natural log of x so that's where we get the left hand side of that numerator, and then minus Hi dee Ho,
well we've already done that, that's minus x, so that's where that comes from. Okay how about f of g quantity
prime, so we're going to need the chain rule, and that's going to be two times g of x times g prime of x, and that
turns out to equal two times natural log of x over x. Again you can go and check that out, it's very very easy,
very, very simple.

Okay, let's make a remark now. If one derivative is good enough, how about two derivatives?
The second derivative, f double prime of x, equals the derivative of the first derivative, so I can express it as d
dx of f prime of x, and you can kinda think of it as the slope of the slope, and if you remember physics class, if f
of x is what we'll call position, then f prime of x is usually regarded as velocity, and f double prime is
acceleration, so if we want to be Isaac Newton, we remember all this stuff. Knowing the second derivative is
very nice because it's helpful in finding mins and maxes of a function. So it turns out that the minimum or
maximum of f of x can only occur when the slope of f of x is zero, think about it, kinda makes sense. In other
words, only when f prime of x equals zero, let's pretend for now that that happens at the specific values x
equals x naught, then if f double prime of x naught is less than zero, it turns out you get a max. If f double
prime of x naught is greater than zero, you get a minimum, that's known as concave down, concave up, and if f
double prime of x naught equals zero you get a point of inflection, that doesn't happen very often but when it
does, you get real sick, and that's why it's called a point of infection.

I'm going to go through this example pretty
quickly, then we'll talk again in the next lesson about this one. Let's find x that minimizes f of x equals e to the
two x plus e to the minus x. It turns out that the minimum can only occur when the derivative equals zero, and I
can take the derivative of that using the chain rule, the derivative of the first term, e to the two x, think of it as e
to the g of x, when I take the derivative of that, I get e to the g of x times g prime of x, and so that gives me,
cause g prime is two, I get two e to the two x, similarly the minus sign comes down from the e to the minus x,
so the second term in the derivative f prime of x is going to be minus e to the minus x. Anyway, take all that,
two times e to the two x minus e to the minus x, set that equal to zero and then solve for x. So that solution
occurs when x naught, after a little algebra, equals minus 1/3 natural log of 2, and it turns out it's very easy to
show, if I take the second derivative that f double prime of x is greater than zero for all x, and so therefore x
naught is the minimum. Okay, great, done with that example. And in fact, done with this lesson. Here's a
summary. What we did here is that we did a little bit of calculus derivative memories, very nice stuff, I hope
there were no surprises. And next time, I'm going to take sort of that last example and I'm going to give some
numerical techniques to solve nonlinear equations, in particular I'll try to solve these nonlinear equations to find
out what the zeroes are, and this is always pretty interesting and it's actually stuff that we'll be using throughout
the course.
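To make this concrete, here's a small Python sketch (the function names are just illustrative) that plugs the solution x naught equals minus 1/3 natural log of 2 back into the first and second derivatives from this example:

```python
import math

# f(x) = e^(2x) + e^(-x), the function we minimized in this lesson.
def f(x):
    return math.exp(2 * x) + math.exp(-x)

def f_prime(x):
    return 2 * math.exp(2 * x) - math.exp(-x)

def f_double_prime(x):
    return 4 * math.exp(2 * x) + math.exp(-x)

# The candidate minimizer derived above: x0 = -(1/3) ln 2.
x0 = -math.log(2) / 3

print(round(x0, 4))              # about -0.231
print(abs(f_prime(x0)) < 1e-12)  # the slope really is zero at x0
print(f_double_prime(x0) > 0)    # concave up there, so x0 is a minimum
```

The last two lines confirm the first-order condition and the concave-up check, exactly as described above.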

OPTIONAL: Lesson 2: Saved By Zero! Solving Equations
In this lesson, I'm going to be solving certain nonlinear equations, looking for zeros in all the wrong places. So,
here's the lesson overview. Last time, I talked about certain calculus basics, and right at the end, we started
looking for solutions to nonlinear equations, in other words, finding a zero. This time, I'm going to be a little bit
more formal about ways that we can conduct such a search. The material is going to be used several times in
the course, take my word for this, because it's actually very, very important, and is useful in a lot of different
methodologies within simulation. So, finding zeros, how might you find a zero for some complicated nonlinear
function? In other words, I'm looking for the value of x, such that f(x) = 0. There's a couple ways: trial and error,
now I've used this before, it is not so great, sort of a search and destroy type thing. Bisection, this is actually
better. Here we're using a sort of divide and conquer strategy, where we at least narrow things down every step
of the way, and Newton's method, or some variation of Newton's method, this is sort of a highly dependable,
very quick method, and then, maybe later on in the course, we may have time to look at a fixed-point method,
those are a little bit more complicated and specialized, but they're very nice as well. So, here's an example that
you may recall from the last lesson, and this example actually has an exact answer, at least we can solve it.
Nevertheless, I'm just going to remind us of this example just for stabilization purposes. Let's find the value of x
that minimizes this function, e to the 2x plus e to the -x. It turns out that this minimum can only occur when you
take the derivative and set it equal to zero, so I took the derivative using the chain rule, and I got two times e to
the 2x, minus e to the -x, set that equal to zero. I'm going to make a little algebra manipulation here, and it
turns out that the solution to this is x naught equals negative 1/3 log of two, and that's approximately equal to
minus .231. It turns out it's also easy to show that the second derivative is greater than zero for all x, and so
since that's concave up at that point, x naught is indeed a minimum. So that's actually getting a real answer,
and the thing works, it's possible to do that. Now we'll look at the method of bisection. Here, let's suppose that I
can find x values, x1 and x2, such that the function g(x1) is less than zero, and g(x2) is greater than
zero. I'm using g here instead of f because I can use any letter I want, and I'll follow a similar logic if the
inequalities are reversed, but let's just pretend that g(x1) is less than zero, g(x2) is greater than zero, then by
what's known as the Intermediate Value Theorem, which you may remember from calculus class, if one value
of x gives you a g(x) less than zero, and one value of x gives you a g(x) greater than zero, that that means that
there must be a zero in between x1 and x2, so x*, let's say that's the zero, falls between x1 and x2, that's what
the Intermediate Value Theorem says, at least if g(x) is continuous. So what I'm going to do is, I'm now going to
kind of split things in the middle. Let's take x3 equal to x1 + x2 over two, if g(x3) is less than zero, then that
means that there's a zero between x3 and x2, otherwise if g(x3) is greater than zero, there must be a zero in x1
to x3. That just follows again by the Intermediate Value Theorem. Either way, the length of the search interval
decreases by a factor of two, so what we'll do is that we'll continue in the same exact manner until the length of
the search interval is as small as desired, because every time we iterate when we go to x4, x5, dot dot dot, the
length of the interval gets smaller and smaller and smaller, so let's try this out for this function g(x) equals x
squared minus two. Now, in fact, we know that x equals square root of two is a solution to this, so x squared
minus two equals zero, that has a zero at x equals square root of two, so we know that, but by doing this
iteration, which I'll show you on the next slide, we'll be able to approximate square root of two. Now, I'll illustrate
the bisection method on the function g(x) equals x squared minus two, and we already know that the zero is
going to be x equals square root of two. I have graciously graphed out the function in Excel, here it is right
there, beautiful. You can see just by looking at the thing that the zero occurs right there, and if you look
carefully, it occurs between 1.38 and 1.44, which is not a surprise because you and I know that the square root
of two is 1.41 blah blah blah. Let's just arbitrarily pick a starting point, x1 equals one, and x2 equals two, so
here's x1 equals one, x2 equals two right there, and you can see that the g of one value and the g of two value
are negative and positive, and so that means, since g(x1) is negative and g(x2) is positive, that means that
there's a zero between one and two, so my rule says that I split the difference between one and two, I take
the midpoint, so x3 is going to be x1 plus x2 over two, that's 1.5, and here's 1.5 on the graph. You can see that
we're getting pretty close to 1.414, but g of 1.5 turns out to be 0.25, which is greater than zero, and the
implication is that the zero is between one and 1.5, so now I can continue in this manner, dot dot dot. I'll
calculate x4 equals 1.25, continuing, x5 equals 1.375, and x6 equals 1.4375, and look at this, we can really see
quickly that x* is starting to converge, it's converging to 1.414, kind of fast. It's still taking its time compared to
Newton's method, which I'll show you in a minute, but it's doing a really good job. So now, let's look at
Newton's method. So suppose somehow that I can find a reasonable first guess for the zero, say x
naught. We'll start the iteration at i equals zero. So, if g(x) has a very nice, well-behaved derivative, and by
well-behaved I don't want it to be too flat near the zero, I'll show you why in a minute, then I'm going to iterate
my guess x sub i, to xi plus one, as follows. Xi plus one, my new guess for the zero, is equal to xi minus g(xi),
divided by g prime of xi. So, g(xi), if I'm close to my answer, that's going to be approximately zero, so I'm not
going to be moving my answer too much, and depending on the derivative, if the derivative is close to zero, I
may have to move things a lot. If the derivative is quite big, positively or negatively, I won't be
moving much at all. But these two things taken together, the g(x) and the g prime of x, show me how much I'm
going to be moving my guess. Now, what I'll do if things go well, believe it or not, is I iterate xi plus one, xi plus
two, dot dot dot. Things are going to actually converge, unless I've got a couple crazy cases which I won't talk
about right now. In fact, this makes sense because if xi and xi plus one are close to each other and close to the
zero, x*, if you do a little bit of algebra, you end up with g prime of x sub i, approximately equal to g(x*) minus
g(xi), over x* minus xi, so that's kind of the derivative at point x sub i, so in fact, if things go well, this will
converge fairly quickly. Let's try Newton out for the same example that we used for bisection, namely g(x)
equals x squared minus two, and of course we secretly know that the zero for this thing is x equals square root
of two, which is about 1.414. Notice that the iteration step here is to set xi plus one equal to xi minus g(xi) over
g prime of xi, it's this thing right there, and g(xi) is equal to xi squared minus two, g prime of xi, is equal to 2xi,
that's very easy. A little algebra gives us that last expression, xi over two, plus one over xi, very very simple.
Let's start with this bad guess of x1 equals one, or x0 equals one, doesn't matter. Here's what we have, x2
equals xi over two, plus one over xi, and xi is x1 in this case, so x1 equals one, so that's one over two, plus one
over one, which is equal to 1.5. Now, I plug 1.5 into this equation, 1.5 over two, plus one over 1.5, now we get
1.4167. Actually, it's a bunch of sixes, but I rounded up to six seven. Then finally, I plug 1.4167 into here and
here, and I end up with, on just three or four iterations, 1.4142, amazing, very close to the actual answer, 1.414
blah blah blah, and it only took three iterations. Absolutely fantastic, very very nice. Here's a summary of what
we just did. I went over some very easy, basic techniques to solve certain nonlinear equations for zeros. Next
time I'll be doing the battle of Leibniz versus Newton. It's going to be integration time, we're going to see a lot of
old friends from calculus class.
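To tie the two methods together, here's a short Python sketch of bisection and Newton's method applied to g(x) equals x squared minus two; the step counts and starting points are just illustrative choices:

```python
def g(x):
    return x * x - 2.0

def g_prime(x):
    return 2.0 * x

# Bisection: assumes g(lo) < 0 < g(hi), and halves the interval each step.
def bisection(lo, hi, n_steps):
    for _ in range(n_steps):
        mid = (lo + hi) / 2.0
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Newton's method: x_{i+1} = x_i - g(x_i) / g'(x_i).
def newton(x, n_steps):
    for _ in range(n_steps):
        x = x - g(x) / g_prime(x)
    return x

print(round(bisection(1.0, 2.0, 20), 4))  # 1.4142
print(round(newton(1.0, 4), 4))           # 1.4142, in very few steps
```

Notice how few iterations Newton needs compared to bisection, which matches the discussion above.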

OPTIONAL: Lesson 3: Integration
We're now going to continue in module two with integration as we go through our calculus primer. Here is the
lesson overview. So, last time, we went through a couple of different ways to find zero of a complicated
function. That was nice. We learned a couple of ways that we'll be using throughout the rest of the course.
Now, we'll kinda do what goes up, must come down. A couple lessons ago, we reviewed derivatives, now we're
going to go the other direction. We're going to do integration. And again, this is the situation where I hope that
you'll remember a lot of old friends. Let's make the fundamental definition that a function, capital f of x, having
derivative, little f of x, is called the antiderivative of little f of x. The antiderivative is denoted, capital f of x equals the integral,
that little sign there is the integral sign, of f of x, d x, and it's also called the indefinite integral of little f of x. So,
there's a couple ways that you can say the notations. This is the fundamental theorem of calculus. If little f of x
is continuous, then the area under the curve, over the interval a, b is denoted and given by what's called the
definite integral. Integral from a to b of f, of x, d x that's equal to, and this is notation, capital f of x, the big
straight up thingy evaluated at a and b. That's just the way you say it. This is equal to capital f of b minus capital f
of a. That's the fundamental theorem of calculus. This relates the definite integral of little f of x to its
antiderivative, capital f of x. So, everything's nice, just what you would expect. The reason they call it
a definite integral, is because it definitely is an integral. Here are some old friends. Just like we looked at the
friends of the derivatives, here are the friends involving integrals. So, the indefinite integral of x to the k power,
is x to the k, plus one over k, plus one, plus c. C is an arbitrary constant, because you'll see that when you take
the derivative of x to the k, plus one over k, plus one that c, just becomes a zero. So, I'm going to have that c
every place. And I'll give you a little mnemonic to remember that in a minute. This particular integral holds for
all k not equal to negative one. For k equal to negative one, we have to look at the special
case, integral of d x over x. It turns out that's the natural log of absolute value of x plus c. We'll look at a
mnemonic for that in a little while. The reason for this is there's some convergence and divergence issues
which only take place when k equals minus one, so let's just remember that. Now, if the derivative e to the x is
e to the x then the integral of e to the x, d x is e to the x plus c. Similarly, the integral of cosine equals sine plus
c. Because, remember when we took the derivative of sine we got cosine. And the integral of one over one
plus x squared, is equal to arctan of x plus c. So, there's a million of these things. I just gave a few
that you might remember. Here's an example. And this is that mnemonic I was talking about. It's very easy to
see from the previous argument, that since the integral of d x over x is log of x, then the integral of d cabin over
cabin, is log cabin. Pretty funny math joke. But, the reason I'm giving you this is because I want you to
remember that plus c, the arbitrary constant. And then we remember that log cabin plus c is equal to
houseboat. So that's just something to keep in mind, and I'm sure you'll remember that the rest of your lives.
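One easy way to sanity-check these friends is to differentiate each antiderivative numerically and compare against the integrand; here's a Python sketch (the helper name num_deriv is just an illustrative choice):

```python
import math

# Central finite difference: (F(x+h) - F(x-h)) / (2h) approximates F'(x).
def num_deriv(F, x, h=1e-6):
    return (F(x + h) - F(x - h)) / (2 * h)

# Antiderivative "friends" from the table above, each checked at x = 0.7:
x = 0.7
checks = [
    (lambda t: t**4 / 4, lambda t: t**3),          # integral of x^3 is x^4/4 + c
    (lambda t: math.log(abs(t)), lambda t: 1 / t), # integral of dx/x is ln|x| + c
    (math.exp, math.exp),                          # integral of e^x is e^x + c
    (math.sin, math.cos),                          # integral of cos is sin + c
    (math.atan, lambda t: 1 / (1 + t * t)),        # integral of dx/(1+x^2) is arctan(x) + c
]
for F, f in checks:
    print(abs(num_deriv(F, x) - f(x)) < 1e-5)  # True for every pair
```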
Theorem. Some well-known properties of definite integrals are as follows. The integral of f of x, d x from a to a,
so think of the area under this continuous function under one point. There is no area. So, it's zero. The integral
from a to b is negative of the integral from b to a. And finally, the integral from a to b equals
the integral from a to c plus the integral from c to b. So, if you stop taking the integral at some midpoint, c, and
then you start up again, when you add those two pieces up you just get the whole thing from a to b. Here's
some other properties of general integrals. The integral of the sum is the sum of the integrals. And this next one
is sort of analogous to the product rule for derivatives. Integral of f times g
prime equals f of x times g of x minus integral of g times f prime. That's called integration by parts. 'Cause
we're breaking things up into f and g. And finally, this is sort of reminiscent of the chain rule. The integral of f of
g of x times g prime is equal to the integral of f of u, d u. That's the substitution rule, where we take u equal to g of x. We've got
to be a little bit careful here, because this substitution of u of x equal to g of x will possibly change the limits of
integration. But it's something you should remember. And here's a couple mnemonics that will help you
remember. You can go look them up on the web. Let's use integration by parts with f of x equal to x and g
prime of x equal to e to the two x. And then we'll also use the chain rule, as well. So, integral of x times e to the
two x. So, this thing you can see that I'm associating x with f of x and e to the two x with g prime of x. When
you use this integration by parts, one of the tricks is just figuring out what's the f of x part what's the g prime of
x part. So, according to integration by parts, the first term is going to be f of x times g of x, which is f of x is x. G
of x, the integral of g prime which is integral of e to the two x is going to be e to the two x over two. I needed
the chain rule for that. So, that first term, x times e to the two x over two, evaluated at zero and one. And then
subtract off g of x times f prime of x. And g of x is e to the two x over two, f prime equals one. And so what I
have. I do my substitution of zero and one into that first term with the big vertical thingy, and then I subtract off
the integral. And I get e squared over two after I substitute. Now I do the integral. That turns out to be minus e
to the two x over four, evaluated at zero and one. And there's my answer: e squared plus one, over
four. When the smoke clears. A pretty easy example. Let's give another definition. Derivatives of arbitrary order
k can be written as f, k in parentheses, of x, or d k d x k of f of x. And by convention we let f naught just be the same
as f of x. So one nice thing, which we'll use a couple of times in class, is that you can write f of x in what's called
a Taylor series expansion about a point a, and it's given by this infinite sum, the sum from k equals zero to infinity of the k'th
derivative evaluated at a, times x minus a to the k'th power, over k factorial. That's what's called a Taylor series
expansion, and it's quite nice. It's used occasionally. Maclaurin series is just the Taylor series expanded around
a equals zero. Now without going into huge detail here are some Maclaurin friends. You may remember this.
Sine of x, if I do all those derivatives in the previous slide, can be written as that infinite sum. The first one.
Cosine of x, the second infinite sum, and e to the x, is the third one. They all look a little messy but they're
really not bad at all and they're actually kind of famous. And while we're at it, 'cause I'm looking at all these
sums. Here are three that have nothing to do with these Taylor series, really, but these are just things that
come up every once in a while. So this is as good a place as any to talk about 'em. If I'm interested in adding
up all the values of k from one to n, that's just n times n plus one over two. This is something that Gauss was
able to do when he was five or six years old, believe it or not. If I want to add up all the values of k squared,
you get n times n plus one, times two n plus one, over six. That's famous. And if I have this infinite geometric
series, I want to add up p to the k from k equals zero to infinity, and it turns out, as long as p is strictly between minus one and one, that that's just one over one minus
p. And that's something that you probably learned in high school, but it bears repeating. So one last thing we'll
talk about is L'Hospital's Rule. Every once in a while we get into trouble when we have something of the form
zero over zero or infinity over infinity. So sometimes, L'Hospital's Rule is useful because it turns out that zero
over zero or infinity over infinity, they could equal anything. So here's what it says, if the limits f of x as x goes
to a, and g of x as x goes to a, both go to zero or both go to infinity, then the limit, as x goes to
a, of f of x over g of x, equals the limit as x goes to a of f prime of x over g prime of x. And it looks like we're
complicating things but sometimes this rule, L'Hospital's Rule, makes things a lot easier. And this rule makes
me sick, ha ha ha, L'Hospital. So here's a quick example. And this characterizes why this thing is so useful. So
let's look at sine of x over x. Now as x goes to zero, this is a zero over zero type thing. So turns out, let's take
the derivative. Derivative of the top, sine of x is cosine of x. The derivative of the bottom, x is equal to one. So I
can now take the limit is x goes to zero of cosine of x over one, is going to be cosine of zero over one, which is
one. So congratulations, you've just shown sine of x over x goes to one. Again you gotta be a little careful
because these zero over zero things or infinity over infinity can go to any quantity. You've gotta be very careful
about that. Here's a summary of what we just did. We've renewed acquaintances with a bunch of old integral
friends and we looked at some elementary properties, including L'Hospital's Rule. Next time, I'm going to look
at some numerical integration techniques and I want to see what happens when I can't do the integrals in those
nice, close forms. So what do we do if those friends don't want to come out and play with us?
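Here's a small Python sketch that checks the Maclaurin friends and the three handy sums from this lesson numerically; the number of terms kept in each series is an arbitrary choice:

```python
import math

# Maclaurin partial sums (first n_terms terms) for sin, cos, and e^x.
def maclaurin_sin(x, n_terms=10):
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(n_terms))

def maclaurin_cos(x, n_terms=10):
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(n_terms))

def maclaurin_exp(x, n_terms=20):
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.3
print(abs(maclaurin_sin(x) - math.sin(x)) < 1e-9)  # True
print(abs(maclaurin_cos(x) - math.cos(x)) < 1e-9)  # True
print(abs(maclaurin_exp(x) - math.exp(x)) < 1e-9)  # True

# The three handy sums: sum of k, sum of k squared, and the geometric series
# (which only converges for p strictly between minus one and one).
n, p = 100, 0.5
print(sum(range(1, n + 1)) == n * (n + 1) // 2)                          # True
print(sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2*n + 1) // 6)  # True
print(abs(sum(p**k for k in range(200)) - 1 / (1 - p)) < 1e-12)          # True
```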

OPTIONAL: Lesson 4: Integration Computer Exercises
So now, we're going to be moving into some integration techniques when you can't get the answer in closed
form. So, this lesson is entitled, Integration Computer Exercises. You'll see why. Here's the overview. Last time
I did this wonderful, puntastic integration review. And you'll see for yourselves when you go through that last
lesson. This time, I'm going to implement several, what are called, numerical techniques, that you might need if
you can't get those beautiful printed closed-form solutions to a particular integral. Which happens every once
in a while. And one of these techniques actually incorporates simulation. I'm just going to give you a hint as to
how that works and we'll do that in gross and tedious detail a little bit later. As an exercise, let's do some very
easy integration via what I call Riemann sums. And here what we're going to do is we'll simply approximate the
area under this nice beautiful, continuous function, f of x from a to b, just by adding up the areas of n adjacent
rectangles. And you might remember from calculus class that let's say these rectangles evenly divide the
interval ab into n pieces, so n as in Nancy. And so, the width of each rectangle is going to be, we'll call it delta
x. And that's just b minus a over n. And the height of the rectangle is f of xi, where xi is one of the points inside
each of the rectangles. So xi in this case could be a lot of things. So, I'm going to take it equal to a plus i times
delta x. That's the right-hand endpoint of the ith rectangle. So you can picture me drawing little adjacent rectangles
next to each other under the curve, and those rectangles approximately match the curve. So, the integral
from a to b of f of x dx is approximately equal to the sum of the areas of the rectangles. The height of the
rectangle is f of xi and the width of the rectangle is delta x. So, what I've said now is that as Riemann did, I can
approximate the integral by this very simple summation of rectangles. And now to simplify things a little bit,
looks a little bit nastier, but these are actual simplifications. Delta x, I'm just taking the same for each of the
rectangles. It's b minus a over n, instead of having x of i, I'm substituting a plus i times b minus a over n. That's
x of i. And the nice thing about this is, as n goes to infinity for nicely behaved functions, the right hand side of
this mess equals the left hand side. That's as n goes to infinity. The rectangles get skinnier and skinnier and
they eventually approximate the area under the curve really well. We've sorta seen that in some of the demos
I've given. So, let's try this out on the integral from zero to one of sine of pi x over two, dx. We've sorta seen this
before. It secretly equals two over pi, which is 0.6366. And I'm going to do this for different values of n. Actually,
I'll write down one and then you can see for yourself for other values. Okay, here we go. Since I'm a very nice
guy I'm going to try to make things as easy as possible. And I'm going to take a equals zero and b equals one.
And that just simplifies things. So instead of writing delta x equals b minus a over n, b is one, a is zero, then
delta x is just one over n. And xi is i over n. So this simplifies things. So what we have is, integral from a to b, f
of x dx, just equals, like I promised, integral from zero to one of f of x. And that approximately equals, by that
2nd equation on the previous page, summation f of xi delta x. Delta x is b minus a over n, which is one minus
zero over n, so that's where we get the one over n. F of xi, well f of x is sine of pi, x over two. So you can see
how I've got the sine of pi over two, and the x is just i over n. So let's take n equals 100, get Excel or whatever
out, don't do it by hand, 'cause there's 100 of 'em. This calculates out to a value of .6416, which is actually
pretty close to that true answer of two over pi, which is approximately .6366. If I made n bigger, I'm going to get
closer to that answer. And maybe you can try that out at home. Here's another computer exercise. This is the
trapezoid version. Now we're going to do the same type of numerical integration problem, except now I'm using
the trapezoid rule, which is usually a little bit better than this Riemann summation. Now we have integral from a
to b of f of x dx, that equals this more complicated function. In the middle there, there's still this summation
from i equals one to n minus one, so the sum now instead of going from i equals one to n, goes from i equals
one to n minus one of f of xi. But now we have these end points f of x naught over two, f of xn over two, and so
it turns out if you draw the little picture, instead of having rectangles, you have sort of little trapezoids now. This
mess simplifies down to the next line, b minus a over n, times the giant thing in brackets. And I'm not going to
have you actually go through an example right now. You might want to try this out at home on the same
integral, zero to one, sine of pi x over two. You'll get pretty much the same answer, except it turns out you
converge to the answer, two over pi, a little bit more quickly for a given value of n. You can try this
out yourself. It's not that hard. Okay now, the thing that interests me the most, and in fact this is a simulation
class, we're going to learn in more detail a Monte Carlo simulation method to accomplish the same
approximate integration. So I'm going to have you just take my word for it for now, but we'll get into this in more
detail later on. I just want to show you what we're dealing with here. Let's suppose that I can generate a series
of independent, identically distributed random numbers, uniform random numbers between zero and one. If
you don't remember what a uniform number is between zero and one, it's just a random real number between
zero and one. You can use Excel, the function RAND, what a surprise. How do we use these random numbers
between zero and one to approximate the integral? It turns out that with just a little bit of work, and we'll go
through this in tedious detail later on, the integral from a to b of f of x dx is approximately equal to b minus a
over n, that looks familiar. Times the summation from i equals one to n of f of a plus, b minus a, times u of i. This
looks a lot like the Riemann integral except what's going on is that these are not going to be rectangles that are
right next to each other. These are going to be rectangles that are randomly placed between a and b. Take my
word for it for now. And it turns out, this converges at about the same speed as Riemann, to the correct
answer, namely integral from a to b of f of x. And if you want, you can go home and try this out on the same
integral that we've been working with but you'll be given adequate practice with this later on. Here's a
summary. This time we went over some very easy numerical integration techniques. And right at the end I
snuck in a little simulation while you weren't looking, we snuck it in there. Next time I'm just going to make a
guess that it's very, very likely that we're going to be starting our probability primer.
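All three approximations from this lesson fit in a few lines of Python; this is only a sketch, with the seed and sample sizes chosen arbitrarily:

```python
import math
import random

# Target: the integral of sin(pi*x/2) from 0 to 1, which secretly equals 2/pi.
def f(x):
    return math.sin(math.pi * x / 2)

a, b, n = 0.0, 1.0, 100
dx = (b - a) / n

# Riemann sum with right-hand endpoints x_i = a + i*dx, i = 1, ..., n.
riemann = sum(f(a + i * dx) for i in range(1, n + 1)) * dx

# Trapezoid rule: interior points as before, endpoints weighted by one half.
trapezoid = dx * (f(a) / 2 + sum(f(a + i * dx) for i in range(1, n)) + f(b) / 2)

# Monte Carlo: rectangles placed at random points a + (b - a)*U_i.
random.seed(6644)  # arbitrary seed so the run is repeatable
n_mc = 10_000
monte_carlo = (b - a) / n_mc * sum(f(a + (b - a) * random.random()) for _ in range(n_mc))

print(round(riemann, 4))    # 0.6416, matching the value in the lesson
print(round(trapezoid, 4))  # 0.6366, essentially 2/pi already
print(abs(monte_carlo - 2 / math.pi) < 0.02)  # noisy, but close
```

The Riemann value matches the .6416 computed in the lesson, and the trapezoid value is already essentially two over pi.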

Lesson 5: Probability Basics
So now we're finally going to start our review on some Probability basics, and again, if you've had these
prerequisites, that's just great; if not, have some fun and sort of recall maybe what you might've seen for the first
time way back even in high school if you took an AP Stats class. So here's the lesson overview. In the previous
lesson we completed our Calculus Primer, did a little bit of numerical integration. This time I'm going to start a
review of Probability with some very easy basics, and if you're a tyro, which means beginner, feel free to
peruse, take a look at, the notes at your leisure, but you know, pay attention to the material that you don't
happen to remember so well. So here are the basics. I'm going to assume that you know about the real, real
basics like sample spaces, events, and even the definition of probability, so if somebody tells you well I think
that this thing happens with a probability greater than one or a probability less than zero, you'll laugh at them
because we know that probabilities can only be between zero and one. So I'm assuming you know stuff like
that. Anyway, here's sort of a definition I always start out with in such a class, and that is the definition of
conditional probability. So the probability of some event A, given event B, is equal to the probability of the
intersection of the two events A and B divided by the probability of B all by itself. That's called the conditional
probability of A given B. And this assumes that the probability of B is greater than zero so you don't run into
these dividing by zero type issues but this is a very standard definition. Now, if you don't like math, just think of
it as the probability that A is going to occur given the updated information B, and we'll do a couple examples to
show you. This is certainly something that we'll be using throughout the course. So, what's the probability that
an event A happens given that your friend has told you that B has happened? And this allows you to update
the probabilities. Now for instance, what's the probability that it's going to rain today given that it's rained for the
last four days? That's probably not equal to the probability it's going to rain today, given no information
whatsoever. Let's go through an example or two and we can expand on this. Let's toss a fair die. I'm going to
define the event A as having a one, two, or a three appear and I'll define the event B as having a three, four,
five, or six appear. Then the probability of A given B which means what's the probability that I saw a one or a
two or a three given that my friend secretly told me, well, a three, four, five, or six appeared, but I'm not going
to tell you which one. So what's the probability that a one, two, three appeared given that my friend has already
told me that, well, a three, four, five, or six has appeared? Well, what's going to happen here is that my friend
has eliminated the possibility of one and two, so the probability of A intersect B in this case is just the
probability that a three occurred, which equals 1/6. The probability of B all by itself occurring is 4/6 and so even
my cat can do this division. 1/6 over 4/6, the updated probability of A occurring is 1/4. So before we did the
experiment the probability of A occurring all by itself, the probability of getting a one, two, or three, that was
1/2. But given that my friend has told me this information, that a three, four, five, or six has occurred, the
updated probability of A occurring unfortunately goes down to 1/4, so there's lots of interesting problems that
you can do. On the homework I'll give you one or two that you can enjoy. Here's another definition. If the
probability of A intersect B equals P of A times P of B, then A and B are called independent events, and I'll give
some motivation for that. For instance, if I'm interested in looking at the temperature on Mars and the IBM
stock price, it turns out that those are going to be independent events, they have nothing to do with each other,
and I'll show you where this nothing to do with each other motivation comes from in a second. So, at least they
would have you believe that they have nothing to do with each other. Hopefully they are in fact independent.
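The fair-die example above is easy to check by simulation: count how often A happens among only those tosses where B happened. Here's a Python sketch, with the seed and number of tosses chosen arbitrarily:

```python
import random

# Estimate P(A | B) for one toss of a fair die, where A = {1, 2, 3} and
# B = {3, 4, 5, 6}. The exact answer worked out above is 1/4.
random.seed(42)  # arbitrary seed so the run is repeatable
n = 100_000
count_b = 0
count_a_and_b = 0
for _ in range(n):
    roll = random.randint(1, 6)
    if roll in (3, 4, 5, 6):       # B happened
        count_b += 1
        if roll in (1, 2, 3):      # A happened too (only a 3 qualifies)
            count_a_and_b += 1

estimate = count_a_and_b / count_b  # P(A and B) / P(B), by counting
print(abs(estimate - 0.25) < 0.01)  # close to the exact 1/4
```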
Theorem, this is sort of the motivational definition of independence. If A and B are independent, then the
probability of A given B equals the probability of A all by itself, so this is not a definition, it's a theorem, and it's
kind of a nice theorem that I sort of pretend is a definition. What it means is the information that my friend gives
me about B does not affect the probability of A at all, so the probability of A occurring given that I know B, that's
just the probability of A all by itself. B has no influence on the probability of A occurring, so that's why
they're called independent, B has nothin' to do with A. Here's an example, let's toss two dice. I'll let A equal the
event that the sum is a seven, and B is going to be the event that the first die is a four. A die is the singular
version of dice, I'm not being gross here. So the probability of A is 1/6 because there are six ways that you can
get a seven. You can get a one and a six, you can get a two and a five, dot dot dot. There's six ways you can
do it, six out of 36, and so that translates to a probability of 1/6. The probability of B occurring, that the first die
is a four, well let's just look at the first die. One, two, three, four, five, six, one of those possibilities is first die is
a four, so that probability is also 1/6. Okay, meanwhile, the probability of A intersect B: the only way that A and B can both happen, with the sum a seven and the first die a four, is that the first die is a four and the second die is a three; it's the only way to intersect. And so that event, a four and a three, has a
probability of 1/36, which by coincidence is probability of A times probability of B, 1/6 times 1/6, and so it shows
that the two events in that case, A and B, are independent of each other, congratulations. Let's jump over to
the definition of a random variable. Don't be frightened by this 'cause there's a little equation, it's nothing at all.
Random variable's just a function X from the sample space of the experiment to the real line. I'll give an
example to show what I mean by that. Let X be the sum of two dice rolls. So the sample space is all the pairs
of possible dice rolls. One one, one two, dot dot dot, all the way to six six. What's that called, boxcars I guess?
So in particular, one set of tosses could be a four and a six, and that adds up to 10. So what happens, my
function X takes an element of the sample space, four six, and it moves it over to the real line. So four six gets transformed to the value 10 on the real line. In addition, the probability that X equals any particular specific value, so
capital X is the random thing, little x is a specific value. We know that the probability of the sum of two dice
tosses is, if the sum is two the probability is 1/36, there's only one way you can do it, get a one and a one. To
get little x equal to three, the sum equals three, the only ways you can do that are tossing a one and a two or a
two and a one, so that's 2/36. And so you can see you get this, what's called a probability mass function. 1/36,
2/36, it goes all the way up to 6/36 when x equals seven, then it comes back down to 1/36 for x equals 12. That
brings to mind a general class of random variables, or I call 'em recreational variables sometimes. So the
definition I'm aiming at here is, if the number of possible values, of our random variable X, is finite or countably
infinite, and in that last example it was finite, there were 11 values, x equals two to 12. Then X is
called a discrete random variable. Its probability mass function, or pmf, is given by little f of x, that's a function
of x, and it's defined as the probability that X equals any of these finite or countably infinite x values, and note
that they add up to one, all these probabilities. The sum over x of little f of x equals one. By countably infinite I
mean there could be an infinite number of these things, but they have a one to one correspondence with the
integers, that's what countably infinite means. Example, let's flip two coins, X is the number of heads. X can
only equal zero, one, or two, so the probability that I flip the two coins and get zero heads is 1/4, the probability
that I get two heads, also 1/4. And the probability that I get exactly one head, so I either get heads tails or tails
heads, that's equal to 1/2. And there's no other possible values for X. 1.5, not possible, negative three, not
possible. So if you add up all those numbers, 1/4 plus 1/4 plus 1/2, wow, magic, it's equal to one. And we have
a bunch of other discrete random variables you may remember from Baby Stats class, Bernoulli random
variables, Binomial, Geometric, Poisson, I'll talk about all those as we need them, so don't panic if you don't
remember. Now, the other class of random variables that we're interested in are what's called continuous
random variables. A continuous random variable is one that has probability zero at every individual point, and
in this case there exists what's called a probability density function, or pdf, little f of x, having the magical property
that, well, f of x is not a probability like it was for the probability mass function, but you can get probabilities
from the pdf. And namely, here's how you do it. If A is any set or any event, then the probability that X falls in
that set is merely the integral of f of x taken over that set, and so f of x, the pdf is not a probability, but it gets
you the probabilities. People mess that up all the time, but that's what it is, and note that if I integrate f of x over
the entire real line, that integrates to one. That's kinda like adding up the pmf and getting a one. So here's the
simplest example I could think of. Just pick a random real number between three and seven. There are an
infinite number of real numbers between three and seven, and in fact, it's uncountably infinite, which is a bigger
infinity than countably infinite, believe it or not. So, the probability that I pick any individual value between three
and seven, like suppose, what's a probability that I pick a five, that probability is zero, 'cause I could have
easily picked 5.00001 or 4.99999 so the probability of any individual number is zero, that kinda corresponds to
the integral of a continuous function over one point. So, even though f of x is not the probability of the event, it's
the pdf, which gives me the ability to calculate probabilities, and for purposes of this, what's called uniform
distribution between three and seven, the pdf is equal to 1/4 for x between three and seven, and notice if you
integrate 1/4 from three to seven that gives you a one, so things kinda make sense. Now, some famous
continuous random variables, which you've heard of, are Uniform, Exponential, and Normal, or Gaussian, we'll
talk more about those later. For now, I'm very, very lazy, I'm going to use that little twiddle thing, that means is
distributed as. I don't like writin' things out, save some ink. So if I write X twiddle Uniform(0,1), that means that X has the uniform distribution from zero to one. Now, I'll give one more definition. For any random variable X, either
discrete or continuous, the cumulative distribution function, or cdf, is the probability of everything that's
happened up to little x. So capital F of x is the probability that capital X is less than or equal to the particular
value little x. Capital X is random, little x is a specific value, and the thing on the right-hand side is basically the
integral from minus infinity to x of little f of y dy, so everything that's happened up to time x, that's if X is
continuous. The reason I'm using little y, is it's just a dummy variable, and it's the same thing with the
summation effects as a discrete random variable. Notice that capital F of x, when you let x go to minus infinity
it's zero, because the probability of x being less than minus infinity is zero. Similarly, the probability that X is
less than infinity is one, and the nice thing is, if X is continuous, you take the derivative of big F, you get little f, and vice versa, integrate little f, you get big F, that's by the
fundamental theorem of Calculus. Example, flip two coins. Number of heads? Well remember the probability
that we would get zero heads is 1/4, so nothing happens until you get x equal to zero, that's why there's a zero
on the first line and 1/4 on the second line. Then nothing happens until you get to x equals one, whereupon the
probability that x equals one is 1/2 so the cumulative probability goes from 1/4 plus 1/2 to 3/4, and then nothing
happens until you get to x equals two, whereupon the cdf jumps up to one because 3/4 plus the probability that
x equals two, which is 1/4, that equals one. In the continuous case, let's pretend x has the exponential lambda
distribution. This is the old friend that I reminded you about a slide or so ago, that just means that little f of x is
lambda e to the minus lambda x, that is by definition, and that's for x greater than zero. Well, if you integrate that
thing from zero to x, because you don't need to worry about anything below x equals zero, if you integrate that
from zero to x it turns out with elementary Calculus that capital F of x is equal to one minus e to the minus lambda
x. We'll go through this again in more detail later on 'cause it's a very important distribution. Here's the
summary. We started out our basic review of Probability, and next time I'm going to show ya how to simulate some random variables.
Lesson 6: Simulating Random Variables
In this lesson, I'm finally going to start simulating random variables. And again, this by way of doing some
review of prob and stats, but I figured I'd sneak in some simulation now. Here's the lesson overview. Last time
we started our probability primer. All right, finally. And this time, I'm going to show you how to simulate some
very easy random variables on the computer. And of course, this is one of the main reasons that you're taking
this class. So first of all, let's start out with kind of the easiest example. I'm just going to pick a random integer,
a discrete uniform distribution from one to n. In other words, the random variable X equals i with probability one
over n for i equals one, two, up to n. This is like an n-sided die toss for Dungeons and Dragons
people. So, here's how we do it. If U is a uniform zero one random variable, which I can get in Excel using the
RAND function. We'll talk more about uniform zero ones in a little while. We can simulate a discrete uniform
random variate simply by setting X equal to n times U, a random number between zero and one, and then rounding up, taking the ceiling function. It's very, very easy, and you'll see
why. For example, let's take n equals 10, so we have a ten-sided die toss. I'm going to sample a uniform
number between zero and one. Let's pretend it turns out to be 0.73. Then,
that gives me X equal to the ceiling function of n times U, ceiling function of 7.3, and that equals eight. So very
simple. I get a uniform of 0.73 and that gives me a discrete uniform of eight. Very, very easy. Here's another
discrete random variable. This one's a little more complicated. Let's let little f of x, or the probability that X equals little x, equal 0.25 if x equals minus two, 0.10 if x equals three, and 0.65 if x equals 4.2. And notice how
those things add up to one. Now, I can't use a die toss to simulate this one. This is a little bit too complicated.
So I'm going to use what's sort of called the inverse transform method. Here's what it is. We'll go into more
detail on this later on, but I make a little table: x, little f of x, the cdf capital F of x (or the probability that X is less than or equal to little x), and then the associated uniform zero ones. So what I mean by that, the x values are minus two, three, and 4.2. I just made those up. Little f of x is, per the previous definition, the probability that X equals little x.
That's 0.25, 0.10, 0.65 respectively. The cdf at those points, well I just add up the pmfs. 0.25, 0.35, 1.00. That's
the third column. Now, let's associate some uniforms with those cdf values. The first value of x, x equals minus
two, that corresponds to a cdf of 0.25. So any time I sample a uniform between zero and 0.25, that's going to
correspond to x equals minus two. x equals three has a probability of 0.10, and the cdf at that point is 0.35. I'm
going to associate with that x equals three value the uniforms 0.25 to 0.35. The probability that I end up with a uniform between 0.25 and 0.35 is 0.10, just the probability that I want for x equals three. And the final value, x equals 4.2, has probability 0.65, so its corresponding uniforms are between 0.35 and 1.0, thus
completing the whole grid of uniform zero ones and let's see what we get here. Let's sample a uniform and we
choose the corresponding x value, in other words, we take the inverse, x equals F inverse of U. Don't worry
about that for now, because we'll talk about it later. For example, if U equals 0.46, you look at that third line,
that falls between 0.35 and 1.00, so that corresponds to x equals 4.2. Much more on this later. Now, I'm going
to use inverse transform to generate a continuous random variable. Here's the main result. We'll prove it later on.
Theorem: If X is a continuous random variable with cdf capital F of x, then the random variable capital F of
capital X is uniform. Now this is crazy. The thing on the second line there, capital F of capital X, that is not
a cdf anymore, because there's a capital X sitting there in the middle of that, not a little x. It's capital F of capital
X. That thing is a nasty, horrible random variable. Okay, fine. What that's going to allow me to do though, just
take my word for it for now, since that's uniform, according to my theorem, which I haven't proven for you, I'm
going to just set capital F of capital X equal to the uniform and solve backwards for x. So that's going to require
that I can take X equal to F inverse of U. That's the inverse cdf. And so what I'll be able to do, if I can solve for
X, that generates a value of X, given that I can come up with a uniform, which I can from Excel or from any
other programming language. Here's an example, suppose capital X random variable is exponential, then the
cdf of the exponential is a real function, capital F of little x is one minus e to the minus lambda x for x greater
than zero. We showed you that before. So now if I plug in capital X to get this big, horrible, nasty random
variable, one minus e to the minus lambda capital X, that gives me a uniform distribution according to the
theorem. That is a messy, awful random variable. I set it equal to U and solve for X. Here's what I get after a
little bit of algebra, which we'll go through later. X equals negative one over lambda natural log of one minus U.
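This recipe is worth trying on a computer. A Python sketch (not from the lecture; the rate lambda = 2, the seed, and the sample size are arbitrary choices):

```python
import math
import random

random.seed(7)
lam = 2.0   # the rate parameter lambda (arbitrary choice for the demo)

# X = -(1/lambda) * ln(1 - U) maps a Uniform(0,1) to an Exponential(lambda)
samples = [-math.log(1.0 - random.random()) / lam for _ in range(100_000)]

mean = sum(samples) / len(samples)
print(mean)   # should be near 1/lambda = 0.5, the exponential's mean
```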
That's exponential. Amazing. So if I take a uniform, plug it into that thing, I get an exponential. Beautiful. Now
the only things I haven't told you about is how do you generate uniforms? Well the above random number
generation examples, they all required us to generate basically independent and identically distributed
uniforms. I'm going to use iid from now on, because it's a mouthful. How do you do that? If I'm lazy I can just
go to Excel and use the RAND function or something similar, your favorite language, but what I'm going to do is
I'll give you an algorithm to generate these pseudo random numbers, which we talked about in the tour of
simulation. What it's going to do, it's going to give me a series, we'll call them R one, R two, dot, dot, dot of
deterministic numbers that appear to be iid uniform. So these are not really random numbers, but to you and
me they're going to look it. And there are deterministic algorithms to do this. I need to start out with a seed, an
integer seed just to get going. Let's pick your favorite fairly large integer, x naught, and you calculate X i equal to 16807 times the previous integer, X i minus one, mod two to the 31st minus one. Now we had gone over this in the tour, we'll give more details on this later on. So the first one, x one, is going to equal 16807 times x naught, which you've already picked out, mod two to the 31st minus one. The second one, x two, equals 16807 times x
one, which you just calculated, mod two to the 31st minus one. This is going to give me a series of very large
integers. X one, x two, dot, dot, dot. I'm going to turn them into my supposedly pseudo random numbers, R i by
just taking my integers and dividing by the biggest possible integer from my modular arithmetic two to the 31st
minus one. This is a little bit of a hassle to program by yourself, but it's really kinda easy. It's not that bad. You
can program it by yourself, you can use the following pseudo code. Here's a FORTRAN implementation for
instance, from an old book by Bratley, Fox, and Schrage. I won't go through the details here, we'll do that later,
but it's basically two or three lines of code here. And it works really well, so you can look at this yourself during
the commercials while you're watching your favorite TV show. This thing actually works pretty well. And it gives
me proper uniforms. So what you do is, you input your favorite x naught integer, and the function returns a
pseudo random number called UNIF as well as an updated integer that you can go and use again. And this
thing works pretty well. A couple exercises I'm going to just blow through these. Look at them at your leisure.
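If FORTRAN isn't your thing, the same recursion is just as short in Python. A sketch (the seed 12345 is an arbitrary choice):

```python
def lcg(seed, n):
    """Generate n pseudo-random Uniform(0,1) numbers using the
    X_i = 16807 * X_{i-1} mod (2**31 - 1) recursion described above."""
    m = 2**31 - 1
    x = seed
    out = []
    for _ in range(n):
        x = (16807 * x) % m
        out.append(x / m)   # divide by the modulus to land in (0, 1)
    return out

print(lcg(12345, 5))
```

Each pass through the loop produces the next big integer, and dividing by two to the 31st minus one scales it into the unit interval.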
These are sort of bonus exercises. You can use whatever you want, these are bonus. As long as you can give
me a uniform, you can do these very easily. Make a histogram of minus natural log of U sub i. These are going to give you exponential random variables. You can try that by yourself. Now you can
plug in to a slightly more complicated function. Let's generate X i and Y i. Those are going to be independent
pairs of uniforms. Plug into this nasty equation for Z i equals square root of minus two log of X i times sin of two
pi Y i. See what you get. I think you're going to feel pretty normal about what you get. And finally this really
nasty one. Again, let X i and Y i be independent uniforms. Now define Z i as X i over X i minus Y i. I'm not
going to tell you what you get, but it's something very interesting. So what you do, you plug in the X i and Y i.
Do that ten thousand times. Make a histogram. See what you get and it's going to be something interesting.
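If you want to try these three exercises, here is one possible Python sketch that generates the samples (the seed and sample size are arbitrary choices; feed each list to your favorite histogram tool):

```python
import math
import random

random.seed(2024)
n = 100_000

# Exercise 1: -ln(U) -- using 1 - U to avoid taking the log of an exact zero
exp_samples = [-math.log(1.0 - random.random()) for _ in range(n)]

# Exercise 2: Z = sqrt(-2 ln X) * sin(2 pi Y) for independent uniforms X, Y
norm_samples = []
for _ in range(n):
    x, y = 1.0 - random.random(), random.random()
    norm_samples.append(math.sqrt(-2.0 * math.log(x)) * math.sin(2.0 * math.pi * y))

# Exercise 3: Z = X / (X - Y) -- histogram it and see what you get!
ratio_samples = []
for _ in range(n):
    x, y = random.random(), random.random()
    ratio_samples.append(x / (x - y))

print(sum(exp_samples) / n)    # near 1, the mean of an Exponential(1)
print(sum(norm_samples) / n)   # near 0, if these really do look normal
```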
Here's a summary of this lesson. This was actually a starter lesson on random variate generation. Just get
psyched up. Good stuff is coming up. And next time I know that you've been expecting this, but we're going to
have a nice lecture on expectations!
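As a coding postscript to this lesson: the discrete inverse transform is just a table lookup. Here is a Python sketch for the three-point distribution used above (the seed and sample size are arbitrary choices):

```python
import random

# cdf table for the pmf P(X=-2) = 0.25, P(X=3) = 0.10, P(X=4.2) = 0.65
table = [(0.25, -2.0), (0.35, 3.0), (1.00, 4.2)]

def draw(u):
    """Inverse transform: return the x value whose cdf interval contains u."""
    for cdf, x in table:
        if u <= cdf:
            return x

random.seed(3)
samples = [draw(random.random()) for _ in range(100_000)]
print(samples.count(4.2) / len(samples))   # should be near 0.65
```

Note that draw(0.46) returns 4.2, matching the U = 0.46 example worked out in the lesson.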
Lesson 7: Great Expectations
In this lesson, we're going to be continuing our probability boot camp, except I might diverge a little into classic
literature. In this lesson, we'll be doing Great Expectations. So, here's the lesson overview. Last time, I took a
brief detour to show you how to simulate some very easy random variables. In this lesson, I'll be back to the
formal Probability Review. In particular, this module is going to be all about taking expected values of random
variables. Now, we'll be paying particular attention to something called LOTUS, L-O-T-U-S, and we'll talk about
that soon. So, the fundamental definition that we'll be working with is the expected value, or mean, of a random
variable X. That's defined as E of X equals, and now what I've done is I've broken things up into two pieces, a
discrete version and a continuous version. So, this discrete version is just summation over all values of X of X
times f of X, and that's if X is a discrete random variable, and every place we see a summation, we put an
integral sign if X is continuous. So, the expected value of X is the integral over all X of X times f of X dx. In
either case, the expected value is basically just a weighted average of the X values, and the weights are the f
of X's, the probability mass, or probability density functions. Now, if you're really into notation, here's a shorthand: occasionally, I'll use the integral over the real line of X times d capital F of X, which is just shorthand notation for the two braced items to the left. Here's an example, a very simple example for the
Bernoulli distribution, so X has the Bernoulli p distribution, if X equals one with probability p and X equals zero
with probability one minus p, which sometimes we'll call q. So that's just a coin flip with probability p of
landing on heads, or X equals one in that case. Probability q of landing on tails, X equals zero. So here, we
have the expected value of X, just the summation over all X of X times f of X, that's a discrete random variable, and you can see right here, X can take the values zero and one, so it's zero times f of zero plus one times f of one. f of one equals p, and so it simplifies down to p, as we have on the bottom there. So that's the
expected value of a Bernoulli random variable. Here's another example, this is the continuous example. X is a
Uniform distribution from a to b, and what that means is that X is sampled uniformly, sort of with equal probability,
anywhere between a and b. In this case, f of X equals one over b minus a, for all values of X between a and b.
And you can see that that's just the equal sampling, sort of, between a and b. So this is a continuous random
variable. And so for the expected value, we now take the integral, over all values, X, between a and b. So the
integral of X times f of X from a to b, after the smoke clears, it's just a plus b over two. And in fact, if you draw a picture of f of X, it's just a constant, one over b minus a, you can see that the average is just that a plus b
over two. You can tell just by looking at the thing. Let's look at a slightly more difficult continuous example,
namely, let's suppose that X is distributed as an Exponential distribution with parameter lambda. In this case, f
of X just equals lambda e to the minus lambda X for X greater than zero, and zero otherwise, and this often
represents the distribution of the lifetime of say a light bulb, or some electrical component. In any case, the
expected value of X, we're going to have to do some integration by parts, and L'Hospital's Rule, and all
sorts of stuff, I'll let you go through that. The expected value of X, since it's continuous, is going to be the
integral from X equals zero to infinity of X times f of X, as we're starting at zero in this case, X times f of X is X
times lambda e to the minus lambda X, after the smoke clears, again, this is after integration by parts, and
L'Hospital's Rule, you get one over lambda, and that's a famous result, hopefully you remember that from a
probability class. Here's a definition, or actually it's a theorem that you may not have seen before. This is called
the Law of the Unconscious Statistician, or LOTUS. And this is a very, very important nice little result. We'll kind
of pretend it's a definition, but in fact it's a theorem. So, this just gives us the expected value of sort of an
arbitrary function of X, let's call that arbitrary function h of X, and it can kind of be anything, I'll give you some
examples in a minute. But the expected value of h of X, this is a general definition now, is, in the discrete case,
the summation over all X of h of X times f of X, that's if X is discrete. And if X is continuous, it's the integral over all
X of h of X times f of X dx, again, wherever you see a summation, put an integral sign for the continuous case.
Here's my shorthand notation: the integral over the real line of h of X times d capital F of X. Like I said, the function h of X
can be anything, kind of nice. For example, h of X equals X squared or one over X, or sin of X or log of X. It
can be anything nice. Here's a discrete example. Suppose X has the following distribution: X can equal two, three, or four, with probabilities 0.3, 0.6, and 0.1, respectively.
Let's suppose, for some reason, we want to calculate the expected value of X cubed. So, h of X equals X
cubed, and we plug in to my equation the expected value of X cubed is just sum over all X of X cubed times f
of X, I plug in the three different values that X can take, namely, X equals two, three, and four, so here's X
cubed, two cubed times f of X, f of two is 0.3, plus three cubed times 0.6, plus 4 cubed times 0.1, and after a
little algebra, you get 25. What could be easier? Here's another example. Suppose that X is Uniform(0,2); now
I'll go for the whole thing instead of the expected value of X cubed, let's just be arbitrary and get the expected
value of X to the nth power. So that's going to be, using the Law of the Unconscious Statistician, integral over
the real line of X to the n times f of X, so since it's a uniform distribution, I only have to integrate from zero to
two, and then f of X is one half, and after the smoke clears there, my answer is two to the n divided by n plus
one. So, since we saw that thing, the expected value of X to the n, let's actually define that because it's quite
useful. The expected value of X to the n is actually called the nth moment of X. The slightly more complicated
thing, the expected value of the following messy, awful function, is the expected value of X minus E of X, that
whole thing to the nth power, that's called the nth central moment. Now, even though it looks awful, don't panic.
That number E of X is just a number. And so, this messy thing inside X minus a number to the nth power, that's
just a complicated, big old h of X function. That's called the nth central moment of X. And that just sort of a
specialty case of moments. The most special case is the variance of X. That's defined as the second central
moment, and you can see it's the expected value of the squared deviation of X from its mean. Now what does
that mean? E of X, the thing in the middle there, is the expected value of X. X minus E of X is how far does the
random variable tend to deviate from the mean. Now that could be plus or minus, so let's square it and take the
average over all those values. And you get the variance. So, whereas the expected value of X is a measure of,
sort of the middle of the distribution, the variance of X is a measure of how much the distribution is spread out.
You may be familiar with the standard deviation of X, that's just the positive square root of the variance. And
here's a theorem that I always like to calculate the variance of X with. After a little bit of algebra, it turns out that
you can calculate the variance simply by using the expected value of X squared minus the square of the mean.
That's just an easier way to calculate variance sometimes. As an example, let's suppose that X is our good
friend, the Bernoulli distribution. And you might want to remember that the expected value of that X equals p.
Then it turns out that the expected value of X squared is, by the Law of the Unconscious Statistician, it's a
summation of X squared times f of X when you substitute in the zero and the one, turns out that's equal to p.
And so then the variance of X equals, using my formula from the last page, the expected value of X squared
minus E of X quantity squared that's going to be p minus p squared, and that simplifies to p times one minus p.
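Both formulas are easy to confirm with a quick simulation. A Python sketch with p = 0.3 (an arbitrary choice, as are the seed and sample size; not from the lecture):

```python
import random

random.seed(11)
p = 0.3          # success probability (arbitrary choice for the demo)
n = 200_000

# a Bernoulli(p) sample: 1 with probability p, 0 otherwise
flips = [1 if random.random() < p else 0 for _ in range(n)]

mean = sum(flips) / n                           # should be near E[X] = p
var = sum((x - mean) ** 2 for x in flips) / n   # should be near p(1-p) = 0.21
print(mean, var)
```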
Let's be a little more ambitious, let's look at the exponential distribution with parameter lambda. By the Law of
the Unconscious Statistician, the expected value of X to the nth power is the integral over the real line, in this
case you only have to go from zero to infinity, 'cause that's the only place where X is defined, of X to the n
times lambda e to the minus lambda x. That's x to the n times f of x dx, and after you do the calculus, now you may recognize this as what's called a gamma function, or you may recognize it from your integral tables, but believe
it or not, after the algebra, this comes out to be n factorial over lambda to the n. That's sort of a famous result.
In particular, the variance of X equals E of X squared minus E of X quantity squared, that's going to be if I plug
in to the above equation n equals two, that gives me two over lambda squared minus the square of the mean,
we've already calculated the mean as one over lambda, after I do the algebra there, the variance of X equals
one over lambda squared. That's sort of a famous result. Okay, fantastic. Here's a general theorem. It shows
that expectation is a linear function. So what I mean by that is, the expected value of aX plus b equals, well just
move the expected value inside. a times the expected value of X plus b, so just move the expected value
inside. Doesn't quite work out like that for the variance. The variance of aX plus b equals a squared times the
variance of X. So the a comes outside as an a squared, the b goes away. Let's think about that. If I have a
random variable, and I just shift it by b, that's not going to change how much it's spread out. It's just going to
affect where it's centered. So, that b doesn't come into play when I look at the variance. Here's an example,
let's suppose that X is exponential three, lambda equals three from the last page, the expected value then of
minus two X plus seven, I just made that up, is, well let's bring the expected value inside, like I said, so minus
two, expected value of X plus seven, the expected value of X is one over lambda, one over three, so that's
where I get this minus two thirds, and then I'm so totally lazy, I'm just going to keep the plus seven there. If you
have extra time during commercials tonight, you can calculate minus two thirds plus seven for yourself. The
variance, meanwhile, well minus two is my value of a, it comes outside as minus two squared, so that's going
to be four times the variance of X, so that's where that four comes from. The variance of X is one over lambda
squared, so that's four over nine. So very, very easy. Let's take a tiny little break here, just a moment. And,
turns out, we've just gotten through a bunch of good stuff with respect to expectations. And now I'm going to do
one more topic, this is sort of a bonus topic. I'd like you to take a look at it carefully, but this really is a bonus
topic, and turns out it's very useful for a variety of reasons, and these are called moment generating functions.
And it's a little more challenging, that's why I decided to take this little break for a second. Okay, break's over.
So, here's a definition of a moment generating function. M X of t equals the expected value of e to the t X. Now
it looks a little awful, but that's what the moment generating function is, mgf. So it turns out, M X of t is a
function of t, not explicitly of X. I mean, once you define the random variable, X, the moment generating
function for the random variable, X, is just the expected value of e to the t X, it's a function of t. Here's an
example. Now, don't panic, we're just going to use the Law of the Unconscious Statistician. Let's let X equal
our friend the Bernoulli distribution with parameter, p. So, the mgf, M X of t is, by definition, the expected value
of e to the t X, by the Law of the Unconscious Statistician, it's the sum over all X of e to the t X times f of X. At
this point, see it's a discrete distribution, so we use a summation, not an integral sign, that darn thing looks like
Laplace transform, if you remember back to your engineering days, the only thing that's different is that we
have a t instead of a minus t. So, if you remember Laplace transforms, that's great, if you don't, don't worry
about it. In any case, the only two possible values of X are zero and one. Here's the X equals zero term, e to
the t zero times f of zero, that's q, probability X equals zero is q for Bernoulli, and e to the t times one, f of one,
that's this term, f of one is p, after a little algebra, you get the answer, p e to the t plus q. That's the
answer. As another example, let's go continuous again. Take the exponential distribution, that's just 'cause it's
easy for me. M X of t equals the integral over the real line e to the t x f of X d X, the integral only goes from
zero to infinity, because that's where the exponential is defined. We have the e to the t X right there, that's the
e to the t X. The e to the minus lambda X times lambda, well there's the lambda right here, here's the e to the
minus lambda X right there, I take the integral of that thing, and it's lambda over lambda minus t, if lambda is
greater than t. The reason lambda has to be greater than t is that the darn thing will equal infinity if that's not
the case. You'll get an issue.
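Both mgf examples are easy to sanity-check numerically. Here's a quick Python sketch (my own, not from the lecture): the Bernoulli mgf comes straight from the LOTUS sum over x = 0 and 1, and the exponential mgf is estimated by Monte Carlo using the standard library's `random.expovariate`.

```python
import math
import random

random.seed(1)

# Bernoulli(p): the LOTUS sum over x = 0, 1 should give M(t) = p*e^t + q.
p, q, t1 = 0.3, 0.7, 0.8
bern_mgf = math.exp(t1 * 0) * q + math.exp(t1 * 1) * p

# Exponential(lam): estimate E[e^{tX}] by simulation; the derivation above
# says M(t) = lam/(lam - t), provided t < lam.
lam, t2 = 2.0, 0.5
n = 200_000
exp_mgf_est = sum(math.exp(t2 * random.expovariate(lam)) for _ in range(n)) / n
# exp_mgf_est should land near lam/(lam - t2) = 2.0/1.5
```

Pushing t2 up toward lam makes the estimator's variance blow up, which is the numerical version of the "lambda has to be greater than t" warning above.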

Okay now why is the moment generating function important? Well, under certain technical conditions, which
I'm not going to get into, it turns out that the expected value of X to the kth power equals a function of the
moment generating function, wow. In fact that function's a little bit complicated, it looks like a Christmas tree.
It's the kth derivative with respect to t of the mgf, M X of t evaluated at t equals zero. And you gotta do things in
that order. Take the kth derivative of the mgf, then shove in t equals zero, and you're going to come up with the
expected value of X to the kth power. So, in other words, you can generate moments of X from the moment
generating function. Isn't it amazing why they named it that. So the moment generating function's got a lot of
other important uses, but in particular in identifying distributions and proving convergence in some things, we'll
talk about those a little bit later in the course. But for right now, the mgf generates moments. What a surprise.

Here's an example: let's look at the exponential distribution you might remember from a couple pages ago, that
the mgf is lambda over lambda minus t, let's take the expected value of that. Let's get the expected value of
the exponential distribution. Well, from the previous theorem, it's the first derivative of M X of t evaluated at t equals zero. The first derivative of lambda over lambda minus t, we can do this in our heads, turns out is lambda over, lambda minus t, quantity squared. Just use the chain rule. Evaluate that at t equals zero. So I shove in the t equals
zero, I get lambda over lambda minus zero squared which is one over lambda. It looked like a lot of work, but
this is actually a little less work than going and using L'Hospital's Rule and integration by parts, and all that. So,
this is actually a quicker way to get the expected value, in my opinion. If you want to get the expected value of
X squared, well just play the game again. Take the second derivative and that's just two times lambda over
lambda minus t cubed, again, we do the chain rule, very easy. Set t equals zero and plug in, and you get two
over lambda squared. And this immediately gives you the variance, expected value of X squared minus E of X
quantity squared blah, blah, blah, equals one over lambda squared. So that's an old result, and I think we
actually, we were able to do this a little bit more quickly this way.
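If you'd rather not differentiate by hand, you can approximate the derivatives of the exponential mgf with finite differences and watch the moments pop out. A small sketch (mine, not the lecture's):

```python
lam = 2.0

def M(t):
    # mgf of the exponential(lam) distribution, valid for t < lam
    return lam / (lam - t)

h = 1e-5
first_moment = (M(h) - M(-h)) / (2 * h)             # ~ M'(0) = 1/lam = 0.5
second_moment = (M(h) - 2 * M(0.0) + M(-h)) / h**2  # ~ M''(0) = 2/lam^2 = 0.5
variance = second_moment - first_moment**2          # ~ 1/lam^2 = 0.25
```

The central differences recover E[X], E[X squared], and hence the variance without any symbolic calculus at all.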

So, here's a summary of what we did in this lesson. Did lots of expectations, I just love using LOTUS. We'll be
using LOTUS all the time in this course. And it's especially useful in terms of simulation, you'll see why later.
Next time, let's suppose I know everything there is to know about the random variable, in this lesson, we just
looked at expected values, that's kind of small stuff. Suppose we know everything, we know the whole
distribution. Well, what happens if we square the thing or take the log? So, what I'm going to do now, in the
next lesson, we'll look at arbitrary functions of random variables, and these play huge roles in simulation.

Lesson 8: Functions of a Random Variable
Today I'll be looking at functions of a random variable as I continue the probability primer. Let's get into that.
The lesson overview for now is, well, what did we do last time? I looked at tons of moments, and moment
generating functions and I hope we had a lot of fun there. In this lesson, let's suppose I know everything about
a random variable, not only the moments, just everything. What can I say about functions of the random
variable? In particular, what's the distribution going to be looking like? And this has absolutely huge
implications in this course, especially in random variate generation.

Here's the problem in general. Let's suppose I have a random variable, X, and I completely know the
probability mass function or the probability density function. In either case, it's f of X. Let's look at the function
of X, H of X, I'll call that Y. We saw h of X notation when I used LOTUS in the last lesson. Let's let this arbitrary
function of X be denoted by Y and what I want now is I want g of Y, the PMF or PDF of Y. So, I've got the PMF
or PDF of X, now what's the PMF or PDF of that function of X. It's very important. Here's some examples. And,
take my word on this, that these work. It turns out, if X is normal, normal zero one, we'll talk about that
distribution later on, it so happens that if I take the square of that I get a chi-square distribution. Even if you
don't know what those are, if I square a normal, I get a chi-square, it's because of this lecture. Now, let's look at
a uniform distribution. Let's suppose I start with a nice, easy, uniform zero one distribution. We've actually seen
this. If I take the function negative one over Lambda, a natural log of that uniform, I get an exponential.
Amazing! So, I start with a uniform. I plug the uniform into this little equation here. Negative one over Lambda,
natural log of U and I miraculously get an exponential Lambda distribution. This has huge implications in
random variate generation.

So, let's start off with a very easy, discrete example. I'm going to let X denote the number of heads from two
coin tosses and I'm going to derive the PMF for this function of X: Y equals X cubed minus X. So anyway, let's look at the PMF of X first. X is the number of heads from two coin tosses, so it can equal zero, one, or two, with
corresponding probabilities, a quarter, a half, and a quarter. So, let's look at the corresponding values for Y. If X
equals zero, then Y equals X cubed minus X equals zero. If X equals one, then Y also equals zero. And if X
equals two, then Y equals X cubed minus X equals six. So, you can see, the values of X equals zero and one
map into Y equals zero and the value of X equals two maps into Y equals six. So, what does this imply? Let's
add up the probabilities. The probability the Y equals zero is the probability that X equals zero or one. Add
those up. A quarter plus a half equals three quarters. The only other possibility for Y, is Y equals six. The
probability of that is one quarter. I didn't have to do much thinking there. That corresponds to X equals 2. So, in
other words, the probability that Y equals zero is three quarters. The probability that Y equals six is one
quarter. That is my PMF of Y. Very, very easy.
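This pushing of probabilities from X over to Y is pure bookkeeping, so it's easy to code. A sketch (not from the lecture):

```python
# pmf of X = number of heads in two coin tosses
pmf_x = {0: 0.25, 1: 0.5, 2: 0.25}

# Push each x's probability onto y = x^3 - x, adding up any collisions
# (here x = 0 and x = 1 both land on y = 0).
pmf_y = {}
for x, p in pmf_x.items():
    y = x**3 - x
    pmf_y[y] = pmf_y.get(y, 0.0) + p

print(pmf_y)  # {0: 0.75, 6: 0.25}
```

The same loop works for any function of a discrete random variable, not just x cubed minus x.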

Let's do a continuous example. This one's a little bit more challenging. Now suppose X has PDF, because it's
continuous. F of X equals absolute value of X. This is going to be defined for X between negative one and one.
It looks like this, sort of a reverse triangle. It goes from minus one to one and you can see that if you take the
integral of this thing, you can almost see in your head that it integrates to one. It's a legitimate PDF. Given that
is my starting point, let's find the PDF of the function Y equals X squared. Okay, well, here's what you do.
Here's how you find these things. I'm going to find the CDF first, CDF of Y. Then, I'll take the derivative. So, this
is the usual way you do these problems. You find the CDF first, then you take the derivative. Here's the CDF.
Now, I want to do as little thinking as possible because I'm totally lazy, so let's just kinda do things by the book
here. The CDF of Y is capital G of Y and that is, by definition, the probability that Y is less than or equal to little
Y. That's just by definition, no thought whatsoever, okay? Again, in my quest not to do any thinking, let's just
plug in. What is Y? It's X squared, so that's the probability that X squared is less than or equal to Y. And, now,
let's remember that I only know stuff about X. I know everything there is to know about X. I don't know this X
squared stuff. So, how do I transform X squared to X? Well, you take the square root, so that's what this next
step is. The probability that X squared is less than or equal to Y is the probability that X is between the
negative square root of Y and plus square root of Y. We have to be a little careful when we take the square root
to remember that minus part. Okay, well, what is this now? By definition, it's the integral of F of X DX from
negative square root of Y to plus square root of Y and f of X is the absolute value of X. After a little algebra, you get
that result equals Y, and so, I can get the PDF of Y now by taking the derivative. The derivative of the CDF is
the PDF and even I can take the derivative of capital G of Y equals Y. That's just little G of Y equals one. And
what that means then, is that Y is uniform zero one because that is the PDF of a uniform. So, if you start out
with F of X equals absolute value of X, from negative one to one; I square it. Unbelievably, I get a uniform
distribution. And, maybe, I'll give you a homework assignment on that sometime. You'll see for yourself that it
works. Even if you use simulation, you'll get a nice uniform distribution.
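That simulation suggestion is worth taking up. Here's a sketch (my own code): sample X from f(x) = |x| by inverse transform (the piecewise inverse below is my derivation from that density), square it, and check that Y = X squared behaves like a Uniform(0,1).

```python
import math
import random

random.seed(2)

def sample_x():
    # Inverse-transform sample from f(x) = |x| on (-1, 1), whose cdf is
    # F(x) = (1 - x^2)/2 for x < 0 and 1/2 + x^2/2 for x >= 0.
    u = random.random()
    return -math.sqrt(1 - 2 * u) if u < 0.5 else math.sqrt(2 * u - 1)

ys = [sample_x() ** 2 for _ in range(100_000)]
mean_y = sum(ys) / len(ys)                                  # Uniform(0,1) mean is 1/2
frac_below_quarter = sum(y <= 0.25 for y in ys) / len(ys)   # should be ~0.25
```

A histogram of `ys` comes out flat on (0, 1), just as the derivation promises.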

So, this phenomenon is related to what's called the Inverse Transform theorem. This is a fantastic theorem
that's used all the time in simulation. You'll see why in a little while. Let's suppose that X is a continuous random variable and the CDF is denoted capital F of X, as usual. Unbelievably and amazingly and stupendously, consider capital F of capital X. Capital F of little x is a CDF; that's just a function of real numbers, but capital F of capital X is a nasty random variable. But, we don't have any
problems with that. It's just some random variable. If I plug capital X into its own CDF, amazingly, I get a
uniform distribution. That's what this theorem says. Here's the proof and I'll just do this very quickly. We're
going to see the proof again, so don't worry about it if these steps are scary. We'll let capital Y equal Capital F
of capital X, so it's just a nasty function of X. Then the CDF of Y is, let's do as little thought as possible, the
probability that Y is less than or equal to little Y, that is by definition. Now, I'm lazy. I don't know nothing about Y.
So, let's plug in what Y equals; capital F of capital X. That's all that we've done here. Now, I only know about X,
so what I'm going to do is, I'll take the inverse of both sides. So, in this step right here, I take the capital F
inverse of capital F on both sides; the F inverse of both sides gives me the next step. Probability that X is less
than or equal to F inverse of Y. Where did the capital F go? Well, when I did the inverse of F times F, I got a
cancellation, and so, that's why I'm left with a plain old X then. Alright, well, this is going to happen a second
time. Let's do the probability that X is less than or equal to the F inverse of Y. By definition, that's capital F of F
inverse of Y. The F and the F inverse cancel again and I get Y. If I take the derivative, just like before, I get the
uniform distribution. So, that's the proof. Very, very, very nice. And, we're going to encounter this a couple other
times very soon. It's just tremendously important to generate random variables during a simulation.
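You can watch the theorem in action with a quick simulation (a sketch of mine, not the lecture's): draw exponentials with the standard library's `random.expovariate`, plug each draw into its own cdf, and check that the results look Uniform(0,1).

```python
import math
import random

random.seed(3)

lam = 2.0
n = 100_000
# Draw X ~ exponential(lam), then plug each draw into its own cdf
# F(x) = 1 - e^(-lam x); the theorem says the results are Uniform(0,1).
us = [1 - math.exp(-lam * random.expovariate(lam)) for _ in range(n)]
mean_u = sum(us) / n                           # Uniform(0,1) mean is 1/2
frac_below_03 = sum(u <= 0.3 for u in us) / n  # should be close to 0.3
```

The same experiment works for any continuous distribution whose cdf you can write down.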

Let's do a simple example now, to show how to generate random variables. In particular, the exponential
distribution. I'm going to use the Inverse Transform Theorem to do this. So, let's start out real easy. Let's
suppose that X is exponential, and we know from a previous lesson, that the CDF is capital F of little X that
equals one minus E to the minus Lambda X for X greater than or equal to zero. So, now, I'm going to use the
Inverse Transform Theorem. I plug capital X into its own CDF, capital F of capital X equals one minus E to
the minus Lambda capital X. The Inverse Transform Theorem says that, that thing, messy as it may be, is
uniform. Thank you Inverse Transform Theorem. Since it's uniform, let's set it equal to U. And, supposing I can
generate a uniform from Excel or some place. So, set capital F of X equal to U and solve for X. So that's, I'm
going to take the inverse, sort of, I'm going to solve for X. After a little bit of algebra, we'll step through later on
step by step. After a little bit of algebra, X equals negative one over Lambda natural log of one minus U. And
so, since X started out as exponential, that means that negative one over Lambda natural log of one minus U
is, itself, exponential. So, if you can give me a uniform, I can give you an exponential. It's brilliant! So, for
instance, if Lambda equals two and U equals .27, I plug in Lambda equals two, U equals .27 right there and I
get X equals .157. That's my exponential random variate, or realization, as they call it.
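Here's that recipe as code (a sketch, not the lecture's): the formula X = -ln(1 - U)/lambda reproduces the .157 example, and applied to a stream of uniforms it produces exponential variates with the right mean.

```python
import math
import random

random.seed(4)

lam = 2.0

def expo_inverse_transform(u):
    # Solve u = 1 - e^(-lam x) for x, as in the lesson.
    return -math.log(1 - u) / lam

x_example = expo_inverse_transform(0.27)  # ~0.157, the value from the lecture
xs = [expo_inverse_transform(random.random()) for _ in range(100_000)]
mean_x = sum(xs) / len(xs)                # should be near E[X] = 1/lam = 0.5
```

Since 1 - U is itself Uniform(0,1), many texts use -ln(U)/lambda instead; either version works.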

Here's another example. This is the Weibull distribution. I'm going to just walk you through this very quickly.
Suppose X has the Weibull distribution; it turns out that mess is the CDF. One minus E to the minus Lambda X,
all that raised to the Beta power. The Weibull distribution is a complicated distribution used in reliability theory
and you can see that if I set Beta equals one, the exponential is actually a special case. So, if I set capital F of
X equal to U, just like before, and solve for X, you can show, do this during the commercials tonight, that X
equals one over Lambda, times negative natural log of one minus U, all that raised to the one over Beta power. You can do
that yourself. That's like a little exercise for you. And, what I'd like you to do, if you have a little extra time, I'll
make this an assignment at some point. Pick your favorite values of Lambda and Beta and generate a bunch
of X's. You can then make a histogram and you've got yourself a Weibull distribution. One nice thing is to look
at different values of Lambda and Beta and see what the thing looks like. It turns out that, if Beta is less than
one or if Beta is greater than one, the thing looks very interesting in comparison. I'm going to do a quick bonus
here. I'm going to walk us through this very, very quickly because it's already been a tough lesson, but if you
are really, really stout of heart, let's look at this bonus result. And, I'm not really going to hold you responsible,
but I'd like you to see it. In fact, if you don't want to look at it, go ahead, go to the summary, but if you've got a
couple minutes, take a look. This is just another way to get the PDF of Y equals H of X for some very nice,
continuous function of H. This looks a little like LOTUS and we'll be talking about LOTUS on the next page. So,
here's another way to get the PDF of capital Y equals H of X for a nice continuous function H. It avoids a step or two. It looks a little more complicated, but, it turns out, it's kinda more direct. I'll just step you through this very quickly. The CDF of Y is, by definition, the probability that capital Y is less than or equal to little y. I'm lazy, so I plug in
Y equals H of X and then I take the inverse of both sides. If Y is nice and continuous, I can take the inverse
and that gives me the probability that X is less than or equal to H inverse of Y. By the chain rule, and since the
PDF has to be greater than zero, I take the derivative of this thing, right there. Little F of Y is the PDF of Y. I
take the derivative of the CDF and, now, I use the chain rule; X equals H inverse of Y. Here's the derivative
from the chain rule, looking awful, but that's what it is. The fact that the PDF is positive means that I take the
absolute value of that term. So, I'll use this result to prove LOTUS. So, here's the expected value of Y,
remember, Y equals H of X; the expected value of Y is exactly what LOTUS gives. This equals, by definition, so I haven't invoked LOTUS, I'm going to prove it here. By definition, this is the integral over the real line of Y times
the PDF of Y. That's this step right here. I plug in, using the chain rule, an expression from the previous page,
that awful mess from the last page. That's what that mess is. And, now, if you're willing to believe that the
absolute value mess times the D of Y is merely expressed as D of H inverse of Y. And, it kind of makes sense
because the DY's cancel in the previous line. And then, I set X equal to H inverse of Y. You get this result,
which is, in fact, LOTUS. Absolutely amazing! Here's a summary of what we just did. This is a long, difficult
lesson, but we got through it just fine. We looked at distributions of functions of random variables. And,
perhaps, the most important thing for this class, is that along the way we used the Inverse Transform Theorem
to turn uniforms, magically, into arbitrary distributions. We looked at the exponential and Weibull. You give me a
uniform, I'll give you any distribution you want. And this is extremely useful in simulation all the time. So, next
time, if you liked one random variable, you're going to love two. We'll generalize a lot of the results that we've
done over the last couple lessons into bivariate distributions. Random variables with two components.
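Before moving on, the Weibull exercise suggested in this lesson can be sketched the same way (my code, using the CDF F(x) = 1 - exp(-(lambda x)^beta) as given above); setting Beta equal to one recovers the exponential generator.

```python
import math
import random

random.seed(5)

lam, beta = 1.0, 2.0

def weibull_inverse_transform(u):
    # Solve u = 1 - exp(-(lam*x)**beta) for x, as in the lesson.
    return (-math.log(1 - u)) ** (1 / beta) / lam

xs = [weibull_inverse_transform(random.random()) for _ in range(100_000)]
mean_x = sum(xs) / len(xs)
# Under this parameterization E[X] = Gamma(1 + 1/beta)/lam, which is
# Gamma(1.5) = sqrt(pi)/2, roughly 0.886, for lam = 1 and beta = 2.
```

Histogramming `xs` for beta below one versus beta above one shows the very different shapes the lecture alludes to.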

Lesson 9: Jointly Distributed Random Variables
In this lesson, we'll look at two dimensional random variables that may or may not be correlated with each
other. Think about height and weight, for instance -- or, in simulation, consecutive, correlated waiting times: if
we have customers in a line, obviously, the waiting times between customers are going to be correlated with
each other.

Let's consider two random variables for now, interacting together, like height and weight. So let's look at the
definition of the joint cdf of X and Y, two random variables, simultaneously: F(x, y) = P(X ≤ x and Y ≤ y).

The marginal cdf of X is the cdf of X just by itself, so marginal means, just by itself. So you can think of F of
(x) as the probability that X is less than or equal to x: if I let y equal infinity, in the joint cdf definition, that's
the probability that X is less than or equal to x.

So that's the same thing as X all by itself. I'm using the subscript capital X to remind us it's just X all by itself.
Similarly, the marginal cdf of Y all by itself, capital F of subscript y, that's the joint cdf evaluated, X equals
infinity, and little y.

If X and Y are discrete random variables, then the joint pmf (probability mass function) of X and Y is f(x, y) = P(X = x and Y = y).

That completely defines the joint probability mass function of X and Y. In fact, if you add up all those possible f of (x, y) values, you'll get one. Just as we did before for the cdf, we can calculate the marginal pmf of x, and that's just
the pmf of x all by itself, f(x) equals the probability that X = x. The way you get that is you take the joint pmf,
and just add up all the possible y values. The marginal pmf of X is f_X(x) = Σ_y f(x, y).

Similarly, the marginal pmf of Y is f_Y(y) = Σ_x f(x, y).

This table gives the joint probability mass function, f of (x, y), for a simple example:

The possible values of X are 2, 3, and 4, while the possible values of Y are 4 and 6. The numbers in the
interior of the table give the joint pmfs. So for instance, the probability that X = 2 and Y = 4 is 0.3. By
definition, these interior entries sum to 1.
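Here's that marginal bookkeeping as code. The lecture's table didn't survive in this transcript, so the interior entries below for X = 3 and X = 4 are hypothetical values I chose to match the marginals quoted in the surrounding text; only the X = 2 row (0.3 and 0.1) is given explicitly.

```python
# Joint pmf as a dict; the (3, *) and (4, *) entries are made-up values
# consistent with the marginals discussed in the lecture.
joint = {(2, 4): 0.3, (2, 6): 0.1,
         (3, 4): 0.2, (3, 6): 0.2,
         (4, 4): 0.1, (4, 6): 0.1}

f_x, f_y = {}, {}
for (x, y), p in joint.items():
    f_x[x] = f_x.get(x, 0.0) + p   # sum over y to marginalize out y
    f_y[y] = f_y.get(y, 0.0) + p   # sum over x to marginalize out x
# f_x is about {2: 0.4, 3: 0.4, 4: 0.2}; f_y is about {4: 0.6, 6: 0.4}
```

Summing along a row or a column of the table is exactly these two accumulations.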

If I were to sum all the y values for say, X = 2, I would get 0.3 + 0.1 = 0.4. That corresponds to the probability
that X = 2. Similarly, I get x equals three, x equals four, and so that completely defines the probability mass
function for x; namely, it's point four, point four, and point two, for x equals two, three, and four,
respectively. And that adds up to one, because it's a probability mass function. Same thing with y, the
probability that y equals four, is point six, the probability that y equals six, is point four, I just added up all the x
values along the row. So very, very nice. Here's the continuous version of these definitions. If X and Y are
continuous, then the joint pdf, probability density function, of X and Y, is given by this nasty looking thing, little f
of (x, y) equals the second partial derivative of capital F of (x, y). So just take my word for that and note that if I
take the double integral of f of (x, y), dxdy, that equals one, that corresponds to adding up in two dimensions all
the x and y probabilities in the discrete case. I can get the marginals of X and Y. The marginal pdf of X is given
by f of (x) equals the integral, over all values of y of f of (x, y), that corresponds to adding up, in the discrete
case, all the f of (x, y) values, over all the y's, to get the marginal pmf of x. So here's the marginal pdf of x that's
just the integral over all y of f of (x, y). The marginal pdf of y, similarly, is integral of x of f of (x, y), dx. So I'm
integrating out the x's, and I'm just left with the function of the y's. Here's an example, and I'll be using this
example for a while. Let's suppose that the joint pdf is given by f of (x, y) equals 21 over four, x squared, y. And
this is for all y values between x squared and one. This is a legitimate pdf, if I take the double integral over all
the (x, y) values satisfying x squared less than y less than one, this thing does, in fact, integrate to one. It's a
little bit tricky, because of these limits, the limits are x squared, less than y, less than one. I call these funny
limits, I'll be using that expression occasionally. Funny, as in strange, not ha-ha. So this is the joint pdf, the darn
thing does integrate to one. Here's the marginal. Marginal of x is given by integral over all values of y of f (x, y)
and here's where we have to be a little careful, the limits of y are from x squared to one. So let's slavishly plug
that in. Integral from x squared to one of f of (x, y) is 21 over four, x squared, y.

After the smoke clears, you get 21 over eight, x squared, times 1 minus x to the fourth. Now, that is for x
between negative one and one. Where did the y go? Well I integrated it out, it's gone. So, after I get rid of the y,
I'm just left with the limits of x squared having to be less than one, and that's the same thing as negative one
less than x, less than one. That's how I got that.
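Those funny limits are easy to get wrong, so here's a numerical double-check (my sketch) using midpoint Riemann sums: the closed form 21/8 x squared (1 - x to the fourth) matches the inner integral of the joint pdf over y, and it integrates to one over (-1, 1).

```python
def marginal_x(x):
    # closed-form marginal derived in the lecture
    return 21 / 8 * x**2 * (1 - x**4)

# (1) At x = 0.5, integrate the joint pdf (21/4) x^2 y over x^2 < y < 1
#     by the midpoint rule; should match marginal_x(0.5).
x0, m = 0.5, 10_000
h = (1 - x0**2) / m
inner = sum(21 / 4 * x0**2 * (x0**2 + (j + 0.5) * h) * h for j in range(m))

# (2) The marginal should integrate to 1 over -1 < x < 1.
n = 20_000
w = 2 / n
total = sum(marginal_x(-1 + (i + 0.5) * w) * w for i in range(n))
```

Changing the inner limits to anything other than x squared to one immediately breaks both checks, which is the whole point of respecting funny limits.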

The marginal for y, similarly, f of (y) equals integral of f of (x, y), dx. Now, if x squared is less than y, is less than
one, well, that means that x runs from negative square root of y to plus square root of y. It looks a little
bit tricky, but that's what it is. And after the smoke clears there, I'm left with 7 halves, y to the 5 halves. And
there are no x's anymore because this is the marginal y's and I've integrated the x's out. That just means that y
is between zero and one. Because I got rid of all of the x's in my limits. That's where I get zero less than y, less
than one. The nice thing is, I can integrate this pdf, 7 halves y to the 5 halves, in my head between zero and
one, and the darn thing integrates to one, just like I promise. It's a legitimate pdf. Here's the definition. X and Y
are independent random variables if f of (x, y) equals f of (x) times f of (y) for all x and y. This corresponds to
the same definition of independence we had way, way long ago when we started our review. Here's the
theorem, X and Y are independent if you can write their joint pdf as a function of x times a function of y. I'm
calling that a(x) times b(y). Just for any old a(x) and b(y). This is all fine and good, but sometimes it looks like
you can write it as a of (x) times b of (y), but if the darn thing has funny limits, that's why I keep on saying the
darn thing, then these guys are not independent. So if you can write it as a of (x) times b of (y), that's great, but
don't forget the funny limits if they're there. And I'll give examples in a second, to show you what I mean. So,
funny limits cause a lot of trouble. In the previous example that I did, again, I'll show you that in a minute, we
had funny limits. So, here are a bunch of examples. Let's let f of (x, y) equal cxy, for zero less than x, less than
two, zero less than y, less than three. What's nice here is that, look at that, a(x) times b(y), very simple, no
funny limits. These guys are independent, that's an easy one. How about this? This is our previous example,
from a couple pages ago. Look at that, I can write a(x) times b(y), but hold on, funny limits. So that means that
x and y are not independent, too bad. And finally, here's f of (x, y) equals c over x plus y. The reason I say c, is
because I'm too lazy to figure out what the appropriate constant is going to be. C over x plus y, that you
cannot write as a function of a(x) times b(y), so too bad, they're not independent. Notice that I didn't have funny
limits here, but so what, I couldn't write that as a function of a(x) times b(y) so, not independent. Too bad. Let's
give another definition, the conditional probability density function or probability mass function of Y, given X
equals little x, is just like, when we were doing the initial probability review, it's f of (x, y) over f of (x), and that
assumes that f of (x) is greater than zero. This is a legitimate probability mass or probability density function. For
instance, in the continuous case, the thing integrates to one, just like it's supposed to. So if I integrate over all
values of y, that will integrate to one for any given x. And I'll motivate that a little more in a second. Here's an
example, if f of (x, y) equals our old friend, 21 over four, x squared, y, for x squared less than y, less than one,
then f of (y) given x, is, by definition, f of (x, y) over f of (x), I plug and chug all this mess here, this is 21 over
four, x squared, y, that's my f of (x, y), here's f of (x), which we calculated a couple pages ago. I do the algebra,
and here's my answer. Two, y, over one minus x to the fourth. And this is for y between x squared, and one. I
did nothing to get rid of the x value, I mean, those x's are there because I told you, I've got this information
about x, so of course it should be included in the conditional pdf. These conditional distributions are very, very
important. If X and Y are independent, then it turns out that f of (y), given x, equals f of (y) for all of x and y. So
what this means, is that information about x, contributes nothing to the distribution of y. So this might be one of
these cases where, what's the IBM stock price today, given the fact that it's thirty degrees on Mars? Well, the
information x equals 30 degrees has no impact, at all, on the IBM stock price. Or is that what we're told to
believe? So here's a quick proof. F of (y), given x equals, by definition, f of (x, y) over f of (x). Now, if x and y are
independent, then f of (x, y) equals f of (x) times f of (y), by definition of independence. And then, the f of (x)'s
on top and bottom cancel, and I'm done. So that's the end of the proof, very simple. This is going to lead me
now to the conditional expectation of Y given X equals little x. If you're willing to believe that I can give you the
conditional pdf or pmf of Y given X, then I can take the expected value of that updated information of Y. And
again, I'll motivate in a second. Here's the definition, the conditional expectation of Y given X equals little x is in
the discrete case, summation of y times f of (y) given x, in the continuous case, integral, of y times f of (y) given
x, and that's taken over dy. X is fixed, and we integrate, or sum, the y's. Now, this is a very, very important
concept. Because what it allows us to do is to update our information about y, after we've heard about x. So for
example, let's suppose I've got a guy in the room and he's seven feet tall, and I'm interested in his expected
weight. The information that he's seven feet tall, is going to give me an expected value for his weight, an
expected value of y, given that x equals seven. That's probably going to be greater than the expected weight of
some totally random guy, with no information about his height. Now, that would be the expected value of y,
that's a totally random guy from the population, no knowledge at all about his height; that's going to give me a
different expected value for weight than somebody whose height I already know. Here's our old continuous
example, that we've seen nine thousand times already. Let's let f of (x, y) equal 21 over four, x squared, y, x
squared less than y, less than one. Here's the conditional expected value, by definition, it equals the integral
over the real line of y times f of (y) given x. I plug in f of (y) given x, that's two y squared over one minus x to
the fourth times y. I integrate y between x squared and one, and after the algebra, here's my answer.
Two-thirds, one minus x to the sixth over 1 minus x to the fourth. And this is actually for x between negative
one and one. Notice it's perfectly okay to have an x here, because in fact, the expected value of y is a function
of little x. The expected weight is a function of the height. So it's perfectly okay to have that nasty looking
expression. So here's a summary of what we just did in this lesson. We did a lot of stuff with joint pmf's and
pdf's, we looked at independents, we looked at conditional distributions, we even looked at conditional
expectation, that's a lot of stuff. Next time, I'm going to continue the fun, with conditional expectation.
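Before moving on, that last conditional-expectation computation can be sanity-checked numerically. This sketch (mine) integrates y times the conditional pdf f(y|x) = 2y/(1 - x^4) by the midpoint rule and compares it with the closed form; at x = 0.5 both give 0.7 exactly.

```python
def cond_mean_formula(x):
    # E[Y | X = x] = (2/3)(1 - x^6)/(1 - x^4), from the lesson
    return 2 / 3 * (1 - x**6) / (1 - x**4)

def cond_mean_numeric(x, m=20_000):
    # midpoint-rule integral of y * f(y|x) over x^2 < y < 1,
    # with f(y|x) = 2y / (1 - x^4)
    h = (1 - x**2) / m
    total = 0.0
    for j in range(m):
        y = x**2 + (j + 0.5) * h
        total += y * (2 * y / (1 - x**4)) * h
    return total
```

Plotting `cond_mean_formula` over (-1, 1) shows how the expected Y shifts as the known x changes, the height-and-weight idea in miniature.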

OPTIONAL: Lesson 10: Conditional Distributions/Expectation
I'll be doing a lesson on Conditional Expectation now, and this may be the toughest lesson for the foreseeable
future. This will be a lot of fun, in other words. Here's the overview. Last time we talked about joint PMF's and
PDF's, Independence, Conditional Distributions, and we touched on Conditional Expectation. Here we're going
to really get into Conditional Expectation. I'm going to use it for several very, very cool applications. It's a tough
lesson, don't panic. We're going to get through this together. Here's our old definition of the Conditional
Expectation of a random variable Y given that X equals little x. In other words, what's a guy's expected weight
given a particular height? The definition is the expected value of Y given that X equals little x is summation
over all values of Y of y times f of y given x in the discrete case and integral, the same thing, Y times f of y
given x dy in the continuous case. We take the integral and the summation over Y. The example we did last
time of our old buddy 21 over four x squared y. It turns out that the conditional PDF is f of y given x equals f of
xy over f of x, that's by definition, equals blah, blah, blah, blah, blah. From last time, two y over one minus x to
the fourth. That was the conditional PDF of y given x, and then the conditional expected value of y given x was
the integral of y times f of y given x dy. That equals y times the thing up top, y times that. That's 2y squared
over one minus x to the fourth. Do the algebra, that's my Conditional Expectation. Okay, a little messy but not
too awful. So that was just that last example that we ended the previous lesson with. Now, toughest theorem of
this part of the course, Double Trouble. We're going to look at what's called Double Expectation. Here's the
idea: I just got you the expected value of y given x, what if we average all those expected values over all the
possible values of x's. So the idea is to take the average expected value of all the conditional expected values,
and what that's going to do, it turns out amazingly that's going to give me the overall population average. You'll
see it's kind of a miracle. Now, it turns out this is a very useful tool when calculating certain overall averages
like we're about to do. It's also a good way to get around some other problems. I don't want you to panic. Let's
look at this thing and then we'll do some applications. So here we go, here's the Theorem: Double
Expectations. It's called Double Expectations because I have two expectations here. Let's look at the thing on
the inside. The expected value of y given x. Now we just spent a little time looking at the expected value of y
given little x, so let's pretend that that's a little x right now. We know how to calculate that. We've just done an
example or two. So let's calculate that and instead of the little x let's just insert a capital X. So we know how to
calculate that. That's a nasty looking random variable. So if a couple pages ago I had that, what was it two
thirds one minus x to the sixth over one minus x to the fourth. If I put capital x's there, that would have just
been a nasty, awful random variable. So no problem, call it h of x, and we've got this thing called Lotus. So that's
the expected value of a nasty h of x, that's all that is. And we can take the expected value of that by using
Lotus. Amazingly it equals the expected value of y. So if I take the expected value averaged over all x's, I end
up with the expected value of y. It looks awful, but it's a good way to sometimes calculate the expected value of
y in a sneaky, backhanded manner. What I'm going to do, I'm going to show you the proof of this thing. And this
is the worst, most horrible, awful proof ever, but it allows me to define all the terms and walk you through things
pretty carefully. So the expected value of the expected value of y given capital x, by the Unconscious
Statistician, Lotus, if you interpret the expected value of y given capital x is a nasty h of x function, there it is.
So by Lotus, this is the expected value of capital y given little x times f of x dx. No problem. Now this thing
equals by definition of the expected value of y given little x. that's all it is, horrible as it may be. So now I've got
two integrals there. All I'm going to do in the next line is, this is just a clean up line, I just got rid of the big
parentheses, scrambled up some stuff, I've made a dx dy, all the terms are there. Everything's there, I just
cleaned things up a little bit. And then, what I'm going to do is I'm going to separate things. I'm just going to do
all the x's first, take the y outside in the second integral. And you'll see that this thing in the inside is going to be
very useful. So all I've done is I've moved things around a little bit. Let's look at that. What is the integral of f of
x y dx? Well we learned that it's just f of y. That's the marginal of y, so I'm left with integral of y times f of y dy
and what do ya know it's the expected value of y. I am so smart. So that proves it. Now, that doesn't mean that
this is not a nasty lookin' thing, but it's at least true.

Let's apply this to our old example. So suppose f of x y equals our friend, and by previous examples we
already know f of x, f of y, and the expected value of y given x. Now what I'm going to do, I'm going to find the
expected value of y, I'm going to do it in two ways because it's Christmas in July. So here's solution number
one. This is the old boring way that my cat can do. So the expected value of y, the old boring way, is just
integral over the real line of y times f of y. This is the integral, remember y goes from zero to one. Y times f of y,
so y times seven halves y to the five halves, that's where I get the seven halves right here, and that equals
seven ninths. Okay, ho hum, very boring, but that's the answer, and really that's correct. Let's do it now the
exciting way. So the expected value of y equals by Double Expectation. This is the definition of Lotus, the
Double Expectation: The integral over the real line of expected value of y given little x, f of x dx. That's by
Lotus. Okay it just so happens I know that x goes from negative one to one. I know the expected value of y
given little x. I know f of x, this is all from previous stuff. It looks awful, but there's a lot of cancellation. I'm just
left with a little easy little polynomial of x. After I do the algebra, woah, what a surprise, seven ninths. And
notice how these little fellas match up like they're supposed to. That's why they call it a theorem. So, I admit,
solution number two took a little more algebra, but my purpose was just to show you that I could do this thing in
two ways.
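
As an added sanity check (this sketch is not from the lecture), here is a small Python program that computes E[Y] both ways numerically with a midpoint Riemann sum, using the marginal f(y) = 7/2 y^(5/2), the conditional mean E[Y|x] = 2/3 (1 - x^6)/(1 - x^4), and the marginal f(x) = (21/8) x^2 (1 - x^4) derived from the joint PDF in the earlier lesson:

```python
# Numerical check of Double Expectation for the running example
# f(x, y) = (21/4) x^2 y on the support -1 <= x <= 1, x^2 <= y <= 1.
# Route 1 uses the marginal f(y) = (7/2) y^(5/2); route 2 averages
# E[Y | x] = (2/3)(1 - x^6)/(1 - x^4) against f(x) = (21/8) x^2 (1 - x^4).
# Both routes should land on E[Y] = 7/9.

def integrate(g, a, b, n=20000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# Route 1: the "old boring way" via the marginal of Y.
e_y_marginal = integrate(lambda y: y * 3.5 * y ** 2.5, 0.0, 1.0)

# Route 2: Double Expectation, integrating E[Y | x] * f(x) over x.
def weighted_cond_mean(x):
    return (2 / 3) * (1 - x ** 6) / (1 - x ** 4) * (21 / 8) * x ** 2 * (1 - x ** 4)

e_y_double = integrate(weighted_cond_mean, -1.0, 1.0)

print(e_y_marginal, e_y_double)   # both approximately 7/9 = 0.7778
```

Note that the midpoint rule never evaluates exactly at x = plus or minus one, so the 0/0 in the conditional-mean formula is never actually hit.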

Okay, so what I'm going to do now is I'm going to start applying Double Expectation. Here's a cutesy way to
calculate the mean of a Geometric Distribution. What is a Geometric? Let me remind you. Let y be a Geometric
p, and what I mean by that is y could be the number of coin flips before heads appears, and we'll let this be a
biased coin so maybe the probability of heads is p. So what I'm doing is I'm repeating an experiment. I'm
repeating Bernoulli Trials many, many times until I get a success. So from baby probability class, or from the
review that I'll do later, we know that the Probability Mass function of y is, f of y is the probability that y equals
little y is q to the y minus one times p. Remember q is just one minus p. What this means is I get y minus one
failures in a row, and then on my yth trial ('yth,' that's a great word) I get a success. So I get y minus one trials,
then a success. The probability of that is q to the y minus one. Failure, failure, failure, failure, failure. Y minus one
times, and then a success with probability p on the yth trial. And so the overall probability that capital Y equals little y
is q to the y minus one times p. The old-fashioned, boring way to calculate the mean is the expected value of y
equals sum over all y of y times f of y. I'll just simplify that, that's the sum from y equals one to infinity of y times
q to the y minus one times p, I just plugged in the PMF. Now that equals, after a lot of algebra which I'm not
going to go through, so just believe me on this one, that equals one over p. In fact, it's really too bad that I don't
have the opportunity to show the algebra, because I really wanna use the step where I take a p outside. If you
believe me, the expected value of y is one over p, and it makes sense. I'll explain why later when we do our
probability distributions review.

My point is, let's do Double Expectation here, and I'm going to use what's called a Standard One Step
Conditioning Argument while I do Double Expectations. So let's define the random variable x equal to one if the
first flip is a heads, and x equals zero otherwise. So let's pretend I have knowledge of the first flip. And I don't
really have knowledge, but I do know that it can either be heads or tails. So x is going to either equal one with
probability p or zero with probability one minus p. Based on the result of x in the first step, here's what we
have. This is a big messy thing but let's go through each step. Expected value of y equals expected value of
the expected value of y given x by Double Expectation. By Lotus, this is the sum over all x of the expected
value of y given x times f of x. So this is also using the Law of Total Probability, or some people call it the
Standard Conditioning Argument, but this is basically Lotus. The expected value of y given x f of x, sum over
all x values. Well what are the x values? X can only equal zero, or one. So the expected value of y given x
equals zero times probability that x equals zero plus the expected value of y given x equals one times
probability that x equals one. Now, here's where we do that one-step argument. This is so nice, if x equals
zero, that means I failed. And it's just like I'm starting all over again. Because I didn't get heads coming up on
my first trial, it's like I'm starting the experiment all over again. So the expected value of y given x equals zero,
whatever that is, it's one plus the expected value of starting all over again, because the one comes from my
initial failure, and the expected value of y comes from essentially starting the experiment over again. So the
expected value of y given x equals zero is one plus E of the original y times the probability that x equals zero
which is q, one minus p. What's the expected value of y given that x equals one? If x equals one, it means that
I was successful on the first trial; I won on the first trial. So that means that
the expected value of y given x equals one is one. And that happened with probability p. Okay so that's why I
put the question why. Now I've got the expected value of y on the left, an expected value of y on the right. I
solved, and I get one over p. Vavoom. Very good.
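
Here's a quick Monte Carlo sketch (an added illustration, not part of the lecture) that flips a biased coin until the first heads and confirms the mean really is one over p:

```python
# Estimate E[Y] for Y ~ Geometric(p): the number of flips up to and
# including the first heads, with P(heads) = p on each flip.
import random

def geometric(p, rng):
    flips = 1
    while rng.random() >= p:   # a "failure" flip; keep going
        flips += 1
    return flips

rng = random.Random(12345)
p = 0.25
n = 100_000
mean = sum(geometric(p, rng) for _ in range(n)) / n
print(mean)   # close to 1/p = 4.0
```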

Let's do another trick. Let A be an arbitrary event. I'm going to now define y equals one if A occurs and y
equals zero if A doesn't occur. This is the same, I guess I could’ve used x's before, but it turns out since I'm
ultimately calculating the expected value of y I'm gonna use y's here instead of x's. So let's let A be that
arbitrary event y equals one if it occurs, y equals zero otherwise. So the expected value of y is given by the
sum over all y of y times f of y by definition. And y can only equal zero or one; if I plug in zero I get nothing. If I plug
in one, I get one times f of one. So this is the probability that y equals one, which is the
probability that A occurs. So the expected value of y equals the probability of A. And what this means in the big
picture, if I had an indicator function which is what y is, one if A occurs, zero if it doesn't, then the expected
value of an indicator function is the probability of the corresponding event. That's all it means in high level
words. Expected value of y is the probability of A. Let's do a conditioning version of this. So now, for any
random variable x, this is why I'm using x's here, the expected value of y, this indicator event, given that x
equals little x is, same reasoning as before, it's the sum over all y of y times f of y given x instead of f of y as
above. It's f of y given x, so this equals by the same reasoning as before the probability that y equals one given
x equals little x. And this is by definition the probability of the event occurring given that x equals little x. Okay
so I kinda want you to take note of this result, because it has a lot of application. And in fact, you may not have
learned this in your baby probability class, that's why this is kind of a bonus section. So, here's an implication.
The probability of A equals the expected value of y, because by definition that's what it is, equals the expected
value of the expected value of y given x, that's our Double Expectation, equals by Lotus the expected value of
y given x equals little x. And I'm using, because I'm lazy I'm using this generalized version so that I don't have
to write summations or integrals together. And then this equals, by the last thing on the previous page, the
probability of A given x equals little x. Very, very nice. Here's an example theorem of an application. And it
looks like it might be coming out of nowhere but in fact it's not. If x and y are independent, continuous random
variables, then believe it or not this thing above implies that the probability that y is less than x, is the integral of
the probability that y is less than little x times f of x dx. Now yikes, how did I get that from the above. It turns out
not difficult. The proof follows from the above result if we let A, the event A, equal y less than x. Given x equals
little x, translates to y less than x. See how that works? And this dF of x is just, remember, my shorthand
for f of x dx. So this is a very nice example theorem. If you have two random variables, you can figure out the
probability that one is less than the other using that nice little equation.

Here's an example. Let's suppose that x is exponential Mu, and y is exponential Lambda; these are
independent random variables. And this type of example comes up all the time in Queuing Theory and other
places. Then the probability that y is less than x equals, here's the example theorem I just showed you, now I'm
going to do a plug and chug. So x runs between zero and infinity, because x is exponential Mu. The probability
that y is less than little x, that's just the CDF of y evaluated at little x, which is one minus e to the minus Lambda x. So then, f of x, the
next line, is just Mu times e to the minus Mu x. And that presents a little bit of an algebra problem, and after the
algebra is carried out we get Lambda over Lambda plus Mu. So the probability that we get y is less than x
equals this very simple expression: Lambda over Lambda plus Mu. And it turns out there is a very logical
reason, x and y correspond to arrivals from a Poisson Process. And Mu and Lambda are the arrival rates. So
if, x might be say female arrivals, and y could be male arrivals to a store, and if females are coming in at three
per hour and males are coming in at let's say nine per hour, then the probability that a male will show up before
a female is going to be nine over 12. Nine over nine plus three. So you can see it makes intuitive sense. If you
didn't understand what I just said, don't worry about it we'll talk a little bit more about this when we talk about
Poisson Processes later on.
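
A little Monte Carlo version of that store example (an added sketch, not from the lecture): with females arriving at rate 3 per hour and males at rate 9 per hour, the fraction of runs where the male shows up first should be about nine over twelve.

```python
# Check P(Y < X) = lam / (lam + mu) for independent exponentials:
# X ~ Exp(mu) (females, 3 per hour), Y ~ Exp(lam) (males, 9 per hour).
import random

rng = random.Random(2024)
mu, lam = 3.0, 9.0
n = 200_000
wins = sum(rng.expovariate(lam) < rng.expovariate(mu) for _ in range(n))
phat = wins / n
print(phat)   # close to lam / (lam + mu) = 0.75
```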

Okay so the last thing I'll talk about is Variance Decomposition, and I just want to show you what this is. I'm not
going to go through any proofs or anything very carefully. It turns out, just like we have Double Expectation for
the expected value of y, we also have sort of a Double Expectation kind of thing for variance, and since this is a
Variance Decomposition this is useful a lot of the time when we're simulating complicated random variables. So
the variance of y can be broken down into the expected value of the conditional variance of y given
x, which I'll define below, plus the variance of the expected value of y given x. This is a mess. It's awful. And I
just want you to see this. I don't expect you to memorize it. Here's a proof of this thing from Ross, and along
the way I'm going to define all this stuff. So let's look at the first term: the expected value of the variance of y
given x. What is that? Let's go to First Principles, we'll look on the inside the variance of y given x is the
expected value of y squared given x, minus the expected value of y given x, quantity squared. That's from First
Principles of what the variance of something is. The variance of z equals the expected value of z squared
minus e of z quantity squared. That's what's going on there. So right here, this equation follows from Double
Expectation where I used y squared instead of y. So the expected value of the expected value of Y squared
given X is just the expected value of Y squared. This term is a mess, I'm not going to fool around with it, we'll
get rid of it in a minute.

So similarly, the variance of the expected value of Y given X, this is my second term in the decomposition. That
equals, again I'm going to apply First Principles, it equals the expected value of the thing squared minus the
expected value of the whole thing quantity squared. That's what this is, this is from First Principles. This term
we're going to leave alone for a minute, because it's going to go away. And this term is the expected value of
the expected value. We know by Double Expectation that's e of y quantity squared. So let's put all this stuff
together, and from the last page, the two quantities that I'm adding up, these two guys I add them up and we
cancel. Boom. That term and both pieces. I'm left with e of y squared from the last page minus e of y quantity
squared from right here. And that equals the variance of Y. And that is a mess.
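
To see the decomposition in action, here's a made-up example (not from the lecture): let X be Uniform(0, 1) and, given X, let Y be Normal(X, 1). Then E[Var(Y|X)] = 1 and Var(E[Y|X]) = Var(X) = 1/12, so the theorem says Var(Y) = 1 + 1/12.

```python
# Empirical check of Var(Y) = E[Var(Y|X)] + Var(E[Y|X]) for
# X ~ Uniform(0, 1) and Y | X ~ Normal(X, 1).
import random

rng = random.Random(7)
n = 200_000
ys = [rng.random() + rng.gauss(0.0, 1.0) for _ in range(n)]
mean = sum(ys) / n
var_y = sum((y - mean) ** 2 for y in ys) / (n - 1)
print(var_y)   # close to 1 + 1/12 = 1.0833
```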

Well here is a summary of what we just did. This is by far the hardest lecture that we'll see for quite some time.
I studied Conditional Expectation along with a bunch of its applications. We did Double Expectation, we did
Standard Conditioning, we did all these little applications. And next time I'm going to tone things down. We're
going to do covariance and correlation.

Lesson 11: Covariance and Correlation
This time I'm going to be talking about independence, covariance, correlation, and some related results. This
will be a lot easier than in the previous lesson. And it turns out that correlation just shows up all over the place
in simulation. Both in inputs, outputs, just everywhere.

So let's start out with this nasty, awful-looking definition. It's not really a definition, it's a theorem, but we'll call it
a definition. This is essentially the two-dimensional version of LOTUS. And by that, remember we call
LOTUS the Law of the Unconscious Statistician, and this is going to allow us to calculate arbitrary
two-dimensional expected values. So, let's suppose that h of XY is sort of some complicated function of X and
Y, where X and Y are random variables, then the expected value of h of XY, well, depending on either the
discrete case or the continuous case will use sums or integrals.
● So in the discrete case it's the double sum of h of XY times f of XY, where f of XY is the bivariate PMF.
● In the continuous case it's the double integral over the entire real line squared of h of XY times f of XY. In this
case, f of XY is the two-dimensional PDF, the probability density function.
So here's the theorem. Now the reason I went through this awful-looking, two-dimensional Law of the
Unconscious Statistician, if you wanted to prove the following theorem, you'd need LOTUS in two dimensions.
So, I'm not going to show you how you actually prove the theorem, but if you needed to do so, if they cornered
you in the back alley you would need the definition of LOTUS. So here's a theorem, it's an easy theorem.
Theorem: Whether or not X and Y are independent, we have E[X+Y] = E[X] + E[Y].
Whether or not X and Y are independent, the random variables, the expected value of X plus Y equals the
expected value of X plus the expected value of Y. So, in fact they don't have to be independent. If you've got
two things, you want to add up the expected value, just add them up. It's very, very simple. That's the easy
one. The more interesting one is
Theorem: If X and Y are independent, then Var[X+Y] = Var[X] + Var[Y].
if X and Y are independent, so now we really do have this independence going on here, then the variance of X
plus Y equals the variance of X plus the variance of Y. Again, you need the two-dimensional LOTUS to prove
this result. This is very, very important. It turns out if X and Y are not independent you may have to worry about
what's called covariance, which we'll talk about in a little while, so stay tuned for the dependence case.

Here's another definition. Since we're into bivariate random variables, let's look jump up to multiple random
variables. Let's suppose we have these random variables X1 through Xn, they are said to form a random
sample from the PMF or the PDF f of X, if all the Xs are independent and each Xi has the same PDF or PMF.
So they're called a random sample then. It's like you're just taking a bunch of Xs from the same distribution,
they're all independent of each other. Here's the notation. You've kind of seen this a little bit already. X1
through Xn are said to be iid f of X, we have the little twiddle there, iid, that's something we've seen multiple
times now, if they form a random sample. Iid means independent and identically distributed. You've seen that
already, now we're formalizing it. Here's an example, and even a sort of a theorem as well. If X1 through Xn
form a random sample from f of X, in other words, they're iid with PMF or PDF f of X, then the sample mean,
just add up the Xi's and divide by n, we all know what the sample mean is, you learn that in third grade, then
the sample mean has the expected value, so the expected value of the sample mean is the expected value of
just one of the observations individually. And the variance of the sample mean equals the variance of one of
the observations divided by n. So the expected value of the sample mean stays the same as the mean of
one observation, but the variance actually starts decreasing. So the more observations you take, the more the
variance of the sample mean decreases. We'll talk about that in greater detail later on, but I just wanted you to
see this result. We'll even prove it later on. Now, it turns out that random variables are not necessarily
independent, and so we've gotta be careful about that, especially in simulation.
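
A quick empirical illustration (an added sketch, not from the lecture) using Uniform(0, 1) observations, which have mean 1/2 and variance 1/12: with samples of size n = 10, the sample means should average 1/2 and have variance (1/12)/10.

```python
# Simulate many iid samples and watch Var(Xbar) = Var(X)/n in action.
import random

rng = random.Random(42)

def sample_mean(n):
    return sum(rng.random() for _ in range(n)) / n

reps, n = 20_000, 10
means = [sample_mean(n) for _ in range(reps)]
grand = sum(means) / reps
spread = sum((m - grand) ** 2 for m in means) / (reps - 1)
print(grand, spread)   # near 0.5 and (1/12)/10 = 0.00833
```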

So let's start off with the definition of the covariance between X and Y. That's kind of the most fundamental
measure of non-independence between two random variables, although there are others. So, the covariance
between X and Y is defined as the expected value of X minus E of X times Y minus E of Y. And let's not be
daunted by this. E of X is just a number, E of Y is just a number. And so what this thing is, it's just a slightly
messy-looking h of XY function, nothing more than that, and so you can use the Law of the Unconscious
Statistician to calculate that. I'm going to make life easier for you, and this is a theorem. That mess equals the
expected value of XY minus E of X times E of Y. That's a theorem, you can prove it while you're
watching the commercials if you want. But this is just an easier expression and right there, this is a much
easier h of XY function to apply the Law of the Unconscious Statistician to, and I'll do a numerical example
later on to show you how to do it. Just as an example, temperature and snowfall often result in a negative
covariance, whereas something like IQ and grade point average might result in a positive covariance.
So we'll go through an example later on, you'll see. Now meanwhile, if I were to take X equal to Y, then it turns
out that the covariance of X with itself is the variance of X. You can see that very easily by plugging in X and Y.
Here's the theorem. If X and Y are independent random variables, then the covariance is zero. That's kind of a
fundamental theorem. On the other hand, this is something that people mess up all the time, the covariance of
X and Y equaling zero does not mean that X and Y are independent. People mess this up all the time. Here's
an example. Let's pretend that X is uniform negative one, one. I'm just making this up, but there's a reason I'm
using this one. So X is just a random number between negative one and one, and let's let Y equal X squared.
Now, Y equals X squared, therefore X and Y are totally dependent, very very very dependent. 'Cause if you
know X, you know Y. On the other hand, the covariance between X and Y is, well, here's the expected value of
X times Y. Expected value of X times X squared, and that equals the expected value of X cubed minus E of X
times E of Y. Well, by symmetry, you can easily show that the expected value of X cubed and the expected
value of X are both equal to zero, therefore the covariance is equal to zero. And so even though the covariance
of X and Y is zero, they are not independent, put an unhappy face. I mean sometimes independence is neither
good nor bad, but we'll put an unhappy face for that.
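
You can watch this happen numerically; here is an added Python sketch of the X ~ Uniform(-1, 1), Y = X squared example:

```python
# The sample covariance of X and Y = X^2 is essentially zero even
# though Y is a deterministic function of X -- zero covariance does
# not imply independence.
import random

rng = random.Random(1)
n = 200_000
xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
ys = [x * x for x in xs]
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
print(cov)   # essentially 0
```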

So let's do a miscellaneous little theorem. This is an easy one, I don't need you to prove it, but I'd just like you
to memorize it. So suppose we're interested in the covariance between aX and bY, where a and b are just
constants. Well, all you do is, you pull the constants outside, and you get a times b times the covariance of X
and Y. So in fact, that has to do with the fact that the variance of aX equals a squared times the variance of X.
That last result is a special case of this theorem. Here's a maybe more important theorem. Whether or not X
and Y are independent, the variance of X plus Y equals the variance of X plus the variance of Y plus two times
the covariance of X and Y. We saw the version without the covariance term when X and Y were independent,
but in general you have to add on this two times the covariance of XY term. So whether or not X and Y are independent, the variance of X plus Y
is V of X plus V of Y plus two times the covariance. If X and Y happen to be independent, then the covariance
is zero. Similarly, the variance of X minus Y is variance of X plus variance of Y, take my word for it, there's a
plus sign there, minus two times the covariance of X and Y. These are just things that you have to know. And
now let's finally give the definition of correlation between X and Y. All this is, it's standardized covariance. So
the correlation between X and Y is the covariance divided by the square root of the product of the variances,
and this standardizes covariance into a number between negative one and one, so correlation, standardized
covariance, is always between negative one and one. If something's highly correlated, the correlation is going
to be around one. If it's negatively correlated, you know maybe highly-negatively correlated, it'll be towards
negative one. Something like SAT scores and first year performance at college, well, believe it or not, the
correlation is a low positive number, maybe .2 or .3, so the more you work with this you'll kind of get a better
idea for what correlations do.

So finally let's look at a joint probability mass function. I'm finally giving an example. So here we have a
joint PMF, f XY, and X can take the values two, three, and four, and Y can take the values 40, 50, and 60. And
what I like to think of these as, well, let's see. X could be the GPA of students at the University of Georgia and
Y, 40, 50, 60, could be their corresponding IQs. This is actual data. So if you look very quickly, you can see that
these numbers in the middle here, that's the joint probability mass function, they add up to one. And here are
the marginals, this is the probability mass function of Y, this is the probability mass function of X. So all these
things add up to one just like they're supposed to, we've seen this kind of example before. Well, if I go and I
calculate the expected value of X using that marginal on the last row, the expected value is 2.8 and the
variance is .66. That's just an elementary calculation, you should know how to do that by now. Similarly, the
expected value of Y is 51 and the variance of Y is 69, so this is just after you do the elementary calculations.
Now here's the interesting thing. This is the new thing that we're doing in this lesson. The expected value of X
times Y is, by the two-dimensional Law of the Unconscious Statistician, the double summation of h of XY, which is X times Y, times f of
XY, and if you go through all the calculations for instance, X times Y times f of XY, well, here's the first value, X,
two, times Y, 40, times f of XY, zero, that's an easy one, that contributes zero to the expected value of XY. Then
you do it for all other, all of those other terms, so it's nine terms in total. After the smoke clears, you get 140.
With the expected value of XY in hand, I can calculate the covariance. That's the thing up top here. The
expected value of XY minus E of X times E of Y, and whatever that is, I can also calculate now the correlation.
So I take this expression up on top, expected value of XY minus E of X times E of Y, divide by the square root
of the product of the variances, and I get negative .415, which is actually a little disappointing, because it tells
me that at the University of Georgia, the GPA is somehow negatively correlated with the IQ. Which is really
weird, but I guess that's why you would go to the University of Georgia in the first place.
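
The slide's joint PMF table isn't reproduced in this transcript, so the entries in the sketch below are a hypothetical table chosen to be consistent with everything stated above (marginals with E[X] = 2.8, Var(X) = .66, E[Y] = 51, Var(Y) = 69, E[XY] = 140, and f(2, 40) = 0); the two-dimensional LOTUS bookkeeping is the point, not the particular numbers.

```python
# 2-D LOTUS bookkeeping for a 3x3 joint PMF. The table entries are
# hypothetical (the slide's table isn't in this transcript) but they
# match the stated marginals and moments.
import math

xs, ys = [2, 3, 4], [40, 50, 60]
f = {(2, 40): 0.00, (2, 50): 0.10, (2, 60): 0.35,
     (3, 40): 0.30, (3, 50): 0.00, (3, 60): 0.00,
     (4, 40): 0.00, (4, 50): 0.20, (4, 60): 0.05}

e_x = sum(x * f[x, y] for x in xs for y in ys)
e_y = sum(y * f[x, y] for x in xs for y in ys)
var_x = sum(x * x * f[x, y] for x in xs for y in ys) - e_x ** 2
var_y = sum(y * y * f[x, y] for x in xs for y in ys) - e_y ** 2
e_xy = sum(x * y * f[x, y] for x in xs for y in ys)   # LOTUS with h(x,y) = xy
cov = e_xy - e_x * e_y
corr = cov / math.sqrt(var_x * var_y)
# E[X]=2.8, Var[X]=.66, E[Y]=51, Var[Y]=69, E[XY]=140, corr about -0.415
print(e_x, var_x, e_y, var_y, e_xy, round(corr, 3))
```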

I'm going to give one more example, this is actually kind of a fun one having to do with portfolio analysis. So
now let's look at two different assets that you own. They could be like two different stock prices or a stock price
and a bond price. Let's call them S1 and S2. The expected returns, this could be the return rate per year, are
let's say mu one for the first asset, mu two for the second, and the variances are sigma one squared, and
sigma two squared for the assets, and let's say that the covariance between the two assets is sigma one, two.
I'm keeping everything completely general here. Let's define a portfolio. All a portfolio is, is a weighted
combination of S1 and S2, where the weight is a number between zero and one. A little secret though: in portfolio
analysis we actually allow negative weights and weights beyond one. That's if you want to do calls and puts,
options like that. Don't worry about it for now. We'll just assume the weight's between zero and one, and so the
portfolio is just a linear combination of S1 and S2. Well, the expected value of the portfolio is the expected
value of the first component, so that's W times mu one, plus the expected value of the second component, one
minus W times mu two. Using the theorems on the variance of the sum of two random variables from a couple
pages ago, the variance of the portfolio is W squared, sigma one squared, that's the first component. Plus one
minus W quantity squared times sigma two squared, that's the second component. Remember now we have
the two times covariance component, that's this, two W one minus W sigma one, two. That's the covariance
component. I can attempt to optimize this portfolio. Let's see, what we can do is fix the
expected value and try to minimize the variance, so I'm going to find the portfolio that minimizes the
variance. So there's other things I could do too, I could find the portfolio that maximizes the expected profit, et
cetera et cetera, but let's at least, right now we'll do the easiest thing and find the portfolio that minimizes
variance. So, let's just do D, DW of the variance, set it equal to zero, and after a little bit of algebra I get this
mess down here. The critical point is W equals sigma two squared minus sigma one, two over blah blah blah,
and it turns out if I take the second derivative, which I'm not going to do here, this is the thing that works.

Let's do a numerical example. So I'm just going to do a plug-and-chug into this thing, suppose the expected
value of S1 is .2, that corresponds to like a 20% return, I'll take that. The expected value of S2 is 10%, and the
variances are .2, .4, covariance is negative .1, so that's interesting. You have two different assets that are
negatively correlated maybe like a stock and a bond, I don't know, you can make these things up quite easily.
So what value of W maximizes the expected return of this portfolio? And what value of W minimizes the
variance? And I've got negative covariances here, so I gotta be a little careful. Well, let's talk trade-offs here,
and I'm going to let you play around with this at home. I could certainly find a value of W that maximizes the
expected return, but it may be the case that the variance is high, maybe I can take advantage of this negative
covariance to reduce the variance, I mean, I hate variation in my portfolio, can't stand it. So the value of W that
maximizes the expected return, which is surprisingly easy to find, it turns out, might not be the best thing
if you don't like variance. Well, what value of W minimizes the variance? All right, so you can just do a
plug-and-chug, but I'm not going to show you, you can do that at the house. So think about the trade-offs
between return and variation, and this is what portfolio managers do.
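
Here is the plug-and-chug you can do "at the house," sketched in Python (an added illustration, not from the lecture); the minimum-variance weight from setting the derivative to zero works out to W* = (sigma2 squared minus sigma12) over (sigma1 squared plus sigma2 squared minus two sigma12).

```python
# Minimum-variance weight for the two-asset portfolio example:
# mu1 = .2, mu2 = .1, var1 = .2, var2 = .4, cov12 = -.1.
mu1, mu2 = 0.2, 0.1
var1, var2, cov12 = 0.2, 0.4, -0.1

def port_mean(w):
    return w * mu1 + (1 - w) * mu2

def port_var(w):
    return w * w * var1 + (1 - w) ** 2 * var2 + 2 * w * (1 - w) * cov12

w_star = (var2 - cov12) / (var1 + var2 - 2 * cov12)
print(w_star, port_mean(w_star), port_var(w_star))
# W* = 0.625, expected return 0.1625, variance 0.0875 -- versus
# variance 0.2 if you put everything in asset 1 (w = 1).
```

Note the trade-off the lecture mentions: w = 1 maximizes the expected return at 0.2, but W* = 0.625 cuts the variance to 0.0875 by exploiting the negative covariance.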

So, here's a summary of what we did in this lesson. I defined covariance and correlation, gave a couple
practical examples, some really practical examples, actually. And next time, I'm going to start with a very, very
easy review of some of my favorite distributions.

Lesson 12: Probability Distributions


In this lesson we'll be going over some favorite probability distributions. Here's the lesson overview. Last time I
went through and defined covariance and correlation, gave a couple of quick practical examples. In this lesson
I'm going to talk about a few of my favorite distributions. And we'll go through a little list of these important
distributions, and I'll talk about both discrete and continuous distributions.

Here are some discrete distributions. We'll go through these pretty quickly 'cause we've kind of mentioned some of them already. The easiest
distribution, which my cat can do, is the Bernoulli distribution with parameter p. It takes two values, x equals 1,
with probability p, and x equals zero, with probability one minus p, or some people would call that q. The
expected value of the distribution, I think we already showed this, is p, the variance, is p times q, p times one
minus p, and the moment generating function, which we sort of did in some bonus slides, was pe to the t plus q. You can go and calculate these on your own if you've forgotten how, it's not too bad, you just do a plug
and chug.

The Bernoulli generalizes to the binomial, so let's define that and go over a couple of its properties. If X one through X n are iid Bernoulli, so we've done a bunch of independent Bernoulli p trials, we've conducted the same zero-one experiment n times: you do this experiment, if it's a success, you give it a one, if it's a
failure, you give it a zero, with probabilities p and q. Add up the number of successes, that's what Y is. Y is the
number of successes of n Bernoulli trials, and that's binomial. After a little bit of thought, the pmf of the
binomial is n choose y, and that n choose y, that's binomial coefficient notation, that's just n factorial divided by y factorial times n minus y factorial. This is something that you should remember, or if not, look it up. So n choose y,
p to the y, q to the n minus y, and the only possible number of successes are zero, one, dot, dot, dot, up to n,
they correspond to the number of successes in n Bernoulli trials, and just very quickly, here's how you calculate
that. The probability of having y successes in a row, is p to the y, followed by n minus y failures in a row, q to
the n minus y, but the probability of having y successes out of n trials, you're not going to necessarily have y
successes in a row, so the number of ways you can scramble up that one particular set of successes and
failures, is n choose y. And that's how you get f of y. Using the fact that I can add up a bunch of Bernoullis to
get a binomial: well, the expected value of the binomial is n times p, I'm just adding up p n times. The variance of the iid Bernoullis added up is n times p times q, and I won't go through the calculation, but the moment generating function just turns out to be pe to the t plus q, all raised to the n'th power. It turns out that moment generating functions multiply together when you add up independent random variables. Don't worry about that for now.

The next distribution I'll look at is the Geometric distribution. X is geometric, and that corresponds to the number of
Bernoulli trials until I finally get a success. So for example, FFFS, failure, failure, failure, success, means that I
had to do Bernoulli trials four times before I got my first success. As we've mentioned before, the pmf is q to
the x minus one times p, for x equals one, two, dot, dot, dot, because you need to take at least one trial before
you get your first success. And what this means, is that I've taken x minus one failures, and then one success
before I can stop. q to the x minus one times p. We derived, or at least I mentioned, that the expected value of
X is one over p, it turns out that the variance of X is q over p squared, and just to keep this in your
encyclopedia, the moment generating function is pe to the t over one minus qe to the t. You can talk about that at parties.

The geometric actually generalizes to the negative binomial. Now, I'm not going to go
through huge detail on this, except to say that the negative binomial is the sum of r iid geometric p random
variables. So in English, what this means is how many Bernoulli trials do I have to take until the rth success
occurs, like a pirate. rth success. So for instance, if I have this FFFSSFS, that implies that it took me seven
trials to get my third success. If you're interested, here's the pmf, and the expected value is r over p, ah look at that, one over p, which is the geometric expected value, times r. The variance is r times q over p squared, which is r times the answer for the geometric, so it kinda makes sense.

The next example is very useful, the Poisson. Before I
talk about the Poisson I'm gonna define something called a counting process, and then we'll get to the Poisson
distribution in a minute. A counting process, N of t, just keeps track of the number of arrivals that we observed
during the interval zero to t, it's a time interval. So if seven people showed up by time three, then N of three
equals seven. There's lots of different types of counting processes, I mean you could have the number of
people showing up at a restaurant; turns out that's not quite Poisson. The number of people showing up at the restaurant as the day proceeds stays the same for a while, then it increases every time somebody shows up;
people could show up in groups, so sometimes it jumps; maybe you start out at three customers, and then a party of six shows up, so it jumps to nine.

Well, here's what a Poisson process has to satisfy. It's a counting process such that, and I'm going to do this in English, not in math: arrivals occur one at a time, at rate lambda. So
you could have maybe four customers an hour, but they show up one at a time. These are very lonely people
showing up at the restaurant. So it's very rare, almost never, that two customers show up at once. Then they
show up every once in a while, not every 15 minutes exactly, but on the average, every 15 minutes here,
because it's lambda equals four customers an hour. But they show up one at a time. We have independent
increments. So all this means in plain English, if I'm looking at two disjoint time intervals, so let's say between
12 midnight and two, that's one interval, and between 5:00 a.m. and 10:00 a.m. that's another disjoint interval,
the numbers of arrivals in those two intervals have nothing to do with each other, they're independent.
Whoever shows up between 12 and two, has nothing to do with the number of people who show up later on in
the day. And finally, we have something called stationary increments. So the distribution of the number of
arrivals only depends on the length of the time period, doesn't really depend on where it starts, just depends on
the length of the time period. So whereas intervals have to be disjoint to be independent, that's assumption
number two, assumption number three only has to do with the length of the time interval, so stationary
increments deals with that.

Here's the definition of the Poisson distribution. X is said to be Poisson lambda if it's the number of arrivals in one time unit from a Poisson process. So the Poisson distribution is just N of one, if N of t is a Poisson process with rate lambda. You may recall from previous lessons that the pmf f of x
equals e to the minus lambda, times lambda to the x, over x factorial, for x equals zero, one, two, dot, dot, dot. The expected value and variance are both lambda, and there's the mgf, which I'm not even going to worry about.
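That pmf is easy to sanity-check numerically. Here's a minimal Python sketch (pure standard library; lambda = 4 is just the customers-per-hour rate used as an example above) that truncates the infinite sum far out in the tail and recovers the mean and variance:

```python
import math

lam = 4.0  # e.g., four customers an hour

def pmf(x):
    # Poisson pmf: f(x) = e^(-lambda) * lambda^x / x!
    return math.exp(-lam) * lam**x / math.factorial(x)

# The support is infinite, but for lambda = 4 the probability beyond
# x = 60 is astronomically small, so a truncated sum is plenty.
xs = range(61)
total = sum(pmf(x) for x in xs)
mean = sum(x * pmf(x) for x in xs)
var = sum(x * x * pmf(x) for x in xs) - mean**2

print(round(total, 6), round(mean, 6), round(var, 6))  # 1.0 4.0 4.0
```

The pmf sums to one, and the mean and variance both come out to lambda, just like the slide says.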

Some continuous distributions now. Here's the Uniform distribution, f of x equals one over b minus a for x
between a and b. Here's the Exponential distribution, f of x equals lambda e to the minus lambda x; we've already seen the expected value and variance and the moment generating function so many times I'm not going
to go over it again.

Here's a beautiful theorem that people use all the time: the exponential lambda distribution has the memoryless property. What that means is, if the random variable X is exponential and it has survived up to time s, so if my friend says this light bulb has already lived s time units, then the probability that it survives t additional units, the probability that X is greater than s plus t given that X is greater than s, is merely the
unconditional probability that it would've survived t units to begin with. In other words, given that the darn thing
has already lived s time units, it's just like it starts over again, it forgot that it lived s time units, the probability
that it survives t more is just the unconditional probability that it would have survived t to begin with. Here's an
example. If X is exponential one over 100, that means lambda equal one over 100, then the probability that X
is greater than 200, given that X is greater than 50, by the memoryless property, it's the probability that X is
greater than 150, and that equals e to the minus lambda t, 'cause remember we have the cdf as one minus e to
the minus lambda t, this is the complement of the cdf, so e to the minus lambda t, do a plug and chug with
lambda equals one over 100, and t equals 150, and there's your answer, I won't even bother calculating that.
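That plug-and-chug is easy to finish off in a couple lines of Python, using the survival function P(X > t) = e to the minus lambda t (the complement of the cdf, as just described):

```python
import math

lam = 1 / 100  # X ~ Exponential(1/100)

def survival(t):
    # P(X > t) = e^(-lambda * t), the complement of the cdf
    return math.exp(-lam * t)

# P(X > 200 | X > 50) = P(X > 200) / P(X > 50), by the definition of
# conditional probability, since {X > 200} is contained in {X > 50}.
conditional = survival(200) / survival(50)

# Memoryless property: this should match the unconditional P(X > 150).
print(round(conditional, 5), round(survival(150), 5))  # 0.22313 0.22313
```

Both routes give e to the minus 1.5, about 0.223, so the light bulb really has forgotten its first 50 time units.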

Here's the Gamma distribution. Take a look at this mess. What an awful thing. But that's what it looks like; this gamma of alpha is the gamma function, which you might've learned in calculus class. The expected value turns out, after a lot of algebra, to be alpha over lambda, the variance is alpha over lambda squared, and Mx of t, the mgf, equals this slight mess. You may notice that if you take alpha equals one, the exponential distribution pops out as a special case. Another special case arises from what's called the Erlang distribution, let's talk about that. If X one through X n are iid
exponential, and I just add 'em up, so suppose I have n iid light bulbs, how long are they all going to last if I replace each one by the next as soon as it fails? It turns out that if I add up the n iid exponential lifetimes, I get a Gamma distribution with parameters n and lambda, that's a very special distribution called the Erlang, named after this
Danish queueing theory guy. So adding up n exponentials gives me an Erlang, it has a very nice closed form
cdf, that's what it is. We'll use it occasionally. The cdf of the Gamma distribution is usually not closed form, but
it is closed form for the Erlang, just thought you'd like to know.

Not that I want to blow these off, but I'm just going to mention the Triangular distribution: X is distributed triangular if it has this pdf, which if you plot it out looks like a triangle, and the expected value is just a plus b plus c over three. X has the Beta distribution if it's
got this awful looking pdf, really not so bad, this mess right here is a bunch of gamma functions, and those
taken together believe it or not are a generalization of a binomial coefficient, and I've courteously written down
the expected value and variance for ya.

And now let's talk about the most important distribution in the universe, the Normal distribution. f of x is this beautiful bell-shaped distribution that you've seen a million times, and the mean of this thing is mu, right there, no mus is good news, the variance is sigma squared, and the moment generating function, after a little bit of algebra and calculus, is e to the mu t plus a half sigma squared t squared.
Just take my word for that. Here's a theorem: if X is normal, then a linear function of X is also normal. aX plus b
has mean a mu plus b, and aX plus b has variance a squared times sigma squared. Very nice result. In particular, if X is normal, then if I take a equal to one over sigma and b equal to negative mu over sigma, I get this special case: Z distributed normal zero, one. So if I subtract off the mean and divide by the standard deviation sigma, I get a standard normal, a normal zero, one distribution. Here is the specialized pdf with, of course, mu equals zero and sigma equals one. It's so special I give it its own notation.
Instead of using f, I use phi, and the cdf is denoted by capital phi of z, and that thing is so important, that it's
tabled, you may remember these tables from high school. For instance, the value capital phi of 1.96, this is the
probability that a standard normal is less than 1.96, it's .975, that's just something you can go back, and look at
your old friendly tables.

A theorem: if X one and X two are independent and the Xis are normal, then X one plus X two adds up to a normal distribution, and in fact the means add up and the variances add up. So here's
a quick example, suppose that X is normal three, four, Y is normal four, six, and X and Y are independent, then
let's look at the distribution of two x minus three y plus one. Well if X is normal with mean three, then two x is
normal with mean six, let's remember that. Mean six. If Y is normal with mean four, then minus three y is
normal with mean negative 12. So we've got a six here, and a minus 12 here, and a plus one here, so when
you add 'em all up, six minus 12 plus one, that gives me a mean of minus five. And that's where that comes
from. The variance takes a little more work, but you can easily calculate a variance of 70; it's a good exercise and I'll probably give one of those for homework.

A corollary of a previous theorem is that if the Xis are iid
normal mu, sigma squared, then the sample mean, remember the sample mean you just add up the Xis and
divide by n, the sample mean has the same mean mu and the variance is sigma squared over n, that's a
corollary of a previous theorem. So the sample mean has the same mean as the regular observations X one
through Xn, but the variance of the sample mean is sigma squared over n. What that means is as you get more
information, the sample mean becomes less variable, becomes a better estimator of mu. So this is a special
case of what's called the Law of Large Numbers, which says that X bar approximates mu very well as n becomes large.

Here's a summary of what we just did with all these distributions. We went over our favorite discrete and continuous distributions and we ended on a high note by ringing that normal bell. And next time,
we're going to talk more about the normal distribution because it figures prominently in that Law of Large
Numbers which I just talked about, as well as The Central Limit Theorem, which is the most important theorem
in the entire universe.

Lesson 13: Limit Theorems
Last time I went over a whole dictionary of sort of my favorite distributions. And this time, I'll take it to the limit
one more time, and we'll see what happens as the sample size, n, gets big. Well, normality happens. In
particular, we'll be looking at the Central Limit Theorem which is, like I said previously, the most important theorem ever invented.

So, here's a corollary of the previous theorem that we talked about in the last lesson. If X1 through Xn are iid
Normal mu sigma squared, then the sample mean, which is just the average, has the same mean, mu, and the
variance, though, decreases, sigma squared over n. So if I'm using X bar to estimate mu, because mu is
unknown in practice, then okay, that's great, it's got the correct mean, mu, but the variance of X bar is sigma
squared over n. That means, the variability of X bar is getting smaller and smaller, and it's kind of converging
towards mu, fantastic. Like I said before, this is a special case of the Law of Large Numbers, says that X bar is
a good approximation for mu as n becomes large. Gonna make a nasty little definition, which I use
occasionally. Suppose that Y1, Y2, dot dot dot, have respective cdf's, capital F of Y1, capital F of Y2, and so on. We say that this sequence of random variables, Y1, Y2, dot dot dot, converges in distribution to the random variable Y having cdf F of Y, that's my limiting cdf, if the limit as n goes to infinity of F of Yn, evaluated at y, equals F of Y at y, for all values of y. The notation is Y sub n goes to Y with a little d over the arrow, which means converges in distribution. That just means that the cdf's of the Yn's converge to this final cdf of Y. Now how does that apply? This is going to be
fundamental to the Central Limit Theorem in that we'll have a series of things converging to normality, and here
we go.

Suppose that X1, X2, dot dot dot, are iid with pdf or pmf f of x, having mean, mu, and variance sigma squared.
So they're iid, with mean, mu, variance sigma squared. Let's define Z of n equal to the summation of all the Xi's
so far, minus n times mu, so I'm subtracting off the mean of the sum, and I'm going to divide by the standard
deviation of the sum. Another way I can write this, this is just algebra, I divide top and bottom by n, so the two
expressions are completely equivalent, this is just square root of n times X bar minus mu over sigma, and so
you can see, what I've done is I've subtracted off the mean, and divided by, oh this last thing is sigma over
square root of n, I've divided by the standard deviation. The theorem says that Z sub n converges in distribution to a normal zero, one distribution. That's why I bothered with the definition on the last page. So basically, this is saying X bar
converges to a normal. Summation of Xi converges to a normal, properly standardized. The cdf of Z of n
approaches cap phi of z as n increases. That's the Central Limit Theorem. It's the most important theorem in
the universe. It usually works really well if we have a little bit of symmetry, and if n is kind of big, at least 15.
And we're going to look at more general versions of the Central Limit Theorem a little bit later on in the course.

Here's an example, I'm going to do this fairly quickly. Let's suppose that I have 100 observations, and let's pretend they're iid exponential with parameter one, so secretly I know that mu equals one over lambda and sigma squared equals one over lambda squared, but they're both equal to one.
So, n is greater than or equal to 15, I'm going to use the Central Limit Theorem here, let's find an
approximation for the probability that the summation of the Xi's, all 100 of them, is somewhere between 90 and
110. So this is basically saying, suppose I have 100 light bulbs, what's the probability that they last between 90
and 110 months? I'm going to add them all up, what's the probability that they're between 90 and 110. Well,
subtract off n times mu in both places and divide by sigma times the square root of n, so I've standardized, I've got a normal zero, one in between, and I've chosen these numbers because I wanted to cheat, I knew the answer. 110 minus 100 over the square root of 100, that equals one, and so, this probability reduces to the probability that a normal zero, one is
between negative one and one, and I can go to the tables and I know that that equals .6827, that's my
approximation. So, there's approximately a 68% chance that all those light bulbs together are going to last between 90 and 110 months. Now, it turns out, since the summation of the Xi's is a summation of a bunch of exponentials, it's Erlang, so I can actually go through and use Minitab or some other software package to calculate the exact value,
and it turns out, what do you know, it's .6835, so, my approximation worked just great. It was fantastic, very
nice.

Here are some exercises, I'll go through these lickety-split, this is something for you to do at home during the
commercials. Pick your favorite random variable X1, simulate it a bunch of times and make a histogram. It
won't look normal if it's your favorite random variable, it's not going to be normal. Simulate a couple of these
things and add them up. It's going to look a little different than the original random variable, so if you take two
uniform distributions and add them up and do that many, many times and make a histogram, that's going to
look a little different than a uniform distribution. It'll look triangular, it turns out. Do this with three of your favorite
uniforms, or whatever random variable you want. Do this with many, and eventually, you're going to get a
normal distribution. Or something that looks normal. But look at number five, it turns out this trick will not work
for this Cauchy distribution. I don't expect you to necessarily prove this at this point, but we'll encounter this a
couple times, and you'll see why it doesn't work later on. So, Central Limit Theorem doesn't always work, and
there's a reason for this, we'll talk about that later on.
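Here's one way to try exercises two through four at home with uniforms, as a hedged Python sketch (pure standard library; the seed, the sample count, and the choice of 12 uniforms per sum are all arbitrary):

```python
import random

random.seed(6644)  # arbitrary seed, just for reproducibility

def sum_of_uniforms(n):
    # Add up n iid Uniform(0,1)'s: the sum has mean n/2 and variance n/12
    return sum(random.random() for _ in range(n))

# Text histogram of 10,000 sums of 12 uniforms. One uniform looks flat,
# two look triangular, and by twelve the bell shape is unmistakable.
reps = 10_000
samples = [sum_of_uniforms(12) for _ in range(reps)]
bins = [0] * 12
for s in samples:
    bins[int(s)] += 1
for i, count in enumerate(bins):
    print(f"[{i:2d},{i+1:2d}): {'*' * (count // 100)}")
```

Change the 12 to 1, 2, 3, and so on to watch the histogram morph from flat to triangular to bell-shaped, exactly the progression the exercises describe.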

So, what did we just do? Well, I went over a bunch of Central Limit Theorem results, although we'll be seeing
more general versions of that a little bit later on. And in the next lesson, we'll be starting our Stats Attack. Do
three lessons dealing with some statistics results that you may remember from way back when.

OPTIONAL: Lesson 14: Introduction to Estimation
In this lesson we'll be doing the start of a statistics review. Here's our overview. Last time I finally completed
that Probability Boot Camp. I bet that was a lot of fun for everyone. And this time, I'm going to be starting a
review of the stats basics. And specifically I'll be talking about unbiased estimation and mean squared error, a
couple of concepts that are important. If you are not yet, or no longer a stats maven, I would just suggest going
through this leisurely. If you kind of know all this stuff, feel free to skip ahead a little bit.

Here's our fundamental definition: a statistic is just a function of the observations X1 through Xn that's not
explicitly dependent on any unknown parameters. So these are things that you can actually observe. There's
nothing unknown about the Xs. Here's some examples. The sample mean, which is just summation of the Xis
over n, and the sample variance, the summation of Xi minus Xbar, quantity squared, over n minus one. There is nothing
unknown that you can't observe there. Statistics are random variables. So if I take two different samples of
observations, I'm going to get two different values of the stat. And a statistic is usually used to estimate some
unknown parameter from the underlying probability distribution of the Xs. So for instance, I use the sample
mean to estimate the true mean, which I won't normally know. So for instance, if mu is the true mean of a
random variable, then I take a bunch of samples, Xis, and I usually use the sample mean to estimate the true
mean.

Let's suppose that X1 through Xn are iid random variables. I'm going to let T of X be just a function of X1
through Xn, and I'm going to let that be a statistic, so it's a function that I can calculate, based only on the
observations and it has no unknown parameters in it. I'm going to use T of X to estimate this unknown
parameter, let's call theta, theta is sorta my generic unknown parameter, and if I use T of X to estimate theta,
I'll call T of X a point estimator for theta. Here's some examples. Xbar, the sample mean, is usually a point
estimator for the true mean. So Xbar is an estimator for an unknown parameter mu, the true mean. S squared, the sample variance, is often a point estimator for the true variance, sigma squared, the variance of Xi, which I would not know in practice. Now, it would be nice if this statistic had certain properties. For
instance, I would hope that its expected value should equal the thing that we're trying to estimate, so the
expected value of Xbar ought to equal mu. That's called unbiasedness. It should also have low variance. It
doesn't do me any good if T of X is just bouncing around all the time, depending on which sample I take.
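To make the definition concrete, here's a tiny Python sketch computing both example statistics for a made-up sample (the numbers are purely hypothetical):

```python
data = [58, 123, 46, 99, 84]  # a made-up sample; any observed numbers work

n = len(data)
xbar = sum(data) / n                               # sample mean
s2 = sum((x - xbar) ** 2 for x in data) / (n - 1)  # sample variance

# Both are computable from the observations alone -- no unknown
# parameters appear anywhere, which is what makes them statistics.
print(xbar, s2)  # 82.0 961.5
```

Draw a different sample and you'd get different values of xbar and s2, which is exactly the sense in which statistics are random variables.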

So, like I said, we say that T of X is unbiased for theta, if the expected value of T of X equals theta, the thing
that it is trying to estimate. Here's an example, I'm not going to go through every step here, because we've
actually already done this a bit, but the expected value of Xbar, if the Xs are iid, equals the expected value of
the sum over n. I'll pull the 1 over n outside, and the expected value inside. All these E of Xis are equal to mu,
and so when the smoke clears, the darn thing equals mu. The expected value of Xbar equals mu. Therefore,
Xbar is unbiased for mu. Va-voom, that's why we call Xbar the sample mean cause it's a good estimator for the
mean. Very nice. As a baby example, let's suppose that X1 through Xn are iid exponential lambda. Well, I said
in the previous example theorem, the Xis can be iid anything, so that means that Xbar is unbiased for mu, and
we happen to know that for the exponential, mu equals 1 over lambda. Now lambda's unknown, but at least
Xbar is a good unbiased estimator for 1 over lambda. Little bit of a warning, just because Xbar is unbiased
for 1 over lambda, does not mean that 1 over Xbar, the reciprocal, is unbiased for the reciprocal of 1 over lambda, namely lambda. So it turns out, in fact, 1 over Xbar is biased for lambda. We'll talk more about that
later, but at least the good news is that Xbar is unbiased for 1 over lambda. Better than nothing.
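You can see that bias in a quick Monte Carlo sketch (the seed and sample sizes here are arbitrary; for Exponential(lambda) it can be shown that E[1/Xbar] works out to n times lambda over n minus 1, which is 1.25 in this setup rather than lambda = 1):

```python
import random

random.seed(12345)  # arbitrary seed for reproducibility
lam, n, reps = 1.0, 5, 200_000

# Average 1/Xbar over many samples of size n from Exponential(lam)
est = 0.0
for _ in range(reps):
    xbar = sum(random.expovariate(lam) for _ in range(n)) / n
    est += 1.0 / xbar
est /= reps

# Xbar itself is unbiased for 1/lam = 1, but 1/Xbar overshoots lam = 1:
# the long-run average lands near n*lam/(n-1) = 1.25 rather than 1.
print(round(est, 3))
```

The bias shrinks as n grows, since n over n minus 1 approaches one, but for small samples it's very noticeable.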

Here's another example theorem. Suppose the Xis are iid anything with mean mu and variance sigma squared,
then the expected value of S squared, the expected value of that mess there, after a lot of algebra, it turns out
it's equal to the variance of Xi equals sigma squared, so that means that the expected value of S squared is
always equal to sigma squared, so S squared is unbiased for sigma squared. And that's why we call it the
sample variance. Here's the baby example, supposed the Xis are, again, iid exponential, and that means that S
squared is unbiased for the variance, which turns out to be 1 over lambda squared.

Now I know, everybody's probably itching to see this proof, and because this isn't really a stats class, I will just
walk us through this very quickly. All we do here, is I'm going to find the expected value of this thing, S
squared. If it equals sigma squared then I'm done, because I want to show that it's unbiased. So before I start
taking expected values, just believe me when I say this thing, after some algebra, equals this thing. If you just
go through and do the algebra, that's what you come up with, so now, I can start doing some algebra using the
fact that the expected value of Xi equals the expected value of Xbar, and the variance of Xbar equals the variance of X1 over n, which equals sigma squared over n. So keeping these two facts in mind, here we go. The
expected value of S squared equals, plug in to the equation on the top. Put the expected values in, this
expected value is the same for every value of i, there are n of them. Pull the n outside. Already, it's starting to
look easier. Now, use the fact that the expected value of X squared equals the variance of X plus the expected
value of X quantity squared, that's what that is. So it looks a little awful, we're taking one step back to go two
steps forward, and now, the variance of X1 is sigma squared, the variance of Xbar is sigma squared over n.
The expected value of X1, quantity squared, is mu squared. The expected value of Xbar, quantity squared, is mu squared. We get
some cancellation. Va-voom, sigma squared, and we are done with the proof. You can go through the details
again on your own during the commercials. So, a remark, unfortunately, S is biased for the standard deviation,
so even though S squared is unbiased, S is biased, too bad.
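A quick Monte Carlo sketch shows both facts at once (arbitrary seed and sizes; with normal data and n = 5, E[S] is roughly 0.94 times sigma even though E[S squared] equals sigma squared exactly):

```python
import math
import random

random.seed(987)  # arbitrary seed for reproducibility
mu, sigma, n, reps = 0.0, 1.0, 5, 100_000

mean_s2 = mean_s = 0.0
for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(x) / n
    s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    mean_s2 += s2
    mean_s += math.sqrt(s2)
mean_s2 /= reps
mean_s /= reps

# E[S^2] = sigma^2 = 1 (unbiased), but E[S] < sigma, because taking
# a square root is concave and Jensen's inequality kicks in.
print(round(mean_s2, 2), round(mean_s, 2))
```

The average of S squared hovers right at one, while the average of S settles noticeably below one, which is the bias the remark is pointing out.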

Here's the big example. Again, I'll go through the highlights. Suppose that X1 through Xn are iid uniform zero
theta. So I don't know theta, that's my unknown parameter. What I'm doing is I'm sampling observations that
are between zero and an unknown upper bound, so the PDF for this example is 1 over theta, where X is
between zero and some unknown value theta. Let's just say I'm giving you some numbers, 58, 123, 46, what's
that value of theta? Well, secretly, which I'm not telling you, it's 150. But you can't tell because you can only
look at the observations that I'm giving you. So how would we go about estimating theta? I'm going to look at
two estimators. Y1 equals 2 times Xbar, and Y2 equals this complicated thing. N plus 1 over n, times the
maximum of Xi. Again, I don't want to bore you with huge algebraic details. Let's at least look at that first
estimator. The expected value of Y1 equals 2 times the expected value of Xbar by definition, remember Xbar is
always unbiased for the mean, it's always unbiased for the expected value of Xi, so expected value of Xbar
equals expected value of Xi. For the uniform distribution between zero and theta, that has mean theta over 2, and theta over 2 times that 2 gives me theta. So we see that Y1 is unbiased for theta, so that's fantastic,
Y1 is an unbiased estimator for theta. It's also the case that Y2 is unbiased, but it takes quite a bit more work.
What I could do, I'll go through all the steps in the notes and you can follow along on your own. What I would
do is as a first step, I'd get the CDF of the maximum of the Xis, that's the first step.

Here's what that looks like, that's the probability that M is less than or equal to little y, that's by definition the
CDF. It equals, if the maximum is less than little y, all of the Xis are less than little y. Since they're independent,
I can take the product. I can then calculate for each individual probability, the exact answer, the exact value
that the probability of X1 is less than or equal to y. That turns out to be y over theta. All that's raised to the nth
power. I then take the derivative to get the probability density function. Here's the derivative. Once I have the
probability density function of M, I'm kinda set, I can get the expected value of M. By definition, that's just the
integral of Y times F of Y, and that turns out to be n theta over n plus 1. And this is where I get the funny n plus
1 over n term. If I take Y2 equal to n plus 1 over n times M, which is that thing, it turns out that Y2 is unbiased
for theta. Why did I go through all that? Well, Y1 and Y2 are both unbiased, so which one is better? Well, the way to now
figure out which is better is to compare variances. I am not going to tell you how to do this because it's a mess,
but after a lot of algebra, it turns out that the variance of Y1 is theta squared over 3n. Take my word for it. And the variance of Y2 is theta squared over n times n plus 2. Look at this, we've got basically an n squared on the bottom, and that's way bigger than the n that's sitting in the bottom of the variance of Y1. So the point is that Y2 has a
much lower variance than Y1, and it's better.
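A Monte Carlo sketch makes the comparison vivid (arbitrary seed and sizes; theta is secretly 150, just like the lecture's example, and the theoretical variances with n = 10 are theta squared over 3n = 750 for Y1 versus theta squared over n times n plus 2 = 187.5 for Y2):

```python
import random

random.seed(6644)  # arbitrary seed for reproducibility
theta, n, reps = 150.0, 10, 100_000

y1s, y2s = [], []
for _ in range(reps):
    x = [random.uniform(0, theta) for _ in range(n)]
    y1s.append(2 * sum(x) / n)        # Y1 = 2 * Xbar
    y2s.append((n + 1) / n * max(x))  # Y2 = (n+1)/n * max(Xi)

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((vi - m) ** 2 for vi in v) / (len(v) - 1)

# Both averages land near theta = 150 (both unbiased), but the sample
# variances should be near 750 for Y1 and only 187.5 for Y2.
print(round(mean(y1s), 1), round(mean(y2s), 1))
print(round(var(y1s), 1), round(var(y2s), 1))
```

Both estimators center on 150, but Y2's spread is roughly a quarter of Y1's at n = 10, and the gap only widens with n.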

Now, how to quantify that a little bit more rigorously, well, we've already said that the bias of an estimator, T of
X, is just how far the expected value of the estimator differs from the parameter. So, it's biased if the expected
value of the estimator minus the parameter, is not equal to zero. And so the mean squared error is defined as
MSE equals the expected value of the squared deviation of the estimator from the parameter, that's what that is. After a little bit of algebra, the mean squared error of an estimator is its variance plus the square of its bias, so that's why I was interested in bias and variance in the last example. So
lower mean squared error is better, even if there's a little bias, 'cause I usually compare things based on mean
squared error so a little bias is okay if the variance is dominant. And let's make one last definition, the relative
efficiency of two estimators, T2 to T1, is just the ratio of the mean squared errors, so if the mean squared error
of T1 is less than the mean squared error of T2, that's great, then we want T1. In our previous example, X1
through Xn uniform zero theta, remember both estimators, Y1 and Y2, they were both unbiased, va-voom! In
addition, the variance of Y1 was theta squared over 3n and the variance of Y2 was theta squared over basically n squared, so because they weren't biased, the mean squared error of Y1 is the same as its variance, theta squared over 3n, and the mean squared error of Y2 is theta squared over that n squared
thing. So, Y2 is better, it's got a lower mean squared error. Well, here's a summary of what we just did. We
began our Stats Attack lesson with an intro to point estimation, and we paid special attention to unbiasedness and mean squared error. And next time, we'll look at what are called maximum likelihood estimators, which are a little bit more useful than just plain ol' unbiased estimators.
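To see the bias/variance/MSE trade-off concretely, here is a small Monte Carlo sketch in Python (not from the lecture). It assumes the two estimators from the uniform example take their usual forms, Y1 = 2·X̄ and Y2 = (n+1)/n · max(Xi); the exact formulas are not restated in this passage, so treat those as assumptions:

```python
import random

# Monte Carlo comparison of two unbiased estimators of theta for
# X1..Xn ~ Uniform(0, theta).  Assumed forms (the standard ones):
#   Y1 = 2 * sample mean            Var(Y1) = theta^2 / (3n)
#   Y2 = (n+1)/n * max(X1..Xn)      Var(Y2) = theta^2 / (n(n+2))
random.seed(42)
theta, n, reps = 10.0, 25, 20000

def mse(estimator):
    """Estimate E[(estimator - theta)^2] by simulation."""
    errs = []
    for _ in range(reps):
        xs = [random.uniform(0, theta) for _ in range(n)]
        errs.append((estimator(xs) - theta) ** 2)
    return sum(errs) / reps

mse_y1 = mse(lambda xs: 2 * sum(xs) / len(xs))
mse_y2 = mse(lambda xs: (len(xs) + 1) / len(xs) * max(xs))

print(mse_y1)  # should be near theta^2/(3n)     = 1.333...
print(mse_y2)  # should be near theta^2/(n(n+2)) = 0.148...
```

Since both estimators are unbiased, each MSE is essentially its variance, and the simulation should show Y2's coming out far smaller.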

OPTIONAL: Lesson 15: Maximum Likelihood Estimation
Well now we're in phase two of the stats attack and in this lesson, I'll be talking about maximum likelihood
estimation. Here's our overview. Last time, remember we started at stats bootcamp and I talked about
unbiased estimation and mean squared error. In this lesson, I'll be talking about maximum likelihood estimation
and this is probably the most popular point estimation method. It's a very flexible technique that a lot of
software packages kind of do automatically to help estimate parameters from various distributions. So let's
start out with a quick definition. Let's look at an iid random sample, X1 through Xn. That seems to be how we're
starting all these lectures lately, where each Xi has pdf or pmf f of x. Like last lesson, I'll suppose that theta is
some unknown parameter that we need to estimate. Let's define what's called the likelihood function as L of
theta equal to the product of f of Xi. So L of theta is a function of theta, you'll see how, even though I'm
representing it as the product of a bunch of f of Xi's. So these Xi's are changing from observation to
observation. You don't see a theta on the right hand side, but it's embedded there and you'll see it in the
examples that we do. The maximum likelihood estimator MLE because I'm too lazy to write that thing out all the
time. The MLE of theta is the value of theta that maximizes L of theta. Get it, maximum likelihood estimator.
The MLE is a function of the Xi's and is a random variable. That makes sense. It's going to give me an
estimator for theta, and it better be a function of the Xi's; those are my observations. Let's suppose that X1
through Xn are iid exponential lambda. I'm going to find the maximum likelihood estimator for lambda and in
this case lambda's taking the place of the generic parameter theta. Here's how you do it. I'm going to define L
of lambda equals the product from i equals one to n of f of Xi. This is just by definition. I'm
I'm thinking as little as possible right now. That equals the product from i equals one to n. Well the pdf is
lambda times e to the minus lambda Xi, it's our old friend. And it's important to keep these individual little Xi's because they're different from each other. Now let's see, I can do this algebra in my head: lambda times e to the minus lambda X1, times lambda times e to the minus lambda X2, and so on. You multiply them all together and you're left with lambda to the n times e to the minus lambda summation of Xi. That's just algebra. Now my job is to
maximize L of lambda with respect to lambda to get the maximum likelihood estimator. Now I could take the
derivative and plow through all the horrible algebra that accompanies that derivative, but I'm going to do a little
trick. This trick works most of the time. What I'm going to do is take note of the fact that the natural log of a function is one-to-one. And what that means is, whatever lambda maximizes L of lambda also maximizes the natural log of L of lambda. The lambda that does the trick for L of lambda is exactly the same one that does the trick for the natural log of L of lambda. It's just a property of the natural log, and a good property. What we'll be doing
here is using natural log to kind of make things easier. The natural log simplifies things believe it or not. Here
we go. Let's repeat the last equation from the last page: L of lambda equals lambda to the n times e to the minus lambda summation of Xi. I'm going to take the natural log of this horrible mess, but it's not so horrible. The natural log of lambda to the n is n log lambda, and for the natural log of the exponential term, well remember, log and exp are inverses and
so it's like matter and anti-matter. They blow up and they disappear and you're just left with the thing on the
inside. Minus lambda summation of Xi. And the reason I'm able to separate these two terms is because natural
log of A times B is natural log of A plus natural log of B. That's where that comes from. Okay, now our job is
way less horrible, because I can take the derivative of this thing. The derivative is just the sum of the derivatives of the two terms: n natural log of lambda, minus lambda times the summation of the Xi's. We might remember that the derivative of natural log of lambda is one over lambda, so you get n over lambda for that term. The derivative of the second thing, minus lambda times the summation, is oh so easy: minus the summation of the Xi's, right? Set that equal to zero to get those critical
points. And even I can solve for lambda at this point. So let's look at that equation, solve for lambda and you
get lambda equals one over X bar, which makes a lot of sense, because what's the mean of the exponential
distribution? It's one over lambda and we usually estimate the mean by X bar, because it's unbiased. So X bar
is a good estimator for one over lambda. Stands to reason that a good estimator for lambda is one over X bar.
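As a quick sanity check (not part of the lecture), we can verify numerically that one over X bar really does maximize the exponential log-likelihood:

```python
import math
import random

# Numerical sanity check that lambda_hat = 1/xbar maximizes the
# exponential log-likelihood  ln L(lambda) = n ln(lambda) - lambda * sum(x).
random.seed(1)
true_lambda = 2.0
xs = [random.expovariate(true_lambda) for _ in range(1000)]
xbar = sum(xs) / len(xs)
lam_hat = 1 / xbar

def log_lik(lam):
    return len(xs) * math.log(lam) - lam * sum(xs)

# The MLE should beat every nearby candidate value of lambda.
for lam in (0.5 * lam_hat, 0.9 * lam_hat, 1.1 * lam_hat, 2.0 * lam_hat):
    assert log_lik(lam_hat) >= log_lik(lam)

print(round(lam_hat, 2))  # close to the true rate, 2.0
```

With a big sample, the MLE lands close to the true rate, and perturbing lambda in either direction only lowers the log-likelihood, which is exactly the critical-point argument above.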
So a couple remarks for the reasons that I just said. Lambda hat equals one over X bar makes great sense.
Again, because the expected value of X equals one over lambda. Since we've done all this work, it's kind of like going to a party, and so when we're done with our algebra we put a little hat over lambda to indicate that we
got to the MLE. Good for us. And at the end of our work, we kind of change the little Xs into big Xs to indicate
that this is a random variable. That's just a technicality. I usually put big Xs at the end. And just to be careful, I
didn't do that here 'cause I'm lazy, you probably ought to perform a second-derivative test just to make sure
that you're getting a maximum, not a minimum likelihood estimator, which the boss probably wouldn't like. I'm
not going to blame you if you don't, but I'm literally just being lazy here. So one last thing I'll do in this lesson
and it's called the invariance property of MLE's. This is why I wasn't worried about the one over lambda stuff in
a couple slides ago. So here's what the invariance property says. If theta hat is the maximum likelihood
estimator of some parameter theta, and h is a nice one-to-one function, then h of theta hat is the MLE of h of theta. So if theta hat's a good estimator for theta, then h of theta hat is the maximum likelihood estimator of h of
theta. I'm not going to prove this, but here's an example. Suppose that X1 through Xn are exponential, this is
our old friend that never seems to go away, then let's define what's called the survival function. We'll call that F
bar and I'm not using the bar to denote average. People use this to denote the complement. You are one fine
looking survival function. So F bar of X is literally the complement of the CDF, so that's the probability that X is
greater than little X, not less than or equal to, but greater than. And in terms of the CDF, it's one minus capital F
of x, and I happen to remember the CDF of the exponential; subtracting that from one, it's e to the minus lambda x. So that's the survival function. I saw from the last example that the MLE for lambda, lambda hat, is
one over X bar. So these are my two facts. The survival function, F bar of x is e to minus lambda x and the

MLE for lambda, lambda hat, equals one over x bar. Therefore, the invariance property says that the maximum likelihood estimator for the survival function is, now let's put a giant hat over it, e to the minus lambda hat x. I just substituted in the lambda hat. And lambda hat, remember, is one over x bar, and there you go.
So the MLE of the survival function is e to the minus x over x bar. Little X is arbitrary, x bar is the sample mean
of a bunch of observations. And this is the kind of thing that people use in actuarial tables all the time. What's
the survival function based on data, based on the x bar that you have? Well that was that. So what I did in this
lesson is that I went over some basics on MLE's and these are going to be very, very useful later on in the
course when we talk about simulation input analysis. In the next lesson, well we looked at point estimators,
they're just groovy, but we can do better. I'm going to be looking at confidence intervals, which give us sort of a
lot more information.
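The invariance plug-in is one line of code. Here is a tiny Python sketch (not from the lecture) with made-up lifetime data, chosen so the sample mean is exactly 1.6:

```python
import math

# Plug-in (invariance) estimate of the exponential survival function:
#   Fbar(x) = exp(-lambda * x),  lambda_hat = 1 / xbar
#   =>  Fbar_hat(x) = exp(-x / xbar)
# Toy lifetime data, made up for illustration:
data = [1.2, 0.4, 3.1, 2.2, 0.9, 1.7, 2.6, 1.5, 0.8, 1.6]
xbar = sum(data) / len(data)

def survival_hat(x):
    """MLE of P(X > x) under the exponential model."""
    return math.exp(-x / xbar)

print(round(xbar, 2))               # sample mean: 1.6
print(round(survival_hat(1.6), 4))  # e^{-1} = 0.3679, since x equals xbar here
```

So given only the sample mean of the lifetimes, you can read off an estimated probability of surviving past any time x, which is the actuarial-table use mentioned above.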

OPTIONAL: Lesson 16: Confidence Intervals
In this lesson, I'll be talking about confidence intervals, which is one of my favorite topics. So what we did last
time is I reviewed maximum likelihood estimators, and these were just point estimators, wussy old point
estimators, for some unknown parameter theta. This time I'm going to do a lot better. Not only am I going to
give you a point estimator, I'm going to enhance that by giving confidence intervals for theta. And we'll be using
confidence intervals, CIs, throughout the course, especially when we do output analysis, which is going to
encompass a major module later on. Before I can actually show you confidence intervals, let's go through a
couple definitions of distributions that we'll be needing later on when we do the confidence intervals. And you
probably have some recollection of these things. So I'll go through the definitions fairly succinctly. I'm not going
to go over lots of properties of these things. So first of all, suppose in the standard normal notation that I've
used earlier, suppose that X1 through... Z1, not Xs, Z1 through Zk, are iid normal 0,1 random variables, iid
normal 0,1. Well, let's square them and add them up. Just take my word for this. Take the Zis, square them,
add them up. So let's let Y equal the sum of the squares. Well, that thing is said to have a chi square
distribution with k degrees of freedom. Degrees of freedom is just a term that comes up, and it kind of just
means flexibility. This is just defined as a random variable that arises when you add up a bunch of squares of
standard normals. And the notation is Y is distributed chi squared k. It turns out that the expected value of Y is
k and the variance of Y is 2k. It also turns out if Z is normal 0,1 and Y is chi squared with k degrees of freedom,
and for technical purposes, if Z and Y are independent of each other, then this little mess here... actually, it's very simple. This ratio of Z divided by the square root of Y over its own degrees of freedom, that is, Z over the square root of Y divided by k, has the Student t distribution with k degrees of
freedom. The notation for this is T is distributed little t with k degrees of freedom. You may be recognizing
these things, chi squared and Student t; they arise all over statistics. It turns out that t with one degree of freedom is what's called the Cauchy distribution, which causes a lot of issues later on. But I just
wanted to define that. Finally, if Y1 is chi squared with m degrees of freedom, and Y2 is chi squared with n
degrees of freedom, and if Y1 and Y2 are both independent of each other, then the ratio Y1 over Y2, with these
additional m and n terms. So Y1 over m divided by Y2 over n. Well, that thing has the F distribution, F as in
Frank distribution, with m and n, mommy and Nancy, degrees of freedom, respectively. The notation for this is
F is distributed F(m,n). So these distributions, chi squared, Student t, and F, all arise in confidence intervals,
and also in hypothesis testing. So why would we use these? Because they can be used to construct confidence intervals for mu and sigma squared under lots of different assumptions. What is a confidence interval? Well,
a 100 times one minus alpha percent two-sided confidence interval for that parameter theta, which we don't

know, it's a random interval. Let's call the interval L and U, lower bound, upper bound, such that the probability
that that unknown parameter is in the interval, that probability equals one minus alpha. So you would say, well,
I believe that the probability that the number of people, the proportion of people voting for, I don't know, some
candidate Joe, is between 47 and 51%. And I believe that statement to be true with a probability of .95. That is
a confidence interval. So let's see how we can apply those. Here's some examples, all of which assume that
the underlying observations are iid normal. Here's the easiest example. Let's suppose that sigma squared, the
variance of the Xis is known. Totally unrealistic. Then a 100 times one minus alpha percent confidence interval.
And you pick whatever value of alpha you want, .05, .10. So if alpha is .05, then a 95% confidence interval for
mu is of the form mu is between X bar minus this thing, X bar plus this thing. This thing is the half-length, it's
called, and the half-length is almost always of the form some constant, z, which we'll define in a minute, times
the square root of sigma squared over n. Sigma squared over n is the variance of X bar, you might remember.
Now, in real life, we probably wouldn't know sigma squared, but for this example, I'm assuming sigma squared
is known. So this is a perfectly reasonable confidence interval. The z thing is a normal quantile, which I can
look up. Normal quantile is defined over here. But this is usually something like 1.96, one of our old friends,
1.96, that we look up. Okay, so X bar plus or minus z times the square root of sigma squared over n. That is
my confidence interval for mu. Very, very easy. Here's another confidence interval for mu, but this time it's
assuming that sigma squared is unknown. I'll give a numerical example for this a little bit later on. The
confidence interval now looks exactly like the interval before, except instead of sigma squared, I'm estimating
sigma squared by S squared, and instead of z, I'm estimating by a t distribution quantile. Just like before,
except I don't know sigma squared. So the price I pay is that I have to estimate sigma squared by S squared,
and instead of a z quantile, I have a slightly bigger t quantile. So the confidence interval's going to tend to be
longer. And finally, just 'cuz I'm in the neighborhood, let's also give a confidence interval for sigma squared,
because I usually wouldn't know sigma squared to begin with. A 100 times one minus alpha, you know, like, a
95% confidence interval for sigma squared is, well, there's sigma squared in the middle, and the lower bound is
this thing, the upper bound is this thing. You'll notice that S squared is on both sides. There's no X bar explicitly
written out, although that's contained in the S squared. So S squared is on both sides, and these are chi
squared quantiles on the left and on the right, which you have to look up in the tables. These are in the back of
the book, or you can get them in Excel, you can get them from Minitab. But these are things that you look up.
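As an aside not from the lecture, the plug-and-chug for this sigma-squared interval is short enough to sketch in Python. The numbers below are hypothetical, and the two chi-square quantiles are the usual table values for 19 degrees of freedom (Python's standard library has no chi-square inverse, so they're hard-coded here):

```python
# Two-sided 95% confidence interval for sigma^2 from iid normal data:
#   ( (n-1) S^2 / chi2_{alpha/2, n-1},  (n-1) S^2 / chi2_{1-alpha/2, n-1} )
# Hypothetical summary statistics:
n, s2 = 20, 4.0

# Standard table look-ups for 19 degrees of freedom:
chi2_hi, chi2_lo = 32.852, 8.907   # chi2_{0.025,19} and chi2_{0.975,19}

lower = (n - 1) * s2 / chi2_hi
upper = (n - 1) * s2 / chi2_lo

print(round(lower, 2), round(upper, 2))  # 2.31 8.53
```

Note that the bigger quantile goes on the bottom of the lower bound, which is easy to get backwards; the sample variance itself always sits inside the interval.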
Okay, so that's the confidence interval for sigma squared, which is widely used. So here's an example. I'm
going to give 20 observations. These are residual flame times in seconds of treated specimens of children's
nightwear. In other words, how long does it take for these things to burst into flames? And it turns out when we
took the data, children were not actually in the nightwear when the clothing was set on fire, so don't worry
about that. No children were harmed. Here's the data. These are just 20 numbers. These were all independent

observations, and I want to get a 95% confidence interval for the mean of these guys. There's some underlying
mean. We don't know what it is. Let's get a confidence interval. Here I'll just cut to the chase. After performing
the algebra, it turns out that the sample mean is 9.84 and the sample standard deviation is 0.0954. I didn't want to give you the sample variance, because that would've had too many zeroes in it. I now
plug and chug into either Excel or Minitab to get this inverse t quantile, t sub alpha over two, n minus one. Let's take alpha equals 5% and n is 20, so n minus one is 19. I look that up: 2.093. I complete my
plug and chug. The half-length of the confidence interval is given by that equation from a couple slides ago. I
plug and chug the 2.093 as the quantile. Here's the S, here's the square root of n, and here's my half-length,
and that allows me to get the confidence interval. X bar plus or minus H, and it turns out that the confidence
interval is a fairly short one, but it's still 9.80 to 9.89. So a very, very simple confidence interval for the mean.
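The arithmetic above is easy to replicate; here is a quick Python sketch (not from the lecture) using the summary statistics and the 2.093 quantile quoted in the text:

```python
import math

# 95% t confidence interval for the mean flame time, using the lecture's
# summary statistics: n = 20, xbar = 9.84, s = 0.0954, t_{0.025,19} = 2.093.
n, xbar, s, t = 20, 9.84, 0.0954, 2.093

half = t * s / math.sqrt(n)        # half-length H of the interval
lo, hi = xbar - half, xbar + half

print(round(half, 4))              # about 0.0446
print(round(lo, 2), round(hi, 2))  # about 9.8 and 9.88
```

The raw interval is roughly (9.7954, 9.8846); the lecture quotes it as 9.80 to 9.89 after rounding outward.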
So I'm 95% confident that the true mean lies between these two numbers. We just looked at a few very famous
confidence intervals. They'll come up now and then in the course. These were mostly for the mean and the
variance of a normal distribution. We'll use more later on when we need them. And the nice thing is we are
finally done with Module 2. We did these calculus, probability, and statistics boot camps, and I snuck in some
simulation here and there, so that'll speed things up a little bit later on. In Module 3, we're going to devote that
entirely to simulation, both hand and spreadsheet simulations.

Module 3

Module 3 Slide Decks:

https://prod-edxapp.edx-cdn.org/assets/courseware/v1/f438f8498ed9cc82a2aa880c74119f44/asset-v1:GTx+IS
YE6644x+2T2019+type@asset+block/ISYE_6644_Module_3_-_PRODUCTION_VERSION.pdf

Let’s give a hand for Hand Simulations (Module 3)! In this comparatively easy(!) module, we’ll run some
elementary simulations “by hand”, just to get a feel for the mechanics of the types of models we can work with.
We’ll start off by stepping through the solution of a differential equation – pretty straightforward since there
won’t be any randomness involved; but we won’t even try for an exact solution – just a numerical solution, sort
of a baby version of a simulation. We’ll then do a couple of cutesy Monte Carlo simulation examples that
actually incorporate randomness: we’ll bake a π and then we’ll wait in a line (you’ll see)! After this, we’ll tackle
a slightly more substantial inventory analysis example with random customer demands. At this point – and not
for the first or last time – we’ll review some methods to generate certain easy random variables, including
uniforms, exponentials, and even dice tosses. And lastly, a couple of examples that run simulations with the
help of a spreadsheet.

Lesson 1: Stepping Through Differential Equation


Today, we'll be looking at Hand and Spreadsheet Simulations. These are all kinda fun. We'll be able to handle
these very easily. The first lesson is going to be on stepping through a differential equation, but I'd like to go
through a module overview first.

So in the last module, of course, we reviewed Calculus, Probability, and Statistics, and wow, that was kind of a
long one. In this module, I'm going to go through a bunch of simulation examples that you can literally do by
hand. Or, at least, you can almost do by hand. The idea is simply to give you a flavor of the types of
simulations that you can do. And these are just going to be little teensy, weeny examples.

So continuing the overview,

● I'll first start out with this problem of solving a differential equation by hand without actually solving it,
we'll do it numerically.
● Then we'll look at some Monte Carlo integration problems. We've already looked at those once or twice.
Certainly we went through an example previously.

● I'll go back to this old example of making some pi that we looked at literally in the first module.
● Then we'll look at some details on the single server queue, maybe a little bit more than you would
anticipate.
● We'll then look at a more complicated system than (s,S) inventory model. These are kind of fun as well.
● I'll review the simulation of some elementary random variables and then,
● we'll finally look at spreadsheet simulation.

Again, in this lesson, we'll solve a differential equation by hand. So let's step through a differential equation
numerically.

First of all, let's recall from the calculus review, which you may or may not have gone through in detail, it's
perfectly okay, I'll keep things self contained here.

(𝑑/𝑑𝑥) 𝑓(𝑥) ≡ 𝑓'(𝑥) ≡ lim_{ℎ→0} [𝑓(𝑥+ℎ) − 𝑓(𝑥)] / ℎ

If f of x is a nice continuous function then it has a derivative; we'll call it either d/dx of f(x) or f prime of x, and it's simply defined as the limit, as h goes to zero, of f(x+h) minus f(x), all over h. And that's of course if the limit actually exists and
is well defined and blah, blah, all this math stuff. I want you to think of the derivative literally, as the slope of the
function at any given point. You can reason that as rise over run in the above equation. So for small h, if I kind
of go backwards in the previous equation.

𝑓'(𝑥) ≈ [𝑓(𝑥+ℎ) − 𝑓(𝑥)] / ℎ

Then f prime of x is approximately equal to f of x plus h minus f of x over h. And then if I do a little bit of
algebraic manipulation,

𝑓(𝑥 + ℎ) ≈ 𝑓(𝑥) + ℎ𝑓'(𝑥)

you can see that f(x + h) is approximately equal to f(x)+hf'(x). So this is all very simple math. There's no need
to even get the pen out to show it. This is all from Previous work.

Okay, here's an example, and it's a very easy example. Turns out, I happen to know the exact answer on this
example. Let's suppose that we have a differential equation of some population growth model, and it's of the
form f prime(x) equals 2 times f(x) with an initial condition of f(0) = 10. So the thing starts out at times 0 of a
value of 10. I'm going to solve this using what's called a fixed increment time approach with h, remember the h

from the last page h equal to 0.01, this is known as what's called Euler's method. People in aerospace and all
the other engineering disciplines use this all the time. So by the previous equation on the last slide,

𝑓(𝑥 + ℎ) ≈ 𝑓(𝑥) + ℎ𝑓'(𝑥) = 𝑓(𝑥) + 2ℎ 𝑓(𝑥) = (1 + 2ℎ)𝑓(𝑥)

f of x plus h is approximately equal to f of x plus h times f prime of x. I'm merely repeating the equation from the
last page. And now what I'll do is I'll substitute in the value for f prime from above, and that's just equal to 2
times f of x, and we still have that h remaining.

𝑓(𝑥 + 2ℎ) = 𝑓((𝑥 + ℎ) + ℎ) ≈ (1 + 2ℎ)𝑓(𝑥 + ℎ) ≈ (1 + 2ℎ)² 𝑓(𝑥)

So this thing equals f(x) + 2h times f(x). I collect the f(x) terms that I get, (1+2h)f(x). Similarly, I'll repeat the
exercise again. Now f(x+2h) = f((x+h)+h). Last time I looked h+h = 2h. So I repeat the exercise and I get
approximately, this thing is approximately equal to (1+2h). Now instead of f(x+h), I merely write, instead of f(x) I
now write f(x+h) right here. Instead of the x we now have x+h. From the previous line, I know that f(x+h) =
(1+2h) x f(x). Look at that. I've got another 1+2h there, so I square it and it simplifies to 1+2h quantity squared
times f(x). Now similarly, I can go through this exercise again, and again, and again. And you can either take
my word for it or derive in one line for yourself. Here is the general equation,

𝑓(𝑥 + 𝑖ℎ) ≈ (1 + 2ℎ)ⁱ 𝑓(𝑥), 𝑖 = 0, 1, 2, ...

f of x + ih is approximately equal to 1 + 2h quantity to ith power times f of x, and that's for i = 0, 1, 2, dot dot
dot. And this approximation is fine but it may deteriorate as i gets large because you have a numerical
approximation error that propagates.

What I'll do now as part of this example, I'm going to plug in f(0) = 10 and I'm going to just arbitrarily choose a
small value for h = 0.01, I just made that up. So with these values of f(0) and h in mind, if I go back to the
previous page, let's look at that last equation there. F(x + ih), I plugged in the value x = 0, h = 0.01, I just do a
plug and chug into this equation.

And I end up with f(0.01i), that's h times i, approximately equal to f(0), which is 10, times 1.02 to the ith power, since 1 plus 2h equals 1.02. A very nice, convenient equation that allows me to approximate f of 0.01 times i. It turns out I happen to
know using elementary techniques that the true solution to this differential equation is f(x) = 10 times e to the
2x. That's the actual answer. You can figure it out using the old calculus tricks that you might have learned way
back when. So using a Taylor series approximation: e to the y equals the sum from l equals zero to infinity of y to the l-th power over l factorial. I'm just going to take the first two terms, so e to the y is approximately equal to one plus y. And if y is sort of small, e to the iy, which is e to the y raised to the ith power, is approximately equal to (1 + y) to the ith power. And that thing looks an awful lot like our earlier approximation equation except for the constant 10 in front of it. So we
could just put the 10 right there if we wanted to. So this makes sense.

So let's see how well the approximation does as I increase i. So here we have different values of x which
correspond to different values of i times h, which is 0.01 times i right there. So I can chose any x value I want
simply by incrementing i equals 1,2,3, which is what I'm doing here. I have x equals 0. X equals 0.01 etc, etc.
You can see that I'm building it up. Now I plug into my approximation or I can plug into my true answer,
because remember, I just happened to know that, I'm so smart. And I plug in, and here are the values that I
get. For the approximation I get, for f of zero, I get ten, for f of 0.01, this is the approximation with 10.20, 10.40,
10.61, etc., etc., up to f of 0.10, 12.19. And you can see, things are starting to actually exponentiate a bit. Now
my true answers follow on the last line. You can see they're, wow, they're really good. They're very, very close.
See, look at that, nearly perfect matches. Even with the error propagating a little bit, I get a pretty good approximation,
the true answer being 12.21, amazing. It's not bad at all, that is the conclusion.
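The fixed-increment recursion is a one-liner in code; here is a quick Python sketch (not from the lecture) reproducing the table above:

```python
import math

# Euler's method for  f'(x) = 2 f(x),  f(0) = 10,  step size h = 0.01:
#   f(x + h) ≈ (1 + 2h) f(x),   so   f(ih) ≈ 10 * (1 + 2h)**i
h, f = 0.01, 10.0
for i in range(10):              # step from x = 0 up to x = 0.10
    f = (1 + 2 * h) * f

exact = 10 * math.exp(2 * 0.10)  # true solution  f(x) = 10 e^{2x}

print(round(f, 2))      # 12.19
print(round(exact, 2))  # 12.21
```

Ten Euler steps give 12.19 versus the true 12.21, matching the last column of the table, with the small gap coming from the propagated truncation error.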

So here's the summary, we actually did pretty well with this stuff. This is a new module, I've got a bunch of new
goals, hand simulation. And in this lesson, I looked at a very easy example, there was no randomness here,
there will be coming up pretty soon. I just solved a nice, easy differential equation. Next time I'm going to show
how to do an integral without using any calculus.

Lesson 2: Monte Carlo Integration
In this lesson, we're going to look at Monte Carlo Integration, which is a very fun topic, people use it all the time
in fact.

Here's the overview for this lesson. Last time, remember, I solved, in quotes, a differential equation by hand
and I'm sorry to say that I didn't use any random numbers, but that's about to change. In this lesson, I'm going
to go kind of the other way, I'm going to do integrations, instead of diff eqs, I'm going to use random numbers.
This is actually a very, very important topic. People use it all the time when they can't solve an integral exactly,
so they're kinda forced to use this. This material is, in fact, used in several disciplines, ranging from physics, to
finance, to all sorts of other things.

So let's talk about integration. If you went through the Module 2 reviews on calculus, I went over almost these
very slides. But just to keep things self-contained, I'm going to go through them again. If you've seen these
before and you're an integration expert, go out and have a cup of coffee for a minute. Here's the old definition.
A function, capital F of (x) having derivative little f of (x), assuming all that stuff exist, is called the antiderivative.
The antiderivative is just denoted as capital F of (x) = that integral sign that we all know and love. And we've
seen this thing many times in our lives. Integral sign of f(x) dx and it's also called the indefinite integral of f(x).
The Fundamental Theorem of Calculus says that if f(x) is continuous, then the area under the curve for x
between say, points a and b, is denoted and given by the definite integral. Now we'll enhance the notation. This
just means the area from a to b, the integral from a to b of f(x) dx miraculously equals F(b)- F(a). That's just the
Fundamental Theorem of Calculus, we use this familiar notation in the middle here. F(x) evaluate the big thingy
from a to b, and this is a very, very important result. The only thing that's interesting here, is that I have this
fantastic joke in Module 2, definite means I'm really, really an integral. And if you didn't see Module 2, I urge
you to go back and see the wonderful jokes.

Okay, back to Monte Carlo Integration. Now I want to integrate some arbitrary function g of x from a to b. So
we'll assume it's continuous and well behaved and stuff, but it's sort of an arbitrary function.

𝐼 = ∫ₐᵇ 𝑔(𝑥) 𝑑𝑥 = (𝑏 − 𝑎) ∫₀¹ 𝑔(𝑎 + (𝑏 − 𝑎)𝑢) 𝑑𝑢

We'll call it I, the integral from a to b of g of x dx is I. Now, I'm totally lazy so instead of going from a to b I'm
going to just go from 0 to 1. So, that is accomplished by a very elementary substitution that we learned back in
calculus class. Equivalent interval is an integral from 0 to 1, of g(a+(b-a)u) du. Now what I did I just made an

108
elementary substitution, u=(x-a)/(b-a). So what I did, I changed the range of x from a to b, I changed it to u
between 0 and 1. There's a reason for that. And this is a well known substitution, that you would have learned
in class. If you don't believe me, well believe me, ‘cause this is true. So, this allows me to just concentrate my
efforts on numbers between 0 and 1, instead of going from a to b. It turns out, we've said this before,
even in the first module, that we can often do these types of integrals by analytical methods. My cat can
integrate the integral from 0 to 1 of u squared du. I mean, it's trivial. Or we can do numerical methods, like the
trapezoid rule, which some of you may have learned at high school, or certainly freshman calculus. Or you
could do something a little more exotic like Gauss-Laguerre integration, which comes out of a more
sophisticated numerical analysis class. But if these methods aren't possible, you can always use Monte Carlo
Simulation, which is the purpose of this lesson.

So here's how we're going to motivate it. Let's suppose that U1, U2, ... (not the rock group) are iid Unif(0,1) random
numbers. So these are things we can generate, I'll remind you how to do that a little bit later. Let's suppose that
these are iid uniform, so just random numbers between 0 and 1. I'm going to define this intermediate value I
sub little i as equal to (b − a) g(a + (b − a)Ui). Now that thing actually looks like the interior of the
standardized integral that we looked at, a couple of slides ago, remember? The integral from 0 to 1. This is the
interior of that thing, evaluated at the point U sub i instead of little u, you can go back and look at the slides if
you don't believe me. What I'm going to do, I'm going to use the sample average of all these i I's, for i going
from 1, 2 all the way up to the end. I'm going to use a sample average of those things as my estimator for the
integral i, remember i is the thing that I'm trying to integrate integral from a to b. Of g of x dx. So the sample
mean is I just, obviously, I take all of the I sub i's. I add them up, I average them. And then, if I just merely do a
substitution of the first line on this slide, I substitute in, this is the mess that you get. Okay now that looks a little
bit awful. But, why is this thing okay? So what I'm going to do, I'll appeal to our buddy, the Law of Large
Numbers we've looked at that before. And that just says, if an estimator is asymptotically unbiased and its
variance goes to zero, things are happy. So that's what we'll use. And it'll turn out that I bar N, the sample
average of the I sub i's, turns out that's a pretty good estimator for the integral I.

So, first of all, by the Law of the Unconscious Statistician, Lotus, I'm going to go through this proof here. I
usually don't do that, but you guys are a special class. The expected value of i-bar. Well, we just bring the
expected value inside the summation sign from the previous page. Since the Ui's are iid, the terms all have the same expected value, so the summation and the 1 over n simply cancel. So I just bring that expected value
inside. Then, by the law of the unconscious statistician, horrible as this term may look, I just plug it in right here
and then I multiply by the pdf right there. That's the pdf of u, that's just simply the law of the unconscious
statistician. Now the pdf of u is just 1, because f of u is uniformed. So let's see, this next line, all I did here,

109
missing the 1 there, we don't care cuz it's equal to 1. And then, if you go back to the previous slide, you will
notice that this integral is just I. Amazing: all that work and we end up with the expected value of I bar n equals I. Fantastic. So that means that I bar n is unbiased for I. The expected value is equal to exactly what you hope for. In addition, the variance of I bar n is O(1/n). What that means is that it decreases to 0 as n gets big; that's what that notation means. So the variance of I bar n goes to 0 as n gets big. So, the law of large numbers implies that I bar n goes to I as n goes to infinity, which is exactly what we want. That means that I bar
n is a great estimator for I. Very good.

It also turns out that we can get an approximate confidence interval for I, I'll just tell you what it is. By the
central limit theorem, we know that I bar n is approximately normal, because I bar n is a sample mean. The
mean and the variance are I and variance of I sub i over n, that's just by definition. We've seen this result
many, many times. And so, a reasonable 100(1 - alpha)% confidence interval for I is just this thing, which we saw back in the probability and statistics review: the sample mean, plus or minus the normal quantile, times the square root of the sample variance over n. I could easily substitute a t distribution quantile here, but like I said, this is a reasonable approximate confidence interval for I. Here z is the standard normal quantile (I could have put a t there if I wanted to), and S_I squared is merely the sample variance of the I_i's. That is a perfectly good, reasonable confidence interval for the area under the curve.
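A sketch of that confidence interval computation, again in Python rather than the lecture's Excel (the function name and the default z = 1.96 for a 95% interval are my choices):

```python
import math
import random

def mc_confidence_interval(g, a, b, n, z=1.96):
    """Approximate confidence interval for the integral of g over [a, b]:
    I_bar_n +/- z * sqrt(S_I^2 / n), with S_I^2 the sample variance
    of the individual estimates I_i."""
    samples = [(b - a) * g(a + (b - a) * random.random()) for _ in range(n)]
    mean = sum(samples) / n
    s2 = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance S_I^2
    half_width = z * math.sqrt(s2 / n)
    return mean - half_width, mean + half_width
```

With a large n, the interval should be tight around the true integral most of the time.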

Okay, here's an example. We've actually looked at this thing before, and I'll show you a little version of this in a
minute. Let's suppose that I is our friend, the integral from 0 to 1 of sine of (pi x) dx. I could write (pi u) du here if I wanted to; I chose u because it goes between 0 and 1. Let's pretend that I don't know that the actual answer from calc class is 2 over pi, which is approximately 0.6366. I'm going to take n = 4 uniforms. n = 4 is
way too small, but for purposes of this example, let's do that. n = 4 uniforms. It really is too small, I have just
made these guys up: 0.79, 0.11, 0.68, 0.31. Those are my uniforms. Now, I define I sub i by just copying the equation from the previous pages, (b - a) times g of that expression; a is 0 here and b is 1, which I chose on purpose, so when the smoke clears I just get g of U sub i. And g is sin of pi x, so with x equal to U sub i, this is just sin of pi times U sub i. Now I plug this into my sample mean, the sample mean of the I sub i's.
Let's add them up. Divide by 4. I plug in here. When I plug in the U sub i's I get 0.656. That's my answer with n
equals 4. And that's actually close to 2 over pi. Which is 0.6366. And I have to say, I cheated, I chose these
uniforms, so that we would get a good answer, see how I kinda spread them out? So we got lucky, and in fact, I
cheated. Don't hold that against me though.
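You can check the arithmetic of this n = 4 example directly; here is a sketch using the four hand-picked uniforms from above:

```python
import math

uniforms = [0.79, 0.11, 0.68, 0.31]  # the four hand-picked uniforms

# With a = 0 and b = 1, each I_i reduces to g(U_i) = sin(pi * U_i).
estimates = [math.sin(math.pi * u) for u in uniforms]

i_bar = sum(estimates) / len(estimates)
print(round(i_bar, 3))  # 0.656, close to the true 2/pi = 0.6366
```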

Now, in addition, the approximate 95% confidence interval for I from equation three, you'll just have to believe
me on this when I do the plug and chug: there's my I bar n, plus or minus my z alpha over 2, which is 1.96, times the square root of S_I squared divided by n. It turns out the confidence interval is 0.596 to 0.716.
That's kind of a fat confidence interval, but the fact is it's only based on four observations, four observations is
not enough. And, in fact, I'll show you in a minute that we will do quite a bit better as n gets big, though
sometimes the convergence is choppy, due to good or bad luck, or if I've decided to cheat or not.

First of all, I claim that you can integrate anything reasonable without having to do all that annoying calculus
stuff. And I mean, you didn't even have to go to calculus class that day. Next time, I'm going to celebrate our success by making some delicious pi, using random numbers as our main ingredient.

Lesson 3: Monte Carlo Integration Demo
So, I'm going to look at my old demo right now, that displays Monte Carlo integration. So, let's take a look at
that.
What I've done here is I've revved up my old Monte Carlo integration program for the integral of sine of (pi x) dx from 0 to 1. I'm only going to choose four points here, and I picked the seed just out of my head; it doesn't really matter. So here are four randomly spaced rectangles, cuz that's what's going on here. Here's the height, g of U sub i, and I just take the areas of the four rectangles, add them up, divide by 4. And it's a little hard to
see here, you get 0.8665, that's my answer. That is a terrible answer, but that's because I'm only using four
observations.
If I change the seed a little bit, let's do this, I just changed the seed and I'm going to re-estimate the integral.
There we go, I've got four different rectangles and then these are really badly spaced. I look at this, my
answer's 0.2212, which is, again, awful. Let's make the approximation a little bit finer and do, say, 128 points.
Let's see how I do. Yeah, that looks a lot better. My answer's 0.6397, that looks quite a bit better. So what I'd
like to do is let's show you how you would do this in a spreadsheet cuz you'll be expected to do this at some
point. So I'm going to go over to Excel. This spreadsheet displays what I need to integrate sine of (pi u) du from 0 to 1. Column A shows that I'm merely generating a bunch of numbers between 0 and 1. Cell A8 is equal to =RAND(), and that's the Excel command for generating a Uniform(0,1). See how that works? It's generating a new uniform. Now, column B holds my g of U sub i expression in the corresponding rows. So I'm reading off g of the first uniform, g of the second uniform. And what I do now, and this is the grand finale: the answer here in cell C8 is the sample
average of the first four observations from B8 to B11. So I'm only basing these on four observations. And it
turns out I got 0.6057, not a bad answer. If I just make one little change to the program here, you can see it
recalculates the random variables. Now I get 0.7375, not so good. So what I'm going to do here, let's just cut to
the chase here. It turns out I generated about 2,000 of these guys so let's go to B2007. So I'm going to make a
much bigger average now cuz I generated so many more. And look at this, look at that answer, 0.6361. That's
a fantastic answer, the true answer being 0.6366. If I run this for another set of random variables I get 0.6323,
also a fantastic answer. Okay that's fine I knew the right answer for this one. Just out of curiosity, let's see now,
if I can integrate the natural log of u, times the natural log of 1- u. Now it turns out, I've integrated this thing
exactly in the past. And the answer turns out to be 2 minus pi squared over 6. I have no idea how I integrated
that thing. I tried for a couple of hours before I prepared this lesson, I completely forgot how I did it. I mean, I
think I must have done a bunch of Taylor series approximations, but holy smokes this is a tough integral. I can't
do anything but Monte Carlo integration. And what I'm going to do is I'll concentrate on this random Riemann
sum, which is what we're doing here. Let's not worry about this stuff on the right hand side. So, I generate my
uniforms, just like before, 2,000 of them. I evaluate the uniforms inside of natural log of u times natural log of 1-
u. See, here's the command in Excel, very, very easy. And then I take the average of all 2,000 of them and I
get 0.3548, and what do you know? It's perfectly close to this true answer of 0.3551. So the darn thing actually
works, absolutely amazing.
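The spreadsheet experiment is easy to replicate in code; here's a sketch of the ln(u) times ln(1 - u) run with 2,000 uniforms (Python instead of Excel, and the seed is an arbitrary choice):

```python
import math
import random

random.seed(12345)  # arbitrary seed; any seed should give a similar answer

n = 2000
total = 0.0
for _ in range(n):
    u = random.random()
    total += math.log(u) * math.log(1.0 - u)  # g(U_i)

estimate = total / n
true_value = 2.0 - math.pi ** 2 / 6.0  # the exact answer quoted in the lecture
print(estimate, true_value)            # roughly 0.355 vs. 0.3551
```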

Lesson 4: Making Some Pi
In this lesson, we'll continue our hand and spreadsheet simulations, and I'll be making some pi, as I promised. We've done this a couple of times already in the past; now we'll go through it in a little more detail.

Here's the lesson overview. Last time I talked a bit about Monte Carlo integration with some random numbers
and I showed a couple of examples that illustrated how easy it is to use on, actually, pretty complicated
problems. This time I'm going to do a very simple Monte Carlo example to show how to estimate pi. And this is
a little bit different than Monte Carlo integration, but it kinda uses the same general techniques. And you may recall Buffon's needle problem, which we talked about a couple of times in the past.

Here we are, let's go through the example very quickly. Consider a unit square, so the unit square is area one,
of course. And what I'm going to do I'll inscribe in the square a circle with radius 1/2, so that means that the
area is pi r squared, pi over four, pi times the half squared. And what I'm going to do is toss some darts
randomly at the square. So you got the circle inside the square, the circle has area pi over four, the square has
area one. And so the probability that a particular dart lands in the inscribed circle is pi over 4; that's just the ratio of the two areas. I'll remind you with a picture in a second. We can use this fact to
estimate pi. So, what I'll do is I'm going to toss n such darts into the square, I'll calculate the proportion of those
darts that land in the circle. And so, that means an estimate for pi is going to be four times that proportion.
That's basically solving for pi if I look at the probabilities. My estimate for pi, call it pi hat sub n, equals four times the observed proportion, and that thing converges to pi as n gets big, per the Law of Large Numbers. Here is an example. Suppose that we throw n equals 500 darts, which is a pretty good number; we'll
throw that at the square, and let's suppose that 397 of them land in the circle. Then pi n hat is going to be the
ratio or the proportion 397 over 500 times 4, and you'd get 0.794. Whoops, that's a proportion. I have to
multiply that by 4, I get 3.176. Not so bad. Again, this is sort of a UGA value of pi, so I guess they're perfectly
happy to use that.

I'll look at a Monte Carlo Simulation in a second. Here's what we'll be seeing. First of all, you can see that I'm
throwing these darts in the square here, there's my square. Some of them are landing in the circle, and the
proportion that land in the circle, multiplied by 4, gives us our answer for pi; in this case, it's 3.176. And
that's the number right here at the end. What's going on as we move from left to right in this graph is that every
time I throw another dart I update the answer and you can see what is going on. It's got this tremendous
variability here. But as we include more and more observations into our Monte Carlo sample, we get much
better convergence, and I'll go through the example now so you can see.

So let's replicate this Monte Carlo example. I'll show you graphically and dynamically how this works. We'll run
this again with a different seed for 500 random numbers from my software. Let's start, you can see how fast
that was. So it looks like in this example, my answer was 3.165, you can see how the thing appears to be
converging. And just like we did in first class, look at that, 599 not 500. Let's do, I don't know a couple of
hundred thousand. We'll do 200,000 observations. I'll change the seed, just because I can. And here we go. So
now I'm hoping that the Law of Large Numbers comes into play. I'm stalling, I'm stalling, I'm stalling while the
thing is running here. This is really converging quite nicely. Look at that, we're almost done. It's really filled in
the square and the circle very well. And my answer is 3.136. Not too bad. And if I had done more sampling, my
answer would have slowly but surely picked up even another correct digit.

The question remains is, how do I actually conduct such an experiment? How do I do this type of Monte Carlo
experiment? It's actually pretty easy. To simulate the dart toss, let's suppose that U1 and U2 (again, not the rock group) are both iid uniform. Then the ordered pair (U1, U2) represents the random position of the dart on the unit square. So the dart lands in the circle if this
equation is satisfied. (U1- half) squared + (U2- half) squared is less than or equal to a quarter. That's just the
equation for the circle and its interior. And if U1 and U2 are such that this is satisfied, that means that my toss fell inside the circle. So what I'm going to do is generate n of these pairs of uniforms and count up how many of them fall in the circle, i.e., satisfy this inequality. If, say, 750 out of 1,000 satisfy the inequality, then my proportion is 0.75; I multiply by 4 and get my estimate pi hat sub n.
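Those steps translate directly into code; here's a sketch (the sample size and seed are arbitrary, and `estimate_pi` is my own name):

```python
import random

def estimate_pi(n, seed=None):
    """Estimate pi by tossing n darts at the unit square.

    A dart (U1, U2) lands in the inscribed circle of radius 1/2 when
    (U1 - 1/2)^2 + (U2 - 1/2)^2 <= 1/4, and pi_hat is 4 times the
    proportion of darts that do.
    """
    rng = random.Random(seed)
    in_circle = 0
    for _ in range(n):
        u1, u2 = rng.random(), rng.random()
        if (u1 - 0.5) ** 2 + (u2 - 0.5) ** 2 <= 0.25:
            in_circle += 1
    return 4.0 * in_circle / n
```

With a couple hundred thousand darts, the answer settles in near 3.14, just as in the demo.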

So here's the summary. We made some delicioso pi, we used random numbers as our main ingredient. No
surprise there. And next time, I'm going to simulate how we queue up at the single server bakery that was nice
enough to make pi for us. Now what I'm going to do, I'm going to go back for a second and look at a really old
quiz that I forgot to give you the answer to. I want to know, what's the volume of a pizza with radius z and
height a? Now, you may remember from geometry or whatever class you took back in high school that a pizza is the same thing as a squished cylinder. The cylinder has base area pi r squared, so in this case, it's pi z squared. And you multiply that by the height to get the volume. So here we go, the answer: it's pi times z squared times a. P-i, z, z, a: pizza!

Lesson 5: A Single-Server Queue
In this lesson, we're going to be looking at some properties of the single server queue. Here's the overview.
Last time, we enjoyed some pie. I'm sure you relished it. And this time, I'm going to simulate the line that forms
in front of the single server at this pie bakery and this is pretty much our first real simulation model. In other
words, one that involves sort of non-static customers, moving around, arriving, getting served. This is the first
one I would call discrete event simulation. This will be something that you should pay attention to.

Here's what we've got going. Customers arrive at a single-server queue. The iid interarrival times (times between arrivals) and the iid service times are generated by some random structure. We'll talk about that in a little while. Customers have to wait in a FIFO line if the server is busy. And FIFO
just means, of course, first-in, first-out. They wait in a line. They move up one at a time in order as the server
gets done processing people. So, what I'm going to do is estimate certain performance measures that we would be interested in: for instance, customer waiting time, expected number of people in the system, and server utilization. Server utilization is just the proportion of time that the single server is busy. And in fact, these are all things of interest, because the customer certainly cares about his waiting time, and the owner
of the store probably cares about the expected number of people in the system, because he or she may not
have a lot of room in the system. And also the owner of the store is going to be concerned about server
utilization, because you want to keep that server busy, but maybe not too busy. Anyway, here's sort of a
notation fiesta now.
We're going to define the interarrival time between customers as capital I_i. Now, this is the second use recently of capital I_i; for this lesson, capital I_i means the interarrival time, the time between the arrivals of customers i-1 and i. Customer i's actual arrival time, A_i, is just the summation of all the interarrival times up through customer i; just add them up. You'll see when we do a numerical example. So, customer i is going to start service at... well, pay attention here, but don't worry, because we haven't quite defined D_{i-1} yet; I'll just walk you through it in English. He's going to start service at the maximum of his arrival time (obviously, he can't start service until he gets there) and the time the guy ahead of him leaves, and that's what D_{i-1} is, the departure time of the previous customer. So our guy, customer i, can't start service until both he arrives, at A_i, and the previous guy leaves, at D_{i-1}; his start-of-service time is T_i = max(A_i, D_{i-1}). Customer i's waiting time in the queue we'll define with this Christmas tree notation, W_i^Q = T_i - A_i; in other words, the time that he begins service minus the time that he arrives. That's how long he waits in line. The time in the system, W_i without the Q, is the time that he leaves minus the time that he arrives, W_i = D_i - A_i. Customer i's service time is S_i; we will generate this randomly, and we'll also generate the interarrival times randomly. And his departure time, the last thing we'll need in the simulation, is D_i = T_i + S_i; the time that he begins service plus the service time is when he leaves. Hey, very easy.
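The recursions above can be sketched in a few lines of Python (an illustration of the bookkeeping, not part of the course materials):

```python
def fifo_queue(interarrivals, services):
    """Single-server FIFO queue bookkeeping.

    Returns one (A_i, T_i, W_i^Q, D_i) tuple per customer: arrival,
    start of service, wait in queue, and departure times.
    """
    rows = []
    arrival = 0.0
    prev_departure = 0.0
    for inter, service in zip(interarrivals, services):
        arrival += inter                      # A_i = A_{i-1} + I_i
        start = max(arrival, prev_departure)  # T_i = max(A_i, D_{i-1})
        wait = start - arrival                # W_i^Q = T_i - A_i
        departure = start + service           # D_i = T_i + S_i
        rows.append((arrival, start, wait, departure))
        prev_departure = departure
    return rows
```

With interarrival times 3, 1, 2 and service times 7, 6, 4 (the first three customers of the example below), this reproduces arrivals 3, 4, 6, waits 0, 6, 10, and departures 10, 16, 20.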

Let's do an example and I'll walk you through this. This'll be standard. You're going to get good at this very
quickly. So, suppose we have the following sequence of events. It looks like a big mess here, but we know all this notation pretty well by now:
● i is the customer number.
● I_i is the interarrival time.
● A_i is the arrival time.
● T_i is the start-of-service time.
● W_i^Q is the waiting time in queue.
● S_i is the service time.
● D_i is the time at which customer i leaves.

Let's walk through a couple of lines, and I'll just go blah, blah, blah after a while.
1. So, customer number one. His interarrival time is three; assuming that we start the simulation at time zero, which is logical, he shows up at time three. If he shows up at time three, there's nobody in line in front of him, so he gets served at time three. His waiting time is zero. He didn't have to wait in line, luckily. Good
for him. And his service time, I randomly generate a seven. It turns out I'm using eight-sided dice to do this.
And so, he came up with a seven. And, so that means he started service at time three, the service time is
seven. He leaves at time ten. You just add the start service plus service time. So, he's out of there at time ten.
2. Meanwhile, customer two has an interarrival time of one. So, he shows up at 3 + 1 = 4. He shows up
at time 4 for customer two. Customer one's still getting served until time ten. So customer two doesn't get
served himself until customer one leaves, remember? Customer two's start-of-service time is going to be the maximum of his arrival time and when the previous guy leaves. In this case, it's going to be time ten. So this
poor guy has to wait in line 10 - 4 = 6. His service time by coincidence is six. And so, his departure time is the
start service time 10 plus the service time equals 16.
Let's do one more of these, because I am getting bored.
3. So, the third guy shows up. Customer three, he shows up at time six. Because his interarrival time was
two. I add that to the previous guy's arrival time. That's where the six comes from. Poor guy has to wait until
time 16, because that's when customer 2 leaves. His waiting time is 16 minus 6 equals 10 and that means that
he's got a longer waiting time than the previous guys, too bad for him. And service time is 4 and he gets to
depart at 16 plus 4 equals 20.
And you can follow dot, dot, dot, the other guys yourselves. So very, very nice. You can see that this is very
simple to do. You can do it in Excel very easily. Now, one thing we can compute very quickly: let's look at this column right here. I take these 6 waiting times, 0, 6, 10, 10, 11, and 7, and average them up; the average waiting time for the 6 customers turns out to be 7.33. Don't know if that's particularly good or bad, but I prefer not
to wait that long myself.

So how do we get the average number of people in the system, which is another thing I want? That's the number in line plus the number of people in service. The way to do that is to note that arrivals and departures are the only possible times at which the number of people in the system, which we'll call L(t), the number in the system at time t, can change. So, the only times that something can change are when you have an arrival and when you have a departure. These times and the associated occurrences, the arrivals and departures, are called events. That's why we call the topic discrete-event simulation, because the only things we care about are events. We'll talk much more about this in Module 4.

So, what I'm going to do here is I'm going to go through every single event that occurs in this simulation. I won't
really go through every one of them. I'll give you a flavor of what's going on. So I'll go through each time an
event occurs and what the event is and how many people happen to be in the system at that time, cuz those
are the only times that L(t) can change. So the first event, this is sort of a technicality. But the first event occurs
at time zero and that is the simulation begins. Obviously, there is nobody in the system at time zero. So, L(0)
equals 0. The first time anything interesting happens is when customer one arrives. You'll recall from a couple
slides ago that happens at time three. He arrived at time three. And so at that point, starting at time three,
there's exactly one customer in the system. So, L of 3 equals 1. L of 3.01 equals 1. Nothing changes until time
four when customer two arrives. Customer two arrives at time four. There's somebody already being served, so
customer two has to go in line. So there's one guy in the line, one guy being served. So, L of 4 now jumps up
to equal 2. So, there you go. There's two people in the system at time four. The next thing that happens is a
guy shows up at time six. Customer three arrives at time six. He's even in a worse situation, because what's
happened to him is that he's got to wait behind customer two, who hasn't been served yet. So, there are three people in the system. Now, because of the discrete nature of this example that I made up out of my head, it turns out if
you go back a couple pages at time ten, two events happen simultaneously. I actually like talking about this.
So, customer one departs. But at the same time, customer four arrives. Usually we treat departures first, cuz we just want to clean people out of the system. So, customer one departs and customer four arrives simultaneously. The net effect is that the number of people in the system stays at three. Now in real life,
customers usually don't depart and arrive really, really simultaneously. Because arrival times and departure
times are not integers, usually. But for purposes of this example, we'll assume that two things happen
simultaneously at time ten. So you can see, I'm just going to go blah, blah, blah now. And eventually, the
system clears out at time 29 when customer 6 departs and there's nobody in the system any longer. Now this
is tedious to do, but the nice thing is that every simulation language will do this automatically for you and we'll
talk about that in great detail later on.

So, let's illustrate this graphically. So this is the last page, but I've transposed this into a picture. So you can
see, here's time progressing along the x-axis, the t-axis. Here are all the event times, three, four, six. I'll explain
them in a minute. Here's what I'm denoting: the queue kinda works itself down, and the server's kinda right here. So the y-axis is L(t), the number of people in the system at time t, and the queue is anything above one, two, etc. In-service at the bottom just means that anybody at the bottom there is being served. So,
let's follow along. At time three, customer one shows up. So there he is, he's being served. At time four,
customer two shows up. There he is. Customer one's still being served. See how that works? At time six, customer three shows up. Customer two is next in line; he's not being served yet. Customer one is currently still being served. At time ten, remember, we have the
simultaneous events. Customer one leaves and customer four shows up. Customer one leaves. Therefore,
customer two starts getting served. Customer three is next in line. Customer four is next in line. So you could
see what's happening here and the system progresses from left to right, and this is basically a graphical
illustration of the history of what's going on in this simulation. So after all that work, the average number in the system is, by the definition of the average of the function L(t), merely 1 over 29 times the integral of L(t) from 0 to 29, because 29 is when the simulation ends. I take that integral and it turns out to equal 70 over 29. Now, how do you take such an integral? Well, it turns out this thing is a step function, step, step, step, and step functions are incredibly easy to integrate; don't think too much about this. You just add up the rectangles. It's so easy. So, you can add up the rectangles individually, or, what I prefer to do, you can add up horizontal slabs; you can do this in your head very easily. Horizontal slabs, vertical slabs, I don't care, just add up the rectangles. That's how you take the integral of a step function.

Another way to get the average number of people in the system is to calculate the total person-time in the system divided by the total time. The time person i spends in the system is D_i minus A_i; I think I called that W_i before. So, just add up the W_i's and divide by 29. I will do this for you, and it turns out you get 70 over 29. Very, very easy. Finally, remember, I'm also interested in the estimated server utilization. You can see from the picture the proportion of time that the server is busy, in other words, that he's serving somebody; it turns out he was busy 26 out of the 29 time units. That was quite easy.
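The person-time shortcut is a one-liner in code; here's a sketch using just the first three customers of the example, so it gives 33/20 over a 20-unit horizon rather than the full 70/29:

```python
def avg_number_in_system(arrivals, departures, total_time):
    """Time-average number in system: total person-time over total time.

    Customer i contributes D_i - A_i units of person-time.
    """
    person_time = sum(d - a for a, d in zip(arrivals, departures))
    return person_time / total_time

# First three customers of the FIFO example: A = 3, 4, 6 and D = 10, 16, 20.
print(avg_number_in_system([3, 4, 6], [10, 16, 20], 20))  # 33/20 = 1.65
```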

Now let's do another example where I run sort of the same events, the same arrivals and the same service times. Well, I kind of lie; they're not exactly the same events, because I might have different departures. What I'm going to do here is last-in, first-out. Not first-in, first-out. Last-in, first-out. This is kind
of like you go to the grocery store and go to the cereal area. The stuff that's in front is probably the most
recently inserted cereal box, and the stuff way in the back is the cereal from 1964, which, you know, still has real sugar in it, I guess, but it's probably disgusting. An example of first-in, first-out might be the milk products
which are pushed in from the back. But in any case, last-in, first-out is sometimes a reasonable service
structure. That's sometimes called LIFO. So I'm going to do the same arrival times and the same service time,
but actually the events are going to change a little bit. The departure times are going to change. So, here we
are. I'm using the same arrival times as before. If you go back to the old example, there they are. Exactly the
same. The same service times as before, but what I'm going to do is I'm going to do last-in and first-out. So, let
me run through this for a second. Customer one shows up at time three. He starts service immediately, doesn't have to wait. His service time is seven, and he's out of there at time ten. That's just like before. Customer two, he's a happy kinda guy, cuz he's the second customer. He shows up at time four. Goody, goody, it's my turn. Not so fast, because customer three shows up at time six, so he gets to go ahead of customer two. He's a happy camper, cuz he's next. Not so fast. Customer four shows up at time ten, and we're processing departures first. And so, that means he gets to go ahead of customer three, who gets to go ahead of customer two. Well, blah, blah, blah. It turns out that he's out of there at time 16.
Meanwhile, customer five showed up. He's out of there after 17. Then finally, it's customer three's turn.
Meanwhile, customer six shows up. And poor customer two has got to wait all the way until time 29, because of the last-in, first-out discipline. I urge you to go through all the details of this yourself, but that's the way it goes. It turns out
that the average waiting time for the 6 customers is 5.33. So, it actually decreased a bit. The average number
of people in the system turns out to be 58 over 29. That decreased a bit. So that's a little bit better than FIFO
and that's pretty much, because we cleared out people as soon as we could and it just turns out that's a lucky
coincidence.

Well, here's a summary of what we did. This is actually a lot of fun doing these examples by hand. We
simulated a very, very simple single server queuing system and we showed how to collect certain important
performance statistics. I even did a LIFO version on top of the FIFO version. Next time, I'm going to do a more
complicated example involving an inventory problem. And these are very, very useful. This will be more complicated, almost to the point where we can't do it by hand. This one we'll be able to do, but it'll be the last one for a while.

Lesson 6: An sS Inventory System
Hi everybody, so now we're going to be continuing module three which is on hand and spreadsheet
simulations. In this lesson we'll be going over what's called an (s,S) inventory system and we'll be simulating
that.

Here's the overview for this lesson. Last time you might remember that we simulated this really easy, single
server queue, and we estimated things like expected waiting time and utilizations and number of people in the
queue. In this lesson we're going to be doing something a little bit more challenging. What's known as an (s,S)
inventory system. And after I'm done with this, you're going to start to get an idea of why we don't want to do these things by hand. I mean, they just start to get a little bit more difficult.

Anyway, here's a description of an (s,S) inventory system. There are a lot of different inventory policies that people
can use but this is sort of one of the easier ones that people start out with.
● So let's suppose that a store sells some product at d dollars per unit, every time it sells something it's d
dollars.
● The inventory policy is to have at least little s units in stock at the start of each day.
● Now if the stock slips to less than little s by the end of the day, what we do is we place an order with our
supplier that pushes the stock back up to big S by the beginning of the next day.
So, what happens, suppose you drop below little s, we place an order at the end of the day and then maybe
FedEx or one of these freight shipment companies gets this stuff to us by the next morning miraculously, so
that we start out the next sales day with big S units in stock.
● Now there's also various costs associated with this policy.
○ One is, well, it costs us money to order things; the more times we order, the more annoying we get to people, so there's a cost just to place an order.
○ There's going to be a cost associated with the number of items we order,
○ there's going to be a penalty cost if we fail to satisfy customer demand, and there will also be
○ a holding cost to prevent us from having too much stock in our system at any point. So we don't want to keep too much around because that incurs a cost as well.
We'll talk about those as we go.

Now we're going to have a little notation festival. And this is kind of interesting: for the third lesson in a row, I'm going to be using the notation I_i in a different way. This is going to be the inventory that we have at the store at the end of day i. I'm going to let Z_i denote the amount that we order at the end of the day. And don't forget, the goal is to order up to big S if the inventory is less than little s at the end of day i. So if the inventory is less than little s, we order up to big S; in other words, we order capital S minus the current inventory at the end of day i. That's going to be our order that we place at the end of day i, Z_i. So, if an order is indeed placed (that is, if Z_i is positive) to the supplier at the end of the day, it's going to cost the store K + c·Z_i. Now let's think about that.
● The K is the cost that I incurred just for calling up my supplier and nagging them to make the order.
That's a fixed cost every time I place an order I pay K.

● In addition, I pay c times Z_i; c is the unit cost. So if I order 17 items, I'm going to get charged c times 17.

● In addition, it costs $h per unit for me to store any unsold inventory overnight. Because it costs money
to maintain the space in the store, opportunity costs, I could be using the money for each item for something
else, if I wasn't storing it in the store. So there's this holding cost of $h per unit for the store to hold unsold
inventory.
● Then there's also a penalty cost of $p per unit. So if I can't satisfy customer demand, they get mad and
maybe they don't come back to the store, or they do damage to the store cuz they're angry. But if that demand
can't be met, I incur a penalty cost of p dollars per unit.
In this very simple example, because this is the first one that we're doing, I'm not going to allow backlogging.
So, if the supply isn't there when a customer comes in, too bad, it's out of there, very angry customer. Finally,

the only really random thing in this example is the demand on day i. That's going to be denoted by D_i, and that's a random variable; I'll just make one up when it comes time to do the simulation.

Let's look at the total amount of money that I'm going to make. Obviously, in English, the total amount of money
is the money from sales, minus the ordering cost, minus the holding cost, minus the penalty cost. So
that's, in English, how much money I'm going to make on each day. Let's translate that to math a little bit. This
looks awful, but don't worry about it, let's run it through in English a little bit. So sales, well, every time I make a
sale, I make d dollars. However, I may not have enough stuff to satisfy customer demand. So let's take a look
at this.

● The sales are going to be the minimum of the customer demand and how much inventory I have at the
beginning of the day. So if the customer demand is 15, but I've only got 10 units hanging around, then I'm only
going to make d times 10 dollars on the sales. We'll translate that into more math in a minute.
● Now, this next term minus the ordering cost. Well, we just said that I'm not even going to place an order
if the inventory at the end of the day is greater than or equal to s. If the inventory at the end of the day is less
than s, I place an order. And we've already said that the order cost is K + c·Z_i. That's the fixed cost K plus Z_i, the number of items that I order, times c, the unit cost.

● Then I subtract off the holding cost, h times the amount of inventory that I have at the end of the day,
and I also subtract off a penalty cost P times this complicated term, let's look at it in English. So, let's look at
the last term, first of all. Di, the demand minus the inventory. So, if the demand is 5 and the inventory is 10,
then Di minus the inventory is a negative number, and the maximum of zero and a negative number is zero. So
there's no penalty cost, however, what if the demand is 10, and the inventory's only 5. Then, I'm going to mess
up 5 customers and so I'm going to have to pay a penalty for that p times 5, so that's my penalty cost.
Okay, let's finish translating this little fella into math. It looks pretty much the same, except I've just got this extra
term right here. So let's go through this very quickly.
● The total amount of money I make is d, the amount of money per sale, times the minimum of the demand and the beginning inventory. And this second term here, I_{i-1} + Z_{i-1}, that must be the inventory at the beginning of day i, if I've translated from the English correctly. And in fact, I have, because the inventory at the beginning of day i is the inventory at the end of the previous day plus the amount of stuff that I ordered. So this is a pretty straightforward equation, and you can see that this first term encompasses the sales that we make on day i.
● Now, this next term, same as before, that's the ordering cost,
● The next term, holding cost.
● Finally, this last term, by similar reasoning, is going to be the penalty cost that you pay for not having enough stuff in stock.
Let's do an example, and this'll be a real simple example. I just made these things up out of my head. First of all, we'll let the amount of money that we make per sale, d, equal 10. The little s value, the point at which we start worrying about having to order again, is 3, and big S is 10. K = 2, that's the cost for placing an order; c, the unit cost for the order, is 4; h is the holding cost, we'll set that equal to 1; p is the penalty cost, we'll set that equal to 2. And then I just made up the following sequence of demands: on days 1 through 6 the demands are 5, 2, 8, 6, 2, 1. In all simulations we actually have to start the thing out, so let's just pretend that I'm fully loaded at the beginning of day one, so I've got an initial stock of I_0 + Z_0 = 10. So I'm just starting the simulation out with that stock.

Okay, now, I'm just going to run through a numerical example. And then I'll give you a short demo with these
data. So on day 1, remember that we've started things out with 10 guys in the system. Here are the demands
that I outlined on the last slide. These I just copied down; I'm not doing anything real genius, I just copied them down. So these are the only random things in the simulation, the only random inputs. And now, let's walk through a couple of these fellows.

1. So on day 1, I start with 10 guys, I have a demand of 5, therefore my inventory at the end of the day, I_1, is 5: 10 minus 5 equals 5. Because that is not less than little s, which is 3, I place no order. Therefore, my sales revenue: I've made five sales at $10 each, $50; didn't place an order, $0. I have a holding cost of five times one, since five items are left at the end of the day, at $1 per each fella, so I subtract off $5. The penalty cost for not having enough in stock is zero because I satisfied all the demand. And you can see that the total revenue is sales minus ordering cost minus holding cost minus penalty cost, and that is 45.
2. Let's run through one more day and you can see that at the beginning of day 2 I have 5 items in the
system, my demand is two. My inventory at the end of the day is five minus two equals three. That number is
not less than three, so again no order. And when the smoke clears, I have a total revenue of seventeen. I think
I will do one more day.
3. On day three, I start with three items in the system. I have a demand of 8, since I only have three items
in the system, I sell them all and I have five angry customers because I failed to satisfy their demand. And so
the inventory in any case at the end of the day is 0, there's no backlog so the inventory's 0. I have to order up
to 10, so I order all 10. My sales revenue, well, on the good side I made 30 bucks. On the bad side, I have to
order, so k plus c times Zi turns out to be $42, that's the price of doing business. Luckily, there's nothing in the
system at the end of the day, so there's no holding cost. But I messed up five customers, so 5 times a $2
penalty cost is 10 bucks. So when the smoke clears on day three, I have sales revenue of 30 minus 42 minus
10 and that means I've actually lost $22.00 at the end of day 3. Too bad for me.
And you can go, dot dot dot, follow along to your heart's content, those are the next three days. At this point
I've prepared a nice demo to give additional insight into this example. Let's go and have a look.
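The day-by-day bookkeeping above can be sketched in code. Here's a minimal Python sketch of the (s,S) logic using the example's parameters and the made-up demand sequence; the variable names are my own, not from the lecture.

```python
# Minimal sketch of the (s,S) inventory bookkeeping from the example.
d, s, S = 10, 3, 10      # price per unit sold; reorder point; order-up-to level
K, c, h, p = 2, 4, 1, 2  # fixed order cost; unit cost; holding cost; penalty cost

demands = [5, 2, 8, 6, 2, 1]
inv = 10                 # start fully loaded: I_0 + Z_0 = 10
total = 0

for i, D in enumerate(demands, start=1):
    begin = inv                       # inventory at the beginning of day i
    sales = d * min(D, begin)         # can't sell more than we have (no backlogging)
    end = max(0, begin - D)           # inventory at the end of day i
    unmet = max(0, D - begin)         # customers we failed to satisfy
    Z = (S - end) if end < s else 0   # order up to S if we slipped below little s
    order_cost = (K + c * Z) if Z > 0 else 0
    revenue = sales - order_cost - h * end - p * unmet
    total += revenue
    inv = end + Z                     # the order arrives by the next morning
    print(f"day {i}: revenue = {revenue}")

print("total =", total)
```

Days 1 through 3 reproduce the hand calculations above (revenues 45, 17, and -22).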

Lesson 7: An sS Inventory System Demo
So let's go over here to this Excel spreadsheet, which I'll supply to the class. What I've done over in this region, the first six rows, rows three to eight, is completely mimic the example that I just did. So these are exactly the same numbers that you saw before. I'm going to keep those same numbers and I'm going to randomly generate some other ones. So in the subsequent rows, row nine, ten, and however many there are, I'm generating different demands. You can see that I've randomly generated these demands. What are they? Let me just check: I'm generating RAND() times eight, so this is a uniform random number between zero and eight, and by using the CEILING function I'm essentially rounding up. So I'm doing an eight-sided die roll from one to eight, and these are the numbers here, you can see. And if you'd like, I'm just going to run through these just like we did before. We had a demand of one for day seven. The beginning stock, it turns out, was nine. At the end of the day, it's eight. I don't make an order. Blah blah blah, and it turns out I made $2 that day. But you can go on and on and on. And what I've done right here, this entry in cell O3, is simply the average; it looks like I've run this thing for about 1,775 days, and I took the average of the amount of money I made: a little over $19. Now if I recalculate the spreadsheet, the numbers change. So 19.15; let's do it again, 19.21. And you can see that with this many observations, it almost always gives me a number near 19.2. That's a very easy demo, and you can see you don't want to do 1,775 lines by hand. That's why we do the spreadsheet.

After having seen the demo, we can summarize this lesson. First of all, we simulated our first tricky simulation
model, this is an (s,S) inventory system. And in fact, we've looked at some of the easiest version of the thing
and even that was difficult to do by hand. I could have generalized it to accommodate backlogs or longer lead
times. They'll be lots and lots of generalizations that we can look at later on when we explore this model in
Arena. In any case, in the next lesson, you might have noticed that we've been looking at the different types of
brand and variables that we had to generate. That's what we're going to take a look at, how can we generate
random variables very easily? This will be a bit of a review that we undertook in module two, but I'll have an
additional demo that you can take a look at for more motivation. So this will be a fun lesson coming up, and
looking forward to seeing you then.

Lesson 8: Simulating Random Variables
In this lesson I'll be going over a simulation of some elementary random variables. In our overview you might
recall that in the last lesson we simulated an (s,S) inventory model. And this time I'm going to be simulating
certain random variables. And we encountered a couple different random variables in the last couple lessons
and now we'll go over how we could simulate those on a computer. Now it turns out that on the one hand this
is just a preview of way more sophisticated stuff coming up later, we're going to have a whole module on this
stuff. On the other hand, I'll note that much of the material in this lesson was covered back in the review
Module 2. So if you went over that module, you can fly through this one.

Our first example, one that we did in Module 2, was generating a Discrete Uniform random variable on the integers 1 through n. So all I want to do is an n-sided die toss, just like you would have at the Dungeons and Dragons convention. So I want each of the integers, one through n, to come up with probability 1/n. Here's how you do it: let's suppose I have at my disposal a uniform number between zero and one. So all you have to do, like I showed in the last lesson, is multiply that uniform number between zero and one by n, take the ceiling function, and you've got yourself an integer from one to n. Some random integer. Here's an example. If n = 10 and we sample a uniform between 0 and 1, let's suppose it's 0.73. Then here's my n-sided die toss: I multiply n times U, which gives me 7.3. I take the ceiling function, which allows me to round up, and I've just tossed myself an 8, fantastic.

Here's an example of a more interesting discrete random variable. So, I'm just going to define that, so I just
made this up. x can take on the values, -2, 3, and 4.2. Those are the only possible values for x. They have
probabilities 0.25, 0.10, and 0.65 respectively. I literally just made that thing up. It's my own random variable.
Now how do I simulate that random variable? It's not an n-sided die toss like the easier example from the last page, so I can't use a die toss to simulate this. What I'm going to do is use the inverse transform method, at least the discrete version of it, that we've mentioned several times in previous lessons. So here is what I do. I make a table of the different values, -2, 3, and 4.2, with their respective probabilities, 0.25, 0.10, 0.65. While I'm at it, let's compute the cdf values for those x values. The corresponding cdf values are 0.25, 0.35, and 1.0. See, that corresponds to the probability that X is less than or equal to little x. So if little x is -2, the probability that the random variable is less than or equal to -2 is simply 0.25. If x = 3, the probability that X is less than or equal to 3 equals the probability that X is -2 or 3, which is 0.35. Now the reason I do that is because I can associate these uniforms with the cdf. For x = -2, which occurs with probability 0.25: if I get a uniform between 0 and 0.25, which happens with probability 0.25, that means that x = -2. If my uniform is between 0.25 and 0.35, that corresponds to a probability of 0.1; lo and behold, that's x = 3. See how they match up? And finally, if the uniform is between 0.35 and 1.0, that corresponds to a probability of 0.65; lo and behold, that's x = 4.2. It's like we're using the inverse transform of the cdf, which is why it's called that. Here's an example: let's sample a uniform. The corresponding x value is, you could denote it by this awful notation, F⁻¹(U); that's just what it is. Let's do this numerical example. Suppose that U = 0.46, just randomly sampled. Where does that fall? That falls in the interval (0.35, 1.0], and that corresponds to x = 4.2, congratulations. You've just generated x = 4.2. If you did this for a million uniforms and took the inverse transform each time, you'd get a beautiful histogram corresponding to these original probabilities, quite nice.

Let's look at the inverse transform method for continuous distribution, continuous random variables. And we'll
talk about this in great detail when we do our module on random variant generation. Here's a theorem we've
already looked at maybe three times now. If X is a continuous random variable with cdf F(x), then the random
variable F(X). So I plug X into its own cdf. This is a monster that's random, so that thing is uniform. And we've
actually seen this now three times, maybe more. And this suggests a way that we can generate the random
variable X. So if capital F(X) is uniform, let's generate a uniform, there it is. And now let's work backwards.
Take the inverse, which you can do since X is continuous; the F inverse cancels out the F, matter and anti-matter just like in Star Trek. And you end up with X = F⁻¹(U). So if you give me a uniform and you plug it into the inverse, assuming you can find the inverse, then there you go, it gives you a value of X, fantastico. Here's an example. Suppose that X is exponential. Now, we've seen many times that the cdf of X is F(x) = 1 - e^(-λx). So now plug X into its own cdf. This looks like the cdf, but notice that there's this X there, and so it's a giant, awful random variable. Set it equal to U, just like the inverse transform theorem tells us to do, and solve for X. Here's what you get: X = -(1/λ) ln(1 - U). You plug in the uniform, it gives me an exponential, fantastic.
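Sketching that in Python (note that 1 - U has the same distribution as U, a shortcut the demo later in this module also mentions):

```python
import math
import random

def exponential_inverse_transform(lam, u=None):
    """Generate an Exponential(lam) variate via X = -(1/lam) * ln(1 - U)."""
    if u is None:
        u = random.random()
    return -math.log(1.0 - u) / lam

# Sanity check: the sample mean of Exp(1) variates should be close to 1/lam = 1.
n = 100_000
mean = sum(exponential_inverse_transform(1.0) for _ in range(n)) / n
print(round(mean, 2))  # should be near 1.0
```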

Now, the only thing that I haven't told you about is how do you get those uniforms? And again, we went over
this back in module two, but I want to keep this self-contained, so we'll do it one more time. How do you
generate uniforms? And what we'll do, as I've mentioned many times is that we're going to generate these
practically iid uniform random variables. They're really not exactly iid because they're coming from a deterministic algorithm, but we'll call them pseudorandom numbers. They're sort of independent, and for our purposes we can treat them as independent. Okay, if you're lazy, what do you want to do? Well, I'll show you in the demo in a minute that we can just use the RAND function in Excel, and that generates uniforms. Every single programming language has a way to generate iid uniforms, but in Excel you can use RAND. What I'm going to do now is give you a beautiful quick little algorithm that actually does this for yourself. You can do this in any language you want, from scratch. So these R's are going to be my deterministic numbers that appear to be iid uniform. The only thing you have to do is supply me with a starter integer, which is called a seed. Just pick one out of your head; 1234567 might be fine. Then calculate the following: X_i = 16807 X_{i-1} mod(2^31 - 1). Let's explain what those pieces are very quickly. This thing, 2^31 - 1, is a gigundo prime number. Mod is the modulus function, which we've talked about before. 16807 turns out to be 7^5, don't worry about that. But here's the key thing: if you give me a starter integer like X_0, I plug all that stuff into the equation and it gives me X_1. Then I plug X_1 into the same equation, and that gives me X_2, so I get this sequence of large integers. Finally, I set R_i, my pseudorandom number, to X_i divided by 2^31 - 1. And that guarantees that I'll get a number between 0 and 1. And in fact, this sequence of numbers between zero and one really does look iid uniform. It's a property of the 16807 generator.

And here's how you implement this in FORTRAN, which is of course an old language. But you can implement
this in any language at all using this code. Here's the function call. UNIF(IX). IX is the integer seed. Pick
whatever number you want. Calculate this integer, K1 equal to the integer seed IX divided by the magic
number 127773. In FORTRAN, and whatever language you use, this should be a division that truncates. So,
that's how it works in FORTRAN. Whatever language you're working in, you should also get an integer from truncation. So, in other words, in FORTRAN 5 divided by 3 equals 1. If you want to do real arithmetic, where you get 1.6667, you can do that, but not in this algorithm. This algorithm requires integer truncation: 5 divided by 3, integer divided by integer in FORTRAN, equals a truncated value of 1. You just have to know that. Then you set the new value of IX, which corresponds to X_i, equal to 16807 times X_{i-1}, minus a couple correction factors which magically work; we'll talk about that later on. If this number happens to be less than zero, add 2^31 - 1 to it. That will guarantee that you are now between 0 and 2^31 - 1. Then multiply by 1/(2^31 - 1), because multiplication is easier to do than division in most languages. This guarantees that the resulting output, UNIF, is a number between 0 and 1, so very nice. The thing works very well, and you end up with a
beautiful sequence of numbers between zero and one. So at this point we want to take a very easy little demo
and then we come back for the summary.
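Here's a sketch of that FORTRAN routine ported to Python. The magic constants 127773 and 2836 come from Schrage's decomposition of 2^31 - 1 (127773 is (2^31 - 1) div 16807 and 2836 is (2^31 - 1) mod 16807), which keeps the intermediate products from overflowing 32-bit integers. Python integers don't overflow, but the port keeps the same structure as the FORTRAN code.

```python
def unif(ix):
    """One step of the 16807 (Lewis-Goodman-Miller) generator using
    Schrage's trick, mirroring the FORTRAN routine UNIF(IX).
    Returns (new_seed, u) where u is in (0, 1)."""
    k1 = ix // 127773                       # truncated integer division, like FORTRAN
    ix = 16807 * (ix - k1 * 127773) - k1 * 2836
    if ix < 0:
        ix += 2147483647                    # 2^31 - 1, a gigundo prime
    return ix, ix * (1.0 / 2147483647)

# Usage: pick any seed and iterate; each call returns the next pseudorandom number.
seed = 1234567
for _ in range(3):
    seed, u = unif(seed)
    print(u)
```

Each step is equivalent to X_i = 16807 X_{i-1} mod(2^31 - 1); Schrage's trick just computes it without ever forming the full product.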

Lesson 9: Simulating Random Variables Demo
So let's take a very quick look at this very easy demo. All I'm going to do here is I'm going to generate
exponentials like I did in that previous example with the inverse transform theorem. So, here's how we do it.
Remember, R-A-N-D is the command in Excel to generate a uniform distribution, uniform random variable. And
so let's look at this column, which is marked as Exponential(1)s; I've got maybe a couple thousand of these things, it turns out. So let's take a look at this. Looks like this is the natural log of A5. Let me scroll over
here, and there we go. Uniforms. So R-A-N-D, R-A-N-D. They're all random uniform zero ones. It reads in A5
over here in cell B5. And that gives me the minus natural log of a uniform. Now in my equation for the inverse
transform theorem example, I actually had a one minus uniform here, but I'm being really lazy. Because 1
minus uniform has the same distribution as the uniform itself. That's a little bit of a trick that saves a tiny bit of
computation. Anyway, I do this for each one of them, minus natural log, minus natural log, minus natural log.
These are all exponential ones with lambda equals 1 coming down this column. So I did this, I think there's
2,000 of these fellas. And then I constructed a histogram using, you have to go into add ons in Excel, and it's a
little bit of a hassle. But go to file, then add ons, and then get the data add on. You can construct the histogram.
So you can see the first bin, from zero to 0.1, had a frequency of 196 from my 2,000 observations. The
second bin, 0.1- 0.2 had 166 etc, etc. And then I made a little histogram, see, and look at that, it looks like
there's an exponential decay. I wonder why they call it the exponential distribution? So I did this for 2,000
observations. If I recalculate, it turns out this is a static histogram, so I won't be able to get a new histogram. But you can see that the numbers over here all change. It's very nice, and if I were to take the average of all these things, I would probably get a number very close to 1. Let's see if I do. I'll take the average of the first 1,000 numbers. So, let's see if I can type this correctly: =AVERAGE(B5:B1004), that's the average of 1,000 of these things. And look at that, the average is 1.018, which is pretty much what you would expect from an exponential distribution with λ = 1 (and hence mean 1). So here is the summary of what we just did. I
showed how to generate a few really easy random variables. And again, this was covered a bit in Module 2.
And it won't be the last time we cover this material, cuz it's so fundamental. In the next lesson, I'm going to do
another simple Spreadsheet simulation which is actually even more complicated, in some sense, than what we
did with the (s,S) inventory model of a couple of lessons ago.

Lesson 10: Spreadsheet Simulation


Hi, everybody. So in this lesson, we'll continue on our Hand and Spreadsheet Simulation module with sort of a
true spreadsheet simulation involving stock prices.

Here's the overview. In the last lesson I simulated a few really easy random variables, a couple of discrete
ones, and showed how to generate the exponential distribution via the inverse transform theorem, stuff we've seen before. In this
lesson I'll continue on with a slightly more sophisticated spreadsheet simulation. These spreadsheets are really
useful in business and other applications and we can even use them in certain discrete event simulation
scenarios. Like I could simulate an M/M/1 queue, for instance, using a spreadsheet simulation.

What I'm going to do is I've made up a fake stock portfolio consisting of 6 stocks from different sectors. And
I've put this in an Excel file that you can take a look at in the demo. So we'll start out with $5000 worth of each
stock, or from each sector. And each is going to increase or decrease in value each year, according to this sort
of model.

Previous Value × max[ 0, Nor(μ_i, σ_i²) × Nor(1, (0.2)²) ],

I'm going to take my previous value, and I'll multiply the previous value of that sector by the fudge factor, which
I'll denote by this little mess here. The maximum of 0, because I don't want to have a negative number to
multiply by, even though it seems that my portfolio is often in the negative. And I'll multiply by a normal times
another normal. I've just made up this model, but let's see what they represent.

● The first normal term is due to the usual rate of fluctuation for stock i, or sector i. So maybe if I'm in the
telecommunications sector, the stock might go up on the average of 8% in a year, mu might be 1.08. On the
other hand it might be a highly variable stock. So that sigma squared or sigma, the standard deviation, might
be 20% a year, it's possible.
● The second term is going to be for market fluctuations on a whole. So maybe the entire market is up for
all stock sectors. So the entire market is down for all stock sectors. And again I just made these up. So the
market on average stays about the same. But it experiences high volatility, 20% a year, let's say. That's what
that represents.

Those are natural market conditions that affect all stocks.
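This growth model can be sketched in Python. The per-sector means and standard deviations below are illustrative placeholders (the demo's spreadsheet has six sectors with its own numbers, which I don't reproduce here); the structure, previous value times max(0, sector normal times market normal), is the model above.

```python
import random

# Hypothetical per-sector yearly growth factors (mean, std dev); placeholders
# for illustration -- the demo's spreadsheet has its own six sectors.
sectors = {
    "energy":          (1.05, 0.30),
    "telecom":         (1.08, 0.20),
    "pharmaceuticals": (1.07, 0.15),
}

def simulate(years=5, start=5000.0, seed=None):
    rng = random.Random(seed)
    value = {name: start for name in sectors}
    for _ in range(years):
        market = rng.normalvariate(1.0, 0.2)       # whole-market factor, Nor(1, 0.2^2)
        for name, (mu, sigma) in sectors.items():
            growth = max(0.0, rng.normalvariate(mu, sigma) * market)
            value[name] *= growth                  # max(0, ...) keeps prices nonnegative
    return value

portfolio = simulate(seed=1)
print(sum(portfolio.values()))
```

Rerunning with different seeds mimics hitting recalculate in the spreadsheet: each run gives a different five-year outcome, and repeating it many times gives a distribution of final portfolio values.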

Let me first tell you how you're going to generate a normal distribution in Excel. It's actually very, very easy. So
you can use this command NORM.INV and remember that RAND is a uniform. So we're essentially using an
inverse transform method here, that's what it's secretly doing. So the command NORM.INV, that's inverse
transform for the normal. It reads in a uniform that it generates automatically. And here's the mean of the
normal, here's the standard deviation of the normal. Don't put in the variance, put in the standard deviation. So
this norm.inverse uses an inverse transform method for the normal distribution. You plug-in RAND for the

uniform, which automatically updates. Every time you use it you get a new uniform. You plug-in the mean, you
plug-in the standard deviation, congratulations you've got yourself a normal distribution.
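The same inverse transform idea is available in Python's standard library via statistics.NormalDist, whose inv_cdf method plays the role of Excel's NORM.INV:

```python
import random
from statistics import NormalDist

def norm_inv(u, mu, sigma):
    """Python analogue of Excel's NORM.INV(RAND(), mu, sigma):
    inverse transform for the normal distribution."""
    return NormalDist(mu, sigma).inv_cdf(u)

# Feed it a fresh uniform each call, just like RAND() does in the spreadsheet:
print(norm_inv(random.random(), 1.0, 0.2))
```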

Here's what the demo's going to look like. Like I said, I promised, we have six different stocks. The mean for
each individual stock is given in this second column. The standard deviation is given in this column. You can
see, for instance, that energy it goes up by 5% per year but the standard deviation is just crazy, 30% per year.
We start each sector off with $5,000, $30,000 total. (Woo-hoo, I'm rich.) And each year, we hope that the
stocks go up, but there's a great deal of fluctuation. Here is the yearly overall market fluctuation, 18%, negative
8%, negative 4%, 11%, minus 15%. These are coming from that second component, the second normal factor
in the previous question. So you can see, here's how the energy sector fluctuated from year to year and we did
pretty well in this example. And here's when I total everything up. It looks like I almost tripled my money. This is
amazing, certainly not my actual portfolio.

So let's go a little deeper into the demo now and I'll show you what's going on with these normal random
variate generation techniques. So over here remember this is the general year performance. And the main
reason I'm doing this is to correlate all the stocks together. I mean usually what happens, if it's a bad year, it's a
bad year for most sectors. If it's a good year, it's a good year for most sectors. So for instance, let's see how
we got this 1.21. Norm inverse rand, so that's a uniform. Uniform is an input for the inverse transform method.
Remember, on average it stays about the same but there's this high volatility, 20%. In this case, we actually
went slightly more than one standard deviation on the high side. Then this 1.11, again a positive year. Here in
year four, we had a slightly negative year, too bad for us. Then each of these stock sectors, here's the mean of
energy, 5% per year. Extremely high standard deviation of 30% per year. We start out with 5000 bucks. And here's
how much money we've made in year one. So the good news is in year one there's a natural market increase
of 21%. In addition, so that's where the g4 is coming in, that's where the 21% is coming in. And then I also
looked at the maximum of zero of course, remember we don't want to have negative stock prices unless it's my
portfolio. So we looked at the maximum of that and another normal observation, this is coming from the
expressions d6 and e6. That accommodates the sector mean and standard deviation of stock price increase or
decrease. So we had a great year in the energy sector, went up by 80%. And then you can see that it
fluctuates all the way to $22,000. Wow, that's fantastic. And same kind of thing for these other sectors,
pharmaceuticals, entertainment, etc. In the end, we did pretty well. Things were a little bit flat that last year, but
that's how it goes, we tripled our money. If I were to rerun this with another iteration, again we did pretty well.
Do it again, not so well this time, look what happened. We got really unlucky here. Down, down, down,
down, down, and just a little bit up. So we start out with $30,000 and this time we only end up with $18,000 and
that is unfortunate. So what we could do is repeat this experiment many, many times; that's what I'm

doing here. And you get these different numbers each time. And this can be automated so that you could
repeat this experiment thousands of times and get a distribution of how much money you're left with by the end
of year five, it's quite nice.

So, at this point, let's summarize what we've done in this lesson and in fact this module. We did some
spreadsheet simulation, you're going to be seeing a lot more of this in upcoming homework. They're actually
kind of fun to do, and you can use them for so many different things. At this point, we're finally done with
module three, where we discussed a hand simulation of very, very easy systems along with a spreadsheet
simulation of slightly more complicated simulations. We'll be learning how to do more and more as we go on in
the class. Module four concerns general simulation principles. So that's what's coming up. We're going to step
back and see what makes a simulation tick, kind of how you would write a simulation language if you had to. We're
not going to make you do that. But what's going on inside the simulation, and also what do you use a
simulation for? How can you conduct a proper simulation study? So we will be covering all that in module four.

Module 4

Slide Decks:

https://prod-edxapp.edx-cdn.org/assets/courseware/v1/fe403d545cd42947fa3ea4aa176e2d73/asset-v1:GTx+I
SYE6644x+2T2019+type@asset+block/ISYE6644Module4SlideDecks.pdf

Lesson 1: Steps in a Simulation Study


Hi everyone, now we're on to Module 4, which will be a discussion of General Simulation Principles.

Here's an overview of the module. In the last module, we covered hand and spreadsheet simulations. In this
module, what I'm going to do is step back a little bit and look at the big picture. In other words, what are the
components of a simulation? What makes the simulation tick? The idea here is that I want to look at how are
simulations carried out and kinda, what's beneath the surface of a simulation program.

In the first lesson, I'll be looking at steps in the simulation study. Then we'll go on to a bunch of useful
definitions. I'll look at simulation clock time advance mechanisms, how does simulation move ahead in time?
The fourth lesson, we'll talk about two different modeling approaches for simulation. One of which takes a little
bit of work, one of which is a lot easier. And then, I'll have time at the end, the fifth lesson, we'll look at some
simulation languages, at least from a high level. Now this lesson, we'll go back to number 1 up above, and I
want to see what does a simulation study actually involve? Maybe I should have done this already, but now
we've kinda got the ammunition so that I can talk about the different aspects of the simulation.

So now, we'll talk about simulation study steps.


1. First of all, we go at a real high level and talk about the problem formulation. All that is, that's just the
statement of the problem. Profits are too low, what are we going to do? Customers are complaining about the
long lines. What help can I give management? So those are pretty general problems.
2. Now, let's look at, little bit more specific objectives and planning. In other words, what specific questions
do I want to answer, to attack the above problems? So, for instance, how many workers should I hire? Or how
much buffer space should I allocate in the assembly line? And maybe some of the problems in number 1 get mitigated by answering these questions. Some more steps.

3. This is kind of one of the most interesting parts of a simulation study, model building. People are really
good at this sometimes. And it's both an art and a science. Sometimes you use a little bit of math, sometimes
you have modeling skills that just come out of nature. So for instance we might be interested in well, are we
dealing with an M/M/k queuing model? In other words are customers coming in at exponential times between
arrivals? Are the services exponential? How many servers do I have? That's what the M/M/k stands for. Are we
dealing with a queuing network? Do we need physics equations? All of these are models and we want to put
the models together, maybe to form a super model. So that's why, this isn't just math, it's experience, it's art.
And you get very good at model building the more models you make.
4. Data collection, that's always a lot of fun. What kinds of data are you interested in? How much? What's
it going to cost you? And for instance, we might be interested in continuous data, discrete data, what exactly
should be collected? How much are you willing to spend on it? Usually more data is better, nice clean data's
even better than that. But data collection's a real big issue in simulation analysis. Even more steps.

5. At this point, if you're a programming geek, it's time to decide what language you're going to use and
write the program. And there are many, many, many languages out there, hundreds of simulation languages
and you don't even need a real simulation language. You could use something like C++ or Python, but there
are specialty languages for simulations. In any event though, there's something called a modeling paradigm
that you might be interested in. I've listed two of them, Event-Scheduling and Process-Interaction. These are
kind of the ways you look at the simulation. How do you do the model? And these are very high level
paradigms. I'll talk about those in a separate lesson coming up. And they're very,very interesting.
6. Now, after you have written at least the first iteration of the code, you've got to check to see that it's okay. Do
you have any obvious programming errors in it? That is what verification means. If not, go back and iterate
between step 5 and step 6 numerous times until you get it right. Back in the old days, it actually took a lot of
time to perform an iteration, because we didn't have nice handy laptops or PCs back then. We had to submit
jobs to a big machine in the central server, and that usually cost us a lot of time if we made a mistake. Now it's
kind of easy to go and iterate several times, before you get the code working properly. Okay, so that's just a
programming issue. Is your code okay?
7. Perhaps a more important issue is, validation. Is your model okay? So if you've modeled a particular
system as a simple M/M/1 queueing system, but you actually have five servers, not one (M/M/1 requires exactly one server), then you've got the wrong model. So you can go and do certain validation techniques,
usually they're statistical in nature, and if the model's fine, great, proceed. If it's not, go back and do more
modeling in step 3 and more data collection in step 4. Plus, you also have to do a little bit more coding too. Still
more steps. They just don't stop.

8. Experimental design, one of my favorite topics. Now that you've got the code is okay and you've
validated the model, the model is good, what experiments do we need to run in order to efficiently answer our
questions? You know, like, how many servers do I need? Or, what can I do to improve the performance of my
system? Well, in order to do the experimental design properly, you need to think about statistical
considerations. How many runs should I make, in order to answer my questions with high confidence? I may
also have some time and budget constraints that have to go into the experimental design as well.
9. Then, you press the go button on your simulation and it runs, runs, runs, performing
many, many different experiments. And sometimes, you push the go button, you go away overnight while the
darn thing runs. So it may require a lot of time. And yet still more steps.

10. Output analysis. Well, you've got all these runs that you've just conducted. Now you gotta go back and
pretend you're back in baby stats class, or at the very least, you're taking the statistics portion of this class.
And you've gotta perform correct relevant statistical analysis. In other words, estimate the relevant
performance measures. If you're interested in reducing the mean cycle time of your products, you'd better estimate that mean properly. You gotta give confidence intervals. This is often iterative with step 8 on the experimental
design, step 9 on the production runs, and here's a theorem, you almost always need more runs. So you have
to go back and forth on this quite a bit.
11. At this point, after you've done the output analysis and you're happy with everything, write up your
reports in really good, spiffy English, implement the results if possible and make management happy. If
management's happy, you're happy. And then press the like button. Like, like, like, like, like numerous times.

So here's a summary of what we did in this lesson. Actually pretty easy, straightforward stuff, although there
are a lot of steps in a proper simulation study. We discussed all the steps in such a proper study. Like I said it's
a bit of an art, it's a bit of a science. And next time, we'll go through a bunch of very, very easy definitions that
are relevant to general simulation models. This is a set of sorta necessary conditions that will allow me to
discuss things as we proceed on. So looking forward to the next lesson, see you soon!

Lesson 2: Some Useful Definitions

Hello everybody. Now we're going to be moving on to the second lesson in module 4 on General Simulation
Principles. This lesson just consists of a whole bunch of useful definitions that we'll be needing for the rest of
the semester.
So here's the overview. Last time, I discussed at very high level all the steps in the so-called proper simulation
study. You can see it was a real iterative process. Even though I listed all the steps out, you kinda go back and
forth a little bit. In this lesson, I'm going to give a bunch of easy definitions that are relevant to all general
simulation models. These are things that we can use for the remainder of the course. Especially, these come
into play when I look at the programming language because these terms are ubiquitous when I talk about
things in the language.
So let's start with the definitions. First of all, this is something you may have learned when you were little kids,
● a system, our first definition, is a collection of what I'll call entities that interact with each other to
accomplish some kind of goal. So entities could be people or machines. Entity is just a fancy word for things.
So a system is a collection of entities that work together to accomplish some kind of goal.
● And a model is an abstract representation of a system, so we model the system. And usually the model
contains math or logical relationships that describe the system in terms of the entities. What are the
customers? What are the machines or resources that are being used to serve the customers? What are the
states of the systems? Relevant sets, relevant events and we'll talk about all those terms below, but a model is
all of these things. It's an abstract representation of the system. So you could have a single server queue.
That's your system. How do we model that? Well, maybe we use an M/M/1 queueing model, something like that.
We'll get into more detail in a minute. More definitions.

● The system state is merely a set of variables that contain enough information to describe the system
at any point in time. So think of the state as a snapshot of the system. It contains all the information that you
need to completely describe the system, at least for the purposes of your study.
○ So, for example, let's look at the single-server queue. At a minimum, all you might need to describe the
state of the system at any time could be the following:
■ you could keep track of the number of people in the single-server queue at time t. I usually use the standard queueing notation LQ(t), the number of people in the queue at time t. So if you know that, and

■ if you know whether or not the server is busy at time t, so we would set the variable B(t) equal to one if
the guy is busy. And we would set B(t) equal to zero if the guy is idle, not busy at time t.

And for purposes of a simplistic simulation, if you knew how many people are in the queue at time t and
whether or not the server is busy, that kind of gives you all the information you need. At least at the very most
simplistic level.
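As a tiny, hypothetical sketch of this idea in Python, the whole snapshot really is just the pair (LQ(t), B(t)), and an event updates it:

```python
from dataclasses import dataclass

@dataclass
class State:
    """Snapshot of a single-server queue: enough to describe it at any time t."""
    LQ: int = 0  # number of customers waiting in line at time t
    B: int = 0   # 1 if the server is busy at time t, 0 if idle

def on_arrival(s: State) -> State:
    """Update the snapshot for an arrival: an idle server turns on;
    otherwise the newcomer joins the line."""
    if s.B == 0:
        return State(LQ=s.LQ, B=1)
    return State(LQ=s.LQ + 1, B=1)

s = State()          # empty and idle
s = on_arrival(s)    # first customer goes straight into service
s = on_arrival(s)    # second customer has to wait in line
print(s)             # State(LQ=1, B=1)
```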

More, more, more, how do you like it, how do you like it?
● Entities now. I've already talked about these. These are things in the system. Customers or resources
or servers. Now they can be permanent, like a machine in your system is, typically it's there forever. It's
permanent or temporary, customers. Customers come into the system, they get worked on, they leave.
● Let's think about customers for a second, customers, and I guess machines too, but customers can
have lots of properties. How tall are you, how much money do you have to spend, things like that, how much
work are you going to take? These are attributes of the entity, properties of the customers. A customer could
also have a priority, you could be some big shot, a big wig, and so you, since you have high priority, you get to
go to the front of the line. That's a property or an attribute of the customer.
● A list, or a queue (I'll sometimes use the two terms synonymously), is just some ordered list of entities. For instance, the line that forms in front of a server. They're ordered by maybe the
arrival time or maybe by priority, but it's just a list that's ordered that we can keep track of who belongs where
in the queue, so we'll be using that a lot.

I gotta have more, more cowbell. [SOUND]


● An event now, this is a little bit tricky, because people use this word to define a couple different things,
but I'll be consistent. An event is literally a point in time at which something interesting happens, at which the
system state changes, and which you can't predict with certainty beforehand. So you don't know when the next
arrival is but when it happens, when the next guy shows up, that's an event. When the next guy leaves the
server, that's an event. And you can't really predict that beforehand, unless the server has a fixed deterministic
service time. Usually there's some randomness involved, so an event is typically unpredictable ahead of time,
but when it happens, it happens. Now, like we said just now, we have an arrival event and a departure event, and you could also have things like a machine breakdown event.
○ Some people regard the event, not only as the time that something happens, but the type of thing that's
happening. So for instance, even though I say event technically means the time that a thing happens, I often
refer to an event very loosely as what happens. Like the arrival, like the departure, like the machine
breakdown. These are all things that happen - events. Although technically, it's the time at which they happen
is what the event is.

And still not done, got some more definitions.
● An activity is a duration of time of a specified length. In other words, what's called an unconditional
wait. So if I can specify how long something's going to take, that's called an activity, and I use that term every
once in awhile. So examples include, well, when I write my simulation I'm going to say, well the times between
customer arrivals, arrival times, are exponential. Or maybe my service times at the server are constant, or they
can be exponential, but somewhere in the simulation I have to write exponential. Or I have to write constant to
specify the service times or the interarrival times. So, we can explicitly generate those events, so that's why I
say they're specified.

● On the other hand, finally, I might have something that's unspecified. And that's called a conditional
wait. That's a duration of time of unspecified length. These are the interesting ones. These are the ones that make you think a little bit. The good news is that a lot of simulation languages handle these automatically. If you're
programming these kinds of things yourself, you kinda gotta think a little bit.
○ So for example, let's look at a customer waiting time. So it turns out we don't know customer waiting
time directly. That's why we're simulating. So all we know when we program up our simulation is that we know
the arrival times and service times because we can specify those. And then either we or the simulation
language has to reverse engineer the waiting times. So you give me the arrival times and service times, I can
calculate, takes a lot of work, takes a little thinking, but I can reverse engineer those waiting times. So you've
got to be a little careful about that. What's very interesting, a lot of people will go and they'll do projects where
they go and observe the waiting times. They kind of forget to look at the arrival times and the service times.
Turns out, it's a lot harder to go the other direction and reverse engineer the arrival times and service times from the waiting times. So I always advise people,
collect arrival times and service times. You can always reverse engineer waiting times.
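For a FIFO single-server queue, the standard way to do that reverse engineering is Lindley's recursion: customer i+1's wait in line is max(0, customer i's wait + customer i's service time - the interarrival gap between them). A short sketch, using hypothetical arrival and service times:

```python
def waiting_times(arrivals, services):
    """Recover each customer's wait in line from arrival times and service
    times for a FIFO single-server queue, via Lindley's recursion."""
    waits = [0.0]  # the first customer never waits
    for i in range(1, len(arrivals)):
        gap = arrivals[i] - arrivals[i - 1]
        waits.append(max(0.0, waits[-1] + services[i - 1] - gap))
    return waits

# Hypothetical data: arrivals at times 1, 3, 4 with service times 5, 2, 2.
print(waiting_times([1, 3, 4], [5, 2, 2]))  # [0.0, 3.0, 4.0]
```

Going the other direction, from waits back to arrivals and services, has no such simple recursion, which is exactly the lecture's point.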

So here's a summary of what we did in this lesson. This time, we went through a bunch of these easy little
definitions that are relevant to general simulation modeling. And I've gotta put emphasis on the word event as
in discrete event simulation. Now this leads to what we're going to do next time. What is the simulation clock
and how does it move along? Well the simulation clock, it turns out is going to move from event to event and
we're going to help it move along. Now I don't want you to be scared or surprised about this, but we're the
people that control the simulation clock. How does it go? How does it jump from time to time as the simulation
progresses? And there’s a couple of ways to do this. I'll try not to be scary about it cuz there’s a very easy way
to carry it out. That’s why it’s called discrete event simulation. And this is what makes simulation so interesting.

Lesson 3: Time-Advance Mechanisms

So now, we'll continue our module on general simulation principles with a lesson on what are called
Time-Advance Mechanisms.
Here's the overview.
● Last time, I went through a bunch of really easy definitions on things that are relevant to the course.
And I especially concentrate on the word event, which means the time at which something interesting happens
in the simulation.
● This time, I'm going to discuss the simulation clock and how this thing is used to move the simulation
along as time progresses. This is something that either we carry out the calculations for ourselves or the
simulation language does automatically. And in any case, the clock is the heart of any discrete-event simulation
language.

A couple more definitions just to get us going.


● The simulation clock is a variable whose value represents a simulated time. It doesn't represent
real-time, which is taking place in our lives. The simulation clock represents simulated time.
● Now, we have a couple of time-advance mechanisms. What that means is how does the simulation
clock move? How do we make it move from time to time? Let's talk about those. Now first of all, the simulation
clock always moves forward. Never goes backwards in time like Star Trek. In fact, that's a little bit of a lie.
Because there have been a couple of very interesting research papers that show what happens when
simulation time moves backwards, but we're not going to worry about that in this course. So for our purposes,
the simulation clock moves forward in two ways.
○ One, the Fixed-Increment Time Advance approach and I'll discuss that first and then
○ something called the Next-Event Time Advance approach. This is what makes simulation languages
so cool.

Let's talk about clock movement.


● The Fixed-Increment Time Advance mechanism, that's the easiest thing in some sense. What the
simulation does is update the state of the system at the fixed times nh, n = 0, 1, 2, …, where h is
some small number chosen appropriately. So pretend that h is a one-minute increment, then we update the
state of the system at fixed times zero, one, two, three, four, five minutes, etc., etc. Now, this is used mostly in
continuous-time models like those containing differential equations or stochastic differential equations. Maybe you're simulating aircraft movement through space or weather patterns. It's also used in models where
data's only available at fixed time units like every month. This methodology's not emphasized in this course
and let me tell you why. Suppose we're looking at a queuing model and customers show up every once in
a while and they get served every once in a while. If I'm advancing the clock every second, nothing's happening
most of the time. What if customers only show up once every couple minutes? I'm advancing the clock. I'm
updating the state. Nothing's changing. I'm wasting computation.

● So, this leads up to an alternative formulation for time advance. Of course, time flies like an arrow, but
fruit flies like a banana and that's called Next Event Time Advance. So here, the clock is initialized at zero
and all known future events are determined and then placed in what's called a future events list or FEL and this
list is ordered by time.
○ So at time 0, we figure out, well, we know there's an arrival coming up in 23 minutes and there's a
second type arrival coming up in 29 minutes. So, I know that two events are going to happen in the future. So, I
put them on the list.
What happens is that the clock doesn't advance every minute or every second. It jumps to the next event.
What's the next event that's going to happen? Well, I just said, we have two different types of arrivals, at times 23 and 29. The clock will then jump from 0 to 23, which is the most imminent event. Then when it's done
dealing with that event, it goes to the next most imminent event, etc., etc. And at each event, the system state
is updated and the future events list may also be updated and we'll see how that happens in a minute.
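A minimal sketch of this mechanism, using Python's heapq as the FEL (one common implementation choice, not the only one) and the times 23 and 29 from the example above:

```python
import heapq

def next_event_demo():
    """Next-event time advance: the clock jumps straight to the most imminent
    event on the future event list, skipping the empty time in between."""
    fel = []                                       # the FEL, ordered by event time
    heapq.heappush(fel, (23.0, "arrival type 1"))  # events known at time 0
    heapq.heappush(fel, (29.0, "arrival type 2"))
    clock, trace = 0.0, []
    while fel:
        clock, event = heapq.heappop(fel)  # jump to the most imminent event
        trace.append((clock, event))       # ...here we'd update state and maybe push new events
    return trace

print(next_event_demo())  # [(23.0, 'arrival type 1'), (29.0, 'arrival type 2')]
```

Notice the clock visits only times 23 and 29; it never ticks through the 22 empty minutes before the first arrival.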

So, let's give a few remarks and notes on Future Event Lists.
● It turns out, you think about this. By definition, the system state can only change at event times. Nothing
happens between events. So, I have arrivals. I have customer departures. But for our purposes of the thing
that we're simulating, nothing happens. I don't count people getting bored waiting in line as anything
happening. All I care about is when do people show up, when do they leave, when do they start getting served,
stop getting served. Those are the interesting things for me.
● Now, what happens is that the simulation progresses sequentially by executing, or dealing with, the most imminent event on the future event list and then going to the next one after that. So, what do I mean by this phrase "dealing with"?
What do we have to do as programmers? Or what does the simulation language do?

Well, let's talk about imminent events. The clock advances to the most imminent event. What's the next thing
on the future event list? What we do is we update the system state, depending on what type of event it is. So
the update is a little bit different if you have an arrival or a departure, or a system breakdown. So if you have an
arrival event, what we have to do is this: if the server is not busy when an arrival shows up, you have to turn the server on. Or if the server is busy, the new arrival doesn't get served right away; he's gotta go in the line. So you've got to put the guy at the back of the line behind the busy server. So that's just how one might update the system
if you have an arrival event, departures are handled a little bit differently. So in any case, if it's an arrival or a
departure or whatever, you have to update the future event list. So you look at what you just did and well, if
you've got an arrival, maybe if the guy starts getting served, if he gets service immediately, you then have to
schedule, perhaps when he's going to be done getting served. So, that's an event. Usually when you have an
arrival, the first thing you do is you generate the next arrival and that's another event that you put in the future
event list. So, that's what I mean by update the FEL.

In particular, let's go through this a little more carefully. What do I mean explicitly by updating the future event
list? Any time there's an event, the simulation is going to have to possibly update the chronological order of the
future event list events. So, the future event list is an ordered list of different types of events. Arrivals,
departures, break downs, end of simulation that's an event. And any time there's an event, it may spawn some
changes in the current future event list. So, here's what could happen.
1. I could insert new events and I could have a new arrival.
2. I could delete events, I'll give an example in a minute.
3. I could move the events around in the future event list or
4. I could do nothing. Maybe something happens, I don't do anything in the future event list.
So, there's all these things you can do in the future event list when an event occurs.

For example, let's suppose a guy arrives at the queue, an arrival occurs. What we'll usually do like I said a
couple slides ago, usually we'll immediately spawn the next arrival time. We'll tack that on in the appropriate
place in the future event list. If the arrival time of this new arrival is sufficiently far in the future, we'll put it at the end of the list, in terms of where the events are executed. If it's not so far in the future, we may have to insert it somewhere in the interior.

So the next arrival might happen before some other event. For example, if a slow server won't finish his current customer until way in the future, then the next arrival that I generated on the previous slide may have to be inserted in the interior of the future event list, ahead of the departure event we schedule for when that slow server finishes. So, I can insert events in the interior of the future event list in terms of the ordered execution times.

Now, what if I have an arrival of some really nasty-looking guy? I don't want you to get confused by this. But if
he's really nasty, what might happen is that some of the customers might get disgusted and leave or switch to
other lines. Cuz he's so nasty looking. In that case, you may have to delete or move entries in the future event
list. So for instance, if the guy shows up and he's horrible looking, people might just leave. So you kill all those guys off; they're never going to need to be served. Or they might switch to other servers. So lots of things can happen in the future event list if there's an arrival.

I think a couple more future event list remarks. Now obviously, if you're doing manipulation of the future event
list, I'm putting things on the end. I'm inserting things. I'm deleting things. I'm moving things around. I have to
be very efficient in how I do the processing of the list, and people use what computer scientists call linked lists for the future event list. A linked list, roughly speaking, is a list where even if the entries are not stored contiguously in order A, B, C, D, there are pointers to where the next entry is. And so insertion, deletion, and moving around are very easy: you just move the pointers, you don't move the actual entries. Don't worry about the details, but I just want you to be aware that there are ways to do this efficient list processing.
● And not only are there these linked lists, there's a whole bunch of variations. Different types are called singly- and doubly-linked lists, and they store the events in a way that allows the chronological order
of the events to be accessed very quickly.
● And like I said, you can easily accommodate any kind of manipulation you want in the array or in the
vector.
● So if you're interested in this, take a computer science course. They teach this stuff on the first or second day.
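To make the pointer idea concrete, here is a bare-bones, hypothetical singly-linked future event list in Python. Inserting in time order just rewires links; no existing entries are moved or copied:

```python
class Node:
    """One entry in a singly-linked future event list."""
    def __init__(self, time, kind, nxt=None):
        self.time, self.kind, self.nxt = time, kind, nxt

def insert(head, time, kind):
    """Insert an event in time order by rewiring pointers only."""
    if head is None or time < head.time:
        return Node(time, kind, head)        # new node becomes the head
    cur = head
    while cur.nxt is not None and cur.nxt.time <= time:
        cur = cur.nxt                        # walk to the insertion point
    cur.nxt = Node(time, kind, cur.nxt)      # splice the new node in
    return head

head = None
for t, k in [(6, "departure"), (3, "arrival"), (4, "arrival")]:
    head = insert(head, t, k)

events, cur = [], head
while cur:                                   # read the list back in time order
    events.append((cur.time, cur.kind))
    cur = cur.nxt
print(events)  # [(3, 'arrival'), (4, 'arrival'), (6, 'departure')]
```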

● Couple more remarks. Occasionally, especially for doing a hand simulation for integer times, you have
to be a little bit careful about events that take place at the same time. It could happen. So for instance,
suppose a customer shows up at the exact same time that another customer finishes getting served. What do
you do? Which event do you handle first, the arrival or the departure? And I'm not going to force you to do a
particular order of these things, but just establish a ground rule that you can handle these things if you have
ties. What I usually do if I have an arrival taking place simultaneously with a service completion is handle the service completion first, so I can get somebody out of the system and maybe clear out space for the guy who is arriving. So I would handle the departure event first, then the arrival. But you can do whatever you want as long as you're consistent.
● So it turns out that every discrete-event simulation language, except for maybe one that I know of,
maintains a future event list somewhere deep down in its cold, cold heart. So even though if you're
programming this yourself in a general-purpose language like C++ or Python, writing your own code, you have to maintain the future event list yourself. You have to do all the manipulation. In a discrete-event simulation language, they do all this stuff for you. You just specify what happens, and it keeps
track of the future event list. This really saves a lot of programming time.
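One easy way to encode a tie-breaking ground rule like the one above (handle a simultaneous departure before the arrival) is to build an event-type priority into the FEL's sort key. A small sketch, assuming a heap-based FEL:

```python
import heapq

# Ground rule from the lecture: at a tie, handle the departure first so the
# departing customer clears out space for the one arriving.
# Lower priority number = handled first.
PRIORITY = {"departure": 0, "arrival": 1}

fel = []
for time, kind in [(5, "arrival"), (5, "departure"), (2, "arrival")]:
    heapq.heappush(fel, (time, PRIORITY[kind], kind))

order = [heapq.heappop(fel)[2] for _ in range(3)]
print(order)  # ['arrival', 'departure', 'arrival']
```

The two time-5 events come off the heap departure-first, automatically and consistently, with no special-case code at pop time.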

So like I said, the summary is that commercial simulation packages always take care of the future event list for you, so you don't have to. It's all very transparent; you don't have to worry about it.

Let's look at a simple example of the Next-Event Time Advance Mechanism. I've stolen this from the book
by Banks, Carson, Nelson and Nichol. And this is a very easy example where I'm just going to consider the
usual single server, first in, first out, FIFO queue. I will look at exactly ten customers. In fact, I won't even get to
that number. But let's write down the data for ten customers. You can see that I've listed them all with their
arrival times and their service times. So for instance, customer 10 shows up at time 29 and his service time
when we eventually get to him is 1.

So, let's see what the future event list is going to look like.
● The simulation clock is this variable, t in the first column.
● The system state is represented. Remember, we can completely describe the system state in terms of
LQ(t), the number of people in the queue at time t and B(t) whether or not the server is busy at time t.
● Here's what the queue actually looks like. The queue is ordered by customer arrival times, which are how I identify the customers in the queue.
● Here's what the future event list looks like. The future event list actually contains two pieces of
information per entry. Namely, the event time and what the event type is. So when I go through the examples,
you'll see and
● then we keep track of cumulative statistics. As I mentioned, total busy time of the server and total time
in system of all the customers. This is very tedious, but the computer handles all this in a flash. We're going to
go through it line by line. I'll just do one or two lines, three lines and you'll get sick of it and I'll have proven my
point.

1. At time zero, we start the simulation. Nobody's in line. The server is idle, zero, zero. There is nobody in
line, so that queue, that set, is empty. The future event list looks like this: if you go back to the previous table, at time one customer one shows up. But here we are, still at time zero. So far in my statistics, nobody's been in the system, so no busy time and no time in system. My future event list says, well, there's only one event; it's taking place at time one and it's an arrival. So, there we go.
2. At time one, the clock updates to one.
a. There's still nobody in line,
b. but the server turns himself on. So, he becomes busy. He becomes a one.
c. There is still nobody in line, so the queue is empty.
d. However, at this point, this event of an arrival generates and spawns a couple of other events. Namely,
we know that the second arrival takes place at time three. And we know that the first service time is five, so
that means that customer one departs at time six. So, that's interesting. Customer one's arrivals spawn two
additional events that I put in order on the future event list.
e. Now at time one, customer one is just starting service. So at this point, there is no busy time yet.
Nothing's accumulated and he's just gotten into the system, so the total amount of time in the system so far is
zero.
3. We go to the future event list and see that the next event occurs at time three, so we jump the clock to
time three. At this point, customer two shows up.
a. Customer one is still being served, so there's one person in the line at this point. The server is still busy.
b. The person in line is customer two, and I record both the customer number and his arrival time, so I can
uniquely pair the customer with his arrival time. This notation means that customer two showed up at time
three.
c. The future event list looks like this. Well, customer two showed up, so I immediately generate the next
arrival, which occurs at time four. That's customer three. Meanwhile, customer one still needs to get done
at time six.
d. The cumulative statistics are very simple. At this point, customer one's been in the system between times
one and three, so that's a total of two time units that the server's been busy, and we put a two there. He's the
only person in the system, and he's been there a total of two time units, so the cumulative time in system is
two. You'll see that change a little bit when we do the event that occurs at time four, but I'm going to be lazy
and put dot, dot, dot.
You get to experience the rest of this yourself.
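The bookkeeping in this walkthrough is exactly what a computer does with the future event list. Here's a minimal Python sketch, using the arrival times 1, 3, 4 and customer one's service time of 5 from the example; the remaining service times (2 and 1) are made-up values just for illustration.

```python
import heapq

def simulate(arrival_times, service_times):
    """Replay of the hand simulation: a single-server FIFO queue driven by a
    future event list (FEL).  Each FEL entry carries the two pieces of
    information from the lecture: the event time and the event type."""
    fel = [(t, "arrival", i) for i, t in enumerate(arrival_times)]
    heapq.heapify(fel)
    queue = []                # customer ids waiting in line
    arrived = {}              # customer id -> arrival time (the pairing above)
    server_busy = False
    busy_time = 0.0           # cumulative server busy time
    time_in_system = 0.0      # cumulative time in system of finished customers
    last_t = 0

    while fel:
        t, kind, cid = heapq.heappop(fel)       # the "timing routine"
        if server_busy:
            busy_time += t - last_t             # accumulate between events
        last_t = t
        if kind == "arrival":
            arrived[cid] = t
            if server_busy:
                queue.append(cid)
            else:
                server_busy = True              # server turns himself on
                heapq.heappush(fel, (t + service_times[cid], "departure", cid))
        else:                                   # departure
            time_in_system += t - arrived[cid]
            if queue:
                nxt = queue.pop(0)
                heapq.heappush(fel, (t + service_times[nxt], "departure", nxt))
            else:
                server_busy = False
    return busy_time, time_in_system

# Arrivals at times 1, 3, 4 as in the lecture; customer 1's service time is 5,
# so he departs at time 6.  The later service times (2 and 1) are assumed.
print(simulate([1, 3, 4], [5, 2, 1]))           # (8.0, 15.0)
```

With these inputs the server is busy from time 1 through time 9 (8 time units), and each of the three customers happens to spend 5 time units in the system, for a total of 15.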

Let's summarize at this point. In this lesson, we discussed the simulation clock and how to use the future event
list to move the simulation along through time. In the next lesson, I'm going to look at two general modeling
approaches, one of which is related to the discrete events that move the simulation clock along, and one
of which will really help you to easily simulate complicated systems.

Lesson 4: Two Modeling Approaches

Hi, everyone, let's continue our module on general simulation principles. Here's our overview.

Last time, I discussed the future event list, and how time flies in a discrete-event simulation. In this lesson, I'm
going to look at two high level simulation modeling approaches. And, in particular, the Process-Interaction
approach is going to be used to model very complicated simulation processes and this is how I'm going to want
you to think for the rest of the semester.

So, let's go through some high-level thoughts first. I hinted last time that commercial simulation packages help
you avoid a lot of grief by automatically handling the future event list and carrying out all the drudgery
themselves. Now how is this possible? Well it comes down to a choice between two modeling world views.
We'll call them the event scheduling approach (which is fine, but I don't like it so much) versus the process
interaction approach, which I like a lot.

I'll discuss Event-Scheduling first, to get it out of the way. Event-Scheduling is probably how most of us would
attempt to write programs right from the start. I don't advocate it, but it's the natural first thought. We
concentrate on all the events, and how all these events affect the system state. Now, what happens is that the
simulation evolves over time, but we have to keep track of every freaking event in increasing order of time
occurrence. We have to keep track of every event, we have to deal with every event, we have to manipulate
the future event list all ourselves. It's a bookkeeping hassle, and it just makes me so angry.

So if you're going to program in C++ or Python from scratch, you'd probably be using event-scheduling. And if
you really think that you want to do event-scheduling, let's look at the next few slides, where I outline a generic
simulation program using the event-scheduling approach. And after I'm done with this, well, we'll see. We may
have nasty words for event-scheduling.

So here's a generic event-scheduling flowchart. I'm stealing this from Law's book. We have to have a main
program that supervises everything else.

1. In step zero, we initialize the program. Basically, that means we set the simulation clock to zero and
initialize all the states and statistical counters. So nobody is in the system, and the counters are all set to zero
(there have been zero customers so far, etc.). And we initialize the future event list. Usually that consists
of "when's the first customer going to show up" and maybe "when does the simulation end." Then we invoke
what's called the timing routine, which simply determines when and what the next event is. That means I have
to determine the next event type. Is it going to be an arrival, a departure, the end of the simulation, a resource
breakdown? These are all possible events. Then I advance the clock to the time of this next event. That's the
timing routine: what's the next event, and when is it?

2. Then I invoke the event routine itself. I have these different types of events: arrival, departure, breakdown,
end of simulation. Call a generic one "event type i."
a. I update the system state,
b. I update the counters. So if I have an arrival, I may need to turn the server on, or I may need to put
another guy in the queue. And
c. I make any changes to the future event list. I schedule the next arrival if it's an arrival, or maybe I take
somebody out of the queue, whatever. Then comes step three: if the simulation happens to be over at this
point, I'm done. If the simulation isn't over, I go back to step one, the timing routine, and I repeat this process
over and over. Step one, step two, step one, step two, over and over. So step three's quite easy.

Okay, that's fine; eventually the simulation ends. What I haven't told you is how to handle the different types
of events. Let's look at the arrival event, for example, following what I've been saying in previous lessons:

● the first thing I do when I have an arrival is I schedule the next arrival, put it on the future event list.
● Now I ask, is the server busy?
○ If the server's not busy, well, I set the waiting time for that customer equal to 0. He doesn't have to wait
at all, that's fantastic, and I keep that for my statistics. So this customer is very lucky, he didn't have to wait.
○ So I gather the appropriate statistics. For instance, the customer waiting time.

○ I add one to the number of customers who have gone through the line, because even though there wasn't a
line, a guy went through.
○ I make the server busy, and
○ then I return to the main program. So I do the next event.
● If the server is busy,
○ I add one guy to the queue.
○ If the queue is full, maybe it's capacitated, I panic or maybe do something about it. Maybe I kick the
customer out.
○ If the queue isn't full, in other words, if there's space left in it or it's un-capacitated, I store the customer
arrival time. Because I'm going to need to know later on how long has he been in the system.
○ And then, I return to the main program. I start the next event.

Departure event, kind of similar, but I don't worry about the next arrival.

● I first ask, is the queue empty?


○ If the queue is empty,
■ I make the server idle because my only customer just left.
■ I eliminate the departure event from consideration as the next event in the future event list, because
there's nobody getting served.
■ And then I return to the main program.
○ If the queue isn't empty,
■ I grab the first guy from the front of the queue and subtract one from the queue length.
■ I compute his delay, how long has he been waiting there?
■ I add one to the number of customers delayed.
■ I schedule his departure. I put that in the future event list.
■ And I move up each remaining customer one space in the queue, and
■ I return to the main program.
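The two flowcharts above can be sketched as a pair of Python routines. This is a bare-bones illustration of the event-scheduling bookkeeping, not Law's actual code; here the statistics tracked are the number of customers delayed and their total delay.

```python
import heapq

# State for a single-server queue, following the event-scheduling flowcharts
# above.  Each event routine updates the state, gathers statistics, and edits
# the future event list (FEL).
fel = []                # future event list: (event time, event type) entries
queue = []              # arrival times of the customers waiting in line
server_busy = False
num_delayed = 0         # customers who have completed their delay
total_delay = 0.0       # cumulative waiting time in queue

def arrival_event(clock, next_interarrival, service_time):
    """Arrival routine: schedule the next arrival first, then either start
    service immediately (zero delay) or join the queue."""
    global server_busy, num_delayed
    heapq.heappush(fel, (clock + next_interarrival, "arrival"))
    if not server_busy:
        num_delayed += 1        # zero delay, but he still "went through"
        server_busy = True
        heapq.heappush(fel, (clock + service_time, "departure"))
    else:
        queue.append(clock)     # store his arrival time for the delay calc

def departure_event(clock, service_time):
    """Departure routine: make the server idle if the queue is empty;
    otherwise pull the first guy from the queue and schedule his departure."""
    global server_busy, num_delayed, total_delay
    if not queue:
        server_busy = False
    else:
        arrival_time = queue.pop(0)
        total_delay += clock - arrival_time
        num_delayed += 1
        heapq.heappush(fel, (clock + service_time, "departure"))

# A tiny deterministic walk through the two routines:
arrival_event(0.0, next_interarrival=2.0, service_time=3.0)   # starts service
arrival_event(2.0, next_interarrival=99.0, service_time=0.0)  # has to wait
departure_event(3.0, service_time=1.0)                        # first guy leaves
print(num_delayed, total_delay)                               # 2 1.0
```

In a full program, a main loop (the timing routine) would repeatedly pop the smallest entry off `fel` and dispatch to one of these routines until the end-of-simulation event comes up.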

That was a hassle. How would you like to have to do that yourself? I guess if you like to do programming,
that's fine, but event scheduling is just too tedious, and it makes me so angry. Let's now talk about
Process-Interaction. This is one of the great achievements of modern simulation: it helps you model and
program things a lot more seamlessly. So, how would we handle such a generic model in a
process-interaction language like Arena? Here's what you do, and I'll talk about what process-interaction
means in a minute.

● I generate customers every once in a while. That's the Create step: I create customers, and every once in
a while, they show up.
● I process (serve) the customers once they show up. Maybe they have to wait in line a little while, but
eventually they're processed, they're served.
● And when they're done, they leave: I dispose of them. I dispose of the customers after they're done being
served, done being processed.

And that's pretty easy. It's so easy!

So this is the approach that I'm going to use in class. I'm going to concentrate on a generic customer, a
generic entity, and the sequence of events and activities that it undergoes as it goes through the system. This
is what Arena does so nicely, this is what all the simulation languages do so nicely. You concentrate on one
generic guy and what it does. The simulation language is going to keep track of how that generic guy interacts
with all the other generic guys. So at any time, the system may have a lot of customers interacting with each
other as they compete for resources (machines). We've got this process, the generic customer goes through
the process, and he's interacting with other generic customers that are going through the process. That's why
they call it Process-Interaction.

So you do the task of modeling the generic customer, but you don't have to deal with the event bookkeeping.
That's the horrible stuff, the stuff that the simulation handles deep, deep down inside the program itself. In
fact, the dirty little secret is that a Process-Interaction simulation language like Arena is doing event
scheduling underneath, but it's completely hidden from you. It's deep inside the language. You as a modeler
are doing Process-Interaction; the simulation program, deep down inside where you can't see it, is doing
event scheduling. If I were to ever ask this question on a test: when you model in Arena, are you doing
process interaction? The answer is yes. Is Arena deep down inside doing event scheduling? Yes, but I would
characterize Arena as a process-interaction language, not an event-scheduling one. It may be doing event
scheduling, but that's hidden from you. So the correct answer on the test is that Arena is a process-interaction
approach, and this saves a lot of programming effort.
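To see that "dirty little secret" concretely, here is a toy kernel in Python: the modeler writes a customer as a generator that reads like create, process, dispose, while underneath, the kernel is doing ordinary future-event-list event scheduling. This is of course not how Arena is implemented; it's a minimal sketch of the layering. (The customers here just delay themselves, i.e., self-service; a real language also handles resource contention.)

```python
import heapq
import itertools

class Env:
    """A toy process-interaction kernel: user code reads like one customer's
    life story, but underneath it is ordinary FEL event scheduling."""
    def __init__(self):
        self.now = 0.0
        self.fel = []                        # (time, seq, process-generator)
        self.seq = itertools.count()         # tie-breaker for equal times
    def schedule(self, delay, proc):
        heapq.heappush(self.fel, (self.now + delay, next(self.seq), proc))
    def run(self):
        while self.fel:
            self.now, _, proc = heapq.heappop(self.fel)
            try:
                delay = next(proc)           # resume the process...
                self.schedule(delay, proc)   # ...and put it back on the FEL
            except StopIteration:
                pass                         # process disposed

def customer(env, name, service_time, log):
    # Create -> Process (here, a pure self-service delay) -> Dispose
    log.append((env.now, name, "arrives"))
    yield service_time                       # "process" the customer
    log.append((env.now, name, "leaves"))

env, log = Env(), []
for i, t_arrive in enumerate([1.0, 3.0, 4.0]):
    env.schedule(t_arrive, customer(env, i, 5.0, log))
env.run()
print(log)
```

The modeler only wrote `customer`; the event list, the clock jumps, and the interleaving of the three customers were handled invisibly by the kernel.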

Here's an example, as I kind of mentioned before. A customer is generated, eventually gets served, and then
he leaves. How do you do it in Arena? You create the customer, you process him, you dispose of him.
Dispose is kind of a nasty word, but I guess it's better than "terminate," which is what they use in the language
GPSS/H, or "send to die," which they use in the language AutoMod. So Arena's kind of nice in terms of getting
rid of customers. You create the guy, you work on him, you process him, then you get rid of him, you dispose
of him. Easy! It's very nice. Now, Arena, like any other simulation language, adopts this approach. This is why
you use a simulation language; I wish I had thought of it. It is really easy.

So here's a summary of what we just did. In this lesson, I talked about the event scheduling and process
interaction general modeling approaches and we decided that process interaction is what we want to do. In the
next lesson, I'm going to give a short general overview of the various simulation languages and that'll end the
module.

Lesson 5: Simulation Languages

And we'll be finishing up our discussion on general simulation principles with a quick high-level discussion on
simulation languages in general. Here's the overview.
● Last time, I looked at two high-level simulation modeling approaches, event scheduling and process
interaction. We decided that we like process interaction a lot more.
● And this time I'll discuss simulation languages themselves in a very generic way. And it turns out there
are a lot of simulation languages out there, so choose carefully.

In fact, there are maybe 100 commercial languages in the ether, maybe more; I've lost track. There are
publications that list them all, coming out every three or four years, and in the trade magazines you'll see
synopses of many of the languages. It makes good bedside reading. Here are some examples of simulation
languages, and I've sort of ordered them from low-level to high-level, although all the commercial languages
are about the same in some sense. So you can start out at low level with good old friends like FORTRAN,
SIMSCRIPT, and
GPSS/H. These are really sort of old fashioned guys. I might even put C++ in there, which is not old-fashioned,
but these are regular programming languages that people actually wrote simulations in. SIMSCRIPT actually is
sort of a specialty language, and GPSS was a specialty language. As the years went on, along came
commercial languages like Extend, Arena, Simio, AutoMod, and AnyLogic; I'm missing a lot of good ones, and
I apologize for not including them here. These are typically very nice languages where you have to do a lot
less work than in lower-level languages like FORTRAN. I'd say at universities, there are five to ten major
players. But in
industry, if you do these commercial packages, the price range is kind of daunting. I mean you can get them for
less than $1,000, but specialty simulations maybe start at $100,000, which is nuts. But, depending on what you
need and how much time it's going to save you, maybe it's worth it.

The nice thing is, there's some freeware available, including several nice packages in Java and Python.
There's a cute little program called SimPy, for "Simulation in Python." Now, in these languages, there might be
a little bit of a learning curve, and there's not as much documentation, but it's not too bad. I've written up
projects in these packages before, and they're actually quite nice, and you don't have to spend the money
purchasing a commercial package.

Now, there are a number of things that you ought to take into account when you're selecting a language.

● Cost considerations, how much is it actually going to cost you, what's the learning curve, the
programming cost, runtime cost? There's a lot of ways that you have to pay money even in just programmer
time.
● What is the ease of learning? Is there good documentation? What's the syntax like? What's the flexibility?
● What's the world view? Now, I've mentioned, I don't really like event scheduling that much, sometimes
you can use it. But process interaction, you probably want to look at that. If you're looking at models like
weather patterns and certain differential equation models, you might need to have continuous modeling
capability or a combination of these things. So your world view could be lots of different things.
● Lots of features. Which packages have good random variate generators (that's RV as in random variate,
not recreational vehicle)? What about the statistics collection, the debugging aids, the graphics? What about
the user community? There are lots of different features that play into the adoption of a language.

Now, where can you learn about simulation languages?

● Well, right here, it's nice to have you here and I'm so happy that you joined.
● You can also, if you're a bit of a bookworm, take a look at the textbooks, especially in conjunction with
this class. I mean, we have some recommended books. My notes are nice, I hope.
● There are a lot of very nice conferences, friendly conferences with a nice community. Winter Simulation
Conference comes to mind, there are many more.
● And you can also go to vendor short courses. This may cost you a little money, you might have to buy
the package. But they usually provide pretty good food, so I guess it's worth it just for that.

So here's a summary as we end the module. I just went over some very high-level considerations in choosing a
discrete-event simulation language package. And we're now done with the module. We discussed simulation
terminology, what's under the simulation hood, a little bit on languages. In the next module, I'm going to
introduce and finally give some more or less formal tutorials on Arena, the simulation language that we're
using in this course. This is really fun, it's almost like a computer game, and it's a fairly powerful modeling
package. So I think you'll have a lot of fun with the next module. Looking forward to that. See you soon.

Module 5

Lesson 1: Introduction

Hi everybody, so we're finally going to be starting module 5 on the Arena discrete event simulation language
and I'll be doing an intro today.

Here's the overview of the module.
Now, of course, in the last module we looked at general principles behind simulation languages: how do they
work? We kind of went deep down inside the simulation. In this module I'm actually going to look at a specific
simulation language, Arena. The idea is that we'll go through a series of tutorials, and when you're done with
these things, you can fearlessly simulate anything in the real world. I'll give you a bunch of tricks that will allow
you to do general simulations.

Now this module is built on numerous mini-tutorials, each of which has a little bit of a trick or some information
that you have to learn. Each is going to be self-contained and will focus on a specific topic, but they'll all be
aiming toward a common goal, so after a few of these mini-tutorials we'll do a sort of non-trivial demo. Then
we'll go to the next set of mini-tutorials, and by the end of things you'll be able to do a lot of different types of
models. Each lesson contains some of these written notes that you're seeing on the screen now, and you can
go through and look at some examples on the screen, but I want you to watch the live demos, because I'm
just giving you notes right now and you really have to see how things work while the simulations are running
for real. In fact, one thing that some of the kids do in the live classes is that they bring their computers and
follow along, working things in Arena as I work in front of them.
So a little bit more overview. Like I said, I'm going to be grouping the lessons into sets of roughly equivalent
topics or related topics.
● So right now I'll be spending the first few lessons on introductory material, which is what we'll be doing
now.
● Then we'll do some multi-channel systems. That just means you can have models where things are not
moving linearly; they're coming in from different places. You have different types of arrivals, more of a
network-type structure.
● And then I'll build up to a call center model. So it's going to be a little bit more sophisticated. It's going to
have crazy arrivals and crazy workers with goofy schedules. It's going to be a very interesting model. You'll be
able to do that, and once you're done with that, you can be a consultant.
● And then I'll give some demos on numerous interesting other types of models that you can work on
using Arena or any simulation language.

So some notes. Arena is really easy, fun to use, easy to learn. But if it's not your cup of tea for whatever
reason (maybe you like another language, maybe your company already bought another program), there are
many other simulation languages around, and they're great. Arena turns out to be a good choice to enable you
to learn those languages. So if you learn Arena, you can learn a lot of other things. But in any case, I'm going
to do lots of examples and you'll be able to find a lot of useful applications. So Arena is certainly a good
jumping off point.

Now here's what we'll do in the next few lessons. I've got to cover some basics. That's the purpose of these
first guys.
● So I'm going to talk about process interaction. We kind of know about that a little bit from the previous
module.
● We'll meet Arena, and I'll show you how to get it.

● We'll look at what's called the basic template, which has sort of the baby things in Arena.
● I'll look at the Create, Process, and Dispose modules from the Arena template. These are the three basic
commands. The word "module" here is a word in Arena that means command.
● And then I'll look at the Process module very carefully.
● Then we'll look at resources (which just means workers), schedules (both worker schedules and arrival
schedules), and queues.
● Then finally, for this introductory material, I'll look at a Decide Module, which is basically an if-then
statement.

So the summary? Well, we just discussed what's coming up in this lesson. In the next lesson I'm going to be
doing a brief review of process interaction. We've kind of seen some of this before, and we'll use some of the
terminology for the rest of the module. I thought it might be a good idea to reintroduce what
Process-Interaction is, because that's what Arena relies on. So I'll see you in the next lesson. This is going to
be fun.

Lesson 2: Process Interaction Review

So we're just starting to get into Module 5 on the Arena simulation language, and before I can show you Arena,
I want to just do a quick review of the Process-Interaction approach to modeling.

So here's the overview. Last time I just discussed what's coming up in this module. Now I'm going to do a very
quick review of the Process-Interaction approach that Arena uses, and we talked about this in the previous
module. So the basic idea is just a flow chart, that's all, and it's a flow chart based on generic customers. What
does one guy undergo as he passes through the simulation?

So here's Process-Interaction.

● I want to consider a generic customer (in Arena we call the customer an "entity") and the sequence of
events and activities that it undergoes as it moves through the system. We'll call these things processes. So
an entity (a customer) moves through various processes. Inside the processes, we'll find servers and queues
and things like that.
● Now, at any time, the system can contain many of these generic entities interacting with each other. They
might be competing for a server. Only one can get the server at a time, so maybe some of them have to stand
in line. Maybe some of them have priority over the others. So these entities are competing, they're interacting
with each other, to gain access to the processes. That's why this is called the Process-Interaction approach.
● So Arena takes this Process-Interaction worldview or approach. It looks at the generic customers: how
they move through the system, how they compete with each other, what they need, how long it takes to serve
them, that kind of thing.

So Process-Interaction continued.

● The entities flow through a network of modules. Remember, in Arena the word "module" means something
a little bit different than in our lessons, so I sometimes call modules blocks; you'll see what I mean in a little
while. The customers flow through this network of modules that describe their behavior.
● And the network can be represented very nicely as a process flow chart.
● So for example, let's suppose that people show up at the barbershop, get served (maybe after waiting in
line, in the barber's queue), and then leave. A generic customer shows up and tries to get served. Eventually
he gets served, after waiting in line, and then he leaves. He's really only doing three things.
● In Arena:
○ You generate the customer arrival using a Create block or Create module.
○ You process the customer: you use the barber. Maybe you have to wait in line, but you attempt to use the
barber, and eventually you do.
○ And then you leave. You're out of here. That's called a Dispose module, so that's how you do the
program in Arena. Click, click, click you're done.

So this is what it looks like. If it quacks like a flow chart, then that's what it is. So here's what's happening.

● I generate the arrivals. That's called Create. Create customers.


● Then I try to get a haircut. This is called a Process module. I try to get a haircut; if the single barber, say,
is busy, then I wait in line. So you have a little line forming here.
● And when I'm done, I move on out of here using that Dispose block.

Now Arena is very nice. You can see I've generated 62 customers so far. Four of them are inside the process
(one getting served, three waiting in line) and 58 have left. It keeps track of everything. Very, very nice.

So here's a summary of what we just did in this very, very simple lesson.
● I talked about the Process-Interaction approach. We've seen that before in the previous module, but now I
talked about how it relates to Arena. We even looked at a little baby Arena program, just in a picture.
● In the next lesson, I'm finally going to take our first look at Arena itself: how to get it, and a little bit of a
sneak peek. This will be very nice. It will be the first date, so to speak.
So anyway, I hope to see you in the next lesson. Get ready for Arena.

Lesson 3.1: Let's Meet Arena

Well, it's finally time to get to meet Arena. We're in module 5 and just started with some introductory material.
Now we're going to get to the good stuff.

Here's the overview for this lesson. Previously, we looked at the Process-Interaction approach, which is very
applicable to Arena because that's what it uses: it models how customers go through the system,
encountering different processes. In this lesson, we finally meet Arena. I'm going to show how you get the
software, and we'll take a first look at the screens and a little bit of the functionality, and then we'll do an intro
demo at the end of the lecture.

So, getting Arena: it's really easy. It's very simple to download and install the free student version of Arena,
which is very, very nice. You just go to the website: it's a Rockwell Software product, and you can go to
www.Arenasimulation.com to get your free student version. It's a Windows product; if you're actually using
Windows, that's fantastic. There's certain Arena stuff that you're going to have to look for that's stored deep
down in a Rockwell Software directory. I don't want you to worry about it for now; there are a lot of goodies
around that we'll occasionally search for, just not yet. I just wanted you to be aware of that. If you don't have
Windows, it's not too much of a problem. First of all, you can use the Arena that's available via the Georgia
Tech virtual labs, or you can be totally nerdy and partition your disk to run the Windows operating system, and
then you sort of have pretend Windows.

That is what the Arena work screen looks like. You see a lot of buttons up on the top. They all have certain
properties: some of them help you draw things, some of them help you get graphs. We'll talk about those as
we go on. Here's the main work area; this is where you're going to drag and drop stuff and connect it up. Over
here, that's called the Template panel; it's where you get most of your stuff. And down here is where there's
some runtime information and certain spreadsheet information that's occasionally useful.

So here's what the main dialogues look like.

● File, of course, lets you do the usual stuff: open an old file, make a new one, close, save, et cetera. But it
also allows you to import different template panels and background pictures. Remember, we had this template
panel called the Basic Process panel. It turns out that there are many Arena template panels, and you need to
go to File to get those; we'll show you how to do that later.
● Edit allows you to edit various things, insert objects, other nice stuff.
● View allows you to gain access to the different toolbars, customize what are called “named views.” So
you click on a key and it pops over to a particular part of your simulation for nice easy viewing. Very very nice.
● Tools. Lots of cool toys.
○ There's the Arena Input Analyzer. What that does is fit distributions for you.
○ OptQuest does some optimization.
○ AVI Capture allows you to capture runs so you can show your friends.
○ Arrange. This basically gives you some visualization aids. I use this occasionally to make things look
pretty. The same thing with Object and Window; they're just to make things look pretty.
● And Run. Of course, that enables you to set up runs, deal with runs, make the runs go fast, or
step-by-step, or do batch runs. All sorts of things.
○ And this is the set of buttons that enables you to do runs very quickly. This one starts a run. This one
allows you to go step-by-step, so it's a lot slower. This is fast-forward. This pauses. This one completely
stops the run. And this little fella allows you to make the run go faster or slower.

Here's what the Basic Process template looks like. Basic Process template, Basic Process panel: those
words are all used interchangeably. So look at this. You've got a bunch of different icons here. The purpose
is to do really, really easy stuff that even a University of Georgia student might be able to do, at least on a
good day.

● The panel consists of these icons here. These are called modules (I call them blocks sometimes, but
they're really called modules): Create, Dispose, Process. We've heard those words a couple of times already.
What you do is drag them out to the main work area and make your little models with them.
● Then this stuff down here is related to spreadsheets. This one keeps track of the different types of
customers; this one keeps track of properties of the customers. Attribute and Entity spreadsheets... there's a
bunch of these as well, and we'll talk about them as we need them.

We finally got to meet Arena. We downloaded Arena (I showed you how to do it), and then we reviewed what
the main screen looks like and some of its characteristics. Next time I'm going to take a closer look at the
Basic Process template in more detail, and it's the components of this template that will allow us to put
together our first simulation. This will be a lot of fun; we'll finally start to do some real simulation. So we'll see
you in the next lecture.

Lesson 3.2: Let's Meet Arena Demo

Here's our demo for the introductory material, as we meet Arena. What I'm going to do is just show you
around the screen a little bit. This is the main Arena screen, as we outlined in the notes. What I'd like to do is
make a tiny little drag-and-drop model. I don't need to give you any details on this. Here's how we do a
model.
● I'm going to create some customers.
● I'll zoom in using the plus button.
● I will grab what's called a Process module. Look at that: it connected itself up automatically. That's cool.
● And then I'll dispose of the customers. Again, it connects up automatically.
So, here's what I'm doing, and we'll talk about this in a lot more detail later on. I'm creating customers every
once in a while, I'm processing the customers, and then I'm disposing of them. You don't need to see any
other details. I click the Go button here (that's the Go button), and I'm going to reduce the speed (this is a
speed factor; let's reduce it a little bit). Here's what you get. See, these are customers flowing through,
getting self-service it turns out, and then leaving. They show up over here and they leave. That's what this
thing is doing.
● If I wanted to, I could pause it right there.
● I could go click, click; this is stepping ahead one event at a time, so you can see that every time I click,
I get another event. That's kind of cool.
● And then this stops the simulation, right there. See how that works?
So you'll notice what I did: I did a drag-and-drop of these modules, made my little program, and then I was
ready to run. What we'll do later on is click inside, and you can see that each module has various properties.
● So this is saying - I'll just be very informal here - that customers show up according to an exponential distribution with mean one hour, so interarrival times are about an hour apart on average.
● They do self-service over here. They're just delayed. They haven't, they haven't grabbed any servers
yet. I'll show you how to do that later on. The amount of time it takes in that little process area is a triangular
distribution with the mean of about one hour.
● And then they leave so you can see what an easy little easy little program it is
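As a quick aside, the create, self-service delay, dispose flow just described can be sketched in a few lines of ordinary code. Here's a rough stand-alone Python sketch - this is not Arena code, and the triangular endpoints 0.5 and 1.5 are assumed here just to give a mean of about one hour:

```python
import random

def create_process_dispose(n_customers=1000, seed=42):
    """Sketch of the demo model: customers arrive with Expo(mean = 1 hour)
    interarrival times, self-serve for a triangular amount of time (nobody
    waits, since a plain Delay never competes for a server), and leave."""
    rng = random.Random(seed)
    clock = 0.0
    total_service = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(1.0)                  # interarrival, mean 1 hour
        total_service += rng.triangular(0.5, 1.5, 1)   # low, high, mode (hours)
    return clock, total_service / n_customers

end_time, avg_service = create_process_dispose()
```

With mean-one interarrivals, the 1000th arrival lands near time 1000, and the average self-service time is near the triangular mean of (0.5 + 1 + 1.5)/3 = 1 hour.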

Now, if I want to move around, I use the arrow keys: left, right, up, down.
If I want to zoom like I said before, plus zooms in, minus zooms out. So there you go, there's the end of the first
demo. We'll start programming next time.

Lesson 4.1: Basic Process

We'll continue in Module 5 where we're talking about Arena, the simulation language. Now we'll talk about the
Basic Process template which we introduced in the last lesson.

I'm going to go into a little more detail now. We actually got to meet Arena and go on our first date with it in the last lesson. Now we'll look at the Basic Process template in a little bit more detail. It's a really nice, easy template, and we'll be able to use the items from this template to make our initial simulations. It's quite easy. All we have to do, like I showed in the last demo, is to click and drag reasonable items to build the simulations. Very, very easy.

So here's the top half of the Basic Process template. I could widen it and put them all in one picture, but I wanted to show you this top half because these are so-called Modules. These guys are sort of little subroutines, little commands that you can do, and these are the things that you drag onto the main screen to do stuff.

● This is the top half of the basic process panel. It does very, very basic stuff that a UGA guy can handle
on a good day.
● Now, if you want more advanced stuff - suppose you're not interested in merely creating, disposing, and processing - you can do just a little bit of work to get much more interesting and advanced templates. You go to File > Template Panel > Attach, and then, depending on where you've stored stuff, you sniff around for it and you can grab a bunch of more interesting templates. I'll show you how to do that later on in the demo.
● Now, this panel that we have on the left of the screen, the Basic Process panel, consists of Modules such as Create, Dispose, and Process, and we'll connect them up together as a flowchart like we did in the last demo. "Create" generates arrivals. "Dispose" gets rid of them. And "Process" kind of grabs a server and maybe makes you stand in line.
So all these all these modules have particular responsibilities, so we'll drag these over to the work area, build
the flow chart, fill in a few numbers, hit the Go button, bang - we've got a simulation.

● Now, this is the second part - the bottom half of the panel. This area is sort of more informational.
● These items, such as Attribute, Entity, et cetera - these are spreadsheets that are both informational and will also allow us to easily change certain system parameters, like maybe the number of people who are working at the barbershop, things like that. You can go into these spreadsheets and change those parameters quite easily. Maybe the service rate - all sorts of things you can change.
● Here's an example: the Variable spreadsheet (this guy right here). The Variable spreadsheet defines global quantities - that's what a Variable is. Maybe you've got a variable called WIP, for Work In Process, that gets updated as the simulation progresses: as a guy shows up, WIP goes up by one; as a guy leaves, WIP goes down by one. That spreadsheet defines quantities like that and keeps track of them. The Resource spreadsheet keeps track of the names and the capacities of the different servers, the different resources. You can go and change those capacities, or the capacities can change by themselves over time if you make a Schedule. You can do all of this very conveniently by going to the spreadsheet.
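As a rough illustration of the bookkeeping behind a Variable like WIP, here's a tiny Python sketch - this is just the idea of a global counter bumped by events, not Arena's internals:

```python
# A global WIP counter, bumped at every arrival and departure event.
wip = 0
wip_over_time = []

def on_arrival():
    global wip
    wip += 1                      # a guy shows up: WIP goes up by 1
    wip_over_time.append(wip)

def on_departure():
    global wip
    wip -= 1                      # a guy leaves: WIP goes down by 1
    wip_over_time.append(wip)

# A short sample event sequence.
for event in ["arrive", "arrive", "depart", "arrive", "depart", "depart"]:
    (on_arrival if event == "arrive" else on_departure)()
```

After this sequence, WIP is back to zero, and its recorded history shows it peaked at two customers in the system.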

So here's our summary for this lesson. This time we reviewed the modules and the spreadsheets that are available on this really easy Basic Process template. In the next lesson, we'll learn specifically about the Create, Process, and Dispose modules - these come from the template - and we'll build our first official simulation. We'll look at this in much more detail in the next lesson. So see you then.

Lesson 4.2: Basic Process Demo

Now I'm going to give a demo on this Basic Process template and we'll just talk about a couple of things that
you can do with it. Then I'll talk about how you can get other templates if you so desire.
So this is the little baby model that I put up in the last demo: a Create block, a Process block, and a Dispose block. Customers are created every once in a while, they do self-service here - it turns out they do a little job; I'll show you how to get a real server later on - and then they leave. Create, do something, leave, and every customer goes through the same pattern.
So what I did was just drag stuff over. If I want to, I can have them do a couple of processes.
● So let's click this. I click on it and hit delete, so I've deleted my connection. Don't worry about how I did this specifically; you can play with that later.
● I'm going to grab another process and insert it, and you'll see it connects up automatically. This is what you click if you want to connect it yourself. So there you go.
So I've got two processes now. Create a customer, he does something, he does something else, he leaves. So
it's very, very simple. You see, look at this. It's very interesting - information has magically appeared here. It's
keeping track of which processes are around. It keeps track of which Create blocks are around.
And if I go here to the Resource spreadsheet, well, it turns out I don't have any resources. I can make one right here: I'll do what's called a Seize-Delay-Release - we'll talk about that later on - add a resource, and call him Barber. Again, don't worry about this for now.

Go back to the Resource spreadsheet - well, look at that. It knows that I have a resource called Barber, and now I can run the simulation. It looks like this. It doesn't look real pretty, but it will at some point. So that's how you make an easy little program in Arena. We'll go into more detail on that later on.

One other thing that I promised to show you in this demo is how you can go and load more advanced
templates. Now we're going to be spending some time learning this stuff right here, but once we're done with
that, you might want to go and look at the Advanced Process templates. So how do you do it? File > Template Panel > Attach (what a surprise). Now, here's where the adventure begins. I've got this random location that I'm in right now, so I've got to go searching around. I happen to know where the thing is. I'm currently in a Dropbox folder and I really need to get to the computer, so I'll go to the C drive, and it turns out these templates are stored in Program Files. Then I go down to Rockwell Software - hopefully this is easier for you - then Arena, and then Template. So, for instance, I can grab the Advanced Process panel like I promised. And look at that - a bunch more stuff appears here, things like a Delay and a Seize and all this stuff. And if you're a spreadsheet person, a number of additional spreadsheets. So that's how you do it. If we
ever look at an advanced model, which we will later on, you'll know where you can grab this template from. So,
OK, well that's the end of the demo and we'll see you in the next lesson then.

Lesson 5.1: Create Process Dispose Modules

We're continuing our little adventure with Arena in Module 5. What I'll look at in this lesson is a little trio of modules called Create, Process, and Dispose.

So here's our overview for this lesson. Last time, we learned about the Basic Process template, which is where we're getting our easy preliminary stuff from. In this lesson, we'll discuss the Create, Process, and Dispose modules that can be found in that template, and we'll build our first official simulation. I mean, we've run simulations already, but now we're going to build one from scratch, and we'll care about the inputs and things like that.
Now, these modules - Create, Process, Dispose - are very easy, but there's a lot of stuff going on inside of them, and I'll go into some detail on just that topic.

So here's the Create-Process-Dispose trio.

● Create - periodically we generate customer arrivals.


● In the Process block (Process module), work is performed on the customers, and maybe these guys have to wait in line.
● Dispose - customers just leave the system after they're done. Here are a couple of fun facts:
○ Dispose is called "TERMINATE" in the language GPSS. In every language you have to get rid of customers somehow, and
○ in AutoMod it's called "Send to Die." These are actual names, so I think Dispose is quite a reasonable name considering the competitors.

So here's how to use this stuff.

● I just drag and drop from the template: we go to the template and drag modules onto the main work area. We've done that a couple of times already.
● The modules usually, but not always, connect automatically, and I quickly showed you how to connect them last time. This time I'll do it a little more carefully, so you'll get something that looks like this. One way or
the other, they connect and we have a Create connecting to a Process connecting to a Dispose. So we create
the customer. We work on them. They leave.
● The connects are actually instantaneous for customers, so even though they appear to be moving
between the modules, it's actually taking no real time. So connects are instantaneous travel times. We'll learn
how to make travel times actually positive at some point, not just yet though.
● So what we'll do at that point is make this little model, hit Go on the icon, and you see the guys moving around.

● So, I have to admit, the customers at this point don't look very good, and we don't see any lines forming. I mean, I showed you right at the end last time that we had some lines. Just be patient - in the subsequent lessons and demos I'll show you how to have lines form and make things look better.

So let's dive a little bit more deeply into the Create module. We’ll do the same for Process in a little while.

● So let's click into the modules and see what they need from you. You're going to need to put in some numbers.
● Here's an example: the Create module. It looks like that - a lot of stuff there.
● There are fields for the
○ name of the module (it's highlighted). The name of the module is actually occasionally important. First
of all, it's displayed so you can see it when you're running the model, but it's also useful for other things.
Sometimes you need it to say “I want to go here” or “I want this queue.” So you do need to be a little careful
naming the thing. You're not allowed to name two things the same name. So Arena will catch you on things like
that. But anyway, that's where you name it, and that's usually what appears on the when you look at the model
itself.
○ You also want the type of customer (type of entity). That's given right there. Uh, it's defaulting here to
Entity One, but you might want to call it Part A or Barber Customer, or you can name it anything you want, but
that's the type of customer you have now.
○ You're also interested in generating arrivals: what's the time between the arrivals? Well, in this case, it's Random (Expo), which means it's an exponential distribution. If you look to the right, the parameter is one - and that's actually the mean. So, unlike when we did the probability review, that number is going to be a mean, not a rate. If you had Random (Expo) with parameter seven, that would mean the mean interarrival time is seven. I'll look at a couple more of these in a while during the demo.
○ Number of customers per arrival. Well, if customers show up one at a time, that's great. But what if you
go into a restaurant? Customers will show up several at a time, in different-size groups, and I'll show you how to do that in more generality in the demo. Right here, it defaults to one customer per arrival.
○ And then what's the maximum number of arrivals allowed? Well, sometimes you want to end the
simulation after 100 guys show up. Here, we're letting an infinite number of customers show up and we'll end
the simulation some other way. In this case, the default is that it runs until you tell it to stop. Here we're not telling it to stop, but if I put, say, 100 there, it would stop after 100 arrivals.
○ Then First Creation means the time that the first guy shows up - right there. In this simulation, a guy is waiting at the door and shows up at time zero. I can change that as well.
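That mean-versus-rate point is worth a quick sanity check. Python's standard-library `random.expovariate`, for example, takes a rate, so Arena's EXPO(7) - mean seven - corresponds to rate 1/7 (a sketch for intuition, not Arena code):

```python
import random

rng = random.Random(1)
mean_interarrival = 7.0  # Arena's EXPO(7): the 7 is a MEAN, not a rate

# random.expovariate expects a rate (lambda), so pass 1/mean.
samples = [rng.expovariate(1.0 / mean_interarrival) for _ in range(100_000)]
sample_mean = sum(samples) / len(samples)
```

Averaging 100,000 draws lands the sample mean very close to seven, confirming the parameterization.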

○ Process and Dispose modules - well, here's what the Process module looks like. There's actually a lot more space in there, and you'll see the word Delay appears in the middle of it. Many, many more options are available, and we'll talk about those in the next lesson. The Process block has fields for the name of the module, just like before, and for the type of action. This is the tough part: this is where you can try to reserve a server or worker, or free a server who's currently in use. We're going to be looking at Seize-Delay-Release in the next lesson. We only see the word Delay right now, but we're going to be looking at many more general possibilities with that drop-down.
More on this in the next lesson. So, what action do you want? Do you want just Delay? All that means is customer self-service. Seize-Delay-Release means I want to grab a Barber, delay him for a while while he works on me, and then release him. We'll talk about that in the next lesson. How long are you going to be delayed - that is, what's the service time? Well, right here it's a triangular distribution; that's how long you're going to be in that Process module before you can leave. Then the Dispose module just means you're out of here. Nothing deep going on - you just use that to get rid of entities. And you could also name that module, too, in the same way.
So, a summary of today's lesson: we did a lot of stuff. I talked about specific aspects of the Basic Process template's Create, Process, Dispose trio of modules. Next time we're going to take a deep dive into the Process module specifically - it does a lot more than you might think. There are a lot of possibilities there. So, OK, I'm looking forward to the next lesson. Make sure you look at the demo and then you'll be prepared for what's coming up next. See you soon.

Lesson 5.2: Create Process Dispose Modules Demo

What I'll be doing in this demo is going over aspects of the Create, Process, Dispose trio of modules, and then I'm going to sneak in a bunch of functionality from Arena. So, for the third time - fourth time, I don't know - we'll Create, we'll Process, we'll Dispose. We're generating customers: they're going to show up, do self-service for now, and then leave one at a time. We'll just draw the picture again and run it - there they go. And then I'll stop there.
OK, fantastic - so this is boring, just watching these little pictures. Let's double-click inside and see some of the stuff that you can do. Oops, I moved it - I do this all the time. Double-click, and see, it's got all this stuff. There's the name, like I promised. This is the entity type - not a very sexy name, Entity 1. This is the time between arrivals. Right now they're occurring according to a Poisson process, so the time between arrivals is exponential with mean one; that's why we put the word Random, because that really signifies exponential. What else could I do? I could have a constant interarrival time, so they show up exactly every one hour. I can change the units so they show up every minute - you see, very easy. I can have them show up according to a Schedule; we'll talk about that later on, and I don't want to touch that now. I can also have them

arrive according to an Expression. An Expression is sort of just a general expression for the interarrival time, and it could be something like a uniform distribution between 2 and 4 minutes: UNIF(2,4). I've just shown you an Arena expression - congratulations. How about that uniform expression plus one, UNIF(2,4) + 1? What that means is that we sample a uniform and then add one to it. You know, that's not very sexy; we could do more interesting things - maybe times a normal distribution with mean three and standard deviation (not variance) 0.5. Well, we'll learn all about this later on, but you can build these wonderful, completely general expressions and they work really well. Or I'll just go back, if I want, and write EXPO(10), so every 10 minutes on average, according to an exponential distribution, customers show up. That's exactly the same as writing Random (Expo) with mean 10.
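Those expressions have direct analogues in most languages. Here's a rough Python rendering of the ones just shown - the parameterizations are the point, not the syntax:

```python
import random

rng = random.Random(0)

# Python analogues of the Arena expressions from the demo:
unif_2_4 = rng.uniform(2, 4)                  # UNIF(2,4): uniform on [2, 4]
shifted  = rng.uniform(2, 4) + 1              # UNIF(2,4) + 1: sample, then add one
mixed    = rng.uniform(2, 4) * rng.normalvariate(3, 0.5)  # times NORM(mean 3, sd 0.5)
expo_10  = rng.expovariate(1 / 10)            # EXPO(10): exponential with MEAN 10
```

Note that `normalvariate` takes a standard deviation, matching Arena's convention, and `expovariate` takes a rate, so EXPO(10) becomes rate 1/10.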
How many entities - how many customers - per arrival? I could write 2 here, and then two customers would show up at a time. I could write POIS(3), and then a Poisson-distributed number of customers would show up; those are integers, so that's perfectly OK. Sometimes I'd have one customer show up, or three, but sometimes I'd also have zero. So, just to be safe, I could put POIS(3) + 1, which always guarantees me that at least one customer shows up per arrival.
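Python's standard library has no Poisson sampler, but the POIS(3) + 1 batch-size trick is easy to sketch with Knuth's classic algorithm (a stand-alone sketch, not Arena code):

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler: multiply uniforms until the running product
    drops below e^(-lambda); the count of uniforms, minus one, is the draw."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

rng = random.Random(3)
# POIS(3) can return 0, so "+ 1" guarantees each arrival brings somebody.
batch_sizes = [poisson(rng, 3) + 1 for _ in range(1000)]
```

Every batch has at least one customer, and the average batch size hovers near 3 + 1 = 4.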
And those arrival events occur every EXPO(10) amount of time on average. OK, very simple - I'll go back here and set it to one. Now, if I want to be really complicated, I can right-click, and look at this: there's this thing called Build Expression. What that does is allow me to build mathematical expressions. I'm not going to go into the details here, except just to show you that you can do anything.
So, for instance, I can build expressions using different probability distributions. Here's a gamma (what a surprise), a beta distribution, a lognormal, a normal - all these different distributions. There are various math functions - truncate, maximum value, exponentiate, natural logarithm - all sorts of cool things. I'll also show you entity-related variables, so you can look at different attributes or characteristics of your customers. One other thing I'd like to show: Basic Process variables - look at this, process costs and times. These are all things related to Arena that you can build with. I'm not expecting you to remember all of these, but look at all that stuff - replication variables, string variables, all sorts of really cool things - and I'll give you one at the end of this demo. OK, so that's how the Create module works.
The Process module is similar: you have all these different distributions. You can use a triangular, a normal, a uniform, or an arbitrary expression. Oh, I'd better change these units right here to be in minutes, to match up with my arrival rate. OK, great. Let me click back on this Delay right here. We saw last time that there was some interesting stuff that might be possible, and look at that.

We have Seize-Delay, Seize-Delay-Release, Delay-Release - all sorts of possibilities. We'll talk about these in the next lesson. Very, very interesting. And then you can Dispose. So at this point, I can run the thing. Maybe what I might want to do is just show you one other thing that you can do, like I did last time.
Remember, I clicked off and deleted that connection? That happens occasionally - sometimes it doesn't connect automatically. What do you do if that's the case? You go right up here and look: if I hover over it, it says Connect - this is a little Connect button. Click on that, then click, click, and there you are - you've connected it. Very wonderful. Sometimes, what I do - let's delete both of these: delete, delete - sometimes it's faster to double-click the Connect button, and then you can do multiple connects: connect here, connect here. So that's kind of nice. OK, so what we'll do in the next demo is concentrate right there on that Process module, because you saw that we just had a Delay there and that was boring. Let's see what happens when we do Seize-Delay-Release - in the next lesson. So that's coming up. Alright, see you then.

Lesson 6.1: Details on the Process Module

We're in Module 5 starting to wind our way through Arena, and in this lesson I'll talk about details on the
Process module.
In the last lesson, we learned about the Create, Process, and Dispose trio of modules, and we saw that Process might have a little bit more in it than meets the eye at first glance. So what we'll do in this lesson is learn much more about those goodies that are inside the Process module.
So here's the idea. The Process module allows you to grab servers if you need them and use them - maybe you have to wait in line, but you use them and then you let them go for the next guy to use. Along the way, it automatically sets up a line in case you need it, so it's a very nice module for that purpose - but it's even a little more general than that, as you'll see.
Let's use the terminology Seize-Delay-Release. We've mentioned these words before. We're inside of a Process module now, and we're going to do something called a Seize-Delay-Release. What this means is we're going to seize the server if possible, delay (him and us) while he works on us for a while, and then release him. Those are the three things that you would want to do with a server - grab him, use him, let him go - while you're inside the Process module.
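Outside of Arena, single-server Seize-Delay-Release logic boils down to a classic recursion: wait until the server frees up (seize), hold it for a service time (delay), then hand it to the next customer (release). A rough Python sketch, with arrival and service means chosen arbitrarily for illustration:

```python
import random

def single_server(n=5000, arrival_mean=1.0, service_mean=0.8, seed=7):
    """One server, FIFO queue. Each customer seizes the server (waiting if
    it's busy), is delayed for a service time, then releases it."""
    rng = random.Random(seed)
    t = 0.0            # current customer's arrival time
    busy_until = 0.0   # the moment the server is next released
    waits = []
    for _ in range(n):
        t += rng.expovariate(1 / arrival_mean)                  # next arrival
        start = max(t, busy_until)                              # seize: wait if busy
        waits.append(start - t)                                 # time spent in queue
        busy_until = start + rng.expovariate(1 / service_mean)  # delay, then release
    return sum(waits) / n

avg_wait = single_server()
```

Because the server is released after each service, the queue stays stable (utilization is 0.8 here) and the average wait settles at a finite value.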
Here are the different actions that you can take. You can do just a plain old Delay, like we did in the last lesson - boring, boring, boring. All that means is you're delaying yourself: you're spending time in the Process module all by yourself; self-service, time is passing, you're being delayed. You could also do this Seize-Delay-Release, and that's kind of the most popular thing: you grab at least one server (one resource) and you spend time getting served.

Maybe you have to wait in line; then you get served eventually, and then you free the server for the next customer. If you try to seize and the server isn't available - if he's serving other people - you may have to wait in a line, in a queue, and you can watch this happen. Now, you could also do Seize-Delay; that's one of the choices in the drop-down menu. Here you grab at least one resource and spend time getting served, fine, but there's no Release, so you're not releasing him here. In fact, you have to remember to release the guy sometime later on, or else he's going to stay tied up and a giant line will form. People make this mistake all the time: if you do a Seize-Delay, eventually you have to do a Release.
Now, why might you just do a Seize-Delay? Well, think of an example. Maybe you're in a hospital room, so you seize the room and get treated there. Maybe it's a bloody mess by the time you're done, so when you leave, you leave the room, but you don't release it yet because it needs to be cleaned up - you would do the Release later on. That's a good example where you just do a Seize-Delay without a Release.
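To see why a forgotten Release is so dangerous, here's a toy sketch: the first customer seizes the only server and never gives it back, so everyone afterward simply piles up (hypothetical Python, not Arena):

```python
# One server, seized but never released.
capacity = 1
in_use = 0
queue = []

for customer in range(10):
    if in_use < capacity:
        in_use += 1              # seize succeeds... but no Release ever happens
    else:
        queue.append(customer)   # blocked forever: the queue only grows

stuck_in_line = len(queue)
```

Only the first customer ever gets served; the other nine are stuck in the queue, which is exactly the runaway-line behavior the demo shows.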
Similarly, you may do a Delay-Release. In this case, you'd use a previously seized server for a while and then free him for the next guy to use. OK, so here's the dialog box in the Process module when you want to do work with resources. If you do a Seize-Delay-Release, a dialog box pops up and asks which and how many resources you want to add or delete. Add means you seize a resource, and Delete means you're going to release. Here's what the thing looks like. It looks a little scary, but it's really not so bad.
You have this: the main screen is just this Process block that we've seen before, and you can see that I've chosen to use Seize-Delay-Release here. As soon as I do that, the Resources dialog box pops up, and it's asking me which and how many resources I want to use as I add them. In this case, I want to use something called Resource 1, 1 - that is, one server from the resource Resource 1. That's what that entry means; we'll go into more detail in a second. So let's talk about that.
A little more of an example: a customer walks into the Process module and does a Seize-Delay-Release. He wants to grab and use one unit of the Resource Barber. So there we go - I've done a Seize-Delay-Release again, and I want to grab one unit of the Resource Barber. I hit the Add button right there, and I name the resource Barber - that's the one I want - so I've added a resource there. I set the quantity equal to 1: I want to use one Barber now. It could be that the Resource Barber is actually a group of barbers - there could be five barbers all going under the name Barber - but I just want one of them right now. That's all I need. OK then, here's what I get when I click OK.
I now have this one Barber that I've seized as my resource. Very nice. This Barber is going to serve me for a triangular amount of time - 0.5, 1, and 1.5 hours. The Process is given a default name of Process 1, which you can see up on the top there, and it includes the resource as well as the default queue, Process 1.Queue. As soon as you do a Seize-Delay-Release of that Barber, an automatic queue forms, and it's named after the Process: Process 1.Queue. We'll talk more about that later. So the Process is the resource plus the queue.
So the resource here is called Barber - there could be a bunch of barbers, and I'm only using one of them - and then there's the queue that forms in front of either that individual Barber or all the barbers. If there are, say, five barbers and I've just grabbed one of them, there still could be a line in front, because other people are using the other barbers.
OK, continuing. Suppose now that for some reason I've got a lot of hair - and you can see from the videos that that might not be the case for me - but suppose it's a really spiffy barbershop and everybody uses two barbers. How do you do that? That's easy: you just grab two of them by setting the quantity equal to two. In the Add or Edit box, when I'm getting the Barber in the first place, I could just say I want two, or if I use the Edit mode there, I can change that one to a two. So here, for some reason, I go into the barbershop and I want two barbers to serve me, not just one. Now here's a big distinction. There may be five barbers in the store, but that's a different issue: there are five barbers in the store, but the customer needs exactly two of them. To get two of them, I ask for two, just like I've done on the screen here: 2 units of Barber. The five is another issue that we'll talk about in the next lesson. I need two barbers; there may be a total of five in the store, but I, as an individual customer, only need two. People make this mistake all the time: I need 2, but the capacity of the store is 5. In the next lesson, I'll show you how to set the Resource spreadsheet to give the Barber a capacity of five. So I need two, the capacity may be five, and you do the capacity-of-five stuff later on.
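The quantity-seized-versus-capacity distinction can be sketched with a simple counter: the Barber resource has capacity five, and each customer tries to seize two units (hypothetical Python, not Arena):

```python
capacity = 5   # five barbers total in the store (the Resource spreadsheet's job)
in_use = 0

def try_seize(units):
    """Seize `units` barbers if that many are free; otherwise this customer
    would have to wait in the queue."""
    global in_use
    if in_use + units <= capacity:
        in_use += units
        return True
    return False

first  = try_seize(2)   # 2 of 5 barbers now in use
second = try_seize(2)   # 4 of 5 barbers now in use
third  = try_seize(2)   # only 1 barber free: this customer must wait
```

Two customers fit, but the third is blocked even though one barber is idle, since he needs two units at once.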
Now we'll do a demo, so go off and take a look at it. I'm going to look at different permutations of Seize, Delay, and Release - the ones that we discussed before. You'll notice that when I use Seize-Delay-Release, this queue magically pops up, and you'll see customers standing in line when I run the simulation - just like that, a little queue. I'll also show how to seize and release multiple resources, so I might simultaneously seize the Barber and a manicurist, for instance; you can do both of those at once. And I'm also going to give a little warning about deadlocks, especially if you seize multiple resources - that could be interesting. We'll talk about that in the demo.
So here's a summary of this lesson. We're finally getting to do some stuff. We learned lots more about that Process module, including how to do Seize-Delay-Release, and the associated queue that magically gets set up. Next time, we're finally going to take a look at the spreadsheets - in particular, the Resource, Schedule, and Queue spreadsheets. Those are sort of the most important ones, at least for our purposes, and you'll see that the models we'll start building are getting more and more significant. So, OK, we'll see you in the next lesson.

Lesson 6.2: Details on the Process Module Demo

In this demo, we're going to be going over different issues involving the Process module, and I'll begin by
setting up a little example.
It's a very simple Create-Process-Dispose; we're getting pretty good at this by now. Plus-plus makes it bigger, so let's see what we have here. This is just a very boring run - look at that, no queues, no nothing, extremely boring. So here's what I'll do. Let's see: we're only doing a Delay here in the Action field in the Process module.
Let's do Seize-Delay-Release, as we talked about in class. I'm going to add a Barber, just like I promised - I keep on using this Barber-Q joke, which I love. So I've added a Barber. Just believe me for now that the various times associated with the interarrivals and the service times are compatible. So let's click OK. Look at that: a magic queue has popped up, as I promised. When we do the runs now, see, we get a little queue occasionally. Very, very simple.
OK, now what could I do? I could do a Seize-Delay instead of a Seize-Delay-Release, and that's an issue - let me move this a little bit. It's an issue because what's going to happen is that the queue is going to grow arbitrarily long. What I've just done is drag the queue so that it will allow us to see more people in it; we'll learn about that later on. But let's see - here we go, let's watch. I'm going to slow it down a little. I am not releasing anything, so look at that - oh, bad, bad, bad. Let me stop the thing for a second. This queue is arbitrarily long. If I zoom in, we see that one guy is in the server and, well, actually 47 people are in this queue - that's what this number means. So clearly what's going on is that I've bottlenecked things. How do we fix something like that? Well, let's go over here to the Dispose. I'll
delete that connect line, and I will add in another Process. I'll connect them up - double-click the Connect button here; connect, connect - I'm not being real good about making these lines nice and straight in here. I'm usually paranoid about that, but it's better than nothing. So what I'll do here is choose another Action now: we'll do a Delay-Release.
And we'll add the Barber, because this is the guy whom I'm going to Delay-Release. OK, so I'm doing a Delay-Release on that Barber. I'll keep the delay - in fact, you know, I can practically get rid of the delay. Let's make it a trivial one, a triangular 0.1, 0.2, 0.3. So this is sort of a trivial delay; I could even make it 0 if I wanted. And so what should happen now is that this queue goes back to its nice normal self. So let's see - here we go, click.

Yep, OK, so we're having we're having good behavior and it's because, uh, I managed to remember to do the
parallel release so you can see that that works quite nicely.
Now, another thing I could do. Let's kill this off and I'll reconnect; remember I said you could seize multiple resources at once. That's a nice straight line. So instead of just the Barber, I could seize two barbers if I wanted to, but let's seize a couple of distinct guys. Let's suppose I want to have a barber and a mani-pedi at the same time. So let's add a manicurist.
And let's say that they serve us in about the same time, so I won't change the delay. And let's do a Seize-Delay-Release, 'cause I've been squawking about that.
So I'm now seizing 2 servers at the same time. There's going to be no change in what you see now, so here
we go.
There we go, and both of them are serving me with the same service times; they're doing it simultaneously. So, very simple: I can grab multiple different servers at the same time. Now, if the servers have different service times, we have to do more complicated stuff, but this is fine for now.
OK, great. Let me give a slightly more difficult example involving deadlocks. I worked on this a couple of days ago just to show you how we get a deadlock and what's going to happen.
Let me verbalize it in English; you can go and look at the program yourself, 'cause we'll provide that. I'm going to create two separate streams of arrivals, which is perfectly legal to do. This is customer Type 1 and customer Type 2. Customer Type 1: the first thing he does is seize Server 1. Then he seizes Server 2. Then he releases both of them after he's done getting service. You can take a look: here's customer Type 1. He does a Seize-Delay on Resource 1, then he goes over here and does a Seize-Delay on Resource 2, and then over here he does a Delay-Release on Resources 1 and 2 simultaneously.
So I warn you, you've got to be careful about that. Meanwhile, customer Type 2 does sort of the opposite: he grabs Server 2 first, then Server 1, and then he generously releases them. But let's see what happens now.
You have to be a little lucky for the bad thing to happen, but let's watch it. We're going to step through one event at a time. So: Step. That initializes the simulation.
And the first guy just showed up here at Server 1. You couldn't see this very easily, but that zero became a one there; what that means is that he's now being served. The first customer of Type 2 showed up at Server 2. I bet that zero changes to one next, and it does; that means he's being served. OK, now the fun starts.
A second guy shows up at Server 2 and he's got to wait in line, so there he shows up. He's waiting in line at Server 2. One more step: another guy shows up at Server 1 and he has to wait in line.
OK, now you also saw the first guy escape out, and he's now trying to access Server 2. By the way, is it OK to attempt to access Server 2 in completely different places in the simulation? Yeah, sure, that's fine in general: I can access Server 2 over here, and I can access it over there.
Now, it turns out I'm going to be very unsuccessful here: he's waiting in line for this guy to be done.
Now the first guy, who was being served by Server 2, is going to try to be served by Server 1.
He has to wait in line and then everything grinds to a stop.
Everything is deadlocked now, because these guys in these queues are causing deadlocks back here and nothing is going to go through.
OK, so I'll leave it to you to play around with this. You have access to this program, but this is very typical and it
happens all the time if you're not careful.
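The deadlock above boils down to a cycle in a "wait-for" graph: each customer holds one server while waiting for the server the other customer holds. A minimal sketch (plain Python, not Arena; the customer names and function name are made up) that detects such a cycle:

```python
def has_deadlock(waits_for):
    """Detect a cycle in a wait-for graph {waiter: entity it waits on}."""
    for start in waits_for:
        seen, node = set(), start
        while node in waits_for:       # follow the chain of waiters
            if node in seen:
                return True            # we came back around: deadlock
            seen.add(node)
            node = waits_for[node]
    return False

# Customer 1 holds Server 1 and waits on Customer 2 (who holds Server 2);
# Customer 2 holds Server 2 and waits on Customer 1. Opposite seize orders!
print(has_deadlock({"Cust 1": "Cust 2", "Cust 2": "Cust 1"}))  # True

# If both customer types seize the servers in the SAME order, no cycle forms.
print(has_deadlock({"Cust 2": "Cust 1"}))                      # False
```

The standard fix, as the example suggests, is to make every customer type seize the resources in the same order.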

Lesson 7.1: Resource Schedule Queue Spreadsheets

Hi everybody, so we're moving along in our Arena lessons. We'll be looking at some spreadsheets in this
lesson. Here's the overview.
Last time we talked about the Seize-Delay-Release trio and how they comprise the Process module; in other words, how do you reserve, use, and release servers?
In this lesson, I'm going to look at some of these spreadsheets that I've been talking about, involving the resources, the queues, etc. These are very nice because you can change certain things with respect to resources and queues: for instance, the resource capacity and the types of queues (first-in-first-out, last-in-first-out). You can do lots of cool stuff.
Here's what the resource spreadsheet is all about. You click on the thing in the Basic Process template, and you get a list of resources that the model is using; that'll display at the bottom of your screen. So let's look at an example.
For instance, suppose I have two resources that I'm using: a Barber and a pedicurist. Now, the resource Barber itself may actually have what's called a fixed capacity of four. That means the resource Barber actually consists of four barbers on duty. So a resource can be a whole bunch of servers, all named Barber; they're all in this Barber resource and all kind of identical. And let's suppose there's only one pedicurist. This is what you'll see: two resources, Barber with a fixed capacity of four (there are always four barbers around on duty), and one pedicurist with capacity one. So capacity is the number of servers from the resource that are on duty, not necessarily the number of servers that a customer requests. Usually when I go and get a haircut, I want just one of those four barbers; in the previous lesson we had an example where we requested more than one, but usually you request one of those four.
That's a common point of confusion: a resource's servers are usually regarded as identical and interchangeable. If they're not identical, we have to handle them in a different way, usually with something called a resource set, which we'll talk about later. Resources are automatically sent to the spreadsheet when you define them in the Process module.
So remember all that clicking around I did in the Process module? Those resources automatically get generated into this resource spreadsheet. Or we could double-click over here in the resource spreadsheet and automatically generate a new resource right here.
So there are a couple of ways to get resources ready to go. I can also change this fixed capacity to a schedule, which means that the capacity of the resource varies over time: the barbers can take breaks, new guys can show up, et cetera. Here's what the schedule spreadsheet looks like; that's for when I don't have fixed capacity and need schedules. I can set schedules for servers, and I can also set schedules for arrivals, but let's look at the servers for now.
So go to the resource spreadsheet and change the Type field from fixed capacity to Based on Schedule. The dialog box is going to change a little bit to accommodate the schedule name: right here, instead of a type equal to capacity, I've now changed it to Based on Schedule (see, right there), and I've named the thing Barber Sked. You have to make the schedule, so you go over to the schedule spreadsheet.
And you'll see that Barber Sked is already conveniently there, waiting with great anticipation for your input. The fact that you defined the schedule name right here means it automatically gets put into the schedule spreadsheet, and there we go; I'll show you how that works later.
So we're in the schedule spreadsheet, and we see that Barber Sked is already there. The type of this schedule is Capacity, a resource schedule, so it's based on the capacity of the resource as it varies over time. The key is to enter stuff into the Durations box; let me circle that. What this is going to do is tell you how many servers are on duty for different time periods of the day, and I'll show an example of how to do that a little later. We can also set schedules for arrivals: if you're at the Waffle House, people don't show up at the same arrival rate all day. It may be that more people show up for breakfast, then there's a lull, then more people show up for lunch, etc. So the arrival rates change over the day and they need a schedule.
So, for example, you go to the Create module and change the type to Schedule, and that will schedule your arrivals. These are arrival schedules, so you'll get a dialog box that will again change slightly to accommodate the arrival schedule name, and we're actually calling it Arrival Schedule.
Let's see, that's a Create block. Instead of having the usual arrival types, like Random (Expo) or Constant, I've changed the arrival type to Schedule, and it automatically wants me to give a schedule name; I'm calling it Arrival Schedule. See, very easy. Now, since I've defined something called Arrival Schedule, it had better be sitting there in the schedule spreadsheet.

Again, lovingly waiting for input, and there it is. You'll see that Barber Sked has a type called Capacity (that's for a resource), while Arrival Schedule has type Arrival, specifically for arrivals. Again, we have to fill in the rows in the Durations box, because that will enable us to specify how the schedule changes over the day.
day.
So now I'll give a little demo. We've got a bunch of things that we'll do this time.
I'm going to look at the resource spreadsheet, show you what it looks like.
We'll look at the resource and arrival schedules.
I'll look at the queue spreadsheet.
I'll look at resource animation just very, very quickly.
You can see those are the icons you got to be looking for.
And I'll look at queue animations. These are very easy to do; not really much of an animation, but very easy to do.
And then we're done.
So, the summary of this lesson is that we discussed these spreadsheets and looked at some of the ramifications; we'll see more in the demo.
In the next lesson, I'll discuss how to use the decide module, which allows us to make various probabilistic and
conditional decisions that affect what happens to the customers as they go through the model. It's an incredibly
important module, so we'll see you next time.

Lesson 7.2: Resource Schedule Queue Spreadsheets Demo

So I'm going to be doing a series of mini demos now on the various spreadsheets that we talked about in the lesson. I just made this really easy little model: a Create with Random (Expo) 5, so customers are showing up every once in a while; they do a Seize-Delay-Release on the Barber (get that barber queue going), and then a Dispose. You'll notice I've got little people going on here; I'll just briefly show you how I do that.
See, that's a very nice little bit of a line; green means busy here, and I'll show you how we do that in a minute. Wow, the line is getting kind of big there, don't you think? So let me stop this and turn it off. The reason the line is getting long is that it takes about one hour on average to serve a guy, yet the customers are showing up about every 30 minutes, so obviously the queue is getting bigger and bigger. I hinted at this in the last demo, but I can make this queue graphic a little bigger; I just dragged it out there so you can see the queue is getting arbitrarily long. On average, the poor server just can't keep up. Yeah, see, it's just getting bigger and bigger.
Let's stop it. I'm going to provide the server with some relief. Let's go to the resource spreadsheet, right there; it looks like a little spreadsheet. Click, and you'll see this Barber has a fixed capacity of 1: there's only one barber in there. He needs a buddy, so let's change the capacity right here; this is what you can do in the spreadsheet.
Fantastic: two. So using the spreadsheet I now have two servers, two barbers. That's easy. Let's see what happens; rerun the simulation. He's still busy (green means busy), but the line's looking pretty good here. Look at that. Very nice, so you can see how easy it is to use the spreadsheet to change the capacity a bit.
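The arithmetic behind this is worth a sketch: with arrivals every half hour and hour-long services, one server has twice as much work as he can handle, while two servers exactly keep up. Here is a toy Python model of a multi-server FIFO station (deterministic times as a stand-in for the exponential ones in the demo; the function name is made up):

```python
import heapq

def left_in_system(horizon, interarrival, service, servers):
    """Customers still present at `horizon` for a c-server FIFO station
    with deterministic interarrival and service times (a toy stand-in
    for the random times in the demo)."""
    free = [0.0] * servers                    # times each server frees up
    heapq.heapify(free)
    arrivals = departures = 0
    t = interarrival
    while t <= horizon:
        arrivals += 1
        start = max(t, heapq.heappop(free))   # wait for the earliest server
        finish = start + service
        heapq.heappush(free, finish)
        if finish <= horizon:
            departures += 1
        t += interarrival
    return arrivals - departures

# One barber, arrivals every 0.5 h, service 1 h: the line grows with time.
print(left_in_system(40, 0.5, 1.0, servers=1))   # 41 stuck in the system
# Two barbers handle the same load with essentially no queue.
print(left_in_system(40, 0.5, 1.0, servers=2))   # 2
```

Doubling the capacity in the spreadsheet is exactly the `servers=1` to `servers=2` change here.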
Now, I had mentioned that instead of having fixed capacity, you can have a schedule. So let's change this: hit the button to make this Based on Schedule. It's giving me this yellow warning sign, so let's name the schedule. This is a resource schedule; assuming I can see and type properly, I'll call it Barber Sked. I've made the name up, and Arena is OK with that; see, the field is white again. Now, if I go over to the schedule spreadsheet, which is right here...
There, look at that, it's sitting there waiting for me. Brilliant, so Barber Sked is there waiting for me. Let's see if we can make a little schedule here. I click into this Durations field like I mentioned before, and I get this little thing that looks almost like a graph. Here's what I'm going to do; I'll just do the easiest possible thing. I click here, and this means I have one server. I click here, and that means 2 servers. Then let's make the servers go on break, so I'm going to have zero servers for a while.
And then I'll have, let's say, 4 servers. So you can see what's going on: with one server I'm going to get a little bit of a line; with 2 servers the line will go away; then I'm going to get this huge line because they're going to be on break for an hour; and then I'm going to have 4 servers come in, and hopefully they'll dispense with the line. I have no idea what's going to happen.
OK, so I'll click the OK button. In fact, you know, since the units are in hours, I think what I'll do is change the units for the arrivals into minutes, and over here I'll change the units of the service times into minutes. So for the first hour we're going to have a lot of arrivals, and we're going to get a fairly big line.
Then I'll have two servers come in the second hour and the line will decrease. Then I'll have no servers, and the line is going to get gigantic. And then for the fourth hour, the line will really decrease again. Also, I can make a little clock here. You're not supposed to learn about this yet, but let's do a digital clock so we can see the time.
OK, so you're not supposed to know about that yet; kind of a bonus clock. Here we go. So guys are coming in. It's only 12:03 and we're already getting a huge line. If you zoom in on the queue here, you can see 15, 16 people; this giant line is forming. It turns out that my version of Arena has kind of unlimited access, so it's not going to blow up when it hits 150 people in the system like your student versions may. At 1 o'clock we're going to get our second server; let's see if the queue dies down. It's 12:58, 12:59... OK, one o'clock. We've got 50 people in line.
And it looks like the queue is dying down moderately. It's kind of hanging around 50; at least it's holding its own. Down into the low 40s. I'll speed it up a little bit; you get the idea.
When it hits two o'clock, all heck is going to break loose, 'cause we have no servers then. So at two o'clock, bad things are going to happen. And look at this: nobody's going through, there are no servers on duty, and the line is just getting bigger and bigger. There are 100 people in there now; 120, and it's only 2:30. OK, look at that: by three o'clock, 170 people in line. Now we have 4 servers; let's see if we can get rid of these guys. The line is dying down. I hope we have enough time. Let's see if we can get under 100. We probably need more than four servers, right?
It's under 100, but we're not going to make it back to steady state by 4 o'clock. What happens then is that the schedule reverts back to one server.
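A back-of-the-envelope fluid approximation captures what we just watched: each hour, the queue grows by the arrival rate minus the total service rate of the servers on duty, and it can't go below zero. (Plain Python with made-up rates, not the exact numbers in the demo.)

```python
# Fluid sketch of the schedule demo: hourly queue change is
# (arrival rate) - (servers on duty) * (per-server service rate),
# clipped at zero. Rates lam and mu are illustrative assumptions.
def queue_by_hour(schedule, lam, mu):
    q, history = 0.0, []
    for servers in schedule:
        q = max(0.0, q + lam - servers * mu)
        history.append(q)
    return history

# 1 server, then 2, then a break (0), then 4 -- as in the demo.
print(queue_by_hour([1, 2, 0, 4], lam=60.0, mu=40.0))
# [20.0, 0.0, 60.0, 0.0]: line builds, clears, explodes on the break,
# and the four servers finally mop it up.
```

Whether the four servers really clear the backlog depends on the rates, which is why the demo's queue was still above zero at 4 o'clock.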
That's kind of a nice little demo. Now, how do you get this animation? I'll just show you where the magic button is: right there. This is the resource animation button. Click right here, and you see this is where I got the icons for busy and for idle. You can play around by editing this kind of thing; I'll leave that to you to explore a little bit.
OK, I'll show you one last thing involving the queue, which is kind of nice. Let me zoom out.
So let's look at this. Oops, I just added another resource icon; let me delete that. Not all queues, of course, are nice straight lines, so I can change this queue into a bunch of points if I want. Let's add a few points (these numbers are degrees, it turns out) and say OK. Now I can manipulate the queue so that it winds around if I want. It's kind of nice. This queue graphic will only show the first few customers, even though many other people are joining the queue. Watch what happens: see, they're joining this funky-looking queue, and they wind themselves around and stuff.
That's kind of cool. I'm going to do one last thing. Let's go back to the queue, change it back into a line queue (just the plain old line), and drag it out. One last little experiment: we'll now go over to the queue spreadsheet, and there it is, Process 1.Queue; remember, it's named after the Process. We currently see that it's First In First Out, and you could tell they were going through first in, first out; that's why I drew the people. See, you can watch them move to the right, first in, first out.
Now let's see: with one click, I can change it to Last In First Out. I could change it to anything; I could even order the customers by how much tip they're going to pay the server. But let's do last in, first out, and I'll go through this a little more slowly. Let's watch carefully now; this is why I drew people. Here we go (again we get that sharing violation, I don't know why). There's a guy, there's a line, and notice that when a new customer comes in, he goes to the front of the line or is served immediately. Isn't that cool? So if you see a customer come in and he immediately disappears, it means he's getting immediate service, pretty much. So that's a very nice little demo.
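The FIFO-versus-LIFO ranking rule is easy to sketch in plain Python (not Arena; the function name is made up): FIFO pops customers from the front of the line, while LIFO pops the most recent arrival.

```python
# FIFO serves the oldest customer in line; LIFO serves the newest.
def service_order(arrivals, rule):
    line, served = [], []
    for customer in arrivals:
        line.append(customer)                 # everyone joins the queue
    while line:
        # FIFO pops from the front; LIFO pops from the back.
        served.append(line.pop(0) if rule == "FIFO" else line.pop())
    return served

arrivals = ["A", "B", "C", "D"]
print(service_order(arrivals, "FIFO"))  # ['A', 'B', 'C', 'D']
print(service_order(arrivals, "LIFO"))  # ['D', 'C', 'B', 'A']
```

Under LIFO, a customer who arrives while the line is long jumps straight to the head, which is exactly the "new guy disappears immediately" behavior in the animation.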
OK, and the last thing I'll show you: in order to make the people look interesting, I went to yet another spreadsheet, the entity spreadsheet, and I initialized the picture of each customer as a person. We can make any kind of pictures we want.

Lesson 8.1: The Decide Module

Greetings and felicitations, class. In this lesson we'll be talking about the Decide module, which is a really nice, general module that allows us to do lots of different types of decisions.
Here's the overview.
In the last lesson we went through a discussion of various spreadsheets, namely the resource, schedule, and queue spreadsheets, and I actually snuck in the entity spreadsheet for a second. We'll talk more about that later.
In this lesson: the Decide module. This allows customers to make different types of decisions on what they're going to do next, probabilistic decisions, conditional choices, et cetera, and for that reason it's really powerful. It's very flexible, and it allows you to go this way and that way. You can see some additional verbiage on that at that site.
The module looks like a little diamond, which kind of makes sense if you're a flowcharter. When an entity gets to a Decide module, he can do the following things.
He can randomly go to either of two locations. That's called a two-way by chance, and you make the decision based on percentages, not probabilities. People mess that up: sometimes they'll enter .35, which actually means 0.35%, so you have to be a little careful. If you do a 90% two-way by chance, you go to the right with probability 90% and out the bottom with probability 10%.
There's also an N-way by chance, which allows you to go to any of N different places, and N can be anything; I'll show you how to do that. And you can go to either of two locations via a condition; that's called a two-way by condition. You can say: if it's raining today, go here; if it's not raining today, go there. If the queue size is too big, go here; if the queue size is fine, go there. And symmetrically, you can go to any of various locations using what's called an N-way by condition.

These are, you know, pretty straightforward types of choices that you can make.
So here's an example: let's go to Process 2 with 75% probability and Process 3 with 25% probability. How would I do that? Well, here's the block, and you can see this is a Decide module. I've entered 75 into the entry, and it's a two-way by chance; this is one of the four choices that I could make. What happens is a customer comes up here to the Decide, which sees 75 in the field, and he goes here with a 75% chance and down here with a 25% chance. Then they both get disposed afterwards; that's unimportant. So, very straightforward.
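A quick sketch of the two-way by chance in plain Python (made-up names, not Arena code). Note again that the Arena field takes a percentage, 75, while the code below works with the probability 0.75:

```python
import random

# Route each customer to Process 2 with probability 0.75 (75% in Arena's
# field) and to Process 3 otherwise, then check the long-run proportions.
random.seed(1)                         # fixed seed for reproducibility
counts = {"Process 2": 0, "Process 3": 0}
for _ in range(100_000):
    branch = "Process 2" if random.random() < 0.75 else "Process 3"
    counts[branch] += 1
print(counts["Process 2"] / 100_000)   # close to 0.75
```

With more and more customers, the observed fractions converge to the specified percentages, which is what you'll see in the demo counters.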
And now it's demo time, so this is very very quick. I'd like you to take a look at the demo. It's very
straightforward, but it's extremely powerful.
In summary, we looked at the decide module, gives customers lots of choices about how they're going to
proceed to move through the model.
And next time I'm going to look at the Assign module, which allows us to give values to attributes and variables.
You can even assign graphics and pictures to different customers. It's also very powerful and you should see
what it does when you use it in conjunction with some of the other modules. So really, really important.
We'll try to make it as easy as possible, but it'll be fun and it'll give you a lot of power. So, see you in the next class.

Lesson 8.2: The Decide Module Demo

So we're going to be doing a couple of demos involving the Decide block. I've created a couple of little programs that you can follow along with. This is just a plain old Create into a Decide, and then wherever you go you get disposed, so it's kind of a trivial model. But why do we have three possible exits from the Decide? Let's kind of watch for a second. Go. And see, the customers are randomly going to different places.
These numbers right here are keeping track of the proportion of customers going to each of the three Disposes. You're also seeing some action down here, but we'll get into that later.
OK, so what's going on is that I'm doing a three-way by chance. We go in here and click on the N-way by chance, and you'll get percentages. I've typed in the percentages 30 and 50, and Arena is smart enough to know that it all has to add up to 100; so the first selection will go to that block with a 30% chance, the next guy 50%, and the last guy, which I don't even have to type, is 20%. In words: we're going to this Dispose block with a 30% chance, here with a 50% chance, and to this Dispose block on the bottom with a 20% chance, so that matches customer Types 1, 2, 3.
A customer goes to these different locations with the various probabilities. Now, you've seen here I've got this thing, DISC(.3, 1, .8, 2, 1.0, 3). What is that? We'll talk about it when we get to the next little piece of the demo. Anyway, let's watch and see if we actually are going to Disposes 4, 5, and 6 with the probabilities .3, .5, .2. So let's run it; I'm going to run it very quickly.
Now look at that. We've got maybe 100 customers so far; as I get more and more customers, I'll run it even faster, and you can see that these numbers are actually converging to .3, .5, and .2. So the three-way by chance is doing what it's supposed to do.
So it works really nice.
OK, now, what was all this verbiage here? I probably could get rid of this DISC(.3, 1, .8, 2, 1.0, 3) in the first model, 'cause I didn't really use it there, but I am going to use it here. This is a different model using the Assign block, which we'll talk about in the next lesson, but here's what's going on; I'm just going to tell you what it's doing. We can even sneak a peek at the Assign block: it looks like it's saying Customer Type = DISC(...), this complicated thing. Let me just tell you what it's doing in English, because we haven't even gotten to the Assign module yet: it's assigning a customer type with percentages 30, 50, and 20. So by the time you're done with this Assign block, you're going to be a customer of Type 1, 2, or 3 with percentages 30, 50, and 20, right? We'll learn about that in the next lesson; just take my word for it for now. Anyway, the point is the Decide block, so let's go over to the Decide.
the point is the side block. Let's go over to the side.
This is now a decide using an Northway by condition and this is, uh, not anyway. By chance it's anyway by
condition, and here are the conditions.
If customer type equals one, you go to the first exit point. If customer type equals two, you go to the second
exit point and otherwise you go to the third exit point. Now let's look at this. I'm going to edit this.
This just says, in plain English: if the attribute called Customer Type equals 1 (we'll define what an attribute is in the next lesson, and the double equals means logical equality), you go here, and by "here" we mean you follow that first branch out to the Dispose. If it equals 2, you go here, and if it equals 3, you go there. And again, let's run this thing.
I'm running it quickly again, and the proportions are slowly converging; take my word for it. Hopefully they go down to 30, 50, and 20. Yeah, it looks like they're slowly winding their way down. Let me speed it up even more. Yeah, they're getting down to 30, 50, and 20, so this Assign block, whatever it is, is doing its job, and that allows me to do the three-way by condition.
OK, fantastic.
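For the record, Arena's DISC expression takes cumulative-probability/value pairs, so DISC(.3, 1, .8, 2, 1.0, 3) returns 1, 2, or 3 with probabilities 30, 50, and 20%. A sketch of the idea in plain Python (the helper name is made up):

```python
import random

def disc(pairs, u):
    """Mimic Arena's DISC(c1, v1, c2, v2, ...): walk the CUMULATIVE
    probabilities c_i and return the first value with c_i >= u."""
    for cum, value in pairs:
        if u <= cum:
            return value
    return pairs[-1][1]                # guard against tiny float slop

# DISC(.3,1, .8,2, 1.0,3): customer types 1/2/3 with prob. 30/50/20%.
spec = [(0.3, 1), (0.8, 2), (1.0, 3)]
random.seed(2)
draws = [disc(spec, random.random()) for _ in range(100_000)]
print(round(draws.count(2) / len(draws), 2))   # about 0.50
```

So the Assign block draws each customer's type once on arrival, and the N-way by condition then routes on that stored attribute.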
OK, so I'll stop that and do one more little demo. This is a Decide that uses a funny condition: I'm going to have two different types of arrivals. Let me tell you how this works.
Men and women arrive occasionally. They go through this Process block; actually, not much is going on in the Process, it's nearly an unneeded block. This module right here is just seeing how many servers, that is, how many shopping carts, are being used; these guys are going shopping.
All I care about here in this Decide is: is this guy a man or a woman? Is the entity type man or woman? If it's a man, he goes up here.

The man does the shopping up here; if it's a woman, the woman comes down here and does the shopping. The way Arena knows what kind of entity you are is right here in the Create block: when I create the men, I call them Men, and when I create the women, I call them Women. And how do I get the nice pictures? Remember we had this entity spreadsheet, which you don't really know about yet; I assign a man picture to the men and a woman picture to the women.
So it's pretty easy. And in this Process block, all I was doing is a Seize-Delay-Release: I grab a cart, then after I'm done shopping (the delay), I free the cart.
Let's see, so that's very nice. Now, do I expect you to do this demo all by yourselves at this point? No, because you don't properly know the Assign block and the entity spreadsheet yet, but I just wanted to show you that you can use these nice capabilities of the Decide module from both a stochastic, probabilistic point of view and a conditional point of view. So, very nice.

Lesson 9.1: The Assign Module

Hi everybody. In this lesson we'll be looking at what's called the Assign module; I sort of hinted at that in the last demo and gave you a little advance preview. This marks a point in the Arena lessons where we're going to aim towards somewhat more sophisticated techniques and tools, and we'll start those off with this Assign module.
Here's the overview. In the last lesson we looked at this beautiful Decide module, which allows us to make probabilistic and conditional decisions about where the entities are going to go next. This time around, I'll look at the Assign module, which allows us to give different values, and even different pictures, to things. We're going to be primarily assigning values to what are known as attributes and variables; I've mentioned those words before and hinted at them, but we'll formally define them next.
So it's a very powerful tool, especially when you use the Assign in conjunction with the Decide and other modules.
So here are the next few lessons.
I'm going to cover material that's eventually going to allow us to simulate multi channel customer flows. We'll
be streaking towards that.
We'll do the Assign module in this lesson. Then I'll look at the attribute, variable, and entity spreadsheets, which we've hinted at.
I'll look at what are called Arena internal variables, which are quite powerful. These are sort of variables that
are predefined that are getting updated in the background that you don't really need to know about until now.
I'll talk a little bit about displaying different things, graphics and counters, and things like that.
And then we'll look at the Batch, Separate, and Record modules; these will be the last modules from the Basic Process panel.
And finally, I'll look at run setup and control, which allows us to conduct the run and determine how long it's
going to be, how fast it's going to run.

And then we'll get to this two-channel manufacturing example. It's actually pretty easy, but it needs all these concepts sort of put together.
The first thing I want to talk about, before I can do the Assign block, is the concept of attributes. Again, we've mentioned this before.
Each customer passing through the system has various properties or attributes unique to that customer.
So, Tom, he's a customer. He's 6 feet tall, he weighs 160 pounds, he loves baseball, and his cholesterol is 108. So he's got four attributes: his height, weight, hobby, and cholesterol, and these all have values. The value of the height attribute is 6 feet; weight is 160 pounds; he likes baseball (now, baseball is not a number, but we can associate a number with baseball); and his cholesterol is 108.
Justin Bieber, I mean Justin B, is 4'11". He weighs 280 pounds, he loves eating lard, and his LDL is 543. So he's got those same four attributes, but with different values.
Now, both guys have these same attributes. In fact, they may have other attributes: maybe Justin has an attribute for the instruments he can play, and Tom may not even have that attribute. But they certainly have these four attributes in common, with different numbers associated with their particular values. Attributes do need to be numerical, so I could associate baseball with the number 11 and lard, let's say, with the number 28.
What about variables? Unlike attributes, whose values are specific to the individual customers, variables are global. If you change a variable anywhere in the Arena program, it gets changed everywhere. If you change somebody's attribute, it only changes for that guy: if you change Tom's attribute, it doesn't change for Justin.
So example, you might have a work in Process variable. It gets incremented whenever you have a. An entity is
created and starts to go into work and it's decremented whenever the entity is disposed, and these events can
occur anyplace in the program, and so you might need to keep track of what's the work in process anywhere in
the program.
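The attribute-versus-variable distinction can be sketched in plain Python (this is an illustration, not Arena code; the names and the numeric hobby codes are just the lecture's examples):

```python
# Plain-Python sketch (not Arena code) of attributes vs. variables:
# attributes live on the individual entity; variables are global to the model.

wip = 0  # a global "work in process" variable, shared by the whole model

class Customer:
    def __init__(self, height, weight, hobby, cholesterol):
        # Attributes: values carried by this specific customer only.
        # Arena attributes are numeric, so hobbies get numeric codes.
        self.height = height
        self.weight = weight
        self.hobby = hobby
        self.cholesterol = cholesterol

def create(customer):
    global wip
    wip += 1          # incremented wherever an entity enters the system
    return customer

def dispose(customer):
    global wip
    wip -= 1          # decremented wherever an entity leaves

tom = create(Customer(6.0, 160, 11, 108))             # 11 encodes "baseball"
justin = create(Customer(4 + 11 / 12, 280, 28, 543))  # 28 encodes "lard"

tom.weight = 160                # changing Tom's attribute...
assert justin.weight == 280     # ...doesn't touch Justin's

dispose(tom)
print(wip)  # 1 -- one entity still in process
```

Changing one customer's attribute leaves every other customer alone, while the single `wip` variable is visible to the whole program.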
This is where the assign module comes in. It allows you to assign values to some of these things, and like I
said before, you can also assign pictures.
How can you change the attributes, variables and other things? Well, you can use this very powerful and
flexible assign module. That's what it looks like. You see the name of the assign module.
I can change a variable, I can change an attribute, all sorts of things, even the entity's picture, all using the menu. Here I've decided to change an attribute called Weight, and I've set it to 160 pounds. It's very simple.
So what this says, then, is that this customer now has a weight of 160 pounds.
And it's demo time.
Here's what I'm going to look at:
The use of the assign module to change attributes and variables.
The use of assign for entity pictures.
And then, how do I use decide and assign together? You'll see very nice, very interesting things that you can do with both of these blocks.

So summary of what we did in this lesson. I talked about attributes, variables and then how we can use the
assign module to assign or change those things.
In the next lesson, I'm going to look at various additional spreadsheets. We have spreadsheets for attributes,
variables and entities. Remember, we looked at that entity spreadsheet before.

Lesson 9.2: The Assign Module Demo

In this demo I'll be looking at some of the things that the assign module can do sometimes in conjunction with
the decide module as well and other things.
So in the first demo, let's just take a look and see what's going on. I'm creating some customers here, and I'm going to assign a picture to the customers; in this case, it's a man.
We're going to have him go through a process, so we do seize-delay-release on, gee, what a surprise, the Barber, and it looks like the Barber takes about 15 minutes on average to serve the guy.
Then it turns out, in this decide block, 90% of the customers are satisfied and they leave. So you can see most of them just go right through. There's a satisfied customer; there's another one. OK, so let me stop it. Most of them leave; 10% of them come down here.
Down here I'm going to change the picture again, to indicate that they have to go through again; they're going to come back around here and undergo another haircut, because they weren't satisfied with the first one.

So in fact, what could I do while I'm here? Let's suppose that this is a triangular distribution with parameters 5, 15, and 25 minutes. What I could do is make this an expression; let's call the expression ServiceTime. I was typing this in, and I'd better define ServiceTime.
So let's do that here, in the assign block. Every customer is going to have a service time, and it's an attribute, because that service time is specific to him; a little bit of an advanced concept, but let's do it. ServiceTime is an attribute, and if I remember correctly, it was a triangular distribution with parameters 5, 15, and 25. So this would sort of be the equivalent thing: every customer gets his service time. He gets initialized as a man, he comes through, his service time gets assigned, and when he gets to the process block, there's this ServiceTime. So hopefully this works. Let's see, I'll make it a little bit bigger so you can see what's going on, and here we go: customers come in there, and they go through the decide block. Yeah, it looks like everything's happy, and occasionally, about 10% of the time, there we go! Look what happens in the assign block: a dissatisfied customer. The picture gets changed.
It goes from a man to a woman, so this may be more than just a plain old barbershop.
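As a rough Python sketch of what that ServiceTime attribute does (an illustration, not Arena code; note that Arena's TRIA(5, 15, 25) lists min, mode, max, while Python's `random.triangular` takes low, high, mode):

```python
import random

random.seed(42)  # reproducible demo

def draw_service_time():
    # Arena's TRIA(5, 15, 25) is (min, mode, max);
    # Python's random.triangular signature is (low, high, mode).
    return random.triangular(5, 25, 15)

# At the assign block, each arriving customer gets his own ServiceTime
# attribute; the process (seize-delay-release) later delays for that value.
customers = [{"service_time": draw_service_time()} for _ in range(10_000)]

avg = sum(c["service_time"] for c in customers) / len(customers)
print(round(avg, 1))  # should land near the theoretical mean (5 + 15 + 25) / 3 = 15
```

Because the draw is stored on the entity at assign time, the same value travels with the customer to wherever it's used later.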

So you can see what happens: we change the picture in this assign block. How did I do that? So, over here I assigned a picture and the service time, and down here it was very easy. All we did was change the entity picture to the picture of the woman. It's very simple: we just edited Entity.Picture, which is the name of the picture attribute; it's a special one called Entity.Picture, an Arena name. And we changed it to Picture.Woman, which is one of the pictures that's available. So, very simple, very nice.
Let's look at one more model very quickly. This is kind of a funny one that I've made up here. We have a simple process: just go to the barbershop. And it turns out the queue is ordered by tip. I'm looking at the run speed; it's reasonably slow. I put the clock here again, and what I want to do is look and see what's in that assign module.
Looks like again we have Picture.Person; Picture.Person is a randomly generated person picture, not just man and woman.
So here's EventualTip: it's going to be a normal observation with parameters 10 and 5. I guess it could go negative. But the eventual tip is normal, with a mean of 10 and a standard deviation (not a variance) of 5.
Here's the seize-delay-release on the Barber. And where does the tip come into play? How does it know how to order the queue? Well, remember we have this queue spreadsheet. Let's go there, to Queue, and look carefully. The Barber station queue, that's what this is, is ordered by Highest Attribute Value. Remember, EventualTip is an attribute, so whoever has the highest eventual tip goes to the front of the line.
Now, I think I'm going to step through this, because I don't want it to go too fast. So here we go. 12:03: the first guy shows up, and he's getting served.
The next customer shows up; he has to wait in line. The next guy shows up. Ah, why didn't he wait in line? Because he blew right by this guy; he must have had a bigger tip. So he blew by the guy waiting in line. Let's do another one. Let's see. Ah, he blew by the guy waiting in line, but there's currently somebody being served. If we zoom in here: yeah, there's one guy in service and two guys in the queue, that's three in total. And so this woman in yellow blew through this guy, 'cause she's apparently paying a bigger tip. Let's do another customer. Oh, he actually blew by both of them; he's paying a bigger tip. So this is kind of funny. How about this guy? He actually blew through two of them, but his tip wasn't as good as that guy's.
So you can see it's a pretty funny little scenario, and you can watch. Oops, I didn't mean for it to go fast, and I just went fast. So now we're moving ahead here. You can see we have a giant line, by the way: 1,800 people. You're not going to see any movement at the front here for a while.
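The "ordered by highest attribute value" discipline can be mimicked with a priority queue; here's a minimal Python sketch (illustrative only; in the demo the tips were Normal(10, 5) draws, but fixed tips make the ordering obvious):

```python
import heapq

# Sketch of a queue ranked by Highest Attribute Value, like the Barber queue
# ordered by each customer's EventualTip attribute.
queue = []  # Python's heapq is a min-heap, so we negate the tip for a max-heap

def join_queue(name, tip):
    heapq.heappush(queue, (-tip, name))

join_queue("early bird", 8.0)
join_queue("average tipper", 10.0)
join_queue("big spender", 22.0)  # arrives last, but promises the most

served = [heapq.heappop(queue)[1] for _ in range(3)]
print(served)  # ['big spender', 'average tipper', 'early bird']
```

Whoever promises the biggest tip "blows by" everyone already waiting, exactly as in the animation.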

Lesson 10.1: Attribute Variable Entity Spreadsheet

In this lesson, we'll be looking at different spreadsheets, namely the attribute, variable, and entity spreadsheets. Again, these are actually kind of easy; powerful, but easy.

Here's our overview. In the last lesson I talked about the assign module and how it can be used to give values to attributes and variables; you can also use it to change entity graphics, that is, entity pictures. In this lesson, we'll look at the spreadsheets associated with those things: attribute, variable, and entity.
Here's kind of what they look like. In particular, the attribute spreadsheet keeps track of existing attributes that, for instance, you might define in an assign module. So you define an attribute and it pops over to the attribute spreadsheet, where you can manipulate it and use it. See, there it is. For instance, I've got an attribute called Height, and it has three rows, whatever that means. We can specify the initial values, and I could also have defined the attribute here myself by double-clicking. What's going on is that in this case we have three rows because attributes can be vectorized: you can have a vector attribute, and I'll give an example in a while. This one has three entries in the vector, corresponding to the three rows, and the initial values are 1.3, 1.2, and 1.8, respectively. Now, there's a variable spreadsheet, which you can pretty much do the same things with; you can keep track of all the variables. I actually use it a little bit more than the attribute spreadsheet.
But it looks pretty much the same. The entity spreadsheet allows you to set an initial picture for your customers, along with a couple of other things, but we'll probably use it only for setting initial pictures. See, there it is; very simple, and as we've hinted, you've actually seen it a couple of times.
So in summary, we had a very easy little discussion about the attribute, variable, and entity spreadsheets; we're really pushing ahead with finishing off this basic process template.
And next time I'll discuss what are known as Arena internal variables. These are things that are always hanging around in the background. You don't really need to know about them unless you need them for something, but they're being calculated all the time: every time something happens, the internal variables are automatically updated. If you need to know the current queue length, or how many servers are available, Arena maintains that information all the time, so these are really useful.
So I'll talk about these in the next lesson, and, you know, make sure you show up on time, because I don't wait for people. We'll see you then.

Lesson 10.2: Attribute Variable Entity Spreadsheet Demo

In this demo, an easy little fun demo I think, I've set up a small example combining attributes, variables, assigns, the whole megillah. What we'll do is I'll run this little example first and then show you what it looks like internally. Let's run it nice and slowly, I hope.
This is interesting; let me slow it down. First of all, it's a little too fast, but you can see something is causing a problem in this line here.
Let's slow it down and see what caused the problem. We go to Run Setup; we'll learn more about that later. I could use another zero here; this will slow it down by a factor of 10. And so let's run the little fella.
It goes a little fast. You can see, in that assign block up on top, the entity turns into a woman. Here it turns into a guy, and here it turns into a truck; I was running out of icons. And I'm going to step through this, one step at a time.
Click.
A woman shows up; she's getting served.
A guy shows up; now he's in line, waiting. A truck shows up.
Now the woman just left, so the guy is being served. You can see there are two entities in the system: the guy being served and the truck waiting in line. I'll just tell you now, and you'll see in a minute, that the truck is more troublesome. This is what I mean by a troublesome customer: the type 3 customer.
It turns out this guy has longer service times. I'm not saying that this particular truck is going to mess things up, but they will eventually. Let's watch what happens; I'll step through a couple more customers. There's another guy that's been generated; he's going to wait in line right now. There, see? OK.
Now the guy is waiting in line, and the truck is being served at this point. This number here is keeping track of the number of people in queue, so we have: one truck being served and one man waiting in line; two people total, one guy in the line. That's all those numbers mean. Let's watch.
There's another woman generated; she waits in line.
Another guy is generated; he waits in line.
Another truck is generated, and we don't like these trucks, because they take a lot of time. So it seems that the truck that's currently sitting there getting served has caused a little bit of a line to develop, and that was the point; we'll see why in a minute.
A woman shows up; she's in line, so that truck has taken some time. And we can see, yeah, what a freaking hassle. Look at that. Letting it run now, the truck is causing a huge traffic jam. And now let's look at the guts of the program and see why.
This is a regular old process, a seize-delay-release with the Barber. And here's the interesting thing: EXPO. We know EXPO, and we know Mean; that's no problem. But what does this mean: Mean, parentheses, CustomerType? Remember that if you have parentheses like that, it kind of reminds you of a vector, right? So, Mean(CustomerType). I'll just give it away: Mean(CustomerType) is the mean service time corresponding to the customer type. And where is the customer type defined? Right here; it's an attribute, and the attribute is CustomerType. If you're generated from this block, then you're going to be customer type 1. If you're generated from this Create, you're customer type 2, and you're a man. If you're generated from Create #3,

which doesn't happen very often (it happens every 10 hours), then you're customer type 3: a truck.
These other Creates, by the way, generate a customer every one hour.
In summary, the trucks are generated rarely, but they have long service times. Let's see what the service times are.
So we have an attribute called CustomerType; it's just 1, 2, or 3. And let's look at the variable Mean, the mean service time. It's got three rows, and look at this: the mean service times are 5, 5, and 20.
So these trucks have a mean service time of 20, way higher than the other guys, and that's what's causing the problem. So we've assigned a vector variable called Mean with values 5, 5, 20, and we call it right here: EXPO(Mean(CustomerType)). So if you're customer type 3, you've got a mean of 20; if you're customer type 1, the mean is 5. See how easy it is? I'd like you to go and play with this, and you can do your own variations, but it's quite simple.
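A plain-Python sketch of this vector-of-means idea (illustrative, not Arena code; note that Python's `random.expovariate` takes a rate, which is the reciprocal of the mean):

```python
import random

random.seed(7)

# The vector variable: one mean service time per customer type.
# Type 3 (the truck) has a mean of 20, versus 5 for types 1 and 2.
mean_service_time = {1: 5.0, 2: 5.0, 3: 20.0}

def service_time(customer_type):
    # The analogue of Arena's EXPO(Mean(CustomerType)):
    # exponential with the mean looked up by customer type.
    return random.expovariate(1.0 / mean_service_time[customer_type])

n = 100_000
avg_truck = sum(service_time(3) for _ in range(n)) / n
avg_man = sum(service_time(2) for _ in range(n)) / n
print(round(avg_truck, 1), round(avg_man, 1))  # near 20 and 5, respectively
```

One lookup per entity, keyed by its attribute, is all it takes to give the rare trucks their long service times.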

Lesson 11.1: Arena Internal Variables

In this lesson, I'm going to be talking about Arena internal variables. These are the ones that I've mentioned before, the ones kind of hiding off in secret, updating as the simulation progresses.
In the last lesson we went through the attribute, variable, and entity spreadsheets.
In this lesson, I'm going to talk about these internal variables that automatically update and recalculate themselves as the simulation goes on. They're incredibly useful; you make a lot of decisions using these things. They look at the simulation's current state, and then, based on that state, you can take certain actions.
Here's a summary of the kinds of things you'll see. First of all, as I've said, Arena keeps track of all these variables and updates tons and tons of stuff as the simulation runs. I don't even know all the things that it updates, because I only use a small subset. If you go into the Build Expression tool that we looked at once before, there's a whole list of things, but even that is only a subset of the various kinds of things that get updated behind the scenes. So these things are good for making decisions, drawing graphs, all sorts of things.
Here are a couple of examples. TNOW is very, very useful because it keeps track of the current simulation time: every time something happens in the simulation, every single event, TNOW gets updated, and you can use it to calculate the current time. And for a variety of reasons that we'll see as the lessons progress, TNOW is very important.
NR keeps track of the number of servers in a particular resource who are currently working.

183
So if you have a resource called Barber and he's got 5 servers on duty, it could be that only three of them are currently working, so NR(Barber) would equal 3: the number of servers in the resource Barber who are currently working. Similarly, if I want to know how many customers are currently in a queue, we've seen this before: we've got this thing called Process 1.Queue, which is the automatic name a queue is given if you're in the process called Process 1, and NQ keeps track of how many people are currently in the queue; it updates every time somebody arrives at or leaves that queue. There's also some more esoteric stuff. Create 1.NumberOut, if you're so inclined to look at it, is the number of customers who have so far left the Create module named Create 1. That's a way of keeping track of how many customers have come out of that Create block. It's just this huge list, and I would say go and look in Build Expression and you'll get an idea of what's out there. And now it's demo time. One of the things we'll look at is going to the shortest line; it's kind of an interesting demo, and it's based on these internal variables.
The summary of what we did: this time we introduced the internal variables, such as TNOW, NR, and NQ, et cetera. They're constantly updated, and they give you lots of relevant info that you can use for all sorts of decision making.
In the next lesson, I'm going to give you a brief tutorial on how you display variables, graphs and results.
It's quite nice, and the graphical capabilities of Arena are very good, because, you know, managers and people like that like to see these things. So, OK, I'll see you in the next lesson. Take a look at the demo.

Lesson 11.2: Arena Internal Variables Demo

In this demo I'm going to show you how to do a very, very simple manipulation to have customers try to go for
the shorter of two queues.
I mean, that's what I always do when I'm grocery shopping. Of course, I'll get in the shortest queue, but it'll be the one where the guy is paying for his food in pennies, so it takes forever. So shortest is not always the best, but whatever. Let's look at this so you can see: here the queue sizes are being kept track of.
And the customer is very smart: he always goes to whichever line is currently smaller. The lines are kind of keeping up with each other. I've rigged this so that the lines build up; I didn't mean them to, but they're building up. You can see that each customer is being very careful about trying to get to the shorter line, so the two lines stay even with each other.
Right, so let's see how this logic works. This is the decide block, and it's trivial. The decide block is a two-way-by-condition. It says: if this expression is true, go to the right, up here; if the expression is false, go down here.
Let's take a look at the expression in plain English: if the number of people in Process 1.Queue (that's the first one) is smaller than the number of people in Process 2.Queue, I go to Process 1.

If it's not less, I go to Process 2. Very easy, and there's nothing else interesting going on here: this is just a seize-delay-release of Resource 1, and a seize-delay-release of Resource 2. You can see that in two seconds you've got yourself a scheme that goes to the shorter queue, and you can do this for all sorts of things. So, OK, simple demo; we're done.
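The shortest-line logic, built on the live queue-length values, can be sketched in a few lines of Python (illustrative only; the queue names mirror the demo's):

```python
# Sketch of the two-way-by-condition decide block: route each arriving
# customer to whichever queue is currently shorter, using live queue
# lengths (the analogue of Arena's NQ(Process 1.Queue) and NQ(Process 2.Queue)).
queue1, queue2 = [], []

def route(customer):
    # "If NQ(Process 1.Queue) < NQ(Process 2.Queue), go to Process 1;
    #  otherwise (including ties), go to Process 2."
    if len(queue1) < len(queue2):
        queue1.append(customer)
        return 1
    queue2.append(customer)
    return 2

choices = [route(f"cust{i}") for i in range(6)]
print(choices)  # [2, 1, 2, 1, 2, 1] -- ties go to Process 2, so the lines stay even
```

Because ties fail the strict "less than" test, customer 1 goes to Process 2, which is exactly why the two lines keep pace with each other in the animation.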

Lesson 12.1: Displaying Variables Graph Results

In this lesson, I'm going to talk about displaying things: how do you display variables as they update, graphs, even results? There are a lot of nice capabilities that Arena has as a graphical simulation package.
Here's our overview. In the last lesson I talked about those sneaky little internal variables that Arena is constantly updating; they're hiding out doing stuff for you.
In this lesson, we'll talk about how we can display the values of certain variables in real time, how we can construct useful graphs and histograms, and how to produce nice output files that you can take a look at after the simulation is over. How do we get information out of the simulation? How do we display information while the simulation is running, and after it's over, et cetera? Arena provides lots of capability, simply.
For example, there's a toolbar that you'll see on the main page; it looks a little bit like this, with a lot of funny-looking icons on it. You can get a clock: right there, click on the clock button. We've seen a digital clock before; I've snuck those into a couple of demos.
You can get a calendar if you want: click on the calendar button. You can look at different displays of variables as they update in real time. See this little thing right there? That's a variable counter; that's twenty of something. You can look at histograms; I'll show one in the demo. You can look at plain old graphs; we'll do those in the demo too. And then finally, when the simulation is over, it generates an output file that gives useful information on server usage, the lengths of the queues (the average length, maybe confidence intervals), customer waits and cycle times, and anything else that you can define yourself. A lot of that stuff is automatic, but if you define certain things yourself, it'll also give you information on those. So I'm going to give a fairly detailed example during demo time.
Now we'll look at a drill press example, and then I'll show some of the capabilities of the different graphics functionalities.
The summary is that, well, we discussed ways to display things today: how to display variables in real time, graphs, and output files that you can look at after the simulation is over.

Next I'll do a quick tutorial on the final modules from the basic process panel, namely batch, separate, and record, and then we'll be in a position to do all sorts of really nice programs. So that'll be the goal of the next lesson; make sure to tune in, 'cause it's going to be an important one.

Lesson 12.2: Displaying Variables Graph Results Demo

In this demo, we're going to be looking at a drill press, with the intention of talking about some of the display and graphics capabilities that Arena has. I'm actually plagiarizing this from Rockwell; this is their Model 3-1. I've added some functionality to it, but to be fair, I stole it from them. Let's watch. Stuff arrives, and we go to a drilling center. I think there's only one server here: yeah, a drill press, and we're going to ask for one unit of it. Anyway, let's see. We'll go to the basic process panel and the resource spreadsheet. Yep, there's one drill; we ask for it, and then we leave the system.
Along the way, I'll make a little graph of the number of people waiting in line and of the number of servers (drill presses) busy. Now, there's only one of them, so we could just go from zero to 1 here, not zero to four. And I'm going to give you a histogram of the number of people in queue. The histogram shows, over time, what proportion of time there were zero people waiting, one person, two people, three people, etc.
So these two graphs, the plot and the histogram, are complementary to each other. And then we'll look at the output after the run is over. I'm going to go to Run Setup and replication parameters; oh, this thing is going to end after 2000 minutes. We'll go into more detail on run control later on; since we're doing graphics now, let's put a clock there. Let's do a digital clock, OK?
OK, and besides my digital clock, we can also look at the day of the year, so we'll do a little calendar. Let's see, starting date: I guess it's August 2nd today, when I'm doing the show. There, so we have all sorts of nice information. Let's drag it a little bit. OK, so many things, and here we go; let's run it.
Yeah, the histogram, look at that: it's updating constantly, everything updating all the time. Here's the number of people waiting in line: not too much, usually. A couple; see, it pops up a little bit. The server is always busy, so that plot is just staying at one. And look, the histogram is kind of neat.
Well, this is a little boring to me. Let's see if I can just make a little change to the drill center. The average amount of time in this triangular distribution, I happen to know, is (1 + 3 + 6) / 3, so, what is that, 10/3, three and a third: that's the average service time. The mean interarrival time seems to be 4, so let's make that a little closer: let's make it 3.4, so the mean interarrival time is only slightly longer than the mean service time. That means the lines are going to get a little bigger. And let's watch.
Yeah, so the histogram is going to sit a little more over to the right. The queue is building up more now; not hugely, but it will occasionally get fairly large. See, it's getting bigger; in the histogram it's certainly flatter and more to the right than it was before. And right now the clock down here, by the way, reads about 200.

Two or three hundred minutes have passed; it's 5 o'clock, and we're going to run the thing for 2000 minutes. Let me make this go faster. Notice that as we go on, the histogram is still changing, but less and less. And apparently I'm not updating this graph; sometimes I can make it flow along. But the histogram looks like it's almost in steady state: it's basically not changing anymore. We're going to hit 2000 in a minute.
1,800... 1,900... 2,000, and the simulation is over, so of course I want to see the results. One unfortunate thing is that sometimes it takes a while for the results to manifest, so I will stall for a second. Depending on whether the computer happens to be in a good mood or a bad mood, the results take a while.
And here they are. This is a Crystal Reports set of results; we'll find out in a later lesson how to avoid the time it takes to open this thing. There are only four pages, see, one through four, and it gives me key performance indicators. It looks like the number of customers that left the system was 576; I guess that makes sense if they're coming in around every three or four minutes out of 2000 minutes. Let's look and see what other stuff we have. On the times, it's telling me how long the parts are in the system.
So on average, guys waited 37.4 minutes; one poor soul waited 80 minutes. Lots of interesting things here. How long did it take to process these guys? Well, on average it took 3.33 minutes to process customers; that's right, that was the mean. So the average time in the system is 37.4 minutes waiting plus 3.33 getting served: you're out of there in about 40.7 minutes. Very nice. And like we said, 576 guys left and 595 entered, so it seems there were about 19 guys still in the system when the simulation ended.
And here's the additional information on the queue: time in queue, and the number of people in the queue, 11 on average, although it got up to 25 at one point and down to zero at another. And the usage: well, it says here that the server is busy 96% of the time; we're really giving him a workout. But that's because customers were showing up almost as fast as he could handle them, so that makes sense to me.
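Those reported numbers hang together with a quick back-of-the-envelope check (plain arithmetic, using only the values quoted above):

```python
# Back-of-envelope checks on the drill-press numbers quoted above.
mean_service = (1 + 3 + 6) / 3            # mean of TRIA(1, 3, 6) = 10/3, about 3.33 min
mean_interarrival = 3.4                   # after we tightened the arrivals

utilization = mean_service / mean_interarrival
print(round(utilization, 2))              # 0.98 -- consistent with the ~96% busy reported

entered, departed = 595, 576
print(entered - departed)                 # 19 entities still in the system at time 2000

avg_time_in_system = 37.4 + mean_service  # average wait + average service
print(round(avg_time_in_system, 1))       # 40.7 minutes
```

An offered utilization near 1 is exactly the regime where queues occasionally balloon, which is what the histogram showed.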

Lesson 13.1: Run Batch Separate Record Modules

Hello, gentle class. I hope everybody had a nice break since the last lesson. In this lesson I'm going to be talking about the batch, separate, and record modules within Arena. These are sort of the last ones that we'll do in the basic panel, and these are kind of bonus ones that I actually use. So, in the last lesson I looked at a bunch of ways that you could obtain information during and after the run, namely histograms, graphs, output reports, etc.
And in this lesson, we'll discuss, as I said, the batch, separate, and record modules, and that's it for the basic process template, at least; we'll be doing some more advanced stuff in a while.
We'll start off looking at the batch module. What the batch module does is combine, or batch, multiple customers into a super customer: a really cool customer. This is what it looks like. For instance, if I take a batch size equal to 3, it accumulates 3 guys before sending that group off as one super customer. Now, if you want to retain information about those individual customers, we'll talk about that in a second. But if you don't need the information about the individual customers later on, then what you can do is choose the type equal to Permanent, and the customers will lose their individual identities in some sense.
If you want to eventually reconstitute the original members, set the type equal to Temporary, because that means we're batching them only temporarily: when they get reconstituted, they're going to be their good old selves, what they started out with. Temporary batches are going to have to be split before being disposed; in terms of being split, keep on watching, we'll talk about that in a minute.
Now the sister module to the batch module is the separate module: batch, separate. What the separate module does is duplicate a single customer (a single entity), or split multiple entities that had previously been combined in a batch module. So it can do a couple of things. If some guy has just shown up and you want to split him into a bunch of clones, that's easy. Or, if you've already combined customers into a super customer in a batch module, it can split those guys back up, either into the original guys or, if you've signed on to the Permanent type, into clones of the super customer. So if you're dealing with a permanent batch, what I usually do is use Duplicate Original to get several customers, but they all have the same attributes. "Original" in this case means the thing that entered the separate module, so in permanent mode that's the one super customer. The members are not going to get split back up into the old customers that went into the batch; when you split up a permanent batch, you're going to get clones with the same attributes.
This is what that looks like: you hit Duplicate Original, and you're going to get one duplicate, so that's two altogether.
If you're dealing with a temporary batch, now, this is where we want to get back the original customers that went into the batch, with their old attributes. You split the existing batch to reproduce those customers: choose Split Existing Batch, and they retain their old entity values from before.
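The permanent-versus-temporary behavior can be sketched in Python (an illustration of the semantics described above, not Arena internals; the dict-based "entities" are hypothetical stand-ins):

```python
from copy import deepcopy

def batch(members, kind):
    # Save Criterion = Last: the super customer takes the last member's attributes.
    super_customer = {
        "attrs": deepcopy(members[-1]),
        "members": members if kind == "temporary" else None,
    }
    return super_customer

def separate(super_customer, n_duplicates=1):
    if super_customer["members"] is not None:
        # Temporary batch: Split Existing Batch restores the originals.
        return super_customer["members"]
    # Permanent batch: Duplicate Original yields the original entity
    # plus n clones, all sharing the saved (super customer) attributes.
    return [deepcopy(super_customer["attrs"]) for _ in range(1 + n_duplicates)]

man = {"picture": "man", "tip": 12}
woman = {"picture": "woman", "tip": 7}

perm = separate(batch([man, woman], "permanent"))
print([e["picture"] for e in perm])  # ['woman', 'woman'] -- two clones of the last member

temp = separate(batch([man, woman], "temporary"))
print([e["picture"] for e in temp])  # ['man', 'woman'] -- originals restored
```

This is also a preview of the upcoming demo: a permanent batch of man-then-woman, saved with criterion Last and then duplicated, comes out as two women.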
It's pretty easy; you have to do this a couple of times, but it's pretty easy. The record module? All it does is gather statistics; nothing, no big shakes. We'll talk about this module as we encounter it in future examples. It's very straightforward: an entity passes through it, and it clicks off a statistic that it'll memorize for later. Now, demo time.
Now I'm going to look at a couple of permutations of the batch and separate modules and you'll see what the
differences are.
In this lesson, we finished off the basic process template by looking at the batch, separate, and record modules; all pretty easy. In the next lesson, I'm going to look at a bunch of run setup and other control functionalities that allow us to run the simulation in different modes. So, gotta run; see you next time.

Lesson 13.2: Run Batch Separate Record Modules Demo

In this demo I'm going to talk about some of the things that you can do with the Batch and Separate modules, because there are a couple of different permutations of these things.
Let me run this little program that I wrote. You can see that when entities enter this Assign block, some logic I've put in there (you can take a look at it yourself; it's kind of cute) enables them to change into women and men alternately as they go through this Decide and then the Assigns. So you can see they come in: there's a woman; the next one is going to come down through here. You'll see him come down and become a man. It's just for purposes of illustration: woman, man, woman, man. You can look at the logic if you want. Then a pair, a woman and a man, meet up at this Batch module; let me zoom in.
So you'll see the woman and man meet here (actually, it's man and woman), and when they meet they go off together into the Separate block. Kind of like marriage, I guess. So: woman, man, and then they go into a Separate block. What I'd like you to do now is look into the logic here and see why it is that two women are coming out of the Separate block. The reason I drew the different graphics was to show you the difference in the things that you can choose in the Batch and Separate blocks. So let's take a look real quick.
In this Batch block, we have a batch size of two; that's why two people have to go in before the batch leaves as the super-customer. The type is Permanent. Remember, that means that when they come out, they're going to have the characteristics of the super-customer and will lose their original attributes. For the Save Criterion I use Last, which means we save the attributes according to the last person in the batch.
So it's going to be a woman in this case, because man and woman go in in that order. Now, in the Separate module, remember we said Duplicate Original. So what happens now is that the super-customer comes in as a woman of batch size two, and when we separate them, they separate out as women, because we're duplicating the original; the attributes come out as the attributes of the woman. So again, let's watch. A woman comes in, and the woman is the last person in the batch. Watch: again man, then woman; the super-customer comes into the Separate, and out come two women, because we kept the same attributes as the entity, the super-customer, entering the Separate. You can play around with that, and you'll see I've put some explanation in the text there. OK, now let's go down a little further in the same demo, and we're going to do sort of the opposite thing. Now I'm going to use a split, because I'm using temporary entities. Everything is kind of the same until here.

189
We have, in the same order, a man coming in. There's a man; he's going to wait for the woman to come into the batch. Here we go, and off they go. But look at that: they come out as a man and a woman; they're retaining their original attributes, and I'll just show you how you do that. It's very easy. So click; let me scroll in and zoom in on the Batch.

Remember, see I say temporary.


So we set it to Temporary, and then in the Separate module we say Split Existing Batch and retain original entity values, just like in the notes, and there you go. That's how you do it.

Lesson 14.1: Run Setup Control

In this lesson, I'm going to be talking about run setup and control: how do you run these Arena simulations? Here's the overview. In the last lesson I finished off the Basic Process template by talking about the Batch, Separate, and Record modules. They're kind of nice; we don't use them all the time, but they're very nice. In this lesson, I'm going to look at a bunch of easy little pieces of trivia, just to help you get your simulation runs going and to take a look at some outputs afterwards.
Here's a discussion of Run Setup. In the Arena screen, you go to the Run Setup functionality, and you'll see lots of stuff. So go up there on top, click on Run Setup, and you get tons of things. It looks overwhelming, actually; there are lots of tabs and lots of things we can do, and we'll talk about a couple of them. The Replication Parameters tab is the one that I use most often; it's got a lot of nice stuff in it. It keeps track of the number of independent runs of the simulation, because in most simulations you repeat the runs over and over; that's called the number of replications. You can initialize between the replications, so you start completely from scratch each time. Do you turn off the statistics collection so that you start from scratch with respect to the statistics? With respect to the system state?
These are little decisions that you have to make, and we'll talk about them as the course progresses. How long is your warm-up period going to be, for instance? That is, how long do you run the simulation before you start keeping data? If you're interested in a steady-state simulation, why bother keeping data right at the beginning of the run, when the darn thing is still warming up? That's another decision you can make. Replication length: how long is each run, that is, how long does it go until the simulation stops? Replication length is typically in time, so I could say, well, I want to run the simulation for 24 hours. There are other ways to stop the simulation: I could use the Create module to run it for a certain number of customers, or, as you'll see in a little while, I can do another trick and use a terminating condition.

That's the trick I'm going to show you; we'll give an example in a little while. These are special ways to stop the simulation. We could say, maybe, stop the simulation when there are exactly 4 customers in the queue, because we've got a problem and it doesn't make any sense to go on, or something like that. If you have a specific terminating condition, you can specify it.
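To make these ideas concrete outside of Arena, here is a tiny Python sketch of independent replications with a warm-up period, using Lindley's recursion for customer waiting times in a single-server queue. The model, parameters, and names are mine, purely for illustration; Arena handles all of this for you through the Replication Parameters tab:

```python
import random

# Toy illustration (not Arena): independent replications of a single-server
# queue via Lindley's recursion W_{i+1} = max(0, W_i + S_i - A_{i+1}),
# with a warm-up period deleted before statistics are kept.

def one_replication(n_custs, warmup, arr_rate=1.0, svc_rate=1.25, seed=None):
    rng = random.Random(seed)        # each replication gets its own stream
    w, waits = 0.0, []
    for i in range(n_custs):
        if i >= warmup:              # keep data only after the warm-up
            waits.append(w)
        s = rng.expovariate(svc_rate)   # this customer's service time
        a = rng.expovariate(arr_rate)   # time until the next arrival
        w = max(0.0, w + s - a)         # Lindley recursion
    return sum(waits) / len(waits)      # this replication's average wait

# Independent replications, each initialized from scratch -> one mean apiece.
reps = [one_replication(5000, warmup=500, seed=k) for k in range(10)]
print(sum(reps) / len(reps))            # grand mean of the replication means
```

Each call to `one_replication` is one "run" in Arena's sense: its own random stream, its own warm-up deletion, and one summary statistic out; the 10 replication means are what you would then feed into output analysis.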
In Run Setup I can also adjust the speed of the run. You've seen me do that a couple of times already in the demos. Sometimes I need to speed up or slow down the simulation by more than what I can do on the main screen: there's a little slider I can move, and I can hit the greater-than or less-than sign to go faster or slower. But if those aren't enough, I can actually go into the Run Speed tab and change some numbers, and I'll show you how to do that at demo time. Arena, as we've seen in a previous demo, has a variety of different reports; I'll go into a little bit more detail on those.
These reports are what come up at the end of the run. Sometimes they're annoying, and sometimes they're more succinct. In any case, the reports contain information on customer waiting times, the expected and observed lengths of queues, server utilizations, and other things like that. In addition, you can get a Category Overview that summarizes the information across all the replications. A particularly annoying thing is that if I do 100 replications of the simulation and I'm not careful, I get all the information from those hundred replications displayed in the report, which is annoying. So if I want, I can just get the Category Overview, which summarizes the info from all the replications; I'll show you what I mean by that.
If I want, for some crazy reason, to look at every single tedious replication, I can do that by looking at Category by Replication. And finally, I can look at a very concise SIMAN report, which produces a text file, kind of a primitive-looking thing. But if you're old like me, you'll remember that SIMAN was sort of the original version of Arena, before it was nice and pretty and had graphics; SIMAN was a regular programming language, and this is the old text-file version of the report that came out of that program. So you can see it's a little bit old-looking, but in fact it's kind of nice; it's very succinct in some ways.
In terms of run control, I can go to the Run Control area, which gives me a variety of ways to actually run the simulation. For example, I can run the thing in batch (no-animation) mode, which is really nice: it turns off all the graphics and results in extremely fast runs, because you're not keeping track of the graphics at every point, which takes up a lot of processing time. Of course, you would only use this if you're happy with the simulation and don't need to see the graphics, but once you're happy with it, this is a fast way to conduct the runs.
So, demo time: I'm going to look at a couple of reports, and I'm going to look at an example where I stop the simulation the first time the queue hits size 4. Here's a summary of what we did in this lesson. I showed how to do various setup and run-control tasks. For instance, we heard about independent replications, which will be very important when we do output analysis later on. We learned how to do run termination from the Setup menu, among other places, and we talked a little bit about output reports.

In the next lesson I'm going to do a small case study involving a two-channel manufacturing system where we put a lot of this material together. We've been working through all these easy little lessons to aim at this slightly more substantial example, so we'll see you then.

Lesson 14.2: Run Setup Control Demo

In this demo I'm going to be talking about reports, run control, and terminating conditions. It's actually a very easy, nice little demo, I think. You can see I'm starting off with a barber example that I set up in advance: Create, Process, Dispose, with a seize-delay-release of the barber. And you can see I've been spending the last few minutes trying to make this nice, beautiful line, but now I'm old and I can't do it. How do you make a nice, beautiful line? I figured this is as good a place as any to show you a little trick that you can do.
Let's do View, and I'm going to click Grid so I get these nice little grid dots, and then I'm going to click Snap to Grid. This just enables me to do a little bit more fine-tuning here. Look at that. There we go; this is a pretty good line now, and since I'm a little obsessive-compulsive, it allows me to have this more or less nice straight line. I guess I could probably get one or two pixels better, but that's good enough for now. So anyway, let's run the little thing. I've set this up so that occasionally, look at that, a line forms.
Actually a substantial one, sometimes. So let's go to the Run Setup area here, and you can see all these little things that I promised. Oh, here's the first thing: boy, that was going kind of fast before, wasn't it, even after I slowed it down? Let's see what I get here. Yeah, it's still a little bit fast, and you can see I've gone as slow as I can here. So while I'm at it, let's do Run Setup and slow the thing down. What I'll do is make it a little slower by adding another zero to the animation speed factor. Here we go; it's probably going to be a little too slow now.
A little slow. Now I can use the greater-than sign, or this fellow up here, to speed it up just a little bit; see how it works? Very simple. I'll keep it nice and slow. So I'll use the less-than sign to slow it down a little more. OK, so I go back to Run Setup, and let's go to Replication Parameters.
You can see all these things that I promised you. Let's just do one replication for now, and I can control how long I want to run the simulation. We have no warm-up period. I could run the simulation an infinite amount of time, but that's crazy, so maybe we run it for, say, 100 hours; let's see what happens. And just to prove that we're running it for 100 hours, you'll be able to see a little clock down here as the simulation progresses.
See, right there, there's a clock; we've run it for one hour so far. Let's speed it up now and watch. Look, 100 hours, as promised. So do I want to look at the output? Yeah, let's look at it.

And look at this: it has conveniently made me a SIMAN output sheet. It's giving me information like the barber's utilization: he was working pretty much all the time. I think I had too many customers showing up; 116 customers came in and only 97 got out, so you can see that the line was just getting too big, and the reason is that customers were coming in too quickly. So that's nice. How did I get that output? Well, let's go back to Run Setup.
We go to Reports, and you can see that I'm using the SIMAN summary report. If I want to look at Category Overview or Category by Replication (let's just look at Category Overview), I can rerun the simulation. There we go, it's over. And just like at the end of a previous demo, this is going to take a second before the output loads. This is always a hassle; I never know how long it's going to take for this Crystal Reports thing to work, and it's incredibly annoying sometimes.
That's why I primarily use the SIMAN report. So here we go: this is a three-page thing that took forever to load, showing the number out. We had 97 customers in that 100-hour run, and it keeps track of lots of nice statistics. It tells you how many entities were in the system; for work in process, there were 7.9 customers in the system on average. Lots of nice things. The queue had 6.2 people in it... oh sorry, that's the waiting time, 6.2 hours; the number of people in the queue was 6.9 on average. The server was utilized 100% of the time. So yeah, it's nice; it gives you a lot of cool information.
So let's get out of there and back to the main screen. Suppose I really hate the fact that I've got such a big line. So let's go to Run Setup; I'm going to do one last thing. Go to Replication Parameters: I'm going to stop this thing when we have a queue of length four or more, so we use that as a terminating condition. Now, I forget what the queue is called, so let's go to the Queue spreadsheet. Oh, it's called Process 1.Queue. I'm very lazy, so I'll just copy this: Process 1.Queue.
Go back to Run Setup, and my terminating condition is going to be NQ(Process 1.Queue) == 4: when the queue size becomes 4, the simulation will end. Let's run this thing. Now I'm going to do it a little bit slowly; I'll step through it.
Oops, the syntax is wrong: I have to use a logical expression. So let's do that again. See, I did that on purpose; I made that mistake. Double equals four: really equals. So let's run it. The nice thing is that Arena tells you if you make a mistake. Of course, I did that totally on purpose for your viewing pleasure. So here we go.
The first guy shows up. OK, now that means there are two people in this Process: one is being served, and one is now in the queue, so actually the second guy has shown up. Here we go, do another one. Oh, I need to slow this down.

Oh. The next guy shows up; he is now the second person in the queue. The next guy shows up; he's the third person in the queue. When are we going to stop the simulation? When NQ == 4. Any second now; let's see.
OK, he's not in the queue until he pops up there. He showed up; I think on the next click things are going to happen. Ah, he got into the queue, and the simulation wants to know if we want to stop, and we say yes. And then Crystal Reports is going to take 9000 years to load, so this is a great place to end the demo, and we'll see you in the next lesson.

Lesson 15.1: Simple Two Channel Manufacturing Example

In this lesson, I'm finally going to look at a sort of nontrivial two-channel manufacturing example. This is a step up from some of the earlier examples that we've been doing with Arena. Here's our overview. In the last lesson I looked at different ways to do run control and setup: how to implement independent replications, run speed, reports, that kind of thing. In this lesson, I'll put everything together and demo a two-channel manufacturing system that uses several of the concepts from the last few lessons simultaneously. I want you to pay particular attention in the upcoming demo to how we use attributes; we're going to use attributes to memorize things.
Here's the story. I'm plagiarizing this model, Model 4-1, from the Simulation with Arena book, which you may or may not have; I just want to be straightforward about this. In this example we have two different types of arrival streams, and we've looked at multiple arrival streams before; now I'll be a little more formal about it. We have Type A parts, which show up one at a time. Occasionally we have Type B parts, which show up four at a time, in little batches. Type A's show up a little more often than Type B's, but on average there's sort of the same number of A's and B's in the system. I've got little colored marbles, and you'll be able to tell which is which.
Type A's feed into a Prep A server, where I guess they get prepared; Type B's go to a Prep B server; and the two types have different service times at these preparation stations. Here's the cool thing: the parts then get processed by the same Sealer server. I don't know exactly what a sealer is, but whatever it is, both types get processed by the same guy. So they start out at Prep A and Prep B, respectively, but then they both go to the same server; they compete for the same resource. What makes this interesting is that Part A's and Part B's have different service-time distributions at the Sealer, so the Sealer is going to have to identify which part he's dealing with. All parts then undergo an inspection. If they pass, they're out of there. If they fail the inspection, they have to get reworked: they go to another server, then they get another inspection, and whether or not they pass, they're out of there. So they get two tries, and hopefully they pass by the second.

Here's a flowchart that I stole from the book. Part A's come in according to an exponential distribution with a mean of five minutes. Part B's come in every Expo(30) minutes, but in batches of four, so it's about the same arrival rate if you count them individually, but they come in in batches of four. Part A's go to the preparation area, Part A Prep, where they have a triangular distribution for the preparation time. Part B's go to a separate preparation center, where they have a different triangular distribution, and that's fine. However, both types of parts, A and B, then go to the common Sealer. The Part A's have a triangular sealing time, and the Part B's have a Weibull distribution for theirs; the point is that they're different, and the Sealer has to know that. How is he going to know it? We'll see. In any case, 9% of all parts coming out of the Sealer fail inspection and have to get reworked. If they are so unlucky as to get reworked, that's going to take 45 minutes on average, and then 20% of those are totally no good and get scrapped; the other 80% are OK, they're salvaged, and they get shipped out. Over here, the 91% that pass the original inspection get shipped out. Happy, happy, happy.
So the big question is how to handle the different A and B service times at the Sealer. How does the Sealer know? I've got a couple of different things you can do. Here's Trick 1: pre-assign the service time as an attribute, which we'll call Sealer Time (what a surprise), in an Assign module immediately after the part arrives. So the first thing that happens when the part arrives, with no time passing, is that we hit it with an Assign module, whose purpose is to tell it what its upcoming sealer time is going to be: we give it a triangular or a Weibull observation, depending on whether it's Type A or Type B. Since it's an attribute, each part memorizes its sealer time for the future and doesn't forget it. When it gets to the Sealer, it tells the Sealer, in effect, "I've got a triangular" or "I've got a Weibull." Each part has its own attribute that it remembers from then on, and we use that attribute, so regardless of whether it's a Type A or Type B, we know what the sealer time is going to be, even though we assigned it earlier. I mean, heck, it's our simulation; we can do this stuff whenever we want, right?
Here's a sub-trick of Trick 1: while I'm at it, I'm going to use the Assign module to store each part's arrival time, also as an attribute, because we're interested in seeing what the cycle times are going to be, so we need to know when the guy shows up, and we'll store that as an attribute. Now, how do we know when he shows up? When we hit the Assign block, I'll use the Arena variable TNOW, which is the current simulation time, and I'll assign the Arrive Time attribute the value of TNOW. It's perfectly OK to assign an attribute a value equal to the Arena internal variable TNOW. So if he showed up at time 11.7, that customer always carries the arrival time 11.7. Then we'll record the departure times just before the parts get disposed; we'll finally use that Record block, and this will allow us to get cycle times. I'll take the departure time, that is, the time when he hits the Record block just before the Dispose, minus the Arrive Time attribute. What we're interested in is how long these guys stay in the system, depending on whether they pass the first inspection, pass the second inspection, or fail both inspections.
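In pseudo-Python, the arrival-time trick boils down to the following sketch. The names are invented for illustration; in Arena the "assign" is an Assign module setting Arrive Time = TNOW, and the "record" is a Record module computing TNOW minus Arrive Time:

```python
# Each entity memorizes its own arrival time as an attribute;
# cycle time = (time at the Record block) - (Arrive Time attribute).

def on_arrival(tnow):
    # Assign module right after Create: Arrive Time <- TNOW
    return {"arrive_time": tnow}

def on_record(entity, tnow):
    # Record module just before Dispose: cycle time = TNOW - Arrive Time
    return tnow - entity["arrive_time"]

part = on_arrival(tnow=10.5)        # part shows up at time 10.5
print(on_record(part, tnow=25.0))   # cycle time: 14.5
```

The point is that the attribute rides along with the entity, so no matter which inspection path the part takes, the subtraction at the Record block gives its cycle time.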
Let's get logical. This is actually my real second trick: is there another way to model the process time at the Sealer without having to use that Assign block, you know, the Sealer Time attribute, for the A's and B's? Well, there is. It's a little bit of a hassle, but I wanted to show you that it works; it's nice to know, and it's the kind of thing that you can use later on. Note that the entity types Part A and Part B are assigned right away in the Create modules in any case; I'll show you. So instead of assigning Sealer Time as an attribute in the Assign module, I won't do anything until I get to the Sealer process, and there I'll use a logical expression.
Now, a logical expression reads like this: the expression (X == Y) equals 1 if X really, really equals Y. So if I had something like 3 == 3, that logical expression is true, and so it equals one. If I had 3 == 2, then the expression equals zero. That's why I always interpret the double equals sign as asking whether the two sides really, really equal each other. OK, so here's the logical statement I'll use when I get to the Sealer: if the entity type is Part A, use the triangular distribution; if it's Part B, use the Weibull distribution, which is what we want. Why does this work? Well, let's see. If the entity type is Part A, the indicator (Entity.Type == Part A) equals one, and I have one times the triangular, which is correct; meanwhile the indicator (Entity.Type == Part B) equals zero, so the Weibull term doesn't matter. On the other hand, if the entity type is Part B, the first indicator equals zero and the second equals one, so again I get the right answer. Now, it's cumbersome,
but it works, and I'll show you how it goes when we do the demo. So here's what the darn thing looks like. Part A's arrive; I do the Assign of various things; they get prepped; they get sealed; they go through an inspection. If they pass (91% chance), they go through the Record block and leave; they're shipped off, very nice and happy. If they fail the inspection, they get reworked, undergo another inspection, and out they go, for better or for worse. Part B's: same thing. They get created, they get the various assignments that we'll talk about, like the Sealer Time, they get prepped, go to the Sealer, inspection, and out of here one way or the other.
So here we go demo time, take a look and you'll get details on what's going on in the simulation.
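The indicator trick in the Sealer's expression can be imitated in Python. The distributions and parameters below are placeholders, not the model's actual numbers, but the 0/1 logic is the same; note that both distribution terms get sampled every time, which is exactly why the random-number stream gets used in a slightly different order than with the attribute approach:

```python
import random

rng = random.Random(42)

def sealer_time(entity_type):
    # Mimics (Entity.Type == PartA) * TRIA(...) + (Entity.Type == PartB) * WEIB(...).
    # Each comparison acts as a 0/1 indicator, so exactly one term survives.
    tria = rng.triangular(1, 4, 3)        # stand-in Part A sealer time
    weib = rng.weibullvariate(2.5, 1.5)   # stand-in Part B sealer time
    return (entity_type == "PartA") * tria + (entity_type == "PartB") * weib

print(1 <= sealer_time("PartA") <= 4)   # True: only the triangular term counts
print(sealer_time("PartB") > 0)         # True: only the Weibull term counts
```

In Python, as in Arena, a true comparison multiplies as 1 and a false one as 0, so the wrong distribution's draw is simply zeroed out.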
Here's the summary of what we did in this lesson. This time we modeled our first real system, a small manufacturing system, and I tossed in a couple of nice little tricks involving attributes and certain types of logic. Next time we're going to look at fake customers; these give us additional ways to do some cool little tricks, which we'll put in our bag so we can use them later on. It's a very interesting concept, and I'll see you in the next lesson.

Lesson 15.2: Simple Two Channel Manufacturing Example Demo

Hi everybody. We're going to demo the small electronic assembly and manufacturing system that we've been talking about in the lesson. Notice that we have two different types of arrivals. Part A's show up exponentially, every five minutes on average; I think that's what we had in the specifications. The point is, they show up one at a time. Part B's show up every 30 minutes, but four at a time, and you'll see that when I run it. Oh, I also need to show you the entity type, which we define in the Create module: for Part A's it's called Part A, what a surprise, and for Part B's it's called Part B. Now, the critical Assigns that make the whole thing work: remember, I said we have a sealer time, and here I'm assigning an attribute called Sealer Time to the triangular distribution, since these are Part A's (it would be the Weibull distribution for a Part B). I also assign the Arrive Time attribute the value TNOW, which I discussed in the lesson itself. For Part B's it's the same thing, except the Sealer Time is Weibull, which we'll use in the future, and the Arrive Time is still TNOW. Here's the Prep A process: Part A's go here, and we just do a seize-delay-release on Prep A, totally boring. Same thing with Prep B. The Sealer process is the only interesting thing here, and there it is: I do a seize-delay-release on the Sealer, and instead of, say, exponential sealer times, the expression we use is called Sealer Time, which is defined as an attribute back in those Assign blocks.
So when a part gets over here to the Sealer and Arena wants to know the sealer time, it says, oh, you're a Part A, I'll use that sealer time; or, you're a Part B, I'll use this sealer time. Great. Then we undergo the inspection. This is just a two-way-by-chance Decide: it goes with 9% chance to the right and 91% chance down, where the part is shipped off. There's the rework here, more inspections here, and they exit there. Let's see what the thing looks like. I've purposely slowed this down so you can see the pretty little marbles. It's still, yeah, you know, still too fast, really. So what did we learn how to do last time? Go to Run Setup, then Run Speed, and we'll slow it down a little more; amazingly, I'd already slowed it down beforehand, but let's run it again.
OK, you can see there's the red marble, and 1, 2, 3, and the fourth guy shows up: four blue marbles. There's one being served here, three in line, and four more about to show up, 1, 2, 3, 4; there are now seven in line here. See, they're showing up in groups of four. Now there are only six in line, because one just got processed. So you can see this is a very nice, straightforward system. I'm speeding it up now, finally. And if you run it long enough, you'll see various colorful lines forming over here at the Sealer process; it's a very straightforward model.
Now the only other thing I'll do is show you what the full run looks like, so let's hit the go-fast button. Oh, and this is interesting: we did three replications of 24 hours each; that's what the three means here. Let's look at the results. Oh, what a lovely chance to look at Crystal Reports again. Boy, I'm never going to tire of complaining about Crystal Reports; maybe we'll have some music to listen to while this is going on.


And here we go, so you can see we have three replications. This is a Category Overview, so it's showing us the values across all the replications. You can see that Part A's had to wait 0.46 time units on average (I think we're in hours), Part B's a little bit more, and there's the total time in the system; it's quite nice. There's additional information on work in process, so you can see that if you go to the reports, there are lots of interesting things.
OK, I'm going to show one more small demo, and it looks exactly the same as the previous one, except I'm changing my Sealer logic. Right here we have the Assign block that supposedly contains the Sealer Time; in fact, I'm not going to use it. Let's click into the Sealer, and there's that logical expression I told you about. So just believe me that when I run this thing, now look at it go, we get pretty much the same type of behavior in the system. The random numbers, the uniforms that the thing uses to run, are invoked in a slightly different order because I changed things around, but basically you're getting the same results from this run, even though I used the logic instead of the assignments over here. So, very nice. OK, that ends this demo, and I'll see you in the next lesson.

Lesson 16.1: Fake Customers

This lesson has the sort-of-interesting name of Fake Customers, as we continue our little journey through Arena. Here's our overview. In the last lesson I demoed a small electronic assembly and test manufacturing system; very nice, and it pulled together a bunch of concepts. In this lesson I'm going to talk about fake customers, which we use for a number of tricks. The idea is that these aren't real customers at all; their purpose is just to perform some specialized tasks that Arena might need in certain circumstances.
Let me give a little game plan for the next few lessons. We're going to cover material for a while that will lead us to the simulation of a call center, a slightly sexier simulation. We'll talk about fake customers for now, just building up some tools to get to that call center. Then I'm going to look at the Advanced Process template; we just finished with the Basic Process template, and now we're going to become advanced. I'll look at how we can schedule resource failures and maintenance. "Scheduling" sounds funny; by scheduling I mean we model the random times at which failures occur. We'll then look at the Blocks template. Blocks are sort of primitive modules originally found in the old SIMAN language, but they're very useful for a number of different tasks.

You know, we're going to start to build a big vocabulary of these things. I'm not going to expect you to memorize them all; this is just another template that you can look at. Then we'll do some material on sets, which allow us to define things like different groups of servers with unique specialties; we'll talk about those in that lesson. Then I'll describe the call center, and finally we'll demo the call center.
So with that in mind, on we go to fake customers.
You can use these fake customers to accomplish various tasks during a simulation. Like I said, they're not actual customers that you care about in terms of waiting times or use of resources. They can use resources, but only for certain special reasons, and we don't care that they have to wait, because they're not real in some sense. So, demo time will explain it all.
Here's what I'll do in the demos. I'm going to use fake customers to calculate normal probabilities; for
instance, maybe I don't have my book available, I just have Arena around, and I don't have real customers, so I'm
going to use these fake customers to somehow calculate normal probabilities. I'll show you how to do that.
In the call center example, I'm going to keep track of which time period we're in, so we'll use fake
customers to do that task, and we'll show it at some point in the future. And then I can also use fake customers
as breakdown demons, so the fake customers will trigger breakdowns.
Now, in fact, in a lesson coming up there's a direct way to trigger breakdowns, but this is one more thing
that you can use fake customers for.
The summary of what we did in this lesson is that I just introduced this concept of fake customers, and I'm
alerting you to the fact that we're going to be using these several times in upcoming
applications. So when you see the demo, you'll see this.
Next time I'm going to be talking about this Advanced Process template, 'cause we're really good at the
basic stuff now. I'll show you how to get it and some of the good stuff that you can find in it, so see you then.

Lesson 16.2: Fake Customers Demo

In this demo I'm going to calculate some normal probabilities, and the goal here is to use fake customers to
assist me in calculating normal probabilities based on successes and failures of certain
observations. So what I'll do is create a fake customer once per hour, and I'm going to do this, let's
see, 1,000,000 times; I'm going to generate a million fake customers.
I'm going to assign an attribute to each fake customer: a normal observation, a Normal(3,1), and I want to
know the probability that a Normal(3,1) is going to go negative.
That's all I'm going to do in this example. So I take the normal observation that the fake customer is carrying
and check whether the observation is greater than or equal to 0.
If so, it's not negative, so I count it as a non-negative observation and off it goes; I dispose of it. On the other hand,
if it's less than zero according to the Decide block over here, I count it as a negative observation,
and off it goes into this Record block.
You can see that I'm counting it in the counter Negative Observations, so that gets incremented by 1
every time a customer goes through this branch. The other counter is
Non-Negative Observations; that counter gets incremented by 1 every time a non-negative observation goes
through. So I generate a normal observation.

If it's non-negative it goes up here and the one counter gets incremented. If it's negative, it goes down here, and that
counter gets incremented, and then we'll look at the output for this thing. Now, since I'm paranoid about
those awful Crystal Reports things, let's look at the reports setting. I'm going to go back, and there we go:
we're set to the SIMAN report on this one, so that's fantastic.
OK, so here we go. Let's run it.
OK, I had it in batch mode and the run is over; let's take a look. Here's the report, and it turns out that out of those
million observations, 998,591 were non-negative and 1,409 were negative.
So I can divide by a million, and it seems we've got about a 99.86% probability that a Normal(3,1) will be non-negative, and I can tell you that's the right answer.
If I went to the back of my probability book, that would be the right answer. Congratulations; thank you, Arena.
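By the way, what the fake customers are doing here is plain Monte Carlo estimation. Outside of Arena, the same calculation might look like the following sketch (plain Python, not part of the Arena model; the variable names are mine):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

N = 1_000_000        # one fake customer per hour, a million times
non_negative = 0     # the "Non-Negative Observations" counter

for _ in range(N):
    x = random.gauss(3, 1)   # each fake customer carries a Normal(3,1) attribute
    if x >= 0:
        non_negative += 1    # the Decide block's ">= 0" branch

p_hat = non_negative / N     # estimate of P(Normal(3,1) >= 0)
# The exact answer is Phi(3), about 0.99865, so p_hat should land very close.
```

With a million replications, the standard error of the estimate is tiny (around 0.00004), which is why the Arena run matched the table value so well.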
Right, so the next demo I'll do with fake customers is to use them to generate breakdowns. Let me just run this very slowly; whenever you see a truck, that's a fake customer. Oh, I've
got to stop the other run first. Just a sec.
Let's go back here. Stop that run. Always remember to stop it.
So whenever you see a truck, that's a fake customer, and he's going to be a breakdown. There's a breakdown,
right? The breakdown is going to occur right there.
You see how we had a long line form?
So the truck is in service right now; that's why we had the long line before.
So let's do this again a little more slowly, and you can see what happened. I create regular customers every half an
hour, and I create a truck every 10 hours on average.
I assign the regular customer a service time, just a plain old triangular distribution, and I assign the regular customer a priority level of two. Keep that in mind: priority level of two. The truck
gets a breakdown time that's also triangular, and his priority level is 1, so his priority number
is lower.
Just remember that. Then off we go: this Process block does Seize-Delay-Release, and the Barber serves you.
What we need to do, actually, is put the service time in as an attribute. So let's go here, Expression,
and write out Service Time.
I think that's two words; we'll find out in a minute.
Turns out Service Time is one word. Let's go back.
What I've done is set up the service times depending on whether you're a truck or a regular person,
and I'm going to step through this thing slowly. OK, one last thing: I've got to show you the queue
in front of the Barber process.
The queue is ranked by lowest attribute value of Priority Level. So if a guy with priority level 1 comes through,
he beats out the guy with priority level 2, and you'll see a truck go to the front
of the line because he has a higher priority. Actually it's a lower number, but a higher priority.
OK, so let's watch this. I'm going to step through.
First guy comes in. He's getting served. You can see right there, that zero changed to a one. He's now being
served. A human is being served. A truck comes in.

The truck is in line; he's going to get served after that human leaves.
Another human comes through; he's in line behind the truck.
Another human; he's in line too.
Yet another human; that makes sense, because the trucks don't
show up very often. Another truck is about to show up; watch what happens.
OK, a couple of things happened there. The second truck actually showed up right here.
Because the guy in service left, the first truck is now in service, and the second truck
entered and went all the way to the front of the line, 'cause he's a truck; he can do that. So now, unfortunately, we're going to get a big long line developing.
More humans show up, and more line forms, because these trucks take a long time. Now the first truck
just left and the second one is in service, so there's still a big long wait ahead before these guys can start getting
served. Just out of curiosity,
let's see if I can generate another truck. There's another truck, and he goes to the front of the line.
Isn't that awful? So there you go; that's how this system runs. The fake customers are trucks, and they're
essentially generating breakdowns during which the real humans can't get served.
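That "ranked by lowest attribute value" rule can be mimicked with a priority heap. Here's a sketch (plain Python; the entity names are made up) showing a late-arriving truck jumping ahead of the humans:

```python
import heapq
import itertools

arrival_order = itertools.count()  # tie-breaker: equal priorities stay FIFO

def enqueue(q, priority, name):
    # Lower priority number = served earlier, matching Arena's
    # "Lowest Attribute Value" queue-ranking rule on Priority Level.
    heapq.heappush(q, (priority, next(arrival_order), name))

queue = []
enqueue(queue, 2, "human A")   # regular customers have priority level 2
enqueue(queue, 2, "human B")
enqueue(queue, 1, "truck")     # the breakdown demon arrives last...
enqueue(queue, 2, "human C")

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
# ...but is first out of the queue, ahead of all the waiting humans.
```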

Lesson 17.1: Advanced Process Template

What we'll be doing in this lesson is talking about the advanced process template.
Here's the overview. Last lesson I talked about fake customers, and we explained how they can be used for a
variety of purposes, including as breakdown demons. They can be used for other things too,
which we'll talk about in upcoming lessons.
In this lesson, we're going to add to our arsenal, or vocabulary, of modules by introducing the Advanced Process
template. None of this basic stuff anymore; that's UGA, University of Georgia, stuff. Now we're going to do
advanced Georgia Tech stuff: the Advanced Process template.
The idea is that we'll get lots of great new stuff, although I don't expect you to memorize everything.
We'll look at a bunch of important things, and at the least I want you to know where they are, not necessarily what
they all do.
Here's the template. You go to File > Template Panel > Attach, then you have to sniff around for the Rockwell
Software directory where you can find it.
I'll show you when we do the demo. Here's what it looks like. To tell you the truth,
it usually appears on your screen as a big skinny thing, but I expanded it so you can see it all in one
fell swoop. You can see a bunch of modules,
the yellow things, and a bunch of spreadsheets, lots of them. Now, I've said it already and I'll say it again: I don't
expect you to memorize all these.
Maybe you should know where you can find them in a pinch, but we'll only look at a few of them for now. For
the next couple of lessons we might be interested in, say, the Seize, Delay, and Release modules.

We've got Seize, Delay, and Release (you may have heard those words before), and the Expression and Failure
spreadsheets. So there are special spreadsheets for those guys, and you've heard those words before as
well: Expression and Failure.
Seize, Delay, Release: you've heard these things. Why do we have to use separate Seize, Delay, and Release
modules when we have those in the Process module?
The Seize-Delay-Release capability is already in the Process module; why are we messing around with these things now
in kind of second identities? Well, I agree they're in the Process module.
But it turns out that there are often things that are too complicated to do in the Process module alone.
It can't handle certain things that you would want a Seize, a Delay, and a Release to do
separately. And remember, the Basic Process template is for University of Georgia people,
so it's supposed to be really easy. When you want to do more complicated
stuff, it may be the case that you have to do Seize, Delay, and Release separately; possibly together in sequence, but still
as separate modules.
We'll talk about various reasons as we go. In particular, you can handle things like Seize, Assign, Delay, Release:
what if you wanted to put something in between those blocks? Now you can do it. You can also do
non-symmetric multiple Seizes and Releases. Before, when you did Seize-Delay-Release all in one fell swoop in the Process block
and you seized multiple resources, you usually had to release those same resources in the same
block.
Now you can avoid having to do stuff like that, and I'll show an example later. You can also build very complicated
Seizes and Releases that might depend on sets of servers. We'll talk about resource sets later on; you're
going to see those in the upcoming call center example. The kind of thing that I have in mind is just not
possible in a Process block.
Demo time: we're going to talk about some of these Seize and Release issues, and then I'm also going to sneak
in a little bit of wisdom about the Expression spreadsheet.
We use expressions all the time; I'm surprised that they didn't put that spreadsheet in the Basic Process
template. But as long as you know where the Expression spreadsheet is (it's in the Advanced Process template), there you
go, and I'll talk about a couple of things associated with it at demo time.
The summary of what we did in this lesson is that we talked about the Advanced Process template. This is
going to give us more and more modules and spreadsheets to play around with.
In the next lesson in particular, I'm going to use the Failure spreadsheet to talk about resource failures and
maintenance, so do not fail to show up for that. I'll see you then.

Lesson 17.2: Advanced Process Template Demo

Well, now we're going to look at a Seize-Delay-Release, actually just a Seize and Release, example as our demo to
introduce the Advanced Process template.
This is short and sweet, very straightforward, very easy. What I'm going to do here is the following.

I'm going to use separate Seize and Release blocks to do overlapping Seizes and Releases. So what do I mean by that? Well,
let's just take a look here; let's run the thing for a second.
I've set this up, and you can have access to it. So we've got a bunch of customers here, and they're lining up at this
first process getting served.
I'm not sure if it's a barber shop or whatever; I'll look into it in a minute. But look at this: it doesn't seem
that a line ever forms at this second process.
And why is that? Well, here's what I'm doing.
I've got a line forming in front of this first process,
where I do a Seize and a Delay.
Then this Seize is for the second process, and this Release is for the first process.
So a customer can't go through,
and therefore cannot release this first server,
until a slot is ready over in process #2. That's why no line forms over here; all the line gets pushed
back to this first guy.
There's no line right now, but if there were a line, it would be over here. So let's see exactly what I did; this is
very short and sweet, like I said.
Create the customer in the usual way.
Then Seize-Delay; there's no Release here. Oh, and for once it's not a Barber; it's Resource 1. So I just
do Seize and Delay.
This is a Seize block that I got from the Advanced Process template; I just dragged it over. It comes with
its own queue, but since no line ever forms there, you don't see one appear here or here. Then in this Release I release
Resource 1.
And now in process two (I've already seized it), at this point I do Delay and Release, and all the customers go off
happy. So I'll show it to you one more time.
There you go; see, all the queue is relegated to this area. Now, one thing I promised to do is remind you
how you get the Advanced Process template.
It may be easier for you than for me, but I always go to File > Template Panel > Attach, and then you start
looking around. Oh, and since the last time I looked around
and eventually got to the template panel, it remembered that, and here I am. So look at all these things I
could get.
I'm just going to get Advanced Process, which I clicked on; that's why I have the panel sitting to my left.
I could also have gotten Advanced Transfer, Basic Process (which we already have), Blocks (which you'll see a
couple of lessons from now), et cetera, et cetera. Very, very nice.

Lesson 18.1: Resource Failures and Maintenance

In this lesson, we'll be looking at how to generate resource failures, either in order to conduct maintenance or just as
failures in and of themselves. Arena is very good at this, and the task is very simple to do.
Here's what we'll do.
Last lesson we learned about the Advanced process template and all the cool stuff that's there. A treasure
chest of stuff.

In this lesson I'm going to model breakdowns and maintenance in a way that's much more direct than
the fake-customer trick we used in the last demo.
The idea here is that we'll just be looking at a certain type of failure schedule.
You can cause resource failures by scheduling breakdown demons like we did before, but it just ain't elegant.
It's perfectly OK, though, and if you're on a desert island, that's what you can do.
A better way is to use the Resource and Failure spreadsheets in conjunction with each other. It's a very simple way to
do this; it takes less than a minute. Here's the Resource
spreadsheet on the bottom of the screen. You'll see in the last column there's a Failures tab. I've clicked on
one row, which is going to correspond to a failure that we'll call Drill Failure. We'll give the recipe for all this in a
minute; the failure name is going to be Drill Failure.
Then if I go over to the Failure spreadsheet, which is in the Advanced Process template, I'm going to
see Drill Failure all there, waiting for me to use it.
And it's this thing called a Count failure,
with a count of 10. I'll just give it away: what that means is that we're going to have a failure after 10 people
have used the drill, so this could be like maintenance.
You purposely take the drill down after 10 customers, then you work on it for an Exponential(30)
downtime. But let's look at the recipe here.
Go to the Resource spreadsheet in the Basic Process template, click on the Failures column, and add a failure
name, Drill Failure for instance.
Choose the failure rule, which we'll talk about in a minute; right now the failure rule says
Ignore, so let's ignore that for a second. Then go to the Failure spreadsheet,
where you will see the failure name you just chose waiting for you, all happy and ready to go;
it's in the Advanced Process template. Choose the type of failure. It could be a Count failure (remember,
after 10 arrivals, 10 uses of the machine, we have a failure), or it could be a
Time failure, where after a random amount of time you have a failure. Both are perfectly reasonable. Then
choose the downtime for repair; it could be any expression, and we had an exponential on the last page.
A couple of remarks. You can schedule multiple failures by clicking on multiple rows of the Failures column in the
Resource spreadsheet.
So we just had Drill Failure, but we could also click on more rows and have a Type 1 failure, a Type 2 failure,
scheduled maintenance; we can make all sorts of failures by clicking on more and more rows,
and they get scheduled separately.
Now, remember I told you to ignore the Ignore? Well, these are different types of failure rules. Ignore means
it's not a killer failure, in the sense that the failure happens but you get to complete service of the current
customer, if there's one in the system. So if you're working on somebody and you fail, it's not a terrible penalty:
you get to keep working on the guy, but then you reduce the repair time. In other words,
if my repair time is supposed to be an hour,
and the failure occurs while my customer still needs 10 more minutes, I complete working on him (I work on
him for 10 more minutes) and the repair time goes down to 50 minutes,
so the repair still finishes at the 60-minute, or one-hour, mark. So Ignore means I reduce my repair time. On the other
hand, Wait means I still complete
the service of the current customer, but I delay the full repair. So again, suppose the repair time is an hour and my
current customer needs 10 minutes.

Then the repair finishes at the 10 + 60 = 70-minute mark; I've waited to finish the repair. The worst one
is Preempt:
when the failure occurs, I immediately stop service of the current customer, but I complete it after the
repair is done. So I've got a guy in service who needs 10 more minutes, but the failure,
the breakdown, occurs; my current customer stops getting served, I complete the repair after 60 minutes,
and then I remember I've got 10 more minutes left to finish that customer, so he's out of there after 70
minutes. All of these rules are kind of nice and, obviously, very easy to use.
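To pin down the arithmetic of the three rules, here's a little helper function (my own sketch, not anything built into Arena), using the lecture's numbers of 10 minutes of remaining service and a 60-minute repair:

```python
def failure_timeline(s, r, rule):
    """Return (repair_done, customer_done), in minutes after the failure.

    s = remaining service time of the customer in service
    r = nominal repair time
    """
    if rule == "ignore":
        # Finish the customer; the repair time is reduced by the overlap,
        # so the repair still ends r minutes after the failure.
        return r, s
    if rule == "wait":
        # Finish the customer first, then do the full repair.
        return s + r, s
    if rule == "preempt":
        # Stop service immediately, do the repair, then finish the customer.
        return r, r + s
    raise ValueError(rule)

# Lecture numbers: s = 10, r = 60.
#   Ignore  -> repair done at 60, customer done at 10
#   Wait    -> repair done at 70, customer done at 10
#   Preempt -> repair done at 60, customer done at 70
```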
Here's our summary.
How ironic: we have literally succeeded in learning how to implement failures, and it's crazy.
And speaking of crazy, next time I'm going to give a quick intro to the so-called Blocks template. It gives us
an unbelievable, crazy number of things to look at, but I'll try to be real general about it, because we only use a few
of them for now. So I'm really looking forward to talking about blocks.
See you later.

Lesson 18.2: Resource Failures and Maintenance Demo

In this demo I'm just going to show you how to use failures, and we'll take a look and see that they
actually work.
Failures, working; it's kind of funny. So this is the old example out of the Arena book, Model 3-1, and what I've
done is add in some failures, so this drilling center is going to have a failure every once in a while. So
let's take a look at the resource.
This resource is called Drill Press. It has a fixed capacity of 1, and I have a failure schedule. So what I did is
click in here:
I've got one row's worth of failure. It's called Drill Failure (oh, the surprise; I named it that), and the failure rule is
Ignore, which doesn't matter for now. So, as I said to do, we click in there.
Then we go to Advanced Process and look at the Failure spreadsheet. There it is, right there.
We see that I have Drill Failure, and I've defined it as a Count-type failure: after 10 customers come in,
we fail, and we have a downtime of Expo(30). That's nice;
with Expo(30), the system will fail for about 30 minutes on average and then come back on. We get 10 more customers,
then another failure. One thing that's interesting: clicking into this graphic,
we'll see that
something is going to happen. It's going to disappear when a failure occurs.
Oops, let me get out of here. Let's click OK. Alright so here we go.
I think what I'll do is I'm going to step through so you can see one customer.
2.
3.
4.
I think we're on our fifth now 6th.
7th
8th

9th... I think the 10th is coming up. There it is. Oh, and look what happened: gone. So we're experiencing a
failure.
Now let's run it a little bit longer and we'll see what happens. Yeah, failure, failure; the queue is getting
bigger. All this is terrible. Now he's back, and the queue is draining down a little bit.
It fails again. So obviously this was a very quick simulation, just so you can see that the
failure really does occur and that it negatively impacts the queue. Easy demo; we'll see you in the next lesson.
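If you want to see the Count-failure bookkeeping outside of Arena, here's a rough sketch (plain Python, not Arena's internals; the 35-customer horizon is arbitrary):

```python
import random

random.seed(1)

FAIL_COUNT = 10          # fail after this many completed uses, as in the demo
uses_since_repair = 0
downtimes = []

for customer in range(35):           # push 35 customers through the drill press
    uses_since_repair += 1
    if uses_since_repair == FAIL_COUNT:
        # The drill press goes down for an Expo(30)-distributed repair.
        downtimes.append(random.expovariate(1 / 30))
        uses_since_repair = 0

# 35 customers -> failures triggered after customers 10, 20, and 30.
n_failures = len(downtimes)
```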

Lesson 19.1: The Blocks Template

In this lesson, we'll talk about the Blocks template, which is yet another template.
Here's our overview. In the last lesson we talked about the concept of resource failures and did a nice, easy little
demo.
In this lesson, my objective is to add to our ever-growing bag of tricks by introducing the Blocks template.
There are tons and tons of these blocks, as you'll see in a little while; I don't want you to be
intimidated. We'll use a few as examples.
Here is what the template looks like. Whoa, there are lots of blocks. You can get them from File > Template
Panel > Attach; sniff around and find it.
It's a huge number, but don't be scared; I'm not going to have you memorize all these things. That's not possible. Just
look at the names and kind of put them in the back of your mind in case you need them
sometime. They're
sort of a self-contained language related to the old version of Arena called SIMAN, and they're very
specialized and low-level.
You use them occasionally and they have the ability to be powerful, but we won't be concentrating on them too
much; we'll use them here and there.
In specialized situations they're very nice, but we're only going to use a few for right now: Seize, Delay,
Release, Queue, and Alter.
Just those, 1-2-3-4-5, for now.
And you may have seen some of these words before.
Well, we're seeing Seize, Delay, and Release yet again, and why is that? Why do we constantly see
these things all over the place?
Well, the main reason, at least in my opinion, is that Arena has been built in layers over the years. When they make
an improvement, they add a new template, and sometimes they like stuff
so much that it gets repeated. So Seize, Delay, and Release have found their way all the way from the original SIMAN version into Arena, and they appear in several
places. On the other hand, for whatever reason, some of these primitive blocks are picky about their company. Take the Queue block, which
we're going to see in our first demo example: you can't connect a Queue block to a Seize module
from the Advanced Process template, or to a Process module from the Basic Process template. If you want to use
the Queue block, for whatever reason, you're stuck; you have to use one of these sort-of-primitive Seize blocks.
So for that reason alone, it's good to learn these things. Anyway, let's do demo time. I'm going to show you a
couple of things.

We'll do a Queue-Seize-Delay-Release example using only primitive blocks. The reason I'm showing you this is
that we're going to see it again in the call center example coming up, so I'm going to blow through it fairly
quickly; I'm not going to go into
tons of detail, just show you what it looks like. Then I'll look at a cute little example using an
Alter block to change the number of servers that are available at a resource.
As the queue gets bigger and bigger, we might want to bring in some more people to help us,
and we use the Alter block for that. So go take a look at the demo; it's a nice one.
As for today's summary: this time we looked at the Blocks template. Lots of stuff, and later on we're going to
find some uses for these guys.
Next time we'll look into what are called sets. There are resource sets and other types of sets, and I want to show
you how sets make modeling a lot easier in certain circumstances. They're really quite intuitive and
easy to implement, so we'll see you next time.

Lesson 19.2: The Blocks Template Demo

In this demo I'm going to show you the use of certain primitive blocks, in particular the Queue block, the Seize
block, and the Delay block.
So let me show you here: Blocks. I previously loaded that template, and the blocks are in
alphabetical order, so if I want a Delay block, bang, there it is. I can just drag it over. I don't want him
right now, though, so let me
delete it.
We're going to be seeing pretty much the same example again when we do the call center example later
on, so I'm not going to go into huge detail here, other than to say I create some arrivals and try to
seize this resource. The resource is called Trunk Line; that's just the number
of telephone lines that I have.
If I remember correctly (let's go to Basic Process, Resource), I have a Trunk Line with a capacity of three,
so I have 3 trunk lines;
in other words, three phone lines are available. I'm willing to put up with a queue of size 4, and this is where the
queue is defined: Trunk Line.Queue.
So if I go over here, this is a first-in-first-out queue, and the capacity of this queue is 4.
If I use a Queue block (and this is the main reason to use a Queue block), then when I exceed the capacity of the queue, it
automatically kills off the excess customers by sending them to this Dispose.
That's what's nice about the Queue block, and that's why people use it: if the Queue block is over capacity, out you
go.
Now, the only way I can use a Seize with this particular Queue block is to use the primitive Seize block.
That's why I'm not using the yellow one from the Advanced Process template or the Process-module version of Seize; I'm stuck
with this primitive Seize.
And then I'm stuck with a primitive Delay too, so I just want to show you what's going on here.
This is not a big deal, because we'll be seeing it again later on. These numbers here represent the number of
people in the queue and the number of servers being used. One thing that's cute here is that I have a 3-way
by chance Decide.

And so the customers will go to the first choice with probability 76%, just like the number of trombones
in the big parade; to the second choice with probability 16%; and to the third
choice with the remaining probability, 8%. So let's watch; I'm going to step through this.
Here we go.
Click: the first customer shows up. There's nobody in the queue right now, so he goes through; nobody was
seized at this point, and one server is now being used.
Second guy shows up: two servers are being used.
Third guy shows up: three servers are being used.
Fourth guy shows up: one person is in the line. See how that works? We also had somebody leave, and so that
fourth guy is now being served. You can see how this works; it's very nice. Let's run this sort of slowly.
And you will see how the queue builds up and down. Ah, there we go: see, we had a queue of four, so a
guy had to leave.
We'll see that again; when the queue gets to be 4, customers have to leave. Isn't that nice? A very easy
example, and you can see how the blocks work so nicely. Like I said, we'll be going into much more detail on
that later on.
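That 3-way-by-chance split is just a weighted random draw. In rough Python terms (a sketch; the branch labels are mine):

```python
import random

random.seed(0)

branches = ["first choice", "second choice", "third choice"]
weights = [76, 16, 8]    # the demo's branch percentages

# Draw many customers and route each one according to the weights.
draws = random.choices(branches, weights=weights, k=100_000)
share_first = draws.count("first choice") / len(draws)
# share_first should come out close to 0.76, and likewise for the others.
```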
Now let me do a very cute little example that I worked up, using an Alter block.
Where is he? They're in alphabetical order... there. I drag this guy over: Alter.
What the Alter block does: he's responsible for adding servers to a resource, or subtracting them. So let's
look at this. When it says
Barber, 1, what that's doing is, if you pass a customer (or a fake customer) through here, the number of
servers in the Barber resource
increases by one. If I wanted to, I could put a 2 here, or a negative one; I could change the number of
servers by anything I want. Alter is used to change the number of servers at a resource.
Sometimes I say number of resources, but I really mean the number of servers at a resource, the number of parallel
servers. So let's watch. Here's the
Create block.
This Decide block is just saying, if the number of people in the queue is greater than or equal to three, add
a server; and we can keep track of the number of people in the queue and the number of servers being used.
Let me actually just see how many servers this Barber currently has;
I'm going to look at the Resource spreadsheet.
The Barber has a capacity of 1. There's a Trunk Line which we're not using anymore, so you can ignore that;
in fact, I'll just delete it. Out you go.
So we're keeping track of the number of barbers we have in use and the number of people in line, and
we'll step through this.
OK, so the first guy shows up.
He's the only one, and he's not waiting for anybody, so
there he goes.
Not the fastest guy ever.
So one server is being used, and the next guy shows up.
We still don't have a queue bigger than three; I've got to speed these guys up.
Uh oh, there's only one server, so the queue size is 1 now. Next guy shows up.

Queue size is going to bump up to two.
Next guy shows up. Let's see what happens to the queue size.
Huh, that's interesting. Now it's three (I needed to click).
Another guy shows up, and now something is going to happen here. Ah, the queue is greater than or equal to 3, so we go
through the Alter block, and now we have two servers.
See what happened: we have two servers because we went through the Alter block, and the queue went down
to two.
Then a guy showed up, so it popped back up to three; let's see if we get another server here. The queue is 3, so we're
going to add another server.
Look at that: now we have 3 servers, and you can see this is quite nice. We're going to keep on adding
servers until everybody is happy. And there's another one, see? We can run the simulation for a while,
and you'll see that occasionally the number of servers will go up. By the way, NR means
the number of servers in use.
We'll see that it won't go above 4 until we really need it;
let's see if it ever does.
Well, it looks like 4 is probably what we needed, so we're not getting those gigantic lines. So yeah, 4 is
probably a very good number for this system.
OK, this is a good place to end the demo; I don't want to have you waiting around to see if the queue size will
go up. We'll see you in the next lesson.
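The Decide-plus-Alter logic (add a server whenever the queue reaches three) can be caricatured like this; it's a toy with no departures, so it just keeps adding barbers, and it is not Arena's actual event scheduling:

```python
servers = 1        # the Barber resource starts with one server
queue_len = 0

def customer_arrives():
    global servers, queue_len
    queue_len += 1
    if queue_len >= 3:
        servers += 1       # the Alter block: Barber, +1
        queue_len -= 1     # the new server immediately takes a waiting customer

for _ in range(6):
    customer_arrives()

# After 6 arrivals with nobody finishing service, the rule has fired
# several times, keeping the visible queue below 3.
```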

Lesson 20.1: The Joys Of Sets

Hi class, well we have an interesting lesson. This time we're going to be talking about two of my favorite things,
set theory and joy, so we'll see what's in store for us.
In the overview: in the last lesson we talked about the so-called Blocks template, where we came across all
these millions and millions of new blocks that we can use in our simulations. They'll occasionally help us
with specialized little issues, and we'll see a couple of examples of those coming up.
And then in this lesson, we're going to learn what sets are and how they enhance our modeling ability. You'll
see that there are numerous ways to use them.
So we'll especially concentrate on what are called resource sets. Those are the most important things for us
right now.
Sets
As in the baby math you learned in 4th grade, a set is just a group of elements, and elements are allowed to
belong to more than one set.
Elements can be anything in Arena. So there are various types of sets. You can have a resource set,
which would be a set of servers.
You could have a counter set; remember, in a previous demo I showed how to do counts of certain things as they pass
through the system.

We could have a set of these counters. We could have a set of what are called tallies. We could have a set of entity types, of entity pictures, and even more difficult things, which we'll talk about later on.
Now resource sets in particular are very easy to construct. What we'll do is we'll use the set spreadsheet in the
basic process template to define sets, so the set spreadsheet is sitting there in the basic process template.
And like I said, we'll just study resource sets for now. Let's review what a resource is. A plain vanilla resource has identical, interchangeable servers, so if you have a resource that has five barbers, they're all the same.
We don't care which is which; we just want to use one of them. They're all the same. A resource set, though, can have distinct servers:
Joe, Tom, Bill — and they can have different schedules, different service speeds, different specialties. So in fact a resource set is much more general than a plain vanilla resource that itself might have five identical servers. A resource set can have five distinct servers, all with different properties.
In our call center example, we'll have three products with the following resources. In the call center, you go and you have a question about one of the products,
and you have to talk to one of the people associated with the three products. So let's see who can do what.
If you have a question about product one, you're going to need to talk to Charity, Noah, Molly, Anna, or Sammy — one of those five. For product two, you'll need to talk to Tierney, Sean, Emma, Anna, or Sammy. Those last two kids appear just as they did in product one, so you're allowed to have people appearing in multiple sets. And in the product three set, you'll need to talk to Shelley, Jenny, Christie, Molly, Anna, or Sammy. The reason I call them kids, by the way, is that these are actually the kids of the people who wrote the Arena book. They're no longer kids, but you can see that some of these kids have cross-functionality, and you'll notice that I've listed them at the end of those lists, because it turns out you can order sets.
Unlike what you learn in math class, sets can be ordered, and what I'm doing is saving the most talented guys for last, because you don't want to waste them if you don't need to. All eleven of these servers, by the way, have different schedules too. So not only do they have different specialties, they have different schedules that they adhere to.
Now, continuing with resource sets, let's say I want to define a resource set called Product One. What I'll do, when I get to the spreadsheet, is choose Type equal to Resource — because there are other types of sets, and the type of this set is Resource. I click in Members, and then I enter the product one servers under Resource Name. So here's what it looks like.
This is what the set spreadsheet looks like, and you can see we have numerous types of sets: Product One is a resource set, Product Two is also a resource set, and we have a tally set — don't worry about those for now. Let's just look at these resource sets. I chose Type equal to Resource,
I click in Members, and then I just start clicking in the members of that set: Charity, Noah, Molly, Anna, and Sammy.

Just like I listed on the previous page. I click into the number of rows there, and I just start typing away. Now, Product One's preferred order, all things considered:
I want to use Charity first.
I want to wait to use Anna and Sammy because of their cross-functionality; I could use those guys for other purposes later on.
So Charity goes first. If she's not available, I try Noah, blah blah blah, and I've saved Sammy for last because he's cross-functional. It'll turn out, by the way, that other orders are possible. We'll talk about that in a minute.
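To make the preferred-order idea concrete, here's a minimal plain-Python sketch (not Arena code; the function and variable names are just my own invention, using the names from the example) of seizing the first idle server in an ordered set:

```python
# Plain-Python sketch (not Arena) of preferred-order selection from a resource set.
# Servers earlier in the list are tried first; cross-functional servers go last.

def seize_preferred(servers, busy):
    """Return the first idle server in preferred order, or None if all are busy."""
    for s in servers:
        if s not in busy:
            busy.add(s)      # mark the server as seized
            return s
    return None              # every server is busy; the caller must wait

product_one = ["Charity", "Noah", "Molly", "Anna", "Sammy"]
busy = {"Charity", "Noah"}   # suppose the first two are already seized
print(seize_preferred(product_one, busy))   # -> Molly
```

Because the list is scanned front to back, Anna and Sammy are only touched when everyone ahead of them is busy — exactly the "save the cross-functional kids for last" policy.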
Now, when I do a Seize-Delay-Release of people associated with a resource set — say I want to seize one of these product one workers — I've got to be very, very, very careful.
I have to be careful because there's a little warning sign here: the problem is that you have to make sure that you release the same exact person that you originally seized.
You just can't seize and release arbitrarily, because if you release a random server, some other customer may say, hey, where did my server go?
So you've got to be very careful about the person that you release. If you seize Sammy, you've got to release Sammy. So here's what it looks like: Seize-Delay-Release, using the Advanced Process template's Seize, Delay, and Release.
Let's suppose we seize a server from the set Product One, in this preferred order from top to bottom. We've got to make sure we remember who it was, and then when we're done, we've got to release that same guy. Here's how we do it. Let's go to the Seize block. I click on the Seize, and I've got what looks like this complicated bunch of stuff here. So let's click on what amounts to Add or Edit, and you'll see that the type of seize we've done is that we've seized a set; the set is called Product One.
Then we seize a quantity of 1 in preferred order, and we save the name of that guy in an attribute called Tech Agent Index.
I'm going to go over this again, 'cause it's complicated. The Release looks like the same kind of mess up here, but I'm going to do a release from the set called Product One:
I'm going to release one guy — specifically, a specific member, namely whoever it is that's stored in Tech Agent Index.
So: seize one guy from the set called Product One in the preferred order, and whoever that is, I'm going to store in an attribute called Tech Agent Index. So if I seize Sammy, Tech Agent Index is going to remember that I, as a customer, have seized Sammy; that name is stored in the attribute. Now, I realize that attributes only store numbers, but let me just assure you that the name Sammy is associated with a number as well, which you don't need to worry about.
OK — then when I do the Release sometime later on, I'm no longer releasing in preferred order; I'm releasing Sammy, the specific member who's stored in Tech Agent Index.
OK, so once I've seized him, he now becomes a specific member that I need to release.
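The seize/release discipline can be sketched in plain Python (again, not Arena — the class and method names here are hypothetical): each entity carries the index of the server it seized, like the Tech Agent Index attribute, and releases exactly that server.

```python
# Sketch of the "release the same guy you seized" rule from the lesson.

class ResourceSet:
    def __init__(self, names):
        self.names = names            # members in preferred order
        self.busy = set()

    def seize(self):
        """Seize the first idle member; return its index (the entity's attribute)."""
        for i, name in enumerate(self.names):
            if name not in self.busy:
                self.busy.add(name)
                return i
        return None                   # all members busy

    def release(self, index):
        """Release the specific member seized earlier -- never a random one."""
        self.busy.discard(self.names[index])

product_one = ResourceSet(["Charity", "Noah", "Molly", "Anna", "Sammy"])
my_index = product_one.seize()        # seizes Charity (index 0)
# ... the delay (service time) would happen here ...
product_one.release(my_index)         # release exactly the server we seized
print(my_index, product_one.busy)     # -> 0 set()
```

If `release` instead discarded an arbitrary busy member, some other customer's server could vanish mid-service — which is exactly the bug the warning sign is about.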
So, some remarks. Various seize selection rules are possible, not just preferred order. I could seize cyclically, and this allows me to kind of run through the servers and not tire any of them out.
I could seize randomly. I could seize in some preferred order. I could seize a specific member. Or there's largest remaining capacity: suppose I have a set containing, say, a Barber resource and a Manicurist resource. With largest remaining capacity, I seize from whatever resource has the largest number of servers remaining, so if there were five barbers remaining and only three manicurists, I'd seize one of those five barbers. So this is really general. I could also seize the resource with the smallest number busy.
There's all sorts of stuff I can do, and this stuff is going to be very useful when we do our call center example later on.
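Here's a small hypothetical sketch (plain Python, not Arena) of two of those selection rules — largest remaining capacity and random — over resources tracked by their idle-server counts:

```python
import random

# Sketch of two seize selection rules over resources with idle-server counts.
remaining = {"Barber": 5, "Manicurist": 3}

def largest_remaining(rem):
    """Pick the resource with the most idle servers."""
    return max(rem, key=rem.get)

def random_member(rem):
    """Pick uniformly among resources that still have an idle server."""
    return random.choice([r for r, n in rem.items() if n > 0])

print(largest_remaining(remaining))                     # -> Barber
print(random_member({"Barber": 0, "Manicurist": 3}))    # -> Manicurist
```

Cyclic and smallest-number-busy rules would just swap in a different key function or a rotating starting point.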
So now it's demo time: I'm going to do a simple model with four servers, and you'll see it's quite easy.
Here's a summary of what we did.
How joyful! We introduced sets, and especially resource sets, and then I showed how to use these guys with Seize-Delay-Release.
Next time I'm going to describe, in plain English, the call center example — it's all the rage these days — so we'll see you soon. It's going to be a fun next lesson.

Lesson 20.2: The Joys Of Sets Demo

In this demo I'm going to show how you use sets, with a small example involving four different servers. So let's just watch this for a second.
It's a simple Seize-Delay-Release, and it looks like I've got these four barbers; I'm going to have the four barbers be used in preferred order A, B, C, D.
So A goes first, then B, then C, then D, if possible. So let's just watch. They're all being used here, and let's see: you'll see that I'm going to attempt to fill in the leftmost zero first if there's a tie. Right now there are no ties — there's one zero, and he gets filled in immediately.
See, when we had two zeros, the one on the left gets filled in first. Just watch carefully: when the queue builds down, we'll see some zeros.
Eventually the queue will build down, we hope. Come on, let's go.
Zero — it takes a little longer than we thought. So see, lots of zeros: left, left, left, left. See how that works? So the seized resources get handled in the preferred order. All right, well, let's look at some details here.
First of all, I'm going to show you how I set up the set. It's called Barbers, and Barbers is a set with four rows. Let's click and see the Members:
Barber A, B, C, D, as I promised — not very imaginative names, so what can I say?
And now let's look at the language that I have to use in the Seize and Release. Remember, they were awful looking.
So here's the Seize, and what I did is I added this line. In plain English it says: seize one guy from the set called Barbers.
I want to seize that guy in preferred order A, B, C, D, and I'm going to store whoever I seized in Barber Index.
And let's suppose I, as customer #17, seize Barber B: when customer 17 is done, he will release Barber B. OK. Then we go through the delay —
just a standard boring delay — and then I release that specific guy, click click. So I've added this Release: from the set called Barbers,
I release one guy, the specific member who's stored in Barber Index. If you want, I can look at this more carefully by clicking Edit,
and you can see all the details this way. If I wanted to, I could do other releases — you don't want to do this necessarily, because you'll release random guys — but in this case I used a specific member. So very nice, very easy. Now let's just see this one more time.
This is just keeping track of which server — A, B, C, or D — is being used.


And again, see, it goes from left to right: we fill in the leftmost zeros first, 'cause we want to use A, B, C, D in that order. Very, very simple. You can go play around with this model at your leisure.

Lesson 21: Description of Call Center Example

Now it's finally time to do this call center example that I've been hawking the last few lessons. Let's just jump right into it.
The lesson overview is as follows. In the last lesson I mainly discussed how to use resource sets along with the Seize-Delay-Release capability — and remember, we have to be a little careful there. The reason I did that is because that's a key component of the call center example you'll see in a little while.
In this lesson I'm going to just describe, in plain English, what the call center example is. We've been waiting for this all along.
And when we're done with this, you can actually start being a consultant if you want, because I think this problem is general enough that you can do a wide class of practical problems. And I really, really mean that.
So let's start out with a call center description in plain English. First of all, the program is arranged in submodels.
A submodel is just sort of a subroutine that undertakes a specific task in the program that would otherwise take a lot of space, and you'll see that submodels save space on the screen. This is what one looks like: you click inside of it and you see all sorts of good Arena stuff. It just saves a lot of space on the screen.
In the various submodels, here's what you'll do. You'll use one submodel to create and direct arrivals: how often do calls show up, and where do they go? Do they go to tech support or sales or whatever? Then we have a tech support submodel where we handle all the tech support calls —
in particular, what kind of tech support do you need? It turns out there are going to be 3 different types of products in this model. It also turns out that some of the tech support calls require return calls, so that's another submodel; sometimes the guy's got to call you back 'cause he doesn't know the answer. We also have some sales calls, order status calls (you know, "is my order ready to go?"), and a time period counter:
which half-hour period of the day is it? We'll be using fake customers to increment the time period counter. We've seen that trick before.

Let's look at arrivals. Calls show up according to a nonhomogeneous Poisson process, so we're going to have to schedule arrivals.
The call center accepts calls from 8:00 AM till 6:00 PM, and the arrival rates change every half hour. Here's a table below that I sort of plagiarized from the book.
You can look at that table and you'll see that the call frequencies change every half hour. I'll also show you in the demo how to use the Arrival schedule in the Basic Process template to type these numbers in.
During the first half-hour period we'll have a rate of 20 calls per hour. In the second half-hour period, 8:30 to 9:00, the rate goes up to 35 calls per hour. By the way, did you notice that I said calls per hour? Little warning sign there: these rates are per hour, even though the time intervals are 30 minutes.
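One standard way to generate arrivals like these outside of Arena is the thinning method: run a stationary Poisson process at the maximum rate and accept each candidate arrival with probability rate(t) divided by the maximum rate. Here's a sketch — note that only the first two rates (20 and 35) come from the lesson; the rest are made-up placeholders, not the book's table.

```python
import random

# Thinning sketch for a nonhomogeneous Poisson process (not how Arena does it
# internally -- Arena uses the Arrival schedule -- but the same underlying model).
# rates[k] = arrival rate in calls per HOUR during half-hour period k.
rates = [20, 35, 45, 50, 60, 70, 75, 75]   # only 20 and 35 are from the lesson
lam_max = max(rates)

def rate(t):
    """Arrival rate at time t (hours since opening)."""
    return rates[min(int(t * 2), len(rates) - 1)]

def nhpp_arrivals(horizon):
    """Return sorted arrival times on [0, horizon) via thinning."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(lam_max)          # candidate from rate-lam_max process
        if t >= horizon:
            return times
        if random.random() <= rate(t) / lam_max:  # accept w.p. rate(t)/lam_max
            times.append(t)

arrivals = nhpp_arrivals(4.0)
print(len(arrivals))   # random; averages about 215 over these 4 hypothetical hours
```

The per-hour-rate-over-half-hour-period convention shows up in `rate(t)`: multiplying `t` by 2 converts hours into half-hour period indices while the rates themselves stay in per-hour units.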
It's just a little trick that you have to remember — that's Arena for you. A few staff actually stay at work until 7:00 PM, in order to let those calls that came in near six o'clock filter out.
I do that all the time: I call just before the call center closes, which is stupid. But you know, they just don't hang up on you at 6 o'clock; they have to patiently get you through.
What you have to do, though, given that there are no calls originating between 6:00 and 7:00, is explicitly model 0 arrivals for those last half-hour segments of the arrival schedule.
Now, the phone lines. It turns out we have 26 of them. If you get a busy signal, you balk and get out of the system.
So busy signals kill you off immediately; 26 is the limit. Now, what we'll do is use a Queue block — we actually did a demo on precisely this. We're going to use a Queue block with capacity 0 to try to seize a line.
What does that do? If the seize of one of these 26 lines fails — in other words, if all 26 lines are being used — there ain't no place to go in the hotel and you check out, so the queue will immediately kick you out, 'cause it's got a capacity of 0. This is a nice little trick. The Queue and the Seize both come from the Blocks template; we did a demo on that, and this is the only place where you can get a Queue block, and it only connects to this kind of Seize. We explicitly did this in an old demo, right? So recall that if you use a Seize within a Process module in the Basic Process template, or the other type of Seize in the Advanced Process template, you can't connect it up with a Queue block. Too bad — that trick (the Queue block) only works in this case with the Seize block, not a Seize module.
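The capacity-0 queue is just a balk: a caller who can't seize a trunk line immediately is disposed. A tiny plain-Python sketch of that logic (assuming 26 lines, as in the example, and — for simplicity — no line released yet):

```python
# Sketch of the capacity-0 queue in front of the trunk-line Seize: if all 26
# lines are busy, the caller balks (busy signal) instead of waiting.

TRUNK_LINES = 26
in_use, served, balked = 0, 0, 0

def call_arrives():
    global in_use, served, balked
    if in_use < TRUNK_LINES:
        in_use += 1        # seize a trunk line
        served += 1
    else:
        balked += 1        # queue capacity is 0, so the caller is kicked out

for _ in range(30):        # 30 calls arrive before any line is released
    call_arrives()
print(served, balked)      # -> 26 4
```

In the real model, completed calls release their line, so `in_use` goes back down and later callers can get through again.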
There are three general types of calls, and they come in randomly; we'll handle them with an N-way by chance Decide module.
Each customer hears a recording — you know, they hear a little music: what type of call do you want to make? And then he makes his choice of call. 76% (like the number of trombones in the big parade) go to tech support.
16% go to sales. It turns out there are 7 identical, faceless, vanilla sales guys that we'll need to schedule as a group of seven, not on an individual basis, and the sales calls take a triangular amount of time — unimportant.
And then 8% go to order status: is my order ready? Basically what you do all the time when you call up these places.
Most callers do self-service in a triangular amount of time, but some callers — about 15% of those going to order status — need a real sales guy for a triangular amount of time.
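The N-way by chance routing is just a cut of the unit interval at the cumulative probabilities. A quick sketch (plain Python, not Arena):

```python
import random

# Sketch of the N-way by chance Decide: 76% tech support, 16% sales, 8% order status.
def route_call(u=None):
    u = random.random() if u is None else u
    if u < 0.76:
        return "tech support"
    elif u < 0.76 + 0.16:    # i.e., u < 0.92
        return "sales"
    return "order status"

print(route_call(0.50))   # -> tech support
print(route_call(0.90))   # -> sales
print(route_call(0.95))   # -> order status
```

The 15% of order-status callers who later need a real sales guy would be a second, smaller two-way chance Decide downstream.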
Now let's look at tech support. This is the best part of the program. There are three different types of tech
support calls.

Again, we'll handle these via an N-way by chance Decide module, so these are random Decides sending calls to three different types of tech support.
25% are for product 1, 34% for product 2, and 41% for product 3.
Each customer gets another little piece of music while they make their choice, and all tech support calls require staff for a triangular(3, 6, 18) duration of time. We could make each type of call require a different amount of time, but this is already complicated enough.
There are some callbacks: 4% of tech support calls require additional investigation, so the tech support guy you talked to may not be able to handle your request right away, and this additional investigation is actually carried out by another group of staff that we are not going to worry about. So what happens is, you talk to your server — you know, Sammy or Tierney or whoever — and they can't answer your question, so they ask their boss. Once they ask their boss, Tierney or Sammy or whoever you talked to is freed up to talk to other people. They hang up on you; for a while the boss does some work, and then the boss tells the answer to Sammy or Tierney or whoever you talked to, and then they call you back. The investigation by the boss (who we don't care about) takes an exponential, mean-one-hour amount of time, and at that point the original tech support guy gets the information from the boss and then calls the customer back, using one of the 26 phone lines, but with high priority — there's no busy signal.
They wait till the first one's available, and then they call you. That return call takes a triangular amount of time.
Now let's talk about the tech support staff. The tech support guys have some interesting issues: they all have 8-hour days with 30 minutes for lunch, and they all have different schedules — every one of them has a different schedule — so we'll need the resource Schedule spreadsheet to model these guys on an individual basis. They all have different product expertise, so we'll need sets to model this, and we've talked about this before.
Here are the preferred orders for the sets (we actually gave an example of this in a previous lesson). Product one can be handled by Charity, Noah, Molly, and Anna (in blue), and Sammy (in red); product two by Tierney, Sean, Emma, Anna, and Sammy — again, the same cross-functional guys at the end; product three by Shelley, Jenny, Christie, and Molly (in blue), plus Anna, and Sammy (in red). Molly, by the way, turned out to be a math major in real life. You can see the cross-functionality that we had alluded to in a previous lesson. Here is the schedule of the various support staff:
they're all listed on the left, the products they can handle are listed in the Product column, and then that mess is their schedule. You can see everyone has a little lunch somewhere in there.
So here's a summary of what we did. We're going to do the demo in a separate lesson.
I gave a long tedious verbal description of the call center model. That was the only purpose of this lesson.
Next time it's gigantic demo time and we're going to get to see this Arena program finally, in all of its full glory
and you can see that we're putting together a lot of stuff.
Everything is actually easy and we just have to be patient about putting it all together. So you'll see it's a really
fun demo and I guarantee you play around with a little bit.
You are consultants. By the time it's over, so let's sit back, relax and we'll look at the demo in the next lesson,
so see you then.

Lesson 22.1: Call Center

All operators are busy at this time. Please hold.


Well, now we're finally going to do our call center demo, and what I'll do is describe a little bit of what we can expect to see in the demo.
The lesson overview: in the last lesson I described, in plain English, what was going on in the call center model.
So I just talked and talked and talked about all the stuff that was going on in the model. We didn't do a demo. That's what we're going to do this time.
Demo time: what we've done is put together lots of different ideas from many of the recent lessons — everything from this concept of fake customers to resource sets. All of that takes place in this demo.
Let's be on the lookout for the following things. Keep an eye out for these things.
Sub models that's that funny looking icon that I showed you at the beginning of the last lesson.
Fake customers used as timers.
We're going to use a queue with a capacity of 0 — this is the "they kick you out of the hotel if it's full" trick. So we'll kick customers out if all 26 phone lines are busy; you don't even get a chance.
Nonhomogeneous Poisson arrivals.
Three resource sets for tech support, each one of which has a specialty of product Type 1, two or three, and
there's some cross functionality.
Tricky Seize-Delay-Releases.
We've talked about those a couple Times Now.
And quirky callback procedures.
Now it's demo time.
You'll be seeing the following interesting screenshots. This is a top-level view of the model with the various submodels displayed.
This is what tech support looks like — you click into that tech support submodel up here and you get this mess. This is the time period counter; its only job is to click off 1, 2, 3, up to 22 —
there are 22 half-hour periods in an 11-hour day — to keep track of what period of the day it is. And then the arrival submodel:
we've actually looked at a very, very close variation of this thing before. This is the arrivals, where we use that Queue block, et cetera, et cetera.

Lesson 22.2: Call Center Demo

Well, now, the call center demo — we're finally going to do this after all this time. Here is the main screen for the call center.

You can see it's arranged in these submodels. Here's where we get the submodel from: click on here, Submodel.
Then you click inside of it if you want to start editing. So I've already done that. We've got the submodels here:
time period counter, create arrivals,
tech support calls, sales calls, order status calls, return tech support calls. Notice how they're connected up — except for this time period counter, which is an island all by itself.
No man is an island, but some submodels are. So let's click into this; we'll do this first. But by the way, see how it runs.
OK, it looks a little boring, but we're generating arrivals, and they go through some tech support, some sales, some order status, some return calls.
What's going on in the time period counter? Its only job is to use fake customers to update the time period. It cycles, it turns out, every 11 hours:
one of these fake customers is generated every half hour; they go around the circle and they update the period.
The period is useful for certain statistics gathering and for keeping track of when the arrival rate changes, and when the resources — the servers — go on breaks, things like that.
So you can see, when I get to 22, it's going to kill this off. Watch what happens here: 18, 19,
20, 21 — here we go, 11 hours — 22, and off it goes, and a new one is generated now.
This clock here doesn't say 6:00 PM, because we're starting it sort of at midnight, and you have to pretend that midnight corresponds to 8:00 in the morning, when the call center actually opens. OK, and it just goes round and round,
generating these fake customers, keeping track of the variable called Period — Period is a variable, which gets updated every half hour. OK, that's all I have to say there.
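The fake-customer counter amounts to a modular increment every half hour; here's a quick sketch (plain Python, not Arena) of that bookkeeping:

```python
# Sketch of the time-period counter: a fake customer loops every half hour and
# bumps the Period variable 1, 2, ..., 22 (an 11-hour day), then wraps back to 1.

PERIODS_PER_DAY = 22

def run_counter(half_hours):
    """Return the sequence of Period values over the given number of half hours."""
    period, history = 0, []
    for _ in range(half_hours):
        period = period % PERIODS_PER_DAY + 1
        history.append(period)
    return history

history = run_counter(24)
print(history[:3], history[21:24])   # -> [1, 2, 3] [22, 1, 2]
```

Anything in the model that depends on the time of day (arrival rates, break schedules, per-period statistics) just reads the current Period value.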
Now I'll go to the Navigate bar, go back to Top Level, and then let's go to Create and Direct Arrivals.
We've seen this stuff before — I did an entire demo on these blocks. Customers are created; I'm just going to run through here. Let's run it a little more slowly —
well, it doesn't need to be that much slower. Customers are created; they go through a queue of capacity 0.
We'll talk about that in a second. They attempt to seize a trunk line; there are 26 trunk lines.
This queue has capacity 0 — see the zero there. So that means if a call can't get into the Seize block, it's forced to stay over here at the Queue block; but the Queue block has capacity 0, so it's kicked out over here, goes through a couple of statistics-gathering things (don't worry about those), and is disposed. If it can seize one of the trunk lines, we assign an arrival time, and it listens to music.
And then we determine the call type. That's what this N-way by chance does. Let's take a look.
N-way by chance — I always say "choice"; it's N-way by chance. 76% goes to call center tech support,
16% goes to sales, and the remaining 8% goes to order status. That's the purpose of this. Then it looks like we exit the submodel — these exit points are provided automatically — so we exit the submodel and go back to Navigate,
Top Level. And see: 1, 2, 3 — tech support, sales, order status. Let's do sales right now; it's the easiest thing.
Very simple: seize a sales guy, delay him for a while,
release the sales guy. Now, in fact, you're done, so you have to release the sales guy — and don't forget to release the trunk line.
This is a release of the trunk line that you originally seized back in the submodel where we created the arrivals.
Right, so this is a good point to take a look at some resources: Basic Process, Resource spreadsheet. Let's look at these resources.
A lot of resources — holy smokes: Charity, Noah, Molly... These are all the tech support guys; we'll look at them in a minute.
They all have schedules. And then the sales force — it turns out they have a schedule too. There are seven of them, and they have schedules 'cause they go on lunches, but they're all vanilla and faceless and the same.
They do have a schedule, though. And then the trunk line has a fixed capacity of 26 phone lines. So we'll look at these schedules — Charity, Noah, Molly, blah blah blah — in a minute.
Schedules — here we go.
Here's Charity's schedule; click. So she's working: this is 8 in the morning, and this looks like — 1, 2, 3 — 3 1/2 hours, so this is 11:30. She works for 3 1/2 hours, takes a lunch break, works some more, and goes home. So that's Charity. I'm not going to save her schedule. Let's look at Noah.
Yeah, he works later in the day — so see, everyone's got a different schedule.
We don't need that noise.
Then we can look at the sales schedule.
These are the faceless sales team. We start out with three in the morning, and it goes up to seven. They take some lunches — a couple of them sequentially — and it goes back up to 7.
Then there's five near the end of the day, and by time 11, which corresponds to 7:00 PM, they're all gone. And these are in half-hour increments.
Sometimes it doesn't make sense to click these in, because you want finer tuning, but when you have these discrete integer increments, this is just fine.
I'll show you how to do this without clicking on these in a second, when we look at the arrival schedule. Notice this is an Arrival-type schedule.
So in the Type column you have to click Arrival. Let's take a look at this.
It's much more complicated looking, but here's what this is — you can look under the hood.
This is 20 arrivals per hour, then 35; you can see it goes up and then down as the day progresses. These last two zeros correspond to the period between 6:00 and 7:00 o'clock,
when there are still a couple of staff members around, but we cut off the arrivals, so the remaining customers filter through the system at this point.
Now, it's perfectly OK to click these numbers in, but here's what some people like to do. (I'm going to say no — I'm not going to save the changes.) You can right-click and edit via a spreadsheet; this is what I usually do if I have a complicated one.
So let's do this. See, here we've just typed these in by hand. Notice that when it says Duration here, this means we have a rate of 75 customers per hour for two time periods —
that is, for 2 half-hour time periods — and here, 70 customers per hour for one half-hour time period. Remember: the time periods are whatever we define, but the 70 and 75 are always per hour. So we define the time periods as half-hour time periods,
but the arrival rates are always in per-hour units. A little confusing, but you just have to get used to it. So when we see the 75 for two half-hour time periods — let's take a look at it again; see, where does that happen... right there — there are two time periods in a row of 75, and what they're doing is saving space in that little spreadsheet.
A little stupid, but that's what they do. OK, so that's easy, and this gives me the various arrival rates.
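The units trick is worth pinning down: each schedule row gives a rate in calls per hour and a duration in half-hour periods, so the expected number of arrivals contributed by a row is rate × duration × 0.5. A sketch using the two rows just discussed:

```python
# Sketch of the Arrival-schedule units: rates are per HOUR, durations are in
# half-hour periods, so a row contributes rate * duration * 0.5 expected arrivals.

schedule = [(75, 2), (70, 1)]   # (rate per hour, duration in half-hour periods)

expected = sum(rate * periods * 0.5 for rate, periods in schedule)
print(expected)   # -> 110.0  (75 arrivals in one hour + 35 in the next half hour)
```

If you forget the 0.5 and treat the rates as per-period, you'd generate twice as many calls as intended — which is exactly the warning-sign mistake from earlier.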
OK, now the last thing I'll look at is the resource sets.
And here they are — we have these resource sets. Ignore the other guys, like I said before. Product One:
there's Charity, Noah, Molly, Anna, and Sammy, in preferred order. Product Two:
Tierney, Sean, Emma, Anna, and Sammy, again in preferred order, so you can see the cross-functionality of Anna and Sammy.
Let's go to Navigate, Top Level, and I am now going to cruise right into the tech support submodel, right here, so I can do that.
There — and that's what it looks like. What a mess! But it's actually quite easy.
So I'm zooming in, and let's run the model.
See, they come in. They randomly choose one, two, or three — these are the product types. They do a certain assignment —
this is like when they actually start getting served. So, Seize: when do we start getting served? They seize a server when they start getting served, delay for a while, release the server and the phone line, and leave.
Now they all go into — well, there's a possibility of a callback; we'll get into that in a second.
OK, and so let's watch this. We're going to seize a product one server, assign the arrival time, delay the product one server, and then release the product one server and the phone line.
So let's do a deeper dive there. This, of course, is just the probability of going to product one, two, or three. Then the Seize, as I promised:
seize one server from the set of product one guys, in preferred order, and store that guy's name — the person that you seized — in Tech Agent Index.
Then we assign the product type as 1 (we'll use that later on) and record that the call starts at the current time.
Then we delay the guy for an expression called Tech Time, which is defined over here. Let's see — I think it's an Expression, so we go into the Advanced Process template.
Yep, there it is: the Expression Tech Time is triangular(3, 6, 18). The reason we put it in as an Expression, instead of just typing the triangular distribution here, is that we use the same Tech Time in three places, and we want to be lazy and not have to type "triangular blah blah blah" in three places — especially if we ever change it. OK, then when you're done getting served, release
one guy from the set called Product One. Make sure it's the specific member that you used, and that specific member is stored in Tech Agent Index.
member is stored in tech Agent index.
So be careful about that and then of course release the phone line so somebody else can use it. Hey, very
simple, right? So then, let's go back to the top.
Navigate and here's what. Here's what we're looking at.
Whoops
We just got done with tech support.

By the way, I can move it around like this too, but I always screw this up.
So we just got done with tech support, and you'll notice that all these customers are routed into Return Tech Calls — see, they're all going to Return Tech Calls. Now, only 4% of the customers actually get a return tech support call. Let me zoom in: so that means that 96% of them are not affected. That happens right here; let's watch.
Blah blah blah blah blah. See, almost everybody goes here; what happens is that they are immediately disposed. So only 4% actually go all the way through to return tech support; 96% are disposed.
You can watch — every once in a while you'll actually get one. There we go, we finally got one, and they go to product type 1, 2, or 3.
I'm going to click into these now and go through them fairly quickly. Remember, in the Tech Support submodel we set an attribute recording which product type you were, so we do an N-way Decide by condition on product type: if you're product type 1, 2, or 3, that's where you go. Each branch is a Seize-Delay-Release again. Let's see.
You have to seize a phone line (a CS trunk line) because the agent is calling you back with high priority. He's actually calling you back, but it's equivalent to you calling him back with high priority and not having to go through that Queue block, so you get the first available trunk line. But you don't actually seize it until you're able to seize your tech support agent first: you do that one, then you get the phone line.
So: seize one agent from Product 1, the Specific Member you talked to before, not Preferred Order anymore. It's the specific agent you talked to before, because you remembered who he was: Tech Agent Index. You use him for a while, delaying by Return Tech Time, which is another expression, and then you release that agent using the same command as before: release one agent from the set Product 1, Specific Member, Tech Agent Index. Then release the phone line, and you're done. There's one thing I forgot to show, and I'll do it in two seconds because it's so easy.
Order Status. Basically, let's watch it: you go through a little delay for the order-status inquiry ("How's my order doing?"), which you handle by yourself, and then some small number of callers require a sales agent. This looks real sloppy, but all it's doing is seizing a sales agent. Either way, whether you go through the sales agent or not, you end up right here and release the phone line. Very, very simple.
And that's it. What I encourage you to do is look at this demo a couple of times; there's a lot of stuff going on here, and I went through things fairly quickly. Once you understand everything, you are consultants. Congratulations: you've just done a very, very difficult model, and you're ready to go make a lot of money.
Here's a summary of what we did: this time I put everything together, and we finally demoed the call center example in all its radiant glory.

In the next lesson, I'll describe an inventory system in plain English and then do a demo. We'll be doing a bunch of these specialty demos from now on. So see you then; can't wait.

Lesson 23.1: An Inventory System

Now that we've done the call center demo, I'm going to do a few additional specialty demos, describing each in a bit of detail as we go. You'll see the wide range of things that Arena can do.
Here's the overview. Last lesson we did the call center simulation demo. This time I'll give a plain-English description of a simple inventory system, an (s,S) inventory model, and then the demo. In the next few lessons, we'll look at several demos, as I just promised: the inventory model for now; a quick little demo on whether you should use one line or two lines; what's called a reentrant queue (that's a lot of fun); things called SMARTS files and other Rockwell demos (these are all Rockwell materials); and a manufacturing system with transporters, conveyors, and various other forms of movement.
Here's a description of the (s,S) inventory policy we'll be looking at in the current demo. I'm going to simulate the widget inventory of some stock over time, using modules from the Basic and Advanced panels. Customer arrivals form a Poisson process with Expo(0.1) interarrival times (in days), so we have about 10 customer arrivals per day. The demand size uses the DISC function, which we've mentioned once or twice before; that's a discrete distribution, and I'll say what it is when we look at it. Demand is always, quote unquote, met: even if you don't have stock, we backlog, so eventually every demand is filled. You never send a customer away. Inventory is taken, or evaluated, at the start of each day, so you only notice that you're understocked at the beginning of each day; if you run out of stock mid-day, too bad, you just backlog, backlog, backlog. If inventory is below little s, we order up to big S. That's the (s,S) inventory policy: if little s is 5 and big S is 20, and you end up with an inventory of 4 at the beginning of the day, you order 16 that day.
The delivery lead time is uniform: it takes more than half a day but less than a full day for an order to arrive. That's pretty quick, but by the time the order arrives, other customers will also have arrived, depleting inventory further; that's an issue. There are ordering costs, namely a setup cost per order plus an incremental cost (the more you order, the more it costs); holding costs (stuff sitting in stock costs you money to hold); and penalty costs: just because orders are all eventually filled doesn't mean you don't pay a penalty for having the backlog. Maybe customers notice that you didn't have the item in stock, and they're ticked off.
Now, using all these various costs, we want to automatically calculate the average total cost per day over a 120-day period, and I'll show you how we set that up. This involves some Statistics that you'll have to figure out how to use yourself, but I'll show you where they're contained. Inventory unit costs, like how much it costs to place an order and things like that, and all the other parameters are Variables, so I'll show you the list of variables.
Inventory is decremented by demands and incremented by received orders: when a demand comes in, we decrement inventory, and when we receive an order after the lead time, we increment it. Interarrival times, demand sizes, and lead times are all Expressions that we get from the Advanced Process template (expressions come from Advanced Process; it's a little piece of trivia that I always forget). Accumulated costs are calculated in the Statistic spreadsheet, also located in the Advanced Process panel. The daily inventory review is conducted by a fake customer at the beginning of each day.
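If you'd like to see the same logic outside of Arena, here is a minimal Python sketch of a daily-review (s,S) policy with backlogging, using the lecture's illustrative values s = 5 and S = 20. The demand sizes 1 through 4 and the rounding of the half-to-one-day lead time up to the next morning are my simplifying assumptions, not part of the Arena model.

```python
import random

random.seed(42)

def simulate_sS(s=5, S=20, days=120):
    """Daily-review (s,S) policy: at the start of each day, if the inventory
    level is below s, order up to S.  Unmet demand is backlogged, so the
    level may go negative.  Returns the average end-of-day inventory."""
    inventory, on_order, due_day = S, 0, None
    levels = []
    for day in range(days):
        # Yesterday's order arrives (lead time is under one day).
        if due_day is not None and day >= due_day:
            inventory += on_order
            on_order, due_day = 0, None
        # Beginning-of-day review: order up to S if we're below s.
        if inventory < s and on_order == 0:
            on_order, due_day = S - inventory, day + 1
        # Expo(0.1-day) interarrivals, i.e. about 10 customers per day;
        # demand sizes 1..4 stand in for the lecture's DISC values.
        t = random.expovariate(10.0)
        while t < 1.0:
            inventory -= random.choice([1, 2, 3, 4])
            t += random.expovariate(10.0)
        levels.append(inventory)
    return sum(levels) / len(levels)

print("average inventory level:", round(simulate_sS(), 2))
```

Tracking the individual cost pieces (ordering, holding, penalty) per day is a straightforward extension, mirroring what the Statistic spreadsheet does in the Arena model.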
In this lesson, we took a little time to discuss the (s,S) inventory policy: not the most complicated policy, but certainly an interesting one. In the next lesson, I'll look at that old Dr. Seuss question: one line, two lines; red line, blue line.

Lesson 23.2: An Inventory System Demo

This is the inventory demo using the (s,S) method we talked about in the lesson. Let's watch it. You can see all these arrivals occurring right here; these things going through the system are customer arrivals, and if I zoom in you can see they show up about 10 times a day. The interarrival time distribution turns out to be Expo(0.1); I'll show you that in a minute. We decrease inventory by a discrete demand amount, and then the customer leaves. That's all that's going on.
I don't care about servers or anything here; all I care about is inventory, which we can watch over here. This is an inventory level fill bar, which you can grab from the template up here, and this is a graph of the inventory level: blue means in the black, and red means in the red, that is, backlogged inventory. It displays the familiar sawtooth pattern you'd see under a standard (s,S) inventory policy. If I zoom in, you can see inventory going down, down, down, sometimes into the red, and then jumping up when an order is received: when inventory goes below little s (I forget what it is here), we order, and the next day the order shows up and inventory jumps up; then down, down, down, and up again. See how that works? And whatever we're doing here, we're not going into the red very often.
OK, let's zoom back out and I'll show you what I'm doing. These customers come in according to an interarrival distribution, as promised: Inter Demand Time, which turns out to be exponential; it's an Expression that I'll show you in a minute. We decrease inventory via Inventory Level = Inventory Level - Demand Size, where Demand Size is also an Expression I'll show you in a minute. Inventory Level is a Variable because it's global: if you decrease inventory here, you decrease it everywhere.
Once a day I use a fake customer (the Evaluation Interval is once a day) whose only job is to review the inventory. So he goes through and doesn't really do much; he seems to just clear through: no order, no order. But let's look and see what's going on. Sometimes he goes up to this side: right there, we placed an order. Let's take a look. That blinking just means the simulation is continuing.

If the inventory level is less than little s, the Decide outcome is to place an order; if the inventory is greater than or equal to little s, no order. So scroll over, assuming an order has been placed: we update some costs (these are the various costs), and the order quantity is big S minus the current inventory level, as I described.
Now we have an ongoing order. I'm keeping track of whether or not an order is outstanding so that I don't place multiple orders within two days; we do just one order at a time. That's what Ongoing is doing. Then we wait out the delivery lag, which is UNIF(0.5, 1.0); I'll show you where that expression is in a minute. Eventually the delivery occurs, thanks to the fake customer, and we update the inventory via Inventory Level = Inventory Level + Order Quantity and set Ongoing back to zero, which means there are no outstanding inventory orders. Then the fake customer goes away. That's it; that's how the thing runs. And like I said, we keep track of the inventory level up here.
There we go: Inventory On Hand, which is the name of the data piece we're referring to, and the expression is MX(Inventory Level, 0), the maximum of the inventory level and 0. That produces the blue curve you see here (sorry, not the red curve, the blue curve; let me get my colors straight). This produces the positive part of the curve, and there's another version that produces the red, negative part.
OK, fantastic. Now let's look at a couple of things in the Advanced Process panel, where I've got some Expressions. Inter Demand Time is exponential, as promised: Expo(0.1). Demand Size is DISC (it looks like "disco," but it means a discrete distribution), which returns demand 1 with probability 1/6, and so on: demands 1, 2, 3, 4 with the different probabilities described here.
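For reference, Arena's DISC takes cumulative probabilities paired with values. Here is a small Python sketch of that lookup. The table values below, demands 1 through 4 with probabilities 1/6, 1/3, 1/3, 1/6, are my guess at the intended distribution (the transcription is garbled), so treat them as illustrative only.

```python
import random

random.seed(0)

def disc(cum_probs, values, u=None):
    """Mimic Arena's DISC(...): scan a cumulative-probability table and
    return the first value whose cumulative probability reaches u."""
    u = random.random() if u is None else u
    for cp, v in zip(cum_probs, values):
        if u <= cp:
            return v
    return values[-1]          # guard against round-off in the last entry

# Hypothetical table: demand sizes 1..4 with probabilities
# 1/6, 1/3, 1/3, 1/6 (cumulative: 1/6, 1/2, 5/6, 1).
sample = [disc([1/6, 1/2, 5/6, 1.0], [1, 2, 3, 4]) for _ in range(10_000)]
print("mean demand:", round(sum(sample) / len(sample), 3))
```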
Evaluation Interval: well, this is every 0.5 days. Ah, that's why we were getting so much blue; I was evaluating the thing all the time, doing multiple evaluations per day. Let's see what happens if I evaluate only once per day; we should have less information. The Delivery Lag is UNIF(0.5, 1.0). So I've just changed the evaluation period to be more pessimistic, in a sense. Then we have a number of Variables in the Basic Process panel: Inventory Level, little s, big S, the total ordering, setup, and incremental costs (these are all variables), and how many days to run (120). You can take a look at all of those. The last thing I want to show you before looking at some output is that in Advanced Process we also use the Statistic spreadsheet, which keeps track of the various costs: holding costs, shortage costs, average ordering cost, average total cost. I'll leave it to you to figure out how these are calculated; they use internal Arena functions (MX, for instance, is the maximum). You can look at these things; it takes five minutes to figure them out. OK, so the only thing I'm going to do now is run this one more time with my updated evaluation period, so we're not evaluating as often. I predict we're going to see more red.

Let's see. Yeah, we are getting a little more red than we saw before. That makes sense: more red, because we're not evaluating as often. And just out of curiosity, let's go back and look at what little s and big S were in the Variables spreadsheet: little s equals 30 and big S equals 60. That's very conservative. Let's see what happens if I change them: little s equals 10, which is really cutting it close, I think, and big S equals 30.
This will be the last change I make, and we'll run the thing. And I predict... oh yeah, we're going to get much more red now. Oh yeah, this is a disaster: I'm evaluating things too late, and I'm getting these gigantic red backlogs. Too bad. Let's run it to completion; I'll hit the fast-forward button. Bang, 120... oh, I ran it for 600 days, not 120. Let's look at the outputs.
Let me show you where the costs are. If I had you look at various trials of this to optimize it, you would want to look at the average total cost: about $142 or $143. This is the quantity you care about, and I can guarantee that if you mess around with the various variables, you can get this down to about $115. I guarantee that, and perhaps you'll see an exercise about it pretty soon.

Lesson 24.1: One Line Versus Two Lines

Hi everyone. It's one fish, two fish time: we're going to compare one line versus two lines in a particular queueing model. Here's the overview. Last lesson I worked on a simple little (s,S) inventory policy example; very nice, and you can really play around with it a lot. In this lesson I'm going to ask a simple question: should we use one line feeding into two parallel servers, or separate lines feeding into two individual servers? You may notice that different fast-food restaurants explore this problem. We'll use a very cool trick called common random numbers to do an apples-to-apples comparison.
Here's our game plan. Option A: customers show up and join one line in front of two identical servers, going to whichever server becomes available first. Option B: customers randomly choose which of two lines, each in front of a single server, to join. Again, you can picture this happening at certain fast-food restaurants. We're going to compare which of A and B is better by using the exact same customer arrivals, thanks to a Separate module. Using the same exact customers is what allows an apples-to-apples comparison, like a paired t-test. We'll also use the same exact service time for each particular customer, whether he's in option A or option B, which we arrange with an early Assign module. So again, for the apples-to-apples comparison we use the same exact customers arriving and the same exact service times; the only thing that changes is the discipline they use, option A or B.

I'm going to guess that option A is better, because under option B it may happen that you randomly choose a line even though the other server sits with an empty queue. So almost certainly option A is better. Again: an apples-to-apples comparison using what are called common random numbers, that is, the same arrival times and service times, which is like doing a paired t-test in statistics. Wow, apples and pears in the same example! Common random numbers is something we'll explore again in Module 10; it's a really, really nice trick in simulation. Here's what the model looks like: up here is option A, and down here is option B. The Separate block sends the same exact customer arrivals down to both A and B, and then we'll see what happens.
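To see why common random numbers give an apples-to-apples comparison, here is a Python sketch that feeds identical arrival and service times to both configurations. The arrival rate, service mean, and run length are my own choices for illustration, not the lecture's.

```python
import random

def waits_one_line(arrivals, services):
    """Option A: one FIFO line feeding two parallel servers."""
    free = [0.0, 0.0]                       # next-free times of the servers
    waits = []
    for a, s in zip(arrivals, services):
        i = 0 if free[0] <= free[1] else 1  # first server to free up
        start = max(a, free[i])
        waits.append(start - a)
        free[i] = start + s
    return waits

def waits_two_lines(arrivals, services, coins):
    """Option B: each customer picks one of two single-server lines at random."""
    free = [0.0, 0.0]
    waits = []
    for a, s, c in zip(arrivals, services, coins):
        start = max(a, free[c])
        waits.append(start - a)
        free[c] = start + s
    return waits

random.seed(1)
n = 100_000
t, arrivals = 0.0, []
for _ in range(n):                          # Poisson arrivals, rate 1 per minute
    t += random.expovariate(1.0)
    arrivals.append(t)
services = [random.expovariate(1 / 1.8) for _ in range(n)]   # mean 1.8 minutes
coins = [random.randrange(2) for _ in range(n)]              # option B routing only

# Common random numbers: both options see the SAME arrivals and service times.
wa = waits_one_line(arrivals, services)
wb = waits_two_lines(arrivals, services, coins)
print(f"avg wait, one line:  {sum(wa) / n:.2f}")
print(f"avg wait, two lines: {sum(wb) / n:.2f}")
```

Because both options face identical workloads, any difference in the averages is due to the queue discipline alone, which is exactly the paired-comparison idea.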
Here's a summary of what this lesson did: we used Arena to compare two different server configurations for a simple queueing system. Very, very simple. Next time I'm going to look at a really crazy reentrant queueing system. It's not intuitive, but it's very, very cool; simulation was actually the first method people used to analyze these crazy systems. So see you then.

Lesson 24.2: One Line Versus Two Lines Demo

Now we're going to demo this one-line-versus-two-lines problem. On the top here (I'll actually run it) I've got one line feeding into two parallel servers; on the bottom, I split the customers evenly, 50/50, into two separate single-server lines. Let's watch customers arrive. I can zoom in and you can see the little blue marbles coming in: parts arriving to the system. I assign a service time to each one right away, so wherever a customer goes he'll have the same service time, and he'll have the same arrival time. These are the same exact customers, which I split into two identical clones via a Separate-module duplicate: original, duplicate. One clone goes up here into the single line feeding two parallel servers; the other clone goes down here, where a Decide block (a 50/50 Decide) randomly splits them between two parallel servers. The single drilling center up here is a resource with capacity 2, so those are basically the only differences: capacity 2 with a single line, versus two separate lines, each feeding a server of capacity 1. The only things I care about now are the average numbers in the queues.
So far it looks like I've got an average of 1.23 customers in this queue, and 0.11 here and 2.16 here, so in this system I've got about 2.3 in queue, as opposed to 1.6. These numbers are a little hard to see; the cycle time for option A is about 16, and for option B about 21. But this simulation is still very, very early; we're only at 166 minutes. Let's run it more quickly; I'll zoom in here. Now we're at 1,000 minutes... now 3,000. And you can see these queues: the average number in queue down here is about 7 when you add them up, versus only about 2 when you add them up here, and the cycle times down in option B are twice the cycle times up here. Let's run this thing to completion; I think I run it for some huge number of minutes.

On it goes: I'll stall, stall, stall. I think it goes to 100,000 minutes; we're at 60,000, 70,000, 80,000... and it goes even further than that. So what I'll do is run it in batch mode, via Run Control and Batch Run, and yeah, whatever the number was, it was quite a long time.
Let's see some of the output. You can see that we have a huge number of observations, and each drilling center served about the same number of parts. Drilling Center 1 and Drilling Center 2 are the two separate single servers, and Single Drilling Center is the double server; these are the numbers in and out.
The statistics I'm interested in are the waiting times. In the Single Drilling Center, the one with two parallel servers, the average waiting time is 41 minutes, while in the two single-server lines of option B it's even more than 200 minutes. Unbelievable: a huge difference, and I would definitely use option A just by looking at this. OK, you can play around with this a little more if you want, but the story is pretty clear: option A definitely beats option B.

Lesson 25.1: Crazy ReEntrant Queue

In this lesson, we're going to go over a crazy reentrant queue that could drive you insane, so be careful. Here's our overview. Last time I looked at the one-line-versus-two-lines example; in this lesson I'm going to look at a much more interesting system, the so-called reentrant queue. What happens here is that we reuse servers and mess around with priorities and service times. A customer will go from one server to another, then back to the first one, and you can get some very non-intuitive outcomes, as you'll see in a second.
So, in reentrant queues, customers go to server 1, then to server 2, then back to server 1, back to 2, and back to 1: the order is 1, 2, 1, 2, 1. That's the sense in which they "re-enter," and that's why we call them that. They're depicted in Arena as five separate Process modules with Seize-Delay-Release trios, and as we've done many times, it's OK to seize the same server in different Process modules: even though the modules look like they're in physically different places on the screen, you're still seizing the same server. So it's Seize-Delay-Release five times, with servers 1, 2, 1, 2, 1, in that order. Now, this seems like a totally, perfectly boring model. Let's make it interesting.
All service times are going to be exponential, and here are the means: the first time we go to server 1, the mean is 1; the first time we go to server 2, the mean is 5; then 0.1 at server 1, then 0.1 at server 2, then back at server 1. These means are actually chosen very carefully. Now the priorities: when the customer goes to server 1 the first time, he actually has low priority, so if there's another customer heading to server 1 from further down the line, that customer has higher priority.
When the customer gets to server 2 for the first time, he's got high priority; then when he goes to server 1 the second time, he's got medium priority. So see what's going on: the service times are changing and the priorities are changing as the customer moves from server to server. In other words, on the customer's third visit to server 1 he has a high-priority, exponential(0.5) service time. That's the third visit to server 1. So it's a complicated little model; in a sense a simple model with complicated little logic.
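Here is a rough event-driven Python sketch of this kind of reentrant queue. The (server, mean, priority) rows follow the lecture's description, but since the transcript lists the means inconsistently, treat the exact numbers, and my choice of arrival rate, as illustrative.

```python
import heapq, itertools, random

random.seed(7)

# One row per visit: (server, mean service time, priority); a smaller number
# means higher priority.  Values are examples in the spirit of the lecture.
STAGES = [(0, 1.0, 3), (1, 5.0, 1), (0, 0.1, 2), (1, 0.1, 2), (0, 0.5, 1)]

def simulate(horizon=2000.0, rate=0.15):
    tick = itertools.count()                  # unique heap tie-breaker
    cal = [(random.expovariate(rate), next(tick), "arr", None)]
    waiting = [[], []]                        # per-server priority heaps of jobs
    busy = [False, False]
    peak = 0                                  # peak total number waiting

    def begin(srv, now):
        """If the server is idle, start its highest-priority waiting job."""
        if not busy[srv] and waiting[srv]:
            _, _, stage = heapq.heappop(waiting[srv])
            busy[srv] = True
            heapq.heappush(cal, (now + random.expovariate(1 / STAGES[stage][1]),
                                 next(tick), "done", (srv, stage)))

    while cal:
        now, _, kind, data = heapq.heappop(cal)
        if now > horizon:
            break
        if kind == "arr":                     # fresh customer starts visit 0
            srv, _, pri = STAGES[0]
            heapq.heappush(waiting[srv], (pri, next(tick), 0))
            begin(srv, now)
            heapq.heappush(cal, (now + random.expovariate(rate),
                                 next(tick), "arr", None))
        else:                                 # service completion
            srv, stage = data
            busy[srv] = False
            if stage + 1 < len(STAGES):       # route to the next visit
                nsrv, _, npri = STAGES[stage + 1]
                heapq.heappush(waiting[nsrv], (npri, next(tick), stage + 1))
                begin(nsrv, now)
            begin(srv, now)
        peak = max(peak, len(waiting[0]) + len(waiting[1]))
    return peak

peak = simulate()
print("peak total number waiting:", peak)
```

Plotting the two queue lengths over time (rather than just the peak) is what reveals the oscillation the demo shows.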
Here's what it looks like: very simple, and we'll see it in a minute. The customer goes 1, 2, 1, 2, 1. We're going to keep track of how big the lines are, including the total queue in front of server 1: the total queue is the queue here plus the queue here plus the queue here. Again, even though server 1 is physically in one place, we can pretend, at least for purposes of the screen, that these are separate little lines. This is the line that forms in front of server 1 when you're there for the first time, and this is the line that forms when you're there for the second time. Maybe they're next to each other, but they're technically treated as separate lines. And here's what you get when you add up the lines. So it's demo time, and we're going to see some cool behavior.
behavior.
Here's the summary.
It was just so cool, so very cool words escaped me. Hard to believe.
Next time I'm going to get smart finally and literally.
So see you next next lesson.

Lesson 25.2: Crazy ReEntrant Queue Demo

Here's our demo on reentrant queues. Very cool. I'm going to show you what's going on and then we'll run it. We create customers every once in a while, and they go to server 1, server 2, server 1, server 2, server 1, in that order. So literally (I was very unoriginal): Seize-Delay-Release, server 1, how boring, with exponential service times. You can refer to the lesson to see what the service-time means were in each case; they're exponential, with means like 1 and 5, and you can look up the order. The only thing that's a little different is the priority.
In each case: the first time I seize server 1, it's low priority; the first time I seize server 2, it's high priority; the second time I seize server 1, it's medium priority. That's where I'm getting those. I'm keeping track of the individual queue sizes and of the overall queue sizes. Here we go; this is a very fast demo. Ready? This is scary. Here we go.

So you can see: boring, boring, boring. Some queues are forming; you can see the numbers. It looks like server 1 and server 2 have got some queues, and the queues look a little bigger now. Server 2 all of a sudden got some big queues; now server 1 seems to be getting some big queues. That's interesting. Now it looks like it's gone back to server 2, and what's going on now? Look down here; you don't have to watch the picture, just look at the numbers: we are getting bizarre oscillating queue behavior between server 1 and server 2. One queue goes up; as it clears, the other one builds. This is a result of the careful selection of the priorities and the service times, and we're going to get oscillating behavior.
And not only is it oscillating behavior, it gets worse and worse. Let me run it faster and we'll see. This number peaked at 40, then 60... let's see what happens this time. It keeps getting worse: 75 this time. What's happening is this crazy oscillating behavior, because of the way we chose the priorities and the service times, et cetera (it actually died down a little bit here). If you let this thing run and run and run, it gets worse and worse, and this is extremely interesting behavior, which you can play around with and show to amaze your friends. So it's a quick demo; take a look at it, and we'll see you in the next lesson.

Lesson 26.1: SMARTS and Rockwell Files

In this lesson I'm finally getting smart, by looking at Arena SMARTS files and Rockwell demos. Rockwell is the company that owns Arena. These are hidden little gems that you can find and play around with; I actually like playing around with the SMARTS files during the commercials while watching a TV show. These are fun.
In the overview: last time I looked at that nutty reentrant queue and its crazy behavior. In this lesson I'm going to look at these little gems, the SMARTS files and the Rockwell demos, and like I said, they make great commercial-break and bedtime reading. They're very nice little tutorial files, organized by subject area, that occasionally help you with difficult problems you might not otherwise be able to solve using Arena's terrible help menus. For example, there's a SMARTS file for creating customers with a time-dependent arrival rate that varies according to some equation. That's interesting, and in fact there are hundreds of SMARTS files you can use; that's why I look at them for fun. Rockwell also has many prepared professional demos. I'm going to show you one of each.
Things to look for: you have to sniff around a little for these, but on my installation, if I go to Libraries, Documents, Rockwell Software, Arena, that's where you'll find them; it may differ a little on your computer. I'll demo a couple of typical examples: for instance, SMARTS files where arrival rates vary according to an expression, and things like this emergency room. These are all very nice little demos.

The summary of what we did in this lesson is very simple: I looked at several cool Rockwell SMARTS and demo files. In the next lesson I'm going to look at a very easy-looking yet extremely sophisticated manufacturing cell, and I'll allude to variations, including transporters and conveyors, for that cell. That's a somewhat more advanced concept, but it's fun to look at, so we'll talk about it next time. See you then.

Lesson 26.2: SMARTS and Rockwell Files Demo

What we'll be doing here is looking at an arrival rate that varies via an expression, and actually a variable that depends on the day of the week. So I'm going to create some customers whose arrival rate depends on this expression, one per day-of-week rate, and that day-of-week rate is defined as a Variable. Let's take a look at it (Tuesdays are different from Wednesdays, say). We go into Basic Process, Variable, and look at this: we have Day, which is just the day of the week (it starts out at 1), and Day Rate, with seven rows corresponding to the different rates per day.
It's going to be we'll start out at one day. Rate 7 rows corresponding to different rates per day.
So how do we know what day it is? Very simple: we use this fake entity right here to generate the days of the week. Let's just watch. He goes round and round, and once a day (watch this, see) he comes around: 2. He comes around again: 3. So once a day this updates, and it's kind of smart: it goes 1, 2, 3, 4, 5, 6, 7, then it's smart enough to come back to 1. You can take a look at that. Here, the rate just changes by looking up which entry of the variable vector to take: the first, second, third, fourth, fifth, sixth, or seventh. So it's very, very simple, and this is just one little tutorial that Arena gives you in the SMARTS files.
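The same day-of-week trick is easy to sketch outside Arena: keep a seven-entry rate table and generate exponential interarrivals at whichever rate is current. The rates below are hypothetical placeholders for the SMARTS file's values.

```python
import random

random.seed(3)

# Hypothetical arrivals-per-day rates for days 1..7, the analogue of the
# seven-row "Day Rate" Variable in the SMARTS file.
DAY_RATE = [20, 25, 25, 30, 40, 60, 15]

def weekly_arrivals(days=7):
    """Piecewise-constant-rate Poisson arrivals: within a day the rate is
    constant, so interarrival times are exponential at that day's rate.
    A jump that would cross midnight is discarded and the clock restarts
    at the boundary, which is valid by the memoryless property."""
    times, t = [], 0.0
    while t < days:
        day = int(t) % 7
        x = random.expovariate(DAY_RATE[day])
        if t + x >= int(t) + 1:
            t = float(int(t) + 1)      # cross into the next day and re-sample
        else:
            t += x
            times.append(t)
    return times

arrivals = weekly_arrivals()
print(len(arrivals), "arrivals in one week")
```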
Another thing Rockwell is very good about is that it has a lot of cool little demos. I'll show you one: let's look at an emergency room. This is a professionally prepared demo from Rockwell; you'll see mostly the graphics, not the underlying logic. Here are the graphics; let's watch. It's beautiful, and if I make this a little more full-screen, quite nice. You can see patients coming in and the clock changing; it's quite nice. This was prepared by Rockwell, and they have numerous demos like it that you can show people. Patients come into the emergency room, get served, there's a waiting area, and it keeps track of ongoing conditions and utilizations, things like that. So there you go.

Lesson 27.1: A Manufacturing System

Now I'm going to give a description of a manufacturing system, with an accompanying demo. This is a very nice little model that I'm stealing from the Arena book.

Here's the overview: in the last lesson I looked at some very cool SMARTS and Rockwell demos. In this lesson I'm going to demo a pretty easy-looking yet very sophisticated manufacturing cell, and I'll look at a couple of variations, say involving transporters and conveyors. The idea is that we can introduce movement and multiple part paths. Now, you may not be interested in movement and things like that in your particular application, but the fact that we can handle multiple sequences of paths is very interesting and generalizes very nicely, so we'll see how that works.
Here's a description of the general model. I've got a manufacturing cell producing three different types of parts. Each part follows a different path, or what's called a sequence, through the system, and different service times occur at each station, depending on the part type and the place in the visitation sequence. For instance, you could have part type 2 visiting stations 1, 2, 4, 2, 3, in that order, and each of those stations will have different service times; even the multiple visits to station 2 might have different service times. Movement requires the Advanced Transfer template, so that's another template you have to load, and then you use modules like Route, Station, Enter, and Leave; you'll see those in the demo. You also use the Sequences spreadsheet, which keeps track of the set of paths the parts can take (what sequence of visitations they undergo), and you'll need advanced sets to handle sets of sequences. You can have a set of anything, and it's perfectly OK to have a set of sequences. There are ways to get around it, but in this particular demo we need advanced sets; you can look at those if you want, though I won't go into detail during the demo itself. Parts can move in a variety of ways: they can just move by themselves, or I can put them on a transporter or a conveyor. Lots of ways they can move. If you move them on transporters or conveyors, that will require the construction of transporter and conveyor paths, but that's easily done, and you'll see what they look like in the demo.
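The Sequences idea boils down to a per-part-type list of (station, service time) steps. Here is a tiny sketch; the station orders match the demo's blue, red, and green paths, while the service times are made-up placeholders.

```python
# Each part type follows its own sequence of (station, mean service time)
# pairs, the analogue of Arena's Sequences spreadsheet.  The station orders
# match the demo (blue 1-2-3-4, red 1-2-4-2-3, green 2-1-3); the service
# times are hypothetical.
SEQUENCES = {
    "blue":  [(1, 2.0), (2, 1.5), (3, 1.0), (4, 2.5)],
    "red":   [(1, 1.0), (2, 2.0), (4, 1.5), (2, 0.5), (3, 1.0)],
    "green": [(2, 1.5), (1, 1.0), (3, 2.0)],
}

def route(part_type):
    """Yield (step number, station, mean service time) for each visit; note
    that repeat visits to a station can carry different service times."""
    for step, (station, mean) in enumerate(SEQUENCES[part_type], start=1):
        yield step, station, mean

for step, station, mean in route("red"):
    print(f"step {step}: station {station}, mean service time {mean}")
```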
This is where all the graphics take place, and this is the code. It is trivial code once you get used to it. This right here is a Route block; you might have an Enter or Leave block here, and these things are sometimes called Stations. So there are lots of different types of blocks you could use for the variations on this problem, but it's a very easy problem to code up: a seemingly complicated problem that's actually easy to code.
So here's a summary of what we just did.
This time we did demos on a small yet very sophisticated manufacturing cell that really has a lot of
generalizations. We also looked at a couple of variants involving movement.
And in fact, this ends our module on Arena, even though I don't really want it to end. We're still going to see
Arena later on, and in fact we've only scratched the surface of Arena; there are so many other things that we
could do.
Next time, though, we're going to start an entire module on generating Unif(0,1) random numbers. Very easy
things, but very, very, very important, so make sure you get a little break and come back and see those
uniforms. It'll be well worth it, so see you then.

Lesson 27.2: A Manufacturing System Demo

I'll be doing a demo on this manufacturing cell. Let me just get the darn thing running. It's very pretty.
Yeah, let me slow down a little bit so you can see the different types of customers. In fact, let's do that setup and
make it quite a bit slower: run speed at zero. We're getting used to this by now.
You can see different types of customers, blue, red, and green marbles, going around the place, and they actually
have different paths that they're traveling.
The blue guys go to Cells 1, 2, 3, and 4 in that order. The red guys go, if I remember, 1, 2,
4, 2, 3 in that order. The green guys I think are 2, 1, 3. But you can see they're all moving around; they have
different sequences.
different sequences.
And here's the code.
I'm just going to do this in very, very high level language.
Parts are created, and we assign them their sequence here, along with their part type. There's a blue guy
right there; he goes to what's called a station, and he figures out what his sequence is going to be. The blue guys,
let's wait for a blue one, will go to Stations 1, 2, 3, and 4 in that order. There's a blue one;
see, right there, Station 1. He pops over; now he's going to go to Station 2, see, there. Then he's going to go
to Station 3, and then in a second he'll go to Station 4. Let's watch for him over here.
Come on, go to Station 4.
He'll show up in a second.
Come on, come on.
He really will; there might have been a line at Station 3.
There we go. So he goes to Station 4 and then he's out of there. So it's very, very nice, and the only thing you
need is this Sequence block and the use of sequences in this Route block. So the Route block says, where are we
going to go next? And it says, in plain English: route, in transfer time, by sequence.
In this case the transfer time is a variable equal to 2 minutes, so it takes 2 minutes to go to the next place in
your sequence. That's all you need to do, and all this stuff is
in the so-called Advanced Transfer template, which I've loaded here. You can see this is the Sequences
spreadsheet, and it's very, very nice. If I want the Route block, I just drag it over.
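The route-by-sequence logic just described can be sketched conceptually; the entity field names and this simple bookkeeping are invented for illustration, but the fixed 2-minute transfer time matches the demo:

```python
# Hypothetical sketch of routing "by sequence": the entity carries its own
# sequence and a current-step index; each Route hop advances the index and
# charges the fixed transfer time (2 minutes in the demo).
TRANSFER_TIME = 2.0  # minutes

def route_by_sequence(entity):
    """Send the entity to the next station in its sequence.

    Returns the next station, or None when the sequence is exhausted
    (i.e., the part exits the system)."""
    entity["step"] += 1
    if entity["step"] >= len(entity["sequence"]):
        return None
    entity["clock"] += TRANSFER_TIME
    return entity["sequence"][entity["step"]]

# Walk a part-type-2-style entity (path 1, 2, 4, 2, 3) through its hops.
part = {"sequence": [1, 2, 4, 2, 3], "step": 0, "clock": 0.0}
stops = []
while (nxt := route_by_sequence(part)) is not None:
    stops.append(nxt)
```

The key point the lecture makes is visible here: the Route logic itself is generic, and the entity's own sequence attribute decides where it goes next.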
Stations are just places that you go, so you can see that it's very easy to run this model.
It takes about an hour to learn how to do it, but that's for another time. The last thing I want to show you, then,
is that I can do movement involving a machine.
In this case it's a transporter, so see the little green thing there. This is just a guy that picks you up, like a
taxi cab or a cart. And let's run this.
See, he picks up a customer, drops him off at Cell 1, and then he's free.
And now he's not free anymore; he's going to go over and get the next guy.
When they move, they're not free.
He's going to go back here and wait for the next guy.
Pick some up, drop some off. Isn't that nice?

And just the last thing I'll show you: if I don't think that these are enough transporters, let me make
this a little faster. If I don't think that I have enough transporters, and maybe I don't, I can go to the Advanced
Transfer Transporter spreadsheet. Let's make four of them.
Isn't this easy?
And now we'll rerun it, and see, now I have four transporters moving around. So easy. And so I'll give you a
small assignment so that you can play around with this a little bit; you don't have to do the programming itself.
OK, well, that ends this last demo of the module. I hope you had a nice time looking at it.

Lesson 28: Advanced Transfer Panel


Hi class. I put together several bonus, extra-credit, very informal lessons on that manufacturing system that we
looked at at the tail end of the Arena module. What I'd like you to do is go through these things, learn a little bit
more about the manufacturing system, and get some extra credit along the way. So coming up I've got 5
lessons: a bunch of additional lessons related to that manufacturing example that we ended the Arena
discussion on.

In this lesson I'm going to talk about the Advanced Transfer panel modules that are related to movement.
There's a whole bunch of them; we won't talk about too many of them. In particular, we'll look at Station and
Route, and I'll mention Enter and Leave, which we'll talk about subsequently. The next lesson will deal with
so-called sequences of visitation locations.

So a guy comes into a store and he visits 4 different places in the store in some order; that would be a
sequence of visits to those different locations, and this is really, really useful functionality in Arena. Then I'll talk
about the concept of advanced sets. We all kind of know what sets are, like sets of resources, and we've done
those before; but advanced sets are made up of more complicated constructs. In particular, I will look at a set of
sequences, and that's so complicated, I guess, that they call it an advanced set. You find sets and advanced
sets in different places in Arena, which is a little confusing, but I'll show you where to go. Then I'll do a detailed
walkthrough of the manufacturing example; I did a quick one previously when we were doing the module, but
now I'll do a more detailed one.
And then after we do that walkthrough, I'm going to do some generalizations. It turns out that parts can move
around the manufacturing system in a variety of ways. For instance, you can grab somebody to walk with you;
you can do a Seize-Delay-Release on a resource to walk around with you; you can grab a transporter, like a car
or a bus; you can get on a conveyor; or, I suppose, you can walk around yourself too. So there are all different
ways of doing that: by themselves, by transporters, by conveyors. If you use the latter two ways to move around,
transporters and conveyors, I also need to show you how to construct a sort of network of the transporter and
conveyor paths. So lessons 4 and 5 are going to be fairly big demos as well. This lesson is concerned with the
advanced transfer functionality, and Arena devotes an entire panel related to movement, called the Advanced
Transfer panel. It looks like that; there's a whole bunch of stuff on it. We're going to look at mostly the pink stuff
for now, and then as we move on in these lessons we'll look at some of the other things too. In this lesson I'm
going to concentrate mostly on where you move.
For instance, you can pop from station to station. A station is just somewhere you go to, and there it is; you can
see the Station module outlined on the panel. Think of it like Star Trek: you teleport from place to place, and
those places are stations. Well, how do you get from station to station? We sort of know the concept of a
Connect line, but the easiest way is to do a Route, and a Route is very similar to Connect except it takes you
some time. You can say, well, it takes me 2 and a half minutes to get from A to B (I suppose you could use 0 as
well). The Route module, it's highlighted there, says OK, you're going to go to the following place, and it's going
to take you a certain amount of time.

Now, we're also going to have these other blocks called Enter and Leave, and they handle the route function but
in a much more general way. Enter and Leave are general ways to get into a station and to depart the station,
respectively. So let's say we're going to leave a station (we'll do the second one first). I could leave a station
under my own power, I could just walk out; I could grab a resource, like a person to walk with me; I could get on
a transporter, like a car; or I could get on a conveyor. You use different words for each: for instance, I would
seize a server, somebody to walk with me, and then eventually release them; you use different words to do that
for a transporter and for a conveyor, but we'll learn those in subsequent lessons. So you Leave the station, and it
tells you where to go, what your next station is going to be; and then when you get to that next station, you Enter
it. Of course, when you enter the station you may release your friend if he's walking with you; you have to get
out of the transporter; you have to get off the conveyor. There are certain words to do all that, so we'll cover
those a little bit later when we do the demos. Right now we'll concentrate mostly on Station and Route. So this
lesson is going to tell you about where you move; later lessons cover how you move, whether by yourself, with a
resource, or using a transporter or conveyor.
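One way to summarize the Leave/Enter pairing just described (purely illustrative, not Arena syntax; the action names are made up): whatever you acquire on the way out of one station has to be given back when you enter the next.

```python
# Illustrative sketch: each transfer mode pairs a "leave" action with the
# matching "enter" action that undoes it at the destination station.
TRANSFER_MODES = {
    "walk":        ("walk out",             "walk in"),
    "resource":    ("seize the escort",     "release the escort"),
    "transporter": ("request transporter",  "free the transporter"),
    "conveyor":    ("access the conveyor",  "exit the conveyor"),
}

def transfer_plan(mode):
    """Return the three-step plan for moving between stations via `mode`."""
    leave_action, enter_action = TRANSFER_MODES[mode]
    return [f"leave: {leave_action}",
            "travel to next station",
            f"enter: {enter_action}"]
```

The symmetry is the point: a seized resource must eventually be released, a requested transporter freed, an accessed conveyor exited.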

Right now I'm going to do a mini demo involving the Station and Route modules, and this is what it looks like.
You can see over here I create customers; they immediately go (this takes no time after being created) to the
station. They're just sitting at the teleportation station. While they're in the station, they kind of walk over here, in
no time, to this Route module, and the Route tells them where to go next: they're going to pop over to Station 2.
So it teleports them to Station 2; then they go over to the Decide block, and there's a 50-50 chance that they're
either Disposed of or they go to this other Route module, which pops them back over to Station 1. It's kind of
cool; it's really, really cool. Now, the only thing I'm complaining about here is that all these pink modules look
exactly the same, so I try to label them so that you know this is a Station, this is a Route, that's a Route, that's a
Station. In later versions of Arena, which are supposedly coming in a couple of months, they might change how
these things look, but right now they're just pink rectangles.

OK, let me escape out of here, and I'm going to go to Arena, which I previously set up. There's the template over
here, and here is the program itself, so let me just start it. What I'm going to ask you to do is watch the 1st guy
very carefully. He's going to go to Station 1 and immediately walk over to this Route block; this is a Route
module, not the station itself, and it just says route to Station 2. This takes no time because there's nothing
going on; he's just appearing at Station 1, and this instruction pops him over to Station 2. Now, this route takes a
little while; you know, just like in Star Trek, you don't materialize instantly, you've got to have your molecules
flow through space, right? So after a little while he goes over to Station 2, and then he hits the Decide block,
where he's either rerouted back to Station 1 or he gets Disposed. So let's hope this works. On this particular
version of Arena and this particular version of my operating system, I'm getting a very interesting phenomenon:
every once in a while there's a meaningless error when I start; I'm just warning you about that for the
subsequent lessons. Hopefully it'll work out perfectly. Here we go, so watch the 1st customer, ready? As
promised, there's the little error; ignore that error, and here we go. There's the 1st guy, the yellow guy. So he
routes to Station 2; boom, he's at Station 2, at the Decide block. Now he's going to route to Station 1... Station 1,
back to Station 2, boom, the Decide block, back to Station 1.

OK, now this is actually taking some time. Meanwhile, this blue guy has shown up; he gets routed over to
Station 2, and he's now Disposed. And there's the yellow guy; remember, he was sitting at Station 1 because he
had to get routed there. So you can see what's happening; it's all very, very nice. I wish we had different coloring
here; it seems like it's just the yellow. But look at this, he's going back and forth; this is so cool. So I'll have this
available on the website, and you can play around with it; I think you'll like it quite a bit. And there's a gray guy;
let's watch the gray guy: Station 2, the Decide block, oop, he's out of here, see you later, buddy. OK, the guy in
the black jacket, he goes right to Station 2, the Decide block; he's going to go back to Station 1, back to Station
2, and back... this is so cool, isn't it? We can only live with so much cool, so this ends the demo. In the next
lesson we'll talk about sequences, which are extremely important: they tell customers where to go, in order.
Okey doke, so we'll see you in a little while then.

Lesson 29: Sequences

Hi class. Here we are, already on our 2nd bonus lesson, talking about that manufacturing system in more detail.
In this lesson I'll talk about the concept of sequences of visitation locations. What are sequences? Let me go
back and remind you of one of the concepts we looked at in the last lesson, namely routes and stations and
things like that, and remind you what the simulation looked like. There's a guy moving around; he goes from that
station, goes to a Route module, pops to the station. I didn't go into detail on this last time, but let me just click
into these things and see what they look like.

Then we're going to generalize things greatly. So I click into the Create; nothing interesting there, Random
(Expo), they show up about once an hour. I mean, I didn't even bother changing those numbers. Then the
Station module; see, I pulled the Station module over from here. Click into that; nothing much. All that we have
here is just the name of the station, Station 1; we might need that for later on. This name that you see up here is
merely what you see in the pink background here: I could name it Joe and it would come out Joe here. But that's
not what I mean; the station name is the name of the station if I want to refer to it. OK, let's look at the Route.

The Route module: this (again, this is the name of the module) is saying it's going to take me EXPO(0.1) hours
to go to a station, and the station name is Station 2. So it takes about 6 minutes on average, EXPO(0.1) hours,
to pop over to here. Again, this station is just the name, nothing else; it's just the place to go. Then the Decide
block, that 50-50 like I promised; and then this Route block tells you to go back to Station 1, and it takes
EXPO(0.2) hours to do that. So: go to a station called Station 1. That's pretty easy. Well, now, what if things get
a little more complicated? So I'll go back to my slide show here.

I love the slide show concept. So now I'm going to look at sequences of visitation locations. As I alluded to,
each type of customer follows a different path, or sequence of visitation locations, as the customer goes through
the system. And again, let's go to the Advanced Transfer template: there's this spreadsheet right there called
the Sequence spreadsheet, where you define your sequences. That's what we're going to use. So a customer
comes in, and he's assigned a sequence, and we've predefined what those sequences are.

OK, so that's great: a sequence has a customer going from station to station. And it's quite general: not only
does it tell you where to go, but while you're defining the sequence you can also define certain properties,
things that the customer is going to encounter there. For instance, if the customer goes to the paint department,
we may say that at that paint department it's going to take 23 minutes, plus or minus 10 minutes, to get served;
and we can define that service time while we're defining the sequence. Then he goes to the lawn and garden
department and he spends half an hour there. So we can define all these things when we're defining the
sequence; we can define the service times, among other things. And what's kind of neat is that these service
times can depend both on the customer type and on whether or not he's been to that station already during his
visit to the store.

So, like, I could go to a department I really liked, I could go there 3 separate times, and have 3 different
distributions for the amount of time that I spend there. So cool; I'll show you that a little bit later. As an example,
in this manufacturing example I've got 3 types of parts, and I'll show you them in a minute. Part type 1's visit
Stations 1, 2, 3, 4 in that order; then they leave the system. So technically, I create them, they go to Station 1,
Station 2, Station 3, Station 4, then they leave the system. Part type 2's get created, then they visit Stations 1, 2,
4, back to Station 2, and then Station 3, then they leave. Now, it turns out the service times at the stations are all
different distributions; even the 2 visits to Station 2 have different distributions, and you'll see how we do that;
it's very easy. And then there's part type 3; they go 2, 1, 3, a little bit different order.

So like I said, each part type has different service times, different distributions, depending on the part type and
whether or not it's their 1st or 2nd visit to a particular station. OK, now let's define sequences and attributes; we
can kind of do that in the same place. In this case we have 3 different sequences, and here they are. I'm going
to call them Part 1 Process Plan, Part 2 Process Plan, and Part 3 Process Plan; I'm literally plagiarizing these
from the Arena guys, and this is what they call them. So my 1st sequence, for the part 1 customers, is going to
be called Part 1 Process Plan; that's the name of the sequence. For part 2 it's Part 2 Process Plan; for part 3,
Part 3 Process Plan. You'll notice that there is this thing called Steps.

I'll go through the demo showing how to do this. Part 1 Process Plan has 5 steps, 5 rows; Part 2 has 6; Part 3
has 4. I'll just tell you what they are before we even look at them. Part 1 Process Plan just means you get
created, you go to the cell location, server number 1, then 2, then 3, then 4, then you exit the
system; that exit is going to be the 5th row. Then for Part 2, let's see, what do you do? You go 1, 2, 4, 2, 3 and
then you leave, so that's 6 rows. See, that's pretty easy.
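Those three process plans, with the exit step counted as its own row, might be sketched as data like this (the data layout is a hypothetical illustration; the row counts of 5, 6, and 4 come from the lecture):

```python
# Hypothetical sketch of the three process plans from the Sequences
# spreadsheet, each ending with an explicit "Exit System" step, which is
# why the row counts are 5, 6, and 4 rather than 4, 5, and 3.
EXIT = "Exit System"
PROCESS_PLANS = {
    "Part 1 Process Plan": ["Cell 1", "Cell 2", "Cell 3", "Cell 4", EXIT],
    "Part 2 Process Plan": ["Cell 1", "Cell 2", "Cell 4", "Cell 2",
                            "Cell 3", EXIT],
    "Part 3 Process Plan": ["Cell 2", "Cell 1", "Cell 3", EXIT],
}

# One row per step, including the exit row.
step_counts = {name: len(steps) for name, steps in PROCESS_PLANS.items()}
```

Counting the exit as a step is the small trap here: a part visiting 4 cells needs 5 rows.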

Here, for instance, is the Part 2 sequence: you start out at Cell 1, that's the 1st service location; you go to Cell 2,
Cell 4, Cell 2, Cell 3, and then you exit the system. All these are station names; I just named them Cell 1, 2, 4,
2, 3. So you pop over to these places, and then once you define the sequence, every customer, every part, is
given his or her own sequence as a giant attribute, and Arena knows where you're going to go next depending
on what part it is. Cool. Now, right here, remember I've alluded to the fact that you could make certain
assignments of attributes. You can, so while I'm at it, I'm going to assign the service time over here: the service
time at Cell 2, the service time at Cell 3, and so on; and there's no service when you leave the system. I'll show
you how to do that in a second. You may say, why am I not going to assign a service time at Cell 1? Why are
there zeros here? Well, the honest answer is that the Arena people wanted to challenge you, and so they're
going to deal with the Cell 1 service times in a different way; don't worry about it, it's coming up. Right, so these
are the Cell 2 service times (oops, that went backwards), and here are the part type 2 process times at Cell 2.
So these are part type 2 process times, service times, at Cell 2. And when I click into that assignment, it says
I'm going to make an assignment of an attribute called Process Time, and it's going to be triangular(4, 6, 8).
When I get to the next place,

I'm going to reassign the attribute called Process Time to whatever the new value is, some other triangular
distribution. So every time I go to a new cell, a new service center, I'm going to reassign the attribute Process
Time. The service center is then going to look at Process Time and say, OK, this is how long the service is
going to take. And again, Cell 1, which is a source of unending aggravation for me: they handle the service
times there in a different way. Sorry, that's the way it goes.
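The per-step reassignment could be sketched like so. Only TRIA(4, 6, 8) for the first Cell 2 visit comes from the lecture; the other parameters are made up, and note that Python's `random.triangular` takes `(low, high, mode)` rather than Arena's TRIA(min, mode, max):

```python
import random

# Sketch of reassigning the Process Time attribute at each step of the
# part-type-2 sequence. Only TRIA(4, 6, 8) is from the lecture; the other
# parameter triples are hypothetical. Cell 1 is handled elsewhere, so its
# step carries no assignment.
PART2_STEP_TIMES = [
    None,        # Cell 1: no assignment here (handled a different way)
    (4, 6, 8),   # Cell 2, first visit: TRIA(4, 6, 8)
    (3, 5, 7),   # Cell 4 (hypothetical)
    (5, 8, 11),  # Cell 2, second visit: a different distribution (hypothetical)
    (2, 4, 6),   # Cell 3 (hypothetical)
]

def assign_process_time(step):
    """Draw a new Process Time for this step; None means no assignment."""
    params = PART2_STEP_TIMES[step]
    if params is None:
        return None
    lo, mode, hi = params                    # Arena-style TRIA(min, mode, max)
    return random.triangular(lo, hi, mode)   # note the argument reordering
```

This mirrors the lecture's key trick: the two visits to Cell 2 sit in different rows, so they can carry different distributions even though it's the same station.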

It's demo time, so let's run through this; you'll see it start to become a little bit tricky. Let me go over here; I'm
going to go to my... there we go: Basic Manufacturing. So this is what the thing looks like when we just started
out, and you can see what's going on here. This is the program here, and up here is what I think I showed you
in a previous lesson.

This is the graphical area; I will not go into huge detail on how they did the graphics. Let's follow along. We have
3 part types, 1, 2, and 3: the blue guys are part type 1's, the red guys are part type 2's, and the green guys are
part type 3's. They are running around at random speeds and random places; I'll tell you what it all means in a
second. First, I want to tell you something about the speeds. The speeds are a figment of our imagination. The
reason is that in this particular example I have defined all travel times between any 2 cells as 2, right? So it
takes you 2 minutes to go from place to place; it doesn't matter if you're going from Cell 1 way over here to Cell
4, or just from Cell 1 to Cell 2, it always takes 2 minutes. That's why it appears that sometimes one of these
guys will look as if it's going faster, or it's passing another guy: it's because it's always taking exactly 2 minutes.
See, there a 3 is passing... yes, a 3 passed a 2, because it always takes exactly 2 minutes to go from place to
place. That's what's going on; 3 is blowing by 1 right there. Great. So they're going from place to place, each
time in 2 minutes, and the cells are laid out 1, 2, 3, 4; you can see that. Let's watch the green guy. He goes to
Cell 2; he's getting worked on in Cell 2, come on. Then he's going to go over to Cell 1, getting work done in Cell
1, and then he's going to go over to Cell 3; he's waiting in line right now. And... I don't know, he's down here
now. There he is, right there; there's the green guy. I got him confused with another green guy; he's over here.
OK, come on, he's going to be done in a minute. Come on, come on; waiting, waiting, waiting.

A lot of stuff going on here. There he goes, and he's going to now leave the system, see, and out he goes. So
they're doing what they're supposed to: the customer went from 2 to 1 to 3 and then he left. You may notice that
Cell 3 actually has 2 servers; we'll get into that later. This is the new fast guy, and this is the old decrepit slow
guy right there. OK, so this is kind of cool, and you know the blue guys go from 1 to 2 to 3 to 4, and the red guys
go, what is that, 1, 2, 4, 2, 3, and then they leave.
So these are beautiful graphics. I didn't write these, but it would have taken at least a couple of hours; this is
tedious, to say the least. But whatever. OK, meanwhile, here's the program down here; it's a very easy program.
And the reason we're here is to show you how to do sequences. Now, I don't want to get into enormous detail
yet on how they're being used, but let's just say I create the parts here, they get assigned whatever their
sequence is here (I'll show you that in a second), and then they go to... this is a Route block, right? Let me show
you the Route block. It says, very, very interesting:

Wherever you go to next, it's going to take you the amount of time Transfer Time. Transfer Time... let me go to
Basic Process. I've got to stop the run; sorry, stop the run, stop. I'm going to go to Basic Process; I think it's a
variable. Yeah, Transfer Time is down here: Transfer Time is exactly 2 minutes. So I go to this Route module
and the transfer time is 2 minutes wherever we go. And then it says Destination Type. Now, remember, when
we did the previous example it was "Station," and we could go to lots of different stations: Cell 1, Cell 2, Cell 3,
Cell 4, I could go to all these different stations. But the problem is, where I go depends on the part type, right?
So I can't say I'm going to go to Station 1; I have no idea where the part is going to go, it depends on the part
type. The way around that is you go "By Sequence."

In a minute I'm going to show you how you define a sequence, and then, this is so great: the sequence is an
attribute, a giant attribute, of each part. So what happens when you get here? It says this guy wants to go By
Sequence; Arena looks up where he is in his sequence, and we just go there. It's so nice. If you're part type 1
and you get here, it's all done by sequence, and it says you're right here, you now have to go to Cell 1, and you
go to the Cell 1 station. Then you get served over here at Cell 1; this is a Process block, and you do a
seize-delay-release.

I won't go into detail just yet. Then you go to the next Route module, and again it's the same thing: transfer time
2 minutes, By Sequence. So if you're part type 1, you now know you're going to go to Cell 2 next. It's very, very
easy to use: you route to the proper cell, get served, route to the proper cell, get served, and I don't have to
hard-code going from 1 to 2 each time; it depends on the part type. If you're a green guy, you're going to start
here, then go to Cell 2, then Cell 1, then Cell 3.

So let's watch the process so you can see how this works; it's really quite nice. I'm going to move this over just
a little bit so you can see a little better... that didn't help. OK, I need a little flexibility to show this stuff here. OK,
so let's run this. There you go, this is the red guy, so he's going to go to Cell 1. There is Cell 1; he's going to get
served. Let's just watch the red guy. Seize-delay-release: he's getting served, getting served, getting served.
This may take a little while, but I want to follow him around; we've got all the time in the universe, right? Wait,
wait, wait, wait, wait. These other guys are already moving around, but... there's the red guy. Now he's going to
go to Cell 2, right there; he's going to get served here for a while, so he's sitting in here getting served. Thank
God there's not a lot of... there was a small line, but he's being served right now; the red guy is in here. There
he goes. Now where is he going to go? Cell 4, and he's going to get served there. And then he'll leave Cell 4; I
guess he's going to go to Cell 2 after he's done with Cell 4, but we're waiting for him to get done with Cell 4.
Isn't this great? Come on, come on.

You can see I planned this so well; I should have picked the green guy. So we wait for the red... there he is, and
boom, he's going to come back to Cell 2 in a second. See, it took literally 2 minutes to get there. He's going to
be served at Cell 2; we're very close to being done, and I'm glad you're being patient. There, and now he's
going to go to Cell 3. Let's wait for him over here for a second; he should pop up there after the 2-minute travel
time. There he is. Now he's going to be served at Cell 3; I think he's actually in line at Cell 3. If we looked up at
the graphics, you'd see him being in line. There he is, right there, being served at Cell 3. He's going to pop out
in a minute. This is cool; everything is just so logical. Hey, watch: he's going to go from Cell 3, and then he's
going to leave. OK, any minute now he's going to be done with Cell 3. Come on, we're waiting for you here.
(That was the green guy.)

There you go, see, he's done. And watch: this takes 2 minutes, and in 2 minutes he's going to leave. Come
on... there he goes, and that's how it works. OK, great, so we've seen this stuff in action. Now I've got to get you
through the confusing process, as I promised. Let me just go back very quickly to the slide. See, we've got to
set up these sequences: where do we do that? And then we have to look at the specific sequence, and then
we'll dive deep into what's going on in the sequence. OK, so back to Arena.

So let's go here: Advanced Transfer. The font's a little bit hard to see. Let's look at Sequences. OK, so here they
are, way down here at the bottom; let me drag this out for you. OK: Part 1 Process Plan, Part 2 Process Plan,
Part 3 Process Plan. So let me show you where we are in this; we're right in the middle here. Here are the 3
different sequences with their names defined... actually, we're in the right place, I lied. So here we are: Part 1
Process Plan, Part 2 Process Plan, Part 3 Process Plan. Now let me click into the Part 1 Process Plan.

And look at this: we go to Cell 1, Cell 2, Cell 3, Cell 4, then we exit the system. That's our sequence for part
type 1. And let me show you the assignment here: attribute Process Time, triangular distribution. Very nice. And
then when we go to Cell 3, again: attribute Process Time, triangular distribution. And the same thing for all of
these, except Cell 1, where I haven't made any assignments. If I wanted to, I could go in there, double click, and
make some assignments, just like you would normally do; but the Arena guys have chosen not to do that, so
there are 0 rows there. OK, great. And the same thing for part type 2: let me click here and you can see Cell 1,
2, 4, 2, 3, exit the system; that's the order you do it in. So that's very, very interesting; that's how you do these
sequences. OK, fantastic.

The reason I had kind of gone off into left field for a second is that I'm going to actually define Part 1, Part 2,
and Part 3; I'm going to define this thing as a giant vector, and it's called an advanced set. We'll do that in the
next lesson. So that's how we've done everything, and now you're ready to write your little program here. It's
really, really nice, and it proceeds as follows (I'll do the very careful run-through a little later): create, assign the
part's sequence (this is just a holding station), and then go to the Route block, which tells you where to go next.
If you're a blue guy, you go here, seize-delay-release, go to the next place, which might be Cell 2;
seize-delay-release, go to the next place. And you can see these modules, these little sequences, are pretty
much all the same: station, seize-delay-release (a Process), route; station, Process, route. And if you have a lot
of time (this would be for a lot of extra credit, which I won't really offer now), you could actually vectorize this
process: instead of having to write out the sequences of pink-yellow-pink blocks, I could literally do only one,
and then the parts would be identified by part type. I mean, really, really nice.

Now one brief aside. Because we have identified the sequences, all we have to do here is say what happens at Cell 1, what happens at Cell 2, what happens at Cell 3, and so on. You may say this is a lot of work to go through for such an example, and I'd say, well, you're technically right. Let me show you something — let's see, I've got an additional model that I didn't have set up properly, so I'm going to go through my class notes and show you the model we might think of doing first. I believe I brought this up when we were going through the notes to begin with — I think I have this one right; it's called the easy,

baby version. I showed this to you before, Ok? It looks like I did it with a Decide block — these percentages correspond to the different part types, so if you're part type 1 you go up here, and I've labeled the branches so you can see. The run is a little slow here; let's make it go a little faster — unfortunately, sometimes the speed leaves something to be desired.

Let's get there a little bit faster. So up on top are the part type 1's, in the middle are the part type 2's, and down here are the part type 3's. A part type 1 goes to Cells 1, 2, 3, 4 and then it leaves — that's what's going on here. A part type 2 goes 1, 2, 4, 2, 3 and then leaves; take my word for it, that's what's going on. There's a little bit more stuff here, but you'd say this is kind of easy, I could have done this right away — and I agree with you. There are two things I don't like about it, though. First, I'm getting separate queues in each place. You're not seeing the queues mix like they do in the more advanced version, which I'll return to — this one right here — where you'll definitely see queues with different colors forming, whereas the baby example sort of has a separate queue for each colored marble. The second thing I don't like about the baby example: suppose we had 1000 different part types going through 1000 different sequences — you'd have to write 1000 different rows of blocks. Instead, I would just need to define the 1000 sequences, which is very easy — I could practically do that with a spreadsheet — so it's way easier to avoid billions of blocks. This is the way to do it: you just say what happens to a generic customer at Cell 1, what happens to a generic customer at Cell 2, as opposed to making a separate schedule for each of the 1000 customer types.
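To see why the data-driven approach scales, here is a hypothetical sketch in plain Python (not Arena syntax) of the same design point: store each part type's visitation plan as data, and write one generic routing loop instead of a hard-coded run of blocks per type. The station names and the type-3 plan here are illustrative assumptions.

```python
# Hypothetical sketch (plain Python, not Arena) of the design point above:
# store each part type's visitation sequence as *data*, and write ONE
# generic routing loop instead of a separate block diagram per type.

SEQUENCES = {
    1: ["Cell 1", "Cell 2", "Cell 3", "Cell 4"],            # part type 1's plan
    2: ["Cell 1", "Cell 2", "Cell 4", "Cell 2", "Cell 3"],  # revisits Cell 2
    3: ["Cell 2", "Cell 1", "Cell 3"],                      # illustrative only
}

def route(part_type):
    """Walk a part through its sequence; return the stations visited."""
    visited = []
    for station in SEQUENCES[part_type]:
        visited.append(station)   # in Arena this would be a Route -> Station hop
    return visited

# Adding a 1000th part type means adding one row of data, not 1000 new blocks.
print(route(2))
```

The payoff is exactly the lecturer's point: the routing logic is written once, and scaling to more part types only grows the data table.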

Ok, so that ends this demo. In the next lesson I'll talk about a kind of hit-or-miss topic, advanced sets. It's actually not difficult at all — it will be a shorter, quicker concept — except for one nasty, nasty block right there: the Assign block, probably the worst one in the entire course. But we'll see. So that's our goal for the next lesson, and I'll see you, hopefully, in a few minutes.

Lesson 30: Advanced Sets


Hi class. This is the third of our bonus lessons, this one on the concept of advanced sets in the context of our manufacturing system example. So here we are: advanced sets. We already know what sets are — things like resource sets, which we used in the call center example, among other places. Now we'll talk about advanced sets, and what I'd like you to do is think of those as sets of things you wouldn't ordinarily

think about, and they're kind of interesting. In the particular example we're playing around with, we're going to look at sets of sequences — you can have a set of anything, but in particular we'll look at sets of sequences. Advanced sets are found in the Advanced Process panel, so we've got to go to Advanced Process — it's sort of a memory game to remember where each concept is found — and there it is: Advanced Set, see, right there. It's a spreadsheet, and it's used for unusual types of sets, for example sets of sequences. In fact, I think I've used advanced sets for one or two other things that I don't remember;

all I'm really familiar with is using them for sets of sequences. So think of a vector of different sequences — I'll give an example in a second. Our advanced set is going to be called Part Sequences — that's the name of the set — and it's going to consist of a number of elements. All sets consist of elements, right? Our elements are going to be sequences. So again, a set can consist of anything, but those anythings are the elements, and the elements here are sequences: Part 1 Process Plan, Part 2 Process Plan, Part 3 Process Plan. Remember, we saw those last time — those are the sequences corresponding to part types 1, 2, and 3. Very nice. So we have an advanced set called Part Sequences, and those are its elements. Now, to tell you the truth, I sort of alluded to this way back when we talked about resource sets: this set is, in some sense, ordered. A real mathematical set isn't ordered, but let's pretend this one is, so you can treat it like a vector.

So this advanced set called Part Sequences is a vector with 3 elements: Part 1 Process Plan, Part 2 Process Plan, Part 3 Process Plan — the sequences we looked at in the last lesson. So think of Part Sequences as a vector. In particular, Part Sequences element 2 is the Part 2 Process Plan sequence. Cool, very nice. And now it's demo time — we've seen this demo several times now, we're getting used to it; same program as before.
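As a quick illustrative sketch (plain Python, not Arena syntax), treating the advanced set as a 1-indexed vector of sequence names looks like this; the variable names are made up, and the pad at index 0 just mimics Arena's 1-based set indexing.

```python
# Hypothetical sketch of "Part Sequences(i)": a 1-based vector whose i-th
# element is the name of part type i's sequence. Index 0 is padding so the
# Python index lines up with Arena's 1-based set index.
part_sequences = [None,
                  "Part 1 Process Plan",
                  "Part 2 Process Plan",
                  "Part 3 Process Plan"]

part_index = 2                                # the entity's Part Index attribute
entity_sequence = part_sequences[part_index]  # element 2 of the set
print(entity_sequence)
```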

So what I'm going to do is show you where the advanced set is used and how we've defined the vector of part sequences, and then I'm going to go right over here — this is the nasty one, the nasty block I mentioned at the end of the last lesson. So here we go. I promised that it was in Advanced Process — you just have to know this goofy fact: Basic Process is where you find Set, but I guess because this one is so advanced, you have to go to Advanced Process to find Advanced Set. All right, so there you go, here's the Advanced Set spreadsheet. There's only one advanced set in this program, and it's called Part Sequences. They want you to say what kind of set it is — is it a Queue set? A Storage set? I occasionally

use queue sets, but don't worry about that for now. No, this is a goofball set — its type is Other. It's a set of sequences, and here they are. Remember, this is really a vector of sequences, and there are 3 rows.

They're called Part 1 Process Plan, Part 2 Process Plan, and Part 3 Process Plan — we already knew that — and so, for example, the second element is Part 2 Process Plan. Keep in mind that Part Sequences is a vector, each component of which is a sequence: Part 1, Part 2, Part 3, in that order. Ok, this is tricky — it took me a few tries to get it right. Now I'm going to do a partial walkthrough of this program, and then we'll sit back, have a coffee, and come back for the next lesson, where we'll go through the whole thing. So over here I'm creating parts. Let me click: a part shows up every 13 minutes on average, Expo(13) — I have no idea how they decided on 13, but they did. Now we go to the Assign block, and there's a lot of stuff going on here — this is the toughest block of the entire course, in my opinion. The first thing we do — I hope you can see — we've got a DISC here, the discrete distribution function, and what you're assigning here...
you're assigning here.

Oh no — ARENA went away. This happens every once in a while; I don't know why. I'm going to get ARENA to come back up, because I'm so far into this demo already. I'll let it boot up again — I don't want to waste too much time, so I'll walk you through it in the meantime. Remember, with DISC... you know what, I'll just pause. [Pause.] I'm coming back off the pause now that ARENA has booted up — I apologize for that. Luckily this is an informal extra-credit lesson. What I was going to say is that this Assign block, the most difficult of the entire course, actually caused me to reboot ARENA.

I'm generating parts of types 1, 2, and 3 using the DISC function — remember, the discrete distribution — and the part's type is stored in an attribute called Part Index. I don't know why they don't just call it Part Type, but Part Index corresponds to part type 1, 2, or 3. The probabilities: part type 1 has a 26% chance. For part type 2, remember the DISC arguments form the c.d.f., so it's 0.74 minus 0.26 — a 48% chance — and part type 3 has a 1 − 0.74 = 26% chance. So 26, 48, 26 are the probabilities of part types 1, 2, and 3, and instead of Part Type, for some reason,
they call it Part Index. All right, the next line right here is the toughest one in the course in my opinion but we
have sufficient background so that it is easy. Every customer in every ARENA program, deep in the
background, has an attribute called Entity.Sequence especially if you have defined a sequence. So every
customer has the ability to have a sequence associated with that and there's a canned word for
Entity.Sequence.
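Here is a small sketch (plain Python, not Arena syntax) of how DISC works with cumulative probabilities, as described above. The helper name `disc` is made up for illustration, but the (0.26, 1), (0.74, 2), (1.0, 3) pairs are the ones from the model.

```python
import random

def disc(pairs, u=None):
    """Mimic Arena's DISC(c1,v1, c2,v2, ...): pairs of (cumulative prob, value).
    Draw U ~ Uniform(0,1) and return the first value whose cumulative
    probability is >= U."""
    if u is None:
        u = random.random()
    for cum_prob, value in pairs:
        if u <= cum_prob:
            return value
    return pairs[-1][1]   # guard against floating-point roundoff

# Part Index = DISC(0.26, 1, 0.74, 2, 1.0, 3): the arguments are the CDF,
# so the individual probabilities are 0.26, 0.74 - 0.26 = 0.48, and
# 1 - 0.74 = 0.26, matching the 26 / 48 / 26 percent split above.
pairs = [(0.26, 1), (0.74, 2), (1.0, 3)]
print(disc(pairs, u=0.30))  # 0.30 falls in (0.26, 0.74] -> part type 2
```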

ARENA keeps track of everybody's Entity.Sequence. You're not allowed to call something else Entity.Sequence — this is an ARENA word that you're not allowed to change. Every customer has an Entity.Sequence. So what's this customer's sequence? Ok, well, I'm going to assign it. It's an attribute — a property of the customer — and the attribute is called Entity.Sequence; that's the ARENA name. Here's how the assignment works: we've already assigned the Part Index, which was done in the previous line — let me remind you, right here. These assignments go in order, so by the time you get to the second line, which I'm now editing, the Part Index was already determined. It was either 1, 2, or 3, with probabilities — what were they? — 26, 48, 26 percent. So by the time we get to this line we know the Part Index, and what do we do for the sequence? Remember, we've got this advanced set called Part Sequences — the customer gets part sequence number 1, number 2, or number 3, depending on the Part Index. And let me remind you where that's defined — not Advanced Transfer, I mean Advanced Process.

I go through the Advanced Process panel to the Advanced Set spreadsheet, and here we go, right here — remember Part Sequences. Part sequence number 1 is Part 1 Process Plan; if it's the second type of customer, Part 2 Process Plan; if the Part Index is 3, Part 3 Process Plan. So by the time you get to that second line in the Assign statement, the customer knows what its sequence is. Cool. And I'll just mention that the customer also knows its entity type — 1, 2, or 3 — and its picture — blue, red, or green; that's what the other assignment rows are doing. Ok, now we're ready to go. What I'd like you to do, if you didn't understand this stuff — what with that little ARENA crash in the middle of the lesson — is go back and take a look at it again. I think that by the second time through, you'll completely understand how we got here.

Ok, so at this point the part knows what its sequence is, and we are ready to rock and roll. In the next lesson I'm going to actually go through everything and show you how it works. Ok — see you in a while.

Lesson 31: Model Walkthrough

Hi class. At this point we're finally ready to do a detailed walkthrough of our manufacturing system — this will be nice. So here we are at stage 4 of our 5-part bonus series on the manufacturing system: the detailed walkthrough. Here's a description of the model, one last time. We have a manufacturing cell with 3 different types of parts — remember, the blue guys, the red guys, and the green guys. Each part type follows a different path, or sequence, through the system, and in fact different service times are needed at each station — different distributions depending on the part type and even on the place in the visitation sequence. So you could have a part that visits a particular station several times, and each of those visits has a different service-time distribution. Very nice. In general,

part type 2, for instance, is going to visit stations 1, 2, 4, 2, 3, in that order, before leaving the system, and each of those 5 stops has a different service time — even the two visits to station 2. The cell-to-cell movement requires the Advanced Transfer template, and we're going to be using the Route and Station modules; we'll look at the Enter and Leave modules later on, in the next (and last) lesson. We also need the Sequence spreadsheet, so I can define the sequences for part types 1, 2, and 3, and we also need advanced sets, because I've kind of vectorized the different sequences.

Service times are usually handled in the sequence definitions, where an attribute called Process Time is defined for each customer at each step in the process — except if they happen to be in Cell 1, and I'll show you what I mean by that when we do the walkthrough. There's no reason at all for that exception, except that the programmers at ARENA were having a bad day — simple as that; they just wanted to confuse you. We could have easily done it all in the sequences, but they wanted to define the service times for Cell 1 someplace else.

Ok, fine, whatever — we'll go with it. Also, Cell 3 happens to have 2 servers: an old slow guy and a young guy who's 20 percent faster. That's handled by a resource set of Old Guy and Young Guy, plus a fudge factor incorporating that 20 percent — and I'll show you how, since that's why we're doing the walkthrough. Here's what the model looks like — we've seen it several times already. The program is at the bottom and the graphics are at the top; these days the graphics would actually take me longer to make than the program. You can see we have separate lines forming in front of each cell — that's why you don't see lines down here in the program itself, although you can tell if there's a line because these numbers below the Process blocks keep track.

And here are the servers, and this is sort of the path by which the customers move around. Kind of neat — graphics take a long time. Anyway, it's demo time; just a second while ARENA cooperates... There we go, our old friend. I'm not going to bother showing you much of what's going on here; I'm going to walk you through the program so you can see how it runs — I just want to show the details. As we saw last time, I create the parts about every 13 minutes on average, Expo(13). Then this Assign block — the nastiest thing in the course — assigns the type of part: DISC, with types 1, 2, 3 having probabilities 26, 48, and 26 percent. Then it assigns the part's sequence, so you can be assured that if you happen — with probability 26 percent — to be a part type 1, you're going to get part sequence number 1. So if you're assigned part type number 1 (Part Index, it's called), you're going to go to stations, or cells, 1, 2, 3, 4 and then out of here. If you're part type 2, you go — what is it — 1, 2, 4, 2, 3 and then out of here. And if you're part type 3, I think you go to 2, 1, 3 and then out of here. The two other attribute assignments are for each customer's type and picture. Ok, now the first thing you do is go to the Order Release station — in the picture, that's just the starting place, that little thing there. It's just a station; nothing happens there, you just go there — poof — and move along. Then this is a Route block.

As we've seen before, it takes you a transfer time — which is 2 minutes, as you'll see in a second — and the next place you go is By Sequence. So whatever your sequence is determines your next step: if you're a type 1 customer you go to Cell 1, the station corresponding to Cell 1; if you're a type 2 customer, also Cell 1; if you're a type 3 customer, you go to Cell 2. Everything is determined By Sequence.

Ok, great. So here's the Cell 1 station — if that's the place you go to, poof, you're there, and nothing happens except a seize-delay-release on the Cell 1 process. There you go: seize-delay-release on the Cell 1 Machine, it's called; you want one of them. Nothing mysterious about that. Here's the mysterious thing — remember I said Cell 1 is handled differently. All that means is that the Cell 1 times also depend on the type of part you are, 1, 2, or 3, but instead of handling them through the sequence definitions (which I'll remind you of in a minute), they're handled through a vectorized expression. That's all. So when you're in Cell 1 and do your seize-delay-release, your service time is a vectorized expression; expressions are given right here in Advanced Process, and there we go — see, right there, Cell 1 Times. There are 3 rows of time expressions, one per part type: if you're a customer of type 1 you automatically get the first row, type 2 the second, and type 3 the third. I don't like it, because ARENA isn't consistent about this, but it's easy enough — I'm not going to dignify it with more respect. So that's how Cell 1's seize-delay-release is handled differently.

Ok, you're done; you go to a Route block that looks precisely the same as the other one — actually, it looks the same as all the Route modules ("route," "rout," however you say it). Just like every other one, you go to the next cell By Sequence, and it takes Transfer Time; the only thing that's different is the name that appears on the pink part of the block. Otherwise it's completely the same — this looks a lot like that, which looks a lot like that. You could vectorize things here too if you wanted to; I'm not going to. Some of you are probably wondering, what is Transfer Time again? Basic Process, Variable spreadsheet: Transfer Time — it's down here, 2 minutes. While we're here, there's another thing called Factor. This is another variable, and it has to do with Cell 3 — remember, Cell 3 has an old guy and a new guy. And I guess it must be the new guy then the old guy, in that order, because these are the fudge factors on the amount of time service takes: the new guy corresponds to 0.8, the old guy to 1.0 — the new guy is 20 percent faster. Keep that in the back of your minds: while I was here in the Basic Process Variable spreadsheet, I looked at Factor. It's a vector of two numbers, 0.8 and 1.0 — the fudge factors that determine how fast the new guy and the old guy are, in that order.

Ok, then let's say you go from Cell 1 to Cell 2. This is the Cell 2 station — great — then the Cell 2 process: seize-delay-release on Cell 2. And here's what we do for the delay — I love this — Process Time is an attribute that depends on what type of customer you are and where you are in your sequence, and you'll see it every place along the way. Here's Cell 3 — there's Process Time again, along with that Factor, which we'll talk about in a minute — and Process Time at Cell 4. Cell 1 does not have it, because they're doing that goofy vectorized expression, but Process Time, remember, is defined when we did the sequences, and I'll remind you of that. Sequences: Advanced Transfer, Sequence spreadsheet — there they are, the three of them, down here. Let's look at Part 2 Process Plan. Here they all are. Cell 1 — look at this — there's no assignment here, because remember, Cell 1 is misbehaving. Cell 2: assign the attribute Process Time, a triangular distribution. So the Process Time attribute is defined in the sequences at every step, except any time you happen to go to Cell 1. For all the other cells and all the customers, Process Time is updated every time you go to a new place; then the server looks at your Process Time attribute and says, Ok, we're going to work on this guy for Process Time. So the only thing that remains in this

quick walkthrough is what happens at Cell 3. Remember, Cell 3 has 2 servers, so you may recall from the call center example — when we had 2 different servers with different characteristics — that we have to appeal to a set of servers.

In this example the set is called Cell 3 Machines, and there are 2 of them. You might remember that resource sets are defined in Basic Process, in the Set spreadsheet — and look, there's Cell 3 Machines, a resource set: New Guy and Old Guy, in that order. We've done this before, just like in the call center example. And it's so nice — look at this. We go here, and the amount of time service takes is Process Time times Factor(Machine Index). And what is the Machine Index? Well, we do the seize-delay-release, and in plain English it says: I want to seize one of the machines from the Cell 3 Machines set — there are 2 of them, the new guy and the old guy. I want one of them, chosen cyclically — there's no priority; I don't want to make the new guy work more than the old guy, just alternate. And whoever it is — the new guy is number 1, the old guy is number 2 — I store that number in the attribute Machine Index. If it's 1, Factor(1) is 0.8; if it's 2, Factor(2) is 1.0. That's my fudge factor for the process time, so if I'm lucky and it's the new guy, the service time is really Process Time times 80 percent. Excellent — and we are done. Once we do this seize-delay-release, we go to the Route block, it routes to the Exit System station, and you are gone, you are out of here. That's the program — wonderful. Let's just get one last fleeting look at how the thing works... isn't that nice, everything moving all the way around.
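A minimal sketch (plain Python, not Arena syntax) of the Cell 3 logic just described: the Cyclical selection rule alternates between the two servers, and the stored machine index picks out the speed factor. The function and variable names are made up for illustration.

```python
from itertools import cycle

# Sketch of Cell 3's two-server setup: servers are picked cyclically, and
# the stored Machine Index selects a speed factor (0.8 = the 20%-faster
# new guy, 1.0 = the old guy) -- the fudge-factor idea from the lecture.
FACTOR = [0.8, 1.0]            # index 1 -> new guy, index 2 -> old guy (1-based in Arena)
server_picker = cycle([1, 2])  # mimic Arena's "Cyclical" selection rule

def cell3_service_time(base_process_time):
    machine_index = next(server_picker)            # which server got seized
    return base_process_time * FACTOR[machine_index - 1]

print(cell3_service_time(10.0))  # new guy: 10 * 0.8 = 8.0
print(cell3_service_time(10.0))  # old guy: 10 * 1.0 = 10.0
```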
Ok, we've got one more lesson left, where I'm going to do a couple of nice generalizations that I'll walk you through — and if you like this stuff, it's just so much more powerful to add in different types of transportation. Ok, we'll see you in just a little while. Thanks a lot.

Lesson 32: Transporters and Conveyors


Hi class. Here's our 5th bonus lesson, where we're going to generalize what we can do with our manufacturing system. This is going to consist of transporters and conveyors and even some other stuff. So here we are, the big scheme of things: parts can move around in a variety of ways — by themselves, with a helper (a resource), by transporters, by conveyors — all sorts of different things you can do.

In the latter cases we might require construction of a transporter or conveyor path system, but it's very, very doable. Other ways to move: we're going to look at the Enter and Leave modules for station-to-station moves. I've talked about these in passing previously, and finally, after all this time, we're going to actually look at them. They're located, of course, in the Advanced Transfer template: Enter, Leave, Station — those are the pink guys (you can ignore PickStation for now) — and we've already talked about Route. Enter and Leave generalize Route quite nicely. So, the different ways you can travel:

By a resource — that would be like an assistant, somebody helping you: you seize the resource, do something (a delay), and eventually release it. Seize and Release — that's what you do with a resource. Or you could use a transporter — for example a cart, or even maybe a wheelchair sometimes, I'll say — that's something you have to get into, which takes a little time, and there it is on the template. Instead of saying Seize and Release, you Request and eventually Free a transporter. This is the language of ARENA, but it's just what you have to do: Request and Free a transporter is the analogue of seizing and releasing a resource. They're really trying to emphasize that resources and transporters are different, so they go out of their way to use different terminology.

Transports require a distance set so I'll show you how to define that this in seconds you have to know how far
does the car have to travel between points Ok so you need a distance set that's a spread sheet which you can
see on the template there finally we have a conveyor conveyor is a very very very specialized transporter so
specialized You know because it moves at the same speed in stuff that it gets a separate name there it is. And
it has separate terminology so to get on and get off the conveyor you access and exit those and that's the
arena language for that it requires a so-called segment set and there it is on the template

Ok, so I'm just going to run through some demos now — the same manufacturing example with these augmentations — and I'll leave it to you to play around with these things. Demo time, all right. And I'll just give a quick summary of these last lessons: we went through, in much more detail, this very small but sophisticated manufacturing cell, along with some of the variants involving movement. When I'm done with the demo, that completely ends our module on ARENA — even though there's plenty more stuff you can look at; it's really fun to go explore the different aspects.
Ok, so now let's go. This last time I want to go through a bunch of different model augmentations, and this first one actually looks kind of the same as all the stuff we've been looking at recently. If I scroll down here, it looks exactly the same — everything's pretty much the same. Not quite, by the way: if you look very carefully you'll see little things flying across here. I can't reproduce those very well — I tried, I just can't — but there are little customers flying around; I think it has to do with how time is passing in the main graphics, which prevents us from easily seeing what's going on down here. Don't worry about it — let me stop this. And this looks exactly the same: same Create, same Assign, same Station. Everything's the same except right here.

Start Sequence — remember, that was a Route block — and now, all of a sudden, it's this big, huge, gigantic thing: a generic Leave block, the Leave module from Advanced Transfer, one of the subjects of this lesson. I'll be talking about it in some detail as we go through the different ways you can move around. So, for instance, we use a Leave to get out of a module and go someplace else. It takes me no time to leave — pretend I'm grabbing a friend; it takes me no time to grab the friend as long as he's available, I just grab him. So I seize a resource — that's all I'm doing, seizing my friend; that's how I transfer, seize a friend. While I'm here: I could do nothing, if I'm just walking by myself; I could Request a transporter — remember, that was the language I used; or I could Access a conveyor. So I can seize my buddy (seize a resource), request the bus (request a transporter), or access — get onto — the conveyor. Right now I'm going to seize the resource.

Ok, now it turns out this resource is called Transfer — that's the name of the resource. The transfer guy may not be ready, so I may have to stay in a queue; that happens to be the queue name there — I don't know why they named it that. The Resource Type field is Resource, as opposed to a set of resources or an attribute — I almost never change it. So the Resource Type is Resource, the name of the resource is Transfer, and the Connect Type is Route. I could also use Connect — that's if I'm just walking by myself.

A Route takes some time, and usually when you grab a resource you route over with them; the transfer time is 2 minutes. If I'm going on a conveyor, the Connect Type is called Convey, and if it's a transporter, it's called Transport. There are a lot of words here; you get used to them. I'm not going to ask you trivia questions on this on a test — I might ask in the informal self-assessment we'll do afterwards. Ok, so that's how you get a resource, and that guy's going to walk with you over to the next station. And this next one is not quite a Station block — it's an Enter. An Enter is sort of a Station plus how you get off the conveyor, or get rid of the friend — it's more than just the station; it's also how you leave behind the thing you traveled with. So you Enter Cell 1 — that's the station — and you just throw the friend away; that doesn't take any time, and then you release the friend. There you go.
That's easy, right? So it's basically seize, travel, a little delay, and then release — that's all that happens here, and it runs exactly the same as using the Route block; you can't see any difference, which is nice. All right, perfect — that's how you handle Leave and Enter with a resource. Now let's do a transporter. This is like getting on a little cart; it looks a lot more complicated, and I had to draw a separate transporter network. Let me go back for one second... I'll keep it like this. So I have a separate transporter system, and let's watch what happens here — ignore that.

These little green things — the bane of my existence — these are the buses, the transporters we're seeing. They grab somebody and put them inside — isn't that nice? So this guy's waiting there at Cell 1 for his cart to show up — isn't that cool? And they go around, and when they're empty they usually return here and pick up the next guy. Isn't that neat? This transporter is called Cart — C-A-R-T — isn't that nice? Brilliant: they're moving around, getting people, dropping them off. The people are still following the same sequences as before, but now they need a transporter. Isn't that nice?

Let me stop it — I want to show you a couple of things here. These horizontal queues are the lines for customers waiting to get served; these vertical ones are the lines for customers waiting to get picked up and transported to the next place — you should just know that. And the blue thing is the little movement network for the cart; it takes a little more work because of all the places the cart can go, as you'll see in a minute. Everything else is the same.

The Leave block is different — see, the Leave again. Now remember, you're going to transfer out by requesting the transporter. The transporter's name is Cart; the selection rule here just means you get the cart that's closest to you, and you remember its number. The Connect Type — how you get to the next place — isn't Connect or Route; it's called Transport, and you go By Sequence as usual. This delay up here is the amount of time it takes you to get into the cart — it takes a few seconds.

Ok, and you go to the next place — let's pretend it's Cell 1 — and you Enter the cell, which means you leave the cart. See: Free Transporter — Free is the word — free the transporter called Cart, and in particular free whichever numbered cart you were on. Why do we have a cart number? You'll see why in a minute — you can have more than one cart, simple as that. Then the delay for getting out is 0.25 minute, and everything else is the same as before. So the only thing I haven't shown you is how to define a cart — how to define this mysterious network and the travel times. You may have noticed there's no mention of travel time in these blocks; that's because the cart moves along the network at its own speed, so we have to define that. Where do you think? Let's go to Advanced Transfer. Where do you think we define the transporter? Well, in the Transporter spreadsheet — there we go, look at that: Cart, and there are 2 of them; we have 2 carts. Free Path means they can kind of pretend to pass each other, and they go 50 units per minute.

And what do units per minute mean? Well, the cart has to go from point to point based on the distance, at 50 units per minute. But in order to get from A to B, we have to know how far apart A and B are, and we're going to have to define that in a second. Let me just say, if I wanted to, I could make 5 carts — type 5, all right, I made 5 carts; you'll see them in the graphics in a minute. The cart distances — the distances between all the points of interest — where do you think you find those? Well, go to the Distance spreadsheet — what a surprise — and there's a set called Cart Distance — what a surprise — a set of all the interesting distances.

We click on the 25 rows. It's a little tedious sometimes, but not too bad: a tedious but necessary listing of how to go from place to place. Order Release to Cell 1 has a distance of 37 units, and since the cart travels at a velocity of 50 units per minute, I can get from Order Release to Cell 1 in about three-quarters of a minute. If I want to go from Cell 4 to Exit System, you have to travel almost all the way around the loop; that takes 118 distance units, which is about 2 and a half minutes. If I want to go from Cell 1 to Order Release, which is again all the way around, that's 155 distance units, a little over 3 minutes. So there are 25 rows here.

Let's see. For instance, you don't see Cell 1 going back to Cell 1. There are 6 stations, so I would have thought you'd have 6 times 5, or 30, different station pairs, but there are a couple you just never travel between. Exit System, for instance: I never go from Cell 1 to Exit System, so why bother with that distance? When you put in all the distances you actually need, there are only 25 of them. Alright, so that's how that works. Let me run this again, because I now have 5 carts, and I'll create more parts, one every 3 minutes; I just want to show you how it looks, kind of cute, with all the carts running around.

So here we go. Right now I've got a lot of carts, 5 of them running around, which is kind of neat. Now I'm going to show you one last thing: conveyors. These are really fun. Go to the conveyor window. This looks the same as the transporter model, except I don't have the greenish-yellowish or blue network; I've made my own little red network. It's not as fussy, because I don't have to make as many paths: it's just one connected network. Again I've got separate queues for the customers waiting to get served and for the customers waiting to get on the conveyor, because there's only so much space on the conveyor. Ok, great, let's look at the program. Of course I have to pause for a minute; I am back after another Arena-crash-related pause.

Here is the conveyor, and I want you to watch: if you have multiple customers traveling, take a second to see that they all appear to be moving at the same speed. It may look like the speed changes, but while you have 2 customers traveling, whatever their speed is, it's the same, because the conveyor has only one speed. So let's dig a little deeper. Similar to the transporter, we're going to play around with the Leave and Enter blocks, and then I'll show you how to define a conveyor in the Advanced Transfer template.

Let's click on this Leave block. To transfer out, we don't Seize a resource or Request a transporter; we Access the conveyor. The conveyor is called Loop Conveyor, and "# of Cells" is the number of spaces you take up on the conveyor. I won't go into detail here, because I confuse myself when I do, but there's a finite amount of room on the conveyor, and the number of cells has to do with that; if there's not enough space, you have to sit in the queue for a little while. So the conveyor's name is Loop Conveyor, you Access it, and the amount of time it takes to get on is called Load Time.

Just out of curiosity, let me look that up. Load Time is a variable, I think; I might be wrong. Load Time, there it is, and it's equal to 0.25. Funny that they even bothered specifying it here; Unload Time is also 0.25, and we'll need that in a minute. Ok, let me go back to the program. The Connect Type is called Convey; that's how you go from place to place, By Sequence, not a surprise. When we get to, say, Cell 1: the station is called Cell 1, this is an Enter block, and you Exit the conveyor, which is how you get off. The unload time was 0.25 minutes or whatever it was, and the conveyor you leave is called Loop Conveyor, which is in fact the only conveyor in the system. Alright, we'd better define Loop Conveyor. Where do you think you do that? Right here on the Conveyor spreadsheet, and what do you know, there it is.

It's called Loop Conveyor. Now we also have to define the segments of the loop conveyor. This one is non-accumulating. I'm a little embarrassed, I always mix these up: one type is like an escalator, where you just keep on going and there's never a traffic jam; the other, the accumulating conveyor, I believe is the one that allows a traffic jam. I'm not exactly sure about that, so I'll put some information in the notes at some point. Then you have information about the cell size, the number of cells occupied, and so on, which lets you define the size of the conveyor, and then you have the velocity, which determines how long it takes to move around the loop conveyor. So the conveyor's

name is Loop Conveyor, and the segment, which is sort of the path, is called Loop Conveyor Segment. Let's look that up; this gives me all the segments of the loop conveyor. I start at Order Release, the beginning station, and it's 24 units to Cell 1. I remember from before that the conveyor moves at 20 units per minute, so that leg takes about 1.2 minutes. Then from Cell 1 to Cell 2 is 39 units, Cell 2 to the next station is 21, and so on around the loop. When you add them all up, you get the total length of the conveyor system. This is very nice.
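Because the conveyor has a single velocity, the time to traverse any stretch is just the sum of the segment lengths divided by that velocity. A small sketch of the idea (only the first few segment lengths are quoted in the lecture; the closing segment length below is a made-up placeholder):

```python
# Conveyor traversal time: sum of segment lengths / conveyor velocity.
VELOCITY = 20.0  # units per minute, as stated for the loop conveyor

# (from_station, to_station, length). The 24, 39, and 21 are from the
# lecture; the final 40 is a hypothetical length to close the loop.
segments = [
    ("Order Release", "Cell 1", 24),
    ("Cell 1", "Cell 2", 39),
    ("Cell 2", "Cell 3", 21),
    ("Cell 3", "Order Release", 40),  # hypothetical closing segment
]

total_length = sum(length for _, _, length in segments)
print(f"First leg:        {24 / VELOCITY:.1f} minutes")        # 1.2
print(f"Total loop:       {total_length} units")
print(f"Full loop time:   {total_length / VELOCITY:.1f} minutes")
```

With these numbers the first leg takes 24/20 = 1.2 minutes, matching the lecture; the full-loop time depends on the placeholder segment, so treat it as illustrative only.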

Ok, let's just see this one more time. I'll add some more customers, one every 3 time units, so you can see how much more congested it gets, and then we're done with the demo, and you guys can become quite nice consultants. I'm just generating a lot of customers here, which is kind of neat, although of course there's a bottleneck in that the server can only serve so quickly. But I think it's cool that you can build such an arbitrary program very quickly. Ok, well, that's a good place to stop. We'll evaluate you with a bunch of very easy bonus homework questions that you can get extra points for, and let me know if you have any comments, of course. So thanks a lot, good luck on the final, and I hope to see you in another class sometime. Bye bye.
