Conc Lect06 PDF
Slide 2 of 63
erl +P MaxProcesses    % the +P flag sets the maximum number of simultaneously existing processes
Conc_Lect06.nb 3
Slide 3 of 63
A Simple Timer
-module(simple_timer).
-export([start/2, cancel/1]).
Below, command 3 is typed quickly enough that the second timer is canceled while it is still running:
1> Pid=simple_timer:start(5000,fun() -> io:format("timer event~n") end).
<0.33.0>
timer event
2> Pid1=simple_timer:start(25000,fun() -> io:format("timer event~n") end).
<0.35.0>
3> simple_timer:cancel(Pid1).
cancel
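The body of the module is not shown on the slide; a minimal sketch consistent with the export list and the session above could be:

```erlang
-module(simple_timer).
-export([start/2, cancel/1]).

%% Spawn a process that waits Time milliseconds, then runs Fun.
start(Time, Fun) ->
    spawn(fun() -> timer(Time, Fun) end).

%% Sending the atom cancel stops the timer before it fires;
%% the send expression itself returns the message, which is why
%% the shell prints cancel above.
cancel(Pid) ->
    Pid ! cancel.

timer(Time, Fun) ->
    receive
        cancel -> void
    after
        Time -> Fun()
    end.
```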
Slide 4 of 63
receive - summary
• Each process in Erlang has an associated mailbox.
• send sends a message to the mailbox of the process and not to the process itself.
• receive tries to remove a message from the mailbox.
• The mailbox is examined only when the program evaluates a receive statement:
receive
    Pattern1 [when Guard1] -> Expressions1;
    Pattern2 [when Guard2] -> Expressions2;
    ...
after
    Time -> ExpressionsTimeout
end
Slide 5 of 63
receive - summary -2
1. When the program enters a receive statement, a timer is started (but only if an after section is
present in the expression).
2. Take the first message in the mailbox and try to match it against Pattern1, Pattern2, and so
on. If a match succeeds, the message is removed from the mailbox, and the expressions following
the pattern are evaluated.
3. If none of the patterns in the receive statement matches the first message in the mailbox, then
the first message is removed from the mailbox and put into a "save queue." Then the second
message in the mailbox is tried. This procedure is repeated until a matching message is found
or until all the messages in the mailbox have been examined.
Slide 6 of 63
receive - summary - 3
4. If none of the messages in the mailbox matches, then the process is suspended and will be
rescheduled for execution the next time a new message is put in the mailbox. Note that when a
new message arrives, the messages in the save queue are not rematched; only the new message is
matched.
5. As soon as a message has been matched, then all messages that have been put into the save queue
are reentered into the mailbox in the order in which they arrived at the process. If a timer was
set, it is cleared.
6. If the timer elapses while we are waiting for a message, the expressions ExpressionsTimeout
are evaluated and any saved messages are put back into the mailbox in the order in which they
arrived at the process.
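The save-queue mechanism in step 3 is what makes selective receive work. A small sketch (module name and message tags are illustrative) that collects all {alarm, X} messages before any other messages, regardless of arrival order:

```erlang
-module(prio).
-export([collect/0]).

%% First drain every {alarm, X} message: while the alarm pattern is
%% tried, non-matching messages are parked in the save queue and put
%% back afterwards.  "after 0" fires only once the whole mailbox has
%% been scanned without a match.
collect() ->
    receive
        {alarm, X} -> [{alarm, X} | collect()]
    after 0 ->
        collect_rest()
    end.

%% Then collect whatever remains, in arrival order.
collect_rest() ->
    receive
        Any -> [Any | collect_rest()]
    after 0 ->
        []
    end.
```

Sending {print, 1} and then {alarm, 2} to a process running collect/0 yields [{alarm, 2}, {print, 1}]: the alarm message is matched first even though it arrived second.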
Slide 7 of 63
Registering Processes
• The problem - to send a message to a process, you need to know its pid.
• Pro - this is secure; only processes that have been given the pid can send messages to it.
Slide 8 of 63
Slide 9 of 63
Slide 10 of 63
Example - Clock
-module(clock).
-export([start/2, stop/0]).
start(Time, Fun) ->
    register(clock, spawn(fun() -> tick(Time, Fun) end)).

stop() -> clock ! stop.

tick(Time, Fun) ->
    receive
        stop -> void
    after Time ->
        Fun(),
        tick(Time, Fun)
    end.
</tick>
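A hypothetical shell session with the clock (register/2 returns true, so start/2 does too; stop/0 returns the stop message it sends):

```erlang
1> clock:start(1000, fun() -> io:format("tick~n") end).
true
tick
tick
2> clock:stop().
stop
```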
Slide 11 of 63
• A process creation is always successful; thus, errors are detected within the spawned process.
• An error in a spawned process is detected by the error logger (part of the runtime system).
2> processes().
[<0.0.0>,<0.3.0>,<0.5.0>,<0.6.0>,<0.8.0>,<0.9.0>,<0.10.0>,
<0.11.0>,<0.12.0>,<0.13.0>,<0.14.0>,<0.15.0>,<0.16.0>,
<0.17.0>,<0.18.0>,<0.19.0>,<0.20.0>,<0.21.0>,<0.22.0>,
<0.23.0>,<0.24.0>,<0.25.0>,<0.26.0>,<0.27.0>,<0.31.0>]
Slide 12 of 63
3> i().
(output truncated; only the last lines of the process table are shown)
gen_server:loop/6 9
<0.27.0> supervisor:kernel/1 233 58 0
kernel_safe_sup gen_server:loop/6 9
<0.31.0> erlang:apply/2 987 14591 0
c:pinfo/1 50
Total 42776 379986 0
218
ok
4>
Slide 13 of 63
Slide 14 of 63
The Scheduler - 1
• Notice the Reds column in the result of the i() command - it shows reductions.
• Each command (i.e., function call, BIF, arithmetic operation) is assigned a number of reduction steps.
• The VM uses this number to measure the activity level of the process.
• When a process is dispatched, it is assigned a number of reductions it is allowed to execute.
• This number is decremented for every executed operation.
• As soon as a process enters a receive clause where none of the messages matches, or its reduction count reaches zero, it is preempted.
• As long as BIFs are not being executed, this strategy results in a fair (but not equal) allocation of execution time among the processes.
Slide 15 of 63
The Scheduler - 2
• BIFs - all mathematical operations, for example, have the same number of reductions assigned to them.
• BIFs such as lists:reverse and lists:member, known to vary in execution time based on their inputs, interrupt their execution (a trap) and bump up the reduction counter. Generally, new BIFs added to Erlang are implemented with traps by default.
• A process can increment its own reduction counter using the BIF erlang:bump_reductions(Num).
• erlang:yield() can be used to preempt the process altogether. Using yield/0 in the standard symmetric multiprocessing (SMP) emulator will have little, if any, effect.
You should not make the behavior of the scheduler influence how you design and program your
systems, as this behavior can change without notice in between releases. Knowing how it works,
however, might help explain observations when inspecting and profiling your system.
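Reduction counts can be observed directly with the standard process_info/2 BIF; a small sketch (module and function names are illustrative):

```erlang
-module(reds).
-export([count/1]).

%% Return roughly how many reductions evaluating Fun() costs in the
%% calling process.  process_info(self(), reductions) returns
%% {reductions, N}, the total executed so far by this process.
count(Fun) ->
    {reductions, Before} = process_info(self(), reductions),
    Fun(),
    {reductions, After} = process_info(self(), reductions),
    After - Before.
```

The exact numbers are release-dependent, in line with the warning above, but they make the relative cost of two pieces of code visible.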
Slide 16 of 63
go() ->
    Pid = spawn(echo, loop, []),
    Pid ! {self(), hello},
    receive
        {Pid, Msg} ->
            io:format("~w~n", [Msg])
    end,
    Pid ! stop.

loop() ->
    receive
        {From, Msg} ->
            From ! {self(), Msg},
            loop();
        stop ->
            true
    end.
Slide 17 of 63
Slide 18 of 63
Slide 19 of 63
(Message-sequence diagram: go/0 spawns the echo process, sends it {self(), hello}, and blocks in receive on {Pid, Msg}; the Msg bound to hello is printed with io:format/2, and stop is then sent. In loop/0, the receive matches stop and the function returns true.)
Slide 20 of 63
loop() ->
    receive
        {From, Msg} ->
            From ! {self(), Msg},
            loop();
        stop ->
            true
    end.
Slide 21 of 63
1> c(echo2).
{ok,echo2}
2> echo2:go().
hello
ok
3> whereis(echo).
<0.38.0>
4> echo!stop.
stop
5> whereis(echo).
undefined
6> regs().
** Registered procs on node nonode@nohost **
Name Pid Initial Call Reds Msgs
application_controlle <0.6.0> erlang:apply/2 436 0
code_server <0.18.0> erlang:apply/2 188975 0
erl_prim_loader <0.3.0> erlang:apply/2 581247 0
error_logger <0.5.0> gen_event:init_it/6 226 0
file_server_2 <0.17.0> file_server:init/1 512 0
global_group <0.16.0> global_group:init/1 59 0
global_name_server <0.12.0> global:init/1 50 0
inet_db <0.15.0> inet_db:init/1 262 0
init <0.0.0> otp_ring0:start/2 2418 0
kernel_safe_sup <0.27.0> supervisor:kernel/1 58 0
kernel_sup <0.10.0> supervisor:kernel/1 34392 0
rex <0.11.0> rpc:init/1 35 0
standard_error <0.20.0> erlang:apply/2 9 0
standard_error_sup <0.19.0> supervisor_bridge:standar 41 0
user <0.23.0> group:server/3 36 0
user_drv <0.22.0> user_drv:server/2 1411 0
Slide 22 of 63
Timeouts
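Two common uses of the after clause are a sleep function and a mailbox flush; a sketch (module name is illustrative, the functions are standard Erlang idioms):

```erlang
-module(timeouts).
-export([sleep/1, flush/0]).

%% Suspend the calling process for T milliseconds: a receive with no
%% patterns can only return through the after clause.
sleep(T) ->
    receive
    after T -> true
    end.

%% Empty the mailbox: a timeout of 0 gives up as soon as no message
%% is left to match.
flush() ->
    receive
        _Any -> flush()
    after 0 -> ok
    end.
```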
Slide 23 of 63
Process Skeletons - 1
• There is a common pattern to the behavior of processes, regardless of their particular purpose.
• Processes have to be spawned and their aliases registered.
• The first action of a newly spawned process is to initialize the process loop data.
• The loop data is often the result of arguments passed to the spawn BIF and the initialization of the process.
• It is stored in a variable we refer to as the process state. The state is passed to a receive-evaluate function, which receives a message, handles it, updates the state, and passes it back as an argument to a tail-recursive call.
• If one of the messages it handles is a stop message, the receiving process will clean up after itself and terminate. This is a recurring theme among processes that we usually refer to as a design pattern, and it will occur regardless of the task the process has been assigned to perform.
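The generic pattern above is often written as a skeleton like the following (a sketch; initialize/1, handle/2, and terminate/1 are placeholders for the task-specific code):

```erlang
-module(skeleton).
-export([start/1, stop/0, init/1]).

start(Args) ->
    register(skeleton, spawn(skeleton, init, [Args])).

stop() ->
    skeleton ! stop.

%% First action of the new process: build the loop data (the state).
init(Args) ->
    State = initialize(Args),
    loop(State).

%% Receive a message, handle it, and pass the updated state to a
%% tail-recursive call; stop triggers cleanup and termination.
loop(State) ->
    receive
        stop ->
            terminate(State);
        Msg ->
            NewState = handle(Msg, State),
            loop(NewState)
    end.

initialize(Args) -> Args.        %% placeholder
handle(_Msg, State) -> State.    %% placeholder
terminate(_State) -> ok.         %% placeholder
```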
Slide 24 of 63
Process Skeletons - 2
Slide 25 of 63
So, even if a skeleton of generic actions exists, these actions are complemented by specific
ones that are directly related to the specific tasks assigned to the process.
Slide 26 of 63
Concurrency-Related Bottlenecks - 1
• Processes are said to act as bottlenecks when, over time, they are sent messages at a faster rate than they can handle them, resulting in large mailbox queues.
• Processes with many messages in their mailbox behave badly in two ways:
1. The process itself, through a selective receive, might match only a specific type of message.
If the message is the last one in the mailbox queue, the whole mailbox has to be traversed
before this message is successfully matched. This causes a performance penalty that is often
visible through high CPU consumption.
2. Processes sending messages to a process with a long message queue are penalized by
increasing the number of reductions it costs to send the message. This is an attempt by the
runtime system to allow processes with long message queues to catch up by slowing down the
processes sending the messages in the first place. The latter bottleneck often manifests
itself in a reduction of the overall throughput of the system.
Slide 27 of 63
Concurrency-Related Bottlenecks - 2
• The only way to discover whether there are any bottlenecks is to observe the throughput and message queue buildup when stress-testing the system. Simple remedies to message queue problems can be achieved by optimizing the code and fine-tuning the operating system and VM settings.
• Another way to slow down message queue buildups is by suspending the processes generating the messages until they receive an acknowledgment that the message they have sent has been received and handled, effectively creating a synchronous call.
• Replacing asynchronous calls with synchronous ones will reduce the maximum throughput of the system when running under heavy load, but never as much as the penalty paid when message queues start building up. Where bottlenecks are known to occur, it is safer to reduce the throughput by introducing synchronous calls, thus guaranteeing a constant throughput of requests in the system with no degradation of service under heavy loads.
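Turning an asynchronous send into a synchronous call amounts to waiting for an acknowledgment. A sketch (the module name, the {request, ...}/{ack, ...} tuples, and the trivial handle/2 are illustrative):

```erlang
-module(sync).
-export([sync_send/2, server_loop/1]).

%% Client side: send the request, then block until the server
%% acknowledges it, so a slow server throttles its own clients.
sync_send(Server, Request) ->
    Server ! {request, self(), Request},
    receive
        {ack, Reply} -> Reply
    end.

%% Server side: handle one request at a time, acknowledging each.
server_loop(State) ->
    receive
        {request, From, Request} ->
            {Reply, NewState} = handle(Request, State),
            From ! {ack, Reply},
            server_loop(NewState)
    end.

%% Placeholder: echo the request back unchanged.
handle(Request, State) -> {Request, State}.
```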
Slide 28 of 63
Slide 29 of 63
Slide 30 of 63
Slide 31 of 63
Slide 32 of 63
The solution
Slide 33 of 63
start() ->
    case whereis(db_server) of
        undefined ->
            Pid = spawn(db_server, init, []),
            register(db_server, Pid),
            {ok, Pid};
        Pid when is_pid(Pid) ->
            {error, already_started}
    end.
Slide 34 of 63
1. Two processes call start() at almost the same time.
2. The first process calls whereis(db_server), which returns the atom undefined.
3. This pattern-matches the first clause, and as a result, a new database server is spawned. Its
process identifier is bound to the variable Pid.
4. If, after spawning the database server, the process runs out of reductions and is preempted,
this will allow the second process to start executing.
5. The call whereis(db_server) by the second process also returns undefined, as the first
process had not gotten as far as registering it.
Slide 35 of 63
6. The second process, having also seen undefined, spawns a second database server and binds its pid to Pid.
7. It tries to register the database server it created with the name db_server but fails with a runtime
error, as there already is a process with that name.
8. What we would have expected is for the tuple {error, already_started} to be returned instead of a
runtime error.
Race conditions such as this one in Erlang are rare, but they do happen when you might least
expect them.
A variant of the preceding example was taken from one of the early Erlang libraries and
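One possible repair (a sketch, not necessarily the library's actual fix) is to attempt the registration unconditionally and let register/2 itself arbitrate: it raises badarg when the name is already taken, so only one of the racing processes can win. The stand-in init/0 body is an assumption:

```erlang
-module(db_server).
-export([start/0, init/0]).

start() ->
    Pid = spawn(db_server, init, []),
    %% register/2 raises badarg when db_server is already registered,
    %% so the runtime system decides which process wins the race.
    try register(db_server, Pid) of
        true -> {ok, Pid}
    catch
        error:badarg ->
            exit(Pid, kill),   %% discard the server that lost the race
            {error, already_started}
    end.

%% Stand-in for the real server loop.
init() ->
    receive stop -> ok end.
```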
Slide 36 of 63
• The only rule that needs to be followed is: if process A sends a message and waits for a response from process B (in effect doing a synchronous call), process B is not allowed, anywhere in its code, to make a synchronous call to process A, as the messages might cross and cause a deadlock.
• Deadlocks are extremely rare in Erlang as a direct result of the way in which programs are structured. On those rare occasions where they slip through the design phase, they are caught at a very early stage of testing.
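The crossing calls described above can be sketched as follows (an illustrative module; calling start/0 leaves both spawned processes blocked forever, so it has no test):

```erlang
-module(deadlock).
-export([start/0, init/1]).

%% Each process makes a synchronous call to the other: the two
%% requests cross, and each blocks in receive waiting for a reply
%% that can never be sent.
start() ->
    PidA = spawn(deadlock, init, [procA]),
    PidB = spawn(deadlock, init, [procB]),
    register(procA, PidA),
    register(procB, PidB),
    PidA ! {go, procB},
    PidB ! {go, procA}.

init(_Name) ->
    receive
        {go, Peer} ->
            Peer ! {request, self()},
            receive
                {reply, _} -> ok   %% never reached: deadlock
            end
    end.
```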
Slide 37 of 63
Slide 38 of 63
Slide 39 of 63
• etc.
• Although these processes may handle different requests, there will be similarities in how these requests are handled. We call these similarities design patterns.
Slide 40 of 63
Slide 41 of 63
• Events received by the FSM do not necessarily have to trigger state transitions. Receiving an instant message or a status update would keep the FSM in an online state, while a logout event would cause it to go from the online or busy state to the offline state.
Slide 42 of 63
• Upon receiving these events, you might want to perform a set of actions, such as triggering an SMS (Short Message Service) message or sending an email if certain conditions are met, or simply logging the time stamp and stock price in a file.
Slide 43 of 63
Client/Server Models - 1
printerserver:print(File)
Slide 44 of 63
Client/Server Models - 2
• Clients use these resources by sending the server requests to print a file, update a window, book a room, or use a frequency.
• The server receives the request, handles it, and responds with an acknowledgment and a return value if the request was successful, or with an error if the request did not succeed.
Slide 45 of 63
• The server could be registered, so that clients do not need to know its pid.
• We do not expose the message protocol being used between the client and the server, keeping the interface between them safe and simple. All the client needs to do is call a function and expect a return value.
Slide 46 of 63
• There might be so many requests that the server's response times become unacceptable.
Slide 47 of 63
A Client/Server Example
frequency:allocate()
Slide 48 of 63
• The server handles it and responds with either a message containing an available frequency or an error if all frequencies are being used.
• The result of the allocate/0 call will therefore be either {ok, Frequency} or {error, no_frequency}.
Slide 49 of 63
(Diagram: the client calls frequency:deallocate(Frequency); the server replies ok, which deallocate/1 returns.)
Slide 50 of 63
Slide 51 of 63
call(Message) ->
    frequency ! {request, self(), Message},
    receive
        {reply, Reply} -> Reply
    end.
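Only the call/1 helper is shown here. A sketch of the rest of the module, consistent with call/1 above and with the test session that follows (the list of free frequencies and the internal helper names are assumptions):

```erlang
-module(frequency).
-export([start/0, stop/0, allocate/0, deallocate/1, init/0]).

%% Client functions hide the message protocol from the caller.
start()          -> register(frequency, spawn(frequency, init, [])).
stop()           -> call(stop).
allocate()       -> call(allocate).
deallocate(Freq) -> call({deallocate, Freq}).

call(Message) ->
    frequency ! {request, self(), Message},
    receive
        {reply, Reply} -> Reply
    end.

%% The state is a pair of the free and the allocated frequencies.
init() ->
    Frequencies = {[10, 11, 12, 13, 14, 15], []},
    loop(Frequencies).

loop(Frequencies) ->
    receive
        {request, Pid, allocate} ->
            {NewFrequencies, Reply} = allocate(Frequencies, Pid),
            Pid ! {reply, Reply},
            loop(NewFrequencies);
        {request, Pid, {deallocate, Freq}} ->
            NewFrequencies = deallocate(Frequencies, Freq),
            Pid ! {reply, ok},
            loop(NewFrequencies);
        {request, Pid, stop} ->
            Pid ! {reply, ok}      %% no recursive call: process ends
    end.

allocate({[], Allocated}, _Pid) ->
    {{[], Allocated}, {error, no_frequency}};
allocate({[Freq | Free], Allocated}, Pid) ->
    {{Free, [{Freq, Pid} | Allocated]}, {ok, Freq}}.

deallocate({Free, Allocated}, Freq) ->
    NewAllocated = lists:keydelete(Freq, 1, Allocated),
    {[Freq | Free], NewAllocated}.
```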
Slide 52 of 63
Slide 53 of 63
Slide 54 of 63
Testing
1> c(frequency).
{ok,frequency}
2> frequency:start().
true
3> frequency:allocate().
{ok,10}
4> frequency:allocate().
{ok,11}
5> frequency:allocate().
{ok,12}
6> frequency:allocate().
{ok,13}
7> frequency:allocate().
{ok,14}
8> frequency:allocate().
{ok,15}
9> frequency:allocate().
{error,no_frequency}
10> frequency:deallocate(11).
ok
11> frequency:allocate().
{ok,11}
12> frequency:stop().
ok
Slide 55 of 63
• Spawning a process for every window is the way to implement a truly concurrent activity.
• These processes would probably not be registered, as many windows of the same type could be running concurrently.
Slide 56 of 63
Slide 57 of 63
• Every event relating to this window is translated to an Erlang message and sent to the process.
• The process, upon receiving the message, calls the handle function, passing the message and state as arguments (remember CSP?).
• If the event were the result of a few keystrokes typed in a form, the handle function might want to display them.
• If the user picked an entry in one of the menus, the handle function would take appropriate actions in executing that menu choice.
• If the event was caused by an external process pushing data, possibly an image from a webcam or an alert message, the appropriate widget would be updated.
Slide 58 of 63
Slide 59 of 63
After the window has been shut down, the process terminates because there is no more code
to execute.
Slide 60 of 63
Slide 61 of 63
stop(Name) ->
    Name ! {stop, self()},
    receive {reply, Reply} -> Reply end.

init(Data) ->
    loop(initialize(Data)).
Slide 62 of 63
Slide 63 of 63
• This variable is passed to the recursive loop call, ensuring that the process is kept alive.
• Any reply is also sent back to the process where the request originated.
• If the stop message is received, terminate is called, destroying the window and all the widgets associated with it. The loop function is not called, allowing the process to terminate normally.
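The loop described in these bullets can be sketched as follows (the module name, the {event, ...} message format, and the placeholder handle/2 and terminate/1 bodies are assumptions; only stop/1 above is from the slides):

```erlang
-module(my_window).
-export([loop/1]).

%% Each window runs this loop; State holds the widgets and contents.
loop(State) ->
    receive
        {stop, From} ->
            terminate(State),      %% destroy the window and widgets
            From ! {reply, ok};    %% loop is NOT called again
        {event, Event, From} ->
            NewState = handle(Event, State),
            From ! {reply, ok},
            loop(NewState)         %% keep the process alive
    end.

handle(_Event, State) -> State.    %% placeholder
terminate(_State) -> ok.           %% placeholder
```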