Conc Lect06 PDF

The document provides an overview of process registration in Erlang. It introduces four built-in functions (BIFs) for registering processes: register/2, unregister/1, whereis/1, and registered/0. An example demonstrates registering a spawned process under an atom name and then sending a message to it using the registration name rather than PID. The document also briefly describes some other useful BIFs like processes/0 for listing all processes and i/0 for inspecting process activity.


Slide 1 of 63

Concurrent and Distributed Programming


Concurrent programming with Erlang
Lecture - 06

Department of Communication Systems Engineering



Slide 2 of 63

Reminder from last lecture


Ë In the "Lecture 5" slide set we met the following functions
Ë spawn - to activate a process
Ë send (!) - to send a message
Ë receive - to extract messages from the mailbox and respond accordingly
Ë Processes are identified by their Pid or their registration names
Ë You can change the default maximal number of processes using

erl +P MaxProcesses

Slide 3 of 63

A Simple Timer
-module(simple_timer).
-export([start/2, cancel/1]).

start(Time, Fun) -> spawn(fun() -> timer(Time, Fun) end).

cancel(Pid) -> Pid ! cancel.

timer(Time, Fun) ->
    receive
        cancel ->
            void
    after Time ->
        Fun()
    end.

In the session below, command 3 is typed quickly enough that the second timer is cancelled while it is still running:
1> Pid=simple_timer:start(5000,fun() -> io:format("timer event~n") end).
<0.33.0>
timer event
2> Pid1=simple_timer:start(25000,fun() -> io:format("timer event~n") end).
<0.35.0>
3> simple_timer:cancel(Pid1).
cancel

Slide 4 of 63

receive - summary
Ë Each process in Erlang has an associated mailbox.
Ë send sends a message to the mailbox of the process and not to the process itself.
Ë receive tries to remove a message from the mailbox.
Ë The mailbox is examined only when the program evaluates a receive statement:

receive
Pattern1 [when Guard1] -> Expressions1;
Pattern2 [when Guard2] -> Expressions2;
...
after
Time -> ExpressionsTimeout
end
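A minimal sketch of this syntax (module name and message shapes invented for illustration):

```erlang
-module(recv_demo).
-export([wait/0]).

%% Waits up to 1000 ms for either a tagged integer (checked by a guard)
%% or the atom ping; returns the atom timeout if nothing matches in time.
wait() ->
    receive
        {number, N} when is_integer(N) ->
            {got_number, N};
        ping ->
            pong
    after 1000 ->
        timeout
    end.
```

For example, self() ! {number, 7} followed by recv_demo:wait() returns {got_number, 7}, while calling wait/0 with an empty mailbox returns timeout after one second.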

Slide 5 of 63

receive - summary -2
1. When the program enters a receive statement, a timer is started (but only if an after section is
present in the expression).
2. Take the first message in the mailbox and try to match it against Pattern1, Pattern2, and so
on. If a match succeeds, the message is removed from the mailbox, and the expressions following
the pattern are evaluated.
3. If none of the patterns in the receive statement matches the first message in the mailbox, then
the first message is removed from the mailbox and put into a "save queue." Then the second
message in the mailbox is then tried. This procedure is repeated until a matching message is found
or until all the messages in the mailbox have been examined.

Slide 6 of 63

receive - summary -2
4. If none of the messages in the mailbox matches, then the process is suspended and will be
rescheduled for execution the next time a new message is put in the mailbox. Note that when a
new message arrives, the messages in the save queue are not rematched; only the new message is
matched.
5. As soon as a message has been matched, then all messages that have been put into the save queue
are reentered into the mailbox in the order in which they arrived at the process. If a timer was
set, it is cleared.
6. If the timer elapses when we are waiting for a message, then evaluate the expressions
ExpressionsTimeout and put any saved messages back into the mailbox in the order in which they
arrived at the process.
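The save-queue behaviour in steps 3-5 can be observed with a small selective receive (module name invented here):

```erlang
-module(save_queue_demo).
-export([run/0]).

%% Sends 'first' and 'second' to our own mailbox, then receives them in
%% the opposite order. The first receive skips 'first' (moving it to the
%% save queue) and matches 'second'; when it completes, 'first' is put
%% back into the mailbox and is matched by the second receive.
run() ->
    self() ! first,
    self() ! second,
    receive second -> ok end,
    receive first -> ok end,
    done.
```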

Slide 7 of 63

Registering Processes
Ë The problem - to send a message to a process you need to know its pid
Ë Pro - this is secure

Ë Con - you need to explicitly send the pid

Ë The solution - registering processes


Ë Identifying a process with an atom rather than its pid (similar to the idea of a DNS)

Slide 8 of 63

Registering Processes - use 4 BIFs


Ë register(AnAtom, Pid)
Register the process Pid with the name AnAtom. The registration fails if AnAtom has already
been used to register a process.
Ë unregister(AnAtom)
Remove any registrations associated with AnAtom.
Note: If a registered process dies it will be automatically unregistered.
Ë whereis(AnAtom) -> Pid | undefined
Find out whether AnAtom is registered. Return the process identifier Pid, or return the atom
undefined if no process is associated with AnAtom.
Ë registered() -> [AnAtom::atom()]
Return a list of all registered processes in the system.

Slide 9 of 63

Registering Processes - Example


Ë Register a spawned process

1> Pid = spawn(fun area_server0:loop/0).
<0.51.0>
2> register(area, Pid).
true

Ë Now access it using its registration atom

3> area ! {rectangle, 4, 5}.
Area of rectangle is 20
{rectangle,4,5}

Slide 10 of 63

Example - Clock
-module(clock).
-export([start/2, stop/0]).
start(Time, Fun) ->
register(clock, spawn(fun() -> tick(Time, Fun) end)).
stop() -> clock ! stop.
tick(Time, Fun) ->
receive
stop -> void
after
Time -> Fun(),
tick(Time, Fun)
end.

Eshell V5.8.3 (abort with ^G)


1> c(clock).
{ok,clock}
2> clock:start(5000, fun() -> io:format("TICK ~p ~n",[erlang:now()]) end).
true
TICK {1303,307992,942088}
TICK {1303,307997,943081}
TICK {1303,308002,944079}
3> clock:stop().
stop

Slide 11 of 63

Some more useful BIFs and shell commands - 1


Ë processes() - a BIF returning a list of all processes running in the system
Ë i() - prints a list describing the activity of the running processes
1> spawn(no_module, nonexistent_function, []).
<0.33.0>

=ERROR REPORT==== 1-May-2011::01:28:34 ===


Error in process <0.33.0> with exit value: {undef,[{no_module,nonexistent_function,[]}]}

Ë Process creation is always successful; thus the error is detected within the process
Ë An error in a spawned process is detected by the error logger (part of the runtime system)
2> processes().
[<0.0.0>,<0.3.0>,<0.5.0>,<0.6.0>,<0.8.0>,<0.9.0>,<0.10.0>,
<0.11.0>,<0.12.0>,<0.13.0>,<0.14.0>,<0.15.0>,<0.16.0>,
<0.17.0>,<0.18.0>,<0.19.0>,<0.20.0>,<0.21.0>,<0.22.0>,
<0.23.0>,<0.24.0>,<0.25.0>,<0.26.0>,<0.27.0>,<0.31.0>]

Slide 12 of 63

Some more useful BIFs and shell commands - 2


3> i().
Pid Initial Call Heap Reds Msgs
Registered Current Function Stack
<0.0.0> otp_ring0:start/2 1597 2418 0
init init:loop/1 2
<0.3.0> erlang:apply/2 2584 189437 0
erl_prim_loader erl_prim_loader:loop/3 6
<0.5.0> gen_event:init_it/6 610 698 0
error_logger gen_event:fetch_msg/5 8
<0.6.0> erlang:apply/2 1597 436 0
application_controlle gen_server:loop/6 7
<0.8.0> application_master:init/4 377 44 0

...
gen_server:loop/6 9
<0.27.0> supervisor:kernel/1 233 58 0
kernel_safe_sup gen_server:loop/6 9
<0.31.0> erlang:apply/2 987 14591 0
c:pinfo/1 50
Total 42776 379986 0
218
ok
4>

Slide 13 of 63

Some more useful BIFs and shell commands - 3


Eshell V5.8.3 (abort with ^G)
1> Pid=self().
<0.31.0>
2> Pid!hello.
hello
3> flush().
Shell got hello
ok
4> <0.31.0> ! hello.
* 1: syntax error before: '<'
4> Pid2=pid(0,31,0).
<0.31.0>
5> Pid2!hello2.
hello2
6> flush().
Shell got hello2
ok
7>

Slide 14 of 63

The Scheduler - 1
Ë Notice the Reds column in the result of the i() command: it counts reductions
Ë Each command (i.e., function call, BIF, arithmetic operation) is assigned a number of reduction
steps.
Ë The VM uses this number to check the activity level of the process
Ë When a process is dispatched, it is assigned a number of reductions it is allowed to execute
Ë This number is reduced for every executed operation.
Ë As soon as a process enters a receive clause where none of the messages matches or its
reduction count reaches zero, it is preempted.
Ë As long as BIFs are not being executed, this strategy results in a fair (but not equal) allocation
of execution time among the processes.

Slide 15 of 63

The Scheduler - 2
Ë BIFs - all mathematical operations, for example, have the same number of reductions assigned
to them.
Ë BIFs such as lists:reverse and lists:member, known to vary in execution time based on
their inputs, will interrupt their execution (a trap) and bump up the reduction counter.
Generally, new BIFs which are added to Erlang are implemented with traps by default.
Ë You can increment a process's reduction counter using the BIF erlang:bump_reductions(Num).
Ë erlang:yield() can be used to preempt the process altogether. Using yield/0 in the
standard symmetric multiprocessing (SMP) emulator will have little, if any, effect.

You should not make the behavior of the scheduler influence how you design and program your
systems, as this behavior can change without notice in between releases. Knowing how it works,
however, might help explain observations when inspecting and profiling your system.
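As a sketch of the two BIFs mentioned above (their effect is deliberately VM-dependent, so treat this only as an illustration):

```erlang
-module(sched_demo).
-export([busy/1]).

%% A long-running loop that cooperates with the scheduler.
%% erlang:bump_reductions/1 charges the process extra reductions, so it
%% is preempted sooner; erlang:yield/0 asks to be preempted outright
%% (and may do little on the SMP emulator, as noted above).
busy(0) ->
    done;
busy(N) ->
    erlang:bump_reductions(100),
    erlang:yield(),
    busy(N - 1).
```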

Slide 16 of 63

Example - an Echo Process


-module(echo).
-export([go/0, loop/0]).

go() ->
Pid = spawn(echo, loop, []),
Pid ! {self(), hello},
receive
{Pid, Msg} ->
io:format("~w~n",[Msg])
end,
Pid ! stop.
loop() ->
receive
{From, Msg} ->
From ! {self(), Msg},
loop();
stop ->
true
end.

Slide 17 of 63

The echo Process - 2: explanation


Ë Calling the function go/0 will initiate a process whose first action is to spawn a child process.
Ë This child process starts executing in the loop/0 function and is immediately suspended in the
receive clause, as its mailbox is empty
Ë The parent, still executing in go/0, uses the Pid for the child process, which is bound as a
return value from the spawn BIF, to send the child a message containing a tuple with the
parent’s process identifier and the atom hello.
Ë As soon as the message is sent, the parent is suspended in a receive clause.
Ë The child, which is waiting for an incoming message, successfully pattern-matches the {Pid, Msg}
tuple where Pid is matched to the process identifier belonging to the parent and Msg is matched
to the atom hello.
Ë The child process uses the Pid to return the message {self(), hello} back to the parent (thus, an
echo operation)

Slide 18 of 63

The echo process - 3: explanation


Ë At this point, the parent is suspended in the receive clause, and is waiting for a message. It
will only pattern-match on the tuple {Pid,Msg}, where the variable Pid is already bound (as a
result of the spawn BIF) to the pid of the child process.
Ë The atom hello is bound to the Msg variable, which is passed as an argument to the io:format/2
call, printing it out in the shell.
Ë As soon as the parent has printed the atom hello in the shell, it sends the atom stop back to
the child.
(Diagram: the parent spawns the child and sends it {self(), hello}.)

Slide 19 of 63

The echo process - 4: running receive


1> echo:go().
hello
stop
2>

(The rest of the slide is a message-sequence diagram tracing spawn, the {Pid, Msg} patterns in both receive clauses, the io:format/2 call, and the final stop message through loop/0.)

Slide 20 of 63

The echo process - 5: using registration


-module(echo2).
-export([go/0, loop/0]).
go() ->
register(echo, spawn(echo2, loop, [])),
echo ! {self(), hello},
receive
{_Pid, Msg} ->
io:format("~w~n",[Msg])
end.

loop() ->
receive
{From, Msg} ->
From ! {self(), Msg},
loop();
stop ->
true
end.

Slide 21 of 63
1> c(echo2).
{ok,echo2}
2> echo2:go().
hello
ok
3> whereis(echo).
<0.38.0>
4> echo!stop.
stop
5> whereis(echo).
undefined
6> regs().
** Registered procs on node nonode@nohost **
Name Pid Initial Call Reds Msgs
application_controlle <0.6.0> erlang:apply/2 436 0
code_server <0.18.0> erlang:apply/2 188975 0
erl_prim_loader <0.3.0> erlang:apply/2 581247 0
error_logger <0.5.0> gen_event:init_it/6 226 0
file_server_2 <0.17.0> file_server:init/1 512 0
global_group <0.16.0> global_group:init/1 59 0
global_name_server <0.12.0> global:init/1 50 0
inet_db <0.15.0> inet_db:init/1 262 0
init <0.0.0> otp_ring0:start/2 2418 0
kernel_safe_sup <0.27.0> supervisor:kernel/1 58 0
kernel_sup <0.10.0> supervisor:kernel/1 34392 0
rex <0.11.0> rpc:init/1 35 0
standard_error <0.20.0> erlang:apply/2 9 0
standard_error_sup <0.19.0> supervisor_bridge:standar 41 0
user <0.23.0> group:server/3 36 0
user_drv <0.22.0> user_drv:server/2 1411 0

** Registered ports on node nonode@nohost **


Name Id Command
ok

Slide 22 of 63

Sending a message to a nonregistered process is a bug

Timeouts in receive



Slide 23 of 63

Process Skeletons - 1
Ë There is a common pattern to the behavior of processes, regardless of their particular purpose.
Ë Processes have to be spawned and their aliases registered.
Ë The first action of newly spawned processes is to initialize the process loop data.
Ë The loop data is often the result of arguments passed to the spawn BIF and the initialization of
the process.
Ë It is stored in a variable we refer to as the process state. The state is passed to a
receive-evaluate function, which receives a message, handles it, updates the state, and passes
it back as an argument to a tail-recursive call.
Ë If one of the messages it handles is a stop message, the receiving process will clean up after
itself and terminate. This is a recurring theme among processes that we usually refer to as a
design pattern, and it will occur regardless of the task the process has been assigned to
perform.
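The generic shape described above might be sketched like this (every name below is a placeholder; the initialize/1, handle/2, and terminate/1 bodies are whatever the particular process needs):

```erlang
-module(skeleton).
-export([start/1, stop/0, init/1]).

start(Args) ->
    register(?MODULE, spawn(?MODULE, init, [Args])).

stop() ->
    ?MODULE ! stop.

init(Args) ->
    State = initialize(Args),        %% build the initial loop data
    loop(State).

loop(State) ->
    receive
        stop ->
            terminate(State);        %% clean up, then terminate
        Msg ->
            loop(handle(Msg, State)) %% update state, tail-recurse
    end.

%% Process-specific parts, stubbed out here:
initialize(Args) -> Args.
handle(_Msg, State) -> State.
terminate(_State) -> ok.
```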

Slide 24 of 63

Process Skeletons - 2


Slide 25 of 63

Process Skeletons - 3: Differences


Ë The arguments passed to the spawn BIF calls will differ from one process to another.
Ë You have to decide whether you should register a process, and, if you do register it, what alias
should be used.
Ë In the function that initializes the process state, the actions taken will differ based on the
tasks the process will perform.
Ë The storing of the process state might be generic, but its contents will vary among processes.
Ë When in the receive-evaluate loop, processes will receive different messages and handle them in
different ways.
Ë On termination, the cleanup will vary from process to process.

So, even if a skeleton of generic actions exists, these actions are complemented by specific
ones that are directly related to the specific tasks assigned to the process.

Slide 26 of 63

Concurrency-Related Bottlenecks - 1
Ë Processes are said to act as bottlenecks when, over time, they are sent messages at a faster
rate than they can handle them, resulting in large mailbox queues.
Ë Processes with many messages in their mailbox behave badly in two ways:
1. The process itself, through a selective receive, might match only a specific type of message.
If the message is the last one in the mailbox queue, the whole mailbox has to be traversed
before this message is successfully matched. This causes a performance penalty that is often
visible through high CPU consumption.
2. Processes sending messages to a process with a long message queue are penalized by
increasing the number of reductions it costs to send the message. This is an attempt by the
runtime system to allow processes with long message queues to catch up by slowing down the
processes sending the messages in the first place. The latter bottleneck often manifests
itself in a reduction of the overall throughput of the system.

Slide 27 of 63

Concurrency-Related Bottlenecks - 2
Ë The only way to discover whether there are any bottlenecks is to observe the throughput and
message queue buildup when stress-testing the system. Simple remedies to message queue
problems can be achieved by optimizing the code and fine-tuning the operating system and VM
settings.
Ë Another way to slow down message queue buildups is by suspending the processes generating the
messages until they receive an acknowledgment that the message they have sent has been
received and handled, effectively creating a synchronous call.
Ë Replacing asynchronous calls with synchronous ones will reduce the maximum throughput of the
system when running under heavy load, but never as much as the penalty paid when message
queues start building up. Where bottlenecks are known to occur, it is safer to reduce the
throughput by introducing synchronous calls, and thus guaranteeing a constant throughput of
requests in the system with no degradation of service under heavy loads.
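Such a synchronous call can be sketched as follows (the {msg, ...}/{ack, ...} protocol is invented for illustration):

```erlang
-module(throttle).
-export([cast/2, call/2]).

%% Asynchronous send: returns immediately, so a fast producer can build
%% up the server's message queue.
cast(Server, Msg) ->
    Server ! {msg, Msg},
    ok.

%% Synchronous send: the producer blocks until the server acknowledges,
%% so it can never run ahead of the server.
call(Server, Msg) ->
    Server ! {msg, self(), Msg},
    receive
        {ack, Server} -> ok
    end.
```

The server, after handling {msg, From, Msg}, would send From ! {ack, self()} to release the producer.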

Slide 28 of 63

A Case Study by Francesco Cesarini - 1


Ë A group of engineers were developing an IM proxy for Jabber.
Ë The system received a packet through a socket, decoded it, and took certain actions based on
its content. Once the actions were completed, it encoded a reply and sent it to a different
socket to be forwarded to the recipient. Only one packet at a time could come through a
socket, but many sockets could simultaneously be receiving and handling packets.
Ë The original system did not have a process for every truly concurrent activity (the processing
of a packet from end to end) but instead used a process for every different task: decoding,
encoding, and so forth.
Ë Each open socket in Erlang was associated with a process that was receiving and sending data
through this socket. Once the packet was received, it was forwarded to a process that handled
the decoding. Once decoded, the decoding process forwarded it to the handler that processed
it. The result was sent to an encoding process, which after formatting it, forwarded the reply
to the socket process that held the open connection belonging to the recipient.

Slide 29 of 63

A Case Study by Francesco Cesarini - 2

The badly designed concurrent system



Slide 30 of 63

A Case Study by Francesco Cesarini - 3


Ë How many packets can be processed simultaneously?
Ë At its best performance, the system could process five simultaneous messages, with the
decoding, handling, and encoding being the bottleneck, regardless of the number of
simultaneously connected sockets. There were two other processes, one used for error
handling, where all errors were passed, and one managing a database, where data reads, writes,
and deletes were executed.

Slide 31 of 63

What is the task that should be duplicated?


Ë The actions of decoding, handling, and encoding were not the answer
Ë The handling of an individual packet from end to end is the activity that should be considered
as the concurrent process to be spawned.
Ë Having a process for every packet and using that process to decode, handle, and encode the
packet meant that if thousands of packets were received simultaneously, they would all be
processed in parallel.
Ë Knowing that a socket can receive only one packet at any one time meant that we could use this
socket process to handle the call. Once the packet was received, a function call ensured that it
was decoded and handled.
Ë The result (possibly an error) was encoded and sent to the socket process managing the
connection belonging to the final recipient.
Ë The error handler and database processes were not needed, as the consistency of data through
the serialization of destructive database operations could have been achieved in the handler
process, as could the management of errors.

Slide 32 of 63

The solution


Slide 33 of 63

Are There Race Conditions, Deadlocks, and Process Starvation in Erlang?


Ë Erlang is not completely problem-free
Ë Race conditions

start() ->
case whereis(db_server) of
undefined ->
Pid = spawn(db_server, init, []),
register(db_server, Pid),
{ok, Pid};
Pid when is_pid(Pid) ->
{error, already_started}
end.

Slide 34 of 63

The Race Condition Scenario -1


1. Assume that the database server process has not been started and two processes simultaneously
start executing the start/0 function.

2. The first process calls whereis(db_server), which returns the atom undefined.

3. This pattern-matches the first clause, and as a result, a new database server is spawned. Its
process identifier is bound to the variable Pid.

4. If, after spawning the database server, the process runs out of reductions and is preempted,
this will allow the second process to start executing.

5. The call whereis(db_server) by the second process also returns undefined, as the first
process had not gotten as far as registering it.

Slide 35 of 63

The Race Condition Scenario - 2


6. The second process spawns the database server and might go a little further than the first one,
registering it with the name db_server. At this stage, the second process is preempted and the first
process continues where it left off.

7. It tries to register the database server it created with the name db_server but fails with a runtime
error, as there already is a process with that name.

8. What we would have expected is the tuple {error, already_started} to be returned, instead of a
runtime error.

Race conditions such as this one in Erlang are rare, but they do happen when you might least expect them.

A variant of the preceding example was taken from one of the early Erlang libraries and reported as a bug in 1996.
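One common way to avoid the runtime error (this fix is not from the slides) is to let register/2 itself arbitrate, catching the badarg it raises when the name is already taken:

```erlang
%% Assumes a db_server module with an exported init/0, as in the slides.
start() ->
    Pid = spawn(db_server, init, []),
    try register(db_server, Pid) of
        true -> {ok, Pid}
    catch
        error:badarg ->
            %% Another process registered first; discard our duplicate.
            exit(Pid, kill),
            {error, already_started}
    end.
```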



Slide 36 of 63

The Deadlock Scenario


Ë A good design of a system based on client/server principles is often enough to guarantee that
the application will be deadlock-free.

Ë The only rule that needs to be followed is that if process A sends a message and waits for a response
from process B, in effect doing a synchronous call, process B is not allowed, anywhere in its
code, to do a synchronous call to process A, as the messages might cross and cause the
deadlock.

Ë Deadlocks are extremely rare in Erlang as a direct result of the way in which programs are
structured. In those rare occasions where they slip through the design phase, they are caught
at a very early stage of testing.
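The crossing-calls situation ruled out above can be sketched concretely (all names invented):

```erlang
-module(deadlock_demo).
-export([start/0]).

%% Two processes each make a synchronous call to the other. Both send
%% {req, self()} and then wait for {reply, _}; since neither receive
%% clause matches the incoming {req, _}, both are suspended forever.
start() ->
    A = spawn(fun peer/0),
    B = spawn(fun peer/0),
    A ! {peer, B},
    B ! {peer, A}.

peer() ->
    receive {peer, Other} -> sync_call(Other) end.

sync_call(Other) ->
    Other ! {req, self()},
    receive
        {reply, Reply} -> Reply   %% never arrives: Other is stuck here too
    end.
```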

Slide 37 of 63

The Process Manager


Ë The process manager is a debugging tool used to inspect the state of processes in Erlang
systems.
Ë You can start the process manager by writing pman:start() in the shell.
Ë A window will open, displaying contents similar to what you saw when experimenting with the i()
command.
Ë Double-clicking any of the processes will open a trace output window. You can choose your
settings by picking options in the File menu.
Ë For each process with an output window, you can trace all the messages that are sent and
received. You can trace BIF and function calls, as well as concurrency-related events, such as
processes being spawned or terminated. You can also redirect your trace output from the
window to a file. Finally, you can pick the inheritance level of your trace events. A very detailed
and well-written user guide comes with the Erlang distribution that we recommend as further
reading.

Slide 38 of 63

The Process Manager



Slide 39 of 63

Process Design Patterns


Ë What can Erlang processes do?
Ë Gateways to databases

Ë Handle protocol stacks

Ë Manage the logging of trace messages

Ë etc.

Ë Although these processes may handle different requests, there will be similarities in how these
requests are handled. We call these similarities design patterns.

Slide 40 of 63

The Client-Server Design Pattern


Ë The client/server model is commonly used for processes responsible for a resource such as a list
of rooms, and services that can be applied on these resources, such as booking a room or
viewing its availability.
Ë Requests to this server will allow clients (usually implemented as Erlang processes) to access
these resources and services.

Slide 41 of 63

The Finite State Machine (FSM) Design Pattern


Ë Imagine a process handling events in an instant messaging (IM) session. This process (or FSM),
will be in one of three states.
1. It could be in an offline state, where the session with the remote IM server is being established.
2. It could be in an online state, enabling the user to send and receive messages and status updates.
3. And finally, if the user wants to remain online but not receive any messages or status updates, it
could be in a busy state.
Ë State changes are triggered through process messages we call events.
Ë An IM server informing the FSM that the user is logged on successfully would cause a state transition from the
offline state to the online state.

Ë Events received by the FSM do not necessarily have to trigger state transitions. Receiving an instant message or
a status update would keep the FSM in an online state while a logout event would cause it to go from an online or
busy state to the offline state.
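The three states could be sketched as one tail-recursive function per state (the event names here are assumptions, not from a real IM protocol):

```erlang
-module(im_fsm).
-export([start/0, init/0]).

start() -> spawn(?MODULE, init, []).

init() -> offline().

offline() ->
    receive
        logged_on -> online()             %% transition: offline -> online
    end.

online() ->
    receive
        {msg, _From, _Text} -> online();  %% handled, no transition
        busy                -> busy();
        logout              -> offline()
    end.

busy() ->
    receive
        available -> online();
        logout    -> offline()
    end.
```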

Slide 42 of 63

The Event Handler Design Pattern


Ë Event handler processes will receive messages of a specific type.
Ë These could be trace messages generated in your program or stock quotes coming from an external feed.

Ë Upon receiving these events, you might want to perform a set of actions such as triggering an
SMS (Short Message Service message) or sending an email if certain conditions are met, or
simply logging the time stamp and stock price in a file.
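Such a handler might be sketched as a process that applies a list of action funs to every event it receives (the {event, ...} message format is an assumption):

```erlang
-module(event_handler).
-export([start/1]).

%% Actions is a list of funs; each is applied to every incoming event.
start(Actions) ->
    spawn(fun() -> loop(Actions) end).

loop(Actions) ->
    receive
        {event, Event} ->
            lists:foreach(fun(Act) -> Act(Event) end, Actions),
            loop(Actions);
        stop ->
            ok
    end.
```

For instance, event_handler:start([fun(E) -> io:format("~p~n", [E]) end]) would log every event it is sent.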

Slide 43 of 63

Client/Server Models - 1

Ë Both client and server are implemented as Erlang Processes


Ë A server could be a FIFO queue to a printer, window manager, or file server
Ë The resources it handles could be a database, calendar, or finite list of items such as rooms,
books, or radio frequencies, etc.
printerserver ! {print, File}
printerserver:print(File)



Slide 44 of 63

Client/Server Models - 2
Ë Clients use these resources by sending the server requests to print a file, update a window,
book a room, or use a frequency
Ë The server receives the request, handles it, and responds with an acknowledgment and a return
value if the request was successful, or with an error if the request did not succeed

Slide 45 of 63

Client/Server Models - Some General Comments


Ë Clients and servers are represented as Erlang processes. Interaction between them takes place
through the sending and receiving of messages.
Ë Message passing is often hidden behind functional interfaces, so instead of calling
printerserver ! {print, File}
a client would call
printerserver:print(File)
Ë The client is not aware that
Ë the server is a process

Ë it could be registered, and

Ë it might reside on a remote computer.

Ë We do not expose the message protocol being used between the client and the server, keeping
the interface between them safe and simple.

All the client needs to do is call a function and expect a return value.

Slide 46 of 63

Hide Details with Care


Ë The message response times will differ if the process is busy or running on a remote machine.
Ë In most cases this will not cause any problems, but the client needs to be aware of it and be
able to cope with a delay in response time.
Ë Things can go wrong behind this function call:
Ë There might be a network glitch

Ë The server process might crash

Ë There might be so many requests that the server response times become unacceptable.

Slide 47 of 63

Synchronous vs. Asynchronous Messaging


Ë If a client using the service or resource handled by the server expects a reply to the request,
the call to the server has to be synchronous
Ë If the client does not need a reply, the call to the server can be asynchronous.
Ë When encapsulating synchronous and asynchronous calls in a function call:
Ë Asynchronous calls commonly return the atom ok, indicating that the request was sent to the server.
Ë Synchronous calls will return the value expected by the client. These return values usually
follow the format ok, {ok, Result}, or {error, Reason}.

Synchronous client/server requests

A Client/Server Example

(Diagram: a client calls frequency:allocate(); allocate/0 replies with {ok, Frequency} or
{error, no_frequencies}.)

Slide 48 of 63

Client/Server Example - The frequency server in a cellular communication system
Ë This server is responsible for managing radio frequencies on behalf of its clients, the mobile
phones connected to the network.
Ë The phone requests a frequency whenever a call needs to be connected, and releases it once the
call has terminated.
Ë When a mobile phone has to set up a connection to another subscriber, it calls the
frequency:allocate() client function.
Ë This call has the effect of generating a synchronous message which is sent to the server.

Ë The server handles it and responds with either a message containing an available frequency or an
error if all frequencies are being used.
Ë The result of the allocate/0 call will therefore be either {ok, Frequency} or {error,
no_frequencies}.

Slide 49 of 63

The Frequency Server

(Diagram: the client calls frequency:deallocate(Frequency); the server frees the frequency via
deallocate/1 and replies ok.)

Slide 50 of 63

The Server’s functions


-module(frequency).
-export([start/0, stop/0, allocate/0, deallocate/1]).
-export([init/0]).
%% These are the start functions used to create and initialize the server.
start() ->
register(frequency, spawn(frequency, init, [])).
init() ->
Frequencies = {get_frequencies(), []},
loop(Frequencies).
% Hard Coded
get_frequencies() -> [10,11,12,13,14,15].

Slide 51 of 63

The Client Functions


%% The client Functions
stop() -> call(stop).
allocate() -> call(allocate).
deallocate(Freq) -> call({deallocate, Freq}).

Ë Hide Message Passing and Message Protocol

call(Message) ->
frequency ! {request, self(), Message},
receive
{reply, Reply} -> Reply
end.

Slide 52 of 63

The Main loop


%% The Main Loop
loop(Frequencies) ->
receive
{request, Pid, allocate} ->
{NewFrequencies, Reply} = allocate(Frequencies, Pid),
reply(Pid, Reply),
loop(NewFrequencies);
{request, Pid, {deallocate, Freq}} ->
NewFrequencies = deallocate(Frequencies, Freq),
reply(Pid, ok),
loop(NewFrequencies);
{request, Pid, stop} ->
reply(Pid, ok)
end.

reply(Pid, Reply) ->
    Pid ! {reply, Reply}.

Slide 53 of 63

The Helper Functions


%% The Internal Help Functions used to allocate and
%% deallocate frequencies.

allocate({[], Allocated}, _Pid) ->
    {{[], Allocated}, {error, no_frequency}};
allocate({[Freq|Free], Allocated}, Pid) ->
    {{Free, [{Freq, Pid}|Allocated]}, {ok, Freq}}.

deallocate({Free, Allocated}, Freq) ->
    NewAllocated = lists:keydelete(Freq, 1, Allocated),
    {[Freq|Free], NewAllocated}.

Slide 54 of 63

Testing
1> c(frequency).
{ok,frequency}
2> frequency:start().
true
3> frequency:allocate().
{ok,10}
4> frequency:allocate().
{ok,11}
5> frequency:allocate().
{ok,12}
6> frequency:allocate().
{ok,13}
7> frequency:allocate().
{ok,14}
8> frequency:allocate().
{ok,15}
9> frequency:allocate().
{error,no_frequency}
10> frequency:deallocate(11).
ok
11> frequency:allocate().
{ok,11}
12> frequency:stop().
ok

Slide 55 of 63

And what about the design pattern?


Ë There are similarities between the client-server example and the process skeleton.

Ë Picture an application which handles many simultaneously open windows, centrally controlled
by a window manager.

Ë Spawning a process for every window is the way to implement a truly concurrent activity.
These processes would probably not be registered, as many windows of the same type could be
running concurrently.

Recursion and Memory Leaks

receive

Chapter 4: Concurrent Programming
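Spawning one unregistered process per window could be sketched as follows (the window module name and the init/1 argument shape are assumptions for illustration):

```erlang
%% Each call returns a fresh Pid; none of the processes is registered,
%% so any number of windows of the same type can run concurrently.
new_window(Type, Geometry) ->
    spawn(window, init, [{Type, Geometry}]).
```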



Slide 56 of 63

Design Pattern - The initialize Function


• After being spawned, each process would call the initialize function, which draws and displays the window and its contents.
• The return value of the initialize function contains references to the widgets displayed in the window.
• These references are stored in the state variable and are used whenever the window needs updating.
• The state variable is passed as an argument to a tail-recursive function that implements the receive-evaluate loop.
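In outline, such a window process could look like this (the gui module and its functions are hypothetical placeholders for a real toolkit):

```erlang
init(Data) ->
    %% Draw and display the window; keep the widget references
    %% as the state threaded through the receive-evaluate loop.
    Win  = gui:create_window(Data),
    Form = gui:add_form(Win),
    loop({Win, Form}).
```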

Slide 57 of 63

Design Pattern - The receive-evaluate loop


• In the loop function, the process waits for events originating in or relating to the window it is managing:
  • A user typing in a form or choosing a menu entry
  • An external process pushing data that needs to be displayed
• Every event relating to this window is translated to an Erlang message and sent to the process.
• The process, upon receiving the message, calls the handle function, passing the message and state as arguments (remember CSP?):
  • If the event were the result of a few keystrokes typed in a form, the handle function might want to display them.
  • If the user picked an entry in one of the menus, the handle function would take appropriate actions in executing that menu choice.
  • If the event was caused by an external process pushing data, possibly an image from a webcam or an alert message, the appropriate widget would be updated.
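The three cases above could map onto handle-function clauses like these (the event tuples and the gui calls are hypothetical):

```erlang
handle_msg({keystrokes, Chars}, {_Win, Form} = State) ->
    gui:append_text(Form, Chars),          % echo typed characters
    {ok, State};
handle_msg({menu, Entry}, State) ->
    execute_menu_entry(Entry, State);      % returns {Reply, NewState}
handle_msg({data, Image}, {_Win, Form} = State) ->
    gui:update_image(Form, Image),         % push from an external process
    {ok, State}.
```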

Slide 58 of 63

Design Pattern - The receive-evaluate loop (2)


The receipt of these events in Erlang would be seen as a generic pattern in all processes.
What would be considered specific and change from process to process is how these events are
handled.

Slide 59 of 63

Design Pattern - stopping


• A stop message might have originated from a user picking the Exit menu entry or clicking the Destroy button, or from the window manager broadcasting a notification that the application is being shut down.
• Regardless of the reason, a stop message is sent to the process.
• Upon receiving the stop message, the process calls a terminate function, which destroys all of the widgets, ensuring that they are no longer displayed.

After the window has been shut down, the process terminates because there is no more code to execute.
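A terminate function in this style might simply destroy every widget it holds in the state (again with a hypothetical gui module):

```erlang
terminate({Win, Widgets}) ->
    [gui:destroy(W) || W <- Widgets],   % remove each child widget
    gui:destroy(Win),                   % then the window itself
    ok.                                 % becomes the reply to stop/1
```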

Slide 60 of 63

The Design Pattern in Erlang - essentials


-module(server).
-export([start/2, stop/1, call/2]).
-export([init/1]).

Slide 61 of 63

The Design Pattern in Erlang - start & stop


start(Name, Data) ->
    Pid = spawn(server, init, [Data]),
    register(Name, Pid), ok.

stop(Name) ->
    Name ! {stop, self()},
    receive {reply, Reply} -> Reply end.

call(Name, Msg) ->
    Name ! {request, self(), Msg},
    receive {reply, Reply} -> Reply end.

reply(To, Msg) ->
    To ! {reply, Msg}.

init(Data) ->
    loop(initialize(Data)).

Slide 62 of 63

Design Pattern - the loop and related functions


loop(State) ->
    receive
        {request, From, Msg} ->
            {Reply, NewState} = handle_msg(Msg, State),
            reply(From, Reply),
            loop(NewState);
        {stop, From} ->
            reply(From, terminate(State))
    end.

initialize(...) -> ...

handle_msg(...,...) -> ...

terminate(...) -> ...
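To make the skeleton concrete, the three callbacks could be filled in to build, say, a small counter server (the increment and count messages are my own invention, not part of the pattern):

```erlang
initialize(N) -> N.                        % the state is just a number

handle_msg(increment, N) -> {ok, N + 1};   % returns {Reply, NewState}
handle_msg(count,     N) -> {N, N}.

terminate(_N) -> ok.                       % reply returned by stop/1
```

With these in place, server:start(counter, 0), then server:call(counter, increment) followed by server:call(counter, count), would return 1.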



Slide 63 of 63

Design Pattern - related functions


• The initialize function draws the window and displays it, returning a reference to the widget that gets bound to the state variable.
• Every time an event arrives in the form of an Erlang message, the event is taken care of in the handle_msg function:
  • The call takes the message and the state as arguments and returns an updated State variable.
  • This variable is passed to the recursive loop call, ensuring that the process is kept alive.
  • Any reply is also sent back to the process where the request originated.
• If the stop message is received, terminate is called, destroying the window and all the widgets associated with it. The loop function is not called, allowing the process to terminate normally.
