Real Time Programming Course

Also:
- STM (software transactional memory)
- Tasks/Futures
- Dataflow
- Just Threads
Alexandru Burlacu Spring 2020
Actor Model
Initially proposed by Carl Hewitt*, the Actor Model is a way to structure/model concurrent systems that takes
into account the perils of shared memory.
The Actor Model is a message-passing paradigm, inspired by physics and biology, and as a result it is a bit less
formal than, for example, CSP.
*Carl Hewitt; Peter Bishop & Richard Steiger (1973). "A Universal Modular Actor Formalism for Artificial Intelligence"
The fundamental idea of the actor model is to use actors as concurrent primitives that can act upon receiving
messages in different ways:
1. Create more actors
2. Send messages to other actors
3. Designate what to do with the next message by changing its own internal behavior
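As an illustration only, here is a minimal sketch of an actor in Go (the names, the message type and the 16-slot mailbox are invented for the example; a real actor runtime offers much more):

package main

import "fmt"

type message struct{ text string }

// spawnActor starts a goroutine owning a private mailbox and returns the
// actor's "address": the only way to interact with it.
func spawnActor() chan<- message {
    mailbox := make(chan message, 16)
    go func() {
        count := 0 // internal state, touched by this goroutine only
        for msg := range mailbox {
            count++ // processing a message may change internal behavior...
            fmt.Println(count, msg.text)
            // ...and could also create actors or send to other addresses here
        }
    }()
    return mailbox
}

// usage: addr := spawnActor(); addr <- message{"hello"} // asynchronous send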
- Actor Model uses asynchronous message passing, i.e. an actor doesn’t wait for an acknowledgement
of the message by the receiver actor.
- Actor Model does not use any intermediate entities such as channels.
Each actor possesses a mailbox and can be addressed.
- Addresses != identities.
Each actor can have zero, one, or multiple addresses.
Actor Model-based systems must be designed with a special property in mind, Inconsistency Robustness,
i.e. a system must be able to function properly even if its internal state is inconsistent.
Sort of like social systems (organizations, governments, communities): they are all capable of coping
with some degree of inconsistency.
This property is necessary because the formal description of the Actor Model does not guarantee in-order
delivery of messages.
Today's implementations of the Actor Model, be they embedded in a language, like Erlang and Elixir, or available as
libraries, like Akka for JVM systems and Celluloid for Ruby, must provide the following capabilities:
- Fault tolerance
- Distribution transparency
- Scalability (local and nonlocal)
Your default case should be one or more actors per connection/client/whatever. Actors are light, so having
plenty of them is not a problem.
Because each actor processes its mailbox sequentially, funneling multiple tasks through one actor caps the
concurrency available in the system.
Supervision Tree - Actors come in systems, and their preferred organization is a hierarchy.
Systems built using the Actor Model tend to be hierarchical, employing special supervisor actors that take
care of the worker actors, especially when a failure occurs.
Having supervisor actors take care of workers makes the overall system robust to failure, a philosophy
embraced and popularized by Erlang developers: Let it crash
Another very common pattern when using the actor model is a publisher-subscriber (aka observer)
pattern implementation. In fact, PubSub is fundamental for most modern reactive systems, but more about this
later.
This way, it is possible to have topics and entities interested in receiving updates about these topics. The actor
model is especially good at this due to its distribution transparency and lightweight nature.
PubSub in languages such as Elixir or Erlang is commonly done via the :gen_event OTP behaviour, but that's
just technicalities.
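As a rough sketch of the idea in Go-style code (not :gen_event; all names invented), the topic can itself be an actor that owns the subscriber list:

// A topic actor: subscriptions and publications are just messages to it.
type topic struct {
    subscribe chan chan string // a subscriber registers its own mailbox
    publish   chan string
}

func newTopic() *topic {
    t := &topic{
        subscribe: make(chan chan string),
        publish:   make(chan string),
    }
    go func() {
        var subs []chan string
        for {
            select {
            case sub := <-t.subscribe:
                subs = append(subs, sub)
            case msg := <-t.publish:
                for _, sub := range subs {
                    sub <- msg // fan the update out to every subscriber
                }
            }
        }
    }()
    return t
}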
Character Actor, known in Erlang folklore as the Error Kernel pattern, is a way to ensure reliability
when the system is asked to perform a task with a high chance of error.
The idea is to create a dedicated actor to perform the task, such that if it fails, it won't damage the rest of the
system; if the task completes successfully, the actor is disposed of.
During the design process, identify the components that must always be correct; they are the kernel, as in OS
parlance, while the rest could in principle be faulty. This is the Error Kernel pattern. Also, in practice, the
kernel has a way to persist information about its state and the state of its children, for reliability
purposes.
Actor Recursion - for an actor to send a message, it needs to know only the address of the recipient. Because
of this, an actor can in fact send a message to itself.
This way it is possible, for example, to postpone messages that have lower priority than others.
Note, actor recursion should not be confused with recursively spawning more and more actors.
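A sketch of the postponing trick inside an actor's receive loop (msg.lowPriority and handle are invented for the example; it assumes a buffered mailbox with free capacity, otherwise the re-send would block the actor):

for msg := range mailbox {
    if msg.lowPriority && len(mailbox) > 0 {
        mailbox <- msg // send to self: handle the pending messages first
        continue
    }
    handle(msg)
}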
Request-Reply - recall that Actor Model is asynchronous in nature, yet sometimes a way to receive a
response is necessary.
Add the sender's address/reference to the message, and have a way on the server to send the reply back.
Beware: this introduces coupling into the system. At the very least, make sure the sender doesn't block while
waiting for a reply.
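One hedged sketch of the pattern in Go (all names invented): the request carries a reply channel, which plays the role of the sender's address:

type request struct {
    payload string
    replyTo chan string // the "address" to reply to
}

requests := make(chan request)

// server actor
go func() {
    for req := range requests {
        req.replyTo <- "handled: " + req.payload
    }
}()

// client: a buffered reply channel, so the server never blocks on replying
reply := make(chan string, 1)
requests <- request{payload: "job", replyTo: reply}
// ... the client is free to do other work here, instead of blocking ...
answer := <-reply // collect the response only when actually needed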
Recall what a transaction is: a sequence of operations that either all succeed or are all discarded, with the
state of the system rolled back to what it was before the transaction.
This is actually a hard problem, and one of the reasons why transactions are rare in actor-based systems.
Generally, if you need transactions, either try using an external database-like component, or design a
coordination protocol using messages.
In Erlang-land there are no definite patterns of doing transactions because of the availability of other tools,
namely ETS tables and the Mnesia database.
There was a special class called Transactor in Akka 2.0.x for the JVM; today it is deprecated due to Akka's
orientation towards distribution transparency.
Anyway, Transactor was capable of enabling transactions within and across actors, using STM semantics.
On a single laptop one can spawn thousands, if not hundreds of thousands, of actors. But keep in mind that
that same laptop won't have more than 4 physical/8 logical cores, as of 2020.
This discrepancy between the available physical parallelism and the actor model's capabilities means actors
alone won't give good parallel performance. Recall: Concurrency is not Parallelism!
Therefore, a system using actors generally won't work well for CPU-bound tasks, like linear algebra
routines or signal processing. On the other hand, the actor model is arguably ideal for
simulations, due to its easy mapping onto multi-agent systems.
Other, more subtle problems with the actor model: actors are not really composable, and they couple
concurrency with mutual exclusion, leaving synchronization to be implemented via custom messaging
protocols.
1st, actors are not composable because, unless the developer before you was careful, the receiver of a
message is hardcoded: if actor A sends messages to C, it is non-trivial to make A send messages to B
instead.
2nd, coupling happens because actors encapsulate pieces of state behind an internal sequential
execution, while, due to asynchronous communication, the whole system is concurrent.
See: https://noelwelsh.com/posts/2013-03-04-why-i-dont-like-akka-actors.html
If you have some state, multiple threads reading and writing that value induce the possibility of deadlocks
and race conditions, besides the increased complexity of synchronization. If you put that state inside an
actor, suddenly you can be sure that it's accessed safely and you aren't dropping writes or getting stale
reads.
Therefore, as a rule of thumb, use actors when you need safe state.
In languages that are not Erlang-based, Futures and Tasks are simpler and more suitable if you don't need to
mutate state.
The actor model, and later communicating sequential processes, due to their lightweight nature, let anyone
run hundreds, thousands, even millions of concurrent operations. How does such a system behave under load?
Little's Law formally describes the relationship of queue size (N), throughput (X) and latency (R):
- N=XR
- X=N/R
- R=N/X
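A quick worked example (numbers invented for illustration): if a service sustains X = 2,000 requests/second at an average latency of R = 50 ms, then on average N = X·R = 2000 × 0.05 = 100 requests are in flight at any moment. Flipping it around: if latency must stay at 50 ms while N surges to 1,000 in-flight connections, the system has to push X = N/R = 20,000 requests/second, or requests start piling up.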
Although not all systems use queues explicitly, Little's law still holds; for example, it applies when trying to
keep latency at bay while the number of connections to the system surges.
See: https://codahale.com/usl4j-and-you/
Amdahl's Law - suppose we have a data processing pipeline, processing 10k data points per second, on an
8-core server.
Now, if you change the CPU to a 16-core one, you will notice that the number of data points processed
isn't 20k, but more like 17-19k.
Why is that?
There are portions of the workload that can't run in parallel, called the serial workload.
Knowing what fraction of the workload is serial helps you understand how it will scale given more
cores/machines/compute units to run on.
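The usual statement of the law, in this spirit (with s the serial fraction and n the number of compute units): Speedup(n) = 1 / (s + (1 - s)/n). For instance, with s = 10%, even infinitely many cores cap the speedup at 1/s = 10x.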
Assume an 8-core CPU running a program that maps a numerical function over a set of values and then
numerically aggregates the newly formed set of results. Which configuration of the worker
pool will be faster: 8 workers, 24, or even 64?
The Universal Scalability Law was proposed by N. J. Gunther as a model that combines Amdahl's Law and
Gustafson's Law.
It takes into account the cost of communication between processes to produce a nonlinear model that predicts
a system's behavior as scale changes.
The model is X(N) = λN / (1 + σ(N − 1) + κN(N − 1)), where:
- X is the throughput of the system as a function of the level of concurrency N (think # of threads or
connections);
- sigma (σ) is the overhead of contention: how much time is spent waiting on shared resources;
- kappa (κ) is the overhead of crosstalk/coherency: how much time it takes to become consistent;
- lambda (λ) is the system's performance without contention (the throughput at N = 1), sometimes omitted.
See: https://codahale.com/usl4j-and-you/
and https://blog.acolyer.org/2015/04/29/applying-the-universal-scalability-law-to-organisations/
Actor Model: A quick not-so-off-topic
In the end, the Universal Scalability Law states, reasonably so, that as the number of cores/machines
computing something in a parallel/concurrent fashion grows, at some point the system will not only scale
more slowly; its performance will in fact degrade.
In a way, even context switches can be modeled as communication, since the cache is flushed so that the
other thread/worker can be resumed.
Close your laptops; get a sheet of paper; for the sake of forests worldwide, cut it down to A6 size, you won't need
much; share with your colleagues.
Don't worry, the results won't affect your midterm grades. Or maybe they will… or… we shall see.
Q5. The limiting factor when running parallel/concurrent tasks is (assuming ∞ memory):
a) Communication overhead b) Context switches c) Number of threads
- https://ferd.ca/the-zen-of-erlang.html
- https://theory.stanford.edu/~jcm/cs358-96/actors.html
- http://ulf.wiger.net/weblog/2008/02/06/what-is-erlang-style-concurrency/
- https://stackoverflow.com/questions/8107612/the-actor-model-why-is-erlang-special-or-why-do-you-need-another-language-for
- http://www.perfdynamics.com/Manifesto/USLscalability.html
CSP, aka Communicating Sequential Processes, is another well-known message-passing approach to
concurrency. Same as the Actor Model, only different.
CSP was first proposed by Tony Hoare in 1978, as a way to formally specify and model the behaviour of
concurrent systems.
Initially it was closer to Hewitt’s Actor Model, also requiring the address of the recipient to be known, but
later it switched to channels and anonymous processes.
CSP has 2 main primitives, events and processes. They can be combined using different operators.
Events are fundamental to CSP. They are the values that determine the behaviour of the system.
Processes are anonymous in nature and act or react depending on the events they engage in.
Channels, which are what you use in practice, are not really defined in the CSP formalism; rather, they are
expressed through events.
Say you have an inp and an out channel attached to your process; what matters to the underlying theory are the
inp.x and out.y events, where x and y are values passing through the inp and out channels respectively.
Actor Model vs CSP

Same (both):
- Message oriented
- Used for modeling/formal specification

Different (Actor Model vs CSP):
- Processes/actors have identity vs anonymous processes
- Asynchronous by default vs synchronous by default
- Built-in unbounded nondeterminism vs only bounded nondeterminism
Practically equivalent: one can build CSP on top of the Actor Model, and vice versa.
“With great power comes great responsibility” - Uncle Ben minutes before dying, telling Peter Parker not to
abuse CSP in projects.
Main things to keep in mind when considering CSP-like abstractions in your project:
1) How expensive is it to work with, system resources-wise?
- Is it cheap to have many processes?
- Is it cheap to switch between them?
- Where in the codebase is it critical to have low latency access to resources? Hint: look for
contention.
2) Is it the right abstraction to work with? Hint: it might not be.
Before we start: the examples will be in pseudocode similar to the Go programming language.
For example, a process that reads a value from an input channel, processes it, and writes the result to an output channel:

func basic(inp <-chan int, out chan<- int) {
    v := <-inp        // receive a value
    out <- process(v) // transform and forward it
}
// launched as: go basic(inp, out)
Futures are a CS concept describing a proxy object for a result that is initially unknown, usually because
the computation of its value is not yet complete.
Depending on the language/framework in use, they may be called tasks, promises, delays or
deferreds.
In scenarios where you know beforehand that you will need a value, you can start computing it on another
processor and have it ready when you need it.
future_result = Future(some_arg);
result <- future_result;
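Go has no built-in Future type, but a one-slot channel gives a reasonable sketch (someSlowComputation is a hypothetical function):

// NewFuture starts computing immediately and returns a proxy for the result.
func NewFuture(arg int) <-chan int {
    result := make(chan int, 1) // buffered: the producer never blocks
    go func() {
        result <- someSlowComputation(arg)
    }()
    return result
}

// usage:
// futureResult := NewFuture(42) // computation starts on another goroutine
// ...do other work...
// result := <-futureResult      // block only if the value isn't ready yet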
Generators are a concept that looks like a function, i.e. a generator can take parameters and (eventually)
return a sequence of values, but it is in fact an iterator: it returns values one by one and abstracts away
how these values are produced and stored.
Generators are a weaker form of coroutines. They too can suspend their execution, but can not specify to
whom yield the execution context.
Generators can run more than just once. Potentially these can run indefinitely.
gen = GeneratorAdInfinitum();
next <- gen; // 1
next <- gen; // 2
next <- gen; // 3
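In Go-style code the same generator can be sketched with a goroutine feeding an unbuffered channel (names invented for the example):

// generatorAdInfinitum yields 1, 2, 3, ... one value per receive.
func generatorAdInfinitum() <-chan int {
    ch := make(chan int) // unbuffered: the goroutine suspends between values
    go func() {
        for i := 1; ; i++ {
            ch <- i // blocks here until the consumer asks for the next value
        }
    }()
    return ch
}

// gen := generatorAdInfinitum()
// <-gen // 1
// <-gen // 2
// <-gen // 3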
Say you have many concurrently running processes and you want a way to collect data from all of
them. Or just the opposite: distribute work among processes. Then you need Fan-In and Fan-Out,
respectively.
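Both patterns are a few lines in Go; this sketch assumes int-valued work for simplicity:

import "sync"

// fanIn merges several input channels into a single output channel.
func fanIn(inputs ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup
    for _, in := range inputs {
        wg.Add(1)
        go func(c <-chan int) {
            defer wg.Done()
            for v := range c {
                out <- v // collect from every producer
            }
        }(in)
    }
    go func() {
        wg.Wait()
        close(out) // close once all producers are done
    }()
    return out
}

// fanOut distributes work from one channel across n workers.
func fanOut(in <-chan int, n int, worker func(int)) {
    for i := 0; i < n; i++ {
        go func() {
            for v := range in {
                worker(v)
            }
        }()
    }
}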
Semaphores can be easily implemented using buffered channels. Although CSP theory focuses on unbuffered
channels for brevity, in practice it is possible to give a channel a size, thus making the communication
asynchronous.
In that case, a semaphore is a buffered channel of tokens.
// mutex
func (s semaphore) Lock() {
    s.P(1)
}

// signaling
func (s semaphore) Signal() {
    s.V(1)
}
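The P and V operations used above can be sketched like this (a common Go idiom; the empty struct carries no data, only the token semantics):

type semaphore chan struct{}

// P acquires n tokens, blocking while the buffer is full.
func (s semaphore) P(n int) {
    for i := 0; i < n; i++ {
        s <- struct{}{}
    }
}

// V releases n tokens.
func (s semaphore) V(n int) {
    for i := 0; i < n; i++ {
        <-s
    }
}

// sem := make(semaphore, 3) // at most 3 holders at a time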
Sometimes there's a need to call multiple functions, but a single result will suffice.
Say, you want to query a set of databases and return the first reply that arrives.
In a language like Go, you could do something like:

func Query(conns []Conn, query string) Result {
    ch := make(chan Result, 1) // buffered, so the first responder never blocks
    for _, conn := range conns {
        go func(c Conn) {
            select {
            case ch <- c.DoQuery(query): // first result wins
            default: // later results are discarded
            }
        }(conn)
    }
    return <-ch
}
CSP - Common Patterns
Combined with the fan-in and fan-out patterns, it is now possible to create directed acyclic graphs (DAGs) of
computations. You should be excited now!
DAGs are a natural fit for data processing use cases. Many modern tools rely on this abstraction.
By default, channels in CSP are unbuffered, or rather, they are synchronization variables.
So, if I have processes P and Q and a common channel/event ch, then (ch!in -> P) || (ch?out -> Q) means that if I
put a value into ch, my process will block until that value is fetched. This gives an easy way to synchronize
processes, but it is not always desired.
In practice, one can have buffered channels, which, for example, help a fair bit when the producing process
sometimes works faster than the consuming one.
If that happens more than just sometimes, then proper flow control techniques must be used.
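A minimal Go illustration (the buffer size is picked arbitrarily, and process is a hypothetical handler):

ch := make(chan int, 8) // the producer may run ahead by up to 8 items

go func() { // fast, bursty producer
    for i := 0; i < 100; i++ {
        ch <- i // blocks only when all 8 slots are occupied
    }
    close(ch)
}()

for v := range ch { // slower consumer drains at its own pace
    process(v)
}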
Now you already know both CSP and the Actor Model pretty well. At least I hope so.
You know their main features, when to use them, and how to reason about systems built with them.
It’s probably time to learn how to divide work between concurrent (or parallel) entities to achieve
maximum performance.
When it comes to parallel and concurrent program design, it's mostly about decomposition.
There are two main points of view you must consider in the process: the data and the computation.
To achieve maximum performance, you must take into account the system your programs will be running on.
Taking hardware into consideration when designing software is sometimes known as Mechanical Sympathy.
A term first used not in software engineering, but more on that later. Like, a couple of weeks later.
To design an efficient program running on a parallel processor, and/or using concurrency efficiently, we need
to understand the problem and the general patterns for solving it.
As said earlier, programs can be considered from the point of view of data, and of computation.
Using Flynn’s Taxonomy we can see the relations between different data-computation partitions.
Also consider:
SPMD and MPMD, where instead of Instructions (I),
Programs (P) are considered, relaxing the lockstep
requirement.
Source: PAD course, “Distributia: spatii de decentralizare”, conf. univ. Ciorba Dumitru
CSP: A quick not-so-off-topic #2
This is the most basic way to achieve task parallelism. Task parallelism means dividing the program into
distinct tasks, each running independently and communicating with the others.
Maybe you can even recall the example of a parallel map over a sequence of values from the last course (Network
Programming). Well, that one is the most basic example of data parallelism.
Now, recall the DAG, also from the CSP lectures earlier.
In fact, a DAG structured program can model any of the 4 variants of Flynn’s Taxonomy… almost, if we ignore
the lockstep requirement.
Consider the naive matrix multiplication:

for i in 0..M:
    for j in 0..N:
        for k in 0..K:
            C[i, j] += A[i, k] * B[k, j]
Modern processors have special instructions allowing SIMD operations. GCC and JVM (to a lesser extent) can
identify cases where it will work and apply the optimization.
Now, let's say we have a chain of operations where we want to multiply 2 matrices and add the result to a
3rd matrix.
The best way to do it is to have the 2 operations combined, or "fused". By the way, modern CPUs can also combine
multiplication and addition into a single, very efficient instruction (fused multiply-add).
We all know there are better algorithms to multiply matrices, like Strassen's algorithm.
Recall: it splits each matrix into blocks, then computes intermediary matrices M1 to M7 from sums, differences and products of those blocks:
M1 = multiply(add(A11, A22), add(B11, B22));
M2 = multiply(add(A21, A22), B11);
M3 = multiply(A11, sub(B12, B22));
M4 = multiply(A22, sub(B21, B11));
M5 = multiply(add(A11, A12), B22);
M6 = multiply(sub(A21, A11), add(B11, B12));
M7 = multiply(sub(A12, A22), add(B21, B22));
Although not as performant in parallel as the naive matmul, Strassen's algorithm is a good case of divide et impera
(you should know that one), resulting in a recursive data decomposition, in contrast with the block
decomposition of the naive matmul.
Reactive Programming
The Reactive Manifesto was first published in 2013, with an update in 2014. It proposes a new mindset for
system design: communication happens via asynchronous message passing, which lets decoupled entities
scale up and down, i.e. be elastic; thanks to these two properties, such systems are resilient against
failures, allowing components to fail and recover independently, and as a result they stay responsive.
Around the same time as the Reactive Manifesto, a set of tools appeared, called ReactiveX, along with a
specification, Reactive Streams.
The core idea: treat events as an infinite stream of discrete values and work with them as they arrive.
Basically, it’s a combination of:
- Observer/PubSub Pattern (being notified of arrival, decoupling the Observers/Consumers)
- Iterator Pattern (obtain the elements of an aggregate object without knowledge of its representation)
Although both work with events, there's a subtle difference between Reactive programming and
Event-Driven programming, and it entirely changes the game.
The event-driven programming model treats each event as a separate entity, whereas Reactive programming, and
specifically Reactive Streams, treats events as parts of a stream; the stream becomes the go-to
abstraction.
Now it is possible to define behaviours like: if there were more than 5 clicks on the button within 2 seconds,
change its color.
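Hand-rolled with Go channels rather than an Rx library, that behaviour might look like this sketch (the names and the reaction callback are invented):

import "time"

// watchClicks reacts when more than 5 clicks land within a 2-second window.
func watchClicks(clicks <-chan time.Time, react func()) {
    var window []time.Time
    for t := range clicks {
        window = append(window, t)
        // slide the window: drop clicks older than 2 seconds
        for len(window) > 0 && t.Sub(window[0]) > 2*time.Second {
            window = window[1:]
        }
        if len(window) > 5 {
            react() // e.g. change the button's color
        }
    }
}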
Why bother? To make event-driven programs easier to work with, including UIs, event processing and notification
systems.
TK
First and foremost, the ReactiveX set of libraries, for many programming languages, including Swift, Java,
C++, Python and JS, to mention a few.
Also, the core ideas are available in Akka (remember it?). Even if at a higher granularity, the principles are
still the same.
Most recently, since Java 9, using Flow API will give you pretty much the same capabilities.
Oh, and since most of you are learning Elixir, check out GenStage ;)
But here's a working example in JS, from scratch, and, as a side note, a half-working one in Python:
https://keitheis.github.io/reactive-programming-in-python/#
http://eniramltd.github.io/devblog/2014/10/24/reactive_dataflow_programming_in_python_part_1.html
https://jakubturek.com/functional-reactive-programming-in-python/
https://github.com/dbrattli/aioreactive/tree/master/aioreactive
https://itnext.io/functional-reactive-programming-in-scala-from-scratch-part-1-9f9db0c47478
https://gist.github.com/staltz/868e7e9bc2a7b8c1f754
https://medium.com/priceline-labs/choosing-a-reactive-framework-for-the-jvm-ec66f6cde552
The TCP way of handling congestion is by far not the only way!
TCP relies on implicit, window-based congestion control: the sender infers congestion from signals such as packet loss.
Backpressure, generally, is another way: an explicit closed-loop mechanism through which the consumer notifies the
producer that it can't handle any more traffic.
TK Methods: http://reactivex.io/documentation/operators/backpressure.html
https://gist.github.com/garukun/2acb859dbaaf4a8888e7
https://gist.github.com/wolfeidau/128ff492f9668a67520a32f47e10a9fe
https://gist.github.com/MarianoGappa/00b8235deffab51271ea4177369cfe2e
- Mechanical Sympathy
- CPU caches
- Memory thrashing: https://stackoverflow.com/questions/19031902/what-is-thrashing-why-does-it-occur
- Locality (temporal/spatial)
- CPU affinity
- Branch prediction: https://stackoverflow.com/questions/289405/effects-of-branch-prediction-on-performance
- https://en.wikibooks.org/wiki/Optimizing_C%2B%2B/Code_optimization/Pipeline
Topics:
- Dead letter channel
- Durability
- Broker vs Event bus
Now, what if we have multiple publishers, and any subscriber may want to subscribe to several of them?
Eventually it becomes a mess. Recall bipartite graphs from graph theory, and how many edges there can be
if the graph is, God forbid, complete.
For 7 subscribers and 3 publishers we have at most 21 connections to manage, which is quite a few. In practice,
there may be hundreds on each side. A broker in the middle reduces this to one connection per party.
Each connection to the broker can specify a quality of service (QoS) level. These are listed in increasing
order of overhead:
- At most once - the message is sent only once and the client and broker take no additional steps to
acknowledge delivery (fire and forget).
- At least once - the message is re-tried by the sender multiple times until acknowledgement is received
(acknowledged delivery).
- Exactly once - the sender and receiver engage in a two-level handshake to ensure only one copy of
the message is received (assured delivery).
“[...] concurrency refers to the decomposability property of a program, algorithm, or problem into order-
independent or partially-ordered components or units.” - Leslie Lamport*
“A concurrent program whose processes „are executed in an abstract parallelism, that is, not necessarily on
distinct processors” - Horia Georgescu**
Concurrency is about the ability to perform certain program fragments independently, not necessarily
running them at the same time. In other words, one can run some code concurrently yet have just one thread of
execution running at any given moment. Think JS, or Python/Ruby threads with a GIL.
A “race condition” can be defined as “Anomalous behavior due to unexpected critical dependence on the
relative timing of events” [FOLDOC]
Given at least 2 threads that access some shared data and try to change it at the same time, a race
condition can occur, because the thread scheduling algorithm may swap between threads at any time.
The threads are, in effect, racing to access/change the data.
Mutex, aka Mutual Exclusion, is a solution to the race condition problem when multiple threads try to access
a shared resource.
The requirement for mutual exclusion was first identified, and a solution proposed, by Edsger Dijkstra
in a 1965 paper, "Solution of a Problem in Concurrent Programming Control".
A mutex basically solves the problem by designating a so-called critical section, a segment of
the program where the manipulation of the shared resource happens, and enforcing that only one thread can be
inside it at any given time; the others have to wait.
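A minimal sketch in Go (balance and deposit are invented for the example):

import "sync"

var (
    mu      sync.Mutex
    balance int // the shared resource
)

func deposit(amount int) {
    mu.Lock()         // only one goroutine may pass at a time
    defer mu.Unlock() // released when the function returns
    balance += amount // the critical section
}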
A mutex is a lock, with a single state (locked/unlocked) associated with it. However, a recursive mutex (on
POSIX-compliant systems) can be locked more than once; a counter is then associated with it, yet it still has
only one state. The programmer must unlock such a mutex as many times as it was
locked.*
Deadlock: if a thread that has already locked a (non-recursive) mutex tries to lock it again, it enters the
waiting list of that mutex, which results in a deadlock.*
*Source: https://www.geeksforgeeks.org/mutex-vs-semaphore/
Synchronization Primitives 1: Mutex
Mutexes, also known as locks, have a so-called granularity: how fine or coarse the segment of the program
inside the critical section is.
Two important properties to keep in mind are lock contention and lock overhead.
Overhead happens due to the memory and computational requirements of a lock (it ain’t for free).
Contention happens when a thread has to wait until another thread releases a lock.
What one must remember is that there's always a trade-off: fewer locks mean less overhead but worse
contention, while more locks mean less contention but more overhead.
Suppose you have some threads that read from a shared resource, and some threads that want to write
into it. How do we achieve mutual exclusion? What about reading only when values have
changed?
// Writers
resource.wait()
// critical section
resource.signal()

// Readers
mutex.wait()
// critical section
read_count++
if read_count == 1
    resource.wait()
mutex.signal()

// perform reading

mutex.wait()
// critical section
read_count--
if read_count == 0
    resource.signal()
mutex.signal()
Shades of Readers-Writers problem: No-starvation
// Writers
service_queue.wait()
resource.wait()
// critical section
service_queue.signal()

// perform writing

resource.signal()

// Readers
service_queue.wait()
mutex.wait()
// critical section
if read_count == 0
    resource.wait()
read_count++
service_queue.signal()
mutex.signal()

// perform reading

mutex.wait()
// critical section
read_count--
if read_count == 0
    resource.signal()
mutex.signal()
Synchronization Primitives 2.3: Condition variables
#include <iostream>
#include <thread>
#include <functional>
#include <mutex>
#include <condition_variable>

class Application
{
    std::mutex m_mutex;
    std::condition_variable m_condVar;
    bool m_bDataLoaded;
public:
    Application()
    {
        m_bDataLoaded = false;
    }

    void loadData()
    {
        // Make this thread sleep for 1 second
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
        std::cout << "Loading Data from XML" << std::endl;
        // Set the flag under the lock, then notify the waiting thread
        std::lock_guard<std::mutex> guard(m_mutex);
        m_bDataLoaded = true;
        m_condVar.notify_one();
    }

    bool isDataLoaded()
    {
        return m_bDataLoaded;
    }

    void mainTask()
    {
        std::cout << "Do Some Handshaking" << std::endl;
        std::unique_lock<std::mutex> mlock(m_mutex);
        // wait() releases the lock and blocks the thread.
        // As soon as the condition variable is signaled, the thread resumes,
        // re-acquires the lock and re-checks the predicate;
        // if the predicate is still false, it goes back to waiting.
        m_condVar.wait(mlock, std::bind(&Application::isDataLoaded, this));
        std::cout << "Do Processing On loaded Data" << std::endl;
    }
};
- http://zguide.zeromq.org/page:all
- "Enterprise Integration Patterns" - Gregor Hohpe and Bobby Woolf
- https://www.enterpriseintegrationpatterns.com/docs/EDA.pdf