Concurrent Programming Fundamentals - Ankur


Concurrent Programming Fundamentals
• Concurrency
• Concurrency means multiple computations are
happening at the same time. Concurrency is
everywhere in modern programming, whether
we like it or not:
– Multiple computers in a network
– Multiple applications running on one computer
– Multiple processors in a computer (today, often
multiple processor cores on a single chip)
• In fact, concurrency is essential in modern
programming:
– Web sites must handle multiple simultaneous users.
– Mobile apps need to do some of their processing on servers (“in the
cloud”).
Two Models for Concurrent Programming

There are two common models for concurrent programming: shared memory and message passing.
Shared memory.
In the shared memory model of concurrency, concurrent modules interact by
reading and writing shared objects in memory.
Other examples of the shared-memory model:
A and B might be two processors (or processor cores) in the same computer,
sharing the same physical memory.
A and B might be two programs running on the same computer, sharing a
common file system with files they can read and write.
A and B might be two threads in the same Java program (we’ll explain what a
thread is below), sharing the same Java objects.
Message passing.
In the message-passing model, concurrent modules interact by sending
messages to each other through a communication channel.
Modules send off messages, and incoming messages to each module are
queued up for handling. Examples include:
A and B might be two computers in a network, communicating by
network connections.
A and B might be a web browser and a web server – A opens a connection
to B, asks for a web page, and B sends the web page data back to A.
A and B might be an instant messaging client and server.
A and B might be two programs running on the same computer whose
input and output have been connected by a pipe, like ls | grep typed
into a command prompt.
The message-passing and shared-memory
models are about how concurrent modules
communicate. The concurrent modules
themselves come in two different kinds:
processes and threads.

Processes, Threads, Time-slicing


Process.
A process is an instance of a running program that is isolated from other
processes on the same machine. In particular, it has its own private
section of the machine’s memory.
The process abstraction is a virtual computer. It makes the program feel like
it has the entire machine to itself – like a fresh computer has been
created, with fresh memory, just to run that program.
Just like computers connected across a network, processes normally share
no memory between them. A process can’t access another process’s
memory or objects at all. Sharing memory between processes
is possible on most operating systems, but it needs special effort. By
contrast, a new process is automatically ready for message passing,
because it is created with standard input & output streams, which are
the System.out and System.in streams you’ve used in Java.
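As a sketch of this point, the snippet below (an illustration, not from the original slides) launches a child process and reads its standard output, so the two processes communicate by message passing over a pipe rather than shared memory. It assumes the `java` launcher can be found under the current JVM's home directory.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ChildProcessDemo {
    public static void main(String[] args) throws Exception {
        // Launch a child process; it gets its own memory and its own
        // standard streams, separate from this process.
        String javaCmd = System.getProperty("java.home") + "/bin/java";
        ProcessBuilder pb = new ProcessBuilder(javaCmd, "-version");
        pb.redirectErrorStream(true); // `java -version` prints to stderr

        Process child = pb.start();

        // Read the child's output line by line: message passing over a pipe.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(child.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println("child says: " + line);
            }
        }
        int exit = child.waitFor();
        System.out.println("child exited with " + exit);
    }
}
```

The parent never touches the child's memory; the only communication is the stream of bytes flowing through the pipe.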
Threads
Threads are sometimes called lightweight processes. Both processes and threads
provide an execution environment, but creating a new thread requires fewer
resources than creating a new process.

Threads exist within a process — every process has at least one. Threads share
the process's resources, including memory and open files. This makes for
efficient, but potentially problematic, communication.

Multithreaded execution is an essential feature of the Java platform. Every application has at least one thread — or several, if you count "system" threads that do things like memory management and signal handling. But from the application programmer's point of view, you start with just one thread, called the main thread. This thread has the ability to create additional threads, as we'll demonstrate in the next section.
How can I have many concurrent threads with
only one or two processors in my computer?
When there are more threads than
processors, concurrency is simulated by time
slicing, which means that the processor
switches between threads.
The figure on the right shows how three threads
T1, T2, and T3 might be time-sliced on a
machine that has only two actual processors. In
the figure, time proceeds downward, so at first
one processor is running thread T1 and the
other is running thread T2, and then the second
processor switches to run thread T3. Thread T2
simply pauses, until its next time slice on the
same processor or another processor.
Figure: concurrency is simulated by time slicing
Shared Memory Example

Let’s look at an example of a shared memory system. The point of this example is to show that concurrent programming is hard, because it can have subtle bugs.

Imagine that a bank has cash machines that use a shared memory model, so all the cash machines can read and write the same account objects in memory.
• To illustrate what can go wrong, let’s simplify
the bank down to a single account, with a
dollar balance stored in the balance variable,
and two operations deposit and withdraw that
simply add or remove a dollar:
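The slide's code listing is missing here; a minimal reconstruction in Java (the names balance, deposit, and withdraw follow the text, the rest of the scaffolding is an assumption) might look like:

```java
public class Bank {
    // One shared account, with a dollar balance.
    private static int balance = 0;

    private static void deposit() {
        balance = balance + 1;   // add a dollar
    }

    private static void withdraw() {
        balance = balance - 1;   // remove a dollar
    }

    public static void main(String[] args) {
        // One transaction: a deposit followed by a withdrawal
        // should leave the balance unchanged.
        deposit();
        withdraw();
        System.out.println("balance = " + balance); // prints "balance = 0"
    }
}
```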
In this simple example, every transaction is just
a one dollar deposit followed by a one-dollar
withdrawal, so it should leave the balance in
the account unchanged. Throughout the day,
each cash machine in our network is
processing a sequence of deposit/withdraw
transactions.
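The subtle bug can be demonstrated by running several cash-machine threads against the shared balance. This sketch assumes the one-dollar deposit and withdraw operations described above; the machine and transaction counts are arbitrary illustrative constants.

```java
public class BankRace {
    private static int balance = 0;

    private static void deposit()  { balance = balance + 1; }
    private static void withdraw() { balance = balance - 1; }

    // Each cash machine processes a long sequence of
    // deposit/withdraw transactions against the shared balance.
    private static void cashMachine() {
        final int TRANSACTIONS_PER_MACHINE = 10000;
        for (int i = 0; i < TRANSACTIONS_PER_MACHINE; i++) {
            deposit();   // put a dollar in
            withdraw();  // take it back out
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final int NUM_MACHINES = 4;
        Thread[] machines = new Thread[NUM_MACHINES];
        for (int i = 0; i < NUM_MACHINES; i++) {
            machines[i] = new Thread(BankRace::cashMachine);
            machines[i].start();
        }
        for (Thread t : machines) t.join();

        // Every transaction nets to zero, so the balance "should" be 0,
        // but the unsynchronized read-modify-write operations interleave,
        // lose updates, and often leave a nonzero balance.
        System.out.println("final balance = " + balance);
    }
}
```

Because `balance = balance + 1` is really a read, an add, and a write, two threads can read the same old value and one machine's update is silently lost.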
Message Passing Example
• Now let’s look at the message-passing approach to
our bank account example.
• Now not only are the cash machines modules, but the accounts are modules, too. Modules interact by sending messages to each other. Incoming requests are placed in a queue to be handled one at a time. The sender doesn’t stop working while waiting for an answer to its request; it handles more requests from its own queue. The reply to its request eventually comes back as another message.
Unfortunately, message passing doesn’t eliminate the possibility of race conditions. Suppose each account supports get-balance and withdraw operations, with corresponding messages. Two users, at cash machines A and B, are both trying to withdraw a dollar from the same account. They check the balance first to make sure they never withdraw more than the account holds, because overdrafts trigger big bank penalties.
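To make the check-then-act race concrete, the sketch below (an illustration, not from the slides) scripts the unlucky message ordering by hand: the account module drains its queue one message at a time, both machines' get-balance messages arrive before either withdraw message, so both machines see money in the account and both withdrawals go through.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class MessageRace {
    public static void main(String[] args) {
        int balance = 1; // the account holds one dollar

        // The account module handles messages one at a time, in queue
        // order. This is the unlucky interleaving of A's and B's messages:
        Queue<String> inbox = new ArrayDeque<>();
        inbox.add("get-balance A");
        inbox.add("get-balance B");
        inbox.add("withdraw A");
        inbox.add("withdraw B");

        boolean aSawMoney = false, bSawMoney = false;
        while (!inbox.isEmpty()) {
            String msg = inbox.remove();
            switch (msg) {
                case "get-balance A": aSawMoney = balance > 0; break;
                case "get-balance B": bSawMoney = balance > 0; break;
                // Each machine withdraws because its earlier get-balance
                // reply said there was money -- but that reply is stale.
                case "withdraw A": if (aSawMoney) balance -= 1; break;
                case "withdraw B": if (bSawMoney) balance -= 1; break;
            }
        }
        System.out.println("balance = " + balance); // prints "balance = -1"
    }
}
```

Each individual message is handled atomically, yet the two-message check-then-withdraw sequence is not, so the account is overdrawn.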
• Concurrency is Hard to Test and Debug
• If we haven’t persuaded you that concurrency is tricky, here’s the
worst of it. It’s very hard to discover race conditions using testing.
And even once a test has found a bug, it may be very hard to
localize it to the part of the program causing it.
• Concurrency bugs exhibit very poor reproducibility. It’s hard to
make them happen the same way twice. Interleaving of instructions
or messages depends on the relative timing of events that are
strongly influenced by the environment. Delays can be caused by
other running programs, other network traffic, operating system
scheduling decisions, variations in processor clock speed, etc. Each
time you run a program containing a race condition, you may get
different behavior.
• These kinds of bugs are heisenbugs, which are
nondeterministic and hard to reproduce, as opposed to a
“bohrbug”, which shows up repeatedly whenever you
look at it. Almost all bugs in sequential programming are
bohrbugs.
• A heisenbug may even disappear when you try to look at
it with println or a debugger! The reason is that printing
and debugging are so much slower than other operations,
often 100-1000x slower, that they dramatically change
the timing of operations, and the interleaving. So
inserting a simple print statement into the cashMachine():
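The slide's listing is missing here; a reconstruction (assuming the shared balance and cash-machine loop sketched earlier, with an illustrative transaction count) would be:

```java
public class BankWithPrint {
    private static int balance = 0;
    private static final int TRANSACTIONS_PER_MACHINE = 1000;

    private static void deposit()  { balance = balance + 1; }
    private static void withdraw() { balance = balance - 1; }

    private static void cashMachine() {
        for (int i = 0; i < TRANSACTIONS_PER_MACHINE; i++) {
            deposit();   // put a dollar in
            withdraw();  // take it back out
            // The new, innocuous-looking print. Printing is far slower than
            // the arithmetic around it, so it changes the thread
            // interleaving and tends to hide the race rather than fix it.
            System.out.println(balance);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(BankWithPrint::cashMachine);
        Thread b = new Thread(BankWithPrint::cashMachine);
        a.start(); b.start();
        a.join();  b.join();
        System.out.println("final balance = " + balance);
    }
}
```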

…and suddenly the balance is always 0, as desired, and
the bug appears to disappear. But it’s only masked, not
truly fixed. A change in timing somewhere else in the
program may suddenly make the bug come back.
• Concurrency is hard to get right. Part of the point of
this reading is to scare you a bit. Over the next several
readings, we’ll see principled ways to design concurrent
programs so that they are safer from these kinds of
bugs.
