
Concurrency Control & Synchronization

Lecture 02
Kashif Ali
BSCS 7th AB
CSIT, UET Peshawar

Subject: Parallel and Distributed Computing

1 [email protected] | PDC

• Definition of concurrency and parallelism
• Synchronization techniques: locks, semaphores, barriers
• Deadlock detection, prevention, and avoidance
• The critical section problem and solutions in distributed systems


Concurrency
• Concurrency means multiple computations are happening at the same time.
• Concurrency is everywhere in modern programming, whether we like it or not:
  • Multiple computers in a network
  • Multiple applications running on one computer
  • Multiple processors in a computer (today, often multiple processor cores on a single chip)

In fact, concurrency is essential in modern programming:

• Web sites must handle multiple simultaneous users.
• Mobile apps need to do some of their processing on servers ("in the cloud").
• Graphical user interfaces almost always require background work that does not interrupt the user. For example, Eclipse compiles your Java code while you're still editing it.
• Being able to program with concurrency will still be important in the future. Processor clock speeds are no longer increasing. Instead, we're getting more cores with each new generation of chips. So in the future, in order to get a computation to run faster, we'll have to split up a computation into concurrent pieces.

Two Models for Concurrent Programming
There are two common models for concurrent programming:
• Shared memory
• Message passing


Shared Memory
In the shared memory model of concurrency, concurrent modules interact by reading and writing shared objects in memory.

• A and B might be two processors (or processor cores) in the same computer, sharing the same physical memory.
• A and B might be two programs running on the same computer, sharing a common filesystem with files they can read and write.
• A and B might be two threads in the same Java program, sharing the same Java objects.


Message Passing
In the message-passing model, concurrent modules interact by sending messages to each other through a communication channel. Modules send off messages, and incoming messages to each module are queued up for handling. Examples include:

• A and B might be two computers in a network, communicating by network connections.
• A and B might be a web browser and a web server – A opens a connection to B, asks for a web page, and B sends the web page data back to A.
• A and B might be an instant messaging client and server.
• A and B might be two programs running on the same computer whose input and output have been connected by a pipe, like ls | grep typed into a command prompt.

Shared Memory Example
• Let's look at an example of a shared memory system. The point of this example is to show that concurrent programming is hard, because it can have subtle bugs.
• Imagine that a bank has cash machines that use a shared memory model, so all the cash machines can read and write the same account objects in memory.
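The kind of subtle bug this example warns about can be sketched in a few lines. This is an illustrative toy, not the slides' actual code: two threads stand in for two cash machines, and the shared balance is a plain variable updated without any synchronization.

```python
import threading

balance = 0  # shared "account object" — no synchronization at all

def deposit_many(n):
    global balance
    for _ in range(n):
        # balance += 1 is NOT atomic: it is a read, an add, and a write,
        # so two threads can interleave here and lose each other's updates.
        balance += 1

threads = [threading.Thread(target=deposit_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Without a lock the final balance can come out below the expected 200000,
# depending on how the threads happened to interleave.
print(balance)
```

The bug is intermittent: some runs produce the correct total, which is exactly what makes shared-memory races hard to find.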

Message Passing Example
• Now let's look at the message-passing approach to our bank account example.
• Now not only are the cash machines modules, but the accounts are modules, too. Modules interact by sending messages to each other. Incoming requests are placed in a queue to be handled one at a time. The sender doesn't stop working while waiting for an answer to its request. It handles more requests from its own queue. The reply to its request eventually comes back as another message.
• Unfortunately, message passing doesn't eliminate the possibility of race conditions. Suppose each account supports get-balance and withdraw operations, with corresponding messages. Two users, at cash machines A and B, are both trying to withdraw a dollar from the same account. They check the balance first to make sure they never withdraw more than the account holds, because overdrafts trigger big bank penalties:
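The race above can be sketched with queues standing in for message channels. This is an illustrative reconstruction under stated assumptions (the message formats and names are invented): the account is a module with an inbox that handles one request at a time, and each cash machine sends a get-balance message followed by a withdraw message. Because check and withdraw are two separate messages, both machines can see a balance of 1 before either withdrawal is processed.

```python
import queue
import threading

inbox = queue.Queue()  # the account module's incoming-message queue

def account_task(initial_balance):
    # The account handles requests strictly one at a time from its queue.
    balance = initial_balance
    while True:
        msg = inbox.get()
        if msg[0] == "get-balance":
            msg[1].put(balance)          # reply on the sender's channel
        elif msg[0] == "withdraw":
            balance -= msg[1]            # blindly honors the request
            msg[2].put(balance)
        elif msg[0] == "stop":
            break

acct = threading.Thread(target=account_task, args=(1,))
acct.start()

def cash_machine():
    reply = queue.Queue()
    inbox.put(("get-balance", reply))
    if reply.get() >= 1:                  # check...
        inbox.put(("withdraw", 1, reply)) # ...then act: two separate messages
        reply.get()

a = threading.Thread(target=cash_machine)
b = threading.Thread(target=cash_machine)
a.start(); b.start(); a.join(); b.join()

final = queue.Queue()
inbox.put(("get-balance", final))
balance = final.get()
inbox.put(("stop",))
acct.join()
print(balance)  # 0 if one machine ran fully before the other, -1 if the checks interleaved
```

Even though the account processes each individual message atomically, the check-then-withdraw sequence is not atomic, so the overdraft (-1) can still happen.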

Parallelism
• Parallelism is a property of an application in which tasks are divided into smaller sub-tasks that are processed simultaneously, or in parallel.
• It is used to increase the throughput and computational speed of the system by using multiple processors.
• It enables a single sequential CPU to do a lot of things "seemingly" simultaneously.

Parallelism leads to the overlapping of the central processing unit and input/output tasks of one process with the central processing unit and input/output tasks of another process.

Synchronization Techniques
Locks, semaphores, and barriers are crucial tools for managing shared resources in parallel computing. These mechanisms prevent race conditions and ensure data consistency, allowing multiple threads to work together safely and efficiently. Choosing the right synchronization primitive depends on the specific needs of your parallel algorithm. While these tools are essential for preventing errors, they can also impact performance, so it's important to use them judiciously and optimize their implementation.
Locks
Locks are synchronization mechanisms used in parallel and distributed computing to manage access to shared resources, ensuring that only one thread or process can access a resource at a time. They are essential for preventing race conditions and ensuring data consistency when multiple threads attempt to read from or write to shared data simultaneously. By using locks, developers can control the flow of execution in concurrent systems, which is crucial for maintaining correct program behavior.

Locks provide mutual exclusion, ensuring only one thread accesses a shared resource at a time.

• Two states (locked and unlocked)
• Support operations like acquire (lock) and release (unlock)
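A minimal sketch of those two operations in practice (an illustrative counter example, not from the slides): the lock is acquired before the critical section and released afterwards, so the lost-update race disappears.

```python
import threading

lock = threading.Lock()  # starts in the unlocked state
counter = 0

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # acquire (lock); release (unlock) happens on exit
            counter += 1  # critical section: only one thread at a time

threads = [threading.Thread(target=increment, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 — every update survives because of mutual exclusion
```

Using the lock as a context manager (`with lock:`) guarantees the release even if the critical section raises an exception.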

Semaphores
Semaphores are synchronization tools used to manage access to shared resources in concurrent programming. They help control the number of processes that can access a resource at the same time, ensuring that operations are performed in an orderly manner to prevent conflicts. By using semaphores, systems can coordinate tasks effectively, allowing for safe communication and resource sharing between multiple processes.
Semaphores control access to shared resources by maintaining a count of available resources.

• Can be binary (similar to locks) or counting, allowing for complex resource management
• Used to implement producer-consumer patterns (bounded buffer problem)
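The bounded buffer problem mentioned above can be sketched with two counting semaphores, one tracking free slots and one tracking filled slots (names and sizes here are illustrative, not from the slides):

```python
import threading
from collections import deque

capacity = 3
buffer = deque()
empty_slots = threading.Semaphore(capacity)  # counts free slots in the buffer
full_slots = threading.Semaphore(0)          # counts items waiting to be consumed
mutex = threading.Lock()                     # protects the deque itself

def producer(items):
    for item in items:
        empty_slots.acquire()   # wait until there is room
        with mutex:
            buffer.append(item)
        full_slots.release()    # signal: one more item available

def consumer(count, out):
    for _ in range(count):
        full_slots.acquire()    # wait until there is an item
        with mutex:
            out.append(buffer.popleft())
        empty_slots.release()   # signal: one more free slot

out = []
p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10, out))
p.start(); c.start()
p.join(); c.join()
print(out)  # items arrive in FIFO order: [0, 1, ..., 9]
```

The semaphores never let the producer overrun the three-slot buffer or the consumer read from an empty one, which is exactly the "orderly manner" the slide describes.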


Barriers
• Barriers are synchronization mechanisms used in parallel and distributed computing to ensure that multiple processes or threads reach a certain point in execution before any of them continue.
• This coordination helps manage dependencies and improve the overall efficiency of tasks by preventing race conditions and ensuring data consistency across concurrent operations.
• Barriers enable multiple threads to wait for each other before proceeding to the next execution phase:
  • Synchronize threads at specific program points
  • Ensure all threads reach a particular state before continuing
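The phase behavior described above can be sketched as follows (an illustrative two-phase example, not from the slides): three threads each record a phase-1 event, wait at the barrier, and only then record a phase-2 event, so no thread starts phase 2 until all have finished phase 1.

```python
import threading

barrier = threading.Barrier(3)  # all 3 threads must arrive before any proceeds
events = []
events_lock = threading.Lock()  # protects the shared event list

def worker(name):
    with events_lock:
        events.append(("phase1", name))
    barrier.wait()              # block here until all threads reach this point
    with events_lock:
        events.append(("phase2", name))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every phase1 entry precedes every phase2 entry, in any run.
print([phase for phase, _ in events])
```

Within each phase the thread order still varies from run to run; the barrier only guarantees the boundary between phases.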


What is Deadlock?
• A deadlock is a situation where two or more processes wait for an event that will never occur.
• In other words, deadlock is a situation in computing where two or more programs or processes get stuck, each waiting for the other to release resources, so none of them can proceed.
• It's like a traffic jam at an intersection where no car can move because each one is blocking the others.
What are some techniques to handle a deadlock?
The four common ways to handle deadlocks are:

• Prevention: The system is designed with constraints that rule out at least one of the conditions required for a deadlock.
• Avoidance: Algorithms decide at runtime to deny a resource request if granting it might lead to a deadlock. Decisions may be conservative, resulting in lower resource utilization.
• Detection: Periodically check for deadlocks. Frequent checks imply a performance overhead; infrequent checks imply deadlocks are not caught soon enough. Use triggers such as a drop in CPU utilization to start a check, or perhaps check when a resource request can't be granted.
• Recovery: Take action to break one of the conditions for a deadlock. One of the processes could be terminated (called victim selection), or a process is forced to release resources it holds.
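One simple prevention constraint of the kind the first bullet describes is a global lock ordering: if every thread acquires locks in the same fixed order, the circular-wait condition can never arise. This is an illustrative sketch (the two-lock setup is invented for the example):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def ordered_acquire(work):
    # Prevention by design: every thread takes lock_a before lock_b,
    # so no cycle of "holds one, waits for the other" can form.
    with lock_a:
        with lock_b:
            work()

results = []
t1 = threading.Thread(target=ordered_acquire,
                      args=(lambda: results.append("t1"),))
t2 = threading.Thread(target=ordered_acquire,
                      args=(lambda: results.append("t2"),))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # both threads finish: no deadlock is possible
```

Had one thread taken lock_a then lock_b while the other took lock_b then lock_a, an unlucky interleaving could leave each holding one lock and waiting forever for the other, which is precisely the circular wait that the fixed ordering prevents.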
