Lecture 02 - Concurrency Control & Synchronization
Kashif Ali
BSCS 7th AB
CSIT, UET Peshawar
1 [email protected] | PDC
[email protected] | PDC 2
Concurrency
• Concurrency means multiple computations are happening at the same time.
• Concurrency is everywhere in modern programming, whether we like it or not:
• Multiple computers in a network
• Multiple applications running on one computer
• Multiple processors in a computer (today, often multiple processor cores on a single chip)
[email protected] | PDC 3
In fact, concurrency is essential in modern programming.
[email protected] | PDC 4
Two Models for Concurrent Programming
• There are two common models for concurrent programming:
• Shared memory
• Message passing
[email protected] | PDC 5
Shared Memory
In the shared memory model of concurrency,
concurrent modules interact by reading and writing
shared objects in memory.
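The shared-memory model can be sketched in Python (a hypothetical example, not from the lecture): two concurrent modules are threads that read and write the same object in memory, and a lock keeps their updates from interleaving.

```python
import threading

# A shared object in memory: a plain counter that all threads touch.
counter = {"value": 0}
lock = threading.Lock()

def increment(n):
    # Each thread reads and writes the same shared object.
    for _ in range(n):
        with lock:  # without this lock, concurrent updates could be lost
            counter["value"] += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["value"])  # 40000: no increments were lost
```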
[email protected] | PDC 6
Message Passing
In the message-passing model, concurrent modules interact by sending messages to each other through a communication channel. Modules send off messages, and incoming messages to each module are queued up for handling.
[email protected] | PDC 8
Message Passing Example
• Now let’s look at the message-passing approach to our bank account example.
• Now not only are the cash machines modules, but the accounts are modules, too. Modules interact by sending messages to each other. Incoming requests are placed in a queue to be handled one at a time. The sender doesn’t stop working while waiting for an answer to its request; it handles more requests from its own queue. The reply to its request eventually comes back as another message.
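A minimal Python sketch of this design (the `Account` class, the message tuples, and the queue names are illustrative assumptions, not from the lecture): the account is a module with its own incoming queue, it handles requests one at a time, and replies come back as messages on another queue.

```python
import threading
import queue

class Account:
    """An account module that handles queued request messages one at a time."""
    def __init__(self):
        self.balance = 0
        self.inbox = queue.Queue()   # incoming requests are queued up here
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            op, amount, reply_to = self.inbox.get()
            if op == "stop":
                break
            if op == "deposit":
                self.balance += amount
            elif op == "withdraw" and self.balance >= amount:
                self.balance -= amount
            # The reply eventually comes back as another message.
            reply_to.put(self.balance)

account = Account()
replies = queue.Queue()
account.inbox.put(("deposit", 100, replies))   # a cash machine sends a message
account.inbox.put(("withdraw", 30, replies))
r1, r2 = replies.get(), replies.get()          # replies arrive in request order
account.inbox.put(("stop", 0, replies))
print(r1, r2)  # 100 70
```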
Parallelism
• Parallelism is related to an application where tasks are divided into smaller sub-tasks that are processed seemingly simultaneously, i.e., in parallel.
• It is used to increase the throughput and computational speed of the system by using multiple processors.
• It enables single sequential CPUs to do a lot of things “seemingly” simultaneously.
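The divide-into-sub-tasks structure can be sketched in Python (an assumed example, not from the slides). A thread pool shows the structure; for CPU-bound work, swapping in `ProcessPoolExecutor` would run the sub-tasks on multiple processor cores for true parallelism.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # One independent sub-task of the larger job.
    return n * n

# Divide the work into sub-tasks and process them concurrently.
# For CPU-bound tasks, ProcessPoolExecutor would use multiple cores.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```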
[email protected] | PDC 10
[email protected] | PDC 11
Synchronization Techniques
Locks, semaphores, and barriers are crucial tools for managing shared resources in parallel computing. They ensure data consistency, allowing multiple threads to work together safely and efficiently. While these tools are essential for preventing errors, they can also impact performance, so it's important to use them judiciously and optimize their implementation.
They are essential for preventing race conditions and ensuring data consistency when multiple threads attempt to read from or write to shared data simultaneously.
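For illustration, a Python lock sketch (the bank-balance scenario is assumed for the example): the lock makes a check-then-update on shared data atomic, so two concurrent withdrawals cannot both pass the balance check.

```python
import threading

balance = 100
balance_lock = threading.Lock()

def withdraw(amount):
    global balance
    # The lock makes the check and the update one atomic step:
    # no other thread can run between the read and the write.
    with balance_lock:
        if balance >= amount:
            balance -= amount

threads = [threading.Thread(target=withdraw, args=(60,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 40: only one of the two 60-unit withdrawals succeeds
```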
[email protected] | PDC
[email protected] | PDC 14
15
processes that can access a resource
at the same time, ensuring that
operations are performed in an
orderly manner to prevent conflicts.
By using semaphores, systems can
coordinate tasks effectively, allowing
Semaphores for safe
Semaphores are synchronization tools communication and resource sharing
used to manage access to shared between multiple processes.
resources in concurrent
programming.
[email protected] | PDC
Semaphores control access to shared resources by maintaining a count
of available resources
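A counting-semaphore sketch in Python (the worker scenario and the peak tracking are assumed for the example): the semaphore's count of 2 caps how many threads hold the resource at once.

```python
import threading
import time

# A semaphore with a count of 2 "available resources":
# at most two threads may hold it at any moment.
slots = threading.Semaphore(2)
active = 0
peak = 0
state_lock = threading.Lock()

def worker():
    global active, peak
    with slots:                 # acquire: the count goes down
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)        # simulate using the resource
        with state_lock:
            active -= 1
                                # release: the count goes back up

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds 2: the semaphore caps concurrent holders
```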
[email protected] | PDC 16
Barriers
• Barriers are synchronization mechanisms used in parallel and distributed computing to ensure that multiple processes or threads reach a certain point in execution before any of them continue.
• This coordination helps manage dependencies and improve the overall efficiency of tasks by preventing race conditions and ensuring data consistency across concurrent operations.
• Barriers enable multiple threads to wait for each other before proceeding to the next execution phase
• Synchronize threads at specific program points
• Ensure all threads reach a particular state before continuing
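A barrier sketch in Python (the two-phase worker and the log are assumed for the example): no thread starts phase 2 until all three have finished phase 1.

```python
import threading

barrier = threading.Barrier(3)   # releases only when all 3 threads arrive
log = []
log_lock = threading.Lock()

def worker(name):
    with log_lock:
        log.append(("phase1", name))
    barrier.wait()               # wait here until every thread finishes phase 1
    with log_lock:
        log.append(("phase2", name))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every phase-1 entry precedes every phase-2 entry.
print([phase for phase, _ in log])  # ['phase1', 'phase1', 'phase1', 'phase2', 'phase2', 'phase2']
```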
[email protected] | PDC 17
What is Deadlock?
• A deadlock is a situation where two or more processes wait for an event that will never occur.
• In other words, a deadlock is a situation in computing where two or more programs or processes get stuck, each waiting for the other to release resources, so none of them can proceed.
• It’s like a traffic jam at an intersection where no car can move because each one is blocking the others.
[email protected] | PDC 18
What are some techniques to handle a deadlock?
The four common ways to handle deadlocks are:
• Prevention: The system is designed with constraints that prevent at least one of the conditions required for a deadlock.
• Avoidance: Algorithms decide at runtime to deny a resource request if subsequent requests might lead to a deadlock. Decisions may be conservative, resulting in lower resource utilization.
• Detection: Periodically check for deadlocks. Frequent checks imply a performance overhead; infrequent checks imply deadlocks are not caught soon enough. Use triggers such as a drop in CPU utilization to start a check, or perhaps check when a resource request can't be granted.
• Recovery: Take action to break one of the conditions for a deadlock. One of the processes could be terminated (called victim selection), or a process is forced to release the resources it holds.
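Prevention can be sketched in Python (the rank table and helper are assumed for the example): giving every lock a global rank and always acquiring in rank order breaks the circular-wait condition, so the deadlock above cannot occur.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
rank = {id(lock_a): 0, id(lock_b): 1}   # a fixed global ordering of all locks
done = []

def use_both(l1, l2):
    # Sort by rank so every thread takes the locks in the same order,
    # regardless of the order the caller passed them in.
    first, second = sorted((l1, l2), key=lambda lk: rank[id(lk)])
    with first:
        with second:
            done.append(threading.current_thread().name)

t1 = threading.Thread(target=use_both, args=(lock_a, lock_b), name="t1")
t2 = threading.Thread(target=use_both, args=(lock_b, lock_a), name="t2")
t1.start(); t2.start(); t1.join(); t2.join()

print(sorted(done))  # ['t1', 't2']: both finish, circular wait is impossible
```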
[email protected] | PDC 20
[email protected] | PDC 21