Unit 4
Chapter 2
Concurrency
Introduction
Categories of Concurrency
• Categories of Concurrency:
– Physical concurrency - Multiple independent
processors (multiple threads of control)
– Logical concurrency - The appearance of
physical concurrency is presented by time-
sharing one processor (software can be
designed as if there were multiple threads of
control)
• Coroutines (quasi-concurrency) have a
single thread of control
• A thread of control in a program is the
sequence of program points reached as
control flows through the program
Motivations for the Use of Concurrency
Introduction to Subprogram-Level
Concurrency
• A task or process or thread is a program
unit that can be in concurrent execution
with other program units
• Tasks differ from ordinary subprograms in
that:
– A task may be implicitly started
– When a program unit starts the execution of a
task, it is not necessarily suspended
– When a task’s execution is completed, control
may not return to the caller
• Tasks usually work together
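A minimal Java sketch of these differences (Java is used only for illustration; the class and message are invented): starting a task does not suspend the starter, and the task's completion does not return control to the starter the way a subprogram return does.

public class TaskVsCall {
    public static void main(String[] args) {
        Thread task = new Thread(() -> System.out.println("task running"));
        task.start();                    // the caller is NOT suspended here
        System.out.println("caller continues immediately");
        // When the task finishes, control does not "return" to this point.
    }
}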
Two General Categories of Tasks
Task Synchronization
Kinds of Synchronization
• Cooperation: Task A must wait for task B to
complete some specific activity before task
A can continue its execution, e.g., the
producer-consumer problem
• Competition: Two or more tasks must use
some resource that cannot be
simultaneously used, e.g., a shared counter
– Competition synchronization is usually provided by mutually exclusive access (approaches are discussed later); a shared-counter sketch follows below
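The shared-counter case can be made concrete with a small Java sketch (names invented): two threads increment an unprotected counter, and without mutually exclusive access some increments are lost.

public class SharedCounter {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) counter++;   // read-modify-write, not atomic
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();  b.start();
        a.join();   b.join();
        System.out.println(counter);   // usually less than 200000: updates were lost
    }
}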
Scheduler
Task Execution States
Liveness and Deadlock
Design Issues for Concurrency
• Competition and cooperation
synchronization*
• Controlling task scheduling
• How can an application influence task
scheduling
• How and when tasks start and end
execution
• How and when are tasks created
* The most important issue
Methods of Providing Synchronization
• Semaphores
• Monitors
• Message Passing
Semaphores
• Dijkstra - 1965
• A semaphore is a data structure consisting of a
counter and a queue for storing task descriptors
– A task descriptor is a data structure that stores all of the
relevant information about the execution state of the task
• Semaphores can be used to implement guards on
the code that accesses shared data structures
• Semaphores have only two operations, wait and
release (originally called P and V by Dijkstra)
• Semaphores can be used to provide both
competition and cooperation synchronization
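As a rough illustration of this structure, here is a hand-rolled counting semaphore in Java (an illustrative sketch, not Dijkstra's or the textbook's code): the counter holds the number of available permits, and the JVM's built-in wait set plays the role of the queue of task descriptors.

class SimpleSemaphore {
    private int count;                          // the semaphore's counter

    SimpleSemaphore(int initialCount) { count = initialCount; }

    public synchronized void semWait() throws InterruptedException {
        while (count == 0) {
            wait();                             // caller joins the wait queue
        }
        count--;
    }

    public synchronized void semRelease() {
        count++;
        notify();                               // wake one queued waiter, if any
    }
}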
Producer and Consumer Tasks
semaphore fullspots, emptyspots;
fullspots.count = 0;
emptyspots.count = BUFLEN;

task producer;
  loop
    -- produce VALUE --
    wait(emptyspots);     {wait for space}
    DEPOSIT(VALUE);
    release(fullspots);   {increase filled}
  end loop;
end producer;

task consumer;
  loop
    wait(fullspots);      {wait till not empty}
    FETCH(VALUE);
    release(emptyspots);  {increase empty}
    -- consume VALUE --
  end loop;
end consumer;
Competition Synchronization with
Semaphores
• A third semaphore, named access, is used
to control access (competition
synchronization)
– The counter of access will only have the values
0 and 1
– Such a semaphore is called a binary semaphore
• Note that wait and release must be atomic!
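A Java sketch of this scheme using java.util.concurrent.Semaphore (the buffer layout and names are assumptions): emptySpots and fullSpots provide cooperation synchronization, while the binary semaphore access provides competition synchronization on the buffer itself.

import java.util.concurrent.Semaphore;

class SharedBuffer {
    static final int BUFLEN = 100;
    static final int[] buf = new int[BUFLEN];
    static int nextIn = 0, nextOut = 0;

    static final Semaphore emptySpots = new Semaphore(BUFLEN);
    static final Semaphore fullSpots  = new Semaphore(0);
    static final Semaphore access     = new Semaphore(1);   // binary: counter is 0 or 1

    static void deposit(int value) throws InterruptedException {
        emptySpots.acquire();            // cooperation: wait for an empty spot
        access.acquire();                // competition: enter the critical section
        buf[nextIn] = value;
        nextIn = (nextIn + 1) % BUFLEN;
        access.release();                // leave the critical section
        fullSpots.release();             // one more filled spot
    }

    static int fetch() throws InterruptedException {
        fullSpots.acquire();             // cooperation: wait for a filled spot
        access.acquire();
        int value = buf[nextOut];
        nextOut = (nextOut + 1) % BUFLEN;
        access.release();
        emptySpots.release();            // one more empty spot
        return value;
    }
}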
Evaluation of Semaphores
Monitors
• Ada, Java, C#
• The idea: encapsulate the shared data and
its operations to restrict access
• A monitor is an abstract data type for
shared data
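In Java terms, a class whose shared data is private and whose operations are all synchronized methods behaves like a monitor. A minimal sketch (names invented):

class CounterMonitor {
    private int count = 0;                      // the encapsulated shared data

    public synchronized void increment() {      // at most one thread is active
        count++;                                // inside the monitor at a time
    }

    public synchronized int value() {
        return count;
    }
}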
Competition Synchronization
Evaluation of Monitors
Message Passing
• Message passing is a general model for
concurrency
– It can model both semaphores and monitors
– It is not just for competition synchronization
• Central idea: task communication is like
seeing a doctor--most of the time she
waits for you or you wait for her, but when
you are both ready, you get together, or
rendezvous
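The rendezvous idea can be approximated in Java with a SynchronousQueue, which has no capacity: put blocks until another thread is ready to take, so sender and receiver must meet to exchange the message (an analogy only; Ada's mechanism, shown next, is richer).

import java.util.concurrent.SynchronousQueue;

public class RendezvousDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> channel = new SynchronousQueue<>();

        Thread receiver = new Thread(() -> {
            try {
                System.out.println("received: " + channel.take());  // waits for a sender
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        receiver.start();

        channel.put("hello");        // blocks until the receiver takes the message
        receiver.join();
    }
}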
Ada Support for Concurrency
task Task_Example is
entry ENTRY_1 (Item : in Integer);
end Task_Example;
Task Body
• The task body describes the action that takes place when a rendezvous occurs
• A task that sends a message is suspended
while waiting for the message to be
accepted and during the rendezvous
• Entry points in the spec are described with
accept clauses in the body
accept entry_name (formal parameters) do
...
end entry_name;
Example of a Task Body
task body Task_Example is
begin
  loop
    accept Entry_1 (Item : in Integer) do
      ...
    end Entry_1;
  end loop;
end Task_Example;
Ada Message Passing Semantics
Message Passing: Server/Actor Tasks
Multiple Entry Points
• Tasks can have more than one entry point
– The task specification has an entry clause for each
– The task body has an accept clause for each
entry clause, placed in a select clause, which is
in a loop
A Task with Multiple Entries
task body Teller is
begin
  loop
    select
      accept Drive_Up(formal params) do
        ...
      end Drive_Up;
      ...
    or
      accept Walk_Up(formal params) do
        ...
      end Walk_Up;
      ...
    end select;
  end loop;
end Teller;
Semantics of Tasks with Multiple
accept Clauses
• If exactly one entry queue is nonempty, choose a
message from it
• If more than one entry queue is nonempty, choose
one, nondeterministically, from which to accept a
message
• If all are empty, wait
• The construct is often called a selective wait
• Extended accept clause - code following the
clause, but before the next clause
– Executed concurrently with the caller
Cooperation Synchronization with
Message Passing
• Provided by Guarded accept clauses
when not Full(Buffer) =>
accept Deposit (New_Value) do
...
end Deposit;
• An accept clause with a when clause is either open or closed
– A clause whose guard is true is called open
– A clause whose guard is false is called closed
– A clause without a guard is always open
Semantics of select with Guarded
accept Clauses:
• select first checks the guards on all clauses
• If exactly one is open, its queue is checked for
messages
• If more than one are open, non-deterministically
choose a queue among them to check for messages
• If all are closed, it is a runtime error
• A select clause can include an else clause to avoid
the error
– When the else clause completes, the loop
repeats
Partial Shared Buffer Code
task body Buf_Task is
  Bufsize : constant Integer := 100;
  Buf : array (1..Bufsize) of Integer;
  Filled : Integer range 0..Bufsize := 0;
  Next_In, Next_Out : Integer range 1..Bufsize := 1;
begin
  loop
    select
      when Filled < Bufsize =>
        accept Deposit(Item : in Integer) do
          Buf(Next_In) := Item;
        end Deposit;
        Next_In := (Next_In mod Bufsize) + 1;
        Filled := Filled + 1;
    or
      ...
    end select;
  end loop;
end Buf_Task;
A Consumer Task
task Consumer;
task body Consumer is
  Stored_Value : Integer;
begin
  loop
    Buf_Task.Fetch(Stored_Value);
    -- consume Stored_Value --
  end loop;
end Consumer;
Task Termination
• The execution of a task is completed if
control has reached the end of its code
body
• If a task has created no dependent tasks
and is completed, it is terminated
• If a task has created dependent tasks and is
completed, it is not terminated until all its
dependent tasks are terminated
The terminate Clause
Message Passing Priorities
Concurrency in Ada 95
Evaluation of Ada
Java Threads
• The concurrent units in Java are methods named
run
– The code of a run method can be in concurrent execution with other such methods
– The process in which the run methods execute is called a
thread
class MyThread extends Thread {
  public void run() {...}
}
...
Thread myTh = new MyThread();
myTh.start();
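A self-contained version of the fragment above (class and output are invented), also showing two basic ways the starter can influence execution: sleep to pause itself and join to wait for the thread to finish.

class MyThread extends Thread {
    public void run() {
        System.out.println("run method executing in its own thread");
    }
}

public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread myTh = new MyThread();
        myTh.start();                 // schedules run() for concurrent execution
        Thread.sleep(10);             // the starter pauses itself for 10 ms
        myTh.join();                  // ...and then waits for myTh to finish
    }
}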
Controlling Thread Execution
Thread Priorities
Competition Synchronization with Java
Threads
• A method that includes the synchronized modifier disallows any other synchronized method from running on the object while it is in execution
…
public synchronized void deposit( int i) {…}
public synchronized int fetch() {…}
…
• The above two methods are synchronized which
prevents them from interfering with each other
• If only a part of a method must be run without interference, it can be synchronized through a synchronized statement
synchronized (expression)
statement
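A short sketch of both forms (the class and its methods are invented): deposit and fetch are synchronized methods, while audit locks the object only around the statement that actually touches the shared data.

class Account {
    private int balance = 0;

    public synchronized void deposit(int amount) { balance += amount; }
    public synchronized int  fetch()             { return balance; }

    public void audit() {
        int snapshot;
        synchronized (this) {                    // synchronized statement
            snapshot = balance;                  // only this part needs exclusion
        }
        System.out.println("balance = " + snapshot);  // runs without the lock
    }
}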
Cooperation Synchronization with Java
Threads
• Cooperation synchronization in Java is
achieved via wait, notify, and notifyAll
methods
– All methods are defined in Object, which is the
root class in Java, so all objects inherit them
• The wait method must be called in a loop
• The notify method is called to tell one waiting thread that the event it was waiting for has happened
• The notifyAll method awakens all of the
threads on the object’s wait list
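A minimal cooperation sketch (class and method names invented): take waits in a loop until a message is available, and put calls notifyAll to wake the waiters.

import java.util.LinkedList;
import java.util.Queue;

class Mailbox {
    private final Queue<String> messages = new LinkedList<>();

    public synchronized void put(String msg) {
        messages.add(msg);
        notifyAll();                             // wake all waiting consumers
    }

    public synchronized String take() throws InterruptedException {
        while (messages.isEmpty()) {             // wait must be called in a loop
            wait();                              // releases the lock while waiting
        }
        return messages.remove();
    }
}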
Java’s Thread Evaluation
C# Threads
Synchronizing Threads
C#’s Concurrency Evaluation
Statement-Level Concurrency
• Objective: Provide a mechanism that the programmer can use to inform the compiler of ways it can map the program onto a multiprocessor architecture
• Minimize communication among processors and with the memories of other processors
High-Performance Fortran
Primary HPF Specifications
• Number of processors
!HPF$ PROCESSORS procs (n)
• Distribution of data
!HPF$ DISTRIBUTE (kind) ONTO procs :: identifier_list
– kind can be BLOCK (distribute data to processors
in blocks) or CYCLIC (distribute data to
processors one element at a time)
• Relate the distribution of one array with that
of another
ALIGN array1_element WITH array2_element
Summary