Unit 3 Concurrency
Concurrency Control
1
Concurrency
• Multiple applications
– Multiprogramming
• Structured application
– Application can be a set of concurrent
processes
• Operating-system structure
– Operating system is a set of processes or
threads
2
Concurrency
3
Difficulties of Concurrency
• Sharing of global resources
• Operating system managing the
allocation of resources optimally
• Difficult to locate programming errors
4
Concurrency
• Communication among processes
• Sharing resources
• Synchronization of multiple processes
• Allocation of processor time
5
Race Condition
• It occurs when multiple processes or threads read and
write shared data items so that the final result depends on the
relative timing of their execution.
• Example 1: P1 updates a to 1 while P2 updates a to 2;
the final value of a depends on which update runs last.
• Example 2: P3 and P4 share global variables b and c, with b = 1 and c = 2.
At P3: b = b + c
At P4: c = b + c
If P3 executes first: b = 3, c = 5
If P4 executes first: b = 4, c = 3
6
Operating System Concerns
• Keep track of various processes
• Allocate and deallocate resources
– Processor time
– Memory
– Files
– I/O devices
• Protect data and resources
• Output of a process must be independent of the
speed of execution of other concurrent
processes
7
Process Interaction
• Processes unaware of each other
• Processes indirectly aware of each other
• Processes directly aware of each other
8
9
Requirements for Mutual Exclusion
• Only one process at a time is allowed in the critical
section for a resource
• A process that halts in its noncritical section must do so
without interfering with other processes
• It is not possible for processes requiring critical section to
be delayed indefinitely: No deadlock or starvation
• A process must not be delayed access to a critical section
when there is no other process using it
• No assumptions are made about relative process speeds or
number of processors.
• A process remains inside its critical section for a finite
time only
10
Mutual Exclusion:Hardware Support
• Interrupt Disabling
– A process runs until it invokes an operating
system service or until it is interrupted
– Disabling interrupts guarantees mutual exclusion
– Processor is limited in its ability to interleave
programs
– Multiprocessing
• disabling interrupts on one processor will not
guarantee mutual exclusion
11
Mutual Exclusion:
Hardware Support
• Special Machine Instructions
– Performed in a single instruction cycle
– Access to the memory location is blocked
for any other instructions
12
Mutual Exclusion:
Hardware Support
• Test and Set Instruction
boolean testset(int i) {
    if (i == 0) {
        i = 1;
        return true;
    }
    else {
        return false;
    }
}
13
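As a sketch of how a test-and-set primitive is used in practice: C11's `atomic_flag_test_and_set()` behaves like the testset instruction above (atomically set the flag and return its old value), so a busy-waiting lock can be built directly on it. This is an illustrative sketch, not part of the slide's pseudocode:

```c
#include <stdatomic.h>

/* A minimal spin lock built on the test-and-set idea. The C11
 * atomic_flag_test_and_set() plays the role of the testset
 * instruction: it atomically sets the flag and returns the
 * previous value. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* keep retrying until the old value was "clear",
     * i.e. until our testset is the one that succeeds */
    while (atomic_flag_test_and_set(&lock))
        ;  /* busy wait (spin) */
}

void release(void) {
    atomic_flag_clear(&lock);
}
```

A thread brackets its critical section with `acquire()` / `release()`; the busy wait here is exactly the disadvantage listed on the next slides.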
Mutual Exclusion:
Hardware Support
• Exchange Instruction
void exchange(int register, int memory)
{
    int temp;
    temp = memory;
    memory = register;
    register = temp;
}
14
Mutual Exclusion Machine
Instructions
• Advantages
– Applicable to any number of processes on
either a single processor or multiple
processors sharing main memory
– It is simple and therefore easy to verify
– It can be used to support multiple critical
sections
15
Mutual Exclusion Machine
Instructions
• Disadvantages
– Busy-waiting consumes processor time
– Starvation is possible when a process leaves
a critical section and more than one process
is waiting.
– Deadlock is possible.
16
Semaphore
• Synchronization tool that does not require busy waiting
• Semaphore S – integer variable
• Two standard operations modify S: wait() and signal()
– Originally called P() and V()
• Less complicated
• Can only be accessed via two indivisible (atomic) operations
– wait(S) {
      while (S <= 0)
          ;  // no-op (busy wait)
      S--;
  }
– signal(S) {
      S++;
  }
17
Semaphore
• Counting semaphore – integer value can range over an
unrestricted domain
• Binary semaphore – integer value can range only between 0
and 1
18
Semaphore Implementation
• Must guarantee that no two processes can execute wait ()
and signal () on the same semaphore at the same time
• Thus, implementation becomes the critical section problem
where the wait and signal code are placed in the critical
section.
– Could now have busy waiting in critical section
implementation
• But implementation code is short
• Little busy waiting if critical section rarely occupied
• Note that applications may spend lots of time in critical
sections and therefore this is not a good solution.
19
Semaphore Implementation with no Busy
waiting
• With each semaphore there is an associated waiting
queue. Each semaphore has two data items:
– value (of type integer)
– pointer to a list of waiting processes
• Two operations:
– block – place the process invoking the operation on
the appropriate waiting queue.
– wakeup – remove one of processes in the waiting
queue and place it in the ready queue.
Semaphore Implementation with no Busy waiting (Cont.)
• Implementation of wait:
  wait(S) {
      value--;
      if (value < 0) {
          add this process to waiting queue
          block();
      }
  }
• Implementation of signal:
  signal(S) {
      value++;
      if (value <= 0) {
          remove a process P from the waiting queue
          wakeup(P);
      }
  }
Deadlock and Starvation
• Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes
• Let S and Q be two semaphores initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);
• Starvation – indefinite blocking. A process may never be
removed from the semaphore queue in which it is suspended.
Semaphores
• Special variable called a semaphore is used for
signaling.
• To transmit signal via semaphore s, a process
executes the primitive semSignal(s).
• To receive signal via semaphore s, a process
executes the primitive semWait(s).
• If a process is waiting for a signal, it is
suspended until that signal is sent.
23
Semaphores
• Semaphore is a variable that has an
integer value
– May be initialized to a nonnegative number
– Wait operation decrements the semaphore
value
– Signal operation increments semaphore
value
24
Semaphore Primitives
25
Binary Semaphore Primitives
26
Mutual Exclusion Using
Semaphores
27
28
29
Mutex
• Similar to a binary semaphore.
• A key difference between the two is that the process
that locks the mutex (sets the value to 0) must be the
one to unlock it (sets the value to 1).
• A mutex is a locking mechanism used to synchronize
access to a resource. Only one task (a thread
or process, depending on the OS abstraction) can acquire the
mutex at a time. There is thus ownership associated with a
mutex, and only the owner can release the lock
(mutex).
30
Problems with Semaphores
• Incorrect use of semaphore operations
results in timing errors, e.g. reversing the order
(signal(mutex) … wait(mutex)), doubling an operation
(wait(mutex) … wait(mutex)), or omitting a wait or signal.
33
Condition Variables as
synchronization mechanism
• condition x, y;
• Two operations on a condition
variable:
– x.wait () – a process that invokes the
operation is suspended.
– x.signal () – resumes one of the processes (if
any) that invoked x.wait ()
34
Monitor with Condition
Variables
35
Monitors
• Monitor is a software module consisting
of one or more procedures, an
initialization sequence and local data.
• Chief characteristics
– Local data variables are accessible only by
the monitor
– Process enters monitor by invoking one of
its procedures
– Only one process may be executing in the
monitor at a time
36
37
Producer/Consumer Problem
• One or more producers are generating
data and placing these in a buffer
• A single consumer is taking items out of
the buffer one at time
• Only one producer or consumer may
access the buffer at any one time
38
Producer
producer:
while (true) {
    /* produce item v */
    b[in] = v;
    in++;
}
39
Consumer
consumer:
while (true) {
    while (in <= out)
        /* do nothing */;
    w = b[out];
    out++;
    /* consume item w */
}
40
Producer/Consumer Problem
41
Producer with Circular Buffer
producer:
while (true) {
    /* produce item v */
    while ((in + 1) % n == out)
        /* do nothing */;
    b[in] = v;
    in = (in + 1) % n;
}
42
Consumer with Circular
Buffer
consumer:
while (true) {
    while (in == out)
        /* do nothing */;
    w = b[out];
    out = (out + 1) % n;
    /* consume item w */
}
43
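The circular-buffer scheme above can also be protected with counting semaphores instead of busy-waiting: one semaphore counts free slots, another counts items, and a mutex guards the indices. A sketch (the capacity N and the helper names buffer_init/put/get are assumptions of this example):

```c
#include <semaphore.h>
#include <pthread.h>

#define N 8                       /* buffer capacity (assumed) */

static int b[N];
static int in = 0, out = 0;
static sem_t empty, full;         /* counting semaphores: free slots / items */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void buffer_init(void) {
    sem_init(&empty, 0, N);       /* N empty slots initially */
    sem_init(&full, 0, 0);        /* no items initially */
}

void put(int v) {                 /* producer side */
    sem_wait(&empty);             /* block if buffer is full */
    pthread_mutex_lock(&m);
    b[in] = v;
    in = (in + 1) % N;
    pthread_mutex_unlock(&m);
    sem_post(&full);              /* one more item available */
}

int get(void) {                   /* consumer side */
    int v;
    sem_wait(&full);              /* block if buffer is empty */
    pthread_mutex_lock(&m);
    v = b[out];
    out = (out + 1) % N;
    pthread_mutex_unlock(&m);
    sem_post(&empty);             /* one more free slot */
    return v;
}
```

A blocked producer or consumer now sleeps instead of spinning, which removes the busy-wait loops of the slide versions.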
44
Readers/Writers Problem
• A database is shared by several concurrent
processes. Some processes want only to read (the
readers) while others want to update or write
(the writers).
• If reading is done simultaneously, no adverse
effect will result, but simultaneous writes will
create chaos.
• To avoid this difficulty, we require exclusive
access to the shared data while writing. This
synchronization problem is referred to as the
readers/writers problem.
45
Readers/Writers Problem
• Two approaches to the readers/writers problem:
1. No reader should wait for other
readers to finish simply because a writer is
waiting.
2. When a writer is ready, that writer
performs its write as soon as possible;
no new reader starts reading.
• Both solutions may result in starvation.
46
Readers/Writers Problem
• Reader processes share the following data structures:
semaphore mutex, wrt;
int readcount;
• Semaphores mutex and wrt are initialized to 1, and
readcount = 0
• wrt – common to both reader and writer processes. It functions as a
mutual exclusion semaphore for the writers.
• mutex – used to ensure mutual exclusion when the
variable readcount is updated.
• readcount – the count of processes that are currently
reading the object.
47
Readers/Writers Problem
• Structure of writer process-
do {
    wait(wrt);
    ......
    // writing is performed
    ......
    signal(wrt);
} while (TRUE);
48
Readers/Writers Problem
• Structure of reader process-
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ......
    // reading is performed
    ......
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (TRUE);
49
Readers/Writers Problem
• Any number of readers may
simultaneously read the file
• Only one writer at a time may write to
the file
• If a writer is writing to the file, no reader
may read it
50
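The reader and writer protocols from the preceding slides translate directly into POSIX semaphores. A sketch of the entry/exit routines (the function names rw_init/start_read/end_read/start_write/end_write are assumptions of this example; this is the readers-preference solution, so writers can starve):

```c
#include <semaphore.h>

/* Readers/writers entry and exit protocols with POSIX unnamed
 * semaphores, following the slide pseudocode. */
static sem_t mutex, wrt;
static int readcount = 0;

void rw_init(void) {
    sem_init(&mutex, 0, 1);   /* protects readcount */
    sem_init(&wrt, 0, 1);     /* exclusive access for writers */
}

void start_read(void) {
    sem_wait(&mutex);
    if (++readcount == 1)
        sem_wait(&wrt);       /* first reader locks out writers */
    sem_post(&mutex);
}

void end_read(void) {
    sem_wait(&mutex);
    if (--readcount == 0)
        sem_post(&wrt);       /* last reader lets writers back in */
    sem_post(&mutex);
}

void start_write(void) { sem_wait(&wrt); }
void end_write(void)   { sem_post(&wrt); }
```

Any number of readers can be between start_read() and end_read() at once; a writer between start_write() and end_write() excludes everyone else.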
Interprocess Communication : Pipes
(Unnamed Pipes)
• Pipes direct the output of one program (command) as
input to another program (command).
• Half-duplex UNIX pipes.
• Full-duplex pipes.
• Unnamed pipes can be used only with related
processes (parent/child or child/child) and exist only for
as long as the processes using them exist.
• ls | sort | lp
51
pipe()
int pipe(int fd[2]) -- creates a pipe and returns two file descriptors, fd[0], fd[1].
• fd[0] is opened for reading, fd[1] for writing.
• pipe() returns 0 on success, -1 on failure and sets errno accordingly.
• The standard programming model is that after the pipe has been set up, two (or more)
cooperative processes will be created by a fork and data will be passed
using read() and write().
• Pipes opened with pipe() should be closed with close(int fd).
Example: Parent writes to a child
int pdes[2];
pipe(pdes);
if (fork() == 0)
{   /* child */
    close(pdes[1]);
    read( pdes[0]);   /* read from parent */
    .....
}
else
{   /* parent */
    close(pdes[0]);
    write( pdes[1]);  /* write to child */
    .....
}
52
Pipes
• Between processes - after a fork(), writes to fildes[1] by one
process can be read on fildes[0] by the other.
• Even more useful: two pipes, fildes_a and fildes_b .
• After a fork()
• Writes to fildes_a[1] by one process can be read on fildes_a[0] by
the other, and
• Writes to fildes_b[1] by that process can be read on fildes_b[0]
by the first process.
• Usually, the unused end of the pipe is closed by the process.
• If process A is writing and process B is reading, then process A
would close fildes[0] and process B would close fildes[1]
• Reading from a pipe whose write end has been closed returns 0
(end of file).
• Writing to a pipe whose read end has been closed delivers the
SIGPIPE signal to the writer (and write() fails with EPIPE).
53
Half-duplex UNIX pipes
#include <stdio.h>
#include <unistd.h>
.....
else {
    /* Parent process closes up output side of pipe */
    close(fd[1]);
}
.....
54
Shared Memory
• Shared Memory is an efficient means of passing data between
programs. One program creates a memory portion which other
processes (if permitted) can access.
• SM can be used in a uniprocessor as well as in a multiprocessor.
• It can be used to communicate between different processes of the
same user, different processes of different users, and even
between processes that run at different times.
• It is also possible to broadcast data from one process to many
others and to collect data from many to one; hence we can say that
SM is a multidimensional communication medium.
• SM allows multiple processes to share virtual memory space to
exchange and change its content. This requires process
synchronization to coordinate access to the shared memory
segment.
55
Shared Memory
• A shared memory is an extra piece of memory that is attached to
some address spaces for their owners to use. As a result, all of
these processes share the same memory segment and have access
to it.
• Consequently, race conditions may occur if memory accesses are
not handled properly.
56
Shared Memory
57
Shared Memory System Calls
58
shmget()
• shmget() is used to obtain access to a shared memory
segment.
• It is prototyped by:
int shmget(key_t key, size_t size, int shmflg);
- key argument is an access value associated with the
shared memory segment ID.
- size argument is the size in bytes of the requested shared
memory.
- shmflg argument specifies the initial access permissions and
creation control flags.
• When the call succeeds, it returns the shared memory segment
ID. This call is also used to get the ID of an existing shared
segment (from a process requesting sharing of some existing
memory portion).
59
shmget()
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
...
key_t key;   /* key to be passed to shmget() */
int shmflg;  /* shmflg to be passed to shmget() */
int shmid;   /* return value from shmget() */
int size;    /* size to be passed to shmget() */
...
key = ...
size = ...
shmflg = ...
if ((shmid = shmget(key, size, shmflg)) == -1) {
    perror("shmget: shmget failed");
    exit(1);
}
else {
    (void) fprintf(stderr, "shmget: shmget returned %d\n", shmid);
    exit(0);
}
...
60
shmctl()
• shmctl() is used to alter the permissions and other characteristics
of a shared memory segment.
• It is prototyped as follows:
int shmctl(int shmid, int cmd, struct shmid_ds *buf);
- The process must have an effective ID of owner, creator or
superuser to perform this command.
- cmd argument is one of following control commands:
SHM_LOCK-- Lock the specified shared memory segment in
memory. The process must have the effective ID of superuser to
perform this command.
SHM_UNLOCK-- Unlock the shared memory segment. The
process must have the effective ID of superuser to perform this
command.
61
shmctl()
IPC_STAT-- Return the status information contained in the
control structure and place it in the buffer pointed to by buf.
The process must have read permission on the segment to
perform this command.
IPC_SET-- Set the effective user and group identification and
access permissions. The process must have an effective ID of
owner, creator or superuser to perform this command.
IPC_RMID-- Remove the shared memory segment.
62
shmctl()
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
...
int cmd;    /* command code for shmctl() */
int shmid;  /* segment ID */
int rtrn;   /* return value from shmctl() */
struct shmid_ds shmid_ds;  /* shared memory data structure to hold results */
...
shmid = ...
cmd = ...
if ((rtrn = shmctl(shmid, cmd, &shmid_ds)) == -1) {
    perror("shmctl: shmctl failed");
    exit(1);
}
...
63
shmat() and shmdt()
• shmat() and shmdt() are used to attach and detach shared
memory segments.
• They are prototyped as follows:
void *shmat(int shmid, const void *shmaddr, int shmflg);
int shmdt(const void *shmaddr);
• shmat() returns a pointer, shmaddr, to the head of the shared
segment associated with a valid shmid.
• shmdt() detaches the shared memory segment located at the
address indicated by shmaddr.
64
Shared Memory for client server mechanism
• For a server, it should be started before any client. The server
should perform the following tasks:
– Ask for a shared memory with a memory key and memorize
the returned shared memory ID. This is performed by system
call shmget().
– Attach this shared memory to the server's address space with
system call shmat().
– Initialize the shared memory, if necessary.
– Do something and wait for all clients' completion.
– Detach the shared memory with system call shmdt().
– Remove the shared memory with system call shmctl().
65
Shared Memory for client server mechanism
• For the client part, the procedure is almost the same:
– Ask for a shared memory with the same memory key and
memorize the returned shared memory ID.
– Attach this shared memory to the client's address space.
– Use the memory.
– Detach all shared memory segments, if necessary.
– Exit.
66