Process Synchronization: Sourav Banerjee, KGEC Kalyani

The document discusses process synchronization and mutual exclusion in multithreaded applications. It describes how allowing threads to access shared mutable data simultaneously can lead to race conditions and incorrect results. It introduces the concept of using mutual exclusion to ensure only one thread can be in its critical section at a time when accessing shared resources, preventing issues. Various techniques for implementing mutual exclusion primitives like enterMutualExclusion() and exitMutualExclusion() are then discussed.
Copyright
© Attribution Non-Commercial (BY-NC)

Process synchronization

Sourav Banerjee
KGEC
Kalyani
Contents
• Mutual exclusion
Mutual exclusion
• Consider a mail server that processes e-mail for an organization. Suppose
we want the system to continuously monitor the total number of e-mails
that have been sent since the day began. Assume that the receipt of an
e-mail is handled by one of several threads. Each time one of these threads
receives an e-mail from a user, the thread increments a process-wide shared
variable, mailCount, by 1. Consider what happens if two threads attempt to
increment mailCount simultaneously. First, assume that each thread runs
the assembly-language code:
LOAD mailCount
ADD 1
STORE mailCount
Continuation….
• Assume that the LOAD instruction copies mailCount from memory to a
register, the ADD instruction adds the immediate constant 1 to the value in
the register, and the STORE instruction copies the value in the register
back to memory. Suppose mailCount is currently 21687. Now suppose the first
thread executes the LOAD and ADD instructions, thus leaving 21688 in the
register (but not yet updating the value of mailCount in memory, which is
still 21687). Then, due to a quantum expiration, the first thread loses the
processor and the system context switches to the second thread. The second
thread now executes all three instructions, thus setting mailCount to
21688. This thread then loses the processor, and the system context
switches back to the first thread, which continues by executing the STORE
instruction, also placing 21688 into mailCount. Due to the uncontrolled
access to the shared variable mailCount, the system has essentially lost
track of one of the e-mails; mailCount should be 21689.
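The interleaving just described can be replayed deterministically in code. The sketch below (class and method names are ours, not from the slides) models each thread's register as a local variable and hard-codes the unlucky schedule:

```java
// Deterministic replay of the lost-update interleaving described above.
// Each "register" is a local copy of the shared variable, as in the
// LOAD/ADD/STORE model; the context-switch points are hard-coded.
public class LostUpdateDemo {
    static int mailCount = 21687; // shared variable

    static int simulateLostUpdate() {
        int reg1 = mailCount;     // T1: LOAD
        reg1 = reg1 + 1;          // T1: ADD 1 -> 21688 in T1's register
        // quantum expires: context switch to T2, which runs all three steps
        int reg2 = mailCount;     // T2: LOAD (memory still holds 21687)
        reg2 = reg2 + 1;          // T2: ADD 1
        mailCount = reg2;         // T2: STORE -> mailCount = 21688
        // switch back to T1, which finishes with its stale register value
        mailCount = reg1;         // T1: STORE -> also 21688
        return mailCount;         // one increment lost: should be 21689
    }

    public static void main(String[] args) {
        System.out.println(simulateLostUpdate()); // prints 21688, not 21689
    }
}
```

Because the schedule is fixed, the result is reproducible; with real threads the same loss occurs only on unlucky interleavings, which is what makes such bugs hard to find.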
Continuation….
• In the case of an e-mail management application, such an error may seem
minor. A similar error occurring in a mission-critical application such as
air traffic control could cost lives.
• The cause of this incorrect result is the unsynchronized writing of the
shared variable mailCount. Clearly, many concurrent threads may read data
simultaneously without this difficulty. But when one thread reads data that
another thread is writing, or when one thread writes data that another
thread is also writing, indeterminate results can occur.
• We can solve this problem by granting each thread exclusive access to
mailCount. While one thread increments the shared variable, all other
threads desiring to do so will be made to wait. When the executing thread
finishes accessing the shared variable, the system will allow one of the
waiting threads to proceed. This is called serializing access to the shared
variable.
Continuation….
• In this manner, threads cannot access the shared data simultaneously. As
each thread proceeds to update the shared variable, all others are excluded
from doing so at the same time. This is called mutual exclusion.
Producer/Consumer relationship
• In a producer/consumer relationship, the producer portion of the
application generates data and stores it in a shared object, and the
consumer portion reads data from the shared object.
• One example is print spooling. A word processor spools data into a buffer
(typically a file), and that data is subsequently consumed by the printer
as it prints the document.
• Consider a multithreaded producer/consumer relationship implemented in
Java in which a producer thread generates data and places it into a buffer
capable of holding a single value, and a consumer thread reads data from
the buffer. If the producer, waiting to put the next data into the buffer,
determines that the consumer has not yet read the previous data from the
buffer, the producer should call wait so the consumer gets a chance to read
the unconsumed data before it is overwritten.

Deitel 188-196; Sourav Banerjee, CSE, KGEC


Continuation…
• When the consumer reads the data, it should call notify to allow the
(possibly waiting) producer to store the next value.
• If the consumer finds the buffer empty (because the producer has not yet
produced data), it should call wait to place itself in the waiting state;
otherwise, the consumer may read “garbage” from an empty buffer. When the
producer places the next data into the buffer, the producer should call
notify to allow the (possibly waiting) consumer thread to proceed, so the
consumer can read the next data.
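The wait/notify protocol described above can be sketched as a single-slot buffer in Java. The class and method names are ours; a production design would likely use java.util.concurrent instead:

```java
// A single-slot buffer guarded by wait()/notify(), as described above.
public class SingleSlotBuffer {
    private int value;
    private boolean occupied = false; // true when unconsumed data is present

    public synchronized void put(int v) throws InterruptedException {
        while (occupied)          // previous value not yet consumed
            wait();               // let the consumer catch up
        value = v;
        occupied = true;
        notify();                 // wake a (possibly waiting) consumer
    }

    public synchronized int get() throws InterruptedException {
        while (!occupied)         // buffer empty: nothing to read yet
            wait();               // avoid reading "garbage"
        occupied = false;
        notify();                 // wake a (possibly waiting) producer
        return value;
    }

    // Demo: a producer writes 1..5 while a consumer sums what it reads.
    public static int demo() {
        SingleSlotBuffer buf = new SingleSlotBuffer();
        final int[] sum = {0};
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 5; i++) buf.put(i); }
            catch (InterruptedException e) { throw new RuntimeException(e); }
        });
        Thread consumer = new Thread(() -> {
            try { for (int i = 1; i <= 5; i++) sum[0] += buf.get(); }
            catch (InterruptedException e) { throw new RuntimeException(e); }
        });
        producer.start(); consumer.start();
        try { producer.join(); consumer.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return sum[0]; // 1+2+3+4+5 = 15
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Note the wait calls are inside while loops, not if statements, so a woken thread re-checks the condition before proceeding.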
Critical Section
• Mutual exclusion (M.E.) needs to be enforced only when threads access
shared modifiable data. When threads are performing operations that do not
conflict with one another (e.g., reading variables), the system should
allow the threads to proceed concurrently. When a thread is accessing
shared modifiable data, it is said to be in a critical section (or critical
region). To prevent errors, the system should ensure that only one thread
at a time can execute the instructions in its critical section for a
particular resource. If one thread attempts to enter its critical section
while another thread is in its own, the thread should wait until the
executing thread exits the critical section. Once a thread has exited its
critical section, a waiting thread (or one of the waiting threads, if there
are several) may enter and execute its critical section.
• A thread in a critical section has exclusive access to shared, modifiable
data, and all other threads currently requiring access to that data are
kept waiting. Therefore, a thread should execute a critical section as
quickly as possible. A thread must not block inside its critical section,
and critical sections must be coded carefully to avoid, for example, the
possibility of infinite loops. If a thread in a critical section
terminates, either voluntarily or involuntarily, then the OS, in performing
its termination housekeeping, must release mutual exclusion so that other
threads may enter their critical sections.
• Enforcing mutually exclusive access to critical sections is one of the key
problems in concurrent programming. Many solutions have been devised: some
software solutions and some hardware solutions.
Mutual Exclusion Primitives
• The following pseudocode properly describes the e-mail counting mechanism
discussed earlier. Notice that we use the constructs enterMutualExclusion()
and exitMutualExclusion(). These constructs encapsulate the thread’s
critical section: when a thread wants to enter its critical section, the
thread must first execute enterMutualExclusion(); when a thread exits the
critical section, it executes exitMutualExclusion(). Because these
constructs invoke the most fundamental operations inherent to mutual
exclusion, they are sometimes called mutual exclusion primitives.
while (true) {
Receive e-mail // executing outside Critical Section
enterMutualExclusion() //want to enter critical section
Increment mailCount // executing inside Critical Section
exitMutualExclusion() // leaving Critical Section
}
• Assume that threads T1 and T2 of the same process are both executing in
the system. Each thread manages e-mail messages, and each thread
contains instructions that correspond to the preceding pseudocode. When
T1 reaches the enterMutualExclusion() line, the system must determine
whether T2 is already in its critical section. If T2 is not in its critical
section, then T1 enters its critical section, increments shared variable
mailCount and executes exitMutualExclusion() to indicate that T1 has left
its critical section. If, on the other hand, when T1 executes
enterMutualExclusion(), T2 is in its critical section, then T1 must wait until
T2 executes exitMutualExclusion(), at which point T1 enters its critical
section. If T1 and T2 simultaneously execute enterMutualExclusion(), then
only one of the threads will be allowed to proceed, and the other will be
kept waiting. For the moment, we shall assume that the system randomly
selects the thread that will proceed.
• Disadvantage: such a policy could lead to indefinite postponement of one
of the threads if the other is always selected when the threads repeatedly
try to enter their critical sections.
Implementing Mutual Exclusion Primitives
• The enterMutualExclusion() and exitMutualExclusion() primitives should
exhibit the following properties:
1. The solution is implemented purely in software on a machine without
specially designed mutual exclusion machine-language instructions. Each
machine-language instruction is executed indivisibly; i.e., once started,
it completes without interruption. If a system contains multiple
processors, several threads could try to access the same data item
simultaneously. As we will see, many mutual exclusion solutions for
uniprocessor systems rely on access to shared data, meaning that they
might not work on multiprocessor systems. For simplicity, we will assume
that the system contains one processor.
2. No assumption can be made about the relative speeds of asynchronous
concurrent threads. This means that any solution must assume that a
thread can be preempted or resumed at any time during its execution and
that the rate of execution of each thread may not be constant or
predictable.
3. A thread that is executing instructions outside its critical section cannot
prevent any other threads from entering their critical sections.
4. A thread must not be indefinitely postponed from entering its critical
section.
**** Once inside an improperly coded critical section, a thread could certainly
“misbehave” in ways that could lead to indefinite postponement or even
deadlock.
Dekker’s Algorithm
• An elegant software implementation of Mutual Exclusion.
• This is an implementation of mutual exclusion for two threads.
• After this you may study the more efficient algorithms of G. L. Peterson
and L. Lamport.
• This algorithm culminates in a correct software implementation of mutual
exclusion that is free of deadlock and indefinite postponement.
• See the next slide for enforcing mutual exclusion between two threads. The
pseudocode is presented using a C-like syntax. Each thread’s instructions
can be broken into three parts: noncritical instructions (i.e., instructions
that do not modify shared data), critical instructions (i.e., instructions
that do modify shared data), and the instructions that ensure mutual
exclusion (i.e., the instructions that implement enterMutualExclusion() and
exitMutualExclusion()). Each thread repeatedly enters and exits its
critical section until it is done.
Dekker’s Algorithm (first version)
Continuation….
• Under this version of mutual exclusion, the system uses a variable called
threadNumber, to which both threads have access. Before the system starts
executing the threads, threadNumber is set to 1 (line 3). Then the system starts both
threads. The enterMutualExclusion() primitive is implemented as a single while
loop that spins until variable threadNumber becomes equal to the
number of the thread (lines 13 and 31 in threads T1 and T2, respectively). The
exitMutualExclusion() primitive is implemented as a single instruction that sets
threadNumber to the number of the other thread (lines 17 and 35 in threads T1 and
T2, respectively). Assume T1 begins executing first. The thread executes the while
loop (line 13) that acts as enterMutualExclusion(). Because threadNumber is
initially 1, T1 enters its critical section (the code indicated by the comment in line
15). Now assume that the system suspends T1 and begins executing T2. The second
thread finds threadNumber equal to 1 and remains “locked” in the while loop (line
31). This guarantees mutual exclusion, because T2 cannot enter its critical section
until T1 exits its critical section and sets threadNumber to 2 (line 17).
• Eventually T1 finishes executing in its critical section (recall that we are assuming
that the critical sections contain no infinite loops and that threads do not die or
block inside critical sections). At this point, T1 sets threadNumber to 2 (line 17)
and continues to its “noncritical” instructions. Now T2 is free to enter its critical
section.
• Although this implementation guarantees mutual exclusion, the solution has
significant drawbacks. In the enterMutualExclusion() primitive, the thread uses the
processor to perform essentially no work (i.e., the thread repeatedly tests the value
of threadNumber). Such a thread is said to be busy waiting. Busy waiting can be an
ineffective technique for implementing mutual exclusion on uniprocessor systems.
Recall that one goal of multiprogramming is to increase processor utilization. If our
mutual exclusion primitive uses processor cycles to perform work that is not
essential to the thread, then processor time is wasted. This overhead, however, is
limited to short periods of time (i.e., when there is contention between threads to
enter their critical sections).
Dekker’s Algo ( A proper
solutions)
• Dekker’s algo still uses a flag to indicate a thread’s desire to enter its
critical section, but it also incorporate the concept of a “favored thread”
that will enter the critical section in the case of a tie (i.e. when each thread
simultaneously wishes to enter its critical section).
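The full listing is a figure not reproduced in this copy. A Java sketch of Dekker's algorithm as just described might look like the following; the class and field names are ours, and volatile supplies the sequentially consistent visibility the pseudocode assumes:

```java
// Dekker's algorithm: per-thread "wants to enter" flags plus a favoredThread
// tie-breaker, as described above.
public class Dekker {
    static volatile boolean wantsToEnter0 = false, wantsToEnter1 = false;
    static volatile int favoredThread = 0;
    static int sharedCount = 0;

    static void enter(int self) {
        int other = 1 - self;
        setWants(self, true);
        while (wants(other)) {                 // contention: other wants in too
            if (favoredThread == other) {      // tie goes to the favored thread
                setWants(self, false);         // back off...
                while (favoredThread == other) { } // ...until we become favored
                setWants(self, true);          // then try again
            }
        }
    }

    static void exit(int self) {
        favoredThread = 1 - self;  // favor the other thread next time
        setWants(self, false);
    }

    static boolean wants(int id) { return id == 0 ? wantsToEnter0 : wantsToEnter1; }
    static void setWants(int id, boolean v) {
        if (id == 0) wantsToEnter0 = v; else wantsToEnter1 = v;
    }

    public static int demo(int iterations) {
        Thread t0 = new Thread(() -> {
            for (int i = 0; i < iterations; i++) { enter(0); sharedCount++; exit(0); }
        });
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < iterations; i++) { enter(1); sharedCount++; exit(1); }
        });
        t0.start(); t1.start();
        try { t0.join(); t1.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return sharedCount; // 2 * iterations: no lost updates
    }

    public static void main(String[] args) {
        System.out.println(demo(10000));
    }
}
```

Unlike the first version, a thread here can enter repeatedly when the other is not interested, and the favored-thread tie-breaker rules out both deadlock and indefinite postponement.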
Test and set instruction
Swap instruction
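The test-and-set and swap slides are figures that did not survive in this copy. As an illustration (not from the original slides), Java's AtomicBoolean.getAndSet atomically swaps in a new value and returns the old one, which is exactly the behavior of a test-and-set instruction on a boolean; a spin lock built on it might look like this (class names are ours):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A spin lock built on an atomic test-and-set: getAndSet atomically reads
// the old value and writes the new one in a single indivisible step.
public class TestAndSetLock {
    private final AtomicBoolean occupied = new AtomicBoolean(false);

    public void lock() {
        // test-and-set: loop until we observe "false" (the lock was free)
        while (occupied.getAndSet(true)) { /* busy wait */ }
    }

    public void unlock() {
        occupied.set(false);
    }

    // Demo: two threads increment a shared counter under the lock.
    static int count = 0;

    public static int demo(int iterations) {
        TestAndSetLock lock = new TestAndSetLock();
        Runnable worker = () -> {
            for (int i = 0; i < iterations; i++) {
                lock.lock();
                count++;       // critical section
                lock.unlock();
            }
        };
        Thread t1 = new Thread(worker), t2 = new Thread(worker);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(demo(10000));
    }
}
```

The hardware instruction makes the test and the set indivisible, which is precisely what the software-only solutions above had to work hard to achieve.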
Semaphore
• It is used to implement mutual exclusion. A semaphore contains a protected
variable whose integer value, once initialized, can be accessed and altered
by only one of two operations, P and V [from the Dutch words “proberen,”
meaning “to test,” and “verhogen,” meaning “to increment”]. A thread calls
the P operation (also called the wait operation) when it wants to enter its
critical section and calls the V operation (also called the signal
operation) when it wants to exit its critical section. Before a semaphore
can be used for synchronization, it must be initialized. Initialization
sets the value of the protected variable to indicate that no thread is
executing in its critical section. It also creates a queue that stores
references to threads waiting to enter their critical sections protected by
that semaphore.
Mutual Exclusion with Semaphore
• The next slide shows how mutual exclusion is enforced using a semaphore.
The system initializes the semaphore occupied to 1; such semaphores are
called binary semaphores. This value indicates that the critical section is
available.
• The program in the next slide uses the P and V operations as the
enterMutualExclusion() and exitMutualExclusion() primitives. When a
thread wants to enter a critical section that is protected by semaphore S, the
thread calls P(S), which operates as follows
If S > 0
    S = S - 1
Else
    the calling thread is placed in the semaphore’s queue of waiting threads
Because the semaphore is initialized to 1, only one thread will be allowed
into the critical section at a time. When this thread calls P, the
semaphore’s value is reduced to 0. When another thread calls P, that thread
will be blocked.
After a thread finishes executing in its critical section, the thread calls V(S).
This operation proceeds as follows
If any threads are waiting on S
    resume the “next” waiting thread in the semaphore’s queue
Else
    S = S + 1
• Thus if any threads are waiting on the semaphore, the “next” thread, which
depends on the semaphore implementation, executes. Otherwise, the value
of S is incremented, allowing one more thread to enter its critical section.
• A proper semaphore implementation requires that P and V be indivisible
operations. Also, if several threads attempt a P(S) simultaneously, the
implementation should guarantee that only one thread will be allowed to
proceed. The others will be kept waiting, but the implementation of P and
V can guarantee that threads will not suffer indefinite postponement.
• For example, when a thread blocks on a semaphore, the system might place
that thread in a queue associated with the semaphore. When another thread
calls V, the system can select one of the threads in the queue to release.
We shall assume a FIFO queue discipline for threads blocked on a semaphore
(to avoid indefinite postponement).
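As an illustration (not part of the original slides), java.util.concurrent.Semaphore provides P and V as acquire() and release(); a binary semaphore protecting the mailCount update might look like this, with the fairness flag giving the FIFO discipline assumed above:

```java
import java.util.concurrent.Semaphore;

// Mutual exclusion with a binary semaphore: acquire() plays the role of P
// and release() the role of V. The "true" argument requests FIFO ordering
// of waiting threads, matching the queue discipline assumed above.
public class BinarySemaphoreDemo {
    static final Semaphore occupied = new Semaphore(1, true); // initialized to 1
    static int mailCount = 0;

    public static int demo(int iterations) {
        Runnable worker = () -> {
            for (int i = 0; i < iterations; i++) {
                try {
                    occupied.acquire();    // P(S): blocks if S == 0
                } catch (InterruptedException e) {
                    throw new RuntimeException(e);
                }
                try {
                    mailCount++;           // critical section
                } finally {
                    occupied.release();    // V(S): always exit the critical section
                }
            }
        };
        Thread t1 = new Thread(worker), t2 = new Thread(worker);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return mailCount;
    }

    public static void main(String[] args) {
        System.out.println(demo(10000));
    }
}
```

Placing release() in a finally block ensures the semaphore is signaled even if the critical section throws, echoing the earlier point that a terminating thread must not leave mutual exclusion held.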
Counting Semaphore
• Also called a general semaphore; it is initialized to an integer value
greater than zero, and commonly greater than one. A counting semaphore is
particularly useful when a resource is to be allocated from a pool of
identical resources. The semaphore is initialized to the number of
resources in the pool. Each P operation decrements the semaphore by 1,
indicating that another resource has been removed from the pool and is in
use by a thread. Each V operation increments the semaphore by 1, indicating
that a thread has returned a resource to the pool and the resource may be
reallocated to another thread. If a thread attempts a P operation when the
semaphore has been decremented to zero, then the thread must wait until a
resource is returned to the pool by a V operation.
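The resource-pool pattern just described can be sketched with java.util.concurrent.Semaphore (class and method names are ours):

```java
import java.util.concurrent.Semaphore;

// A pool of identical resources guarded by a counting semaphore, as
// described above: the semaphore is initialized to the pool size,
// acquiring plays the role of P, returning plays the role of V.
public class ResourcePool {
    private final Semaphore available;
    private int inUse = 0; // guarded by the synchronized blocks below

    public ResourcePool(int poolSize) {
        available = new Semaphore(poolSize); // one permit per resource
    }

    public void acquireResource() {
        available.acquireUninterruptibly(); // P: wait if the pool is empty
        synchronized (this) { inUse++; }
    }

    public void releaseResource() {
        synchronized (this) { inUse--; }
        available.release();                // V: return the resource to the pool
    }

    public synchronized int inUse() { return inUse; }
    public int permitsLeft() { return available.availablePermits(); }
}
```

A thread calling acquireResource() when all permits are taken simply blocks until some other thread calls releaseResource(), exactly the waiting behavior described above.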
Implementing Semaphore
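The implementation slide is a figure not reproduced in this copy. As a hedged sketch of what it likely shows, a semaphore can be built from synchronized/wait/notify, which give P and V the required indivisibility (class name is ours):

```java
// A semaphore implemented with synchronized/wait/notify: P and V are
// indivisible operations on a protected counter, and the JVM's wait set
// holds the threads blocked on the semaphore.
public class SimpleSemaphore {
    private int value; // the protected variable

    public SimpleSemaphore(int initial) {
        value = initial;
    }

    // P (wait): decrement if positive, otherwise block until signaled
    public synchronized void P() {
        while (value == 0) {
            try {
                wait(); // join the semaphore's queue of waiting threads
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        value--;
    }

    // V (signal): increment and wake one waiting thread, if any
    public synchronized void V() {
        value++;
        notify();
    }

    public synchronized int value() { return value; }
}
```

One caveat: the JVM's wait set is not guaranteed to be FIFO, so unlike the FIFO discipline assumed earlier, this sketch does not by itself rule out indefinite postponement.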
