Week 8, 9
PROCESS SYNCHRONIZATION
USMAN AZIZ
OUTLINE

• Process Synchronization Introduction.
• The Critical Section Problem.
• Peterson's Solution.
• Synchronization Hardware.
• Semaphores and their Applications.
• Synchronization Examples.
WHAT IS PROCESS SYNCHRONIZATION?

• Several processes run concurrently in an operating system.
• Some of them share resources, which can give rise to problems such as data inconsistency.
• Example: one process is changing data at a memory location while another process is trying to read data from the same memory location. The data read by the second process may be wrong (see the sketch under RACE CONDITION below).
WHY SYNCHRONIZATION IS NEEDED

• Producer-Consumer Problem (Bounded-Buffer Problem):
• We must ensure that the producer does not add data when the buffer is full and that the consumer does not take data when the buffer is empty.
SOLUTIONS TO THE PROBLEM

• Using Semaphores: a semaphore is a variable or abstract data type used to control access to a common resource by multiple processes in a concurrent system, such as a multiprogramming operating system.
• Using Monitors: a monitor is a synchronization construct that gives threads both mutual exclusion and the ability to wait (block) until a certain condition becomes true (a sketch follows this list).
• Atomic Transactions: alternatively, the buffer can be managed so that atomic read-modify-write access to shared variables is avoided, because each of the two count variables is updated by only a single thread. These counters only ever increase, and their relationship remains correct even when the values wrap around on integer overflow.
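As a hedged illustration of the monitor idea, not from the original slides: a POSIX mutex plus condition variable gives the same "wait until a condition becomes true" behaviour. The names data_ready, wait_for_data and publish_data are illustrative.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static bool data_ready = false;

/* Waiting side: block (not spin) until the condition becomes true. */
void wait_for_data(void) {
    pthread_mutex_lock(&lock);
    while (!data_ready)                       /* re-check after every wakeup        */
        pthread_cond_wait(&ready, &lock);     /* atomically release lock and sleep  */
    /* ... use the data while still holding the lock ... */
    pthread_mutex_unlock(&lock);
}

/* Signalling side: make the condition true and wake one waiter. */
void publish_data(void) {
    pthread_mutex_lock(&lock);
    data_ready = true;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
}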
RACE CONDITION
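A minimal sketch of the race described in the introduction, assuming POSIX threads: two threads increment the same counter with no protection, so the read-modify-write updates interleave and increments get lost. The loop count and names are illustrative.

#include <pthread.h>
#include <stdio.h>

static int counter = 0;                  /* shared variable, unprotected  */

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;                       /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* expected 200000; often less   */
    return 0;
}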
CRITICAL SECTION PROBLEM

• A critical section is a section of code, common to n cooperating processes, in which the processes may access common variables.
• A critical section environment contains:
• Entry Section: code requesting entry into the critical section.
• Critical Section: code in which only one process can execute at any one time.
• Exit Section: the end of the critical section, releasing it or allowing others in.
• Remainder Section: the rest of the code AFTER the critical section.
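In outline, each cooperating process then repeats the four sections in a loop. This is only a sketch; entry_section, exit_section and the other helpers are hypothetical placeholders for whatever protocol is chosen.

for (;;) {
    entry_section();        /* request permission to enter the CS        */
    critical_section();     /* only one process may execute this at once */
    exit_section();         /* release the CS / allow others in          */
    remainder_section();    /* everything after the critical section     */
}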
A SOLUTION MUST ENFORCE THE FOLLOWING THREE RULES

• Mutual Exclusion: no more than one process can execute in its critical section at any one time.
• Progress: if no process is in the critical section and some process wants in, then the processes not in their remainder sections must be able to decide in finite time which one goes in.
• Bounded Waiting: every requesting process must eventually be let into the critical section.
PETERSON’S SOLUTION

• To handle the Critical Section (CS) problem, Peterson gave an algorithm with bounded waiting.
• Suppose there are N processes (P1, P2, ..., PN) and each of them at some point needs to enter the critical section.
• A FLAG[] array of size N is maintained, with every entry false by default; whenever a process needs to enter the critical section it sets its flag to true, i.e. if Pi wants to enter it sets FLAG[i] = TRUE.
• There is another variable called TURN which indicates the number of the process that may currently enter the CS; on exiting, a process changes TURN to another number from among the list of ready processes.
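Peterson's algorithm is usually shown for two processes. A sketch in C, where i is this process (0 or 1) and j is the other; on real hardware the shared variables would additionally need volatile/atomic qualifiers or memory barriers, which the slides do not cover.

#include <stdbool.h>

int  turn;            /* which process must yield                  */
bool flag[2];         /* flag[i] == true: process i wants to enter */

void enter_region(int i) {
    int j = 1 - i;                    /* the other process          */
    flag[i] = true;                   /* I want to enter            */
    turn = j;                         /* but let the other go first */
    while (flag[j] && turn == j)
        ;                             /* busy-wait                  */
}

void leave_region(int i) {
    flag[i] = false;                  /* I have left the critical section */
}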
SYNCHRONIZATION HARDWARE

• The critical section problem can also be solved with hardware support.
• A uniprocessor system can disable interrupts while a process P1 is in its CS, but this approach does not work well on multiprocessor systems.
• Some systems provide lock functionality: a process acquires a lock when entering the CS and releases it after leaving. Another process trying to enter the CS cannot do so while the entry is locked; it can proceed only once the lock is free and it acquires the lock itself.
• Another, more advanced approach is atomic (non-interruptible) instructions.
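As a sketch of the atomic-instruction approach, C11's atomic_flag provides a test-and-set style spinlock; the function names acquire and release are illustrative.

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* test_and_set atomically sets the flag and returns its previous value,
       so the loop spins until the lock was observed free (previous value 0) */
    while (atomic_flag_test_and_set(&lock))
        ;                                   /* busy-wait (spinlock) */
}

void release(void) {
    atomic_flag_clear(&lock);               /* unlock */
}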
MUTEX LOCKS

• Because the synchronization hardware solution is not easy for everyone to implement, a strict software approach called Mutex Locks was introduced.
• In this approach, the entry section of the code acquires a LOCK over the critical resources that are modified and used inside the critical section, and the exit section releases that LOCK.
• Because the resource is locked while a process executes its critical section, no other process can access it.
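A minimal sketch with a POSIX mutex; the shared counter and the function name are illustrative, not from the slides.

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int shared_counter = 0;

void safe_increment(void) {
    pthread_mutex_lock(&m);       /* entry section: acquire the LOCK */
    shared_counter++;             /* critical section                */
    pthread_mutex_unlock(&m);     /* exit section: release the LOCK  */
}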
SEMAPHORES

• A more robust alternative to simple mutual exclusion is to use semaphores: integer variables for which only two (atomic) operations are defined, the wait and the signal operations, as shown below.
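In sketch form, the classic busy-waiting "macro code" definitions referred to here are (S is an integer semaphore):

wait(S) {
    while (S <= 0)
        ;          /* busy-wait until S becomes positive */
    S--;
}

signal(S) {
    S++;
}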

• As given, these operations are not atomic as written in "macro code". We define them, however, to be atomic (protected by a hardware lock).
SEMAPHORE FORMAT

wait(mutex);        // mutual exclusion: mutex initialized to 1
    CRITICAL SECTION
signal(mutex);
    REMAINDER SECTION
• Note that not only must the variable-changing steps (S-- and S++) be indivisible; it is also necessary that, for the wait operation, when the test proves false there is no interruption before S gets decremented.
• It is okay, however, for the busy loop to be interrupted when the test is true, which prevents the system from hanging forever.
• Semaphores can be used to force synchronization (precedence) if the preceding process does a signal at the end and the follower does a wait at the beginning. For example, here we want P1 to execute before P2:
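A minimal sketch of that pattern with a semaphore S initialized to 0; S1 and S2 stand for the statements of P1 and P2.

/* in P1 */
S1;
signal(S);        /* runs only after S1 has completed       */

/* in P2 */
wait(S);          /* blocks until P1 has signalled          */
S2;               /* therefore S1 always executes before S2 */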

• To prevent busy looping, we redefine the semaphore structure as follows:
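A sketch of that redefinition in the usual textbook style: the semaphore keeps a value and a queue of waiting processes, and wait()/signal() rely on assumed kernel primitives block() and wakeup().

typedef struct {
    int value;                 /* may go negative: |value| = number of waiters */
    struct process *list;      /* queue of processes blocked on this semaphore */
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        /* add the calling process to S->list */
        block();               /* suspend the caller instead of spinning */
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);             /* move P to the ready queue */
    }
}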


• It is critical that these operations be atomic; on a uniprocessor we can disable interrupts, but on a multiprocessor other mechanisms for atomicity are needed.
TYPES OF SEMAPHORE

• There are mainly two types:
• Binary Semaphore: a special form of semaphore used for implementing mutual exclusion, hence often called a mutex. A binary semaphore is initialized to 1 and only takes the values 0 and 1 during execution of a program.
• Counting Semaphore: used to implement bounded concurrency, for example a bounded buffer, as in the producer-consumer sketch below.
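A hedged sketch of the bounded-buffer producer-consumer problem from earlier, using POSIX counting semaphores; the buffer size, loop counts and names are illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                              /* buffer capacity */

static int buffer[N];
static int in = 0, out = 0;

static sem_t empty_slots;                /* counts free slots,   starts at N */
static sem_t full_slots;                 /* counts filled slots, starts at 0 */
static sem_t mutex;                      /* binary semaphore guarding buffer */

static void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty_slots);          /* block while the buffer is full  */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);           /* announce a newly filled slot    */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 100; i++) {
        sem_wait(&full_slots);           /* block while the buffer is empty */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);          /* announce a newly freed slot     */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}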
SEMAPHORE PROPERTIES

• It’s simple.
• Works with many processes.
• Can have many different critical sections with different semaphores.
• Each critical section has unique access semaphores.
• Can permit multiple processes into the critical section as once, if
desirable.
DEADLOCKS

• One problem that can arise when using semaphores to block processes waiting for a limited resource is deadlock, which occurs when:
• multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.
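The classic textbook illustration, sketched with two semaphores S and Q, both initialized to 1: if each process completes its first wait before the other reaches its second, both block forever.

/* P0 */                    /* P1 */
wait(S);                    wait(Q);
wait(Q);   /* blocks */     wait(S);   /* blocks */
  ...                         ...
signal(S);                  signal(Q);
signal(Q);                  signal(S);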
STARVATION

• Starvation occurs when one or more processes get blocked forever and never get a chance to take their turn in the critical section.
• For example, in the semaphores above we did not specify the algorithm for adding processes to the semaphore's waiting queue in the wait() call, or for selecting one to be removed from the queue in the signal() call.
• If a FIFO queue is chosen, then every process will eventually get its turn; but if a LIFO queue is implemented, the first process to start waiting could starve.
THANK YOU
