BMS INSTITUTE OF TECHNOLOGY AND MANAGEMENT

OPERATING SYSTEM
22MCA102

Course Outcomes:
CO1: Explore operating system concepts
CO2: Apply the suitable OS algorithm for any given use case
CO3: Analyse the file concepts, memory management and disk scheduling
techniques
CO4: Explore Linux features and commands
CO5: Build shell scripts using Linux commands and language constructs

Mr. Dwarakanath G V
Assistant Professor, Dept. of MCA, BMSIT&M
[email protected]
13/05/2024
Mobile: 9916155597

Module 2: Processes

Chapter: Processes
 Process Concept
 Process Scheduling
 Scheduling Criteria
 Scheduling Algorithms
 The Critical-section Problem
 Semaphores
 Classic Problems of Synchronization
 Synchronization Examples

Objectives
 To introduce the notion of a process -- a program in execution,
which forms the basis of all computation
 To describe the various features of processes, including scheduling,
creation and termination, and communication
 To introduce the critical-section problem
 To present both software and hardware solutions of the critical-
section problem.
Process Concept
 An operating system executes a variety of programs:
 Batch system – jobs
 Time-shared systems – user programs or tasks
 Textbook uses the terms job and process almost interchangeably
 Process – a program in execution; process execution must progress in
sequential fashion
 Multiple parts
 The program code, also called text section
 Current activity including program counter, processor registers
 Stack containing temporary data
 Function parameters, return addresses, local variables
 Data section containing global variables
 Heap containing memory dynamically allocated during run time

Process Concept (Cont.)


 Program is passive entity stored on disk (executable file), process is
active
 Program becomes process when executable file loaded into
memory
 Execution of a program is started via GUI mouse clicks, command-line entry
of its name, etc.
 One program can be several processes
 Consider multiple users executing the same program
Process in Memory
Process State
 As a process executes, it changes state
 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur
 ready: The process is waiting to be assigned to a processor
 terminated: The process has finished execution

Diagram of Process State


Process Control Block (PCB)
Information associated with each process
(also called task control block)
 Process state – running, waiting, etc
 Program counter – location of the next instruction
to execute
 CPU registers – contents of all process-
centric registers
 CPU scheduling information- priorities,
scheduling queue pointers
 Memory-management information –
memory allocated to the process
 Accounting information – CPU used, clock
time elapsed since start, time limits
 I/O status information – I/O devices
allocated to process, list of open files
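As a rough illustration (not from the slides), the PCB fields above can be modelled as a C struct; the field names and sizes below are hypothetical:

/* Illustrative PCB layout -- field names are hypothetical */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;               /* process identifier            */
    proc_state_t  state;             /* process state                 */
    void         *program_counter;   /* location of next instruction  */
    long          registers[16];     /* saved CPU registers           */
    int           priority;          /* CPU-scheduling information    */
    struct pcb   *next;              /* scheduling-queue pointer      */
    void         *page_table;        /* memory-management information */
    long          cpu_time_used;     /* accounting information        */
    int           open_files[16];    /* I/O status information        */
} pcb_t;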
CPU Switch From Process to Process

Chapter: CPU Scheduling

 Basic Concepts
 Scheduling Criteria
 Scheduling Algorithms

Objectives
 To introduce CPU scheduling, which is the basis for multiprogrammed
operating systems
 To describe various CPU-scheduling algorithms
 To discuss evaluation criteria for selecting a CPU-scheduling algorithm
for a particular system
 To examine the scheduling algorithms of several operating systems
Basic Concepts
 Maximum CPU utilization obtained
with multiprogramming
 CPU–I/O Burst Cycle – Process
execution consists of a cycle of CPU
execution and I/O wait
 CPU burst followed by I/O burst
 CPU burst distribution is of main
concern
Histogram of CPU-burst Times
CPU Scheduler
 Short-term scheduler selects from among the processes in ready
queue, and allocates the CPU to one of them
 Queue may be ordered in various ways
 CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
 Scheduling under 1 and 4 is nonpreemptive
 All other scheduling is preemptive
 Consider access to shared data
 Consider preemption while in kernel mode
 Consider interrupts occurring during crucial OS activities
Dispatcher
 Dispatcher module gives control of the CPU to the process
selected by the short-term scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to restart
that program
 Dispatch latency – time it takes for the dispatcher to stop one
process and start another running
Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible
 Throughput – # of processes that complete their execution per time
unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready
queue
 Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for time-
sharing environment)

Scheduling Algorithm Optimization Criteria


 Max CPU utilization
 Max throughput
 Min turnaround time
 Min waiting time
 Min response time
First-Come, First-Served (FCFS) Scheduling

Process Burst Time


P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 P2 P3
0 24 27 30

 Waiting time for P1 = 0; P2 = 24; P3 = 27


 Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
 The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30

 Waiting time for P1 = 6; P2 = 0; P3 = 3


 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect - short process behind long process
 Consider one CPU-bound and many I/O-bound processes
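As a quick check (illustrative code, not part of the slides), a few lines of C reproduce the FCFS waiting-time calculation for the P1, P2, P3 example above:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* P1, P2, P3 arriving in this order */
    int n = 3, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {        /* each process waits for all earlier bursts */
        total_wait += wait;
        wait += burst[i];
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);   /* prints 17.00 */
    return 0;
}

Reordering the array to {3, 3, 24} gives 3.00, matching the second example.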
Shortest-Job-First (SJF) Scheduling
 Associate with each process the length of its next CPU burst
 Use these lengths to schedule the process with the shortest time
 SJF is optimal – gives minimum average waiting time for a given set of
processes
 The difficulty is knowing the length of the next CPU request
 Could ask the user
Example of SJF
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

 SJF scheduling chart

P4 P1 P3 P2
0 3 9 16 24

 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7


Determining Length of Next CPU Burst
 Can only estimate the length – should be similar to the previous one
 Then pick process with shortest predicted next CPU burst

 Can be done by using the length of previous CPU bursts, using exponential
averaging

1. tₙ = actual length of the nth CPU burst
2. τₙ₊₁ = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τₙ₊₁ = α·tₙ + (1 − α)·τₙ
 Commonly, α set to ½
 Preemptive version called shortest-remaining-time-first
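A small C sketch of the exponential-averaging predictor; the burst values and the initial guess τ₀ = 10 are illustrative:

#include <stdio.h>

/* tau_next = alpha * t_n + (1 - alpha) * tau_n */
double predict_next(double alpha, double t_n, double tau_n) {
    return alpha * t_n + (1.0 - alpha) * tau_n;
}

int main(void) {
    double alpha = 0.5, tau = 10.0;              /* tau_0 = 10, alpha = 1/2 */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};  /* observed CPU bursts     */

    for (int i = 0; i < 7; i++) {
        printf("predicted %.1f, actual %.1f\n", tau, bursts[i]);
        tau = predict_next(alpha, bursts[i], tau);
    }
    return 0;
}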
Prediction of the Length of the Next CPU Burst
Examples of Exponential Averaging
 α = 0
      τₙ₊₁ = τₙ
       Recent history does not count
 α = 1
       τₙ₊₁ = tₙ
       Only the actual last CPU burst counts
 If we expand the formula, we get:
      τₙ₊₁ = α·tₙ + (1 − α)·α·tₙ₋₁ + … + (1 − α)ʲ·α·tₙ₋ⱼ + … + (1 − α)ⁿ⁺¹·τ₀
 Since both α and (1 − α) are less than or equal to 1, each successive
term has less weight than its predecessor
Example of Shortest-remaining-time-first
 Now we add the concepts of varying arrival times and preemption to the
analysis
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
 Preemptive SJF Gantt Chart
P1 P2 P4 P1 P3
0 1 5 10 17 26

 Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5 msec


Priority Scheduling
 A priority number (integer) is associated with each process

 The CPU is allocated to the process with the highest priority (smallest
integer ≡ highest priority)
 Preemptive
 Nonpreemptive

 SJF is priority scheduling where priority is the inverse of predicted next
CPU burst time

 Problem ≡ Starvation – low-priority processes may never execute

 Solution ≡ Aging – as time progresses, increase the priority of the process


Example of Priority Scheduling
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

 Priority scheduling Gantt Chart

P2  P5  P1  P3  P4
0   1   6   16  18  19

 Average waiting time = 8.2 msec


Round Robin (RR)
 Each process gets a small unit of CPU time (time quantum q), usually
10-100 milliseconds. After this time has elapsed, the process is
preempted and added to the end of the ready queue.
 If there are n processes in the ready queue and the time quantum is q,
then each process gets 1/n of the CPU time in chunks of at most q time
units at once. No process waits more than (n-1)q time units.
 Timer interrupts every quantum to schedule next process
 Performance
 q large ⇒ FIFO
 q small ⇒ q must be large with respect to context switch,
otherwise overhead is too high
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
 The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

 Typically, higher average turnaround than SJF, but better response


 q should be large compared to context switch time
 q usually 10ms to 100ms, context switch < 10 usec
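A short C sketch (illustrative) that replays the q = 4 example and reports each process's turnaround and waiting time:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3}, remaining[] = {24, 3, 3};
    int n = 3, q = 4, time = 0, done = 0, completion[3] = {0};

    while (done < n) {                       /* cycle through the ready queue */
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) { completion[i] = time; done++; }
        }
    }
    for (int i = 0; i < n; i++)              /* all arrivals assumed at time 0 */
        printf("P%d: turnaround %d, waiting %d\n",
               i + 1, completion[i], completion[i] - burst[i]);
    return 0;
}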
Time Quantum and Context Switch Time
Turnaround Time Varies With The Time Quantum

80% of CPU bursts should be shorter than q
Multilevel Queue
 Ready queue is partitioned into separate queues, eg:
 foreground (interactive)
 background (batch)
 Process permanently in a given queue
 Each queue has its own scheduling algorithm:
 foreground – RR
 background – FCFS
 Scheduling must be done between the queues:
 Fixed priority scheduling; (i.e., serve all from foreground then from
background). Possibility of starvation.
 Time slice – each queue gets a certain amount of CPU time which it can
schedule amongst its processes; i.e., 80% to foreground in RR
 20% to background in FCFS
Multilevel Queue Scheduling
Multilevel Feedback Queue
 A process can move between the various queues; aging can be
implemented this way
 Multilevel-feedback-queue scheduler defined by the following
parameters:
 number of queues
 scheduling algorithms for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will enter when
that process needs service
Example of Multilevel Feedback Queue
 Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS

 Scheduling
 A new job enters queue Q0 which is served
FCFS
 When it gains CPU, job receives 8
milliseconds
 If it does not finish in 8 milliseconds,
job is moved to queue Q1
 At Q1 job is again served FCFS and receives
16 additional milliseconds
 If it still does not complete, it is
preempted and moved to queue Q2

Chapter: Process Synchronization
 Background
 The Critical-Section Problem
 Peterson’s Solution
 Semaphores
 Classic Problems of Synchronization
 Monitors
 Synchronization Examples
 Alternative Approaches
Objectives
 To present the concept of process synchronization.
 To introduce the critical-section problem, whose solutions can be
used to ensure the consistency of shared data
 To present both software and hardware solutions of the critical-
section problem
 To examine several classical process-synchronization problems
 To explore several tools that are used to solve process
synchronization problems
Background
 Processes can execute concurrently
 May be interrupted at any time, partially completing execution
 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes
 Illustration of the problem:
Suppose that we wanted to provide a solution to the consumer-
producer problem that fills all the buffers. We can do so by having
an integer counter that keeps track of the number of full buffers.
Initially, counter is set to 0. It is incremented by the producer after it
produces a new buffer and is decremented by the consumer after it
consumes a buffer.
Producer
while (true) {
    /* produce an item in next_produced */

    while (counter == BUFFER_SIZE)
        ;   /* do nothing */

    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Consumer
while (true) {
    while (counter == 0)
        ;   /* do nothing */

    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;

    /* consume the item in next_consumed */
}
Race Condition
 counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
 counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
 Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 – 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}
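The lost update can be reproduced with a small pthreads program (a sketch; the final value varies from run to run because counter++ and counter-- are not atomic):

#include <pthread.h>
#include <stdio.h>

int counter = 0;                        /* shared and deliberately unprotected */

void *producer(void *arg) {
    for (int i = 0; i < 1000000; i++) counter++;
    return NULL;
}
void *consumer(void *arg) {
    for (int i = 0; i < 1000000; i++) counter--;
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d (expected 0)\n", counter);   /* usually nonzero */
    return 0;
}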
Critical Section Problem
 Consider system of n processes {p0, p1, … pn-1}
 Each process has critical section segment of code
 Process may be changing common variables, updating table,
writing file, etc
 When one process in critical section, no other may be in its
critical section
 Critical section problem is to design protocol to solve this
 Each process must ask permission to enter critical section in entry
section, may follow critical section with exit section, then remainder
section
Critical Section

 General structure of process Pi
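In outline, the general structure is the familiar entry-section / exit-section skeleton:

do {
    entry section
        critical section
    exit section
        remainder section
} while (true);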


Algorithm for Process Pi
do {
    while (turn == j);

        critical section

    turn = j;

        remainder section
} while (true);
Solution to Critical-Section Problem

1. Mutual Exclusion - If process Pi is executing in its critical section,
then no other processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there
exist some processes that wish to enter their critical section, then
the selection of the processes that will enter the critical section next
cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that
other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before
that request is granted
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the n processes
Critical-Section Handling in OS
Two approaches, depending on whether the kernel is preemptive or non-preemptive
 Preemptive – allows preemption of process when running in
kernel mode
 Non-preemptive – runs until exits kernel mode, blocks, or
voluntarily yields CPU
Essentially free of race conditions in kernel mode
Peterson’s Solution
 Good algorithmic description of solving the problem
 Two process solution
 Assume that the load and store machine-language instructions
are atomic; that is, cannot be interrupted
 The two processes share two variables:
 int turn;
 Boolean flag[2]

 The variable turn indicates whose turn it is to enter the critical section
 The flag array is used to indicate if a process is ready to enter the
critical section. flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);

        critical section

    flag[i] = false;

        remainder section
} while (true);
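A sketch of the same algorithm in C; on modern hardware plain variables are not enough, so the shared variables are declared as C11 sequentially consistent atomics (function names are illustrative):

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];          /* flag[i]: process Pi is ready to enter */
atomic_int  turn;             /* whose turn it is to defer             */

void enter_region(int i) {    /* i is 0 or 1; j is the other process   */
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                     /* busy wait */
}

void leave_region(int i) {
    atomic_store(&flag[i], false);
}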
Peterson’s Solution (Cont.)
 Provable that the three CS requirements are met:
1. Mutual exclusion is preserved
Pi enters CS only if:
either flag[j] = false or turn = i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
Semaphore
 Synchronization tool that provides more sophisticated ways (than mutex locks) for
processes to synchronize their activities.
 Semaphore S – integer variable
 Can only be accessed via two indivisible (atomic) operations
 wait() and signal()
 Originally called P() and V()

 Definition of the wait() operation

wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}

 Definition of the signal() operation

signal(S) {
    S++;
}
Semaphore Usage
 Counting semaphore – integer value can range over an unrestricted domain
 Binary semaphore – integer value can range only between 0 and 1
 Same as a mutex lock
 Can solve various synchronization problems
 Consider P1 and P2 that require S1 to happen before S2
Create a semaphore “synch” initialized to 0
P1:
S1;
signal(synch);
P2:
wait(synch);
S2;
 Can implement a counting semaphore S as a binary semaphore
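The S1-before-S2 pattern maps directly onto POSIX semaphores; a minimal sketch (error handling omitted):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t synch;                            /* initialized to 0 */

void *p1(void *arg) {
    printf("S1\n");                     /* statement S1  */
    sem_post(&synch);                   /* signal(synch) */
    return NULL;
}
void *p2(void *arg) {
    sem_wait(&synch);                   /* wait(synch)   */
    printf("S2\n");                     /* statement S2  */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}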
Semaphore Implementation
 Must guarantee that no two processes can execute the wait() and
signal() on the same semaphore at the same time
 Thus, the implementation becomes the critical section problem where
the wait and signal code are placed in the critical section
 Could now have busy waiting in critical section implementation
 But implementation code is short
 Little busy waiting if critical section rarely occupied
 Note that applications may spend lots of time in critical sections and
therefore this is not a good solution
Semaphore Implementation with no Busy waiting
 With each semaphore there is an associated waiting queue
 Each entry in a waiting queue has two data items:
 value (of type integer)
 pointer to next record in the list
 Two operations:
 block – place the process invoking the operation on the
appropriate waiting queue
 wakeup – remove one of processes in the waiting queue and
place it in the ready queue
 typedef struct {
      int value;
      struct process *list;
  } semaphore;
Implementation with no Busy waiting (Cont.)

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Deadlock and Starvation
 Deadlock – two or more processes are waiting indefinitely for an event that
can be caused by only one of the waiting processes
 Let S and Q be two semaphores initialized to 1
P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
... ...
signal(S); signal(Q);
signal(Q); signal(S);

 Starvation – indefinite blocking


 A process may never be removed from the semaphore queue in which it is
suspended
 Priority Inversion – Scheduling problem when lower-priority process holds a
lock needed by higher-priority process
 Solved via priority-inheritance protocol
Classical Problems of Synchronization
 Classical problems used to test newly-proposed synchronization schemes
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem
Bounded-Buffer Problem
 n buffers, each can hold one item
 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value n
Bounded Buffer Problem (Cont.)
 The structure of the producer process

do {
    ...
    /* produce an item in next_produced */
    ...
    wait(empty);
    wait(mutex);
    ...
    /* add next_produced to the buffer */
    ...
    signal(mutex);
    signal(full);
} while (true);
Bounded Buffer Problem (Cont.)
 The structure of the consumer process

do {
    wait(full);
    wait(mutex);
    ...
    /* remove an item from buffer to next_consumed */
    ...
    signal(mutex);
    signal(empty);
    ...
    /* consume the item in next_consumed */
    ...
} while (true);
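A runnable sketch of the bounded buffer using POSIX semaphores and two pthreads; the buffer size, item count, and variable names are illustrative:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define BUFFER_SIZE 5

int buffer[BUFFER_SIZE];
int in = 0, out = 0;
sem_t empty_slots, full_slots, mutex;    /* empty = n, full = 0, mutex = 1 */

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty_slots);          /* wait(empty)   */
        sem_wait(&mutex);                /* wait(mutex)   */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);                /* signal(mutex) */
        sem_post(&full_slots);           /* signal(full)  */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);           /* wait(full)    */
        sem_wait(&mutex);                /* wait(mutex)   */
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);                /* signal(mutex) */
        sem_post(&empty_slots);          /* signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUFFER_SIZE);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}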
Readers-Writers Problem
 A data set is shared among a number of concurrent processes
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write
 Problem – allow multiple readers to read at the same time
 Only one single writer can access the shared data at the same time
 Several variations of how readers and writers are considered – all involve
some form of priorities
 Shared Data
 Data set
 Semaphore rw_mutex initialized to 1
 Semaphore mutex initialized to 1
 Integer read_count initialized to 0
Readers-Writers Problem (Cont.)
 The structure of a writer process

do {
    wait(rw_mutex);
    ...
    /* writing is performed */
    ...
    signal(rw_mutex);
} while (true);
Readers-Writers Problem (Cont.)
 The structure of a reader process
do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);
    signal(mutex);
} while (true);
Readers-Writers Problem Variations
 First variation – no reader kept waiting unless writer has
permission to use shared object
 Second variation – once writer is ready, it performs the write
ASAP
 Both may have starvation leading to even more variations
 Problem is solved on some systems by kernel providing reader-
writer locks
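For example, the pthreads API exposes reader-writer locks directly; a brief sketch:

#include <pthread.h>
#include <stdio.h>

pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
int shared_data = 0;

void *reader(void *arg) {
    pthread_rwlock_rdlock(&rw);          /* many readers may hold this at once */
    printf("read %d\n", shared_data);
    pthread_rwlock_unlock(&rw);
    return NULL;
}

void *writer(void *arg) {
    pthread_rwlock_wrlock(&rw);          /* exclusive access for the writer */
    shared_data++;
    pthread_rwlock_unlock(&rw);
    return NULL;
}

int main(void) {
    pthread_t r, w;
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return 0;
}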
Dining-Philosophers Problem

 Philosophers spend their lives alternating thinking and eating


 Don’t interact with their neighbors, occasionally try to pick up 2 chopsticks
(one at a time) to eat from bowl
 Need both to eat, then release both when done
 In the case of 5 philosophers
 Shared data
 Bowl of rice (data set)
 Semaphore chopstick [5] initialized to 1
Dining-Philosophers Problem Algorithm
 The structure of Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);

        // eat

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

        // think

} while (TRUE);
 What is the problem with this algorithm?
Dining-Philosophers Problem Algorithm (Cont.)

 Deadlock handling
 Allow at most 4 philosophers to be sitting simultaneously
at the table.
 Allow a philosopher to pick up the chopsticks only if both are
available (picking must be done in a critical section).
 Use an asymmetric solution -- an odd-numbered
philosopher picks up first the left chopstick and then the
right chopstick. Even-numbered philosopher picks up first
the right chopstick and then the left chopstick.
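A sketch of the asymmetric solution with POSIX semaphores and one thread per philosopher (each philosopher eats once, for brevity):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5
sem_t chopstick[N];                                 /* each initialized to 1 */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    int first  = (i % 2 == 1) ? i : (i + 1) % N;    /* odd: left first   */
    int second = (i % 2 == 1) ? (i + 1) % N : i;    /* even: right first */

    sem_wait(&chopstick[first]);
    sem_wait(&chopstick[second]);
    printf("philosopher %d eating\n", i);           /* eat */
    sem_post(&chopstick[second]);
    sem_post(&chopstick[first]);
    return NULL;                                    /* think */
}

int main(void) {
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}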
Problems with Semaphores
 Incorrect use of semaphore operations:

 signal (mutex) …. wait (mutex)

 wait (mutex) … wait (mutex)

 Omitting wait(mutex) or signal(mutex) (or both)

 Deadlock and starvation are possible.


Synchronization Examples
 Solaris
 Windows
 Linux
 Pthreads
Solaris Synchronization
 Implements a variety of locks to support multitasking, multithreading
(including real-time threads), and multiprocessing
 Uses adaptive mutexes for efficiency when protecting data from short code
segments
 Starts as a standard semaphore spin-lock
 If lock held, and by a thread running on another CPU, spins
 If lock held by non-run-state thread, block and sleep waiting for signal of lock
being released
 Uses condition variables
 Uses readers-writers locks when longer sections of code need access to data
 Uses turnstiles to order the list of threads waiting to acquire either an
adaptive mutex or reader-writer lock
 Turnstiles are per-lock-holding-thread, not per-object
 Priority-inheritance per-turnstile gives the running thread the highest of the
priorities of the threads in its turnstile
Windows Synchronization
 Uses interrupt masks to protect access to global resources on
uniprocessor systems
 Uses spinlocks on multiprocessor systems
 A thread holding a spinlock will never be preempted
 Also provides dispatcher objects in user-land which may act as mutexes,
semaphores, events, and timers
 Events
 An event acts much like a condition variable
 Timers notify one or more threads when the time has expired
 Dispatcher objects are either in the signaled state (object available) or
the non-signaled state (thread will block)
Linux Synchronization
 Linux:
 Prior to kernel Version 2.6, disables interrupts to implement
short critical sections
 Version 2.6 and later, fully preemptive
 Linux provides:
 Semaphores
 atomic integers
 spinlocks
 reader-writer versions of both
 On single-CPU systems, spinlocks are replaced by enabling and disabling
kernel preemption
Pthreads Synchronization
 Pthreads API is OS-independent
 It provides:
 mutex locks
 condition variables
 Non-portable extensions include:
 read-write locks
 spinlocks
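A minimal sketch combining the two portable primitives: a mutex protecting a shared flag, and a condition variable used to wait for it:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
int data_ready = 0;

void *waiter(void *arg) {
    pthread_mutex_lock(&lock);
    while (!data_ready)                       /* loop guards against spurious wakeups */
        pthread_cond_wait(&ready, &lock);     /* releases the lock while sleeping     */
    printf("data is ready\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

void *notifier(void *arg) {
    pthread_mutex_lock(&lock);
    data_ready = 1;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t w, n;
    pthread_create(&w, NULL, waiter, NULL);
    pthread_create(&n, NULL, notifier, NULL);
    pthread_join(w, NULL);
    pthread_join(n, NULL);
    return 0;
}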
