
Slide 03

The critical section of the code is the segment containing hit+=1 and val=hit, because it accesses and modifies the shared global variable hit. When multiple processes run this code concurrently, we need synchronization to ensure that only one process executes this critical section at a time. Otherwise there is a race condition: two processes may read, modify, and write hit simultaneously, leading to unexpected behavior.


Operating System

Nguyen Tri Thanh


[email protected]

1
Review
 A system uses FCFS scheduling; each process is given as (arrival_time, duration)
 P1(0,20), P2(30,10), P3(20,40), P4(50,15)
 Which of the following is the correct running
order of the above processes?
A. P1, P2, P3, P4
B. P1, P3, P2, P4
C. P1, P4, P2, P3
D. P5, P2, P3, P1

2
Review
 A system uses SJF scheduling; each process is given as (arrival_time, duration)
 P1(0,20), P2(30,10), P3(20,40), P4(50,15)
 Which of the following is the correct running
order of the above processes?
A. P1, P2, P3, P4
B. P1, P4, P2, P3
C. P1, P3, P2, P4
D. P4, P2, P3, P1

3
Review
 A system uses SRTF scheduling; each process is given as (arrival_time, duration)
 P1(0,20), P2(30,10), P3(20,40), P4(40,15)
 Which of the following is the correct running
order of the above processes?
A. P1, P3, P2, P4, P3
B. P1, P2, P3, P4, P4
C. P1, P4, P2, P3, P2
D. P1, P2, P3, P1, P4

4
Review
 A system uses RR scheduling; each process is given as (arrival_time, duration)
 P1(0,22), P2(30,10), P3(20,40), P4(40,25)
 Time quantum = 15
 Which of the following is the correct running
order of the above processes?
A. P1, P2, P3, P1, P2, P3, P4, P3
B. P1, P3, P1, P3, P2, P3, P4, P3
C. P1, P1, P2, P3, P2, P3, P4, P3
D. P1, P1, P3, P2, P3, P4, P3, P4
5
Review
 A system uses RR scheduling; each process is given as (arrival_time, duration)
 P1(0,20), P2(30,10), P3(20,40), P4(40,25)
 Time quantum = 15
 Which of the following is the correct total
waiting time of the above processes?
A. 40
B. 50
C. 60
D. 70
6
Inter-process Communication
(IPC)

7
Objectives
 Present what IPC is
 Present why we need synchronization
 Methods of synchronization
 Classical synchronization problems
 Write a simple synchronization program

8
Reference
 Chapter 3, 6 of Operating System Concepts

9
Introduction
 In some situations, processes need to
communicate with each other
 To send/receive data (web browser – web server)
 To control the other process
 To synchronize with each other
 This can be done by IPC
 IPC is implemented differently among OSes
 Linux: message queue, semaphore, shared
segment, …
10
Introduction (cont’d)
 IPC can be divided into 2 categories
 IPC among processes within the same system
 Linux: pipe, named pipe, file mapping, …
 IPC among processes in different systems
 Remote Procedure Call (RPC), Socket, Remote
Method Invocation (RMI), …

11
Process Synchronization

12
Synchronization definition
 Process synchronization refers to the idea
that multiple processes are to join up or
handshake at a certain point, in order to reach
an agreement or commit to a certain
sequence of actions.
 https://en.wikipedia.org/wiki/Synchronization_(computer_science)
Synchronization is everywhere

[a sequence of figure slides with everyday examples, slides 13–16]
Problem
Write process P:
while (true) {
val=buf;
val += count(); //Takes time
buf=val;
}
buf: shared buffer
(cf. SQL: UPDATE A SET buf=buf+count();)

What if more than one P is running?

17
Problem (cont’d)
 Two concurrent processes, each running:

val=buf;
val += count();
buf=val;

Do we always get the expected value of buf? Why?

18
Problem (cont’d)
 Suppose buf=5; one possible interleaving of processes P and Q:

P: val=buf; //val=5
P: val+=count(); //val=10
Q: val=buf; //val=5
Q: val+=count(); //val=10
P: buf=val; //buf=10
Q: buf=val; //buf=10

19
19
Problem (cont’d)
 Cause: P and Q simultaneously operate on
global variable buf
 Solution: let them operate separately

P: val=buf; //val=5
P: val+=count(); //val=10
P: buf=val; //buf=10

Q: val=buf; //val=10
Q: val+=count(); //val=15
Q: buf=val; //buf=15
20
Race condition
 Happens when many processes simultaneously
work with shared data

[figure slides illustrating the race, slides 21–22]

To avoid “trouble”, processes need to be controlled

22
Critical section
 In concurrent programming a critical section is a piece
of code that accesses a shared resource (data structure
or device) that must not be concurrently accessed by
more than one thread of execution. A critical section will
usually terminate in fixed time, and a thread, task or
process will have to wait a fixed time to enter it (aka
bounded waiting). Some synchronization mechanism is
required at the entry and exit of the critical section to
ensure exclusive use, for example a semaphore.
 (https://en.wikipedia.org/wiki/Critical_section)

23
Critical section
 Suppose n processes P1, ..., Pn share a
global variable v
 v can also be another resource, e.g., a file
 Each process has a segment of code CSi
which operates on v
 CSi is called the critical section
 “Critical” because concurrent access is prone to errors
 CSi should be kept as small as possible
 Need to make the critical section safe
24
Critical section

[figure slides, 25–26]
Question
 Which is the critical section of the code when
multiple processes of P run?

Process P:
while (true) {
waitForNewRequest();
if(found){
hit+=1;
val=hit;
}
Respond();
}
hit: a global variable

27
Question (answer)
The critical section is the code that accesses the
shared global variable hit:

hit+=1;
val=hit;

27
Critical section (cont’d)
 Common structure
do {
Enter_Section (CSi);
Run CSi;
Exit_Section(CSi);
Run (REMAINi); // Remainder section
} while (TRUE);

28
Critical section (cont’d)
 Short description
do {
ENTRYi; // Enter section
Run CSi; // Critical section
EXITi; // Exit section
REMAINi; // Remainder section
} while (TRUE);

29
Implementation of Critical
section
Implementation must satisfy 3 conditions
1. Mutual Exclusion
o If a process is in its critical section, then no other
processes can be in their critical sections
2. Progress
o If no process is in its critical section and
o other processes are waiting to enter their critical sections,
o then the selection of the next process to enter the critical
section cannot be postponed indefinitely
3. Bounded Waiting
o No process has to wait indefinitely to enter its critical
section
Question
Which is the purpose of the first condition?
A. It supports the priority of process
B. It ensures the correct use of the shared
resource
C. It tries to utilize the shared resource effectively
D. It makes the implementation of OS simpler
Critical section

Mutual exclusion using critical regions

32
Question
Which is the consequence of the second
condition?
A. It reduces the waiting time of requested
processes
B. It ensures the correct use of the shared
resource
C. It supports the priority of processes
D. It makes the implementation of OS simpler
Question
Which is the consequence of the second
condition?
A. It supports the priority of processes
B. It ensures the correct use of the shared
resource
C. It utilizes the shared resource effectively
D. It makes the algorithm complicated to
implement
Question
Which is the consequence of the 3rd condition?
A. It supports the priority of processes
B. It ensures the correct use of the shared
resource
C. It utilizes the shared resource effectively
D. It makes sure no process waits forever to enter its
critical section
Question
Which is the correct conditions of critical
section?
A. mutual exclusion, protection, bounded using
B. mutual exclusion, protection, bounded waiting
C. mutual exclusion, progressive, bounded
waiting
D. mutual exclusion, bounded waiting, progress
Question
Which is the correct purpose of the 2nd condition of
critical section?
A. maximize CPU utilization
B. maximize the shared resource utilization
C. maximize disk utilization
D. maximize RAM utilization
Question
Which is the consequence of the 3rd condition?
A. It supports the priority of processes
B. It ensures the correct use of the shared
resource
C. It ensures the relative fairness of processes to
use the shared resource
D. It utilizes the shared resource effectively
The fairness

The fair exam today is to swim


Critical section (cont’d)
 Each process has to
 request to run (enter section) its critical section
CSi
 and announce its completion (exit section) of its
CSi.

40
Peterson’s Solution
 Solution for two processes
 The two processes share two variables:
 int turn; // with the value of 0 or 1
 Boolean flag[2]
 The variable turn indicates whose turn it is to
enter the critical section
 If turn==i then it is Pi’s turn to run its CSi
 The flag array is used to indicate if a process is
ready to enter the critical section. flag[i] = true
implies that process Pi is ready!
Peterson’s solution (cont’d)
 Program Pi:
do {
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j) ;
CSi;
flag[i] = FALSE;
REMAINi;
} while (1);
42
Peterson’s solution (cont’d)
 The proof of this solution is provided on page
196 of the textbook
 Comments
 Complicated when the number of processes
increases
 Difficult to control

43
Question
 Which code snippet is Enter_Section?
A. flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j) ;
B. flag[i] = TRUE;
while (flag[j] && turn == j) ;
C. flag[i] = TRUE;
turn = j;
D. turn = j;
while (flag[j] && turn == j) ;

44
Semaphore

45
Reference information
 The semaphore was proposed
by Edsger Wybe Dijkstra
(Dutch) in the 1960s
 Semaphores were first used
in his THE multiprogramming
system

Edsger Wybe Dijkstra


(1930-2002) 46
Semaphore
 A semaphore is an integer variable that can only be
accessed through two atomic operations, wait (or P)
and signal (or V)
 P: proberen – to test (Dutch)
 V: verhogen – to increase (Dutch)
 Processes can share a semaphore
 The atomic operations guarantee consistency

47
wait and signal operators

wait(S) // or P(S)
{
while (S<=0); // busy wait while S<=0
S--;
}

signal(S) // or V(S)
{
S++;
}

 wait: busy-wait while S<=0, then decrease S by 1
 signal: increase S by 1

48
Using semaphore
 Apply for critical section
do {
wait(s); // s is a semaphore initialized by 1
CSi;
signal(s);
REMAINi;
} while (1);

49
Semaphore

[animated figure slides illustrating semaphore operation, slides 50–53]
Question
 Use a semaphore to make the code safe.

Process P:
while (true) {
waitForNewRequest();
if(found){
hit+=2;
val=hit;
}
Respond();
}
hit: a global variable

54
Question (answer)
Use a semaphore mutex initialized to 1 to protect the
accesses to hit:

if(found){
wait(mutex);
hit+=2;
val=hit;
signal(mutex);
}

54
Problem (cont’d)
Two concurrent processes P and Q, mutex=1:

P: wait(mutex)
P: val=buf;
P: val += count();
Q: wait(mutex) (waiting)
P: buf=val;
P: signal(mutex);
Q: val=buf;
Q: val += count();
Q: buf=val;
Q: signal(mutex);
55
Using semaphore (cont’d)
 P1 needs to do O1; P2 needs to do O2; O2 can
only be done after O1
 Solution: use a semaphore synch = 0

 P1:  P2:
... ...
O1; wait(synch);
signal(synch); O2;
... ...
56
Semaphore support

57
https://www.geeksforgeeks.org/use-posix-semaphores-c/
Semaphore support

58
https://www.php.net/manual/en/book.sem.php
Semaphore support

https://www.baeldung.com/java-semaphore 59
Semaphore support

[screenshot slides of semaphore APIs in various languages, slides 60–64]
Semaphore implementation
 The above semaphore implementation
 uses busy waiting (a while loop)
 wastes CPU resources
 relies on atomic operations
 When a process calls wait(), it spins (busy-waits) if
the semaphore is not free
 This type of semaphore is called a spinlock
 An alternative wait() just returns true/false
and does not block the calling process
65
Semaphore implementation
(cont’d)
 Remove the busy-waiting loop by using block
 To restore a blocked process, use wakeup
 Semaphore data structure
typedef struct {
int value; // value of semaphore
struct process *L; //waiting process list
} semaphore;

66
Semaphore implementation
(cont’d)

void wait(semaphore *S)
{
S->value--;
if (S->value < 0) {
add the calling process P to S->L;
block(P);
}
}

void signal(semaphore *S)
{
S->value++;
if (S->value <= 0) {
remove a process P from S->L;
wakeup(P);
}
}
67
Semaphore implementation
(cont’d)

68
Binary semaphore
 Semaphore only has the value of 0 or 1
 Other semaphore type is counting
semaphore

69
Question
 When counting semaphores are suitable to
use?
A. When 2 processes share a single variable/resource
B. When 3 processes share a single variable/resource
C. When n processes share a single variable/resource
D. When n processes share m variables/resources of
the same type

70
Classical synchronization
problems

71
Bounded-Buffer Problem
 N buffers, each can hold one item
 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value N.
Bounded-buffer problem (cont’d)
Write process P: Read process Q:
do { do {
wait(empty); wait(full);
wait(mutex); wait(mutex);
Write(item,buf); Read(item,buf);
signal(mutex); signal(mutex);
signal(full); signal(empty);
} while (TRUE); } while (TRUE);
buf: shared resource
73
Question
 Which is the initialized value of the full
variable in the above algorithm?
A. -1
B. 0
C. 1
D. NULL

74
Question
What will be the problem if the initialized
value of the full variable is 1?
A. no problem at all
B. the writer process can not run
C. the reader process can not run
D. the reader can read an invalid value

75
Bounded-buffer problem (cont’d)

76
Readers-writers problem
 A data set is shared among a number of
concurrent processes
 Readers – only read
 Writers – can both read and write
 Problem
 allow multiple readers to read at the same time
when there is no writer accessing the data set
 Only one writer can access the shared data at a
time
77
Readers-writers problem

78
Readers-writers problem (cont’d)
 Shared data
 Data set
 Semaphore wrt initialized by 1
 Used to manage write access
 Integer readcount initialized by 0 to count the
number of readers that are reading
 Semaphore mutex initialized by 1
 Used to manage readcount access

79
Readers-writers problem (cont’d)
 Process writer Pw:  Process reader Pr:
do { do {
wait(wrt); wait(mutex);
write(data_set); readcount++;
signal(wrt); if (readcount ==1) wait(wrt);
}while (TRUE); signal(mutex);
read(data_set);
wait(mutex);
readcount--;
if (readcount ==0) signal(wrt);
signal(mutex);
} while (TRUE); 80
Question
Why do we need readcount variable?
A. We may remove this variable
B. To make sure there is one reader at a time
C. To make sure no readers are reading
D. To make sure no readers are reading before
writing

81
Question
Which is the initialized value of the
readcount variable in the above algorithm?
A. -1
B. 0
C. 1
D. NULL

82
Question
Which is the purpose of mutex variable?
A. To safely access the data_set
B. We may remove this variable without affecting
the program
C. To safely access the readcount variable
D. To safely access the wrt variable

83
Question
Which is the initialized value of the mutex
variable in the above algorithm?
A. -1
B. 0
C. 1
D. NULL

84
Question
Which is the purpose of wrt variable?
A. To safely access the mutex variable
B. To safely write the data_set
C. To safely write the readcount variable
D. To safely read the data_set

85
Question
Which is the initialized value of the wrt variable
in the above algorithm?
A. -1
B. 0
C. 1
D. NULL

86
Dining-Philosophers Problem
 Five philosophers at a
table having 5
chopsticks, 5 bowls and
a rice cooker
 A philosopher either eats
or thinks
 How to make sure
philosophers correctly
use the “shared data” –
the chopsticks

87
Dining-philosophers problem (cont’d)
 Use semaphores to handle chopstick access
 semaphore chopstick[5];
 Code of philosopher i:

do {
wait(chopstick[i]);
wait(chopstick[(i+1)%5]);
Eat(i);
signal(chopstick[i]);
signal(chopstick[(i+1)%5]);
Think(i);
} while (TRUE);

88
Question
 What value chopstick[i] is initialized?
A. 1
B. 2
C. 0
D. 5

89
Question
 Is there any problem with the solution?
A. No problem
B. Only one philosopher can eat at a time
C. Only three philosophers can eat at a time
D. No philosopher could eat in case each takes a
chopstick and waits for the second one

90
Question
 Which of the following is incorrect about the
solution to the above problem?
A. No solution available
B. Create an order of philosophers to eat
C. Create an order of philosophers to think
D. Allow at most 4 philosophers to request to eat at
a time

91
Extra problem: Barrier

 Use of a barrier
 processes approaching a barrier
 all processes but one blocked at barrier
 last process arrives, all are let through
 https://github.com/angrave/SystemProgramming/wiki/Sample-program-using-pthread-barriers
92
Limitations of semaphore (cont’d)
 Compare the two code snippets
 Snippet 1  Snippet 2
... ...
wait(mutex); signal(mutex);
//Critical section //Critical section
signal(mutex); wait(mutex);
... ...

93
Question
 What is the problem of the two code
snippets?
A. Snippet 1 has a problem
B. Snippet 2 has a problem
C. Both snippets have problems
D. No problem at all

94
Question
 Which is the problem of the incorrect use of
semaphore in the above code snippet?
A. No process can enter its critical section
B. No problem at all
C. The mutual exclusion condition may be violated
D. No process can exit its critical section

95
Limitations of semaphore
 Semaphores need correct calls to wait and
signal
 Incorrect use of semaphore may lead to
deadlock
 Even correct use of semaphores may lead to
deadlock, in some cases

96
Limitations of semaphore (cont’d)
 Compare the two code snippets
 Snippet 1  Snippet 2
... ...
wait(mutex); wait(mutex);
CS1; CS2;
wait(mutex); signal(mutex);
... ...

97
Question
 Which of the two code snippets has a problem?
A. Snippet 1 has a problem
B. Snippet 2 has a problem
C. Both snippets have problems
D. No problem at all

98
Question
 Which is the consequence of the above
problem?
A. One process will be blocked
B. There will be a deadlock
C. No consequences if only two processes are
involved
D. No consequences

99
Limitations of semaphore (cont’d)
 Process P1  Process P2
... ...
wait(S); wait(Q);
wait(Q); wait(S);
CS1... CS2...
signal(S); signal(Q);
signal(Q); signal(S);
... ...

100
Question
 What is the problem of the above two
processes?
A. There is deadlock
• if P1 got S and waits for Q and
• P2 got Q and waits for S
B. The exclusive condition is violated
C. The order of semaphore calls is incorrect
D. No problem at all

101
Monitor

102
Reference information
 Per Brinch Hansen
(Danish) proposed the
concept and
implemented it in 1972
 The monitor was first used
in the Concurrent Pascal
programming language

Per Brinch Hansen


(1938-2007)
103
What is monitor?
 Monitor means to supervise
 It is a construct in a high-level
programming language for synchronization
purposes
 C# programming language
 http://msdn.microsoft.com/en-us/library/hf5de04k.aspx
 Java programming language
 http://www.artima.com/insidejvm/ed2/threadsynch.html
 http://journals.ecs.soton.ac.uk/java/tutorial/java/threads/monitors.html
 http://www.csc.villanova.edu/~mdamian/threads/javamonitors.html
 Monitors were studied and developed to
overcome the limitations of semaphores

104
C# monitor

105
Java monitor

106
Monitor
 A monitor usually has
 Member variables as shared resources
 A set of procedures which operate on the shared
resources
 Exclusive lock
 Constraints to manage race condition
 This description of monitor is like a class

107
A sample monitor type
monitor monitor_name {
//Shared resources
procedure P1(...) { ...
}
procedure P2(...) { ...
}
...
procedure Pn(...) { ...
}
initialization_code (..) { ...
}
} 108
Schematic view of a Monitor

109
Monitor implementation
 Monitor must be implemented so that
 only one process can enter the monitor at a time
(mutual exclusive)
 programmers do not need to write code for this
 Other monitor implementation
 have more synchronization mechanism
 add condition variable

110
Condition type
 Declaration
 condition x, y;
 Using a condition variable
 there are two operations: wait and signal
 x.wait():
 a process that calls x.wait() is suspended
 x.signal():
 a process that calls x.signal() wakes up one waiting
process – one that called x.wait()
111
Monitor with condition

112
x.signal() characteristics
 x.signal() wakes up only one waiting process
 If there is no waiting process, it does nothing
 x.signal() is different from signal on a classical
semaphore
 signal on a classical semaphore always changes the
state (value) of the semaphore

113
Solution to Dining Philosophers
monitor DP
{
enum {THINKING, HUNGRY, EATING} state[5];
condition self[5];

void pickup (int i) {
state[i] = HUNGRY;
test(i);
if (state[i] != EATING) self[i].wait();
}

void putdown (int i) {
state[i] = THINKING;
// test left and right neighbors
test((i + 4) % 5);
test((i + 1) % 5);
}
Solution to Dining Philosophers (cont)

void test (int i) {
if ( (state[(i + 4) % 5] != EATING) &&
(state[i] == HUNGRY) &&
(state[(i + 1) % 5] != EATING) ) {
state[i] = EATING;
self[i].signal();
}
}

initialization_code() {
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}
Solution to Dining Philosophers (cont)

 Each philosopher invokes the operations
pickup() and putdown() in the following
sequence:

dp.pickup(i);
EAT;
dp.putdown(i);
Monitor Implementation Using Semaphores
 Variables

semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next_count = 0;

 Each procedure F will be replaced by

wait(mutex);
//body of F;
if (next_count > 0)
signal(next);
else
signal(mutex);

 Mutual exclusion within a monitor is ensured.
Monitor Implementation
 For each condition variable x, we have:

semaphore x_sem; // (initially = 0)
int x_count = 0;

 The operation x.wait can be implemented as:

x_count++;
if (next_count > 0)
signal(next);
else
signal(mutex);
wait(x_sem);
x_count--;
Monitor Implementation
 The operation x.signal can be implemented
as:

if (x_count > 0) {
next_count++;
signal(x_sem);
wait(next);
next_count--;
}
Linux Synchronization
 Linux:
 disables interrupts to implement short critical
sections

 Linux provides:
 semaphores
 spin locks
Pthreads Synchronization
 pthreads API is OS-independent
 It provides:
 mutex locks
 condition variables

 Non-portable extensions include:


 read-write locks
 spin locks
Question? 122
