Multiprogramming: Concurrency Improves Throughput
Multiprogramming
A single user cannot keep the CPU and I/O devices busy at all times.
Multiprogramming offers a more efficient approach to increase system
performance.
To increase resource utilisation, systems supporting multiprogramming allow more
than one job (program) to compete for CPU time at any moment.
The more programs compete for system resources, the better the resource
utilisation. The idea is implemented as follows: the main memory of the system
contains more than one program.
The time needed to process n programs sequentially is I + C + O, where
C = Σ c_k (k = 1, …, n) is the total compute time and

    I + O = Σ (i_k + o_k),  k = 1, …, n

is the total input and output time, giving a throughput of

    n / t = n / (I + C + O)

The fact that the three devices, the CPU, the card reader (CR), and the line
printer (LP), operate almost independently makes an important improvement
possible. We get a much more efficient system if the input and output phases
overlap in time. An even better result is obtained if all three phases overlap.
Input and output processing for n programs using overlapping can be as low as

    M = i_1 + Σ m_k + o_n,  k = 1, …, n-1

with a throughput of

    n / t = n / (C + M)
Illustration: processed sequentially, the phases run one after another as
i1 c1 o1 i2 c2 o2 i3 c3 o3 i4 c4 o4; with overlapping, the input, compute, and
output phases of successive programs proceed in parallel on the CR, CPU, and LP,
and the difference between the two schedules is the time saved.
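As a rough worked example (the times i_k, c_k, and o_k below are made-up values,
and the simple simulation is only a sketch of the overlapped schedule), the
following C program compares the sequential time I + C + O with the time obtained
when the card reader, CPU, and line printer work on successive programs in
parallel:

#include <stdio.h>

#define N 4

static long max2(long a, long b) { return a > b ? a : b; }

int main(void)
{
    /* assumed input, compute, and output times i_k, c_k, o_k */
    long i[N] = {4, 4, 4, 4}, c[N] = {2, 2, 2, 2}, o[N] = {4, 4, 4, 4};

    long sequential = 0;
    for (int k = 0; k < N; k++)
        sequential += i[k] + c[k] + o[k];               /* I + C + O */

    long readDone = 0, computeDone = 0, printDone = 0;
    for (int k = 0; k < N; k++) {
        readDone    = readDone + i[k];                      /* CR handles programs in order  */
        computeDone = max2(computeDone, readDone) + c[k];   /* CPU must wait for the input   */
        printDone   = max2(printDone, computeDone) + o[k];  /* LP must wait for the compute  */
    }

    printf("sequential time = %ld, overlapped time = %ld\n", sequential, printDone);
    printf("throughput: %g vs %g programs per time unit\n",
           (double)N / sequential, (double)N / printDone);
    return 0;
}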
Two processes are said to be concurrent if they overlap in their execution.
Precedence Graphs
Program 1:
  a := x + y;
  b := z + 1;
  c := a - b;
  w := c + 1;
These statements have data dependences and hence precedence constraints. The
point of this example is that, even within a single program, there are
precedence constraints among the various statements.
Precedence Graph:
(figure: a precedence graph with nodes S1 through S7; its edges are described below)
S2 and S3 can be executed after S1 completes.
S4 can be executed after S2 completes.
S5 and S6 can be executed after S4 completes.
S7 can execute only after S5, S6, and S3 complete.
Note that S3 can be executed concurrently with S2, S4, S5, and S6.
Concurrency Conditions :
R(Si) = {a1, a2, …, am}, the read set for Si, is the set of all variables
whose values are referenced in statement Si during its execution.
W(Si) = {b1, b2, …, bn}, the write set for Si, is the set of all variables
whose values are changed (written) by the execution of statement Si.
Illustrations of READ and WRITE sets:
The values of the variables a and b are used to compute the new value of c.
Hence a and b are in the read set. The (old) value of c is not used in the statement, but
a new value is defined as a result of the execution of the statement. Hence c is in the
write set, but not the read set.
R(c := a - b) = {a, b}
W(c := a - b) = {c}
R(w := c + 1) = {c}
W(w := c + 1) = {w}
In the statement read(a), the variable a is being read into, so its value
changes. The read and write sets are:
R (read (a)) = {}
W (read (a)) = {a}
Example 5 :  S1: a := x + y
             S2: b := z + 1
             S3: c := a - b
R(S1) = {x, y}
R(S2) = {z}
W(S1) = {a}
W(S2) = {b}
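As a sketch of how the read and write sets can be used, the following C program
encodes the sets of Example 5 as bit masks and applies the usual disjointness
test (each statement's write set must not intersect the other's read or write
set). That test is implied by, but not spelled out in, the excerpt above:

#include <stdio.h>

enum { X = 1 << 0, Y = 1 << 1, Z = 1 << 2, A = 1 << 3, B = 1 << 4 };

/* Two statements may run concurrently only if neither writes what the other
 * reads or writes. */
static int can_run_concurrently(unsigned r1, unsigned w1,
                                unsigned r2, unsigned w2)
{
    return (r1 & w2) == 0 && (w1 & r2) == 0 && (w1 & w2) == 0;
}

int main(void)
{
    /* Example 5: S1: a := x + y and S2: b := z + 1 */
    unsigned rs1 = X | Y, ws1 = A;   /* R(S1) = {x, y}, W(S1) = {a} */
    unsigned rs2 = Z,     ws2 = B;   /* R(S2) = {z},    W(S2) = {b} */

    printf("S1 and S2 concurrent? %s\n",
           can_run_concurrently(rs1, ws1, rs2, ws2) ? "yes" : "no");

    /* S3: c := a - b reads a and b, so it conflicts with both S1 and S2. */
    return 0;
}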
The fork and join instructions were introduced by Conway [1963] and
Dennis and Van Horn [1966]. They were one of the first language
notations for specifying concurrency.
S1;
fork L;
S2;
.
.
.
L: S3;

(figure: after the fork, S1 is followed by S2 and S3, which may execute concurrently)
count := 2;
fork L1;
.
.
.
S1;
go to L2;
L1: S2;
L2: join count;

(figure: S1 and S2 execute concurrently after the fork; both must complete at the join before S3)

The join count instruction is defined as:

count := count - 1;
if count ≠ 0 then quit;
S1: a := x + y
S2: b := z + 1
S3: c := a - b
S4: w := c + 1
To allow the concurrent execution of the first two statements, this program
could be rewritten using fork and join instructions:
count := 2;
fork L1;
a := x + y;
go to L2;
L1: b := z + 1;
L2: join count;
c := a - b;
w := c + 1;
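A rough equivalent of this fork/join program in C, using POSIX threads and
assumed sample values for x, y, and z; pthread_create plays the role of fork and
pthread_join the role of join count:

#include <pthread.h>
#include <stdio.h>

static int x = 5, y = 3, z = 7;     /* assumed sample inputs */
static int a, b, c, w;

static void *compute_b(void *arg)   /* L1: b := z + 1 */
{
    (void)arg;
    b = z + 1;
    return NULL;
}

int main(void)
{
    pthread_t tb;
    pthread_create(&tb, NULL, compute_b, NULL);  /* fork L1     */
    a = x + y;                                   /* a := x + y  */
    pthread_join(tb, NULL);                      /* join count  */
    c = a - b;                                   /* c := a - b  */
    w = c + 1;                                   /* w := c + 1  */
    printf("c = %d, w = %d\n", c, w);
    return 0;
}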
2. Concurrent Statements:
A higher-level language construct for specifying concurrency is the
parbegin/parend statement of Dijkstra [1965a], which has the following form:
parbegin S1; S2; …; Sn; parend;

(figure: S1 precedes the concurrent execution of S2, …, Sn, which must all complete before Sn+1)
begin
  parbegin
    a := x + y;
    b := z + 1;
  parend;
  c := a - b;
  w := c + 1;
end.
Illustration:
(figure: the precedence graph with nodes S1 through S7 shown earlier)
Example: copying a file f to a file g, using fork and join to overlap reading
and writing:

var f, g : file of T;
    s, t : T;
    count : integer;
begin
  reset(f);
  read(f, s);
  while not eof(f) do
    begin
      count := 2;
      t := s;
      fork L1;
      write(g, t);
      go to L2;
  L1: read(f, s);
  L2: join count;
    end;
  write(g, t);
end.
Using Concurrent Statement :
var f, g : file of T;
    s, t : T;
begin
  reset(f);
  read(f, s);
  while not eof(f) do
    begin
      t := s;
      parbegin
        write(g, t);
        read(f, s);
      parend;
    end;
  write(g, t);
end.
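A sketch of the same overlapped copy loop in C with POSIX threads, assuming
block-oriented I/O on files named input.dat and output.dat (both names are
assumptions for the example): while the main thread reads the next block
(read(f, s)), a helper thread writes the previous one (write(g, t)). The last
block is written inside the loop, so no trailing write is needed:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define BLOCK 4096

struct out_job { FILE *g; char buf[BLOCK]; size_t len; };

static void *writer(void *arg)                /* plays the role of write(g, t) */
{
    struct out_job *job = arg;
    fwrite(job->buf, 1, job->len, job->g);
    return NULL;
}

int main(void)
{
    FILE *f = fopen("input.dat", "rb");       /* assumed input file  */
    FILE *g = fopen("output.dat", "wb");      /* assumed output file */
    if (!f || !g) return 1;

    char s[BLOCK];
    size_t slen = fread(s, 1, BLOCK, f);      /* read(f, s) */

    while (slen > 0) {
        struct out_job job = { .g = g, .len = slen };
        memcpy(job.buf, s, slen);             /* t := s */

        pthread_t tw;                         /* parbegin            */
        pthread_create(&tw, NULL, writer, &job);   /* write(g, t)    */
        slen = fread(s, 1, BLOCK, f);              /* read(f, s)     */
        pthread_join(tw, NULL);               /* parend              */
    }

    fclose(f);
    fclose(g);
    return 0;
}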
PROCESS MANAGEMENT
Process Concept :
The key idea about a process is that it is an activity of some kind; it
consists of a pattern of bytes that the CPU interprets as machine instructions,
together with its data, registers, and stack.
Process hierarchies:
The operating system needs some way to create and kill processes. When a
process is created, it may in turn create other processes, which may create
still more processes, and so on, thus forming a process hierarchy or process tree.
In UNIX operating system, a process is created by the fork system call and
terminated by exit.
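As a minimal sketch of this on a POSIX system (assuming /bin/echo exists), the
following C program creates a child with fork, lets the child run a program and
exit, and has the parent wait for it:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* create a child process */
    if (pid == 0) {                     /* child */
        execl("/bin/echo", "echo", "hello from child", (char *)NULL);
        _exit(1);                       /* only reached if execl fails */
    } else if (pid > 0) {               /* parent */
        int status;
        waitpid(pid, &status, 0);       /* wait for the child to terminate */
        printf("child %d terminated\n", (int)pid);
    }
    return 0;
}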
Process States:
The lifetime of a process can be divided into several stages, called states,
each with certain characteristics that describe the process. As a process
executes, it moves from one state to another.
The operating system groups all information that it needs about a particular
process into a data structure called a Process Control Block (PCB).
It simply serves as the storage for all information about the process.
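As a rough illustration, a PCB might be represented by a structure like the
following C sketch; the fields and their sizes are illustrative assumptions,
not those of any particular operating system:

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int               pid;             /* process identifier             */
    enum proc_state   state;           /* current process state          */
    unsigned long     program_counter; /* saved program counter          */
    unsigned long     registers[16];   /* saved CPU registers            */
    void             *page_table;      /* memory-management information  */
    int               open_files[32];  /* I/O status information         */
    struct pcb       *parent;          /* link for the process tree      */
};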
A line printer driver produces characters that are consumed by the line printer.
Producer-Consumer Processes:
The producer should produce into the buffer and the consumer should consume
from the buffer. The producer can produce into one buffer slot while the
consumer is consuming from another. The producer and consumer must be
synchronized so that the consumer does not try to consume items that have not
yet been produced.
parbegin
Producer: begin
            repeat
              …..
              produce an item in nextp
              …..
              while counter = n do skip;   { wait while the buffer is full }
              buffer[in] := nextp;
              in := (in + 1) mod n;
              counter := counter + 1;
            until false;
          end;
Consumer: begin
            repeat
              while counter = 0 do skip;   { wait while the buffer is empty }
              nextc := buffer[out];
              out := (out + 1) mod n;
              counter := counter - 1;
              consume the item in nextc;
            until false;
          end;
parend;
This implementation may lead to incorrect values if concurrent execution is
uncontrolled. For example, if counter is currently 5 and the producer executes
counter := counter + 1 while the consumer concurrently executes
counter := counter - 1, the final value of counter may be 4, 5, or 6. (The
correct answer is 5, because one item was produced and one was consumed.)
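The lost update can be demonstrated directly. In the following C sketch, two
threads repeatedly execute the unprotected increment and decrement, and the
final value of counter is usually not the expected 0:

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000
static volatile long counter = 0;     /* deliberately unprotected */

static void *producer(void *arg) {
    (void)arg;
    for (long k = 0; k < ITERATIONS; k++)
        counter = counter + 1;        /* load, add, store: not atomic */
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (long k = 0; k < ITERATIONS; k++)
        counter = counter - 1;        /* interleaves with the increment */
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %ld (expected 0)\n", counter);
    return 0;
}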
Problem Definition:
Entry Section
……
Critical Section
…….
Exit Section
Remainder Section
A solution to the Mutual Exclusion (ME) problem must satisfy the following 3 requirements:
1. Mutual Exclusion: only one process at a time may be inside its critical section.
2. Progress: a process executing in its remainder section must not prevent others from entering their critical sections, and the decision about which process enters next cannot be postponed indefinitely.
3. Bounded Waiting: there is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted.
General structure :
begin
common variable decl.
parbegin
P0;
P1;
parend
end.
1. Algorithm 1:
The two processes share a variable turn (with value i or j) that indicates
whose turn it is to enter the critical section.

Pi:
repeat
  while turn ≠ i do skip;
  critical section
  turn := j;
  remainder section
until false;
Analysis:
1. Ensures only one process at a time can be in CS.
2. However it does not satisfy the progress requirement.
STRICT ALTERNATION of processes in CS.
If turn = 0 and P1 wants to enter its CS, it cannot, even though P0 is in its
remainder section.
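A sketch of Algorithm 1 for two threads in C, using a C11 atomic for the shared
variable turn; the strict alternation is visible in the entry section, and the
weakness above applies: if it is not a thread's turn, it must wait even when the
critical section is free.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int turn = 0;
static long shared = 0;

static void *worker(void *arg)
{
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        while (atomic_load(&turn) != i)   /* entry section: wait for our turn */
            ;                             /* skip */
        shared++;                         /* critical section */
        atomic_store(&turn, j);           /* exit section: turn := j */
        /* remainder section */
    }
    return NULL;
}

int main(void)
{
    int id0 = 0, id1 = 1;
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared = %ld (expected 200000)\n", shared);
    return 0;
}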
Algorithm 2:
The problem with Algorithm 1 is that it does not remember the state of each
process; it remembers only which process is allowed to enter its CS.
As a solution, each process maintains a shared boolean flag; flag[i] = true
indicates that Pi is ready to enter its critical section:

Pi:
repeat
  flag[i] := true;
  while flag[j] do skip;
  critical section
  flag[i] := false;
  remainder section
until false;
So, in this algorithm, we first set our flag[i] to be true, signaling that we
want to enter our critical section.
Then we check whether the other process also wants to enter its critical
section. If so, we wait. Then we enter our critical section.
When we exit the critical section, we set our flag to be false, allowing the
other process (if it is waiting) to enter its critical section.
Algorithm 4 :
repeat
  flag[i] := true;
  turn := j;
  while (flag[j] and turn = j) do skip;
  critical section
  flag[i] := false;
  remainder section
until false;
To enter our critical section, we first set our flag[i] to be true, and assert that it is
the other process’ turn to enter if it wants to (turn = j). If both processes try to
enter at the same time, turn will be set to both i and j at roughly the same time.
Only one of these assignments will last; the other will occur, but be immediately
overwritten. The eventual value of turn decides which of the two processes is
allowed to enter its critical section first.
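A sketch of Algorithm 4 for two threads in C, using sequentially consistent C11
atomics to stand in for the atomic loads and stores the algorithm assumes:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];     /* both initially false */
static atomic_int  turn;
static long shared = 0;

static void *worker(void *arg)
{
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);                 /* flag[i] := true */
        atomic_store(&turn, j);                       /* turn := j       */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                         /* busy wait       */
        shared++;                                     /* critical section */
        atomic_store(&flag[i], false);                /* exit section     */
        /* remainder section */
    }
    return NULL;
}

int main(void)
{
    int id0 = 0, id1 = 1;
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared = %ld (expected 200000)\n", shared);
    return 0;
}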
Hardware Solutions :
Many machines provide special hardware instructions that allow one either to test and
modify the content of a word, or to swap the contents of two words, in one memory
cycle. These special instructions can be used to solve the critical section problem. Let
us abstract the main concepts behind these types of instructions by defining the
Test-and-Set instruction as follows:

function Test-and-Set(var target : boolean) : boolean;
begin
  Test-and-Set := target;
  target := true;
end;

The important characteristic is that these instructions are executed atomically; that is,
in one memory cycle. Thus if two Test-and-Set instructions are executed
simultaneously (each on a different CPU), they will be executed sequentially in some
arbitrary order.
If the machine supports the Test-and-Set instruction, then mutual
exclusion can be implemented by declaring a boolean variable lock, initialized to
false, and having each process execute:

repeat
  while Test-and-Set(lock) do skip;
  critical section
  lock := false;
  remainder section
until false;
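C11 exposes essentially this operation as atomic_flag_test_and_set, which sets a
flag and returns its previous value in one atomic step. The following sketch
uses it to build the loop above:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* lock initially false */
static long shared = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        while (atomic_flag_test_and_set(&lock))  /* while Test-and-Set(lock) do skip */
            ;
        shared++;                                /* critical section */
        atomic_flag_clear(&lock);                /* lock := false    */
        /* remainder section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, NULL);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared = %ld (expected 200000)\n", shared);
    return 0;
}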
N-Process Software Solutions:
Semaphores:
The solutions to the mutual exclusion problem presented in the last section are
not easy to generalize to more complex problems.
A semaphore S is an integer variable accessed only through two atomic
operations, P and V, defined (with busy waiting) as:

P(S): while S ≤ 0 do skip;
      S := S - 1;
V(S): S := S + 1;
repeat
P(mutex);
critical section
V(mutex);
remainder section
until false;
Consider two concurrently running processes: P1 with a statement S1, and P2
with a statement S2. Suppose that we require that S2 be executed only after S1 has
completed. This scheme can be readily implemented by letting P1 and P2 share a
common semaphore synch, initialized to 0, and by inserting the statements:
S1;
V(synch); in process P1, and the statements
P(synch);
S2;
in process P2. Since synch is initialized to 0, P2 will execute S2 only after
P1 has invoked V(synch), which is after S1.
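A sketch of this scheme in C using POSIX semaphores, where sem_wait plays the
role of P, sem_post plays the role of V, and synch is initialized to 0:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t synch;

static void *p1(void *arg)
{
    (void)arg;
    printf("S1\n");          /* S1 */
    sem_post(&synch);        /* V(synch) */
    return NULL;
}

static void *p2(void *arg)
{
    (void)arg;
    sem_wait(&synch);        /* P(synch): blocks until P1 has executed S1 */
    printf("S2\n");          /* S2 */
    return NULL;
}

int main(void)
{
    sem_init(&synch, 0, 0);  /* synch initialized to 0 */
    pthread_t t1, t2;
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}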
Further Example:
Semaphore definition without busy-waiting:
Each semaphore has an integer value and a list of processes. When a process must
wait on a semaphore, it is added to the list of processes. A V operation removes one
process from the list of waiting processes and awakens it.
P(S): S.value := S.value - 1;
      if S.value < 0
      then begin
             add this process to S.L;
             block;
           end;

V(S): S.value := S.value + 1;
      if S.value ≤ 0
      then begin
             remove a process P from S.L;
             wakeup(P);
           end;
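A sketch of such a blocking semaphore in C, built from a pthread mutex and
condition variable; it uses a slightly different but equivalent formulation in
which value never becomes negative, and the condition variable's wait queue
plays the role of the list S.L:

#include <pthread.h>

struct semaphore {
    int             value;        /* S.value */
    pthread_mutex_t m;
    pthread_cond_t  q;            /* its wait queue stands in for S.L */
};

static void sem_init_value(struct semaphore *s, int initial)
{
    s->value = initial;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->q, NULL);
}

static void P(struct semaphore *s)
{
    pthread_mutex_lock(&s->m);
    while (s->value == 0)                 /* nothing available:            */
        pthread_cond_wait(&s->q, &s->m);  /* join S.L and block            */
    s->value = s->value - 1;
    pthread_mutex_unlock(&s->m);
}

static void V(struct semaphore *s)
{
    pthread_mutex_lock(&s->m);
    s->value = s->value + 1;
    pthread_cond_signal(&s->q);           /* remove one process from S.L   */
    pthread_mutex_unlock(&s->m);          /* and wake it up                */
}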
The table shows a sequence of 14 states of 6 processes invoking P and V
operations on a semaphore s. Initially, none of the processes is in its
critical section.
State   Process          Operation   In CS   Waiting in S.L   s
  1     A                P(s)        A       -                1
  2     A                V(s)        -       -                0
  3     B                P(s)        B       -                1
  4     C                P(s)        B       C                0
  5     D                P(s)        B       C,D              0
  6     E                P(s)        B       C,D,E            0
  7     B                V(s)        D       C,E              0
  8     F                P(s)        D       C,E,F            0
  9     D                V(s)        -       C,F              0
 11     none of A,…,F    -           E       C,F              0
 12     E                V(s)        F       C                0
 13     F                V(s)        C       -                0
P(s): if s = 1
      then s := 0                       /* lower semaphore */
      else begin
             BLOCK calling process on s;
             DISPATCH a ready process
           end