Chapter 2 OS

The document provides an overview of processes and process scheduling in operating systems, detailing the definition of a process, its memory structure, and various states it can be in. It explains the role of the Process Control Block (PCB) and the types of schedulers (long-term, short-term, medium-term) that manage process execution. Additionally, it discusses scheduling algorithms, their criteria, and examples of calculating waiting and turnaround times for processes.


Process and Process Scheduling

Processes
 Process
 Process Memory
 Process State
 Process Control Block
 A program in execution
 An instance of a program running on a computer
 The entity that can be assigned to and executed on a processor
 A unit of activity characterized by
 A single sequential thread of execution
 A current state
 An associated set of system resources: memory image, open files,
locks, etc.
When a program is loaded into memory and becomes a process, its memory image is divided into four sections: stack, heap, text, and data.
 Program is a passive entity, such as a file
containing a list of instructions stored on disk
(often called an executable file).
 In contrast, a Process is an active entity, with a
program counter specifying the next instruction
to execute and a set of associated resources.
 When a process executes, it passes through different
states. The state of a process is defined in part by the
current activity of that process.
 These stages may differ in different operating systems,
and the names of these states are also not standardized.
 A Process may be in one of the following states:
1. New
2. Ready
3. Running
4. Waiting
5. Terminated
 New - The process is being created.
 Ready - The process is waiting to be assigned to a processor.
 Running - Instructions are being executed.
 Waiting - The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
 Terminated - The process has finished execution.

Process Control Block
Each process is represented in the operating
system by a process control block (PCB)—also
called a task control block. It contains many
pieces of information associated with a specific
process, including these:
 Process state: The state may be new, ready, running
and so on
 Program counter: It indicates the address of the
next instruction to be executed for this program.
 CPU registers: These vary in number and type
based on architecture. They include accumulators,
stack pointers, general purpose registers etc.
 CPU scheduling information: This includes process
priority, pointers to scheduling queues and any
scheduling parameters.
 Memory-management information: This includes the
value of base and limit registers (protection) and page
tables, segment tables depending on memory.
 Accounting information: It includes the amount of CPU and real time used, account numbers, process numbers, etc.
 I/O status information: It includes the list of I/O devices allocated to this process, a list of open files, etc.
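The PCB fields listed above can be sketched as a data structure. This is a minimal Python illustration; real kernels use C structs, and all field names here are illustrative, not taken from any particular OS:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Identification and process state
    pid: int
    state: str = "new"          # new, ready, running, waiting, terminated
    # Execution context
    program_counter: int = 0    # address of the next instruction to execute
    registers: dict = field(default_factory=dict)
    # CPU-scheduling information
    priority: int = 0
    # Memory-management information (base/limit protection registers)
    base_register: int = 0
    limit_register: int = 0
    # Accounting information
    cpu_time_used: int = 0
    # I/O status information
    open_files: list = field(default_factory=list)

pcb = PCB(pid=42, priority=3)
pcb.state = "ready"
print(pcb.state)  # ready
```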
 A process is a program that performs a single thread of
execution. This single thread of control allows the process to
perform only one task at a time.
 Most modern operating systems have extended the process
concept to allow a process to have multiple threads of
execution and thus to perform more than one task at a time.
 This feature is especially beneficial on multicore systems,
where multiple threads can run in parallel.
 On a system that supports threads, the PCB is expanded to
include information for each thread. Other changes
throughout the system are also needed to support threads.
Process Scheduling Queues:
 Job Queue
 Ready Queue
 Device Queue

 Job queue – set of all processes in the system.


 Ready queue –
 set of all processes residing in main memory, ready and
waiting to execute.
 This queue is generally stored as a linked list.
 A ready-queue header contains pointers to the first and
final PCBs in the list. Each PCB includes a pointer field
that points to the next PCB in the ready queue.
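The ready-queue organization described above, a header with pointers to the first and final PCBs and a next-pointer field in each PCB, can be sketched as follows (a minimal illustration; class and field names are hypothetical):

```python
class PCB:
    """Stripped-down PCB carrying only what the queue needs."""
    def __init__(self, pid):
        self.pid = pid
        self.next = None  # pointer to the next PCB in the ready queue

class ReadyQueue:
    """Queue header: pointers to the first and final PCBs in the list."""
    def __init__(self):
        self.head = None
        self.tail = None

    def enqueue(self, pcb):
        # Link the new PCB onto the tail of the queue.
        if self.tail is None:
            self.head = self.tail = pcb
        else:
            self.tail.next = pcb
            self.tail = pcb

    def dequeue(self):
        # The CPU is allocated to the process at the head of the queue.
        pcb = self.head
        if pcb is not None:
            self.head = pcb.next
            if self.head is None:
                self.tail = None
            pcb.next = None
        return pcb

rq = ReadyQueue()
for pid in (1, 2, 3):
    rq.enqueue(PCB(pid))
print(rq.dequeue().pid)  # 1
```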
 Device queues –
 Set of processes waiting for an I/O device.
 When a process is allocated the CPU, it executes for a
while and eventually quits, is interrupted, or waits for the
occurrence of a particular event, such as the completion
of an I/O request.
 Suppose the process makes an I/O request to a shared
device, such as a disk.
 Since there are many processes in the system, the disk may be busy with the I/O request of some other process. The process therefore may have to wait for the disk.
Queueing-Diagram representation of process scheduling
 In Queueing diagram=>
 Each rectangular box represents a queue.
 Two types of Queues present:
 Ready queue
 Set of device queues
 The circles represent the resources that serve the queues
 the arrows indicate the flow of processes in the system.
 When the process is allocated the CPU and is
executing, one of several events could occur:
1) The process could issue an I/O request and then be placed in an I/O queue.
2) The process could create a new subprocess and wait for
the subprocess's termination.
3)The process could be removed forcibly from the CPU,
as a result of an interrupt, and be put back in the ready
queue.
 A scheduler is a component of the operating system that determines which process should be executed and when.

 Three Types of Schedulers:
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler
[Diagram: the long-term scheduler moves jobs from the job pool on disk into main memory; the short-term scheduler dispatches processes from main memory to the CPU.]
 Long-Term Scheduler:
 Selects jobs from the job queue and loads them into the ready queue.
 The long-term scheduler executes much less frequently.
 The long-term scheduler may need to be invoked only when a process leaves the system.
 Because of the longer interval between executions, the long-term scheduler can afford to take more time to decide which process should be selected for execution.
 Controls the degree of multiprogramming.
 Must balance CPU-bound and I/O-bound processes.
 Mostly used in batch systems.
 An I/O-bound process is one that spends more of its time doing
I/O than it spends doing computations.
 A CPU-bound process, in contrast, generates I/O requests
infrequently, using more of its time doing computations.
 The long-term scheduler should select a good process mix of I/O-bound and CPU-bound processes.
 If all processes are I/O bound, the ready queue will almost always
be empty, and the short-term scheduler will have little to do.
 If all processes are CPU bound, the I/O waiting queue will
almost always be empty, devices will go unused, and again the
system will be unbalanced.
 The system with the best performance will thus have a
combination of CPU-bound and I/O-bound processes.
 Short-Term Scheduler:
 Also called the CPU scheduler.
 Selects a process from the ready queue and allocates the CPU to it.
 The short-term scheduler must select a new process for the CPU frequently.
 The short-term scheduler executes at least once every 100 milliseconds.
 Because of the short time between executions, the short-term scheduler must be fast.
Medium-Term Scheduler:
 Time-sharing systems introduced an additional, intermediate level of scheduling: the medium-term scheduler.
 Removes processes from memory (and from active contention for the CPU) and thus reduces the degree of multiprogramming.
 Later, the process can be reintroduced into memory, and its
execution can be continued where it left off. This scheme is called
swapping.
 The process is swapped out, and is later swapped in, by the
medium-term scheduler.
 Swapping may be necessary to improve the process mix or
because a change in memory requirements has overcommitted
available memory, requiring memory to be freed up.
 Process scheduling is an essential part of a
Multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into the
executable memory at a time and the loaded process shares
the CPU using time multiplexing.
 A typical process involves both I/O time and CPU time.
 In a uniprogramming system like MS-DOS, time
spent waiting for I/O is wasted and CPU is free during
this time.
 In multiprogramming systems, one process can use
CPU while another is waiting for I/O. This is possible only
with process scheduling.
 Process execution consists of a cycle of CPU bursts and I/O bursts; a process alternates between these two states.
 Process execution begins with a CPU burst, which is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on.
 A CPU burst is the period when the process is executing on the CPU; an I/O burst is the period when the process is waiting for I/O before further execution.
 CPU-scheduling decisions may take place under the following
four circumstances:
1. When a process switches from the running state to the
waiting state (for example, as the result of an I/O request or
an invocation of wait() for the termination of a child process).
2. When a process switches from the running state to the ready
state (for example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready
state (for example, at completion of I/O)
4. When a process terminates.
 For conditions 1 and 4 there is no choice - A new process
must be selected.
 For conditions 2 and 3 there is a choice - To either continue
running the current process, or select a different one.
 If scheduling takes place only under conditions 1 and 4,
the system is said to be non-preemptive, or
cooperative. Under these conditions, once a process starts
running it keeps running, until it either voluntarily blocks or
until it finishes. Otherwise the system is said to be
preemptive.
There are several different criteria to consider when trying to
select the "best" scheduling algorithm for a particular
situation and environment, including:
 CPU utilization - keeping the CPU as busy as possible.
 Throughput - The number of processes that are completed per
time unit.
 For long processes, this rate may be one process per hour; for
short transactions, it may be ten processes per second.
 Turnaround time - The interval from the time of submission of a
process to the time of completion is the turnaround time.
 Periods spent waiting to get into memory + waiting in
the ready queue + executing on the CPU + I/O.
 Waiting time - amount of time that a process spends waiting in
the ready queue.
 Waiting time is the sum of the periods spent waiting in the
ready queue.
 Response time - time from the submission of a request until the
first response is produced.
 This measure is the time it takes to start responding, not
the time it takes to output the response.
 The first-come, first-served(FCFS) is the simplest
scheduling algorithm.
 The process that requests the CPU first is allocated the
CPU first.
 The implementation of the FCFS policy is easily managed
with a FIFO queue.
 When a process enters the ready queue, its PCB is linked
onto the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the queue.
 The running process is then removed from the queue.
 On the negative side, the average waiting time under the
FCFS policy is often quite long.
 The average waiting time is not minimal and may vary
substantially if the process’s CPU burst times vary
greatly.
 The FCFS scheduling algorithm is nonpreemptive.
Waiting Time = Service Time - Arrival Time
Turnaround Time = Waiting Time + Burst Time
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time

Example 1 => Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:
Process | Burst Time
P1      | 24
P2      | 3
P3      | 3

For all processes the arrival time is 0.


 A Gantt chart is a horizontal bar chart developed as a
production control tool in 1917 by Henry L. Gantt, an
American engineer and social scientist.
 Gantt chart representation:

| P1 | P2 | P3 |
0    24   27   30
Calculation:
Waiting Time (WT = ST - AT):
P1 => 0-0 = 0
P2 => 24-0 = 24
P3 => 27-0 = 27
Turnaround Time (TT = Waiting Time + Burst Time):
P1 => 0+24 = 24
P2 => 24+3 = 27
P3 => 27+3 = 30
Process | Burst Time | Waiting Time
P1      | 24         | 0-0 = 0
P2      | 3          | 24-0 = 24
P3      | 3          | 27-0 = 27

Average Waiting Time = (0+24+27)/3 = 17 milliseconds
Turnaround Time = Waiting Time + Burst Time

Process | Burst Time | Waiting Time | Turnaround Time
P1      | 24         | 0            | 0+24 = 24
P2      | 3          | 24           | 24+3 = 27
P3      | 3          | 27           | 27+3 = 30

Avg. Turnaround Time = (24+27+30)/3 = 27 ms
Example 2 => Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process | Arrival Time | Execution Time
P1      | 0            | 5
P2      | 1            | 3
P3      | 2            | 8
P4      | 3            | 6
 Gantt chart representation:

| P1 | P2 | P3 | P4 |
0    5    8    16   22
 Calculation:
Waiting Time = Service Time - Arrival Time
P1 => 0-0 = 0
P2 => 5-1 = 4
P3 => 8-2 = 6
P4 => 16-3 = 13
AWT = (0+4+6+13)/4 = 5.75 ms
Turnaround Time = Waiting Time + Burst Time
P1 => 0+5 = 5
P2 => 4+3 = 7
P3 => 6+8 = 14
P4 => 13+6 = 19
ATT = (5+7+14+19)/4 = 11.25 ms
Process | Burst Time | Waiting Time
P1      | 5          | 0-0 = 0
P2      | 3          | 5-1 = 4
P3      | 8          | 8-2 = 6
P4      | 6          | 16-3 = 13

Waiting Time = Service Time - Arrival Time
Average Waiting Time = (0+4+6+13)/4 = 5.75 milliseconds
Turnaround Time = Waiting Time + Burst Time

Process | Burst Time | Waiting Time | Turnaround Time
P1      | 5          | 0            | 0+5 = 5
P2      | 3          | 4            | 4+3 = 7
P3      | 8          | 6            | 6+8 = 14
P4      | 6          | 13           | 13+6 = 19

Avg. Turnaround Time = (5+7+14+19)/4 = 11.25 ms
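The FCFS bookkeeping used in these examples can be sketched as a small calculator. This is a hypothetical helper, assuming processes are listed in arrival order, that reproduces the Waiting Time and Turnaround Time formulas above:

```python
def fcfs(processes):
    """FCFS scheduling. processes = [(name, arrival, burst)], sorted by arrival.
    Returns {name: (waiting_time, turnaround_time)}."""
    time = 0
    results = {}
    for name, arrival, burst in processes:
        start = max(time, arrival)         # CPU may idle until the process arrives
        completion = start + burst
        turnaround = completion - arrival  # Turnaround Time = Completion - Arrival
        waiting = turnaround - burst       # Waiting Time = Turnaround - Burst
        results[name] = (waiting, turnaround)
        time = completion
    return results

# Example 2 from the text: AWT = 5.75 ms, ATT = 11.25 ms
procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)]
r = fcfs(procs)
awt = sum(w for w, _ in r.values()) / len(r)
att = sum(t for _, t in r.values()) / len(r)
print(awt, att)  # 5.75 11.25
```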
Example 3 => Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process | Arrival Time | Execution Time
P1      | 0            | 10
P2      | 0            | 29
P3      | 0            | 3
P4      | 0            | 7
P5      | 0            | 12

| P1 | P2 | P3 | P4 | P5 |
0    10   39   42   49   61

Average Waiting Time = (0+10+39+42+49)/5 = 28 ms
Avg. Turnaround Time = (10+39+42+49+61)/5 = 40.2 ms
Example 4 => Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process | Arrival Time | Execution Time
P1      | 0            | 5
P2      | 2            | 3
P3      | 4            | 8
P4      | 5            | 15

| P1 | P2 | P3 | P4 |
0    5    8    16   31

Average Waiting Time = (0+3+4+11)/4 = 4.5 ms
Avg. Turnaround Time = ((5-0)+(8-2)+(16-4)+(31-5))/4 = (5+6+12+26)/4 = 12.25 ms
 Convoy effect => If processes with longer burst times arrive before processes with shorter burst times, the shorter processes must wait a long time for the longer processes to release the CPU.
 The FCFS is particularly troublesome for time-sharing
systems, where it is important that each user get a share of the
CPU at regular intervals.
 It would be disastrous to allow one process to keep the
CPU for an extended period.
 The FCFS scheduling algorithm is nonpreemptive.
Once the CPU has been allocated to a process, that
process keeps the CPU until it releases the CPU,
either by terminating or by requesting I/O.
 Shortest-job-first (SJF) scheduling algorithm associates
with each process the length of the process’s next CPU
burst.
 When the CPU is available, it is assigned to the process
that has the smallest next CPU burst. If the next CPU
bursts of two processes are the same, FCFS scheduling is
used to break the tie.
 The SJF algorithm can be either preemptive or non-
preemptive.
 A more appropriate term for this scheduling method would
be Shortest-Next-CPU-Burst algorithm. Because
scheduling depends on the length of the next CPU burst
of a process rather than its total length.
 Easy to implement in Batch systems where required CPU
time is known in advance.
 Impossible to implement in interactive systems where
required CPU time is not known.
Example 1 => Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process | Burst Time
P1      | 18
P2      | 2
P3      | 2
P4      | 6

Arrival time is not mentioned, so take 0 for all processes.
 Gantt chart representation:

| P2 | P3 | P4 | P1 |
0    2    4    10   28
 Calculation:
Waiting Time = ST - AT:
P1 = 10-0 = 10
P2 = 0-0 = 0
P3 = 2-0 = 2
P4 = 4-0 = 4
AWT = (10+0+2+4)/4 = 4 ms
Turnaround Time = WT + BT:
P1 = 10+18 = 28
P2 = 0+2 = 2
P3 = 2+2 = 4
P4 = 4+6 = 10
ATT = (28+2+4+10)/4 = 11 ms
Process | Burst Time | Waiting Time
P1      | 18         | 10-0 = 10
P2      | 2          | 0-0 = 0
P3      | 2          | 2-0 = 2
P4      | 6          | 4-0 = 4

Average Waiting Time = (10+0+2+4)/4 = 4 milliseconds
Turnaround Time = Waiting Time + Burst Time

Process | Burst Time | Waiting Time | Turnaround Time
P1      | 18         | 10           | 10+18 = 28
P2      | 2          | 0            | 0+2 = 2
P3      | 2          | 2            | 2+2 = 4
P4      | 6          | 4            | 4+6 = 10

Avg. Turnaround Time = (28+2+4+10)/4 = 11 ms
Example 2 => Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process | Arrival Time | Burst Time
P1      | 0            | 7
P2      | 2            | 4
P3      | 4            | 1
P4      | 5            | 4
 Gantt chart representation:

| P1 | P3 | P2 | P4 |
0    7    8    12   16
 Calculation:
WT = ST - AT
P1 = 0-0 = 0
P2 = 8-2 = 6
P3 = 7-4 = 3
P4 = 12-5 = 7
AWT = (0+6+3+7)/4 = 4 ms
Process | Arrival Time | Burst Time | Waiting Time
P1      | 0            | 7          | 0-0 = 0
P2      | 2            | 4          | 8-2 = 6
P3      | 4            | 1          | 7-4 = 3
P4      | 5            | 4          | 12-5 = 7

Average Waiting Time = (0+6+3+7)/4 = 4 milliseconds
Process | AT | BT | WT       | Turnaround Time
P1      | 0  | 7  | 0-0 = 0  | 0+7 = 7
P2      | 2  | 4  | 8-2 = 6  | 6+4 = 10
P3      | 4  | 1  | 7-4 = 3  | 3+1 = 4
P4      | 5  | 4  | 12-5 = 7 | 7+4 = 11

Avg. Turnaround Time = (7+10+4+11)/4 = 8 ms
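The non-preemptive SJF procedure used above can be sketched as follows. This is a hypothetical helper; as the text describes, ties are broken FCFS (by arrival):

```python
def sjf(processes):
    """Non-preemptive SJF: pick the ready process with the shortest burst.
    processes = [(name, arrival, burst)].
    Returns {name: (waiting_time, turnaround_time)}."""
    remaining = sorted(processes, key=lambda p: p[1])  # order by arrival
    time = 0
    results = {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                      # CPU idles until the next arrival
            time = min(p[1] for p in remaining)
            continue
        # Shortest next CPU burst; FCFS (earlier arrival) breaks ties.
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        time += burst
        turnaround = time - arrival
        results[name] = (turnaround - burst, turnaround)
        remaining.remove((name, arrival, burst))
    return results

# Example 2 from the text: AWT = 4 ms, ATT = 8 ms
r = sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(sum(w for w, _ in r.values()) / 4)  # 4.0
```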
Preemptive SJF scheduling is sometimes called shortest-remaining-time-first (SRTF) scheduling.

Example 1 => Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process | Arrival Time | Burst Time
P1      | 0            | 7
P2      | 2            | 4
P3      | 4            | 1
P4      | 5            | 4
 Gantt chart representation:

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

(Remaining times: P1: 7→5→0, P2: 4→2→0, P3: 1→0, P4: 4→0)
 Calculation:
WT = (time process last resumes) - (ms already executed) - (arrival time)
P1 = 11-2-0 = 9
P2 = 5-2-2 = 1
P3 = 4-0-4 = 0
P4 = 7-0-5 = 2
AWT = (9+1+0+2)/4 = 3 ms
Turnaround Time = WT + BT
Process | Arrival Time | Burst Time | Waiting Time
P1      | 0            | 7          | 11-2-0 = 9
P2      | 2            | 4          | 5-2-2 = 1
P3      | 4            | 1          | 4-4 = 0
P4      | 5            | 4          | 7-5 = 2

Average Waiting Time = (9+1+0+2)/4 = 3 milliseconds
Process | AT | BT | WT          | Turnaround Time
P1      | 0  | 7  | 11-2-0 = 9  | 9+7 = 16
P2      | 2  | 4  | 5-2-2 = 1   | 1+4 = 5
P3      | 4  | 1  | 4-4 = 0     | 0+1 = 1
P4      | 5  | 4  | 7-5 = 2     | 2+4 = 6

Avg. Turnaround Time = (16+5+1+6)/4 = 7 ms
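The preemptive (SRTF) variant can be sketched as a millisecond-by-millisecond simulation. This is a hypothetical helper, not the only way to implement it, that reproduces Example 1 above:

```python
def srtf(processes):
    """Shortest-remaining-time-first, simulated 1 ms at a time.
    processes = [(name, arrival, burst)].
    Returns {name: (waiting_time, turnaround_time)}."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: at for name, at, _ in processes}
    completion = {}
    time = 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                 # no process has arrived yet; CPU idles
            time += 1
            continue
        # Run the ready process with the least remaining time for 1 ms;
        # earlier arrival breaks ties.
        n = min(ready, key=lambda p: (remaining[p], arrival[p]))
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            completion[n] = time
            del remaining[n]
    return {n: (completion[n] - arrival[n] - burst, completion[n] - arrival[n])
            for n, _, burst in processes}

# Example 1 from the text: AWT = 3 ms, ATT = 7 ms
r = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(sum(w for w, _ in r.values()) / 4)  # 3.0
```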
 Example 3:

Process | Arrival Time | Execution Time
P1      | 0            | 6
P2      | 0            | 8
P3      | 0            | 7
P4      | 0            | 3

| P4 | P1 | P3 | P2 |
0    3    9    16   24
Problems with SJF Scheduling:
 The real difficulty is knowing the length of the next CPU burst.
 Although the SJF algorithm is optimal, it cannot be implemented at the level of short-term scheduling.
 There is no way to know the length of the next CPU burst.
One approach is:
 To try to approximate SJF scheduling.
 We may not know the length of the next CPU burst, but we may be able to predict this value. We may expect that the next CPU burst will be similar in length to the previous ones.
 Thus, by computing an approximation of the length of the next CPU burst, we can pick the process with the shortest predicted CPU burst.
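The standard way to compute this approximation is an exponential average of the measured burst lengths, τ_{n+1} = α·t_n + (1 − α)·τ_n. The sketch below assumes α = 0.5 and an initial guess of 10 ms as example values:

```python
def predict_next_burst(history, alpha=0.5, initial_guess=10.0):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n,
    where t_n is the measured length of the n-th CPU burst.
    Recent bursts weigh more heavily than older ones."""
    tau = initial_guess
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
    return tau

print(predict_next_burst([6, 4, 6, 4]))  # 5.0
```

With alpha = 0 the prediction ignores measured history entirely; with alpha = 1 only the most recent burst counts.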
 The SJF algorithm is a special case of the general priority
scheduling algorithm.
 A priority is associated with each process, and the CPU is
allocated to the process with the highest priority. Equal
priority processes are scheduled in FCFS order.
 The SJF algorithm is simply a priority algorithm where the
priority (p) is the inverse of the (predicted) next CPU burst.
 The larger the CPU burst, the lower the priority, and vice
versa. This algorithm can be either preemptive or non-preemptive.
 In practice, priorities are implemented using integers within a
fixed range, but there is no agreed-upon convention as to whether
"high" priorities use large numbers or small numbers.
Example 1 => Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process | BT | Priority
P1      | 10 | 3
P2      | 1  | 1
P3      | 2  | 4
P4      | 1  | 5
P5      | 5  | 2

Arrival time is 0 for all.

 Gantt chart representation:

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
 Calculation:
Waiting Time = ST - AT:
P1 = 6-0 = 6; P2 = 0-0 = 0; P3 = 16-0 = 16; P4 = 18-0 = 18; P5 = 1-0 = 1
AWT = (6+0+16+18+1)/5 = 8.2 ms
Turnaround Time = WT + BT:
P1 = 6+10 = 16; P2 = 0+1 = 1; P3 = 16+2 = 18; P4 = 18+1 = 19; P5 = 1+5 = 6
ATT = (16+1+18+19+6)/5 = 12 ms
Process | Burst Time | WT
P1      | 10         | 6-0 = 6
P2      | 1          | 0-0 = 0
P3      | 2          | 16-0 = 16
P4      | 1          | 18-0 = 18
P5      | 5          | 1-0 = 1

Average Waiting Time = (6+0+16+18+1)/5 = 8.2 ms
Process | Burst Time | Waiting Time | Turnaround Time
P1      | 10         | 6            | 6+10 = 16
P2      | 1          | 0            | 0+1 = 1
P3      | 2          | 16           | 16+2 = 18
P4      | 1          | 18           | 18+1 = 19
P5      | 5          | 1            | 1+5 = 6

Avg. Turnaround Time = (16+1+18+19+6)/5 = 12 ms
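Non-preemptive priority scheduling with all arrivals at time 0, as in Example 1, can be sketched as follows (a hypothetical helper; a smaller number means higher priority, matching the example):

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling, all processes arriving at time 0.
    processes = [(name, burst, priority)]; smaller priority number = higher priority.
    Returns {name: (waiting_time, turnaround_time)}."""
    time = 0
    results = {}
    # Dispatch in priority order; each process waits for all higher-priority work.
    for name, burst, _ in sorted(processes, key=lambda p: p[2]):
        waiting = time
        time += burst
        results[name] = (waiting, time)  # turnaround = completion, since AT = 0
    return results

# Example 1 from the text: AWT = 8.2 ms, ATT = 12 ms
procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
r = priority_schedule(procs)
print(sum(w for w, _ in r.values()) / 5)  # 8.2
```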
Example 2 => Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process | AT | BT | Priority
P1      | 0  | 10 | 3
P2      | 2  | 3  | 1
P3      | 3  | 9  | 2
P4      | 3  | 5  | 4

(Non-preemptive) Gantt chart:

| P1 | P2 | P3 | P4 |
0    10   13   22   27
Example 2 (Preemptive) => The same set of processes, scheduled with preemptive priority scheduling:

Process | AT | BT | Priority
P1      | 0  | 10 | 3
P2      | 2  | 3  | 1
P3      | 3  | 9  | 2
P4      | 3  | 5  | 4

 Gantt chart representation:

| P1 | P2 | P3 | P1 | P4 |
0    2    5    14   22   27
Calculation:
WT = (time process last resumes) - (ms already executed) - AT
P1 = 14-2-0 = 12
P2 = 2-0-2 = 0
P3 = 5-0-3 = 2
P4 = 22-0-3 = 19

TT = WT + BT
Process | BT | AT | WT
P1      | 10 | 0  | 14-2-0 = 12
P2      | 3  | 2  | 2-2 = 0
P3      | 9  | 3  | 5-3 = 2
P4      | 5  | 3  | 22-3 = 19

Average Waiting Time = (12+0+2+19)/4 = 8.25 ms
Process | Burst Time | Waiting Time | Turnaround Time
P1      | 10         | 12           | 12+10 = 22
P2      | 3          | 0            | 0+3 = 3
P3      | 9          | 2            | 2+9 = 11
P4      | 5          | 19           | 19+5 = 24

Avg. Turnaround Time = (22+3+11+24)/4 = 15 ms
Priority (Preemptive)

Process | AT | BT | Priority
P1      | 0  | 8  | 3
P2      | 1  | 1  | 1
P3      | 2  | 3  | 2
P4      | 3  | 2  | 3
P5      | 4  | 6  | 4

| P1 | P2 | P3 | P1 | P4 | P5 |
0    1    2    5    12   14   20
 Priorities can be assigned either internally or
externally.
Internal priorities are assigned by the OS using criteria such
as average burst time, ratio of CPU to I/O activity, system
resource use, and other factors available to the kernel.
External priorities are assigned by users, based on the
importance of the job, fees paid, politics, etc.
 Priority scheduling can suffer from a major problem known
as indefinite blocking, or starvation, in which a low-
priority task can wait forever because there are always some
other jobs around that have higher priority.
 If this problem is allowed to occur, then processes will either run
eventually when the system load lightens, or will eventually
get lost when the system is shut down or crashes. (There are
rumors of jobs that have been stuck for years.)
 One common solution to this problem is aging, in which
priorities of jobs increase the longer they wait.
 Under this scheme a low-priority job will eventually get its
priority raised high enough that it gets run.
 Priority scheduling can be either preemptive or non-
preemptive.
When a process arrives at the ready queue, its priority is
compared with the priority of the currently running
process.
A preemptive priority scheduling algorithm will preempt
the CPU if the priority of the newly arrived process is
higher than the priority of the currently running process.
A nonpreemptive priority scheduling algorithm will
simply put the new process at the head of the ready
queue.
 The round-robin (RR) scheduling algorithm is designed
especially for timesharing systems.
 Round-robin scheduling is similar to FCFS scheduling, except that each process is assigned a limited slice of CPU time, called a time quantum.
 When a process is given the CPU, a timer is set for whatever
value has been set for a time quantum.
 If the process finishes its CPU burst before the time-quantum timer expires, it releases the CPU voluntarily, just as in the normal FCFS algorithm.
 If the timer goes off first, then the process is swapped out
of the CPU and moved to the back end of the ready queue.
 The ready queue is maintained as a circular queue, so
when all processes have had a turn, then the scheduler
gives the first process another turn, and so on.
 RR scheduling can give the effect of all processes
sharing the CPU equally, although the average wait
time can be longer than with other scheduling algorithms.
Example 1 => Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process | Burst Time
P1      | 18
P2      | 2
P3      | 2
P4      | 6

Time quantum = 3
 Gantt chart representation (Time Quantum = 3):

| P1 | P2 | P3 | P4 | P1 | P4 | P1 |
0    3    5    7    10   13   16   28
 Calculation:
WT = (time process last resumes) - (ms already executed) - AT
P1 = 16-3-3-0 = 10
P2 = 3-0-0 = 3
P3 = 5-0-0 = 5
P4 = 13-3-0 = 10
Process | BT | WT
P1      | 18 | 16-3-3-0 = 10
P2      | 2  | 3-0 = 3
P3      | 2  | 5-0 = 5
P4      | 6  | 13-3-0 = 10

Average Waiting Time = (10+3+5+10)/4 = 7 ms
Process | Burst Time | Waiting Time | Turnaround Time
P1      | 18         | 10           | 10+18 = 28
P2      | 2          | 3            | 3+2 = 5
P3      | 2          | 5            | 5+2 = 7
P4      | 6          | 10           | 10+6 = 16

Avg. Turnaround Time = (28+5+7+16)/4 = 14 ms
Example 2 => Consider the following set of processes, with the length of the CPU burst given in milliseconds. Time Slice = 10.

Process | AT | BT
P1      | 0  | 10
P2      | 0  | 29
P3      | 0  | 3
P4      | 0  | 7
P5      | 0  | 12
 Gantt chart representation (Time Slice = 10):

| P1 | P2 | P3 | P4 | P5 | P2 | P5 | P2 |
0    10   20   23   30   40   50   52   61
 Calculation:

Process | BT | WT
P1      | 10 | 0-0 = 0
P2      | 29 | 52-10-10-0 = 32
P3      | 3  | 20-0 = 20
P4      | 7  | 23-0 = 23
P5      | 12 | 50-10-0 = 40

Average Waiting Time = (0+32+20+23+40)/5 = 23 ms
Process | BT | WT | Turnaround Time
P1      | 10 | 0  | 0+10 = 10
P2      | 29 | 32 | 32+29 = 61
P3      | 3  | 20 | 20+3 = 23
P4      | 7  | 23 | 23+7 = 30
P5      | 12 | 40 | 40+12 = 52

Avg. Turnaround Time = (10+61+23+30+52)/5 = 35.2 ms
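The circular ready-queue behaviour of RR can be sketched with a FIFO deque. This is a hypothetical helper for the all-arrive-at-time-0 case used in Example 2:

```python
from collections import deque

def round_robin(processes, quantum):
    """RR scheduling, all processes arriving at time 0.
    processes = [(name, burst)]; the ready queue is a FIFO treated as circular.
    Returns {name: (waiting_time, turnaround_time)}."""
    queue = deque(processes)
    remaining = dict(processes)
    completion = {}
    time = 0
    while queue:
        name, _ = queue.popleft()
        run = min(quantum, remaining[name])  # run for one quantum, or less if done
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = time
        else:
            queue.append((name, remaining[name]))  # back of the ready queue
    burst = dict(processes)
    # Turnaround = completion (arrival is 0); waiting = turnaround - burst.
    return {n: (completion[n] - burst[n], completion[n]) for n, _ in processes}

# Example 2 from the text: AWT = 23 ms, ATT = 35.2 ms
r = round_robin([("P1", 10), ("P2", 29), ("P3", 3), ("P4", 7), ("P5", 12)],
                quantum=10)
print(sum(w for w, _ in r.values()) / 5)  # 23.0
```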
 In the RR scheduling algorithm, no process is allocated the
CPU for more than 1 time quantum in a row (unless it is
the only runnable process).
 If a process’s CPU burst exceeds 1 time quantum, that process is preempted and is put back in the ready queue. The RR scheduling algorithm is thus preemptive.
 The performance of RR is sensitive to the time quantum selected. If the quantum is large enough, RR reduces to the FCFS algorithm; if it is very small, each process appears to get 1/n of the processor time and the processes share the CPU equally.
 BUT, a real system incurs overhead for every context switch, and the smaller the time quantum, the more context switches there are.
 Turnaround time also depends on the size of the time quantum. In general, average turnaround time is minimized if most processes finish their next CPU burst within one time quantum.
 A thread is a basic unit of CPU utilization.
 It comprises a thread ID, a program counter, a register set, and a stack.
 It shares its code section, data section, and other operating-system resources, such as open files and signals, with other threads belonging to the same process.

 A traditional process has a single thread of control. If a process has multiple threads of control, it can perform more than one task at a time.
 Many software packages that run on modern desktop PCs
are multithreaded.
 For example:
 A word processor may have:
 A thread for displaying graphics,
 Another thread for responding to keystrokes from
the user, and
 A third thread for performing spelling and grammar
checking in the background.
 Threads also play a vital role in remote procedure
call(RPC) systems.
 RPCs allows interprocess communication by providing
a communication mechanism similar to ordinary function or
procedure calls.
 Many operating system kernels are multithreaded;
several threads operate in the kernel, and each thread
performs a specific task, such as managing devices or
interrupt handling.
Benefits:
 Responsiveness:
 Multithreading an interactive application may allow a program to continue running even if part of it is blocked, thereby increasing responsiveness to the user.
 For example: a multithreaded web browser could still allow user interaction in one thread while an image is being loaded in another thread.
 Resource sharing:
 By default, threads share the memory and resources of the process to which they belong. The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space.
Benefits:
 Economy:
 Allocating memory and resources for process creation is costly. Since threads share the resources of the process to which they belong, they provide a cost-effective solution.
 Utilization of multiprocessor architectures:
 In a multiprocessor architecture, threads may be running in parallel on different processors.
 A single-threaded process can run on only one CPU, no matter how many are available.
 Multithreading on a multi-CPU machine increases concurrency.
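The resource-sharing benefit can be seen in a short sketch: threads created within one process read and write the same global data directly. This is a minimal Python illustration; the lock is needed because concurrent updates to shared data are not atomic:

```python
import threading

counter = 0                 # shared data in the process's address space
lock = threading.Lock()

def worker(n):
    """Each thread shares the process's globals; the lock guards updates."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

# Several threads of activity within the same address space.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```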
 Support for threads may be provided either at the user level or at the kernel level.
 Two types of threads:
User thread
Kernel thread
 User threads are supported above the kernel and are
managed without kernel support, whereas kernel
threads are supported and managed directly by the
operating system.
User Thread
 Thread Management done by a thread library.
 Library provides-
-Thread Creation
-Scheduling and
-Management
 User threads are fast to create and manage.
 User-thread libraries-
-POSIX – Pthreads
-Mach C-threads
-Solaris 2 UI-threads
Kernel Thread
 supported and managed directly by the operating system.
 Kernel threads are slower to create and manage than user
threads.
 Examples-
Windows NT
Windows 2000
Solaris 2
BeOS
Tru64 Unix
 Many-to-One Model:
 The many-to-one model maps many
user-level threads to one kernel
thread.
 Thread management is done by the
thread library in user space, so
it is efficient.
 Entire process blocks if a thread
makes blocking system call.
 Only one thread can access the
kernel at a time, hence multiple
threads are unable to run in
parallel on multiprocessors.
 E.g. Green threads, available for Solaris.
 One-to-One Model:
 The one-to-one model maps each
user thread to a kernel thread.
 It provides more concurrency
than the many-to-one model. It
allows multiple threads to run in
parallel on multiprocessors.
 The only drawback to this model is
that creating a user thread requires
creating the corresponding kernel
thread.
 The overhead of creating kernel
threads can burden the
performance of an application.
 E.g. Linux, Windows 95, 98, NT,
2000
 Many-to-Many Model:
 The many-to-many model
multiplexes many user-level
threads to a smaller or equal
number of kernel threads.
The number of kernel threads may be specific to either a particular application or a particular machine.
 Developers can create as many
user threads as necessary, and
the corresponding kernel
threads can run in parallel
on a multiprocessor