Unit 3 Part Xyz (Opnotes)
Uploaded by Varun Tyagi

Operating System 1 (RCS-401)

Unit- III CPU Scheduling

1.1 Process Concept:

1.1.1 Process:

 A process is an executing program, including the current values of the program
counter, registers, and variables.
 The difference between a process and a program is that a program is a static group of
instructions, whereas a process is the activity of executing them.
 A process can be described as:
 I/O-bound process: spends more time doing I/O than computation.
 CPU-bound process: spends more time doing computation than I/O.

1.1.2 Process States:

Start : The process has just arrived in the system.
Ready : The process is waiting to be assigned to the processor.
Running : The process has been allocated the processor and is executing.
Waiting : The process is blocked, for example doing I/O work.
Halted : The process has finished execution and is about to leave the system.
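The legal moves between these states can be captured in a small transition table. This is a hypothetical sketch (real kernels track more states and transitions than the five listed above):

```python
# Hypothetical transition table for the five states above;
# real kernels use more states and transitions.
TRANSITIONS = {
    "start":   {"ready"},                       # admitted by the scheduler
    "ready":   {"running"},                     # dispatched onto the CPU
    "running": {"ready", "waiting", "halted"},  # preempted, blocked, or finished
    "waiting": {"ready"},                       # I/O completed
    "halted":  set(),                           # terminal state
}

def can_move(src, dst):
    return dst in TRANSITIONS.get(src, set())

print(can_move("running", "waiting"))  # True: a running process may block on I/O
print(can_move("waiting", "running"))  # False: it must pass through ready first
```

Note that a waiting process never goes straight back to running; it re-enters the ready queue and competes for the CPU again.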

1.1.3 Process/Task Control Block (PCB):

In the OS, each process is represented by its PCB (Process Control Block). The PCB generally
contains the following information:

• Process State: The state may be new, ready, running, waiting, halted, and so on.
• Process ID : each process has a unique ID.
• Program Counter (PC) value: The counter indicates the address of the next instruction to be
executed for this process.
• Register values: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-purpose
registers, plus any condition-code information. Along with the program counter, this state
information must be saved when an interrupt occurs, to allow the process
to be continued correctly afterward.
• Memory Management Information (page tables, base/bound registers, etc.)
• Processor Scheduling Information (priority, last processor burst time, etc.)
• I/O Status Info (outstanding I/O requests, I/O devices held, etc.)
• List of Open Files
• Accounting Info: This information includes the amount of CPU and
real time used, time limits, account numbers, job or process numbers, and so on.
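A minimal PCB can be sketched as a record type. The field names below are illustrative choices of mine, not taken from any particular kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block; field names are hypothetical."""
    pid: int                                        # unique process ID
    state: str = "new"                              # new, ready, running, waiting, halted
    program_counter: int = 0                        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved register values
    priority: int = 0                               # scheduling information
    open_files: list = field(default_factory=list)  # list of open files
    cpu_time_used: float = 0.0                      # accounting information

pcb = PCB(pid=42)
pcb.state = "ready"            # the OS updates the PCB as the process changes state
print(pcb.pid, pcb.state)      # 42 ready
```

On a context switch, the kernel saves the running process's PC and registers into its PCB and restores them from the PCB of the process being dispatched.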

1.1.4 Operations on process:

A. Process Creation

• A parent process creates child processes, which in turn may create other processes, forming a
tree of processes.
• Resource sharing possibilities:
  o Parent and children share all resources.
  o Children share a subset of the parent's resources.
  o Parent and child share no resources.
• Execution possibilities:
  o Parent and children execute concurrently.
  o Parent waits until children terminate.

B. Process Termination

• A process executes its last statement and asks the operating system to delete it (exit):
  o Output data from the child may be returned to the parent (via wait).
  o The process's resources are deallocated by the operating system.
• A parent may terminate the execution of its children (abort) when:
  o The child has exceeded its allocated resources.
  o The task assigned to the child is no longer required.
  o The parent itself is exiting; some operating systems do not allow a child to continue if its
    parent terminates.

1.1.5 Process Address Space:

 The process address space consists of the linear address range presented to each process.
Each process is given a flat 32- or 64-bit address space, with the size depending on the
architecture. The term "flat" describes the fact that the address space exists in a single
range. (As an example, a 32-bit address space extends from address 0 to 4294967295, i.e. 2^32 − 1.)
 Some operating systems provide a segmented address space, with addresses existing not
in a single linear range, but instead in multiple segments. Modern virtual memory
operating systems generally have a flat memory model and not a segmented one.

 A memory address is a given value within the address space, such as 4021f000. The
process can access a memory address only in a valid memory area. Memory areas have
associated permissions, such as readable, writable, and executable, that the associated
process must respect. If a process accesses a memory address not in a valid memory area,
or if it accesses a valid area in an invalid manner, the kernel kills the process with the
dreaded "Segmentation Fault" message.

 Memory areas can contain all sorts of goodies, such as

o A memory map of the executable file's code, called the text section.
o A memory map of the executable file's initialized global variables, called the data
section.
o A memory map of the zero page (a page consisting of all zeros) containing
uninitialized global variables, called the bss section.
o A memory map of the zero page used for the process's user-space stack (do not
confuse this with the process's kernel stack, which is separate and maintained and
used by the kernel)
o An additional text, data, and bss section for each shared library, such as the C
library and dynamic linker, loaded into the process's address space.
o Any memory mapped files
o Any shared memory segments
o Any anonymous memory mappings, such as those associated with malloc().

1.1.6 Process Identification Information

 A Process Identifier (process ID or PID) is a number used by most operating
system kernels (such as those of UNIX, Mac OS X, or Microsoft Windows) to
temporarily and uniquely identify a process.
 This number may be used as a parameter in various function calls allowing
processes to be manipulated, such as adjusting the process's priority or killing it
altogether.
 In Unix-like operating systems, new processes are created by the fork() system
call. The PID is returned to the parent enabling it to refer to the child in further
function calls. The parent may, for example, wait for the child to terminate with
the waitpid() function, or terminate the process with kill().
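On a Unix-like system, this parent–child PID handshake can be demonstrated directly. This is a sketch: `os.fork()` is Unix-only (unavailable on Windows), and the exit status 7 is an arbitrary choice:

```python
import os

pid = os.fork()            # Unix-only: returns 0 in the child, the child's PID in the parent
if pid == 0:
    os._exit(7)            # child terminates immediately with status 7
else:
    child, status = os.waitpid(pid, 0)   # parent blocks until the child terminates
    exit_code = os.WEXITSTATUS(status)   # extract the child's exit status
    print(child == pid, exit_code)       # True 7
```

The PID returned by fork() is exactly the handle the parent passes to waitpid() (or to kill() to terminate the child).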

1.2 Threads:

1.2.1 Introduction to Thread:

 A thread is a basic unit of CPU utilization; it comprises a thread ID, a program
counter, a register set, and a stack. It shares with other threads belonging to the
same process its code section, data section, and other operating-system resources,
such as open files and signals.
 A traditional process has a single thread of control. If a process has multiple
threads of control, it can perform more than one task at a time.

1.2.2 Benefits of Threads:

A. Responsiveness. Multithreading an interactive application may allow a program to continue
running even if part of it is blocked or is performing a lengthy operation.
B. Resource sharing. Threads share the memory and the resources of the process to which they
belong. The benefit of sharing code and data is that it allows an application to have several
different threads of activity within the same address space.
C. Economy. Allocating memory and resources for process creation is costly. Because threads
share the resources of the process to which they belong, it is more economical to create and
context-switch threads.
D. Utilization of multiprocessor architectures. The benefits of multithreading can be greatly
increased in a multiprocessor architecture, where threads may be running in parallel on different
processors. A single threaded process can only run on one CPU, no matter how many are
available. Multithreading on a multi-CPU machine increases concurrency.
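The resource-sharing benefit can be seen directly with Python's threading module: all threads update the same object because they live in the same address space. A lock guards the increments so no update is lost:

```python
import threading

counter = {"n": 0}             # lives in the process's address space, shared by all threads
lock = threading.Lock()

def work():
    for _ in range(10_000):
        with lock:             # serialize updates to the shared counter
            counter["n"] += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                   # wait for all worker threads to finish

print(counter["n"])            # 40000: every thread saw and updated the same memory
```

With separate processes, each worker would get its own copy of `counter` and explicit inter-process communication would be needed to combine the results.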

1.2.3 Multithreading Models (Management of Threads):

Threads may be provided either at the user level, for user threads, or by the kernel, for
kernel threads. User threads are supported above the kernel and are managed without kernel
support, whereas kernel threads are supported and managed directly by the operating system.
There must exist a relationship between user threads and kernel threads. There are three common
ways of establishing this relationship.

A. Many-to-One Model:

The many-to-one model maps many user-level threads to one kernel thread. Thread
management is done by the thread library in user space, so it is efficient; but the
entire process will block if a thread makes a blocking system call. Also, because
only one thread can access the kernel at a time, multiple threads are unable to run
in parallel on multiprocessors.

B. One-to-One Model:

The one-to-one model maps each user thread to a kernel thread. It provides more
concurrency than the many-to-one model by allowing another thread to run when a
thread makes a blocking system call; it also allows multiple threads to run in
parallel on multiprocessors. The only drawback to this model is that creating a
user thread requires creating the corresponding kernel thread.

C. Many-to-Many Model:

The many-to-many model multiplexes many user-level threads to a smaller or equal number of
kernel threads. The many-to-one model lets a developer create any number of user threads but
gains no real concurrency, while the one-to-one model gives concurrency at the cost of one
kernel thread per user thread. The many-to-many model suffers from neither of these
shortcomings: developers can create as many user threads as necessary, and the corresponding
kernel threads can run in parallel on a multiprocessor. Also, when a thread performs a blocking
system call, the kernel can schedule another thread for execution.

1.3 CPU Scheduling Concept:

The main objective of CPU scheduling is to maximize CPU utilization. The following
concepts underlie process scheduling:

1.3.1 CPU-I/O Burst Cycle:


The success of CPU scheduling depends on an observed property of processes: process
execution consists of a cycle of CPU execution and I/O wait, and processes alternate between
these two states. Process execution begins with a CPU burst, which is followed by an I/O burst,
then another CPU burst, another I/O burst, and so on. Eventually, the final CPU burst ends with
a system request to terminate execution.

1.3.2 Scheduling Queues: A scheduling queue is generally stored as a linked list. A ready-queue
header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field
that points to the next PCB in the ready queue.
 Job queue – set of all processes in the system

 Ready queue – set of all processes residing in main memory, ready and waiting to
execute

 Device queues – set of processes waiting for an I/O device

 Processes migrate among the various queues

1.3.3 Schedulers: A process migrates among the various scheduling queues throughout its
lifetime. The operating system must select, for scheduling purposes, processes from these queues
in some fashion. The selection process is carried out by the appropriate scheduler. Schedulers
have two types:
1. Long-Term Scheduler (LTS):
   A. Selects which processes should be brought into the ready queue.
   B. Invoked very infrequently (seconds, minutes).
   C. Controls the degree of multiprogramming (the number of processes in memory).
2. Short-Term Scheduler (STS):
   A. Selects which process should be executed next and allocates the CPU.
   B. Invoked very frequently (milliseconds).

Medium-Term Scheduler:
Some operating systems, such as time-sharing systems, may introduce an additional,
intermediate level of scheduling. The idea behind the medium-term scheduler is that it can
sometimes be advantageous to remove processes from memory (and from active contention for
the CPU) and thus reduce the degree of multiprogramming. Later, the process can be
reintroduced into memory, and its execution can be continued where it left off. This scheme is
called swapping.

1.3.4 Dispatcher:

Dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler. This function involves:

 Switching Context
o When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process.
o Context-switch time is overhead; the system does no useful work while switching.
o The time required depends on hardware support.
 Switching to user mode
 Jumping to the proper location in the user program to restart that program.

The dispatcher should be as fast as possible, given that it is invoked during every process switch.
The time it takes for the dispatcher to stop one process and start another running is known as
dispatch latency.

Q. Write a short note on Preemptive Scheduling & Non-Preemptive Scheduling.

Ans. Preemptive scheduling: The CPU can be taken away from a running process, for example
when a higher-priority process arrives. Under priority-based preemptive scheduling, the
highest-priority process is always the one currently holding the CPU.

Non-Preemptive scheduling: Once a process enters the running state, it keeps the CPU until it
terminates or switches to the waiting state.

Scheduling decisions may take place when a process (1) switches from running to waiting,
(2) switches from running to ready, (3) switches from waiting to ready, or (4) terminates.
When scheduling occurs only under circumstances 1 and 4, the scheduling is non-preemptive;
otherwise it is preemptive.

1.3.5 Scheduling Performance Criteria:

 CPU Utilization: We want to keep the CPU as busy as possible. Conceptually, CPU
utilization can range from 0 to 100 percent. In a real system, it should range from 40
percent (for a lightly loaded system) to 90 percent (for a heavily used system).

Processor Utilization = (Processor Busy Time / (Processor Busy Time + Processor Idle time))*100

 Throughput: the number of processes that are completed per unit of time is called the
throughput.

Throughput = No. of Process Completed / Time Unit

 Turnaround Time: The amount of time required to execute a particular process is called the
turnaround time.

Turnaround Time = T(Process Completed) – T(Process Submitted)

 Waiting Time: the amount of time that a process spends waiting in the ready queue.
Waiting Time = Turnaround Time – Processing Time

 Response Time: the time from the submission of a request until the first response is
produced is called the response time.
Response Time = T(First Response) – T(submission of request)

 Optimization Criteria:
 Max CPU utilization
 Max throughput
 Min turnaround time
 Min waiting time
 Min response time

1.3.6 Scheduling Algorithms:

A. First-Come, First-Served Scheduling
B. Shortest-Job-First Scheduling
C. Priority Scheduling
D. Round-Robin Scheduling
E. Multilevel Queue Scheduling
F. Multilevel Feedback Queue Scheduling

A. First-Come, First-Served Scheduling (FCFS):


With this scheme, the process that requests the CPU first is allocated the CPU first. The
implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters
the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated
to the process at the head of the queue. The running process is then removed from the queue.
Example:
Processes p1, p2, p3, p4, p5 have arrival times of 0, 2, 3, 5, 8 microseconds and processing
times of 3, 3, 1, 4, 2 microseconds. Draw the Gantt chart & calculate Average Turn Around Time,
Average Waiting Time, CPU Utilization & Throughput using FCFS.

Processes  Arrival Time  Processing Time  T.A.T. [T(P.C.)-T(P.S.)]  W.T. [TAT-T(Proc.)]
P1 0 3 3-0=3 3-3=0
P2 2 3 6-2=4 4-3=1
P3 3 1 7-3=4 4-1=3
P4 5 4 11-5=6 6-4=2
P5 8 2 13-8=5 5-2=3
GANTT CHART:
P1 P2 P3 P4 P5
0 3 6 7 11 13
Average T.A.T. =(3+4+4+6+5)/5 = 22/5 = 4.4 Microsecond
Average W.T. = (0+1+3+2+3)/5 =9/5 = 1.8 Microsecond
CPU Utilization = (13/13)*100 = 100%

Throughput = 5/13 ≈ 0.38 processes per microsecond
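The FCFS figures above can be reproduced with a short simulation. This is a sketch; the function and tuple layout are my own choices, not a standard API:

```python
def fcfs(procs):
    """procs: list of (name, arrival, burst); returns {name: (turnaround, waiting)}."""
    time, out = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):  # serve in arrival order
        time = max(time, arrival) + burst      # CPU runs the whole burst to completion
        out[name] = (time - arrival, time - arrival - burst)
    return out

jobs = [("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 1), ("P4", 5, 4), ("P5", 8, 2)]
res = fcfs(jobs)
avg_tat = sum(t for t, _ in res.values()) / len(res)
avg_wt = sum(w for _, w in res.values()) / len(res)
print(res["P1"], avg_tat, avg_wt)   # (3, 0) 4.4 1.8
```

The per-process results match the table: turnaround and waiting times of (3,0), (4,1), (4,3), (6,2), (5,3) for P1 through P5.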

B. Shortest-Job-First Scheduling (SJF):

 Associate with each process the length of its next CPU burst. Use these lengths to
schedule the process with the shortest time
 Two schemes:
i. Non-preemptive – once the CPU is given to the process, it cannot be preempted until it
completes its CPU burst.
ii. Preemptive – if a new process arrives with a CPU burst length less than the remaining
time of the currently executing process, preempt. This scheme is known as
Shortest-Remaining-Time-First (SRTF).
 SJF is optimal – gives minimum average waiting time for a given set of processes
Example:
Processes p1, p2, p3, p4 have burst times of 6, 8, 7, 3 microseconds. Draw the Gantt chart &
calculate Average Turn Around Time, Average Waiting Time, CPU Utilization & Throughput
using SJF.
Processes  Burst Time  T.A.T. [T(P.C.)-T(P.S.)]  W.T. [TAT-T(Proc.)]
P4 3 3-0=3 3-3=0
P1 6 9-0=9 9-6=3
P3 7 16-0=16 16-7=9
P2 8 24-0=24 24-8=16

GANTT CHART
P4 P1 P3 P2
0 3 9 16 24
Average T.A.T. = (3+9+16+24)/4 = 52/4 = 13 microseconds
Average W.T. = (0+3+9+16)/4 = 28/4 = 7 microseconds
CPU Utilization = (24/24)*100 = 100%
Throughput = 4/24 ≈ 0.17 processes per microsecond
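The same table can be checked with a small non-preemptive SJF simulation, assuming all four processes arrive at time 0 (function and field names are my own):

```python
def sjf(bursts):
    """bursts: {name: burst}, all arriving at t=0; returns {name: (turnaround, waiting)}."""
    time, out = 0, {}
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):  # shortest burst first
        time += burst
        out[name] = (time, time - burst)    # TAT = completion time; WT = TAT - burst
    return out

res = sjf({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(res)  # {'P4': (3, 0), 'P1': (9, 3), 'P3': (16, 9), 'P2': (24, 16)}
avg_tat = sum(t for t, _ in res.values()) / 4   # 13.0
avg_wt = sum(w for _, w in res.values()) / 4    # 7.0
```

With all arrivals at t=0, turnaround time equals completion time, which is why sorting by burst length minimizes the average waiting time.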

C. Priority Scheduling:

 A priority number (integer) is associated with each process

 The CPU is allocated to the process with the highest priority (here, the smallest integer
means the highest priority).
 Problem: Starvation – low-priority processes may never execute.
 Solution: Aging – as time progresses, increase the priority of waiting processes.

Example: Processes p1, p2, p3, p4, p5 have burst times of 10, 1, 2, 1, 5 microseconds and
priorities 3, 1, 4, 5, 2. Draw the Gantt chart & calculate Average Turn Around Time, Average
Waiting Time, CPU Utilization & Throughput using Priority Scheduling.

Processes  Priority  Processing Time  T.A.T. [T(P.C.)-T(P.S.)]  W.T. [TAT-T(Proc.)]
P2 1 1 1-0=1 1-1=0
P5 2 5 6-0=6 6-5=1
P1 3 10 16-0=16 16-10=6
P3 4 2 18-0=18 18-2=16
P4 5 1 19-0=19 19-1=18

GANTT CHART:
P2 P5 P1 P3 P4
0 1 6 16 18 19

Average T.A.T. = (1+6+16+18+19)/5 = 60/5 = 12 microseconds
Average W.T. = (0+1+6+16+18)/5 = 41/5 = 8.2 microseconds
CPU Utilization = (19/19)*100 = 100%
Throughput = 5/19 ≈ 0.26 processes per microsecond
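This schedule can also be reproduced with a non-preemptive priority simulation, assuming all five processes arrive at time 0; the smaller priority number runs first (a sketch with hypothetical names):

```python
def priority_sched(procs):
    """procs: {name: (priority, burst)}, all at t=0; smaller priority number runs first."""
    time, out = 0, {}
    for name, (prio, burst) in sorted(procs.items(), key=lambda kv: kv[1][0]):
        time += burst
        out[name] = (time, time - burst)    # (turnaround, waiting)
    return out

res = priority_sched({"P1": (3, 10), "P2": (1, 1), "P3": (4, 2),
                      "P4": (5, 1), "P5": (2, 5)})
avg_tat = sum(t for t, _ in res.values()) / 5   # 12.0
avg_wt = sum(w for _, w in res.values()) / 5    # 8.2
print(res["P2"], res["P4"])   # (1, 0) (19, 18)
```

Note how P4, the lowest-priority process, accumulates the longest waiting time; this is the pattern that grows into starvation when higher-priority work keeps arriving.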

D. Round-Robin Scheduling:
 Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds.
After this time has elapsed, the process is preempted and added to the end of the ready
queue.
 If there are n processes in the ready queue and the time quantum is q, then each process
gets 1/n of the CPU time in chunks of at most q time units at once. No process waits
more than (n-1)q time units.
 Used for time sharing & multiuser O.S.

Example:
Processes p1, p2, p3 have processing times of 24, 3, 3 milliseconds. Draw the Gantt chart &
calculate Average Turn Around Time, Average Waiting Time, CPU Utilization & Throughput
using Round Robin with a time slice of 4 milliseconds.

Processes  Processing Time  T.A.T. [T(P.C.)-T(P.S.)]  W.T. [TAT-T(Proc.)]
P1 24 30-0=30 30-24=6
P2 3 7-0=7 7-3=4
P3 3 10-0=10 10-3=7

GANTT CHART
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

Average T.A.T. = (30+7+10)/3 = 47/3 ≈ 15.67 milliseconds
Average W.T. = (6+4+7)/3 = 17/3 ≈ 5.67 milliseconds
CPU Utilization = (30/30)*100 = 100%
Throughput = 3/30 = 0.1 processes per millisecond
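The round-robin Gantt chart above can be generated with a queue-based simulation. This sketch assumes all processes arrive at t=0 (names are my own):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: list of (name, burst), all arriving at t=0; returns {name: (tat, wt)}."""
    total = dict(bursts)
    queue = deque(bursts)
    time, done = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)               # run one quantum or until the burst ends
        time += run
        if rem > run:
            queue.append((name, rem - run))   # preempted: back to the tail of the queue
        else:
            done[name] = (time, time - total[name])
    return done

res = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(res)  # {'P2': (7, 4), 'P3': (10, 7), 'P1': (30, 6)}
```

The completion order P2, P3, P1 and the times 7, 10, 30 match the Gantt chart: P1 runs one quantum, P2 and P3 finish within their first quantum, and P1 then runs its remaining 20 milliseconds in five more quanta.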

E. Multilevel Queue Scheduling

1. The ready queue is partitioned into separate queues.
2. The processes are permanently assigned to one queue based on some property of the
   process, such as memory size, process priority, or process type.
3. Each queue has its own scheduling algorithm.
4. Scheduling must also be done between the queues:
   o Fixed-priority scheduling: possibility of starvation.
   o Time-slice (RR) scheduling: each queue gets a certain amount of CPU time which
     it can schedule amongst its processes.

F. Multilevel Feedback Queue Scheduling

1. A process can move between the various queues.
2. A process using too much CPU time will be moved to a lower-priority queue.
3. A process waiting too long in a lower-priority queue may be moved to a higher-priority
   queue.
4. This form of aging prevents starvation.

5. Example: in an MFQ scheduler with 3 queues, the scheduler first executes all the
processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1.
Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
6. Three queues:
Q0 – time quantum 8 milliseconds(RR)
Q1 – time quantum 16 milliseconds(RR)
Q2 – FCFS
7. Scheduling:
1. A new job enters queue Q0. When it gains the CPU, the job receives 8 milliseconds. If it
does not finish in 8 milliseconds, the job is moved to the tail of queue Q1.
2. If queue Q0 is empty, the process at the head of Q1 is given a quantum of 16
additional milliseconds. If it still does not complete, it is preempted and moved to
queue Q2. Processes in queue Q2 are run on an FCFS basis only when queues Q0 and
Q1 are empty.
A multilevel-feedback-queue scheduler is defined by the following parameters:
1. the number of queues
2. the scheduling algorithm for each queue
3. the method used to determine when to upgrade a process
4. the method used to determine when to demote a process
5. the method used to determine which queue a process will enter when it needs service

Multiple-Processor Scheduling:
CPU scheduling becomes more complex when multiple CPUs are available.
1. Homogeneous multiprocessor system: Processors are identical in terms of
functionality; any available processor can be used to run any process in the queue, so load
sharing can be done. To avoid load imbalance, all processes can be placed in one common
queue and scheduled onto any available processor.
Scheduling approaches in a homogeneous MP system:
Two scheduling approaches are used.
a) Self-scheduling:
Each processor is self-scheduling: each processor examines the common ready queue
and selects a process to execute.
A problem arises when more than one processor tries to pick the same process from the
ready queue; different techniques are used to resolve this.
b) Master-Slave structure:
In this approach, one processor is appointed as the scheduler for
the other processors; the other processors only execute user code.

2. Heterogeneous multiprocessor system
Processors are different in these systems. Only a program compiled for a given processor's
instruction set can be run on that processor.
 Scheduling is simpler than in a homogeneous multiprocessor system, since each process
is bound to its matching processor.
 No load sharing between processors is needed.
