
UNIT 01: CHAPTER 3- CPU SCHEDULING

CPU SCHEDULING:

• CPU scheduling is the basis of multiprogrammed operating systems.
• CPU scheduling allows one process to use the CPU while the execution of another process is on hold (in the waiting state).
• By switching the CPU among processes, the operating system can make the computer more productive.

CPU-I/O Burst Cycle


The success of CPU scheduling depends on the following observed property of
processes:
• Process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states.
• Process execution begins with a CPU burst. That is followed by an I/O burst, then another CPU burst, then another I/O burst, and so on.
• Eventually, the last CPU burst ends with a system request to terminate execution rather than with another I/O burst.
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed. The selection process is carried out
by the short-term scheduler (or CPU scheduler). The scheduler selects a
process from the processes in memory that are ready to execute and allocates the
CPU to that process.
CPU scheduling is of two types:
1. Preemptive Scheduling
2. Non-Preemptive Scheduling

CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, I/O
request, or invocation of wait for the termination of one of the child processes)
2. When a process switches from the running state to the ready state (for example, when
an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example,
completion of I/O)
4. When a process terminates

Preemptive (switching from the running state to the ready state, or from the waiting state to the ready state)
• When scheduling takes place only under circumstances 2 and 3, we say the scheduling scheme is preemptive.
• Preemptive scheduling means that once a process has started its execution, the currently running process can be paused to handle another process of higher priority; that is, control of the CPU can be preempted from one process and given to another when required.
Example: Round Robin

Non-preemptive (terminates or switches from running to waiting)


• When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is non-preemptive.
• Non-preemptive scheduling means that once a process starts its execution, or the CPU is processing a specific process, it cannot be halted; in other words, we cannot preempt (take control of) the CPU from that process. Example: FCFS
• Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
CPU Scheduler in OS
• Long-term Scheduler
The function of the long-term scheduler is to select jobs from the job pool and load them into main memory (the ready queue); hence the long-term scheduler is also called the job scheduler.

• Short-term Scheduler
The function of the short-term scheduler is to select a process from the ready queue and give control of the CPU to that process with the help of the dispatcher. That is why the short-term scheduler is also called the CPU scheduler. How the short-term scheduler selects a process from the ready queue depends on the scheduling algorithm.
• Medium-term Scheduler
If a process requests I/O in the middle of execution, it is removed from main memory and placed in the waiting queue. When the I/O operation completes, the process is moved from the waiting queue back to the ready queue. These two operations are performed by the medium-term scheduler.

Scheduling queues

1. Job Queue

2. Ready Queue

3. Waiting Queue: When a process needs an I/O operation in order to complete its execution, the OS changes the state of the process from running to waiting. The context (PCB) associated with the process is stored on the waiting queue and is used by the processor once the process finishes its I/O.

Dispatcher
• A dispatcher is a special program that comes into play after the scheduler; the dispatcher runs once the scheduler has made its selection.
• It gives control of the CPU to the process selected by the short-term scheduler.
• The main function of the dispatcher is switching, meaning switching the CPU from one process to another.
• Another function of the dispatcher is jumping to the proper location in the user program so it is ready to start execution. The time taken by the dispatcher to stop one process and start another running is called the dispatch latency.
• The degree of multiprogramming depends on the dispatch latency: if the dispatch latency increases, the degree of multiprogramming decreases.

Scheduling Criteria
1. CPU utilization: We want to keep the CPU as busy as possible. Conceptually,
CPU utilization can range from 0 to 100 percent. In a real system, it should range
from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used
system).
2. Throughput: If the CPU is busy executing processes, then work is being done.
One measure of work is the number of processes that are completed per time
unit, called throughput. For long processes, this rate may be one process per
hour; for short transactions, it may be 10 processes per second.
3. Turnaround time : From the point of view of a particular process, the
important criterion is how long it takes to execute that process. The interval
from the time of submission of a process to the time of completion is the
turnaround time. Turnaround time is the sum of the periods spent waiting to get
into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time: The CPU scheduling algorithm does not affect the amount of
time during which a process executes or does I/O; it affects only the amount of
time that a process spends waiting in the ready queue. Waiting time is the sum
of the periods spent waiting in the ready queue.
5. Response time: In an interactive system, turnaround time may not be the
best criterion. Often, a process can produce some output fairly early and can
continue computing new results while previous results are being output to the
user. Thus, another measure is the time from the submission of a request until
the first response is produced. This measure, called response time, is the time it
takes to start responding, not the time it takes to output the response.
The response time is generally limited by the speed of the output device. It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time.
Scheduling Algorithms:
A Process Scheduler schedules different processes to be assigned to the CPU
based on particular scheduling algorithms.
• Arrival Time: the time at which the process arrives in the ready queue.
• Completion Time: the time at which the process completes its execution.
• Burst Time: the time required by a process for CPU execution.
• Turnaround Time: the difference between completion time and arrival time.
• Waiting Time (W.T.): the difference between turnaround time and burst time.
The algorithms covered below range from FCFS and SJF to priority scheduling, Round Robin, and multiple-level queues scheduling; a small worked sketch of the metric formulas follows this paragraph.
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas preemptive scheduling is priority-based: a scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.
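As a concrete illustration of the definitions above, here is a minimal sketch (not part of the original notes; the data is taken from the SRTF example later in this chapter) that derives turnaround and waiting times from arrival, burst, and completion times:

```python
def scheduling_metrics(processes):
    """processes: dict name -> (arrival_time, burst_time, completion_time)."""
    rows = {}
    for name, (at, bt, ct) in processes.items():
        tat = ct - at   # Turnaround Time = Completion Time - Arrival Time
        wt = tat - bt   # Waiting Time = Turnaround Time - Burst Time
        rows[name] = (tat, wt)
    avg_wt = sum(wt for _, wt in rows.values()) / len(rows)
    return rows, avg_wt

rows, avg = scheduling_metrics({"P1": (0, 7, 14), "P2": (1, 3, 4), "P3": (3, 4, 8)})
print(rows, round(avg, 2))  # {'P1': (14, 7), 'P2': (3, 0), 'P3': (5, 1)} 2.67
```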

First Come First Serve (FCFS)


• Jobs are executed on a first come, first served basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on a FIFO queue.
• Poor in performance, as the average waiting time is high.
Consider a set of three processes P1, P2 and P3 arriving at time instant 0 and having CPU
burst times as shown below:
Thus, if processes with smaller CPU burst times arrive earlier, the average waiting and average turnaround times are lower. The algorithm also suffers from what is known as the convoy effect. Consider the following scenario: let there be a mix of one CPU-bound process and many I/O-bound processes in the ready queue.
The CPU-bound process gets the CPU and executes (a long CPU burst). In the meanwhile, the I/O-bound processes finish their I/O and wait for the CPU, leaving the I/O devices idle. The CPU-bound process releases the CPU when it goes for I/O. The I/O-bound processes have short CPU bursts, so they execute and go back to I/O quickly. The CPU is then idle until the CPU-bound process finishes its I/O and gets hold of the CPU again. The above cycle repeats: this is called the convoy effect.
Here small processes wait for one big process to release the CPU. Since the algorithm is non-preemptive in nature, it is not suited for timesharing systems.
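A minimal FCFS sketch follows (not from the notes; the burst times 24, 3, 3 are illustrative). It assumes the ready queue is already ordered by arrival time and shows the convoy effect directly:

```python
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time), sorted by arrival."""
    clock, schedule = 0, []
    for name, arrival, burst in processes:
        start = max(clock, arrival)   # CPU may sit idle until the process arrives
        clock = start + burst         # runs to completion: non-preemptive
        schedule.append((name, start, clock))
    return schedule

print(fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]))
# [('P1', 0, 24), ('P2', 24, 27), ('P3', 27, 30)]
# The short jobs P2 and P3 wait behind the long job P1: the convoy effect.
```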
Shortest Job First (SJF)
• This is also known as shortest job first, or SJF.
• It can be implemented as either a non-preemptive or a preemptive scheduling algorithm (the preemptive form, SRTF, is covered later).
• Best approach to minimize waiting time.
• Easy to implement in batch systems, where the required CPU time is known in advance.
• Impossible to implement in interactive systems, where the required CPU time is not known.
• The processor should know in advance how much time the process will take.
As an example, consider the following set of processes P1, P2, P3, P4 and their CPU burst times:

The SJF algorithm produces an optimal scheduling scheme: for a given set of processes, it gives the minimum average waiting and turnaround times. This is because shorter processes are scheduled earlier than longer ones, so the waiting time of the shorter processes decreases by more than the waiting time of the long processes increases.
The main disadvantage of the SJF algorithm lies in knowing the length of the next CPU burst. In the case of long-term (job) scheduling in a batch system, the time required to complete a job, as supplied by the user, can be used for scheduling; the SJF algorithm is therefore applicable to long-term scheduling.
The algorithm cannot be implemented exactly for short-term CPU scheduling, as there is no way to know the length of the next CPU burst in advance; only an approximation of the length can be used.
Even so, the SJF scheduling algorithm is provably optimal and thus serves as a benchmark against which other CPU scheduling algorithms are compared. SJF can be either preemptive or non-preemptive. In the preemptive case, if a new process joins the ready queue with a shorter next CPU burst than what remains of the currently executing process, the CPU is allocated to the new process. In the non-preemptive case, the currently executing process is not preempted, and the new process gets the next turn, it being the process with the shortest next CPU burst.
Given below are the arrival and burst times of four processes P1, P2, P3 and P4.
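Since the table itself is not reproduced here, the following minimal non-preemptive SJF sketch uses illustrative data (bursts 6, 8, 7, 3, all arriving at time 0; these numbers are assumptions, not the missing table). At each decision point it picks, among the arrived processes, the one with the shortest burst:

```python
def sjf(processes):
    """processes: list of (name, arrival_time, burst_time)."""
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival
    clock, schedule = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                                 # CPU idle until next arrival
            clock = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])          # shortest burst first
        pending.remove(job)
        schedule.append((job[0], clock, clock + job[2]))
        clock += job[2]
    return schedule

print(sjf([("P1", 0, 6), ("P2", 0, 8), ("P3", 0, 7), ("P4", 0, 3)]))
# [('P4', 0, 3), ('P1', 3, 9), ('P3', 9, 16), ('P2', 16, 24)]
```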
Priority Based Scheduling
• In its basic form, priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.
• Each process is assigned a priority. The process with the highest priority is executed first, and so on.
• Processes with the same priority are executed on a first come, first served basis.
• Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
In the following example, we will assume that lower numbers represent higher priority.
Priorities can be defined either internally or externally. Internal priorities are based on measurable factors such as memory requirements, number of open files, and so on. External priorities are defined by criteria such as the importance of the user, depending on the user's department and other influencing factors.
Priority-based algorithms can be either preemptive or non-preemptive. In preemptive scheduling, if a new process joins the ready queue with a priority higher than that of the executing process, the current process is preempted and the CPU is allocated to the new process. In the non-preemptive case, the highest-priority process among the ready processes is allocated the CPU only after the current process gives up the CPU.
Starvation, or indefinite blocking, is one of the major disadvantages of priority scheduling. In a heavily loaded system, low-priority processes in the ready queue are starved, never getting a chance to execute, because there is always a higher-priority process ahead of them in the ready queue. A solution to starvation is aging: the priority of a process waiting in the ready queue is increased gradually, so that eventually even the lowest-priority process attains the highest priority and gets a chance to execute on the CPU (see the sketch after the list below).
High priority and low priority:
• Priorities are generally some fixed range of numbers, such as 0 to 7, or 0 to 4,095.
• There is no general agreement on whether 0 is the highest or lowest priority.
• Some systems use low numbers to represent low priority; others use low numbers for high priority.
• We use low numbers to represent high priority.
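A minimal sketch of preemptive priority scheduling with aging (the data and the aging rate are hypothetical; lower number = higher priority, matching the convention above). Every few time units a process waits, its priority number is decreased, which prevents starvation:

```python
def priority_with_aging(processes, age_every=4):
    """processes: list of (name, arrival, burst, priority); 1-unit time steps."""
    remaining = {n: b for n, a, b, p in processes}
    prio = {n: p for n, a, b, p in processes}
    waited = {n: 0 for n, a, b, p in processes}
    clock, schedule = 0, []
    while any(remaining.values()):
        ready = [n for n, a, b, p in processes if a <= clock and remaining[n] > 0]
        if not ready:
            clock += 1
            continue
        run = min(ready, key=lambda n: prio[n])   # lowest number = highest priority
        remaining[run] -= 1
        schedule.append((clock, run))
        for n in ready:                           # aging: waiters slowly gain priority
            if n != run:
                waited[n] += 1
                if waited[n] % age_every == 0:
                    prio[n] = max(0, prio[n] - 1)
        clock += 1
    return schedule

print(priority_with_aging([("P1", 0, 3, 3), ("P2", 1, 2, 1), ("P3", 2, 4, 2)]))
```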

Round Robin Scheduling


• Round Robin is a preemptive process scheduling algorithm.
• Each process is provided a fixed time to execute, called a quantum.
• Once a process has executed for the given time period, it is preempted and another process executes for its time period.
• Context switching is used to save the states of preempted processes.
Consider the same example explained under the FCFS algorithm (see the sketch below).
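A minimal Round Robin sketch using the same illustrative bursts (24, 3, 3) as the FCFS sketch above, with an assumed quantum of 4:

```python
from collections import deque

def round_robin(processes, quantum=4):
    """processes: list of (name, burst_time), all arriving at time 0."""
    queue = deque(processes)
    clock, schedule = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run one quantum, or less if finishing
        schedule.append((name, clock, clock + run))
        clock += run
        if remaining > run:                  # preempted: go to the back of the queue
            queue.append((name, remaining - run))
    return schedule

print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)]))
# P1 runs 0-4, P2 4-7, P3 7-10, then P1 alone in 4-unit quanta until t = 30.
```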
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm. They make
use of other existing algorithms to group and schedule jobs with common
characteristics.
• Multiple queues are maintained for processes with common characteristics.
• Each queue can have its own scheduling algorithm.
• Priorities are assigned to each queue. For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another. The process scheduler then alternately selects jobs from each queue and assigns them to the CPU based on the algorithm assigned to that queue.
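A minimal multilevel-queue sketch (the queue setup is hypothetical): a high-priority queue served Round Robin and a low-priority queue served FCFS, with strict priority between the two queues:

```python
from collections import deque

def multilevel_queue(system_jobs, batch_jobs, quantum=2):
    """system_jobs, batch_jobs: lists of (name, burst_time), all at time 0."""
    high, low = deque(system_jobs), deque(batch_jobs)
    clock, schedule = 0, []
    while high or low:
        if high:                              # the higher queue always wins
            name, rem = high.popleft()
            run = min(quantum, rem)           # Round Robin within the high queue
            if rem > run:
                high.append((name, rem - run))
        else:
            name, run = low.popleft()         # FCFS within the low queue
        schedule.append((name, clock, clock + run))
        clock += run
    return schedule

print(multilevel_queue([("S1", 3), ("S2", 2)], [("B1", 5)]))
# [('S1', 0, 2), ('S2', 2, 4), ('S1', 4, 5), ('B1', 5, 10)]
```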
Shortest Remaining Time First (SRTF) Scheduling Algorithm
The preemptive version of Shortest Job First (SJF) scheduling is known as Shortest Remaining Time First (SRTF). Under the SRTF algorithm, the process having the smallest amount of time remaining until completion is selected to execute first.

In the SRTF scheduling algorithm, the execution of any process can be stopped after any amount of time. On the arrival of every process, the short-term scheduler selects, from the list of available and running processes, the process that has the least remaining burst time.

Once all the processes are available in the ready queue, no further preemption is done and the algorithm works the same as SJF scheduling. The context of a process is saved in its Process Control Block when it is removed from execution and the next process is scheduled; the PCB is accessed again on the next execution of the process.
Explanation

• At the 0th time unit, only process P1 has arrived, so P1 executes for 1 time unit.
• At the 1st time unit, process P2 arrives. Now P1 needs 6 more units to finish, while P2 needs only 3 units. So P2 executes first, preempting P1.
• At the 3rd time unit, process P3 arrives with a burst time of 4 units, which is more than the remaining time of P2 (1 unit), so P2 continues its execution.
• After the completion of P2, P3 needs 4 units to finish while P1 needs 6, so P3 is picked over P1 because its remaining time is smaller.
• P3 completes at time unit 8, and no new processes have arrived.
• So P1 is sent for execution again, and it completes at the 14th unit.

The arrival times and burst times of the three processes P1, P2, P3 are given in the table below. Let us calculate the turnaround time, completion time, and waiting time.

Process | Arrival Time | Burst Time | Completion Time | Turnaround Time (CT - AT) | Waiting Time (TAT - BT)
P1      | 0            | 7          | 14              | 14 - 0 = 14               | 14 - 7 = 7
P2      | 1            | 3          | 4               | 4 - 1 = 3                 | 3 - 3 = 0
P3      | 3            | 4          | 8               | 8 - 3 = 5                 | 5 - 4 = 1

Average waiting time is calculated by adding the waiting times of all processes and dividing by the number of processes.

average waiting time = (7 + 0 + 1) / 3 = 8/3 ≈ 2.67 ms
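A minimal SRTF sketch, stepping one time unit at a time and always running the arrived process with the least remaining time; the data is the three-process example from the table above:

```python
def srtf(processes):
    """processes: list of (name, arrival_time, burst_time)."""
    remaining = {n: b for n, a, b in processes}
    clock, schedule = 0, []
    while any(remaining.values()):
        ready = [n for n, a, b in processes if a <= clock and remaining[n] > 0]
        if not ready:
            clock += 1
            continue
        run = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[run] -= 1
        schedule.append((clock, run))
        clock += 1
    return schedule

timeline = srtf([("P1", 0, 7), ("P2", 1, 3), ("P3", 3, 4)])
print({n: t + 1 for t, n in timeline})
# {'P1': 14, 'P2': 4, 'P3': 8} -- completion times match the table.
```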


Longest Remaining Time First Scheduling Algorithm
Longest Remaining Time First (LRTF) scheduling is the preemptive version of Longest Job First (LJF) scheduling. This scheduling algorithm is used by the operating system to schedule incoming processes so that they can be executed in a systematic way.

With this algorithm, the process having the maximum remaining time is processed first. After every interval of time (say, 1 unit), the scheduler checks whether another process with a larger remaining burst time has arrived by that point.

In this example, four processes P1, P2, P3, P4 are given along with their arrival times and burst times.

Process | Arrival Time | Burst Time | Completion Time | Turnaround Time (CT - AT) | Waiting Time (TAT - BT)
P1      | 0            | 3          | 11              | 11 - 0 = 11               | 11 - 3 = 8
P2      | 1            | 6          | 12              | 12 - 1 = 11               | 11 - 6 = 5
P3      | 3            | 2          | 13              | 13 - 3 = 10               | 10 - 2 = 8
P4      | 5            | 3          | 14              | 14 - 5 = 9                | 9 - 3 = 6

Average waiting time is calculated by adding the waiting times of all processes and dividing by the number of processes.

average waiting time = (8 + 5 + 8 + 6) / 4 = 27/4 = 6.75 ms
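LRTF is the mirror image of the SRTF sketch above: at each time unit, run the arrived process with the largest remaining time (ties broken by process order in this sketch). Running it on the table's data reproduces the completion times:

```python
def lrtf(processes):
    """processes: list of (name, arrival_time, burst_time)."""
    remaining = {n: b for n, a, b in processes}
    clock, schedule = 0, []
    while any(remaining.values()):
        ready = [n for n, a, b in processes if a <= clock and remaining[n] > 0]
        if not ready:
            clock += 1
            continue
        run = max(ready, key=lambda n: remaining[n])  # longest remaining time
        remaining[run] -= 1
        schedule.append((clock, run))
        clock += 1
    return schedule

timeline = lrtf([("P1", 0, 3), ("P2", 1, 6), ("P3", 3, 2), ("P4", 5, 3)])
print({n: t + 1 for t, n in timeline})
# {'P1': 11, 'P2': 12, 'P4': 14, 'P3': 13} -- completion times match the table.
```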

Longest Job First Scheduling Algorithm


Longest Job First (LJF) scheduling comes under the category of non-preemptive scheduling algorithms. This algorithm keeps track of the burst times of all processes available at each scheduling point and assigns the processor to the process having the longest burst time. Once a process starts its execution, it cannot be interrupted in the middle of its processing; any other process can execute only after the assigned process has completed and terminated.

This scheduling is similar to the SJF algorithm, except that priority is given to the process having the longest burst time.

In this example, four processes P1, P2, P3, P4 are given along with their burst times and arrival times.

Explanation

1. At t = 0, only one process is available, P1, with 2 units of burst time. So select P1 and execute it for 2 ms.
2. At t = 2, i.e. after P1 has executed, the available processes are P2 and P3. The burst time of P3 is greater than that of P2, so select P3 and execute it for 5 ms.
3. At t = 7, i.e. after the execution of P3, the available processes are P2 and P4. The burst time of P4 is greater than that of P2, so select P4 and execute it for 7 ms.
4. Finally, after the completion of P4, execute process P2 for 3 ms.
Process | Arrival Time | Burst Time | Completion Time | Turnaround Time (CT - AT) | Waiting Time (TAT - BT)
P1      | 0            | 2          | 2               | 2 - 0 = 2                 | 2 - 2 = 0
P2      | 1            | 3          | 17              | 17 - 1 = 16               | 16 - 3 = 13
P3      | 2            | 5          | 7               | 7 - 2 = 5                 | 5 - 5 = 0
P4      | 3            | 7          | 14              | 14 - 3 = 11               | 11 - 7 = 4

Average waiting time is calculated by adding the waiting times of all processes and dividing by the number of processes.

average waiting time = (0 + 13 + 0 + 4) / 4 = 17/4 = 4.25 ms
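LJF is the non-preemptive mirror of the SJF sketch earlier: at each decision point, pick the arrived process with the longest burst. Run on the table's data it reproduces the schedule described in the explanation above:

```python
def ljf(processes):
    """processes: list of (name, arrival_time, burst_time)."""
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival
    clock, schedule = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:
            clock = min(p[1] for p in pending)
            continue
        job = max(ready, key=lambda p: p[2])          # longest burst first
        pending.remove(job)
        schedule.append((job[0], clock, clock + job[2]))
        clock += job[2]
    return schedule

print(ljf([("P1", 0, 2), ("P2", 1, 3), ("P3", 2, 5), ("P4", 3, 7)]))
# [('P1', 0, 2), ('P3', 2, 7), ('P4', 7, 14), ('P2', 14, 17)]
```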

Highest Response Ratio Next (HRRN) Scheduling


HRRN (Highest Response Ratio Next) scheduling is a non-preemptive scheduling algorithm in the operating system. It is one of the optimal algorithms used for scheduling.

Because HRRN is non-preemptive, if a new process arrives in memory with a burst time smaller than that of the currently executing process, the running process is not put back in the ready queue; it completes its execution without any interruption.

Response Ratio = (W + S) / S

where W is the waiting time and S is the service time, i.e. the burst time.


• At time = 0 there is no process in the ready queue, so from 0 to 1 the CPU is idle; 0 to 1 is considered CPU idle time.
• At time = 1, only process P1 is in the ready queue, so P1 executes to completion.
• After P1, at time = 4, only process P2 has arrived, so P2 executes because the operating system has no other option.
• At time = 10, processes P3, P4, and P5 are in the ready queue. To schedule the next process after P2, we need to calculate the response ratios.
• In this step, we calculate the response ratio for P3, P4, and P5.

Response Ratio = (W + S) / S

RR(P3) = [(10 - 5) + 8] / 8 = 1.625
RR(P4) = [(10 - 7) + 4] / 4 = 1.75
RR(P5) = [(10 - 8) + 5] / 5 = 1.4

From the above results, it is clear that process P4 has the highest response ratio, so P4 is scheduled after P2.

• At time t = 10, process P4 executes because of its highest response ratio.
• Now the ready queue holds two processes, P3 and P5. After the execution of P4, let us calculate the response ratios of P3 and P5.

RR(P3) = [(14 - 5) + 8] / 8 = 2.125
RR(P5) = [(14 - 8) + 5] / 5 = 2.2

From the above results, it is clear that process P5 has the highest response ratio, so P5 is scheduled after P4.

• At t = 14, process P5 executes.
• After the complete execution of P5, only P3 remains in the ready queue, so at time t = 19, P3 executes.

Process | Arrival Time | Burst Time | Completion Time | Turnaround Time (CT - AT) | Waiting Time (TAT - BT)
P1      | 1            | 3          | 4               | 4 - 1 = 3                 | 3 - 3 = 0
P2      | 3            | 6          | 10              | 10 - 3 = 7                | 7 - 6 = 1
P3      | 5            | 8          | 27              | 27 - 5 = 22               | 22 - 8 = 14
P4      | 7            | 4          | 14              | 14 - 7 = 7                | 7 - 4 = 3
P5      | 8            | 5          | 19              | 19 - 8 = 11               | 11 - 5 = 6

Average waiting time is calculated by adding the waiting times of all processes and dividing by the number of processes.

average waiting time = (0 + 1 + 14 + 3 + 6) / 5 = 24/5 = 4.8 ms
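A minimal HRRN sketch using the example above (arrivals 1, 3, 5, 7, 8; bursts 3, 6, 8, 4, 5). At each completion, the arrived process with the highest response ratio (W + S) / S runs next, non-preemptively:

```python
def hrrn(processes):
    """processes: list of (name, arrival_time, burst_time)."""
    pending = list(processes)
    clock, schedule = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:
            clock = min(p[1] for p in pending)   # CPU idle until the next arrival
            continue
        # response ratio = (waiting time + service time) / service time
        job = max(ready, key=lambda p: ((clock - p[1]) + p[2]) / p[2])
        pending.remove(job)
        schedule.append((job[0], clock, clock + job[2]))
        clock += job[2]
    return schedule

print(hrrn([("P1", 1, 3), ("P2", 3, 6), ("P3", 5, 8), ("P4", 7, 4), ("P5", 8, 5)]))
# [('P1', 1, 4), ('P2', 4, 10), ('P4', 10, 14), ('P5', 14, 19), ('P3', 19, 27)]
```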

Multiple Processors Scheduling


In multiprocessor scheduling, more than one processor (CPU) shares the load so that processes execute smoothly.

Scheduling on a multiprocessor is more complex than on a single-processor system for the following reasons:

• Load balancing is a problem, since more than one processor is present.
• Processes executing simultaneously may require access to shared data.
• Cache affinity should be considered in scheduling.

1. Symmetric Multiprocessing: In symmetric multiprocessor scheduling, the processors are self-scheduling. The scheduler for each processor checks the ready queue and selects a process to execute. All processors work on the same copy of the operating system and communicate with each other. If one of the processors goes down, the rest of the system keeps working.

o Symmetrical scheduling with a global queue: if the processes to be executed sit in a common (global) queue, the scheduler for each processor checks this global ready queue and selects a process to execute.
o Symmetrical scheduling with per-processor queues: if the processors in the system have their own private ready queues, the scheduler for each processor checks its own private queue to select a process.

2. Asymmetric Multiprocessing: In asymmetric multiprocessor scheduling, there is one master server, and the rest are slave servers. The master server handles all scheduling and I/O processing, while the slave servers handle the users' processes. If the master server goes down, the whole system comes to a halt; if one of the slave servers goes down, the rest of the system keeps working.
3. Processor Affinity

A process has an affinity for the processor on which it runs. This is called processor affinity.

• When a process runs on a processor, the data it has accessed most recently populates that processor's cache memory, so subsequent data accesses by the process are often satisfied from the cache.
• However, if the process is migrated to another processor for some reason, the contents of the first processor's cache memory become invalid, and the second processor's cache memory must be repopulated.
• To avoid the cost of invalidating and repopulating cache memory, migration of processes from one processor to another is avoided.

There are two types of processor affinity:

• Soft affinity: the system tries to keep a process running on the same processor but does not guarantee it.
• Hard affinity: the system allows a process to specify the subset of processors on which it may run, i.e., each process can be restricted to some of the processors. Systems such as Linux implement soft affinity, but they also provide system calls such as sched_setaffinity() to support hard affinity (see the sketch below).
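A minimal sketch of hard affinity on Linux, using Python's wrappers around the sched_getaffinity/sched_setaffinity system calls mentioned above (Linux-only; the CPU numbers 0 and 1 are just an illustration and assume the machine has at least two CPUs):

```python
import os

pid = 0  # 0 means "the calling process"
print("allowed CPUs before:", os.sched_getaffinity(pid))

# Pin this process to CPUs 0 and 1 only.
os.sched_setaffinity(pid, {0, 1})
print("allowed CPUs after:", os.sched_getaffinity(pid))
```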

4. Load Balancing

Load balancing is the practice of distributing the workload so that the processors in a symmetric multiprocessor system have an even workload.

In symmetric multiprocessing systems that have a global queue, load balancing is not required.

However, in systems where each processor has its own private queue, some processors may end up idle while others carry a high workload. There are two ways to solve this:

• Push migration: a specific task routinely checks the load on each processor. If the workload is unevenly distributed, it extracts (pushes) load from the overloaded processors and assigns it to idle or less busy processors.
• Pull migration: an idle processor itself extracts (pulls) load from an overloaded processor.
5. Multi-Core Processors

A multi-core processor is a single computing component composed of two or more CPUs called cores. Each core has its own register set to maintain its architectural state and thus appears to the operating system as a separate physical processor. (A processor register can hold an instruction, an address, and so on.) Since each core has its own register set, the system behaves as a multiprocessor with each core acting as a processor.

Symmetric multiprocessing systems that use multi-core processors achieve higher performance at lower energy cost.

6. Symmetric Multiprocessor

In a symmetric multiprocessor, there is a single copy of the operating system in memory, which any central processing unit can run. When a system call is made, the CPU on which it was made traps to the kernel and processes the system call. The model balances processes and memory dynamically. Each processor checks the global or its private ready queue and selects a process to execute.

Three kinds of conflicts may arise in a symmetric multiprocessor system:

• Locking system: the resources in a multiprocessor are shared among the processors. To make access to these resources safe, a locking system is required, which serializes access to the resources by the processors.
• Shared data: since multiple processors may access the same data at any given time, the data may not be consistent across all of them. To avoid this, we must use some kind of strategy or locking scheme.
• Cache coherence: when resource data is stored in multiple local caches and shared by many clients, it may be rendered invalid when one of the clients changes the memory block. This is resolved by maintaining a consistent view of the data.

7. Master-Slave Multiprocessor

In a master-slave multiprocessor, one CPU works as the master while all the others work as slave processors. The master processor handles all the scheduling and I/O processing, while the slave processors handle the users' processes. Memory and input-output devices are shared among all the processors, and all the processors are connected to a common bus. This arrangement uses asymmetric multiprocessing to schedule processes.

8. Virtualization and Threading

Virtualization is the practice of running multiple operating systems on one computer system, so even a single CPU can act as a multiprocessor. This is achieved by having a host operating system and one or more guest operating systems.

Thread Scheduling
Scheduling of threads involves two levels of boundary scheduling:
1. Scheduling of user-level threads (ULTs) onto kernel-level threads (KLTs) via lightweight processes (LWPs), done by the application developer.
2. Scheduling of kernel-level threads by the system scheduler to perform the different OS functions.

Lightweight Process (LWP)

Lightweight processes are threads in the user space that act as an interface for the ULTs to access the physical CPU resources. The thread library schedules which thread of a process runs on which LWP and for how long. The number of LWPs created by the thread library depends on the type of application. In an I/O-bound application, the number of LWPs depends on the number of user-level threads: when an LWP blocks on an I/O operation, the thread library needs to create and schedule another LWP to run the other ULTs.

Thus, in an I/O-bound application, the number of LWPs equals the number of ULTs. In a CPU-bound application, it depends only on the application. Each LWP is attached to a separate kernel-level thread.

Two controls need to be specified for user-level threads: contention scope and allocation domain.

Contention Scope
The word contention here refers to the competition among user-level threads to access kernel resources. This control defines the extent to which contention takes place, and it is set by the application developer using the thread library.
• Process Contention Scope (PCS):
Contention takes place among threads within the same process. The thread library schedules the higher-priority PCS thread onto an available LWP (with the priority as specified by the application developer during thread creation).
• System Contention Scope (SCS):
Contention takes place among all threads in the system. In this case, every SCS thread is associated with its own LWP by the thread library, and the threads are scheduled by the system scheduler to access the kernel resources.
In Linux and UNIX operating systems, the POSIX Pthread library provides the function pthread_attr_setscope() to define the type of contention scope for a thread during its creation.

Allocation Domain
The allocation domain is a set of one or more resources for which a thread is competing. In a multicore system, there may be one or more allocation domains, each consisting of one or more cores. A ULT can be part of one or more allocation domains. Because of the high complexity of dealing with hardware and software architectural interfaces, this control is usually not specified; by default, the multicore system provides an interface that determines the allocation domain of a thread.

Real-Time CPU Scheduling

In real-time systems, the scheduler is considered the most important component; it is typically a short-term task scheduler.

1. Static table-driven approaches:
These algorithms perform a static analysis of the scheduling problem and capture a feasible schedule ahead of time. The resulting schedule determines, at run time, when each task must start executing.

2. Static priority-driven preemptive approaches:
Like the first approach, these algorithms also use a static analysis of scheduling. The difference is that instead of selecting a particular schedule, the analysis is used to assign priorities to tasks for preemptive scheduling.

3. Dynamic planning-based approaches:
Here, feasible schedules are identified dynamically (at run time). An arriving task is accepted for execution only if its time constraints can be satisfied.

4. Dynamic best-effort approaches:
These approaches consider deadlines rather than feasible schedules; a task is aborted if its deadline is missed. This approach is widely used in most real-time systems.

Advantages

• Meeting Timing Constraints: scheduling ensures that real-time tasks are executed within their specified timing constraints.
• Resource Optimization: scheduling algorithms allocate system resources effectively, ensuring efficient utilization of processor time, memory, and other resources. This helps maximize system throughput and performance.
• Priority-Based Execution: scheduling allows for priority-based execution, where higher-priority tasks take precedence over lower-priority tasks. This ensures that time-critical tasks are promptly executed, leading to improved system responsiveness and reliability.

Disadvantages
• Increased Complexity: real-time scheduling introduces additional complexity to system design and implementation. Developers need to carefully analyze task requirements, define priorities, and select suitable scheduling algorithms. This complexity can lead to increased development time and effort.
• Limited Resources: real-time systems often operate in resource-constrained environments. Scheduling tasks within these limits can be challenging, as the available resources may not be sufficient to meet all timing constraints or execute all tasks simultaneously.
• Verification and Validation: validating the correctness of real-time schedules and ensuring that all tasks meet their deadlines requires rigorous testing and verification techniques. Verifying timing constraints and guaranteeing the absence of timing errors can be a complex and time-consuming process.
