
LECTURE 4

PROCESS MANAGEMENT: Part 2


Multiprogramming, Context Switching, Process Scheduling

Dr. Akshi Kumar


Topics for today…

● Multiprogramming

● Context Switching

● Process Scheduling

● Process Scheduling Queues

● Schedulers

● Process Scheduling Criteria


Uniprogramming and Utilization
• Early batch processing systems used a strategy known as uniprogramming.

• In uniprogramming, one program was started and run in full to completion; the
next job would start immediately after the first one finished.

• The problem with this approach is that programs consisted of both CPU
instructions and I/O operations.

• CPU instructions were very fast, as they consisted purely of electrical signals. I/O operations, however, were very slow (one example of an early I/O device was drum memory, a large magnetic device that had to be mechanically rotated for every data access).
Uniprogramming and Utilization
• If a program required many I/O operations, a lot of time was wasted while the CPU sat idle instead of executing instructions. We can quantify this effect using CPU utilization.

• In general terms, utilization can be defined mathematically as the actual usage of a resource divided by the potential usage.

• Utilization is reported as a unitless ratio, typically a percentage.

• For instance, if a system is only used for half of the time that it could be, we
would say that it experienced 50% utilization.

• In regard to CPU time, utilization is calculated as the time the CPU spends executing instructions divided by the total elapsed time:

  CPU utilization = (CPU busy time) / (total elapsed time)

Uniprogramming and Utilization
Consider the following timeline illustration for three sequential uniprogramming
processes.

The green regions indicate times when the CPU is executing instructions in the program, while the yellow regions indicate times when the CPU is idle, waiting on an I/O operation to complete.
Uniprogramming and Utilization
The following table summarizes the time each process spends executing on the
CPU or waiting for I/O:

In this scenario, the CPU was used for a total of 15 out of 25 possible seconds, so the system experienced 60% CPU utilization (15 s / 25 s = 0.60) when running these three jobs.

As the preceding example illustrates, a significant amount of system resources can be wasted by waiting on I/O operations to complete.
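
As a rough sketch of this arithmetic (the figures are the 15 busy seconds and 25 total seconds from the example above; the code itself is only illustrative and not part of the lecture), the calculation can be expressed in C as:

#include <stdio.h>

/* CPU utilization = time the CPU spent executing / total elapsed time. */
static double cpu_utilization(double busy_seconds, double total_seconds)
{
    return busy_seconds / total_seconds;
}

int main(void)
{
    /* 15 s of CPU work over a 25 s run, as in the uniprogramming example. */
    double u = cpu_utilization(15.0, 25.0);
    printf("CPU utilization: %.0f%%\n", u * 100.0);   /* prints 60% */
    return 0;
}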
Multiprogramming and Concurrency
• In multiprogramming (and multitasking), several processes are all loaded
into memory and available to run.

• Whenever a process initiates an I/O operation, the kernel selects a different process to run on the CPU.

• This approach allows the kernel to keep the CPU active and performing work
as much as possible, thereby reducing the amount of wasted time.

• By reducing this waste, multiprogramming allows all programs to finish sooner than they would otherwise.
Multiprogramming and Concurrency
Consider the following timeline illustration for the same three processes from the previous example, but in a multiprogramming environment.

As before, the green regions indicate CPU execution and the yellow indicates I/O
operations. However, note that processes B and C can run while A is waiting on
its I/O operation. Similarly, A and C execute while B is waiting on I/O operations.
As a result, the CPU is only completely idle while C’s I/O operation is performed
at time 15, because A and B have already run to completion.
Multiprogramming and Concurrency
• In our revised CPU utilization calculation, the numerator does not change
because the total amount of CPU execution time has not changed.

• Only the denominator changes, to account for the reduced time wasted
waiting on A’s and B’s I/O operations.
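
As a worked illustration (the exact completion time depends on the timeline figure, so the denominator used here is only an assumed value): if the same 15 seconds of CPU work now finish by, say, time 17 rather than time 25, utilization rises from 15/25 = 60% to 15/17 ≈ 88%.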
Multiprogramming vs Uniprogramming
Implementing Multiprogramming
There are two forms of multiprogramming that have been implemented:
• Preemptive multitasking
• Cooperative Multitasking
Preemptive multitasking:
• Preemptive multitasking is the most common technique; processes are given a maximum amount of time to run.
• This amount of time is called a quantum, typically measured in milliseconds.
• In preemptive multitasking, if a process issues an I/O request before its quantum has expired, the kernel will simply switch to another process early.
• However, if the quantum has expired (i.e., the time limit has been reached), the kernel will preempt the current process and switch to another.
• In short, it is a multiprogramming strategy in which processes are granted time-limited access to the CPU and are interrupted when that time limit expires.
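
A minimal round-robin-style simulation of this idea is sketched below, assuming three hypothetical processes with made-up CPU demands and a quantum of 4 time units; a real kernel enforces the quantum with timer interrupts rather than a loop like this.

#include <stdio.h>

#define QUANTUM 4                            /* assumed time limit per turn */

int main(void)
{
    int remaining[3] = {10, 3, 7};           /* hypothetical CPU demand of P0..P2 */
    int finished = 0;

    while (finished < 3) {
        for (int p = 0; p < 3; p++) {
            if (remaining[p] == 0)
                continue;                    /* this process is already done */
            int run = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
            remaining[p] -= run;             /* run for at most one quantum */
            printf("P%d ran %d units%s\n", p, run,
                   remaining[p] == 0 ? " (finished)" : " (preempted)");
            if (remaining[p] == 0)
                finished++;
        }
    }
    return 0;
}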
Implementing Multiprogramming
• Cooperative multitasking
• In contrast, with cooperative multitasking, a process can run for as long as it wants until it voluntarily relinquishes control of the CPU or initiates an I/O request.
• Cooperative multitasking has a number of advantages, including its simplicity of design and implementation.
• Furthermore, if all processes are very interactive (meaning they perform many I/O operations), it can have low overhead costs.
• However, cooperative multitasking is vulnerable to rogue processes that dominate the CPU time. For instance, if a process goes into an infinite loop that does not perform any I/O operation, it will never surrender control of the CPU and all other processes are blocked from running.
• As a result, modern systems favour preemptive multitasking.
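
The contrast can be seen in a toy cooperative sketch like the one below (illustrative only; a real cooperative system switches between saved task contexts rather than plain function calls). Each task voluntarily gives up the CPU by returning; a task that looped forever inside its function would starve the other, which is exactly the rogue-process problem described above.

#include <stdio.h>

static int work_a = 3, work_b = 2;           /* made-up amounts of work */

/* Each task does one small step and then returns, i.e. yields voluntarily.
 * The return value says whether the task still has work left. */
static int task_a(void) { printf("A working\n"); return --work_a > 0; }
static int task_b(void) { printf("B working\n"); return --work_b > 0; }

int main(void)
{
    int a_alive = 1, b_alive = 1;
    while (a_alive || b_alive) {             /* simple cooperative loop */
        if (a_alive) a_alive = task_a();
        if (b_alive) b_alive = task_b();
    }
    return 0;
}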
Topics for today…

● Multiprogramming

● Context Switching

● Process Scheduling

● Process Scheduling Queues

● Schedulers

● Process Scheduling Criteria


Context Switching
• Multiprogramming → applications are not actually executing code at the same time; rather, the kernel is just switching back and forth between them so quickly that users cannot notice.

• Context Switching involves storing the context or state of a process so that it can be reloaded
when required and execution can be resumed from the same point as earlier.

• This is a feature of a multitasking operating system and allows a single CPU to be shared by
multiple processes.

• Although context switches are critical for multiprogramming, they also introduce complexity
and overhead costs in terms of wasted time.
• First, when the kernel determines that it needs to perform a context switch, it must decide which process it will now make active; this choice is performed by a scheduling routine that takes time to run.
• Second, context switches introduce delays related to the memory hierarchy.
Steps in Context Switching
There are several steps involved in context switching between processes. The following diagram represents a context switch from process P1 to process P2, triggered when an interrupt occurs, an I/O request is issued, or a higher-priority process arrives in the ready queue.
Steps in Context Switching
• As we can see in the diagram, initially, the P1
process is running on the CPU to execute its task,
and at the same time, another process, P2, is in
the ready state.
• If an error or interrupt occurs, or the process requires input/output, process P1 switches from the running state to the waiting state.
• Before changing the state of process P1, the context switch saves the context of P1 (its registers and program counter) to PCB1.
• After that, it loads the saved state of process P2 from PCB2 and moves P2 into the running state.
Steps in Context Switching
1. First, the context switch saves the state of process P1, in the form of the program counter and the registers, to its PCB (Process Control Block); P1 is currently in the running state.

2. Next, PCB1 is updated and process P1 is moved to the appropriate queue, such as the ready queue, the I/O queue, or the waiting queue.

3. After that, another process is selected to enter the running state: a new process is chosen from the ready queue, for example the one with the highest priority.
Steps in Context Switching
4. Now, the PCB (Process Control Block) of the selected process P2 is updated. This includes switching its process state to running, whether it was previously in the ready state or in another state such as blocked or suspended.
5. If the CPU has executed process P2 before, its saved status must be restored so that it resumes execution at the exact point where it was interrupted.
Similarly, when process P2 is later switched off the CPU, process P1 can resume execution: P1 is reloaded from PCB1 into the running state and continues its task at the same point. Without this saved context, the information would be lost, and when the process ran again it would have to start from the beginning.
Context Switching Steps
The steps involved in context switching are as follows −
• Save the context of the process that is currently running on the CPU. Update
the process control block and other important fields.
• Move the process control block of the above process into the relevant queue
such as the ready queue, I/O queue etc.
• Select a new process for execution.
• Update the process control block of the selected process. This includes
updating the process state to running.
• Update the memory management data structures as required.
• Restore the context of the process that was previously running when it is
loaded again on the processor. This is done by loading the previous values of
the process control block and registers.
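
The bookkeeping in these steps can be sketched in C roughly as follows, with a hypothetical, much-simplified PCB; in a real kernel the register save/restore is done in assembly and many more fields are involved.

#include <stdint.h>

enum proc_state { READY, RUNNING, WAITING };

struct cpu_context {                /* the saved context: registers + program counter */
    uint64_t regs[16];
    uint64_t program_counter;
};

struct pcb {                        /* Process Control Block (simplified) */
    int pid;
    enum proc_state state;
    struct cpu_context ctx;
};

/* Save the context of the process leaving the CPU, update both PCBs,
 * and restore the context of the process selected to run next. */
static void context_switch(struct pcb *old, struct pcb *next,
                           struct cpu_context *cpu)   /* current CPU registers */
{
    old->ctx = *cpu;                /* save context of the old process */
    old->state = READY;             /* update its PCB (it would also be queued) */
    next->state = RUNNING;          /* mark the selected process as running */
    *cpu = next->ctx;               /* restore its previously saved context */
}

int main(void)
{
    struct cpu_context cpu = {{0}, 0};
    struct pcb p1 = {1, RUNNING, {{0}, 0}}, p2 = {2, READY, {{0}, 100}};
    context_switch(&p1, &p2, &cpu); /* P1 leaves the CPU, P2 resumes at its saved PC */
    return 0;
}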
Context Switching Triggers
There are three major triggers for context switching. These are given as follows −

• Multitasking: In a multitasking environment, a process is switched out of the CPU so another process can be run. The state of the old process is saved and the state of the new process is loaded. On a pre-emptive system, processes may be switched out by the scheduler.

• Interrupt Handling: The hardware switches a part of the context when an interrupt occurs. This happens automatically. Only some of the context is changed to minimize the time required to handle the interrupt.

• User and Kernel Mode Switching: A context switch may take place when a
transition between the user mode and kernel mode is required in the OS.
Context Switching

Context switching gives the user the impression that the system has multiple CPUs, because multiple processes appear to execute at the same time.

https://drive.google.com/file/d/1OAaGxG-ntm7ruONNSAFtw7s9BUlZWZu2/view?usp=share_link
Topics for today…

● Multiprogramming

● Context Switching

● Process Scheduling

● Process Scheduling Queues

● Schedulers

● Process Scheduling Criteria


Process Scheduling
● Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.

● The goal of a multiprogramming system is to keep the CPU busy at all times, that is, to maximize CPU use.

● For a time-sharing system, the goal is to switch between user processes so that each user can interact with the process while it runs.

● To meet these goals, a process scheduler selects, from among the available processes, the next one to execute on the CPU.
Categories of Scheduling
There are two categories of scheduling:

1. Non-preemptive: Here the resource (the CPU) cannot be taken away from a process until the process completes execution. The switch occurs only when the running process terminates or moves to a waiting state, i.e., the CPU is given up VOLUNTARILY.

2. Preemptive: Here the OS allocates the CPU to a process for a fixed amount of time. The process may be switched from the running state to the ready state, or from the waiting state to the ready state, because the CPU may be given to a higher-priority process that replaces the running one, i.e., the CPU is taken away by FORCE.
Preemptive vs. Non-preemptive Scheduling
Topics for today…
● Multiprogramming

● Context Switching

● Process Scheduling

● Process Scheduling Queues

● Schedulers

● Process Scheduling Criteria


Process Scheduling Queues
● Process scheduling queues maintain a distinct queue for each process state, holding the PCBs of the processes in that state.

● All processes in the same execution state are placed in the same queue. Therefore, whenever the state of a process changes, its PCB must be unlinked from its current queue and linked into the queue for the new state.

● Three types of operating system queues are:


● Job queue – set of all processes in the system
● Ready queue – set of all processes residing in main memory, ready and
waiting to execute
● Device queues – set of processes waiting for an I/O device
● Processes migrate among the various queues
Queuing Diagram
• A queueing diagram represents queues, resources, and flows.

In the diagram:

• A rectangle represents a queue.
• A circle denotes a resource.
• An arrow indicates the flow of a process.

Video: https://youtu.be/THqcAa1bbFU
Process Scheduling Queues
1. Every new process is first put in the ready queue, where it waits until it is selected for execution, or dispatched.
2. One of the processes is allocated the CPU and starts executing.
3. The process may issue an I/O request;
4. it is then placed in an I/O queue until the I/O completes.
5. The process may create a new subprocess
6. and wait for its termination.
7. Or the process may be removed forcibly from the CPU as a result of an interrupt; once the interrupt has been handled, it is put back in the ready queue.
The OS can use different policies to manage each queue (FIFO, round robin, priority, etc.); these are covered in next week's lecture.
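
A small sketch of these queues as linked lists of PCBs is shown below (the structures and the scenario are hypothetical; a real OS keeps one queue per device and far richer PCBs). It walks a process through the flow above: ready queue, dispatch, an I/O request, and back to the ready queue.

#include <stdio.h>
#include <stddef.h>

struct pcb {
    int pid;
    struct pcb *next;               /* link to the next PCB in its current queue */
};

struct queue { struct pcb *head, *tail; };

static void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (q->head == NULL) q->tail = NULL;
    }
    return p;
}

int main(void)
{
    struct queue ready = {NULL, NULL}, io = {NULL, NULL};
    struct pcb p1 = {1, NULL}, p2 = {2, NULL};

    enqueue(&ready, &p1);                     /* new processes enter the ready queue */
    enqueue(&ready, &p2);

    struct pcb *running = dequeue(&ready);    /* dispatched: ready -> running */
    printf("P%d is running\n", running->pid);

    enqueue(&io, running);                    /* issues an I/O request: -> I/O queue */
    enqueue(&ready, dequeue(&io));            /* I/O completes: back to the ready queue */
    printf("P%d is ready again\n", p1.pid);
    return 0;
}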
Topics for today…

● Multiprogramming

● Context Switching

● Process Scheduling

● Process Scheduling Queues

● Schedulers

● Process Scheduling Criteria


Types of Schedulers in OS
● A scheduler is system software that handles process scheduling.

● There are mainly three types of Process Schedulers:


1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler
Types of Schedulers in OS
● Long-term scheduler (or job scheduler) – selects which processes should be
brought into the ready queue
● That is, it chooses the processes from the pool (secondary memory) and
keeps them in the ready queue maintained in the primary memory
● The long-term scheduler is invoked infrequently (on the order of seconds or minutes) → it may be slow
● The long-term scheduler controls the degree of multiprogramming
● It provides a balanced mix of jobs, such as I/O-bound and processor-bound processes
● Processes can be described as either:
• I/O-bound process – spends more time doing I/O than computations, many short CPU
bursts
• CPU-bound process – spends more time doing computations; few very long CPU
bursts
● Long-term scheduler strives for good process mix
Types of Schedulers in OS
● Short-term scheduler (or CPU scheduler) – selects which process should
be executed next and allocates CPU
● The main goal of this scheduler is to boost the system performance
according to set criteria.
● It selects from among the processes that are ready to execute and allocates the CPU to one of them.
● The short-term scheduler is invoked frequently (on the order of milliseconds) → it must be fast
● The dispatcher gives control of the CPU to the process selected by the short-
term scheduler.
Types of Schedulers in OS
● The dispatcher is responsible for loading the
process selected by the Short-term scheduler
on the CPU (Ready to Running State)
● Context switching is done by the dispatcher
only.
● Dispatch latency – time it takes for the
dispatcher to stop one process and start
another running
● A dispatcher does the following:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the
newly loaded program.
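
A conceptual sketch of those three steps is given below. The helper functions are hypothetical stand-ins: in a real kernel the context switch, the switch to user mode, and the jump into the program are privileged, hardware-level operations, not ordinary C calls.

#include <stdio.h>

struct pcb { int pid; };                      /* simplified process control block */

static void switch_context(struct pcb *from, struct pcb *to)
{
    printf("saving P%d, loading P%d\n", from->pid, to->pid);      /* step 1 */
}

static void switch_to_user_mode(void)
{
    printf("switching the CPU to user mode\n");                   /* step 2 */
}

static void jump_to_user_program(struct pcb *p)
{
    printf("jumping to P%d's saved program counter\n", p->pid);   /* step 3 */
}

/* Dispatch latency is the time this whole routine takes to run. */
static void dispatch(struct pcb *current, struct pcb *selected)
{
    switch_context(current, selected);
    switch_to_user_mode();
    jump_to_user_program(selected);
}

int main(void)
{
    struct pcb p1 = {1}, p2 = {2};
    dispatch(&p1, &p2);               /* short-term scheduler chose P2; dispatch it */
    return 0;
}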
Types of Schedulers in OS
● A medium-term scheduler can be added if the degree of multiprogramming needs to decrease
• It removes a process from memory, stores it on disk, and later brings it back from disk to continue execution: this is called swapping.
• It is responsible for suspending and resuming processes.
• It mainly does swapping (moving processes from main memory to disk and vice versa).
Schedulers in Queueing Diagram
Comparison of Schedulers
Topics for today…

● Multiprogramming

● Context Switching

● Process Scheduling

● Process Scheduling Queues

● Schedulers

● Process Scheduling Criteria


Process Scheduling Criteria
Before we understand Scheduling Criteria, let’s glance through some
important vocabulary:

● CPU utilization – keep the CPU as busy as possible


● Throughput – # of processes that complete their execution per time unit
● Turnaround time – amount of time to execute a particular process
● Waiting time – amount of time a process has been waiting in the ready
queue
● Response time – amount of time from when a request is submitted until the first response is produced (not until the output is complete); used for time-sharing environments
Process Scheduling Criteria
● Max CPU utilization
● Max throughput
● Min turnaround time
● Min waiting time
● Min response time
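
To make these terms concrete, the sketch below computes turnaround and waiting times for three hypothetical processes (all arrival, burst, and completion figures are made up, consistent with simple first-come-first-served execution): turnaround = completion - arrival, and waiting = turnaround - CPU burst.

#include <stdio.h>

struct proc { int arrival, burst, completion; };   /* times in arbitrary units */

int main(void)
{
    /* Hypothetical data: P1 runs 0-5, P2 runs 5-8, P3 runs 8-12. */
    struct proc p[3] = { {0, 5, 5}, {1, 3, 8}, {2, 4, 12} };
    double total_turnaround = 0.0, total_waiting = 0.0;

    for (int i = 0; i < 3; i++) {
        int turnaround = p[i].completion - p[i].arrival;  /* time to execute the process */
        int waiting    = turnaround - p[i].burst;         /* time spent in the ready queue */
        total_turnaround += turnaround;
        total_waiting    += waiting;
        printf("P%d: turnaround=%d waiting=%d\n", i + 1, turnaround, waiting);
    }
    printf("average turnaround=%.2f average waiting=%.2f\n",
           total_turnaround / 3.0, total_waiting / 3.0);
    /* Throughput over this run: 3 processes / 12 time units = 0.25 per unit. */
    return 0;
}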

Video on Process Scheduling: https://youtu.be/TLoDBLxOiXY


Next lecture…
● CPU scheduler and Dispatcher
● Necessary CPU scheduling terminologies
● Types of CPU Scheduling
● CPU Scheduling Algorithms (SIX)
