Operating System Chapter 4
CS-316
Ms Iqra Mehmood
UNIVERSITY OF JHANG
Process Scheduling
Process Scheduling is an OS task that schedules processes of different states like ready, waiting, and
running.
Process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process based on a particular strategy.
Process scheduling is an essential part of a Multiprogramming operating system. Such operating
systems allow more than one process to be loaded into the executable memory at a time and the loaded
process shares the CPU using time multiplexing.
Process scheduling allows the OS to allocate an interval of CPU time to each process. Another
important reason for using a process scheduling system is that it keeps the CPU busy at all times,
which helps minimize the response time of programs.
Categories of Scheduling
Scheduling falls into one of two categories:
Non-preemptive: In this case, resources cannot be taken away from a process before the process has
finished running. Resources are reassigned only when the running process terminates or transitions
to the waiting state.
Preemptive: In this case, the OS assigns resources to a process for a limited period. A process may
switch from the running state to the ready state, or from the waiting state to the ready state, while
resources are allocated. This switching happens because the CPU may give another process priority
and replace the currently active process with that higher-priority process.
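Preemptive time-slicing can be illustrated with a small round-robin sketch (the process names, burst times, and quantum below are invented for the example): a process whose CPU burst exceeds the quantum is preempted and moved back to the ready queue.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate preemptive round-robin scheduling.

    bursts: dict mapping process name -> total CPU burst time.
    Returns the order in which processes occupy the CPU.
    """
    ready = deque(bursts.items())
    order = []
    while ready:
        name, remaining = ready.popleft()
        order.append(name)
        if remaining > quantum:
            # Quantum expired: preempt and move back to the ready queue.
            ready.append((name, remaining - quantum))
        # Otherwise the process finishes within its quantum and leaves.
    return order

# Three processes with burst times 5, 2, and 3; quantum of 2 time units.
print(round_robin({"P1": 5, "P2": 2, "P3": 3}, 2))
# -> ['P1', 'P2', 'P3', 'P1', 'P3', 'P1']
```

Under a non-preemptive policy, by contrast, P1 would hold the CPU for its full 5 units before P2 ever ran.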
Two State Process Model
Two-state process models are:
Running State
Not Running State
Running
In the operating system, whenever a new process is created, it enters the system; when the process
is assigned the CPU, it is in the running state.
Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each
entry in the queue is a pointer to a specific process.
1. Job queue – Stores all the processes in the system.
2. Ready queue – Holds every process residing in main memory that is ready and waiting to
execute.
3. Device queues – Hold the processes that are blocked because of the absence of an I/O device.
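As a rough sketch (the process names and the residency limit of 3 are invented for illustration), the three queues can be modeled as FIFO structures holding process identifiers, with the long-term scheduler admitting jobs into memory and a running process moving to a device queue on an I/O request:

```python
from collections import deque

job_queue = deque(["P1", "P2", "P3", "P4"])  # every process in the system
ready_queue = deque()                         # in main memory, ready to run
device_queue = deque()                        # blocked, waiting on an I/O device

# The long-term scheduler admits jobs into main memory
# (assume there is room for 3 resident processes).
while job_queue and len(ready_queue) < 3:
    ready_queue.append(job_queue.popleft())

# A running process that requests I/O moves to the device queue.
running = ready_queue.popleft()
device_queue.append(running)

print(list(ready_queue), list(device_queue))
# -> ['P2', 'P3'] ['P1']
```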
In the above-given diagram:
1. Every new process is first put in the ready queue, where it waits until it is selected for
execution, or dispatched.
2. One of the processes is allocated the CPU and executes.
3. The executing process may issue an I/O request,
4. in which case it is placed in an I/O (device) queue.
5. The process may create a new subprocess
6. and wait for the subprocess's termination.
7. The process may be removed forcibly from the CPU as a result of an interrupt; once the
interrupt is handled, it is put back in the ready queue.
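The transitions above can be encoded as a small state machine, mapping a (current state, event) pair to the next state (the event names are invented labels, not OS API calls):

```python
# (current state, event) -> next state, following the queueing-diagram steps.
transitions = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "io_request"): "waiting",    # placed in an I/O (device) queue
    ("waiting", "io_complete"): "ready",
    ("running", "interrupt"): "ready",       # forcibly removed from the CPU
    ("running", "exit"): "terminated",
}

def step(state, event):
    """Return the next process state for a given event."""
    return transitions[(state, event)]

# A process is admitted, dispatched, issues I/O, then returns to ready.
s = "new"
for event in ["admit", "dispatch", "io_request", "io_complete"]:
    s = step(s, event)
print(s)  # -> ready
```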
1. Long-Term Scheduler
The long-term scheduler mainly controls the degree of multiprogramming. Its purpose is to choose a
balanced mix of I/O-bound and CPU-bound processes from among the jobs present in the pool.
If the job scheduler chooses mostly I/O-bound processes, then all of the jobs may sit in the blocked
state most of the time and the CPU will remain idle, which reduces the degree of
multiprogramming. The job of the long-term scheduler is therefore very critical and may affect the
system for a very long time.
2. Short-Term Scheduler
A scheduling algorithm is used to select which job is dispatched for execution. The job of the
short-term scheduler can be very critical in the sense that if it selects a job whose CPU burst time
is very high, then all the jobs after it will have to wait in the ready queue for a very long time.
This problem is called starvation, and it may arise if the short-term scheduler makes poor choices
while selecting jobs.
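The effect of selecting a long burst first can be seen by computing waiting times under simple first-come-first-served dispatch (the burst values below are invented for illustration):

```python
def waiting_times(bursts):
    """Waiting time of each job when dispatched first-come-first-served."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # each job waits for all bursts before it
        elapsed += burst
    return waits

# A 100-unit CPU burst selected first makes every later job wait behind it.
print(waiting_times([100, 2, 3]))  # -> [0, 100, 102]
print(waiting_times([2, 3, 100]))  # -> [0, 2, 5]
```

Running the short jobs first cuts their waiting times from over 100 time units to at most 5.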
The dispatcher is responsible for loading the process selected by the short-term scheduler onto the
CPU (moving it from the ready state to the running state). Context switching is done by the
dispatcher. A dispatcher does the following:
following:
Switching context.
Switching to user mode.
Jumping to the proper location in the newly loaded program.
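The three dispatcher steps can be sketched as follows (the PCB and CPU fields here are simplified stand-ins, not a real kernel structure):

```python
def dispatch(pcb, cpu):
    """Illustrative dispatcher sketch: load the selected process's saved
    context onto the CPU and resume it. All field names are assumptions."""
    cpu["registers"] = dict(pcb["registers"])  # 1. switch context
    cpu["mode"] = "user"                       # 2. switch to user mode
    cpu["pc"] = pcb["program_counter"]         # 3. jump into the program
    pcb["state"] = "running"

pcb = {"registers": {"r0": 7}, "program_counter": 1024, "state": "ready"}
cpu = {}
dispatch(pcb, cpu)
print(cpu["mode"], cpu["pc"], pcb["state"])  # -> user 1024 running
```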
Context Switching
Context switching is the mechanism of storing and restoring the state, or context, of the CPU in the
process control block, so that a process's execution can be resumed from the same point at a later
time. A context switcher makes it possible for multiple processes to share a single CPU.
A multitasking operating system must include context switching among its features.
When the scheduler switches the CPU from executing one process to another, the state of the
currently running process is saved into its process control block. The state used to set up the
program counter, registers, and so on for the process that will run next is then loaded from that
process's own PCB. After that, the second process can start executing.
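The save-then-restore sequence can be sketched like this (the PCB and CPU fields are simplified stand-ins for what a real kernel would save):

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Sketch of a context switch: save the running process's CPU state
    into its PCB, then restore the next process's state from its own PCB."""
    # Save the outgoing process's context.
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["program_counter"] = cpu["pc"]
    old_pcb["state"] = "ready"
    # Restore the incoming process's context.
    cpu["registers"] = dict(new_pcb["registers"])
    cpu["pc"] = new_pcb["program_counter"]
    new_pcb["state"] = "running"

cpu = {"registers": {"r0": 1}, "pc": 100}
p_old = {"registers": {}, "program_counter": 0, "state": "running"}
p_new = {"registers": {"r0": 9}, "program_counter": 200, "state": "ready"}
context_switch(cpu, p_old, p_new)
print(cpu["pc"], p_old["state"], p_new["state"])  # -> 200 ready running
```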
3. Medium-Term Scheduler
The medium-term scheduler takes care of swapped-out processes. If a process in the running state
needs some I/O time to complete, its state must change from running to waiting; the medium-term
scheduler is used for this purpose. It removes the process from memory to make room for other
processes. Such processes are the swapped-out processes, and this procedure is called swapping.
The medium-term scheduler is responsible for suspending and resuming processes.
Swapping reduces the degree of multiprogramming and is necessary to maintain a balanced mix of
processes in the ready queue.
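Suspend and resume can be sketched as moving a process identifier between an in-memory ready queue and a swapped-out set (the process names are invented; a real OS would move memory images, not strings):

```python
from collections import deque

def suspend(ready_queue, swapped):
    """Medium-term scheduler sketch: swap a process out of memory,
    reducing the degree of multiprogramming."""
    swapped.append(ready_queue.popleft())

def resume(ready_queue, swapped):
    """Swap a suspended process back into the ready queue."""
    ready_queue.append(swapped.pop(0))

ready_queue = deque(["P1", "P2", "P3"])
swapped_out = []   # suspended processes held on disk (sketch)

suspend(ready_queue, swapped_out)
print(list(ready_queue), swapped_out)  # -> ['P2', 'P3'] ['P1']
resume(ready_queue, swapped_out)
print(list(ready_queue), swapped_out)  # -> ['P2', 'P3', 'P1'] []
```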
Objectives of Process Scheduling
1. Efficiency: Ensure that system resources are utilized effectively and efficiently.
2. Fairness: Allocate resources fairly among competing processes to prevent any one process
from monopolizing the system.
3. Throughput: Maximize the number of processes completed per unit of time to enhance
overall system productivity.
4. Responsiveness: Provide quick response times to user requests and interactive processes for
a smooth user experience.
5. Resource Utilization: Optimize the utilization of CPU, memory, and other system
resources to prevent wastage.
6. Priority Management: Manage process priorities effectively to ensure that high-priority
tasks receive appropriate attention.
7. Predictability: Ensure predictable behavior of the system by scheduling processes in a
consistent and reliable manner.
8. Adaptability: Adapt scheduling strategies to varying workloads and system conditions to
maintain optimal performance.
9. Minimization of Latency: Minimize the waiting time for processes, reducing latency and
improving overall system responsiveness.
10. Load Balancing: Distribute the workload evenly across system resources to prevent
overloading of any single component.
11. Energy Efficiency: Implement scheduling algorithms that consider energy consumption to
promote sustainability and reduce costs.
12. Maximization of Throughput: Prioritize tasks that maximize overall system throughput
and minimize idle time.