
CPU Scheduling

CPU scheduling is a critical aspect of operating systems that
involves determining which process should be executed by the
CPU at any given time. Efficient CPU scheduling maximizes
resource utilization and improves system responsiveness. This
unit covers basic scheduling concepts, a process overview,
process states, multiprogramming, the role of the scheduler,
various scheduling algorithms, and multiple processor
scheduling.
1. Basic Scheduling Concepts
CPU scheduling refers to the method by which the operating
system decides which of the processes in the ready state
should be allocated CPU time. Effective scheduling is
crucial for system performance and involves balancing
responsiveness, throughput, turnaround time, waiting
time, and resource utilization.
Key concepts in CPU scheduling include:
• Throughput: The number of processes completed in a
given time period. Higher throughput indicates better
performance.
• Turnaround Time: The total time taken from submission
of a process to its completion. It includes waiting time,
execution time, and any other delays.
• Waiting Time: The total time a process spends in the
ready queue before being executed.
• Response Time: The time from the submission of a
request until the first response is produced. This is
particularly important for interactive processes.
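As a concrete illustration, the four metrics above can be computed from a hypothetical single-CPU execution timeline (the arrival, start, completion, and burst times below are assumed values, not drawn from any real trace):

```python
# Hypothetical timeline for three processes on one CPU.
processes = [
    # (name, arrival, start of first run, completion, CPU burst)
    ("P1", 0, 0, 5, 5),
    ("P2", 1, 5, 9, 4),
    ("P3", 2, 9, 12, 3),
]

metrics = {}
for name, arrival, start, completion, burst in processes:
    turnaround = completion - arrival   # submission to completion
    waiting = turnaround - burst        # time spent in the ready queue
    response = start - arrival          # submission to first response
    metrics[name] = (turnaround, waiting, response)

# Throughput: processes completed per unit of elapsed time.
throughput = len(processes) / max(c for _, _, _, c, _ in processes)
```

For this timeline, P2 has a turnaround time of 8, a waiting time of 4, and a response time of 4, while throughput is 3 processes per 12 time units.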
2. Process Overview
A process is an instance of a program in execution and
consists of program code, current activity, and a set of
resources (such as memory and file handles). Each process
goes through several stages from creation to termination.
a) Process Components
• Program Code: The actual code being executed, often
referred to as the text section.
• Process Stack: Contains temporary data such as function
parameters, return addresses, and local variables.
• Data Section: Contains global variables.
• Heap: Dynamically allocated memory during process
execution.
3. Process States
A process can be in one of several states throughout its
lifecycle:
1. New: The process is being created.
2. Ready: The process is waiting to be assigned to a CPU.
3. Running: The process is currently being executed by the
CPU.
4. Waiting (Blocked): The process is waiting for an event to
occur (e.g., I/O completion).
5. Terminated: The process has finished execution.
The transitions between these states depend on various
events, such as process creation, I/O operations, and CPU
scheduling decisions.
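The five-state model above can be sketched as a table of allowed transitions; the state and event names here are illustrative rather than tied to any particular operating system:

```python
# A minimal sketch of the five-state process model.
# (state, event) -> next state; any pair not listed is invalid.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "timeout"): "ready",     # preempted by the scheduler
    ("running", "io_wait"): "waiting",   # blocks on an event, e.g. I/O
    ("waiting", "io_done"): "ready",
    ("running", "exit"): "terminated",
}

def transition(state, event):
    """Return the next state, or raise if the move is not allowed."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {state} on {event}")
```

Note that a waiting process cannot be dispatched directly: it must first return to the ready state when the event it is waiting for occurs.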
4. Multiprogramming
Multiprogramming is a technique that enables multiple
processes to reside in memory simultaneously. The
primary objective of multiprogramming is to maximize CPU
utilization by keeping it busy. When one process is waiting
for I/O operations, the CPU can switch to another process
that is ready to run.
This leads to the creation of a ready queue, which holds all
the processes that are ready to be executed. The operating
system uses a scheduler to select which process from the
ready queue will be allocated CPU time.

5. Scheduler and Scheduling Algorithms
The scheduler is the component of the operating system that
selects processes from the ready queue to be executed by
the CPU. There are various scheduling algorithms, each
with its advantages and disadvantages. These algorithms
determine the order in which processes are executed and
can be classified into two main categories: preemptive and
non-preemptive.
a) Preemptive Scheduling
In preemptive scheduling, a running process can be
interrupted and moved to the ready state to allow another
process to run. This is essential for ensuring
responsiveness in interactive systems. Common
preemptive scheduling algorithms include:
• Round Robin (RR): Each process is assigned a fixed time
slice (quantum). If the process does not finish execution
within that time, it is preempted and returned to the
end of the ready queue.
• Shortest Remaining Time First (SRTF): The process with
the shortest estimated remaining time is selected for
execution. This algorithm aims to minimize average
waiting time.
• Priority Scheduling: Each process is assigned a priority
level. The CPU is allocated to the process with the
highest priority, and a newly arrived higher-priority
process preempts the running one. In case of a tie, the
scheduler can use FCFS (First-Come, First-Served) as a
secondary criterion. (A non-preemptive variant of
priority scheduling also exists.)
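A minimal simulation of Round Robin, assuming all processes arrive at time 0 and ignoring context-switch overhead, shows how the quantum and the ready queue interact:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin for processes that all arrive at t = 0.
    bursts maps process name -> CPU burst; returns completion times."""
    queue = deque(bursts)              # ready queue of process names
    remaining = dict(bursts)
    clock, completion = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])  # at most one quantum
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = clock
        else:
            queue.append(name)         # preempted: back of the queue
    return completion
```

With bursts P1 = 5, P2 = 3, P3 = 1 and a quantum of 2, the short process P3 finishes at time 5 instead of having to wait for P1 and P2 to run to completion.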
b) Non-Preemptive Scheduling
In non-preemptive scheduling, once a process is allocated the
CPU, it cannot be interrupted until it voluntarily
relinquishes control (e.g., finishes execution or waits for
I/O). Common non-preemptive scheduling algorithms
include:
• First-Come, First-Served (FCFS): Processes are executed
in the order they arrive in the ready queue. This method
is simple but can lead to the convoy effect, where
shorter processes are delayed by longer ones.
• Shortest Job Next (SJN): The process with the smallest
execution time is selected for execution next. Like SRTF,
this method minimizes average waiting time but is non-
preemptive.
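The convoy effect can be shown numerically with the burst set {24, 3, 3}: running the long job first (FCFS arrival order) versus shortest-first (SJN order) changes the average waiting time dramatically. A small sketch, assuming all jobs arrive at time 0:

```python
def avg_waiting(bursts):
    """Average waiting time when jobs run back-to-back in the
    given order, all assumed to arrive at t = 0."""
    total_wait, elapsed = 0, 0
    for burst in bursts:
        total_wait += elapsed  # this job waited for every job before it
        elapsed += burst
    return total_wait / len(bursts)

fcfs_avg = avg_waiting([24, 3, 3])           # arrival order: long job first
sjn_avg = avg_waiting(sorted([24, 3, 3]))    # shortest jobs first
```

Here FCFS yields an average wait of 17 time units, while SJN yields only 3: the two short jobs no longer sit behind the long one.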

6. Multiple Processor Scheduling
With the advent of multi-core processors, CPU scheduling
must also decide how processes are distributed across
several processors. Multiple processor scheduling can be
classified into two categories:
a) Asymmetric Multiprocessing
In asymmetric multiprocessing, one processor (master)
controls the system and manages the scheduling of
processes across the other processors (slaves). The master
processor allocates tasks, and slave processors execute
them. This method simplifies scheduling but can create a
bottleneck at the master processor.
b) Symmetric Multiprocessing (SMP)
In symmetric multiprocessing, each processor has equal
access to the shared resources, and they can perform
scheduling independently. This approach increases system
efficiency and responsiveness. Common algorithms for
SMP include:
• Load Balancing: Distributing processes evenly across
processors to avoid overloading a single processor.
• Processor Affinity: Assigning processes to specific
processors to improve cache performance.
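A greedy least-loaded assignment is one simple way to sketch load balancing; the task names and costs below are hypothetical, and a real scheduler would also weigh processor affinity and migration cost:

```python
import heapq

def balance(tasks, n_cpus):
    """Assign each task to the currently least-loaded CPU
    (greedy load balancing). tasks is a list of (name, cost)."""
    loads = [(0, cpu) for cpu in range(n_cpus)]  # (load, cpu id) min-heap
    heapq.heapify(loads)
    placement = {}
    for name, cost in tasks:
        load, cpu = heapq.heappop(loads)         # least-loaded CPU
        placement[name] = cpu
        heapq.heappush(loads, (load + cost, cpu))
    return placement
```

For tasks A(4), B(3), C(2), D(1) on two CPUs, this spreads the work so that each CPU ends with a total load of 5.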
7. Conclusion
CPU scheduling is a vital component of operating systems
that directly affects system performance and user
experience. By understanding the basic scheduling
concepts, process states, and the various scheduling
algorithms, students of BCA can grasp how operating
systems manage processes effectively. The choice of
scheduling algorithm can significantly impact throughput,
response time, and overall system efficiency. In an
increasingly multi-core world, understanding multiple
processor scheduling is also crucial for designing efficient
and responsive systems.
