OSY Notes Vol 2 (4th Chapter) - Ur Engineering Friend

4. CPU Scheduling & Algorithms

Process

In the context of an operating system, a process can be defined as a program in execution. It is an instance of a computer program that is being executed or run by the operating system on a computer's central processing unit (CPU). A process represents a unit of work or a task that can be performed by the computer system.

Here are key characteristics and components of a process in an operating system:

1. Program Code: The executable instructions of the program to be executed are stored in the process.

2. Data: This includes variables, constants, and other data required by the program during execution.

3. Execution Context: It includes the current values of the CPU registers, program counter, and other relevant information that allows the program to resume execution from where it was interrupted.

4. Resources: Processes may require various resources such as memory, I/O devices, and file handles. The operating system manages and allocates these resources to processes as needed.

5. State: A process can be in various states during its execution lifecycle, including new, ready, running, blocked, and terminated. The operating system transitions processes between these states based on their execution progress and resource availability.

6. Scheduling Information: Information used by the operating system to determine the order and priority in which processes are executed on the CPU.

7. Process Control Block (PCB): The operating system maintains a data structure known as the PCB for each process. It contains all the information needed to manage and control the process, including process ID, process state, program counter, CPU registers, and more.

8. Parent-Child Relationship: A process may create new processes, resulting in a parent-child relationship. The parent process can communicate with and control its child processes.
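The PCB described in point 7 can be pictured as a simple record. The sketch below is a toy Python illustration, not a real kernel structure; all field names are hypothetical:

```python
from dataclasses import dataclass, field

# Illustrative Process Control Block. Real PCBs are kernel C structures;
# the field names here are made up for teaching purposes.
@dataclass
class PCB:
    pid: int                       # process ID
    state: str = "new"             # new / ready / running / blocked / terminated
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU register values
    priority: int = 0              # scheduling information
    open_files: list = field(default_factory=list)  # resources held
    children: list = field(default_factory=list)    # parent-child relationship

pcb = PCB(pid=1)        # a newly created process starts in the "new" state
pcb.state = "ready"     # the OS moves it to "ready" once it can be scheduled
print(pcb.pid, pcb.state)  # 1 ready
```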

Scheduling Categories

Preemptive Scheduling

Non-preemptive Scheduling

Preemptive and non-preemptive scheduling are two types of process scheduling mechanisms used by operating systems to manage the execution of multiple processes in a multitasking environment. These mechanisms determine how the operating system decides which process to run and for how long.

1. Preemptive Scheduling:

In preemptive scheduling, the operating system can interrupt a running process and allocate the CPU to another process. The scheduler decides when to preempt a running process based on certain criteria, such as priority or time slice. If a higher-priority process becomes available or the current process exceeds its allocated time slice, the operating system will interrupt the running process and schedule a new one.

Advantages:
 Enhances responsiveness and reduces latency for high-priority tasks.
 Allows for better utilization of system resources by swiftly responding to
new tasks or higher-priority processes.

Disadvantages:
 Requires careful handling to prevent race conditions and ensure data
consistency.
 Context switching (switching between processes) involves overhead,
potentially affecting system performance.

2. Non-preemptive (or Cooperative) Scheduling:

In non-preemptive scheduling, a process holds the CPU until it voluntarily relinquishes control or completes its execution. The operating system does not interrupt the running process to allocate the CPU to another process. The process itself decides when to release the CPU, typically at certain predefined points or after completing a specific task.

Advantages:

 Simpler to implement as it doesn't involve forced context switches.
 Easier to manage and reason about in terms of program execution flow.

Disadvantages:

 May lead to inefficiencies if a process doesn't release the CPU in a timely manner, causing delays for other processes.
 A misbehaving or long-running process can potentially block other processes from executing.

Scheduling Algorithms

Scheduling algorithms in operating systems determine the order in which processes are executed on the CPU. The goal is to optimize resource utilization, ensure fairness, minimize response time, and prioritize certain processes based on various criteria.

Here are some common scheduling algorithms:

1. First-Come-First-Served (FCFS):

 Processes are scheduled in the order they arrive in the ready queue.
 Simple and easy to implement, but may lead to longer waiting times for
shorter processes (convoy effect).

2. Shortest Job First (SJF):

 Processes are scheduled based on their burst time (the time taken to
complete their execution).
 Minimizes average waiting time and is optimal for minimizing waiting
time for the shortest processes.
 However, it requires knowledge of the burst times in advance, which
may not always be available.

3. Shortest Remaining Time First (SRTF):

 A preemptive version of SJF where a new process can preempt a running process if its remaining burst time is shorter.
 Minimizes average waiting time and improves response time for short
processes.
 May lead to high context switch overhead due to frequent preemptions.

4. Round Robin (RR):

 Each process is allocated a fixed time slice (quantum) in a cyclic order.
 Provides fair execution to all processes and prevents any single process from monopolizing the CPU.
 Average waiting time is often higher than with SJF, and performance depends heavily on the chosen quantum.

5. Priority Scheduling:

 Each process is assigned a priority, and the highest-priority process is scheduled first.
 Can be either preemptive or non-preemptive.
 May suffer from starvation (low-priority processes never getting CPU
time) if not implemented carefully.

6. Priority Aging:

 A variation of priority scheduling where the priority of a process increases as it waits in the ready queue.
 Prevents starvation by gradually increasing the priority of waiting
processes.

7. Multi-Level Queue (MLQ):

 Divides the ready queue into multiple separate queues with different
priorities.
 Each queue can have its own scheduling algorithm.
 Processes move between queues based on their characteristics or
system-defined criteria.

8. Multi-Level Feedback Queue (MLFQ):

 Similar to MLQ, but allows processes to move between different priority queues based on their behaviour.
 Higher-priority queues serve short CPU bursts and lower-priority queues serve longer CPU bursts.
 Prevents starvation through aging and allows for dynamic changes in process priorities.

9. Lottery Scheduling:

 Assigns processes "lottery tickets" based on some criteria (e.g., priority, CPU usage).
 The scheduler then randomly selects a ticket, and the process associated
with that ticket gets to run.

10. Guaranteed Scheduling:

 Provides a guaranteed minimum amount of CPU time to each process in a round-robin manner.
 Ensures that no process is starved of CPU time.
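Of the algorithms above, lottery scheduling (item 9) is simple enough to sketch directly. The following is an illustrative Python simulation; the process names, ticket counts, and iteration count are all made up:

```python
import random

def lottery_pick(tickets):
    """tickets: dict mapping a process name to its number of lottery tickets.
    Draws one ticket uniformly at random; more tickets mean a higher chance."""
    total = sum(tickets.values())
    draw = random.randrange(total)        # the winning ticket number
    for proc, count in tickets.items():
        if draw < count:
            return proc                   # this process holds the winning ticket
        draw -= count

random.seed(0)                            # fixed seed so the demo is repeatable
tickets = {"A": 50, "B": 30, "C": 20}
wins = {p: 0 for p in tickets}
for _ in range(10_000):
    wins[lottery_pick(tickets)] += 1
print(wins)   # counts roughly proportional to ticket shares (~5000/3000/2000)
```

Over many drawings each process receives CPU time in proportion to its ticket share, which is how lottery scheduling approximates proportional fairness.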

FCFS (First Come First Serve) -:

First-Come-First-Served (FCFS) is a simple and straightforward scheduling algorithm where processes are executed based on the order of their arrival in the ready queue. The process that arrives first is scheduled first, and subsequent processes are executed in the order they enter the ready queue.

Advantages :

1. Simplicity : FCFS is easy to understand and implement. It doesn't require complex data structures or calculations.

2. No Starvation : Every process gets a chance to execute, ensuring that no process is starved of CPU time indefinitely. It follows a fair order based on arrival time.

3. Works Well for CPU-bound Processes : FCFS suits processes that have long CPU bursts or are CPU-bound, as it allows each process to complete its execution without preemptions.

Disadvantages:

1. Convoy Effect : FCFS can suffer from the convoy effect, where shorter
processes are stuck behind a long-running process. This increases the average
waiting time, especially for shorter processes.

2. High Average Waiting Time : The average waiting time can be relatively
high, especially if shorter processes arrive after longer processes. This can lead
to suboptimal performance in terms of response time.

3. Inefficient for I/O-bound Processes : FCFS is not efficient for I/O-bound processes. If a process spends a significant amount of time waiting for I/O operations, other ready processes may get delayed unnecessarily.

4. Not Suitable for Real-Time Systems : FCFS is not suitable for real-time
systems where strict timing requirements need to be met. It doesn't prioritize
processes based on their urgency or importance.

Example -:
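A minimal FCFS simulation in Python (illustrative only; the process names and burst times are the classic textbook convoy-effect workload, chosen here for demonstration):

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), assumed sorted by arrival time.
    Returns {name: (waiting_time, turnaround_time)}."""
    time, result = 0, {}
    for name, arrival, burst in processes:
        start = max(time, arrival)        # CPU may sit idle until the process arrives
        waiting = start - arrival
        time = start + burst              # runs to completion: no preemption in FCFS
        result[name] = (waiting, time - arrival)
    return result

# Convoy effect: one long job (P1) arrives first and delays two short jobs.
procs = [("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]
times = fcfs(procs)
print(times)       # {'P1': (0, 24), 'P2': (24, 27), 'P3': (27, 30)}
avg_wait = sum(w for w, _ in times.values()) / len(times)
print(avg_wait)    # 17.0
```

If the two short jobs ran first, the average waiting time would drop to 3.0, which is exactly the inefficiency the convoy effect describes.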
Shortest Job First ( SJF )

The Shortest Job First (SJF) algorithm is a non-preemptive or preemptive scheduling policy used in operating systems for process scheduling. It schedules processes based on their burst time, which is the time required for a process to complete its execution.

Here's an overview of how SJF operates:

1. Non-Preemptive SJF:

 The operating system schedules the process with the shortest burst time
first.
 When a new process arrives, its burst time is compared with the
remaining burst times of other processes in the ready queue. The process
with the shortest burst time is selected to run next.

2. Preemptive SJF (also known as Shortest Remaining Time First - SRTF):

 If a new process arrives with a shorter burst time than the remaining
time of the currently running process, the operating system preempts the
running process and schedules the new process.
 The algorithm compares the remaining burst times of all processes and
selects the one with the shortest remaining burst time to run next.

Advantages of SJF:
1. Optimal Average Waiting Time : SJF provides the minimum average
waiting time among all scheduling algorithms. It gives priority to shorter jobs,
which minimizes the overall waiting time.

2. Efficient Utilization of CPU : When the process with the shortest burst time
is scheduled first, it optimizes CPU utilization by executing processes quickly.

3. High Throughput for Short Processes : Short processes complete execution swiftly, allowing the system to handle a higher number of processes in a given time frame.

Disadvantages of SJF:

1. Prediction of Burst Time : In non-preemptive SJF, accurate prediction of burst times for each process is challenging. If burst times are estimated inaccurately, it can lead to suboptimal scheduling.

2. Starvation for Long Processes : Longer processes may face starvation if many short processes keep arriving. The long processes will have to wait until all shorter processes complete.

3. Unrealistic Assumption : SJF assumes that the burst time of each process is
known in advance, which is not always the case in real-world scenarios.

Example -:
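A minimal non-preemptive SJF simulation in Python (illustrative only; burst times are assumed known in advance, which is the algorithm's big assumption, and the process data is made up):

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns {name: (waiting_time, turnaround_time)}."""
    remaining = sorted(processes, key=lambda p: p[1])   # order by arrival time
    time, result = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                     # CPU idles until the next arrival
            time = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst wins
        time += burst                     # runs to completion, no preemption
        result[name] = (time - arrival - burst, time - arrival)
        remaining.remove((name, arrival, burst))
    return result

procs = [("P1", 0, 6), ("P2", 0, 8), ("P3", 0, 7), ("P4", 0, 3)]
print(sjf(procs))  # {'P4': (0, 3), 'P1': (3, 9), 'P3': (9, 16), 'P2': (16, 24)}
```

The shortest job (P4) runs first and the longest (P2) runs last, giving an average waiting time of 7, which no other non-preemptive ordering of these four jobs can beat.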
Round Robin

Round Robin is one of the most important and widely used CPU scheduling algorithms. It is built around a Time Quantum (TQ): each time a process runs, the quantum is subtracted from its remaining burst time, so a chunk of the process is completed on every pass.

Time Sharing is the main emphasis of the algorithm. Each step of this algorithm
is carried out cyclically. The system defines a specific time slice, known as a
time quantum.

Round Robin scheduling is a widely used scheduling algorithm in operating systems and computer systems. It's a pre-emptive scheduling algorithm that treats each process or task in a circular queue, allowing each process to run for a fixed time slice or quantum before moving on to the next one in the queue. This ensures fair allocation of CPU time to all processes in the system, preventing any single process from monopolizing the CPU.

Here's how Round Robin scheduling works:

1. Initialization: Create a queue to hold the processes that are ready to execute.
Each process is assigned a fixed time quantum (time slice) that determines how
long it can run on the CPU before being moved to the back of the queue.

2. Arrival of Processes: As processes arrive or are created, they are added to the end of the queue.

3. Scheduling: The CPU scheduler selects the process at the front of the queue
to run on the CPU for the time quantum specified. If a process completes its
execution within the time quantum, it is removed from the queue. If it doesn't
finish, it's moved to the back of the queue, making room for the next process.

4. Execution: The selected process is allowed to execute on the CPU until its
time quantum expires or it voluntarily releases the CPU, such as when it
performs an I/O operation or finishes its execution.

Example -:
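The four steps above can be sketched as a small Python simulation (illustrative only; the process names, burst times, and quantum are made up, and all processes are assumed to arrive at time 0):

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst), all arriving at time 0.
    Returns the completion time of each process."""
    queue = deque(processes)              # step 1-2: ready queue in arrival order
    time, finish = 0, {}
    while queue:
        name, remaining = queue.popleft() # step 3: take the process at the front
        run = min(quantum, remaining)     # step 4: run for at most one quantum
        time += run
        if remaining - run == 0:
            finish[name] = time           # finished: leave the queue for good
        else:
            queue.append((name, remaining - run))  # unfinished: back of the queue
    return finish

print(round_robin([("P1", 10), ("P2", 5), ("P3", 8)], quantum=4))
# {'P2': 17, 'P3': 21, 'P1': 23}
```

Note how the short job P2 finishes well before the long job P1, even though P1 arrived first; that is the fairness property FCFS lacks.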

Priority Scheduling

Priority scheduling is a type of scheduling algorithm used in computer operating systems and multitasking systems to determine the order in which tasks or processes are executed based on their priority levels. Each task or process is assigned a priority, which indicates its relative importance or urgency.

In priority scheduling, higher-priority tasks are given preference and are scheduled to run before lower-priority tasks. If multiple tasks have the same priority, they are typically scheduled in a first-come-first-served (FCFS) or a round-robin manner.

Priority scheduling is widely used in real-time systems, where tasks have strict
timing requirements and need to be completed within specific deadlines. It helps
in efficiently managing resources and ensuring that critical tasks are given
higher precedence, ultimately enhancing system performance and
responsiveness.

Here are some advantages and disadvantages of priority scheduling:

Advantages:

1. Responsive to Priority: Priority scheduling ensures that high-priority tasks or processes are executed first, allowing critical tasks to be completed promptly. This can be crucial in real-time systems and environments where certain tasks must be given immediate attention.

2. Efficient Resource Utilization: It can lead to efficient utilization of resources as high-priority processes are scheduled and completed quickly. This can enhance the overall system performance and throughput.

3. Customizable Allocation: Priorities can be assigned based on system requirements, allowing for customization and adaptation to specific needs or applications. Different processes or tasks can be categorized into priority levels based on their importance.

4. Support for Multitasking: Priority scheduling facilitates multitasking by ensuring that time-critical or important tasks get sufficient CPU time, even in a system with numerous competing processes.

5. Flexibility in Task Execution: The scheduler can adapt to changes in
priorities dynamically. If a process's priority changes during its execution (e.g.,
due to a change in its importance), the scheduler can adjust the order of
execution accordingly.

Disadvantages:

1. Potential Starvation: Lower-priority processes may face starvation, where they receive very little or no CPU time if higher-priority processes constantly arrive. This can lead to a degradation in system performance and fairness.

2. Priority Inversion: Priority inversion can occur when a lower-priority task holds a resource that a higher-priority task needs. This can lead to a scenario where a higher-priority task is blocked by a lower-priority task, causing delays and affecting system responsiveness.

3. Complexity and Overhead: Implementing and managing a priority-based scheduling system can be complex. Assigning and managing priorities for various processes require additional overhead and administrative effort, which can impact the efficiency and simplicity of the scheduler.

4. Possibility of Process Starvation: If a process continually receives a low priority and there are always higher-priority processes in the system, that process may never get the chance to execute, causing process starvation and potential performance issues.

5. Risk of Priority Inversion Deadlocks: If not handled carefully, priority inversion can lead to deadlocks, where processes end up waiting indefinitely for resources, causing system lockups and an inability to proceed with further processing.
Example -:
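A minimal non-preemptive priority-scheduling sketch in Python (illustrative only; the lower-number-means-higher-priority convention is a common one but not universal, and all process data is made up):

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling; all processes arrive at time 0.
    processes: list of (name, burst, priority), lower number = higher priority.
    Returns {name: waiting_time}."""
    order = sorted(processes, key=lambda p: p[2])   # highest priority runs first
    time, waiting = 0, {}
    for name, burst, _prio in order:
        waiting[name] = time      # time spent waiting before this process started
        time += burst             # runs to completion once started
    return waiting

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 2)]
print(priority_schedule(procs))   # {'P2': 0, 'P4': 1, 'P1': 2, 'P3': 12}
```

The lowest-priority job P3 waits longest; if higher-priority jobs kept arriving, it would illustrate the starvation problem described above.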

Deadlock

A deadlock in operating system terms refers to a state where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process. This creates a circular waiting scenario where none of the processes can proceed, effectively halting the entire system.

Here are the key components involved in a deadlock:

1. Processes: Multiple processes are competing for resources like CPU time, memory, or input/output devices.

2. Resources: Resources can be either reusable (e.g., memory, CPU, files) or consumable (e.g., messages, signals).

3. Requests and Holds: A process can request a resource or hold a resource. If a process requests a resource that is currently held by another process, it may be forced to wait until the resource becomes available.

4. Circular Wait: A deadlock occurs when a set of processes are each waiting
for a resource acquired by one of the other processes, forming a circular chain
of dependencies.

Here's an example scenario to illustrate a deadlock:

- Process 1 holds Resource A and requests Resource B.

- Process 2 holds Resource B and requests Resource C.

- Process 3 holds Resource C and requests Resource A.

In this scenario, none of the processes can proceed because they are all waiting
for a resource held by another process. This forms a circular wait, leading to a
deadlock.

Preventing and resolving deadlocks is crucial in operating system design. Strategies to handle deadlocks include:

1. Deadlock Prevention: Ensure that at least one of the necessary conditions for deadlock (mutual exclusion, hold and wait, no preemption, and circular wait) is not satisfied.

2. Deadlock Avoidance: Use algorithms that carefully allocate resources to processes to avoid the possibility of deadlock. The classic example is the Banker's algorithm.

3. Deadlock Detection and Recovery: Allow the system to enter a deadlock state, then detect and recover from it by either killing processes or rolling back to a previous state.
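Deadlock detection can be illustrated with a simplified wait-for graph: if following the "waits on" edges ever revisits a process, there is a circular wait. The sketch below assumes each process waits on at most one other process, which is a simplification of real detection algorithms:

```python
def has_deadlock(wait_for):
    """wait_for: dict mapping each blocked process to the process it waits on
    (a simplified single-edge wait-for graph). A cycle means deadlock."""
    for start in wait_for:
        seen, node = set(), start
        while node in wait_for:      # follow the chain of waits from this process
            if node in seen:
                return True          # revisited a process: circular wait found
            seen.add(node)
            node = wait_for[node]
    return False

# The scenario above: P1 waits on P2's resource, P2 on P3's, P3 on P1's.
print(has_deadlock({"P1": "P2", "P2": "P3", "P3": "P1"}))  # True
print(has_deadlock({"P1": "P2", "P2": "P3"}))              # False (chain ends)
```

Breaking any one edge of the cycle, for example by preempting a resource or killing one process, is exactly what the recovery step in strategy 3 does.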

Assignment #4

1. Define preemptive and non-preemptive scheduling with the help of an example.

2. What is the need for scheduling algorithms?

3. List types of scheduling algorithms and explain any one of them.

4. Explain FCFS with the help of an example.

5. Explain SJF with the help of an example. (preemptive and non-preemptive)

6. Explain round robin with the help of an example.

7. Explain priority scheduling with the help of an example.

8. Explain deadlock.
