
Chapter 4

CPU Scheduling
CPU Scheduling
• CPU scheduling is a process used by the operating system (OS) to
allocate the CPU (processor) to various processes running in a
computer system. Since the CPU can execute only one process at a
time (in most systems), scheduling is critical for efficiently managing
the CPU and ensuring fair and optimal performance of all processes.
• CPU Scheduler:
• A component of the OS that decides which process in the Ready
Queue should execute next.
CPU-I/O Burst Cycle
• The CPU-I/O Burst Cycle is a fundamental concept in process
execution that describes how a process alternates between using the
CPU and performing I/O operations. This cycle is key to understanding
process scheduling in operating systems.
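• As a rough illustration (hypothetical durations, not from the slides), a process can be modelled as a sequence of alternating CPU and I/O bursts, ending with a final CPU burst:

    # A process alternates CPU bursts and I/O bursts until it terminates.
    # Durations (in ms) are made up purely for illustration.
    process_bursts = [
        ("CPU", 5), ("I/O", 12),   # compute, then wait for the disk or network
        ("CPU", 3), ("I/O", 8),
        ("CPU", 4),                # final CPU burst, then the process exits
    ]

    for kind, duration in process_bursts:
        print(f"{kind} burst for {duration} ms")

• While a process sits in an I/O burst, the CPU is free, which is exactly when the scheduler can pick another process from the ready queue.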
Preemptive Scheduling
• Preemptive Scheduling:
• The CPU can be taken away from a running process and assigned to another, higher-priority process.
• When it Occurs: During events such as:
• Arrival of a higher-priority process.
• Time slice expiration in time-sharing systems.
• When a process transitions from waiting to ready.
• Examples of Preemptive Algorithms (see the Round Robin sketch after this list):
• Round Robin (RR)
• Shortest Remaining Time First (SRTF)
• Priority Scheduling (preemptive version)
• Advantages:
• Better for time-sharing and real-time systems.
• Reduces the average response time for shorter processes.
• Ensures high-priority tasks are executed promptly.
• Disadvantages:
• Overhead due to context switching.
• Can lead to starvation of lower-priority processes if not managed properly.
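• A minimal Round Robin sketch in Python, assuming a fixed time quantum and made-up process names and burst times (purely illustrative, not from the slides):

    from collections import deque

    # (process name, remaining CPU burst in ms) -- hypothetical workload
    ready_queue = deque([("P1", 10), ("P2", 4), ("P3", 7)])
    QUANTUM = 3  # time slice; when it expires, the running process is preempted

    time = 0
    while ready_queue:
        name, remaining = ready_queue.popleft()
        run = min(QUANTUM, remaining)              # run for at most one quantum
        time += run
        remaining -= run
        if remaining > 0:
            ready_queue.append((name, remaining))  # preempted: back of the ready queue
            print(f"t={time:2d}: {name} preempted, {remaining} ms left")
        else:
            print(f"t={time:2d}: {name} finished")

• Re-queuing a process whose quantum expired is exactly the preemption described above: an unfinished process loses the CPU and rejoins the ready queue.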
Non-Preemptive Scheduling
• Non-Preemptive Scheduling:
• A running process cannot be interrupted and must finish its CPU burst before another process can execute.
• When it Occurs:
• When processes naturally complete their CPU burst.
• No forceful interruptions.
• Examples of Non-Preemptive Algorithms (see the SJN sketch after this list):
• First-Come, First-Served (FCFS)
• Shortest Job Next (SJN)
• Priority Scheduling (non-preemptive version)
• Advantages:
• Simpler implementation.
• No overhead of context switching during execution.
• More predictable, as the process runs to completion.
• Disadvantages:
• Poor responsiveness for high-priority tasks.
• Can lead to longer average waiting times.
• Less suitable for interactive or real-time systems.
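• A minimal non-preemptive Shortest Job Next sketch in Python, assuming all processes arrive at time 0 and using made-up burst times (illustrative only):

    # (process name, CPU burst in ms) -- hypothetical jobs, all arriving at t = 0
    jobs = [("P1", 6), ("P2", 2), ("P3", 8), ("P4", 3)]

    # SJN picks the shortest burst first; once started, a job runs to completion.
    time = 0
    for name, burst in sorted(jobs, key=lambda job: job[1]):
        print(f"t={time:2d}: {name} starts (waited {time} ms), runs {burst} ms")
        time += burst

• Because nothing is ever interrupted, the only scheduling decision here is the order in which jobs start.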
Dispatcher
• A dispatcher in an Operating System (OS) is a component of the CPU scheduler responsible for
assigning processes to the CPU for execution. It works as a bridge between the scheduling
mechanism and the execution of processes. The dispatcher is activated once the CPU scheduler
has selected a process from the ready queue. Its main job is to switch the CPU's focus from one
process to another efficiently and effectively.
• Dispatch Latency: The time taken by the dispatcher to stop one process and start another. Lower
dispatch latency ensures smoother multitasking and higher CPU utilization.
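• A conceptual sketch (toy Python, not real OS code) of the dispatcher's job: save the outgoing process's context, restore the incoming one's, and time a toy dispatch latency. All fields and values are illustrative assumptions:

    import time

    # Toy "contexts": in a real OS these would be CPU registers, the program
    # counter, and memory-management state stored in each process control block.
    contexts = {"P1": {"pc": 104, "regs": [1, 2, 3]},
                "P2": {"pc": 530, "regs": [7, 0, 9]}}

    def dispatch(outgoing, incoming, cpu_state):
        start = time.perf_counter()
        contexts[outgoing] = cpu_state         # save the state of the old process
        new_state = contexts[incoming]         # restore the state of the new process
        latency = time.perf_counter() - start  # (toy) dispatch latency
        print(f"switched {outgoing} -> {incoming}, latency {latency * 1e6:.1f} us")
        return new_state

    cpu_state = contexts["P1"]
    cpu_state = dispatch("P1", "P2", cpu_state)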
Scheduling Criteria
• Scheduling criteria in an operating system are the metrics used to
evaluate and compare scheduling algorithms. These criteria help
ensure that system resources are utilized effectively and user
requirements are met. Here are some key scheduling criteria:
1. CPU Utilization
• Definition: The percentage of time the CPU is actively executing
processes.
• Goal: Maximize CPU utilization to reduce idle time.
• Ideal State: Close to 100% in a time-shared system.
2. Throughput
• Definition: The number of processes completed per unit of time.
• Goal: Maximize throughput to ensure the system handles a high
workload efficiently.
• Especially Important For: Batch-processing systems or systems with a high task volume.
3. Turnaround Time
• Definition: The total time taken for a process to complete, from
submission to completion.
• Turnaround Time = Completion Time − Arrival Time
• Goal: Minimize turnaround time for a better user experience.
4. Waiting Time
• Definition: The time a process spends in the ready queue waiting for
CPU allocation.
• Waiting Time = Turnaround Time − Burst Time
• Goal: Minimize waiting time to reduce delays for processes.
5. Response Time
• Definition: The time from when a process is submitted until its first
response.
• Response Time = First Response Time − Arrival Time
• Goal: Minimize response time to make the system more interactive.
• Important For: Interactive systems, like real-time applications.
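• A small worked example applying the formulas above, with hypothetical times for a single process (all values in ms):

    # Hypothetical timings for one process, in ms.
    arrival_time    = 0
    burst_time      = 5    # total CPU time the process needs
    first_response  = 2    # first moment the process gets the CPU
    completion_time = 12

    turnaround_time = completion_time - arrival_time   # 12
    waiting_time    = turnaround_time - burst_time     # 7
    response_time   = first_response - arrival_time    # 2

    print(turnaround_time, waiting_time, response_time)  # 12 7 2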
Scheduling Algorithms
1. First-Come, First-Served (FCFS)
• Type: Non-Preemptive
• Description: Processes are executed in the order they arrive in the ready queue (see the sketch after this list).
• Advantages:
• Simple and easy to implement.
• No context switching overhead.
• Disadvantages:
• Can cause the convoy effect (a long process delays all others).
• Poor average waiting time for short processes.
• Use Case: Batch systems.
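• A minimal FCFS sketch in Python, assuming all processes arrive at time 0 in the order listed, with made-up burst times; a long first job illustrates the convoy effect:

    # Processes in arrival order; a long first job delays everyone (convoy effect).
    processes = [("P1", 24), ("P2", 3), ("P3", 3)]   # hypothetical burst times (ms)

    time = 0
    total_wait = 0
    for name, burst in processes:
        total_wait += time                 # each process waits for all earlier ones
        print(f"{name}: waits {time} ms, runs {burst} ms")
        time += burst

    print(f"average waiting time = {total_wait / len(processes):.1f} ms")  # 17.0

• Reordering the same jobs shortest-first would cut the average waiting time sharply, which is why FCFS gives poor waiting times to short processes stuck behind a long one.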
