Chapter 4 - Processor Management
2020 - Semester 2
Chapter 4
Processor Management
Learning Objectives
After completing the chapter, you should be able to describe:
How job scheduling and process scheduling compare
How several process scheduling algorithms work
How jobs go through various states during execution
How the process scheduling policies can be implemented
How the interrupt handler manages pauses in processing
Outline
Overview
Scheduling sub-managers
– Job Scheduler
– Process Scheduler
Process States
Process Control Block
Process Scheduling Policies:
– Two types: preemptive and non-preemptive scheduling
Process Scheduling Algorithms
– First-Come First-Served
– Shortest Job Next
– Priority Scheduling
– Shortest Remaining Time
– Round Robin
– Multiple-Level Queues
– Earliest Deadline First
Interrupts
Overview/Definitions (1 of 3)
Program (job)
– Inactive unit. E.g. File stored on a disk
– A unit of work submitted by a user (not a process)
Process (task)
– A program in execution
– Active entity
• Requires resources to perform its function, e.g. the processor, special registers, etc.
Process management
– Keeping track of processes and the states they are in
Thread
– Portion of a process
– Runs independently
Processor (CPU)
– Performs calculations and executes programs.
Overview/Definitions (2 of 3)
Multiprogramming environment
– Multiple processes competing to be run by a single processor
(CPU).
– Requires fair and efficient CPU allocation for each job
Multitasking
– One user running many programs
– Still, resources must be shared by several programs
Interrupt
– Call for help
– Activates higher-priority program
Context Switch
– Saving job processing information when interrupted
Overview/Definitions (3 of 3)
In this chapter, we will see how a Processor Manager
manages a system with a single central processing unit (CPU).
(figure 4.3)
A typical job (or process) changes status as it moves through the system from
HOLD to FINISHED.
© Cengage Learning 2018
Process States (1 of 3)
User submits job:
– Job accepted
• Put on HOLD and placed in queue
– Job state changes from HOLD to READY
• Indicates job waiting for CPU
– Job state changes from READY to RUNNING
• When selected for CPU and processing
– Job state changes from RUNNING to WAITING
• Job needs a resource that is not immediately available (e.g. I/O); it returns to READY once the request is satisfied
– Job state changes to FINISHED
• Job completed (successfully or unsuccessfully)
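These transitions can be summarised in a short Python sketch (an illustration only; the state and transition names follow the list above):

from enum import Enum, auto

class State(Enum):
    HOLD = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    FINISHED = auto()

# Legal transitions as described on this slide.
TRANSITIONS = {
    State.HOLD:     {State.READY},                                 # admitted by the Job Scheduler
    State.READY:    {State.RUNNING},                               # dispatched by the Process Scheduler
    State.RUNNING:  {State.READY, State.WAITING, State.FINISHED},  # preempted, blocked, or done
    State.WAITING:  {State.READY},                                 # I/O request satisfied
    State.FINISHED: set(),
}

def move(current: State, new: State) -> State:
    """Return the new state if the transition is legal, otherwise raise."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {new.name}")
    return new

# Example walk-through of one job's life cycle.
s = State.HOLD
for nxt in (State.READY, State.RUNNING, State.WAITING,
            State.READY, State.RUNNING, State.FINISHED):
    s = move(s, nxt)
print(s.name)   # FINISHED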
Process States (2 of 3)
The Job Scheduler or the Process Scheduler is responsible
for each state transition:
– HOLD to READY
• Job Scheduler initiates using predefined policy
– READY to RUNNING
• Process Scheduler initiates using predefined algorithm
– RUNNING back to READY
• Process Scheduler initiates according to predefined time limit
or other criterion
– RUNNING to WAITING
• Process Scheduler initiates by instruction in job
Process States (3 of 3)
The Job Scheduler or the Process Scheduler is responsible
for each state transition (continued):
– WAITING to READY
• Process Scheduler initiates by signal from I/O
device manager
• Signal indicates I/O request satisfied; job continues
– RUNNING to FINISHED
• Process Scheduler or Job Scheduler initiates upon
job completion
• Satisfactorily or with error
Process Control Block (PCB)
The operating system must manage a large amount
of data for each active process in the system.
(figure 4.6)
Queuing paths from HOLD to FINISHED. The Job and Processor Schedulers
release the resources when the job leaves the RUNNING state.
© Cengage Learning 2018
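As a rough illustration, a PCB can be pictured as a record holding identification, state, register, scheduling and accounting data. The field names below are typical examples, not taken from any particular operating system:

from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    pid: int                       # process identification
    state: str = "HOLD"            # current process state (HOLD, READY, RUNNING, ...)
    program_counter: int = 0       # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # saved CPU register contents
    priority: int = 0              # scheduling priority
    cpu_time_used: int = 0         # accounting information
    main_memory: tuple = (0, 0)    # (base, limit) of the memory allocated to the process
    io_devices: list = field(default_factory=list)  # resources currently held

pcb = ProcessControlBlock(pid=42, priority=3)
pcb.state = "READY"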
CPU Context Switch
There is only one CPU and therefore only one set of CPU
registers
– These registers contain the values for the currently
executing process
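A minimal sketch of what a context switch saves and restores, using dictionaries as stand-in PCBs and a hypothetical three-register CPU (a real switch is performed by the kernel in privileged mode):

# The single set of CPU registers (hypothetical register names).
cpu = {"pc": 120, "acc": 7, "sp": 0x400}

def context_switch(outgoing_pcb: dict, incoming_pcb: dict) -> None:
    # Save the registers of the interrupted process into its PCB...
    outgoing_pcb["saved_registers"] = dict(cpu)
    outgoing_pcb["state"] = "READY"
    # ...then restore the registers of the process chosen to run next.
    cpu.update(incoming_pcb["saved_registers"])
    incoming_pcb["state"] = "RUNNING"

job_a = {"pid": 1, "state": "RUNNING", "saved_registers": {}}
job_b = {"pid": 2, "state": "READY",
         "saved_registers": {"pc": 0, "acc": 0, "sp": 0x800}}
context_switch(job_a, job_b)   # job A is paused, job B's registers are loaded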
First Come First Served (FCFS)
Five-job example: average turnaround time = (140 + 215 + 535 + 815 + 940)/5 = 529
(figure 4.7)
Timeline for job sequence A, B, C using the FCFS algorithm.
© Cengage Learning 2018
Note: Average turnaround time: 16.67
(figure 4.8)
Timeline for job sequence C, B, A using the FCFS algorithm.
© Cengage Learning 2018
Note: Average turnaround time: 7.3
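The two averages above can be reproduced with a few lines of Python, assuming the textbook's CPU cycles of 15, 2 and 1 ms for jobs A, B and C:

def fcfs_average_turnaround(cpu_cycles):
    """All jobs arrive at time 0 and run to completion in the order given."""
    clock, turnarounds = 0, []
    for burst in cpu_cycles:
        clock += burst              # the job finishes when its whole CPU cycle is done
        turnarounds.append(clock)   # turnaround = completion time - arrival time (0)
    return sum(turnarounds) / len(turnarounds)

print(fcfs_average_turnaround([15, 2, 1]))   # order A, B, C -> 16.67
print(fcfs_average_turnaround([1, 2, 15]))   # order C, B, A -> 7.33 (≈ 7.3)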
Shortest Job Next (SJN)
Non-preemptive
Also known as shortest job first (SJF)
Job handled based on length of CPU cycle time
Easy implementation in batch environment
– CPU time requirement known in advance
Does not work well in interactive systems
Optimal algorithm
– All jobs are available at same time
– CPU estimates available and accurate
Shortest Job Next (SJN)
Five-job example: average turnaround time = (75 + 200 + 340 + 620 + 940)/5 = 435
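The FCFS and SJN averages on these example slides are consistent with five jobs that all arrive at time 0 with CPU cycles of 140, 75, 320, 280 and 125 ms; treating those values as an assumption, a short sketch reproduces the result:

def sjn_average_turnaround(cpu_cycles):
    """All jobs arrive at time 0; the shortest job is always dispatched next."""
    clock, turnarounds = 0, []
    for burst in sorted(cpu_cycles):   # shortest job next
        clock += burst
        turnarounds.append(clock)
    return sum(turnarounds) / len(turnarounds)

jobs = [140, 75, 320, 280, 125]        # assumed CPU cycles, in milliseconds
print(sjn_average_turnaround(jobs))    # 435.0, matching the calculation above
# For comparison, FCFS on the same jobs in submission order gives 529.0.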
Priority Scheduling (1 of 4)
Non-preemptive
– Sometimes it can be preemptive
Preferential treatment for important jobs
– Highest priority programs processed first
– No interrupts until CPU cycles completed or natural
wait occurs
READY queue may contain two or more jobs with
equal priority
– Uses FCFS policy within priority
The system administrator or the Processor Manager uses
different methods of assigning priorities
Priority Scheduling (2 of 4)
Processor Manager priority assignment
methods:
– Memory requirements
• Jobs requiring large amounts of memory
• Allocated lower priorities (vice versa)
– Number and type of peripheral devices
• Jobs requiring many peripheral devices
• Allocated lower priorities (vice versa)
Priority Scheduling (3 of 4)
Processor Manager priority assignment
methods (cont.)
– Total CPU time
• Jobs having a long CPU cycle
• Given lower priorities (vice versa)
– Amount of time already spent in the system
(aging)
• Total time elapsed since job accepted for processing
• Increase priority if job in system unusually long time
Priority Scheduling (4 of 4)
Example:
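A minimal sketch of non-preemptive priority scheduling with FCFS tie-breaking, using hypothetical job data (lower number = higher priority):

import heapq

def priority_schedule(jobs):
    """jobs: list of (name, priority, cpu_cycle). Returns jobs in dispatch order."""
    # heapq pops the smallest tuple: (priority, arrival index) implements
    # "highest priority first, FCFS within equal priority".
    heap = [(prio, i, name, cycle) for i, (name, prio, cycle) in enumerate(jobs)]
    heapq.heapify(heap)
    order = []
    while heap:
        prio, _, name, cycle = heapq.heappop(heap)
        order.append((name, prio, cycle))   # runs to completion, no interruption
    return order

jobs = [("A", 2, 10), ("B", 1, 5), ("C", 2, 3), ("D", 1, 8)]
print(priority_schedule(jobs))   # B and D (priority 1) first, then A and C, FCFS within ties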
Shortest Remaining Time (SRT)
Preemptive version of SJN
Processor allocated to job closest to completion
– Preemptive if newer job has shorter completion time
Often used in batch environments
– Short jobs given priority
Cannot implement in interactive system
– Requires advance CPU time knowledge
Involves more overhead than SJN
– System monitors CPU time for READY queue jobs
– Performs context switching
Shortest Remaining Time (SRT)
(figure 4.10)
Job           A  B  C  D
Arrival Time  0  1  2  3
CPU Cycle     6  3  1  4
Timeline for job sequence A, B, C, D using the preemptive SRT algorithm. Each job is
interrupted after one CPU cycle if another job is waiting with less CPU time remaining.
© Cengage Learning 2018
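A small simulation of this timeline, using the arrival times and CPU cycles from the table above; it yields an average turnaround time of 6.25:

def srt(jobs):
    """jobs: dict name -> (arrival, cpu_cycle). Returns completion times."""
    remaining = {n: c for n, (a, c) in jobs.items()}
    arrival = {n: a for n, (a, c) in jobs.items()}
    clock, finished = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                                   # CPU idle until the next arrival
            clock = min(arrival[n] for n in remaining)
            continue
        job = min(ready, key=lambda n: remaining[n])    # least CPU time remaining
        remaining[job] -= 1                             # run one time unit, then re-evaluate
        clock += 1
        if remaining[job] == 0:
            finished[job] = clock
            del remaining[job]
    return finished

jobs = {"A": (0, 6), "B": (1, 3), "C": (2, 1), "D": (3, 4)}
done = srt(jobs)
turnaround = {n: done[n] - jobs[n][0] for n in jobs}
print(done, sum(turnaround.values()) / len(turnaround))   # average turnaround 6.25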
Round Robin (1 of 5)
Preemptive
Distributes the processing time equitably among all ready
processes
The algorithm establishes a particular time slice (quantum),
which is the amount of time each process receives before
being preempted and returned to the ready state to allow
another process its turn
Time quantum size:
– Crucial to system performance
– Varies from 100 ms to 1-2 seconds
CPU equally shared among all active processes
– Not monopolized by one job
Used extensively in interactive systems
Round Robin (2 of 5)
Suppose the time slice is 50
Five-job example: average turnaround time = (515 + 325 + 940 + 920 + 640)/5 = 668
Round Robin (3 of 5)
Suppose the time quantum is 4
Job           A  B  C  D
Arrival Time  0  1  2  3
CPU Cycle     8  4  9  5
(figure 4.12)
Timeline for job sequence A, B, C, D using the preemptive round robin
algorithm with time slices of 4 ms.
© Cengage Learning 2018
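A small simulation of this timeline (quantum = 4 ms), using the arrival times and CPU cycles from the table above:

from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, arrival, cpu_cycle), sorted by arrival time."""
    clock, queue, pending, finished = 0, deque(), list(jobs), {}
    while queue or pending:
        # Admit any jobs that have arrived by the current clock time.
        while pending and pending[0][1] <= clock:
            name, _, cycle = pending.pop(0)
            queue.append([name, cycle])
        if not queue:
            clock = pending[0][1]          # CPU idle: jump to the next arrival
            continue
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        # Jobs arriving during this slice join the queue ahead of the preempted job.
        while pending and pending[0][1] <= clock:
            n, _, c = pending.pop(0)
            queue.append([n, c])
        if remaining - run > 0:
            queue.append([name, remaining - run])   # preempted, back of the READY queue
        else:
            finished[name] = clock
    return finished

jobs = [("A", 0, 8), ("B", 1, 4), ("C", 2, 9), ("D", 3, 5)]
print(round_robin(jobs, 4))   # A finishes at 20, B at 8, C at 26, D at 25 (average turnaround 18.25)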
Round Robin (4 of 5)
Efficiency
– Depends on time quantum size
• In relation to average CPU cycle
Quantum too large (larger than most CPU cycles)
– Response time suffers
– Algorithm reduces to FCFS scheme
Quantum too small
– Throughput suffers
– Context switching occurs
• Job execution slows down
• Overhead dramatically increased
Information saved in PCB
Round Robin (5 of 5)
Best quantum time size:
– Depends on system
• Interactive: response time key factor
• Batch: turnaround time key factor
– General rules of thumb
• Long enough to allow 80% of CPU cycles to run to
completion
• At least 100 times longer than the time required to perform one context switch (see the arithmetic sketch below)
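For instance, assuming a hypothetical context-switch cost of 10 microseconds (the real cost is hardware dependent), the second rule gives a minimum quantum of about 1 ms:

context_switch_cost = 10e-6           # seconds, assumed for illustration only
min_quantum = 100 * context_switch_cost
print(min_quantum)                    # 0.001 s = 1 ms minimum time quantum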
Process Scheduling Algorithms
Are they preemptive or non-preemptive?
Explain.
First-Come, First-Served?
Priority?
Shortest Job Next?
Shortest Remaining Time?
Round Robin?
Earliest Deadline First?
Multiple-Level Queues (1 of 3)
A class of scheduling algorithms that categorise processes
by some characteristic and can be used in conjunction with
other policies or schemes.
– Processes can be categorised by: memory size, process type, their
response time, their externally assigned priorities, etc.
A multilevel queue-scheduling algorithm divides the ready
queue into several separate queues to which processes are
assigned.
– Each queue is associated with one particular scheduling algorithm
Examples:
– Interactive processes: Round Robin
– Background processes: FCFS and SRT
Multiple-Level Queues (2 of 3)
Works in conjunction with several other schemes
Works well in systems with jobs grouped by common
characteristic
– Priority-based
• Different queues for each priority level
– CPU-bound jobs in one queue and I/O-bound jobs in another
queue
– Hybrid environment
• Batch jobs in background queue
• Interactive jobs in foreground queue
Scheduling policy based on predetermined scheme
Four primary methods of moving jobs
Multiple-Level Queues (3 of 3)
Four primary methods of moving jobs:
– Case 1: No Movement Between Queues
– Case 2: Movement Between Queues
– Case 3: Variable Time Quantum Per Queue
– Case 4: Aging
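A minimal sketch of Case 1 (no movement between queues): two queues with a fixed priority between them, round robin in the foreground queue and FCFS in the background queue. The job data are hypothetical:

from collections import deque

def multilevel(foreground, background, quantum):
    """foreground/background: deques of [name, cpu_cycle]. Returns (name, finish_time) log."""
    log, clock = [], 0
    while foreground or background:
        if foreground:                        # interactive queue is always served first
            name, remaining = foreground.popleft()
            run = min(quantum, remaining)
            if remaining - run > 0:
                foreground.append([name, remaining - run])   # round robin within the queue
        else:                                 # background runs only when foreground is empty
            name, run = background.popleft()  # FCFS: the job runs to completion
        clock += run
        log.append((name, clock))
    return log

fg = deque([["edit", 3], ["shell", 5]])       # interactive (foreground) jobs
bg = deque([["batch1", 8], ["batch2", 4]])    # batch (background) jobs
print(multilevel(fg, bg, quantum=2))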
End