Elements of Processor Management: CIS 250 Operating Systems
Today's class
Basic elements of an OS: Processor Management
Jobs, processes, basic model of processes
Threads
Scheduling (latter half of lecture)
Stack space in memory is used to keep the values of variables belonging to subroutines that still have to be returned to
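A minimal C sketch (my illustration, not from the slides) of that idea: each active subroutine call gets a stack frame holding its local variables, and the frame is discarded once the subroutine returns.

    #include <stdio.h>

    /* x and result live in square()'s stack frame; the frame is popped
       when square() returns to its caller. */
    static int square(int x)
    {
        int result = x * x;
        return result;
    }

    int main(void)
    {
        int n = 6;                      /* n lives in main()'s stack frame */
        printf("%d\n", square(n));      /* main()'s frame stays until main returns */
        return 0;
    }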
Process States
A process may go through a number of different states as it is executed.
When a process requests a resource, for example, it may have to wait for that resource to be given by the OS.
In addition to I/O, memory, and the like, processes must share the CPU.
Processes are unaware of each other's CPU usage; virtualization allows them to share the CPU.
[State diagram: READY, RUNNING, WAITING, TERMINATED]
The number of states possible may differ from our model; after all, it is only a model.
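As a rough illustration (assumed, not the lecture's code), an OS could record which of these model states each process is in as part of its per-process bookkeeping:

    enum proc_state { READY, RUNNING, WAITING, TERMINATED };

    struct process {
        int             pid;    /* process identifier */
        enum proc_state state;  /* where the process currently sits in the model */
    };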
Context Switching
Strictly speaking, only one process can run on the CPU at any given time.
To share the CPU, the operating system must save the state of one process, then load another's program and data.
Hardware has to help with the stopping of programs (time-out interrupt).
This context switch does no useful work; it is an overhead expense.
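A simplified sketch of what gets saved and restored. The helpers save_cpu_state and load_cpu_state are hypothetical stand-ins for the hardware-specific (usually assembly) code a real kernel runs from the timer interrupt.

    struct cpu_context {
        unsigned long registers[16];    /* general-purpose registers */
        unsigned long program_counter;
        unsigned long stack_pointer;
    };

    struct pcb {                        /* per-process control block */
        int                pid;
        struct cpu_context ctx;
    };

    void save_cpu_state(struct cpu_context *c);   /* hypothetical, hardware-specific */
    void load_cpu_state(struct cpu_context *c);   /* hypothetical, hardware-specific */

    /* Save the outgoing process's CPU state, then restore the incoming one's.
       None of this is useful work for either process: pure overhead. */
    void context_switch(struct pcb *from, struct pcb *to)
    {
        save_cpu_state(&from->ctx);
        load_cpu_state(&to->ctx);
    }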
All threads in a process share memory space and other resources.
Each thread has its own CPU state (registers, program counter) and stack.
May be scheduled by the process or by the kernel.
Threads are efficient, but lack protection from each other.
[Figure: one program containing two threads, each with its own PC and its own stack]
Types of Threads
User Threads
Designed for applications to use
Managed by application programmer
May not get used!
Kernel Threads
Managed by OS
More overhead
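A small POSIX-threads sketch (my example, not from the slides): both threads share the process's global variable while each keeps its own stack for locals, and on Linux these pthreads are typically backed by kernel threads managed by the OS. Compile with cc -pthread.

    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;                       /* one copy, visible to every thread */

    static void *worker(void *arg)
    {
        int local = *(int *)arg;          /* 'local' lives on this thread's own stack */
        shared += local;                  /* unsynchronized: with no lock, the threads
                                             can race here (no protection from each other) */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        int a = 1, b = 2;

        pthread_create(&t1, NULL, worker, &a);
        pthread_create(&t2, NULL, worker, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("shared = %d\n", shared);  /* usually 3, but the race makes it unreliable */
        return 0;
    }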
BREAK
Categories of policies
Multi-user CPU scheduling policies
variants of Round Robin (preemptive)
Batch systems
First Come, First Served (nonpreemptive)
Shortest Job First (nonpreemptive)
Shortest Remaining Time (preemptive)
Priority Scheduling (preemptive)
Round Robin
Almost all computers today use some form of time-sharing, even PCs.
Time divided into CPU quanta (5-100 ms).
Proper time quantum size? Two rules of thumb:
At least 100 times larger than a context switch (if not, context switching takes too much time)
At least 80% of CPU bursts should run to completion within a single quantum (if not, RR degenerates towards FCFS)
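For example (numbers assumed purely for illustration): if a context switch costs about 10 microseconds, the first rule suggests a quantum of at least 100 × 10 µs = 1 ms; and if most interactive CPU bursts are shorter than, say, 20 ms, a quantum in the 20-100 ms range lets well over 80% of them finish without being preempted.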
Simple Example
Arrives:  Job:  CPU Cycles:
   0       A        10
   4       B         3
   5       C         5
   7       D         2

Resulting schedule: A A B C D A B C A C A
Simple Example
Arrives:  Job:  CPU Cycles:
   0       A        10
   4       B         3
   5       C         5
   7       D         2
Turnaround Time
Is the time from job arrival to job completion
T(jᵢ) = Finish(jᵢ) - Arrive(jᵢ)   (and where might such information be stored?)
We often want to know the average turnaround time for a given schedule.
Let's calculate the turnaround time for the last schedule.
AVERAGE = 11.25
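A tiny C sketch of the formula above. The arrival times come from the example table; the finish times are the ones read off the reconstructed round-robin schedule, so treat them as assumptions.

    #include <stdio.h>

    int main(void)
    {
        /* Jobs A, B, C, D from the example */
        double arrive[] = { 0, 4, 5, 7 };
        double finish[] = { 20, 13, 18, 10 };    /* assumed: read off the RR schedule above */
        double total = 0;

        for (int i = 0; i < 4; i++)
            total += finish[i] - arrive[i];      /* T(j) = Finish(j) - Arrive(j) */

        printf("average turnaround = %.2f\n", total / 4);   /* prints 11.25 */
        return 0;
    }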
Arrives:  Job:  CPU Cycles:  Priority:
   0       A        10           3
   4       B         3           1
   5       C         5           4
   7       D         2           2
Multilevel Priority
Many different priority levels, 1, 2, 3, ..., n, where 1 is lowest and n is highest.
A process is permanently assigned to one priority level.
An RR scheduler handles processes of equal priority within a level.
Since each process was in a different queue, we didn't have to calculate RR within the queue.
If a process uses up time available at level 1, it is assumed to be a runaway process and is terminated
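One plausible shape for the scheduler's data structure (a sketch under my own assumptions, not the course's code): an array of ready queues, one per priority level, with the dispatcher always serving the highest nonempty level and doing RR inside it.

    #define LEVELS 8                      /* assumed number of priority levels */

    struct ready_queue {
        int count;                        /* how many processes are waiting here */
        /* ... FIFO of process control blocks for RR within the level ... */
    };

    struct ready_queue level[LEVELS];     /* level[LEVELS - 1] is the highest priority */

    /* Return the highest priority level that has a runnable process,
       or -1 if every queue is empty; RR then rotates within that level. */
    int highest_ready_level(void)
    {
        for (int p = LEVELS - 1; p >= 0; p--)
            if (level[p].count > 0)
                return p;
        return -1;
    }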
Arrives:  Job:  CPU Cycles:  Priority:
   0       A        10           3
   4       B         3           1
   5       C         5           4
   7       D         2           2
[Gantt chart of the multilevel priority schedule]
A Bon Fin
We've talked about how processes are moved and scheduled (though we ignored the time used on context switching).
Sometimes processes need the same resources; this can lead to problems!
Next time: interprocess communication