Elements of Processor Management: CIS 250 Operating Systems

This document discusses key elements of processor management in operating systems, including processes, scheduling, and threading. It covers:
- The basic states a process can be in (new, ready, running, waiting, terminated) and how it transitions between states.
- Process scheduling policies like round robin, shortest job first, priority scheduling, and multilevel feedback queue scheduling.
- Context switching and how the CPU is shared between processes.
- Process control blocks that contain information about each process.
- Differences between threads and processes in how they share resources.


Elements of Processor Management

CIS 250 Operating Systems Lecture 2

Today's class
- Basic elements of an OS: processor management
- Jobs, processes, basic model of processes
- Threads
- Scheduling (latter half of lecture)

Remember the von Neumann architecture

- We have a CPU executing instructions
- Memory and devices connected by a bus
- A running program typically has a code segment and also some data
- Several programs may be running simultaneously on such a computer
  - they compete and cooperate on resources like memory, CPU cycles, and I/O devices
  - the operating system must coordinate this activity

Consider An Executing Program


- Sometimes we call a program in the system a job or a process
- We sometimes use the terms process and job interchangeably, but they are different
- A job is an activity submitted to the operating system to be done
- A process is an activity being worked on by the system
- Thus {processes} ⊆ {jobs}

Basic Program Execution


- When a program runs:
  - the program counter keeps track of the next instruction to be executed
  - registers keep values of current variables
  - together, the program counter and variables for a program are the program state
  - stack space in memory is used to keep values of variables from subroutines that have to be returned to

Process States
- A process may go through a number of different states as it is executed
- When a process requests a resource, for example, it may have to wait for that resource to be given by the OS
- In addition to I/O, memory, and the like, processes must share the CPU
  - processes are unaware of each other's CPU usage
  - virtualization allows them to share the CPU

The Five State Model


The five states are NEW, READY, RUNNING, WAITING, and TERMINATED. Arc labels show typical events which cause each transition:
- NEW → READY: admitted
- READY → RUNNING: scheduler dispatch
- RUNNING → READY: time-out interrupt
- RUNNING → WAITING: OS services request (I/O or event wait)
- WAITING → READY: the awaited I/O or event completes
- RUNNING → TERMINATED: deallocate resources and exit
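The transitions above can be captured as a small lookup table; a minimal sketch in Python (state and event names follow the diagram, the function name is illustrative):

```python
# Allowed transitions in the five-state process model, keyed by
# (current_state, event) -> next_state.
TRANSITIONS = {
    ("new", "admitted"): "ready",
    ("ready", "scheduler dispatch"): "running",
    ("running", "time-out interrupt"): "ready",
    ("running", "I/O or event wait"): "waiting",
    ("waiting", "I/O or event completion"): "ready",
    ("running", "exit"): "terminated",
}

def next_state(state, event):
    """Return the next state, or raise if the transition is not in the model."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state!r} on {event!r}")

# A legal lifecycle: new -> ready -> running -> waiting -> ready -> running -> terminated
```

Note that there is no arc from WAITING directly to RUNNING: a waiting process must pass through READY and be dispatched again.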

Which state is my job in?


- On some systems, you can look at the status of your jobs and even others' jobs
  - on UNIX, type ps at the prompt
  - Note: the nice command in UNIX
  - print jobs on Mac or Windows
- The number of states possible may differ from our model; after all, it is only a model

Context Switching
- Strictly speaking, only one process can run on the CPU at any given time
- To share the CPU, the operating system must save the state of one process, then load another's program and data
- Hardware has to help with the stopping of programs (time-out interrupt)
- This context switch does no useful work; it is an overhead expense

Process Control Block (PCB)


- A PCB is a small data structure for the job submitted to the operating system
- Contents (four major parts):
  - unique identification
  - process status (i.e., where in the 5-state model)
  - process state (i.e., instruction counter, memory map, list of resources allocated, priority)
  - other useful information (e.g., usage stats)
- Schedulers use the PCB instead of the entire process (why?)
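A sketch of what such a record might look like in Python; the field names are illustrative and not drawn from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # unique identification
    pid: int
    # process status: where in the 5-state model
    status: str = "new"
    # process state: enough to resume the process after a context switch
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    resources: list = field(default_factory=list)
    priority: int = 0
    # other useful information
    cpu_time_used: int = 0
```

Schedulers move these small records between queues, which is cheap; moving an entire process image around would not be.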

Threads versus Processes


- A thread is a single path of instruction in a program
  - threads share a process!
- All threads in a process share memory space and other resources
- Each thread has its own CPU state (registers, program counter) and stack
- Threads may be scheduled by the process or by the kernel
- Threads are efficient, but lack protection from each other

[Figure: a program with one copy of the program code and shared data; Thread 1 and Thread 2 each have their own program counter (PC) and stack.]
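The sharing in the figure can be demonstrated directly: two threads in one process update the same variable, something two separate processes could not do without explicit interprocess communication. A minimal sketch in Python:

```python
import threading

# Two threads share the process's data (the counter below), but each has
# its own stack and program counter.  The lock is needed exactly because
# the data is shared: it is the "lack of protection" mentioned above.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # without this, updates could be lost
            counter += 1

t1 = threading.Thread(target=worker, args=(10000,))
t2 = threading.Thread(target=worker, args=(10000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)   # 20000: both threads updated the same shared variable
```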

Types of Threads
User Threads
- Designed for applications to use
- Managed by the application programmer
- May not get used!

Kernel Threads
- Managed by the OS
- More overhead

CPU scheduling policies


- Many conflicting goals: maximum number of jobs processed, importance of the jobs, responsiveness to users, how long a job might be forced to wait, I/O utilization, CPU utilization, etc.
- Preemptive versus nonpreemptive policies

BREAK

Categories of policies
Multi-user CPU scheduling policies
- variants of Round Robin (preemptive)

Batch systems
- First Come, First Served (nonpreemptive)
- Shortest Job First (nonpreemptive)
- Shortest Remaining Time (preemptive)
- Priority Scheduling (preemptive)

Round Robin
- Almost all computers today use some form of time-sharing, even PCs
- Time is divided into CPU quanta (5-100 ms)
- Proper time quantum size? Two rules of thumb:
  - At least 100 times larger than a context switch (if not, context switching takes too much of the time)
  - At least 80% of CPU bursts should run to completion within one quantum (if not, RR degenerates towards FCFS)
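The two rules of thumb can be checked mechanically for a candidate quantum; a sketch in Python where the function name and burst-length sample are made-up illustrations (values in milliseconds):

```python
def quantum_ok(quantum_ms, context_switch_ms, burst_lengths_ms):
    # Rule 1: quantum at least 100x the context-switch cost.
    rule1 = quantum_ms >= 100 * context_switch_ms
    # Rule 2: at least 80% of bursts finish within one quantum.
    finish_in_one = sum(1 for b in burst_lengths_ms if b <= quantum_ms)
    rule2 = finish_in_one >= 0.8 * len(burst_lengths_ms)
    return rule1 and rule2

bursts = [3, 5, 8, 12, 15, 20, 25, 30, 45, 120]   # hypothetical sample
print(quantum_ok(50, 0.1, bursts))   # True: 50 >= 10, and 9/10 bursts fit
print(quantum_ok(5, 0.1, bursts))    # False: 5 < 10, and only 2/10 bursts fit
```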

So How Are Processes Scheduled?


- Let's assume that we know the arrival times of the jobs, and how many CPU cycles each will take; for RR, assume quantum = 2
- Real processors don't know arrival times and CPU cycles required in advance, so we can't schedule on this basis!
- Let's try out some of the algorithms

Simple Example
Job:         A   B   C   D
Arrives:     0   4   5   7
CPU Cycles:  10  3   5   2

Round Robin (RR), First Stages


Timeline so far: A (0-2), A (2-4)

Round Robin, Continued (2)


Timeline so far: A (0-2), A (2-4), B (4-6), C (6-8)

Round Robin, Completed


Timeline: A (0-2), A (2-4), B (4-6), C (6-8), D (8-10), A (10-12), B (12-13), C (13-15), A (15-17), C (17-18), A (18-20)

Simple Example
Job:         A   B   C   D
Arrives:     0   4   5   7
CPU Cycles:  10  3   5   2

Shortest Remaining Time (SRT) (also preemptive)


Timeline: A (0-4), B (4-7), D (7-9), C (9-14), A (14-20)
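The SRT schedule above can be reproduced with a cycle-by-cycle simulator; a minimal sketch in Python (function and variable names are illustrative):

```python
def srt(jobs):   # jobs: {name: (arrival, cycles)}
    remaining = {n: c for n, (a, c) in jobs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= t]
        if not ready:               # CPU idle until the next arrival
            t += 1
            continue
        # Run the ready job with the least work left; ties -> earlier arrival.
        run = min(ready, key=lambda n: (remaining[n], jobs[n][0]))
        remaining[run] -= 1
        t += 1
        if remaining[run] == 0:
            finish[run] = t
            del remaining[run]
    return finish

jobs = {"A": (0, 10), "B": (4, 3), "C": (5, 5), "D": (7, 2)}
print(srt(jobs))   # {'B': 7, 'D': 9, 'C': 14, 'A': 20}
```

The finish times match the timeline: B preempts A at 4, D preempts A's claim at 7, and A, with the most work left, finishes last.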

Shortest Job Next (SJN) (non-preemptive)


Timeline: A (0-10), D (10-12), B (12-15), C (15-20)

First Come, First Served (FCFS) (non-preemptive)


Timeline: A (0-10), B (10-13), C (13-18), D (18-20)

Turnaround Time
- Turnaround time is the time from job arrival to job completion: T(j_i) = Finish(j_i) - Arrive(j_i) (and where might such information be stored?)
- We often want to know the average turnaround time for a given schedule
- Let's calculate the turnaround time for the last schedule

Calculating Average Turnaround Time for FCFS


Job:         A   B   C   D
Arrives:     0   4   5   7
Finishes:    10  13  18  20
Turnaround:  10  9   13  13

AVERAGE = 11.25
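The table can be recomputed directly from the definition T(j) = Finish(j) - Arrive(j); a quick check in Python:

```python
# Arrival and FCFS finish times from the example above.
arrive = {"A": 0, "B": 4, "C": 5, "D": 7}
finish = {"A": 10, "B": 13, "C": 18, "D": 20}

# Turnaround for each job, then the average over all four.
turnaround = {j: finish[j] - arrive[j] for j in arrive}
print(turnaround)                        # {'A': 10, 'B': 9, 'C': 13, 'D': 13}
print(sum(turnaround.values()) / 4)      # 11.25
```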

Simple Example (Extended)


Job:         A   B   C   D
Arrives:     0   4   5   7
CPU Cycles:  10  3   5   2
Priority:    3   1   4   2

(here higher priority values mean you go first)

Multilevel Priority
- Many different priority levels 1, 2, ..., n, where 1 is lowest and n is highest
- A process is permanently assigned to one priority level
- A RR scheduler handles processes of equal priority within a level

Multilevel Priority (ML) (preemptive)


Timeline: A (0-5), C (5-10), A (10-15), D (15-17), B (17-20)

since each process was in a different queue, we didn't have to calculate RR within the queue
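The schedule above can be reproduced by simulating preemptive priority scheduling one cycle at a time; a minimal sketch in Python (names are illustrative):

```python
def priority_schedule(jobs):   # jobs: {name: (arrival, cycles, priority)}
    remaining = {n: c for n, (a, c, p) in jobs.items()}
    finish, t = {}, 0
    while remaining:
        # Among the jobs that have arrived, the highest priority value wins,
        # preempting whatever ran in the previous cycle.
        ready = [n for n in remaining if jobs[n][0] <= t]
        run = max(ready, key=lambda n: jobs[n][2])
        remaining[run] -= 1
        t += 1
        if remaining[run] == 0:
            finish[run] = t
            del remaining[run]
    return finish

jobs = {"A": (0, 10, 3), "B": (4, 3, 1), "C": (5, 5, 4), "D": (7, 2, 2)}
print(priority_schedule(jobs))   # {'C': 10, 'A': 15, 'D': 17, 'B': 20}
```

C (priority 4) preempts A at 5; B, with the lowest priority, waits until everyone else has finished.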

Multilevel Feedback (MLF) (preemptive)


- There are several priority levels 1, 2, ..., n
- Processes start at the highest priority level n
- A maximum time T is associated with each level; if the process uses more than that amount of time, it moves to a lower priority level
- RR is used to schedule within each level

Multilevel Feedback Time Limits


- The time limit at each priority level varies as a function of the level:
  - T_P = T for P = n
  - T_P = 2·T_(P+1) for 1 ≤ P < n
- Usually T is set to some small, reasonable amount appropriate for the particular system
- If a process uses up the time available at level 1, it is assumed to be a runaway process and is terminated
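The formula gives each lower level a budget twice that of the level above it; a small sketch in Python, assuming T = 2 and n = 3 levels:

```python
def mlf_limits(T, n):
    # T_n = T at the top level; each lower level doubles: T_P = 2 * T_(P+1).
    limits = {n: T}
    for p in range(n - 1, 0, -1):
        limits[p] = 2 * limits[p + 1]
    return limits

print(mlf_limits(2, 3))   # {3: 2, 2: 4, 1: 8}
```

So a process that exhausts 2 cycles at level 3, then 4 at level 2, then 8 at level 1 would be treated as a runaway.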

Simple Example (Extended)


Job:         A   B   C   D
Arrives:     0   4   5   7
CPU Cycles:  10  3   5   2
Priority:    3   1   4   2

(here higher priority values mean you go first)

Multilevel Feedback Schedule


Timeline: A (0-2), A (2-4), B (4-6), C (6-8), D (8-10), A (10-12), B (12-13), C (13-16), A (16-20)

Assumptions: T = 2; RR Quantum = 2 at each level

A Bon Fin
- We've talked about how processes are moved and scheduled (though we ignored the time used on context switching)
- Sometimes processes need the same resources; this can lead to problems!
- Next time: interprocess communication

Lab will be a process scheduling exercise
