
UNIT–II: Process Management

1. Process Concepts
2. Process Scheduling
3. Operation on Processes
4. CPU Scheduling
5. Scheduling Criteria
6. Scheduling Algorithms
• First Come First Serve (FCFS)
• Shortest Job First (SJF)
• Round Robin (RR)
7. Multilevel Queue Scheduling
8. Case Studies
• Unix
• Linux
• Windows
• Exam Questions
Process Concepts

• Process
• Process States
• Process Control Block (PCB)
• Switch from Process to Process
• Process Scheduling Queues
• Schedulers

Process
• A program in execution is called a Process.

• Process execution must progress in a Sequential fashion.


• A process is more than the program code, which is known as the text
section.

• A process also includes:
– Program Counter (current activity and contents of registers)
– Stack (temporary data: local variables, return addresses)
– Data Section (global and static variables)
– Heap (dynamically allocated memory)
– Text (code section)

Process States
As a process executes, it changes state

– new: The process is being created


– ready: The process is waiting to be assigned to a processor
– running: Instructions are being executed
– waiting: The process is waiting for some event to occur
– terminated: The process has finished execution
A process passes through several stages from creation to completion, and
there are at least five states. During execution a process is in exactly one
of these states at any time, although the names of the states are not
standardized and vary between operating systems.

Process States:
The states of a process are as follows:
New (Create): The process is about to be created but has not yet been
created. It is the program, present in secondary memory, that will be
picked up by the OS to create the process.

Ready: New -> Ready to run. After creation, the process enters the ready
state, i.e. it is loaded into main memory. The process is now ready to run
and is waiting to be given CPU time for its execution. Processes that are
ready for execution by the CPU are maintained in a queue called the ready
queue.
Run: The process is chosen from the ready queue by the CPU scheduler for
execution, and its instructions are executed by one of the available CPU
cores.

Blocked or Wait: Whenever the process requests I/O, needs input from the
user, or needs access to a critical region (whose lock is already held), it
enters the blocked or wait state. The process continues to wait in main
memory and does not require the CPU. Once the I/O operation is completed,
the process goes back to the ready state.

Terminated or Completed: The process is killed and its PCB is deleted. The
resources allocated to the process are released or deallocated.

Process Control Block (PCB)

Information associated with each process


• Process State
• Process number (process ID)
• Program Counter (address of the next instruction to be executed)
• CPU Registers (register contents that must be saved when the process is
switched out)
• Memory Management information (base register, limit register, page table,
segment table)
• CPU Scheduling information (priority, time quantum)
• Accounting information (CPU usage statistics)
• List of open files (how many files are open)
• I/O Status information: list of I/O devices allocated to the process

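Taken together, these fields can be pictured as a per-process structure maintained by the kernel. Below is a minimal sketch in C; the type and field names (pcb_t, reg_set_t, and so on) are illustrative assumptions, not the layout of any real operating system.

```c
/* A minimal, illustrative PCB layout; field names are assumptions,
 * not the structure used by any real kernel. */
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct {
    uint64_t general[16];      /* saved CPU registers                 */
    uint64_t program_counter;  /* address of the next instruction     */
    uint64_t stack_pointer;
} reg_set_t;

typedef struct pcb {
    int           pid;             /* process number (process id)       */
    proc_state_t  state;           /* process state                     */
    reg_set_t     context;         /* program counter + CPU registers   */
    uint64_t      base_reg;        /* memory-management information     */
    uint64_t      limit_reg;
    void         *page_table;
    int           priority;        /* CPU-scheduling information        */
    int           time_quantum;
    uint64_t      cpu_time_used;   /* accounting information            */
    int           open_files[16];  /* list of open files (descriptors)  */
    int           io_devices[8];   /* I/O status information            */
    struct pcb   *next;            /* link for the queue it sits in     */
} pcb_t;
```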
Context Switching in OS (Operating System):

•Context switching is a technique used by the operating system to switch
the CPU from one process to another. When a switch is performed, the system
stores the status of the old running process (its registers and program
counter) and assigns the CPU to a new process to execute its tasks. While
the new process is running, the previous process waits in the ready queue.
Execution of the old process later resumes from the exact point at which it
was stopped.

•Context switching is what makes a multitasking operating system possible:
multiple processes share the same CPU to perform their tasks without the
need for additional processors in the system.

CPU Switch From Process to Process

Steps for Context Switching:

There are several steps involved in context switching between processes.
The following diagram represents the context switch from process P1 to P2
when an interrupt, an I/O request, or a higher-priority process appears in
the ready queue.
•As we can see in the diagram, initially the P1 process is running on the
CPU to execute its task, and at the same time another process, P2, is in
the ready state.
•If an error or interrupt occurs, or the process requires input/output, P1
switches from the running state to the waiting state. Before changing the
state of process P1, the context switch saves the context of P1 (its
registers and program counter) into PCB0.
•After that, it loads the state of process P2 from its PCB1 and moves P2
from the ready state to the running state.
The following steps are taken when switching Process P1 to
Process 2:

1. First, the context switch saves the state of process P1 (its program
counter and registers) into its PCB (Process Control Block), since P1 is
the process currently in the running state.

2. PCB0 of process P1 is then updated, and the process is moved to the
appropriate queue, such as the ready queue, an I/O queue, or a waiting
queue.

3. After that, another process is brought into the running state: a new
process is selected from the ready state, for example the process with the
highest priority.

4. Now the PCB (Process Control Block) of the selected process P2 is
updated. This includes switching its state from ready to running, or from
another state such as blocked, exit, or suspend.

5. If the CPU has executed process P2 before, the saved status of P2 must
be restored so that it resumes execution at the point where it was
interrupted.

Similarly, when process P2 is later switched off the CPU, process P1 can
resume execution: P1 is reloaded from PCB0 into the running state and
continues its task from the same point. Without this saved information, the
context would be lost, and whenever the process ran again it would have to
start from the beginning.

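The steps above can be condensed into a short sketch. This is conceptual code only, written in C for concreteness; save_registers, load_registers, enqueue, and queue_t are assumed helper names, it builds on the pcb_t sketched earlier, and a real kernel performs the register save/restore in architecture-specific assembly.

```c
/* Conceptual sketch of a context switch from 'prev' to 'next'.
 * Assumes the pcb_t sketched earlier; the helpers are hypothetical. */
typedef struct queue queue_t;              /* opaque ready/I-O/waiting queue  */
void save_registers(reg_set_t *ctx);       /* assumed: dump CPU regs + PC     */
void load_registers(const reg_set_t *ctx); /* assumed: restore CPU regs + PC  */
void enqueue(queue_t *q, pcb_t *p);        /* assumed: link a PCB into a queue */

void context_switch(pcb_t *prev, pcb_t *next, queue_t *target_queue)
{
    /* 1. Save the state of the running process P1 into its PCB.          */
    save_registers(&prev->context);

    /* 2. Update the old PCB and move it to the appropriate queue
     *    (ready queue, I/O queue, or waiting queue).                      */
    prev->state = WAITING;                 /* or READY, depending on the cause */
    enqueue(target_queue, prev);

    /* 3-4. Mark the selected process P2 as running and restore its
     *      context; execution resumes where it was interrupted last time. */
    next->state = RUNNING;
    load_registers(&next->context);
}
```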
Process Scheduling Queues

• To meet the objectives of multiprogramming and multitasking, the process
scheduler selects an available process for execution on the CPU.

• Job queue(hard disk)

– Set of all processes in the system

• Ready queue(RAM)
– Set of all processes residing in main memory, ready and waiting to
execute

• Device queues
– Set of processes waiting for an I/O device

• Processes migrate among the various queues

Ready Queue and Various I/O Device Queues

Representation of Process Scheduling

The operating system manages a queue for each of the process states, and
the PCB of a process is stored in the queue of its current state. When a
process moves from one state to another, its PCB is unlinked from the
queue of the old state and added to the queue of the new state.

Schedulers
1.Long–term Scheduler (or Job scheduler)
• Long term scheduler is also known as job scheduler. It chooses the processes from the pool (secondary
memory) and keeps them in the ready queue maintained in the primary memory.
• Long Term scheduler mainly controls the degree of Multiprogramming. The purpose of long term scheduler is
to choose a perfect mix of IO bound and CPU bound processes among the jobs present in the pool.
• If the job scheduler chooses more IO bound processes then all of the jobs may reside in the blocked state all the
time and the CPU will remain idle most of the time. This will reduce the degree of Multiprogramming.
Therefore, the Job of long term scheduler is very critical and may affect the system for a very long time.

– This executes relatively infrequently.(less usage)

2.Short–term Scheduler (or CPU scheduler or Dispatcher)

-Selects from among the processes that are ready to execute and
allocates the CPU to one of them.
-invoked whenever an event occurs, leading to interruption.
Ex: clock interrupts, I/O interrupts, operating system calls,
signals, etc.
-A scheduling algorithm is used to select which job will be dispatched
for execution. The job of the short-term scheduler can be critical: if it
selects a job whose CPU burst time is very high, all the jobs after it
will have to wait in the ready queue for a very long time.
-This problem is called starvation, and it may arise if the short-term
scheduler makes poor choices while selecting jobs.
-The short-term scheduler must select a new process for the CPU
frequently (high usage), so it must be very fast.
3.Addition of Medium Term Scheduling

• The medium-term scheduler temporarily removes processes from main


memory and places them in secondary memory , like hard disk drive, or vice-
versa, referred to as "swapping out" or "swapping in”.

• It may decide to swap out a process when the process
– has been inactive for some time,
– has a low priority,
– is page faulting frequently, or
– is taking up a large amount of memory,
in order to free up main memory for other processes. The process is swapped
back in later, when more memory is available, or when the process has been
unblocked and is no longer waiting for a resource.

Question: Locate where the medium-term scheduler works.

Contd..

Threads:
A thread is a subset of a process and is also known as a lightweight
process. A process can have more than one thread, and these threads are
managed independently by the scheduler. All the threads within one process
are related to each other: they share common information such as the data
segment, code segment, and open files with their peer threads, but each
thread has its own registers, stack, and program counter.

Process vs Thread
• A process is an instance of a program that is being executed; a thread
is a segment of a process (a lightweight process) that is managed by the
scheduler independently.
• Processes are independent of each other and do not share memory or other
resources; threads are interdependent and share memory.
• Each process is treated as a separate process by the operating system;
the operating system treats all the user-level threads of a process as a
single process.
• If one process gets blocked by the operating system, other processes can
continue execution; if any user-level thread gets blocked, all of its peer
threads also get blocked, because the OS treats them as a single process.
• Context switching between two processes takes more time because
processes are heavyweight; context switching between threads is fast
because threads are very lightweight.
• The data segment and code segment of each process are independent of
other processes; threads share the data segment and code segment with
their peer threads.
• The operating system takes more time to terminate a process; a thread
can be terminated in very little time.
• Creating a new process takes more time, as each new process acquires all
its resources; a thread needs less time for creation.
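The contrast in the table can be demonstrated with POSIX threads: threads created with pthread_create share the process's global data, whereas processes created with fork each get their own copy. A small illustrative example (compile with -pthread):

```c
#include <stdio.h>
#include <pthread.h>

int shared_counter = 0;                       /* data segment: shared by all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    pthread_mutex_lock(&lock);
    shared_counter += 10;                     /* both threads update the same variable */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);  /* threads share code, data, open files */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("shared_counter = %d\n", shared_counter);   /* prints 20 */
    return 0;
}
```

If the two workers were instead created with fork(), each child would increment its own copy of shared_counter and the parent's copy would remain 0.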
Operation on Processes
• Process Creation
• Parent process creates children processes, which, in turn
create other processes, forming a tree of processes
– using the create-process system call.
• Resource sharing
– Parent and children share all resources.
– Children share subset of parent’s resources.
– Parent and child share no resources.
• Execution
– Parent and children execute concurrently.
– Parent waits until children terminate.
Process Creation (Cont.)
• Address space
– Child is duplicate of parent(same program and data as
parent).
– Child has a new program loaded into it.
• UNIX examples
– fork() system call creates a new process that is a duplicate of the
calling process.
– exec() system call is used after a fork() to replace the process's
memory space with a new program.

fork() is used to create a new process that is an exact duplicate of the calling
process, while exec() is used to replace the current process image with another
one, typically after a fork() system call.
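A minimal UNIX sketch of this pattern: the parent calls fork(), the child replaces its image with a new program via exec(), and the parent waits for the child to terminate. The program run here ("ls -l") is just an example choice.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* duplicate the calling process */

    if (pid < 0) {                      /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {              /* child: load a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        exit(1);
    } else {                            /* parent: wait for the child */
        wait(NULL);
        printf("Child complete\n");
    }
    return 0;
}
```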
A Tree of Processes On A Typical UNIX System



Process Termination
• Process executes its last statement and asks the operating system to
delete it (exit).
– Output data from child to parent (via wait).
– Process resources are deallocated by operating system.
• Parent may terminate execution of children processes
(abort).
– Child has exceeded allocated resources.
– Task assigned to child is no longer required.
– Parent is exiting.
• Operating system does not allow child to continue if
its parent terminates.
• Cascading termination.
CPU Scheduling
• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
Objectives
• To introduce CPU scheduling, which is the
basis for multiprogrammed operating
systems
• To describe various CPU-scheduling
algorithms
• To discuss evaluation criteria for selecting a
CPU-scheduling algorithm for a particular
system
• To examine the scheduling algorithms of
several operating systems
CPU Scheduling Basic Concepts
• Maximum CPU utilization is obtained with multiprogramming.
• CPU–I/O Burst Cycle – process execution consists of a cycle of CPU
execution and I/O wait.
• A CPU burst is followed by an I/O burst.
• CPU burst distribution is of main concern.
Histogram of CPU-burst Times
CPU Scheduler / Short-term Scheduler
• Short-term scheduler selects from among the processes in the ready
queue, and allocates the CPU to one of them
• Queue may be ordered in various ways
• CPU scheduling decisions may take place when a process:

1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates

• Scheduling under 1 and 4 is nonpreemptive or cooperative
• All other scheduling is preemptive
• Consider access to shared data (incurs more cost)
• Consider preemption while in kernel mode (affects the design of the OS)
• Consider interrupts occurring during crucial OS activities
Preemptive Scheduling is a type of CPU scheduling in which the
resources (CPU Cycle) have been allocated to a process for a
limited amount of time. In this type of scheduling, a process can
be interrupted when it is being executed.

Non-Preemptive Scheduling is one in which once the resources


(CPU Cycle) have been allocated to a process, the process holds it
until it completes its burst time or switches to the 'wait' state.

Dispatcher
• Dispatcher module gives control of the CPU to the
process selected by the short-term scheduler; this
involves:
– switching context
– switching to user mode
– jumping to the proper location in the user
program to restart that program
• Dispatch latency – time it takes for the dispatcher
to stop one process and start another process.
Scheduling Criteria
•CPU utilization – keep the CPU as busy as possible
•Throughput – number of processes that complete their
execution per unit time.
•Turnaround time – amount of time to execute a
particular process.(Time of submission of a process to
the time of completion)
Turnaround time = Completion time - arrival time
or
Turnaround time = Waiting time + burst time

•Waiting time – amount of time a process spends


waiting in the ready queue. Waiting time is the sum of
the periods spent in the ready queue.
Waiting time = Turn around time – burst time
Response time – the amount of time it takes for the CPU to respond to a
request made by a process. It is the duration between the arrival of a
process and the first time it runs.

Response time = time the process first gets the CPU - arrival time

• Waiting Time
A scheduling algorithm does not affect the time required to
complete the process once it starts execution. It only
affects the waiting time of a process i.e. time spent by a process
waiting in the ready queue.

Waiting Time = Turnaround Time - Burst Time.
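A tiny worked example of the formulas above, with made-up figures for one process (arrival 0, burst 24, first scheduled at time 4, completed at time 30):

```c
#include <stdio.h>

int main(void)
{
    /* Example figures (illustrative only) */
    int arrival = 0, burst = 24, first_run = 4, completion = 30;

    int turnaround = completion - arrival;   /* completion time - arrival time   */
    int waiting    = turnaround - burst;     /* turnaround time - burst time     */
    int response   = first_run - arrival;    /* first time on CPU - arrival time */

    printf("turnaround = %d, waiting = %d, response = %d\n",
           turnaround, waiting, response);   /* prints 30, 6, 4 */
    return 0;
}
```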

Scheduling Algorithm Optimization Criteria

• Max CPU utilization


• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
Scheduling Algorithms

• FCFS Scheduling
• SJF Scheduling
– Non-Preemptive
– Preemptive
• Priority Scheduling
• Round Robin
• Multilevel Queue Scheduling

First- Come, First-Served (FCFS) Scheduling

Process Burst Time


P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 ,
P3
The Gantt Chart for the schedule is:
P1 P2 P3
0 24 27 30

• Waiting time for P1 = 0; P2 = 24; P3 = 27


• Average waiting time: (0 + 24 + 27)/3 = 17
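The waiting times above can be reproduced with a few lines of C: under FCFS with all processes arriving at time 0, each process waits for the sum of the bursts ahead of it.

```c
#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};            /* P1, P2, P3 in arrival order */
    int n = 3, elapsed = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, elapsed);  /* 0, 24, 27 */
        total_wait += elapsed;
        elapsed += burst[i];             /* next process starts when this one ends */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);  /* 17.00 */
    return 0;
}
```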
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
• The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30

• Waiting time for P1 = 6; P2 = 0; P3 = 3


• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case
• Convoy effect - short process behind long process
– Consider one CPU-bound and many I/O-bound processes
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length
of its next CPU burst
– Use these lengths to schedule the process
with the shortest time
• SJF is optimal – gives minimum average
waiting time for a given set of processes
– The difficulty is knowing the length of the
next CPU request
– Could ask the user
Example of SJF

Process   Arrival Time   Burst Time


P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3

• SJF scheduling chart


P4 P1 P3 P2
0 3 9 16 24

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
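The Gantt chart above orders the processes purely by burst length, i.e. it treats every process as already available at time 0. Under that assumption the waiting times can be reproduced with a short loop over the burst-sorted order:

```c
#include <stdio.h>

int main(void)
{
    int burst[] = {6, 8, 7, 3};              /* P1..P4, all assumed available at t = 0 */
    int order[] = {3, 0, 2, 1};              /* indices sorted by burst: P4, P1, P3, P2 */
    int n = 4, elapsed = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        int p = order[i];
        printf("P%d waits %d\n", p + 1, elapsed);  /* P4:0, P1:3, P3:9, P2:16 */
        total_wait += elapsed;
        elapsed += burst[p];
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);  /* 7.00 */
    return 0;
}
```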


Determining Length of Next CPU Burst
• Can only estimate the length – should be similar to
the previous one
– Then pick process with shortest predicted next CPU burst

• Can be done by using the length of previous CPU


bursts, using exponential averaging
1. t_n = actual length of the nth CPU burst
2. τ_(n+1) = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_(n+1) = α·t_n + (1 − α)·τ_n

• Commonly, α set to ½
• Preemptive version called shortest-remaining-time-
first
Prediction of the Length of the Next CPU Burst
Examples of Exponential Averaging
• α = 0
– τ_(n+1) = τ_n
– Recent history does not count
• α = 1
– τ_(n+1) = t_n
– Only the actual last CPU burst counts
• If we expand the formula, we get:
τ_(n+1) = α·t_n + (1 − α)·α·t_(n−1) + … + (1 − α)^j·α·t_(n−j) + … + (1 − α)^(n+1)·τ_0
• Since both α and (1 − α) are less than or equal to 1, each successive
term has less weight than its predecessor
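The recurrence τ_(n+1) = α·t_n + (1 − α)·τ_n is a single line of code. A sketch with α = 1/2; the initial guess τ_0 = 10 and the burst history used here are example values only:

```c
#include <stdio.h>

int main(void)
{
    double alpha = 0.5;                     /* weighting factor, 0 <= alpha <= 1 */
    double tau   = 10.0;                    /* tau_0: assumed initial guess      */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};   /* observed CPU bursts t_n     */
    int n = sizeof bursts / sizeof bursts[0];

    for (int i = 0; i < n; i++) {
        tau = alpha * bursts[i] + (1.0 - alpha) * tau;   /* tau_(n+1) */
        printf("after burst %d (t = %.0f): next prediction = %.2f\n",
               i, bursts[i], tau);
    }
    return 0;
}
```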
Example of Shortest-remaining-time-first

• Now we add the concepts of varying arrival times


and preemption to the analysis
Process   Arrival Time   Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
• Preemptive SJF Gantt Chart
P1 P2 P4 P1 P3
0 1 5 10 17 26

• Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4
= 26/4 = 6.5 msec
Round Robin (RR)
• Each process gets a small unit of CPU time (time quantum),
usually 10-100 milliseconds.

• After this time has elapsed, the process is preempted and added
to the end of the ready queue.

• If there are n processes in the ready queue and the time


quantum is q, then each process gets 1/n of the CPU time in
chunks of at most q time units at once.

• No process waits more than (n-1)q time units.

Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
• The Gantt chart is:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

• Typically, higher average turnaround than SJF, but better


response
• q should be large compared to context switch time
• q usually 10ms to 100ms, context switch < 10 usec
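A short simulation of the q = 4 schedule above, assuming all three processes arrive at time 0 (as the Gantt chart does); it prints each time slice and the resulting waiting times.

```c
#include <stdio.h>

int main(void)
{
    int burst[]     = {24, 3, 3};        /* P1, P2, P3 */
    int remaining[] = {24, 3, 3};
    int n = 3, q = 4, t = 0, done = 0;

    while (done < n) {                   /* cycle through the ready queue */
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            printf("t=%2d: P%d runs for %d\n", t, i + 1, slice);
            t += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                done++;
                /* waiting time = completion - arrival(0) - burst */
                printf("P%d finishes at %d, waiting time = %d\n",
                       i + 1, t, t - burst[i]);
            }
        }
    }
    return 0;
}
```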
Example of RR with Time Quantum = 20
Process Burst Time
P1 53
P2 17
P3 68
P4 24

Gantt chart:

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

Average waiting time

= [(57 + 24) + 20 + (37 + 40 + 17) + (57 + 40)] / 4
= [81 + 20 + 94 + 97] / 4
= 292 / 4
= 73
Time Quantum and Context Switch Time
Turnaround Time Varies With The Time Quantum

80% of CPU bursts should


be shorter than q
Multilevel Queue
• Ready queue is partitioned into separate queues, eg:
– foreground (interactive)
– background (batch)
• Processes are permanently assigned to one queue
• Each queue has its own scheduling algorithm:
– foreground – RR
– background – FCFS
• Scheduling must be done between the queues:
– Fixed priority scheduling; (i.e., serve all from foreground
then from background). Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU
time which it can schedule amongst its processes; i.e.,
80% to foreground in RR
– 20% to background in FCFS
Multilevel Queue Scheduling
Multilevel Feedback Queue
• In a multi-level queue-scheduling algorithm, processes
are permanently assigned to a queue.

• Idea: Allow processes to move among various queues.

• Examples
– If a process in a queue dedicated to interactive processes
consumes too much CPU time, it will be moved to a (lower-
priority) queue.
– A process that waits too long in a lower-priority queue may be
moved to a higher-priority queue.

Multilevel Feedback Queue

• A process can move between the various


queues; aging can be implemented this way
• Multilevel-feedback-queue scheduler defined by
the following parameters:
– number of queues
– scheduling algorithms for each queue
– method used to determine when to upgrade a
process
– method used to determine when to demote a
process
– method used to determine which queue a process
will enter when that process needs service
Example of Multilevel Feedback Queue
• Three queues:
– Q0 – RR with time quantum 8
milliseconds
– Q1 – RR time quantum 16
milliseconds
– Q2 – FCFS
• Scheduling
1.The scheduler first executes all processes in
Queue 0.
2.Only when queue 0 is empty will it execute
processes in queue 1.
3.Similarly, processes in queue 2 will only be
executed if queue 0 and 1 are empty.
4.A process that arrives for Q1 will preempt a
process in Q2.
5.A process in Q1 will in turn be preempted by a
process arriving for Q0
• A process entering the ready queue is put in Q0 and given a TQ of 8 ms.
• If it does not finish within this time, it is moved to the tail of
Queue 1. When Q0 is empty, the process at the head of Queue 1 is given a
TQ of 16 ms.
• If it still does not complete, it is preempted and put into Q2.
Processes in Queue 2 run on an FCFS basis, but only when Q0 and Q1 are
empty.
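A rough model of the Q0/Q1/Q2 demotion rule just described, tracking only how much CPU a single job has used at each level; job_t, quantum_for, and run_once are illustrative names, not a real scheduler interface.

```c
#include <stdio.h>

/* Illustrative three-level feedback model: Q0 (q=8), Q1 (q=16), Q2 (FCFS). */
typedef struct { int id; int remaining; int level; } job_t;

static int quantum_for(int level)
{
    if (level == 0) return 8;
    if (level == 1) return 16;
    return -1;                       /* Q2: FCFS, runs to completion */
}

/* Run one scheduling decision for a job; demote it if the quantum expires. */
static void run_once(job_t *j)
{
    int q = quantum_for(j->level);
    int slice = (q < 0 || j->remaining <= q) ? j->remaining : q;

    j->remaining -= slice;
    printf("job %d ran %d units in Q%d, %d left\n",
           j->id, slice, j->level, j->remaining);

    if (j->remaining > 0 && j->level < 2)
        j->level++;                  /* did not finish: move to the tail of the next queue */
}

int main(void)
{
    job_t j = {1, 30, 0};            /* a 30-unit job entering at Q0 */
    while (j.remaining > 0)
        run_once(&j);                /* 8 units in Q0, 16 in Q1, 6 in Q2 */
    return 0;
}
```

For a 30-unit job this prints 8 units at Q0, 16 at Q1, and the remaining 6 at Q2, matching the rule above.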

Exercise: Five batch jobs P1 through P5 arrive for execution at
times indicated. Each process has a total CPU time requirement as
listed below:

Using FCFS, determine the average turnaround and average waiting


times. Also draw the Gantt charts.

Process   Arrival time   CPU burst
P1   2   10
P2   1   2
P3   0   5
P4   4   6
P5   3   4

Exercise: Five batch jobs P1 through P5 arrive for execution at
times indicated. Each process has a total CPU time requirement as
listed below:
Using Preemptive SJF, determine the average turnaround and
average waiting times. Also draw the Gantt charts.

Process   Arrival time   CPU burst
P1   2   10
P2   1   2
P3   0   5
P4   4   6
P5   3   4

More SJF Examples
1. SJF non-preemptive Proc Arrives Burst
P1 0 8
P2 1 4
P3 2 9
P4 3 5
And then preemptive

2. SJF non-preemptive Proc Arrives Burst


P1 1 2
P2 0 7
P3 2 7
P4 5 3
P5 6 1
And then preemptive

More Priority Examples
• Example Proc Arrival Burst Priority
P1 0 6 5
P2 2 2 3
P3 3 3 4
P4 9 3 2
P5 10 1 1

Five batch jobs P1 through P5 arrive for execution at times
indicated. Each process has a total CPU time requirement as listed
below:
Using RR with Q = 3 units, determine the average turnaround and
average waiting times. Also draw the Gantt charts.
Process   Arrival time   CPU burst
P1 2 10
P2 1 2
P3 0 5
P4 4 6
P5 3 4

Multilevel Queue Examples
• ML queue, 2 levels
– RR @ 10 units
– FCFS
– RR gets priority over FCFS
• Proc Arrival Burst Queue
P1 0 12 FCFS
P2 4 12 RR
P3 8 8 FCFS
P4 20 10 RR
• Non-preemptive and preemptive
Example of Multilevel Feedback Queue
• Three queues:
– Q0 – time quantum 8 milliseconds
– Q1 – time quantum 16 milliseconds
– Q2 – FCFS

• Scheduling
– A new job enters queue Q0 which is served FCFS. When it gains
CPU, job receives 8 milliseconds. If it does not finish in 8
milliseconds, job is moved to queue Q1.

– At Q1 job is again served FCFS and receives 16 additional


milliseconds. If it still does not complete, it is preempted and
moved to queue Q2.

Multilevel Feedback Queue Example
• Three levels
– RR at 8 units
– RR at 16 units
– FCFS, active
• Proc Arrival Burst
P1 0 32
P2 10 12
P3 30 10
• Non-preemptive and preemptive
Operating System Examples

• Windows XP scheduling
• Linux scheduling

Windows Scheduling
• Windows uses priority-based preemptive scheduling
• Highest-priority thread runs next
• Dispatcher is scheduler
• Thread runs until
– (1) blocks,
– (2) uses time slice,
– (3) preempted by higher-priority thread
• Real-time threads can preempt non-real-time
• 32-level priority scheme
• Variable class is 1-15, real-time class is 16-31
• Priority 0 is memory-management thread
• Queue for each priority
• If no run-able thread, runs idle thread

Windows Priority Classes
• Win32 API identifies several priority classes to which a process can belong:
– REALTIME_PRIORITY_CLASS, HIGH_PRIORITY_CLASS,
ABOVE_NORMAL_PRIORITY_CLASS, NORMAL_PRIORITY_CLASS,
BELOW_NORMAL_PRIORITY_CLASS, IDLE_PRIORITY_CLASS
– All are variable except REALTIME
• A thread within a given priority class has a relative priority:
– TIME_CRITICAL, HIGHEST, ABOVE_NORMAL, NORMAL,
BELOW_NORMAL, LOWEST, IDLE
• Priority class and relative priority combine to give numeric priority
• Base priority is NORMAL within the class
• If quantum expires, priority lowered, but never below base
• If wait occurs, priority boosted depending on what was waited for
• Foreground window given 3x priority boost

Windows XP Priorities

Linux Scheduling
• Constant order O(1) scheduling time
• Preemptive, priority based
• Two priority ranges: time-sharing and real-time
• Real-time range from 0 to 99 and nice value from 100 to 140
• Map into global priority with numerically lower values
indicating higher priority
• Higher priority gets larger q
• Task run-able as long as time left in time slice (active)
• If no time left (expired), not run-able until all other tasks use
their slices
• The kernel maintains a list of all runnable tasks in a runqueue
• Because of its support for SMP, each processor maintains its
own queue and schedules itself independently.
• Each runqueue contains two priority arrays (active and expired):
– Tasks are indexed by priority
– When the active array is empty (no more active tasks), the two
arrays are exchanged
Priorities and Time-slice length

List of Tasks Indexed
According to Priorities

Linux Scheduling (Cont.)
• Real-time scheduling according to POSIX.1b
– Real-time tasks have static priorities
• All other tasks have dynamic priorities that are based on
nice value plus or minus 5
– Interactivity of task determines plus or minus
• More interactive -> more minus
(A task’s interactivity is determined by how long it
has been sleeping while waiting for I/O.)
– Priority is recalculated when a task expires
– Exchanging the two arrays implements these adjusted
priorities

Exam Questions
1. Explain various steps involved in change of a Process State.
2. Define Process States.
3. What is a process? Write the difference between process and
program.
4. Define Process and Program.
5. Compare & Contrast Process & Thread.
6. What is the difference between a "thread" and a "process"?
7. Compare the difference between a thread and a process. List
the system calls related to threads.
8. Define a thread. What are the uses of thread?
9. What are the reasons for Process Suspension?
10. What is process control block? Explain its structure.
Exam Questions
11. Describe the process state transition diagram. Explain Process
Control Block.
12. What are the types of CPU Scheduling?
13. What are preemptive and non-preemptive scheduling policies?
14. Compare preemptive and non-preemptive CPU scheduling.
15. What is CPU scheduler?
16. What is throughput, turnaround time, waiting time and
response time?
17. Explain any three CPU Scheduling algorithms.
Exam Questions

18. Assume the following are the jobs to execute with one
processor:
Job   Burst Time   Priority
1   10   3
2   1   1
3   2   3
4   1   4
5   5   2
The jobs are assumed to have arrived in the order 1, 2, 3, 4, 5.
a. Give Gantt–Chart illustrating the execution of these jobs
using FCFS, RR (quantum =1), Shortest Process Next, Shortest
Remaining Time.
b. What is the turn–around time, waiting time of each job for
each of the above scheduling algorithms?
