Module 2
Process Management
1.1 Process Concept
1.1.1 The Process
1.1.2 Process State
1.1.3 Process Control Block
1.2 Process Scheduling
1.2.1 Scheduling Queues
1.2.2 Schedulers
1.2.3 Context Switch
1.3 Operations on Processes
1.3.1 Process Creation
1.3.2 Process Termination
1.4 Inter-process Communication
1.4.1 Shared-Memory Systems
1.4.2 Message-Passing Systems
1.4.2.1 Naming
1.4.2.2 Synchronization
1.4.2.3 Buffering
Operating Systems – BCS303
PROCESS MANAGEMENT
1.2.2 Schedulers
• Three types of schedulers:
1) Long-term scheduler
2) Short-term scheduler and
3) Medium-term scheduler

Long-Term Scheduler | Short-Term Scheduler
Also called job scheduler. | Also called CPU scheduler.
Selects which processes should be brought into the ready-queue. | Selects which process should be executed next and allocates the CPU.
Needs to be invoked only when a process leaves the system, and therefore executes much less frequently. | Needs to be invoked whenever a new process must be selected for the CPU, and therefore executes much more frequently.
May be slow; minutes may separate the creation of one new process and the next. | Must be fast; a process may execute for only a few milliseconds.
Controls the degree of multiprogramming. |
Figure 1.29 Creating a separate process using the UNIX fork() system-call
1.4.2.1 Naming
• Processes that want to communicate must have a way to refer to each other. They can use either
direct or indirect communication.
1.4.2.2 Synchronization
• Message passing may be either blocking or non-blocking (also known as synchronous and
asynchronous).
1.4.2.3 Buffering
• Messages exchanged by processes reside in a temporary queue.
• Three ways to implement a queue:
1) Zero Capacity
The queue-length is zero.
The link can't have any messages waiting in it.
The sender must block until the recipient receives the message.
2) Bounded Capacity
The queue-length is finite.
If the queue is not full, the new message is placed in the queue.
The link capacity is finite.
If the link is full, the sender must block until space is available in the queue.
3) Unbounded Capacity
The queue-length is potentially infinite.
Any number of messages can wait in the queue.
The sender never blocks.
2.1.1 Motivation
1) The software-packages that run on modern PCs are multithreaded.
An application is implemented as a separate process with several threads of control.
For ex: A word processor may have
→ first thread for displaying graphics
→ second thread for responding to keystrokes and
→ third thread for performing grammar checking.
2) In some situations, a single application may be required to perform several similar tasks.
For ex: A web-server may create a separate thread for each client request.
This allows the server to service several concurrent requests.
3) RPC servers are multithreaded.
When a server receives a message, it services the message using a separate thread.
This allows the server to service several concurrent requests.
4) Most OS kernels are multithreaded:
Several threads operate in the kernel, and each thread performs a specific task, such as
→ managing devices or
→ interrupt handling.
2.1.2 Benefits
1) Responsiveness
• A program may be allowed to continue running even if part of it is blocked,
thus increasing responsiveness to the user.
2) Resource Sharing
• By default, threads share the memory (and resources) of the process to which they belong.
Thus, an application is allowed to have several different threads of activity within the same
address-space.
3) Economy
• Allocating memory and resources for process-creation is costly.
Thus, it is more economical to create and context-switch threads.
4) Utilization of Multiprocessor Architectures
• In a multiprocessor architecture, threads may be running in parallel on different processors.
Thus, parallelism will be increased.
2.3.1 Pthreads
• This is a POSIX standard API for thread creation and synchronization.
• This is a specification for thread-behavior, not an implementation.
• OS designers may implement the specification in any way they wish.
• Commonly used in: UNIX and Solaris.
2.5.4 Dispatcher
• It gives control of the CPU to the process selected by the short-term scheduler.
• Its function involves:
1) Switching context
2) Switching to user mode &
3) Jumping to the proper location in the user program to restart that program.
• It should be as fast as possible, since it is invoked during every process switch.
• Dispatch latency is the time taken by the dispatcher to
→ stop one process and
→ start another running.
• Suppose that the processes arrive in the order P2, P3, P1.
• The Gantt chart for the schedule is as follows:
Figure 2.9 How turnaround time varies with the time quantum
• There must be scheduling among the queues, which is commonly implemented as fixed-priority
preemptive scheduling.
For example, the foreground queue may have absolute priority over the background queue.
• Time slice: each queue gets a certain amount of CPU time, which it can schedule amongst its
processes. For example:
→ 80% to the foreground queue, scheduled by RR, and
→ 20% to the background queue, scheduled by FCFS.
Exercise Problems
1) Consider the following set of processes, with the length of the CPU burst given in milliseconds:
Process Arrival Time Burst Time Priority
P1 0 10 3
P2 0 1 1
P3 3 2 3
P4 5 1 4
P5 10 5 2
(i) Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF,
non-preemptive priority, and RR (quantum = 2) scheduling.
(ii) What is the turnaround time of each process for each scheduling algorithm in (i)?
(iii) What is the waiting time of each process in (i)?
Solution:
(i) FCFS:
SJF (preemptive):
3) Consider the following set of processes with CPU burst times (in msec):
i) Draw Gantt charts illustrating the execution of the above processes using SRTF and non-preemptive SJF.
ii) Find the turnaround time for each process under SRTF and SJF. Hence show that SRTF is faster than SJF.
Solution:
Conclusion:
Since the average turnaround time of SRTF (7.75) is less than that of SJF (9.25), SRTF is faster than SJF.
Draw Gantt charts and calculate the waiting and turnaround times using the FCFS, SJF, and RR
(time quantum = 10) scheduling algorithms.
Solution:
(i) FCFS:
SJF (preemptive):
Compute the average turnaround time and average waiting time using
i) FCFS
ii) Preemptive SJF and
iii) RR (quantum = 4).
Solution:
(i) FCFS: