
OPERATING SYSTEMS MODULE 2


PROCESS CONCEPT
A question that arises in discussing operating systems involves what to call all the CPU activities.
A batch system executes jobs, whereas a time-shared system has user programs, or tasks. Even
on a single-user system such as Microsoft Windows, a user may be able to run several programs
at one time: a word processor, a web browser, and an e-mail package. Even if the user can
execute only one program at a time, the operating system may need to support its own internal
programmed activities, such as memory management. In many respects, all these activities are
similar, so we call all of them processes.
***What is a Process?
A process is a program in execution.
A process must progress in a sequential fashion. It has multiple parts: the text section, program
counter, stack, data section and heap.
PROCESS IN MEMORY
Explain the process in memory
A process includes:
1. Program Counter: indicates the address of the next instruction to be executed for this
process.
2. Registers: the contents of the processor's registers.
3. Process Stack: contains temporary data (function parameters, return addresses, local variables).
4. Data Section: contains global variables.
5. Heap: memory that is dynamically allocated during process runtime.
A program by itself is not a process.
1) A process is an active-entity.
2) A program is a passive-entity such as an executable-file stored on disk.
A program becomes a process when an executable-file is loaded into memory. If you run many
copies of a program, each is a separate process. The text-sections are equivalent, but the data-
sections vary.


Process in Memory
PROCESS STATE
*****Explain the process state with suitable transition diagram
As a process executes, it changes state. Each process may be in one of the following states
 New: The process is being created.
 Running: Instructions are being executed.
 Waiting: The process is waiting for some event to occur (such as I/O completions).
 Ready: The process is waiting to be assigned to a processor.
 Terminated: The process has finished execution.
Only one process can be running on any processor at any instant.

Transition diagram of process state


PROCESS CONTROL BLOCK


******What is process control block (PCB)? Explain with neat diagram
Information associated with each process in an operating system is represented by a process control
block (PCB), also called a task control block.
The PCB contains the following information about a process:
Process State: The state may be new, ready, running, waiting, halted, and so on.
Program Counter: This indicates the address of the next instruction to be executed for the
process.
CPU Registers: These include
→ accumulators
→ index registers
→ stack pointers and
→ general-purpose registers
CPU Scheduling Information
 This includes
→ priority of process
→ pointers to scheduling-queues and
→ scheduling-parameters.
Memory-Management Information
 This includes
→ value of base- and limit-registers and
→ value of page-tables (or segment-tables).
Accounting Information
 This includes
→ amount of CPU time
→ time-limit and
→ process-number.
I/O Status Information
 This includes the list of I/O devices allocated to the process and the list of open files.


PCB structure

PROCESS SCHEDULING
The main objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization.
The main objective of time-sharing is to switch the CPU between processes so frequently that
users can interact with each program while it is running.
To meet these two objectives, a process scheduler is used to select an available process for
program execution on the CPU.

SCHEDULING QUEUES
There are three types of scheduling-queues:
JOB QUEUE
 This queue consists of all processes in the system.
 As processes enter the system, they are put into a job-queue.

READY QUEUE
 This queue consists of the processes that are residing in main-memory and are ready and
waiting to execute.
 This queue is generally stored as a linked list.
 A ready-queue header contains pointers to the first and final PCBs in the list.
 Each PCB has a pointer to the next PCB in the ready-queue.


DEVICE QUEUE
 This queue consists of the processes that are waiting for an I/O device.
 Each device has its own device-queue.

The ready-queue and various I/O device-queues

REPRESENTATION OF PROCESS SCHEDULING


Briefly explain process scheduling queues with neat block diagram


Each rectangular box represents a queue. Two types of queues are present: the ready queue and a
set of device queues. The circles represent the resources that serve the queues, and the arrows
indicate the flow of processes in the system.
A new process is initially put in the ready queue. It waits there until it is selected for execution,
or dispatched. Once the process is allocated the CPU and is executing, one of the following
events could occur:
1) The process could issue an I/O request and then be placed in an I/O queue.
2) The process could create a new sub-process and wait for the sub-process's termination.
3) The process could be interrupted and put back in the ready-queue.
In the first two cases, the process eventually switches from the waiting state to the ready state
and is then put back in the ready queue. A process continues this cycle until it terminates, at
which time it is removed from all queues and has its PCB and resources de-allocated.
SCHEDULERS
A process migrates among the various scheduling queues throughout its lifetime. The operating
system must select, for scheduling purposes, processes from these queues in some fashion. This
selection process is carried out by the appropriate scheduler.

***What are schedulers? Explain any one type of scheduler


A scheduler is system software that selects processes from the scheduling queues, in some
fashion, for scheduling purposes.
There are three types of schedulers:
1) Long-term scheduler

2) Short-term scheduler and
3) Medium-term scheduler
(Explain any one in detail.)
LONG-TERM SCHEDULER
 Also called job scheduler.
 Selects which processes should be brought into the ready-queue.

 Needs to be invoked only when a process leaves the system, and therefore executes
much less frequently.
 Controls the degree of multiprogramming.


What is degree of multi-programming?


The number of processes in main memory is called the degree of multi-programming.
If the degree of multiprogramming is stable, then the average rate of process creation must be
equal to the average departure rate of processes leaving the system.
Processes can be described as either:
1) I/O-bound Process
 Spends more time doing I/O than doing computations; has many short CPU bursts
(CPU execution periods).
2) CPU-bound Process
 Spends more time doing computations than doing I/O; has a few very long CPU bursts.
Why long-term scheduler should select a good process mix of I/O-bound and CPU-bound
processes? OR
Why it is important for the scheduler to distinguish I/O bound programs from CPU bound
programs
The reason is:
1) If all processes are I/O-bound, then
i) the ready-queue will almost always be empty, and
ii) the short-term scheduler will have little work to do.
2) If all processes are CPU-bound, then
i) the I/O waiting-queues will almost always be empty (devices will go unused), and
ii) the system will be unbalanced.
SHORT-TERM SCHEDULER
 Also called CPU scheduler.
 Selects which process should be executed next and allocates CPU.

 Needs to be invoked whenever a new process must be selected for the CPU, and
therefore executes much more frequently.
 Must be fast, a process may execute for only a few milliseconds.


****Differentiate between Long-term scheduler and Short-term scheduler


Long-Term Scheduler | Short-Term Scheduler
Also called job scheduler. | Also called CPU scheduler.
Selects which processes should be brought into the ready-queue. | Selects which process should be executed next and allocates the CPU.
Needs to be invoked only when a process leaves the system, and therefore executes much less frequently. | Needs to be invoked whenever a new process must be selected, and therefore executes much more frequently.
May be slow; minutes may separate the creation of one new process and the next. | Must be fast; a process may execute for only a few milliseconds.
Controls the degree of multiprogramming. | (no direct counterpart)

MEDIUM-TERM SCHEDULER
 Some time-sharing systems have a medium-term scheduler.
 The scheduler removes processes from memory and thus reduces the degree of
multiprogramming.
 Later, the process can be reintroduced into memory, and its execution can be
continued where it left off. This scheme is called swapping.
 The process is swapped out, and later swapped in, by the medium-term scheduler.
 Swapping may be necessary to improve the process-mix or to free up memory.

Medium-term scheduling in Queuing diagram


CONTEXT SWITCH
***Define context switch? What is the need for context switch?
Context-switch means saving the state of the old process and switching the CPU to another
process.
In general-purpose systems, interrupts cause the OS to take the CPU away from its current task
and run a kernel routine. When an interrupt occurs, the system needs to save the current context of
the process running on the CPU so that it can restore that context when its processing is done,
essentially suspending the process and then resuming it.
The context of a process is represented in the PCB of the process; it includes
 value of CPU registers
 process-state and
 memory-management information.
Disadvantages:
 Context-switch time is pure overhead, because the system does no useful
work while switching.
 Context-switch times are highly dependent on hardware support
OPERATIONS ON PROCESSES
1) Process Creation and
2) Process Termination
Process Creation
• A process may create a new process via a create-process system-call.
• The creating process is called the parent-process. The new process created by the parent is called
the child-process (sub-process).
• The OS identifies processes by a pid (process identifier), which is typically an integer.
• A process needs following resources to accomplish the task:
→ CPU time
→ memory and
→ I/O devices.
• Child-process may
→ get resources directly from the OS or

→ get a subset of the resources of the parent-process, which prevents any process from
overloading the system.
• Two options exist when a process creates a new process:
1) The parent & the children execute concurrently.
2) The parent waits until all the children have terminated.
• Two options exist in terms of the address-space of the new process:
1) The child-process is a duplicate of the parent-process (it has the same
program and data as the parent).
2) The child-process has a new program loaded into it.
PROCESS CREATION IN UNIX
In UNIX, each process is identified by its process identifier (pid), which is a unique integer. A
new process is created by the fork() system-call. The new process consists of a copy of the
address-space of the original process.
Both the parent and the child continue execution, with one difference:
1) The return value of fork() is zero for the new (child) process.
2) The return value of fork() is the nonzero pid of the child for the parent-process.
Typically, the exec() system-call is used after a fork() by one of the two processes, to
replace the process's memory-space with a new program. The parent can issue a
wait() system-call to move itself off the ready-queue until the child terminates.


Creating a separate process using the UNIX fork() system-call

Process creation using the fork() system-call
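The code in the figure above is not reproduced in this copy; a minimal sketch of the same program, assuming a POSIX system, is:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();                        /* create a child process */

        if (pid < 0) {                             /* error occurred */
            fprintf(stderr, "fork failed\n");
            return 1;
        } else if (pid == 0) {                     /* child: fork() returned 0 */
            execlp("/bin/ls", "ls", (char *)NULL); /* load a new program into the child */
        } else {                                   /* parent: fork() returned the child's pid */
            wait(NULL);                            /* off the ready-queue until the child exits */
            printf("Child complete\n");
        }
        return 0;
    }

The parent's wait(NULL) call illustrates the last sentence above: the parent leaves the ready-queue until the child terminates.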


PROCESS TERMINATION
A process terminates when it finishes executing its last statement. Then, the OS deletes the
process using the exit() system-call and de-allocates all the resources of the process. The
resources include memory, open files and I/O buffers.
Process termination can also occur in the following cases:
A process can cause the termination of another process via a system-call such as
TerminateProcess() (in Windows).
 Users could arbitrarily kill processes.
A parent terminates the execution of children for following reasons:
 The child has exceeded its usage of some resources.
 The task assigned to the child is no longer required.
 The parent is exiting, and the OS does not allow a child to continue.
In some systems, if a process terminates, then all its children must also be terminated.
This phenomenon is referred to as cascading termination.

INTER PROCESS COMMUNICATION (IPC)


Processes executing concurrently in the OS may be 1) Independent processes or 2) Co-operating
processes.
A process is independent if
i) The process cannot affect or be affected by the other processes.
ii) The process does not share data with other processes.
A process is co-operating if
i) The process can affect or be affected by the other processes.
ii) The process shares data with other processes.
Advantages of process co-operation (Cooperative process)
1) Information Sharing
 Since many users may be interested in same piece of information (ex: shared file).
2) Computation Speedup
 We must break the task into subtasks.
 Each subtask should be executed in parallel with the other subtasks.


 The speed can be improved only if computer has multiple processing elements such as
CPUs or I/O channels.
3) Modularity
 Divide the system-functions into separate processes or threads.
4) Convenience
 An individual user may work on many tasks at the same time.
 For example, a user may be editing, printing, and compiling in parallel.
What is Inter-process communication? Briefly explain its types
Inter-process communication (IPC) is a set of programming interfaces that allow a programmer
to coordinate activities among different program processes that can run concurrently in
an operating system.
Cooperating processes require an IPC mechanism that will allow them to exchange data and
information.
Two basic models of IPC:
1. Shared-memory and
2. Message passing.
SHARED MEMORY SYSTEMS

Shared memory system


 Communicating processes must establish a region of shared-memory.
 The shared-memory region resides in the address-space of the process creating it.
 Other processes that wish to communicate must attach the shared-memory region to their
own address-space.


 The processes can then exchange information by reading and writing data in the shared-
memory. The processes are also responsible for ensuring that they are not writing
to the same location simultaneously
Let us illustrate co-operating processes with the
PRODUCER-CONSUMER PROBLEM
A producer-process produces information that is consumed by a consumer-process.
Examples: client-server, compiler-assembler, loader.
To allow producer and consumer to run concurrently, we need a buffer of items that can be filled
by the producer and emptied by the consumer. The buffer resides in a region of memory shared by
producer and consumer, and the two must be synchronized. Two types of buffer can be used:
1. Unbounded-buffer: places no practical limit on the size of the buffer.
2. Bounded-buffer: assumes a fixed buffer size.
Advantage of shared memory: it allows maximum speed and convenience of communication.
Explain the implementation of producer-consumer processes using bounded buffer in
shared memory systems.
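The code figure for this question is missing from this copy. A minimal sketch of the textbook scheme follows, assuming a circular buffer of BUFFER_SIZE slots and an int item type chosen only for illustration; such a bounded buffer can hold at most BUFFER_SIZE - 1 items:

    #include <stdbool.h>

    #define BUFFER_SIZE 10

    typedef int item;              /* the buffered item type (illustrative) */

    item buffer[BUFFER_SIZE];      /* shared circular buffer */
    int in = 0;                    /* next free slot, advanced by the producer */
    int out = 0;                   /* next full slot, advanced by the consumer */

    void producer(void)
    {
        item next_produced = 0;
        while (true) {
            next_produced++;                         /* produce an item */
            while (((in + 1) % BUFFER_SIZE) == out)
                ;                                    /* buffer full: do nothing */
            buffer[in] = next_produced;
            in = (in + 1) % BUFFER_SIZE;
        }
    }

    void consumer(void)
    {
        item next_consumed;
        while (true) {
            while (in == out)
                ;                                    /* buffer empty: do nothing */
            next_consumed = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            (void)next_consumed;                     /* consume the item */
        }
    }

Here buffer, in and out all live in the shared-memory region: the empty condition is in == out, and the full condition is (in + 1) % BUFFER_SIZE == out.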


MESSAGE-PASSING SYSTEMS

Discuss the methods to implement message passing IPC in detail.


Message passing systems allow processes to communicate and to synchronize their actions
without sharing the same address-space. For example, a chat program used on the WWW.
Messages can be of two types: 1) fixed size or 2) variable size.
If fixed-sized messages are used, the system-level implementation is simple, but the
programming task becomes more difficult.
If variable-sized messages are used, the system-level implementation is more complex, but the
programming task becomes simpler.
A communication-link must exist between processes to communicate. The three methods for
implementing a link are:
1) Direct or indirect communication,
2) Synchronous or asynchronous communication, and
3) Automatic or explicit buffering.
(Explain any one in detail.)
IPC in message passing system provides two operations:
1) Send (P, message): Send a message to process P.
2) Receive (Q, message): Receive a message from process Q.
Advantages:
1) Useful for exchanging smaller amounts of data.
2) Easier to implement.
3) Useful in a distributed environment.
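As a concrete point of reference (an addition to these notes, not part of the original), POSIX message queues are one real implementation of these send/receive operations. On Linux the program below links with -lrt, and the queue name /demo_mq is a hypothetical name chosen for this sketch:

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* create (or open) a message queue with default attributes */
        mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0644, NULL);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        const char *msg = "hello";
        mq_send(mq, msg, strlen(msg) + 1, 0);       /* send(P, message) */

        char buf[8192];                             /* must be >= the queue's mq_msgsize */
        mq_receive(mq, buf, sizeof(buf), NULL);     /* receive(Q, message) */
        printf("received: %s\n", buf);

        mq_close(mq);
        mq_unlink("/demo_mq");                      /* remove the queue */
        return 0;
    }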


1. DIRECT OR INDIRECT COMMUNICATION.


*****Explain direct and indirect communication with respect to message passing system.
Direct communication:
Each process must explicitly name the recipient/sender.

Properties of a communication link:


 A link is established automatically between every pair of processes that want to
communicate. The processes need to know only each other's identity to communicate.
 A link is associated with exactly two processes.
 Exactly one link exists between each pair of processes.

Symmetric addressing: Both sender and receiver processes must name the other to
communicate.
Asymmetric addressing: Only the sender names the recipient; the recipient needn't name the
sender.

Indirect communication:
Messages are sent to/received from mailboxes (or ports).
Properties of a communication link:
 A link is established between a pair of processes only if both members have a shared
mailbox.
 A link may be associated with more than two processes.
 A number of different links may exist between each pair of communicating processes.
Mailbox owned by a process:
 The owner can only receive, and the user can only send.
 The mailbox disappears when its owner process terminates.
Mailbox owned by the OS:
 The OS allows a process to:
1. Create a new mailbox
2. Send & receive messages via it
3. Delete a mailbox.


Differentiate between direct and indirect inter-process communication


Direct Communication:
 Each process must explicitly name the recipient/sender.
 Properties of a communication link:
→ A link is established automatically between every pair of processes that want to
communicate; the processes need to know only each other's identity.
→ A link is associated with exactly two processes.
→ Exactly one link exists between each pair of processes.
 Symmetric addressing: both sender and receiver processes must name the other to communicate.
 Asymmetric addressing: only the sender names the recipient; the recipient needn't name the sender.

Indirect Communication:
 Messages are sent to/received from mailboxes (or ports).
 Properties of a communication link:
→ A link is established between a pair of processes only if both members have a shared mailbox.
→ A link may be associated with more than two processes.
→ A number of different links may exist between each pair of communicating processes.
 Mailbox owned by a process: the owner can only receive and the user can only send; the
mailbox disappears when its owner process terminates.
 Mailbox owned by the OS: the OS allows a process to
1. Create a new mailbox
2. Send & receive messages via it
3. Delete a mailbox.

2. SYNCHRONIZATION
Communication takes place through send() and receive() primitives. Message passing may
be either blocking or non-blocking (also known as synchronous and asynchronous).


Synchronous Message Passing


Blocking is considered synchronous
Blocking send:
 The sending process is blocked until the message is received by the receiving process or by
the mailbox.
Blocking receive:
 The receiver blocks until a message is available.
Asynchronous Message Passing
Non-blocking is considered asynchronous
Non-blocking send:
 The sending process sends the message and resumes operation.
Non-blocking receive:
 The receiver retrieves either a valid message or a null.
3. BUFFERING
Messages exchanged by processes reside in a temporary queue. Queue can be implemented in
one of three ways.
1) Zero Capacity
 The queue-length is zero.
 The link can't have any messages waiting in it.
 The sender must block until the recipient receives the message.
2) Bounded Capacity
 The queue-length is finite.
 If the queue is not full, the new message is placed in the queue.
 The link capacity is finite.
 If the link is full, the sender must block until space is available in the queue.
3) Unbounded Capacity
 The queue-length is potentially infinite.
 Any number of messages can wait in the queue.
 The sender never blocks.


MULTITHREADED PROGRAMMING

What is a thread?
A thread is a basic unit of CPU utilization. It comprises a thread ID, a program counter, a
register set, and a stack.

 A thread shares with the other threads belonging to the same process its code section, data
section and other OS resources. Threads run within an application.
 A traditional process has a single thread of control (heavy weight).
 If a process has multiple threads of control, it can perform more than one task at a time.
 Multiple tasks within an application can be implemented by separate threads, such as updating
the display, fetching data, spell checking and answering a network request.
What is the difference between process and thread?
S. No. | Process | Thread
1 | A program in execution. | The basic unit of CPU utilization; it is part of a process.
2 | Processes run in separate memory spaces. | Threads within the same process run in a shared memory space.
3 | Heavy-weight: it consumes most of the resources in the system. | Light-weight: it consumes few resources in the system.
4 | Creation of a process requires more time. | Creation of a thread requires less time.
5 | Context switching takes more time. | Context switching is faster (takes less time).
6 | More time is required for termination. | Less time is required for termination.
7 | Less efficient than a thread in the context of communication. | Enhances efficiency in the context of communication.


Why do we need multiple-thread programming?


Suppose a single-threaded process is used in a web server. A single application may be required
to perform several similar tasks. The server runs as a single process and accepts requests; when
the server receives a request, it creates another process to service that request. Process creation
is time-consuming and resource-intensive, so it is more efficient to use one process that has
multiple threads: the server creates a separate thread to listen for client requests, and when a
request is made, it creates another thread to service it. This simplifies code and increases
efficiency.

MULTI-THREADING BENEFITS
******Discuss the benefits of multi-threaded programming.
The benefits of multi-threaded programming are:
1. Responsiveness
2. Resource Sharing
3. Economy
4. Scalability


Responsiveness: multithreading may allow a program to continue running even if part of it is
blocked; this is especially important for user interfaces (e.g., a web browser).
Resource Sharing: threads share the resources of their process, which is easier than shared
memory or message passing.
Economy: thread creation is cheaper than process creation, and thread switching has lower
overhead than a full context switch.
Scalability: in a multiprocessor architecture, threads may run in parallel on different
processors, so parallelism is increased.
SUPPORT FOR THREADS
Support for threads may be provided either at the user level, for User threads or by the kernel, for
kernel threads.
1. User-level Thread
2. Kernel-level Thread
What is the difference between user-level thread and kernel- level thread?
S. No. | User-Level Thread | Kernel-Level Thread
1 | User threads are supported above the kernel and are implemented by a thread library at the user level. The library provides support for thread creation, scheduling and management with no support from the kernel. | Kernel threads are supported directly by the operating system. The kernel performs thread creation, scheduling and management in kernel space.
2 | User-level threads are generally fast to create and manage. | Kernel-level threads are slower and less efficient; thread operations can be hundreds of times slower than user-level threads.
3 | A user-level thread is generic and can run on any operating system. | A kernel-level thread is specific to the operating system.
4 | When threads are managed in user space, each process needs its own private thread table to keep track of the threads in that process. | No run-time system and no thread table are needed in each process; instead, the kernel has a thread table that keeps track of all the threads in the system.
5 | Example: user-thread libraries include POSIX Pthreads, Mach C-threads, and Solaris 2 UI-threads. | Example: Windows NT, Windows 2000, Solaris 2, BeOS, and Tru64 UNIX (formerly Digital UNIX) support kernel threads.


MULTITHREADING MODELS
*********Explain multi-threading models in detail OR
******Discuss the three common ways of establishing relationship between user and kernel
threads.
Three ways of establishing relationship between user-threads & kernel-threads (Multi-threading
model):
1) Many-to-one model
2) One-to-one model and
3) Many-to-many model.
Many to One model:

 Many user-level threads are mapped to single kernel thread.


 Thread management is done by the thread library in user space, so it is efficient.
 The entire process will block if a thread makes a blocking system-call.
 Multiple threads may not run in parallel on multi-processor system because only one may
be in kernel at a time.
 Few systems currently use this model.
Examples: Solaris Green Threads, GNU Portable Threads, etc.
One-to-One model:


 Each user thread is mapped to a kernel thread


Advantages:
 It provides more concurrency by allowing another thread to run when a thread makes a
blocking system-call.
 Multiple threads can run in parallel on multiprocessors.
Disadvantage:
 Creating a user thread requires creating the corresponding kernel thread.
Examples: Windows NT/XP/2000, Linux.

Many-to-Many model:
• Many user-level threads are multiplexed to a smaller number of kernel threads
Advantages:
1) Developers can create as many user threads as necessary
2) The kernel threads can run in parallel on a multiprocessor.
3) When a thread performs a blocking system-call, kernel can schedule another thread
for execution.

Many-to-many model


THREAD LIBRARIES
A thread library provides the programmer with an API for the creation and management of threads.
Two ways of implementation:
1) First Approach
 Provides a library entirely in user space with no kernel support.
 All code and data structures for the library exist in the user space.
2) Second Approach
 Implements a kernel-level library supported directly by the OS.
 Code and data structures for the library exist in kernel space.
Three main thread libraries: 1) POSIX Pthreads
2) Win32 and
3) Java.
Pthreads
• This is a POSIX standard API for thread creation and synchronization.
• This is a specification for thread-behavior, not an implementation.
• OS designers may implement the specification in any way they wish.
• Commonly used in: UNIX and Solaris.
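A minimal sketch of Pthreads usage (compile with -pthread; the runner name and the summation work are illustrative only):

    #include <pthread.h>
    #include <stdio.h>

    /* the function each new thread executes */
    void *runner(void *param)
    {
        int n = *(int *)param;
        long sum = 0;
        for (int i = 1; i <= n; i++)
            sum += i;                           /* some work for the thread */
        printf("sum = %ld\n", sum);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;                          /* thread identifier */
        int n = 10;

        pthread_create(&tid, NULL, runner, &n); /* create a new thread */
        pthread_join(tid, NULL);                /* wait for the thread to finish */
        return 0;
    }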
Java Threads
• Threads are the fundamental model of program execution in a Java program; the Java
language and its API provide a rich set of features for the creation and management of threads.
• All Java programs comprise at least a single thread of control.


Two techniques for creating threads:


1) Create a new class that is derived from the Thread class and override its run() method.
2) Define a class that implements the Runnable interface. The Runnable interface
declares a single method, public void run(), which the class must define.

THREADING ISSUES
***Discuss any 3 threading issues that come with multi-threaded programs

1. System call fork() and exec() in multi-thread programming

2. Thread cancellation

3. Signal handling

4. Thread pools

The fork() system-call is used to create a separate, duplicate process. If one thread in a program calls
fork(), some systems duplicate all threads, while others duplicate only the thread that
invoked fork(). If a thread invokes exec(), the program specified in the parameter to exec()
replaces the entire process, including all threads.
Thread Cancellation: This is the task of terminating a thread before it has completed. The target
thread is the thread that is to be canceled. Thread cancellation occurs in two different ways:
1) Asynchronous cancellation: one thread immediately terminates the target thread.
2) Deferred cancellation: the target thread periodically checks whether it should be
terminated.
Signal Handling: In UNIX, a signal is used to notify a process that a particular event has
occurred. All signals follow this pattern:
1. A signal is generated by the occurrence of a certain event.
2. A generated signal is delivered to a process.
3. Once delivered, the signal must be handled.
A signal handler is used to process signals. A signal may be received either synchronously or
asynchronously, depending on the source.
1) Synchronous signals
 Delivered to the same process that performed the operation causing the signal.
 E.g. illegal memory access and division by 0.


2) Asynchronous signals
 Generated by an event external to a running process.
 E.g. user terminating a process with specific keystrokes <ctrl><c>.
Every signal can be handled by one of two possible handlers:
1) A Default Signal Handler
 Run by the kernel when handling the signal.
2) A User-defined Signal Handler
 Overrides the default signal handler.
In single-threaded programs, delivering signals is simple.
In multithreaded programs, delivering signals is more complex. Then, the following options exist:
1) Deliver the signal to the thread to which the signal applies.
2) Deliver the signal to every thread in the process.
3) Deliver the signal to certain threads in the process.
4) Assign a specific thread to receive all signals for the process.
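A minimal sketch of installing a user-defined handler that overrides the default handler for SIGINT (<ctrl><c>), assuming the POSIX signal() call:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* user-defined signal handler: overrides the default for SIGINT */
    void handler(int sig)
    {
        /* printf is not async-signal-safe; used here only for illustration */
        printf("caught signal %d\n", sig);
    }

    int main(void)
    {
        signal(SIGINT, handler);   /* install the user-defined handler */
        while (1)
            pause();               /* sleep until a signal is delivered */
    }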
Thread Pools: The basic idea is to create a number of threads at process-startup and place them
into a pool, where they sit and wait for work.
Procedure:
1. When a server receives a request, it awakens a thread from the pool.
2. If a thread is available, the request is passed to it for service.
3. Once the service is completed, the thread returns to the pool.

Advantages:
1) Servicing a request with an existing thread is usually faster than waiting to create a
thread.
2) The pool limits the number of threads that exist at any one point.
The number of threads in the pool can be based on factors such as the number of CPUs, the
amount of memory and the expected number of concurrent client-requests.


PROCESS SCHEDULING

Basic Concepts: In a single-processor system, only one process may run at a time and other
processes must wait until the CPU is rescheduled. The main objective of multiprogramming is to
have some process running at all times, in order to maximize CPU utilization.
CPU-I/O Burst Cycle: Process execution consists of a cycle of CPU execution and an I/O wait as
shown in the figure below. Process execution begins with a CPU burst, followed by an I/O burst, then
another CPU burst, and so on. Eventually, the final CPU burst ends with a system request to
terminate execution. An I/O-bound program typically has many short CPU bursts. A CPU-bound
program might have a few long CPU bursts.
CPU SCHEDULER

The CPU scheduler selects a waiting process from the ready-queue and allocates the CPU to it.
The ready-queue may be implemented as a FIFO queue, a priority queue, a tree, or an unordered
linked list. The records in the queues are generally process control blocks (PCBs) of the processes.
CPU SCHEDULING
Four situations under which CPU scheduling decisions take place:
1. When a process switches from the running state to the waiting state. For example: an I/O request.
2. When a process switches from the running state to the ready state. For example: when an
interrupt occurs.
3. When a process switches from the waiting state to the ready state. For example: completion of
I/O.
4. When a process terminates.


Scheduling under 1 and 4 is non-preemptive. Scheduling under 2 and 3 is preemptive.


Non Preemptive Scheduling
Once the CPU has been allocated to a process, the process keeps the CPU until it releases the
CPU either by terminating or by switching to the waiting state.
Preemptive Scheduling
This is driven by the idea of prioritized computation. Processes that are runnable may be
temporarily suspended.
Disadvantages:
1) Incurs a cost associated with access to shared-data.
2) Affects the design of the OS kernel.
Dispatcher
It gives control of the CPU to the process selected by the short-term scheduler. The function
involves:
1) Switching context
2) Switching to user mode &
3) Jumping to the proper location in the user program to restart that program.
It should be as fast as possible, since it is invoked during every process switch.
Dispatch latency is the time taken by the dispatcher to stop one process and start another
process running.
SCHEDULING CRITERIA USED IN OS

******Discuss the scheduling criteria used in operating system.

The various scheduling criteria used in OS are:


1. CPU Utilization
2. Throughput
3. Turnaround time
4. Waiting time
5. Response time
CPU Utilization: We must keep the CPU as busy as possible. In a real system, it ranges from
40% to 90%.
Throughput: The number of processes completed per time unit. For long processes, throughput
may be 1 process per hour; For short transactions, throughput might be 10 processes per second.
Turnaround Time: The interval from the time of submission of a process to the time of
completion. Turnaround time is the sum of the periods spent in waiting to get into memory,


waiting in the ready-queue, executing on the CPU and doing I/O.


Waiting Time: The amount of time that a process spends waiting in the ready-queue.
Response Time: The time from the submission of a request until the first response is produced.
SCHEDULING ALGORITHMS
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU. Following are some scheduling algorithms:
1) FCFS scheduling (First Come First Served)
2) Round Robin scheduling
3) SJF scheduling (Shortest Job First)
4) SRT scheduling(Shortest Remaining Time First)
5) Priority scheduling with and without preemption.
6) Multilevel Queue scheduling and
7) Multilevel Feedback Queue scheduling
FCFS SCHEDULING
The process that requests the CPU first is allocated the CPU first. That means process which
arrives the ready-queue first, get scheduled first if the CPU is free. This is a non-preemptive
scheduling concept. The implementation is easily done using a FIFO queue.
Procedure:
1) When a process enters the ready-queue, its PCB is linked onto the tail of the queue.
2) When the CPU is free, the CPU is allocated to the process at the queue's head.
3) The running process is then removed from the queue.
Advantage: Code is simple to write & understand.
Disadvantages:
1) Convoy effect: All other processes wait for one big process to get off the CPU.
2) Non-preemptive (a process keeps the CPU until it releases it).
3) Not good for time-sharing systems.
4) The average waiting time is generally not minimal.
Example: Suppose that the processes arrive at time 0 in the order P1, P2, P3, with CPU-burst
times P1 = 24 ms, P2 = 3 ms, P3 = 3 ms.
The Gantt chart for the schedule is as follows:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |


Waiting time: P1 = 0; P2 = 24; P3 = 27.
Average waiting time: (0 + 24 + 27)/3 = 17 ms.
• Suppose instead that the processes arrive in the order P2, P3, P1.
The Gantt chart for the schedule is as follows:
| P2 (0-3) | P3 (3-6) | P1 (6-30) |

Waiting time: P1 = 6; P2 = 0; P3 = 3.
Average waiting time: (6 + 0 + 3)/3 = 3 ms.

SJF SCHEDULING (Shortest Job First)


The CPU is assigned to the process that has the smallest next CPU burst. If two processes have
the same length next CPU burst, FCFS scheduling is used to break the tie. For long-term
scheduling in a batch system, we can use the process time limit specified by the user as the
'length'. SJF can't be implemented at the level of short-term scheduling, because there is no way
to know the length of the next CPU burst.
Advantage: The SJF is optimal, i.e. it gives the minimum average waiting time for a given set of
processes.
Disadvantage: Determining the length of the next CPU burst.
SJF algorithm may be either 1) Non-preemptive or 2) preemptive.
Non preemptive SJF: The current process is allowed to finish its CPU burst.
Preemptive SJF: If the new process has a shorter next CPU burst time than what is left of the
executing process, that process is preempted. It is also known as SRTF scheduling (Shortest-
Remaining-Time-First).
Example (non-preemptive SJF): Consider the following set of processes, all arriving at time 0,
with the length of the CPU-burst time given in milliseconds:

Process | Burst time
P1 | 6
P2 | 8
P3 | 7
P4 | 3

For non-preemptive SJF, the Gantt chart is as follows:
| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

Waiting time: P1 = 3; P2 = 16; P3 = 9; P4 = 0.
Average waiting time: (3 + 16 + 9 + 0)/4 = 7 ms.
Example (preemptive SJF): Consider the following set of processes, with the length of the
CPU-burst time given in milliseconds:

Process | Arrival time | Burst time
P1 | 0 | 8
P2 | 1 | 4
P3 | 2 | 9
P4 | 3 | 5

For preemptive SJF, the Gantt chart is as follows:
| P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |

The average waiting time is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5 ms.

PRIORITY SCHEDULING
A priority is associated with each process. The CPU is allocated to the process with the highest
priority. Equal-priority processes are scheduled in FCFS order. Priorities can be defined either
internally or externally. Internally-defined priorities use some measurable quantity to compute the
priority of a process.
For example: time limits, memory requirements, no. of open files.
Externally-defined priorities set by criteria that are external to the OS
For example: importance of the process, political factors
Priority scheduling can be either preemptive or non-preemptive.
Preemptive
 The CPU is preempted if the priority of the newly arrived process is higher
than the priority of the currently running process.
Non Preemptive
 The new process is put at the head of the ready-queue
Advantage: Higher priority processes can be executed first.


Disadvantage: Indefinite blocking, where low-priority processes are left waiting indefinitely for
CPU.
Solution: Aging is a technique of gradually increasing the priority of processes that wait in the
system for a long time.
Example: Consider the following set of processes, assumed to have arrived at time 0 in
the order P1, P2, ..., P5, with the length of the CPU-burst time given in milliseconds
(here a smaller priority number means higher priority):

Process | Burst time | Priority
P1 | 10 | 3
P2 | 1 | 1
P3 | 2 | 4
P4 | 1 | 5
P5 | 5 | 2

The Gantt chart for the schedule is as follows:
| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |

The average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2 milliseconds.


ROUND ROBIN SCHEDULING
It is designed especially for timesharing systems. It is similar to FCFS scheduling, but with
preemption. A small unit of time is called a time quantum (or time slice), which ranges from 10 to
100 ms.
The ready-queue is treated as a circular queue. The CPU scheduler goes around the ready-queue
and allocates the CPU to each process for a time interval of up to one time quantum. To
implement this algorithm, the ready-queue is kept as a FIFO queue of processes
CPU scheduler
1. Picks the first process from the ready-queue.
2. Sets a timer to interrupt after one time quantum and
3. Dispatches the process.
One of two things will then happen.
1. The process may have a CPU burst of less than that of time quantum. In this case, the
process itself will release the CPU voluntarily.
2. If the CPU burst of the currently running process is longer than that of time quantum, the
timer will go off and will cause an interrupt to the OS. The process will be put at the tail of
the ready-queue.
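As a small illustration (an addition to these notes), the mechanics above can be simulated for the special case where all processes arrive at time 0. The data below anticipates the example later in this section (bursts 24, 3 and 3 ms; quantum = 4 ms):

    #include <stdio.h>

    int main(void)
    {
        int burst[] = {24, 3, 3};            /* remaining burst time per process */
        int n = 3, quantum = 4, clock = 0, done = 0;

        /* circular scan of the ready-queue; valid here since all arrive at 0 */
        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0) continue;              /* already finished */
                int slice = burst[i] < quantum ? burst[i] : quantum;
                printf("t=%2d..%2d: P%d\n", clock, clock + slice, i + 1);
                clock += slice;                           /* timer interrupt or voluntary release */
                burst[i] -= slice;
                if (burst[i] == 0) done++;                /* process terminates */
            }
        }
        return 0;
    }

Its output reproduces the Gantt chart of the RR example below: P1 (0-4), P2 (4-7), P3 (7-10), then P1 alone until time 30.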


Advantage: Better response time than SJF.
Disadvantage: Higher average turnaround time than SJF.
Example: Consider the following set of processes that arrive at time 0, with the length of
the CPU-burst time given in milliseconds (time quantum = 4 ms):

Process | Burst time
P1 | 24
P2 | 3
P3 | 3

The Gantt chart for the schedule is as follows:
| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-30) |

The average waiting time is (6 + 4 + 7)/3 = 17/3 = 5.66 milliseconds.


NOTE:
The RR scheduling algorithm is preemptive: no process is allocated the CPU for more than one
time quantum in a row. If a process's CPU burst exceeds the time quantum, that process is
preempted and is put back in the ready-queue.
The performance of the algorithm depends heavily on the size of the time quantum:
If the time quantum is very large, the RR policy is the same as the FCFS policy.
If the time quantum is very small, the RR approach appears to the users as though each
of n processes has its own processor running at 1/n the speed of the real processor.
In software, we also need to consider the effect of context switching on the performance of RR
scheduling:
1) The larger the time quantum, the less time is spent on context switching.
2) The smaller the time quantum, the more overhead is added for context switching.


MULTILEVEL QUEUE SCHEDULING


It is useful for situations in which processes are easily classified into different groups.
For example, a common division is made between foreground (or interactive) processes and
background (or batch) processes. The ready-queue is partitioned into several separate queues. The
processes are permanently assigned to one queue based on some property like memory size,
process priority or process type. Each queue has its own scheduling algorithm.
For example, separate queues might be used for foreground and background processes.

Multilevel queue scheduling

There must be scheduling among the queues, which is commonly implemented as fixed-
priority preemptive scheduling.
For example, the foreground queue may have absolute priority over the background queue.
Alternatively, the queues may time-slice the CPU: each queue gets a certain portion of CPU
time, which it can then schedule among its processes; e.g., 80% to the foreground queue (RR)
and 20% to the background queue (FCFS).


MULTILEVEL FEEDBACK QUEUE SCHEDULING


A process may move between queues. The basic idea is to separate processes according to the
features of their CPU bursts.
For example: If a process uses too much CPU time, it will be moved to a lower-priority queue.
This scheme leaves I/O-bound and interactive processes in the higher-priority queues. If a process
waits too long in a lower-priority queue, it may be moved to a higher- priority queue. This form
of aging prevents starvation.

Multilevel feedback queues.

In general, a multilevel feedback queue scheduler is defined by the following parameters:


1) The number of queues.
2) The scheduling algorithm for each queue.
3) The method used to determine when to upgrade a process to a higher priority queue.
4) The method used to determine when to demote a process to a lower priority queue.
5) The method used to determine which queue a process will enter when that
process needs service.
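These five parameters map naturally onto a configuration record; a minimal sketch (the field names are illustrative, not from the original notes):

    /* parameters defining a multilevel feedback queue scheduler */
    struct mlfq_config {
        int num_queues;        /* 1) the number of queues */
        int quantum[3];        /* 2) scheduling algorithm per queue, e.g. the RR quantum
                                     at each level (the lowest level may run FCFS) */
        int promote_after;     /* 3) waiting time before upgrading to a higher-priority queue */
        int demote_after;      /* 4) CPU time used before demoting to a lower-priority queue */
        int entry_queue;       /* 5) queue a process enters when it needs service */
    };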
MULTIPLE PROCESSOR SCHEDULING
Write a short note on multi-processor scheduling
If multiple CPUs are available, the scheduling problem becomes more complex. The two
approaches are:
Asymmetric Multiprocessing
The basic idea is: A master server is a single processor responsible for all scheduling decisions,
I/O processing and other system activities. The other processors execute only user code.
Advantage: This is simple because only one processor accesses the system data structures,
reducing the need for data sharing.
Symmetric Multiprocessing
The basic idea is: Each processor is self-scheduling. To do scheduling, the scheduler for each


processor examines the ready-queue and selects a process to execute.


Restriction: We must ensure that two processors do not choose the same process and that
processes are not lost from the queue.
Processor Affinity: In an SMP system, migration of processes from one processor to another is
avoided; instead, a process is kept running on the same processor. This is known as processor
affinity. The two forms are:
Soft Affinity: the OS tries to keep a process on one processor as a matter of policy, but cannot
guarantee it; the process may still migrate between processors.
Hard Affinity: the OS allows a process to specify that it is not to migrate to other processors.
E.g.: Solaris.
Load Balancing
This concept attempts to keep the workload evenly distributed across all processors in an SMP
system. The two approaches:
1) Push Migration
 A specific task periodically checks the load on each processor and if it finds an
imbalance, it evenly distributes the load to idle processors.
2) Pull Migration
 An idle processor pulls a waiting task from a busy processor.

THREAD SCHEDULING
On most operating systems, it is kernel-level threads, not processes, that are scheduled by the OS.
User-level threads are managed by a thread library, and the kernel is unaware of them. To run
on a CPU, user-level threads must ultimately be mapped to an associated kernel-level thread.
Contention Scope
Two approaches:
1) Process-Contention scope
 On systems implementing the many-to-one and many-to-many models, the
thread library schedules user-level threads to run on an available LWP (lightweight process).
 Competition for the CPU takes place among threads belonging to the same process.
2) System-Contention scope
 The process of deciding which kernel thread to schedule on the CPU.
 Competition for the CPU takes place among all threads in the system.
 Systems using the one-to-one model schedule threads using only SCS.


Pthread Scheduling
The Pthread API allows specifying either PCS or SCS during thread creation.
Pthreads identifies the following contention-scope values:
1. PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling.
2. PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.
The Pthread API provides the following two functions for getting and setting the contention-scope policy:
1) pthread_attr_setscope(pthread_attr_t *attr, int scope)
2) pthread_attr_getscope(pthread_attr_t *attr, int *scope)
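A minimal sketch of these two calls (note that some systems, e.g. Linux, support only PTHREAD_SCOPE_SYSTEM, in which case setting PCS returns an error):

    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        pthread_attr_t attr;
        int scope;

        pthread_attr_init(&attr);

        /* query the default contention scope */
        if (pthread_attr_getscope(&attr, &scope) == 0)
            printf("default: %s\n",
                   scope == PTHREAD_SCOPE_SYSTEM ? "SCS" : "PCS");

        /* request system-contention scope for threads created with attr */
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

        pthread_attr_destroy(&attr);
        return 0;
    }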


PROBLEMS BASED ON VARIOUS PROCESS SCHEDULING ALGORITHMS

Solution steps:
1. Draw Gantt chart for the given problem.
2. From Gantt chart find the completion time of each process
3. Determine the Turnaround time using the formula:
Turnaround time = Completion time – Arrival time
4. Determine the waiting time using the formula:
Waiting time = Turnaround time – Burst time
5. Determine the Response time using the formula:
Response time = First time process scheduled – Arrival time
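These steps are mechanical for non-preemptive FCFS, so they can be checked by a small program. The sketch below (an illustration added to these notes, loaded with the data of problem 1) walks the clock through the processes in arrival order and applies the formulas above:

    #include <stdio.h>

    int main(void)
    {
        /* problem 1 data: arrival and burst times in ms, in arrival order */
        int arrival[] = {0, 1, 2, 3};
        int burst[]   = {6, 3, 1, 4};
        int n = 4, clock = 0;
        double tat_sum = 0, wt_sum = 0;

        for (int i = 0; i < n; i++) {
            if (clock < arrival[i])
                clock = arrival[i];            /* CPU idles until the process arrives */
            clock += burst[i];                 /* completion time under FCFS */
            int tat = clock - arrival[i];      /* turnaround = completion - arrival */
            int wt  = tat - burst[i];          /* waiting = turnaround - burst */
            printf("P%d: completion=%d turnaround=%d waiting=%d\n", i, clock, tat, wt);
            tat_sum += tat;
            wt_sum  += wt;
        }
        printf("avg turnaround = %.2f ms, avg waiting = %.2f ms\n",
               tat_sum / n, wt_sum / n);
        return 0;
    }

Its output reproduces the FCFS answer of problem 1 below: average turnaround time 8.25 ms and average waiting time 4.75 ms.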

1. Consider the following set of processes with CPU burst time (in ms):

Process | Arrival time | Burst time
P0 | 0 | 6
P1 | 1 | 3
P2 | 2 | 1
P3 | 3 | 4

Compute the waiting time and average turnaround time for the above processes using
FCFS, SRT and RR (time quantum = 2 ms) scheduling algorithms.
Solution:
i.) The Gantt chart for the FCFS schedule is as follows:
| P0 (0-6) | P1 (6-9) | P2 (9-10) | P3 (10-14) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P0 | 0 | 6 | 6 | 6 | 0
P1 | 1 | 3 | 9 | 8 | 5
P2 | 2 | 1 | 10 | 8 | 7
P3 | 3 | 4 | 14 | 11 | 7

Average turnaround time = sum of turnaround times / no. of processes = 33/4 = 8.25 ms
Average waiting time = sum of waiting times / no. of processes = 19/4 = 4.75 ms
ii.) The Gantt chart for the SRT schedule is as follows:
| P0 (0-1) | P1 (1-2) | P2 (2-3) | P1 (3-5) | P3 (5-9) | P0 (9-14) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P0 | 0 | 6 | 14 | 14 | 8
P1 | 1 | 3 | 5 | 4 | 1
P2 | 2 | 1 | 3 | 1 | 0
P3 | 3 | 4 | 9 | 6 | 2

Average turnaround time = sum of turnaround times / no. of processes = 25/4 = 6.25 ms
Average waiting time = sum of waiting times / no. of processes = 11/4 = 2.75 ms
iii.) The Gantt chart for the RR (TQ = 2 ms) schedule is as follows:
| P0 (0-2) | P1 (2-4) | P2 (4-5) | P0 (5-7) | P3 (7-9) | P1 (9-10) | P0 (10-12) | P3 (12-14) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P0 | 0 | 6 | 12 | 12 | 6
P1 | 1 | 3 | 10 | 9 | 6
P2 | 2 | 1 | 5 | 3 | 2
P3 | 3 | 4 | 14 | 11 | 7

Average turnaround time = sum of turnaround times / no. of processes = 35/4 = 8.75 ms
Average waiting time = sum of waiting times / no. of processes = 21/4 = 5.25 ms


2. Consider the following set of processes given in the table:

Process | Arrival time | Burst time | Priority
P1 | 0 | 10 | 4
P2 | 3 | 5 | 2
P3 | 3 | 6 | 6
P4 | 5 | 4 | 3

Consider the larger number as the highest priority. Calculate the average waiting time and
turnaround time and draw Gantt charts for preemptive priority scheduling and preemptive
SJF scheduling.
Solution:
i.) The Gantt chart for the preemptive priority schedule is as follows (higher priority = larger
number):
| P1 (0-3) | P3 (3-9) | P1 (9-16) | P4 (16-20) | P2 (20-25) |

Process | Arrival time | Burst time | Priority | Completion time | Turnaround time | Waiting time
P1 | 0 | 10 | 4 | 16 | 16 | 6
P2 | 3 | 5 | 2 | 25 | 22 | 17
P3 | 3 | 6 | 6 | 9 | 6 | 0
P4 | 5 | 4 | 3 | 20 | 15 | 11

Average turnaround time = sum of turnaround times / no. of processes = 59/4 = 14.75 ms
Average waiting time = sum of waiting times / no. of processes = 34/4 = 8.5 ms
ii.) The Gantt chart for the preemptive SJF schedule is as follows:
| P1 (0-3) | P2 (3-8) | P4 (8-12) | P3 (12-18) | P1 (18-25) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P1 | 0 | 10 | 25 | 25 | 15
P2 | 3 | 5 | 8 | 5 | 0
P3 | 3 | 6 | 18 | 15 | 9
P4 | 5 | 4 | 12 | 7 | 3

Average turnaround time = sum of turnaround times / no. of processes = 52/4 = 13 ms
Average waiting time = sum of waiting times / no. of processes = 27/4 = 6.75 ms

3. For the following example, calculate the average waiting time and average turnaround time
using the FCFS, preemptive SJF and RR (time quantum = 1 time unit) CPU scheduling algorithms:

Process | Arrival time | Burst time
P1 | 0 | 8
P2 | 1 | 4
P3 | 2 | 9
P4 | 3 | 5

Solution:
i.) The Gantt chart for the FCFS schedule is as follows:
| P1 (0-8) | P2 (8-12) | P3 (12-21) | P4 (21-26) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P1 | 0 | 8 | 8 | 8 | 0
P2 | 1 | 4 | 12 | 11 | 7
P3 | 2 | 9 | 21 | 19 | 10
P4 | 3 | 5 | 26 | 23 | 18

Average turnaround time = sum of turnaround times / no. of processes = 61/4 = 15.25 ms
Average waiting time = sum of waiting times / no. of processes = 35/4 = 8.75 ms


ii.) The Gantt chart for the preemptive SJF schedule is as follows:
| P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P1 | 0 | 8 | 17 | 17 | 9
P2 | 1 | 4 | 5 | 4 | 0
P3 | 2 | 9 | 26 | 24 | 15
P4 | 3 | 5 | 10 | 7 | 2

Average turnaround time = sum of turnaround times / no. of processes = 52/4 = 13 ms
Average waiting time = sum of waiting times / no. of processes = 26/4 = 6.5 ms

iii.) The Gantt chart for the RR (1 time unit) schedule is as follows (one 1-ms slice each from
t = 0 to t = 23, then P3 runs 23-26):
| P1 | P2 | P1 | P3 | P2 | P4 | P1 | P3 | P2 | P4 | P1 | P3 | P2 | P4 | P1 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P1 | P3 (23-26) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P1 | 0 | 8 | 23 | 23 | 15
P2 | 1 | 4 | 13 | 12 | 8
P3 | 2 | 9 | 26 | 24 | 15
P4 | 3 | 5 | 20 | 17 | 12

Average turnaround time = sum of turnaround times / no. of processes = 76/4 = 19 ms
Average waiting time = sum of waiting times / no. of processes = 50/4 = 12.5 ms
4. Given below is the snapshot of processes. Draw Gantt charts using the preemptive
and non-preemptive priority scheduling algorithms (a smaller number has a higher
priority). Also calculate the average waiting time and turnaround time for both.

Process | Arrival time | Burst time | Priority
P1 | 0 | 6 | 4
P2 | 3 | 5 | 2
P3 | 3 | 3 | 6
P4 | 5 | 5 | 3

i.) The Gantt chart for the preemptive priority schedule is as follows:
| P1 (0-3) | P2 (3-8) | P4 (8-13) | P1 (13-16) | P3 (16-19) |

Process | Arrival time | Burst time | Priority | Completion time | Turnaround time | Waiting time
P1 | 0 | 6 | 4 | 16 | 16 | 10
P2 | 3 | 5 | 2 | 8 | 5 | 0
P3 | 3 | 3 | 6 | 19 | 16 | 13
P4 | 5 | 5 | 3 | 13 | 8 | 3

Average turnaround time = sum of turnaround times / no. of processes = 45/4 = 11.25 ms
Average waiting time = sum of waiting times / no. of processes = 26/4 = 6.5 ms
ii.) The Gantt chart for the non-preemptive priority schedule is as follows:
| P1 (0-6) | P2 (6-11) | P4 (11-16) | P3 (16-19) |

Process | Arrival time | Burst time | Priority | Completion time | Turnaround time | Waiting time
P1 | 0 | 6 | 4 | 6 | 6 | 0
P2 | 3 | 5 | 2 | 11 | 8 | 3
P3 | 3 | 3 | 6 | 19 | 16 | 13
P4 | 5 | 5 | 3 | 16 | 11 | 6

Average turnaround time = sum of turnaround times / no. of processes = 41/4 = 10.25 ms
Average waiting time = sum of waiting times / no. of processes = 22/4 = 5.5 ms

5. Consider the following set of processes:

Process | Arrival time | Burst time | Priority
P1 | 0 | 10 | 2
P2 | 2 | 5 | 1
P3 | 3 | 2 | 0
P4 | 5 | 20 | 3

Draw Gantt charts and calculate the average waiting time and average turnaround time using the
following CPU scheduling algorithms:
i. Preemptive shortest job first
ii. Non-preemptive priority (0 = high priority)
i.) The Gantt chart for the preemptive shortest job first schedule is as follows:
| P1 (0-2) | P2 (2-3) | P3 (3-5) | P2 (5-9) | P1 (9-17) | P4 (17-37) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P1 | 0 | 10 | 17 | 17 | 7
P2 | 2 | 5 | 9 | 7 | 2
P3 | 3 | 2 | 5 | 2 | 0
P4 | 5 | 20 | 37 | 32 | 12

Average turnaround time = sum of turnaround times / no. of processes = 58/4 = 14.5 ms
Average waiting time = sum of waiting times / no. of processes = 21/4 = 5.25 ms
ii.) The Gantt chart for the non-preemptive priority (0 = high priority) schedule is as follows:
| P1 (0-10) | P3 (10-12) | P2 (12-17) | P4 (17-37) |

Process | Arrival time | Burst time | Priority | Completion time | Turnaround time | Waiting time
P1 | 0 | 10 | 2 | 10 | 10 | 0
P2 | 2 | 5 | 1 | 17 | 15 | 10
P3 | 3 | 2 | 0 | 12 | 9 | 7
P4 | 5 | 20 | 3 | 37 | 32 | 12

Average turnaround time = sum of turnaround times / no. of processes = 66/4 = 16.5 ms
Average waiting time = sum of waiting times / no. of processes = 29/4 = 7.25 ms

6. Consider the following set of processes:

Process | Arrival time | Burst time
P1 | 0 | 6
P2 | 2 | 3
P3 | 4 | 3
P4 | 5 | 5

Draw Gantt charts and calculate the average waiting time and average turnaround time using the
following CPU scheduling algorithms:
i. FCFS
ii. SRTF
iii. RR (quantum = 1 ms)
i.) The Gantt chart for the FCFS schedule is as follows:
| P1 (0-6) | P2 (6-9) | P3 (9-12) | P4 (12-17) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P1 | 0 | 6 | 6 | 6 | 0
P2 | 2 | 3 | 9 | 7 | 4
P3 | 4 | 3 | 12 | 8 | 5
P4 | 5 | 5 | 17 | 12 | 7

Average turnaround time = sum of turnaround times / no. of processes = 33/4 = 8.25 ms
Average waiting time = sum of waiting times / no. of processes = 16/4 = 4 ms

ii.) The Gantt chart for the SRTF schedule is as follows:
| P1 (0-2) | P2 (2-5) | P3 (5-8) | P1 (8-12) | P4 (12-17) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P1 | 0 | 6 | 12 | 12 | 6
P2 | 2 | 3 | 5 | 3 | 0
P3 | 4 | 3 | 8 | 4 | 1
P4 | 5 | 5 | 17 | 12 | 7

Average turnaround time = sum of turnaround times / no. of processes = 31/4 = 7.75 ms
Average waiting time = sum of waiting times / no. of processes = 14/4 = 3.5 ms

iii.) The Gantt chart for the RR (time quantum = 1 ms) schedule is as follows (one 1-ms slice
each from t = 0 to t = 14, then P4 runs 14-17):
| P1 | P1 | P2 | P1 | P2 | P3 | P1 | P4 | P2 | P3 | P1 | P4 | P3 | P1 | P4 (14-17) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P1 | 0 | 6 | 14 | 14 | 8
P2 | 2 | 3 | 9 | 7 | 4
P3 | 4 | 3 | 13 | 9 | 6
P4 | 5 | 5 | 17 | 12 | 7

Average turnaround time = sum of turnaround times / no. of processes = 42/4 = 10.5 ms
Average waiting time = sum of waiting times / no. of processes = 25/4 = 6.25 ms

7. Consider the following set of processes:

Process | Burst time | Priority
P1 | 10 | 3
P2 | 1 | 1
P3 | 2 | 3
P4 | 1 | 4
P5 | 5 | 2

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0. Draw
Gantt charts and calculate the average waiting time and average turnaround time using the
following CPU scheduling algorithms:
i. FCFS
ii. SJF
iii. RR (quantum = 1 ms)
i.) The Gantt chart for the FCFS schedule is as follows (arrival time = 0 for all processes):
| P1 (0-10) | P2 (10-11) | P3 (11-13) | P4 (13-14) | P5 (14-19) |

Process | Burst time | Completion time | Turnaround time | Waiting time
P1 | 10 | 10 | 10 | 0
P2 | 1 | 11 | 11 | 10
P3 | 2 | 13 | 13 | 11
P4 | 1 | 14 | 14 | 13
P5 | 5 | 19 | 19 | 14

Average turnaround time = sum of turnaround times / no. of processes = 67/5 = 13.4 ms
Average waiting time = sum of waiting times / no. of processes = 48/5 = 9.6 ms

ii.) The Gantt chart for the SJF schedule is as follows (ties broken FCFS, so P2 before P4):
| P2 (0-1) | P4 (1-2) | P3 (2-4) | P5 (4-9) | P1 (9-19) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P1 | 0 | 10 | 19 | 19 | 9
P2 | 0 | 1 | 1 | 1 | 0
P3 | 0 | 2 | 4 | 4 | 2
P4 | 0 | 1 | 2 | 2 | 1
P5 | 0 | 5 | 9 | 9 | 4

Average turnaround time = sum of turnaround times / no. of processes = 35/5 = 7 ms
Average waiting time = sum of waiting times / no. of processes = 16/5 = 3.2 ms
iii.) The Gantt chart for the RR (time quantum = 1 ms) schedule is as follows (one 1-ms slice
each from t = 0 to t = 14, then P1 runs 14-19):
| P1 | P2 | P3 | P4 | P5 | P1 | P3 | P5 | P1 | P5 | P1 | P5 | P1 | P5 | P1 (14-19) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P1 | 0 | 10 | 19 | 19 | 9
P2 | 0 | 1 | 2 | 2 | 1
P3 | 0 | 2 | 7 | 7 | 5
P4 | 0 | 1 | 4 | 4 | 3
P5 | 0 | 5 | 14 | 14 | 9

Average turnaround time = sum of turnaround times / no. of processes = 46/5 = 9.2 ms
Average waiting time = sum of waiting times / no. of processes = 27/5 = 5.4 ms

8. Consider the following set of processes:

Process | Arrival time | Burst time | Priority
P1 | 0 | 10 | 3
P2 | 0 | 1 | 1
P3 | 3 | 2 | 3
P4 | 5 | 1 | 4
P5 | 10 | 5 | 2

Draw the Gantt chart and calculate the average waiting time and average turnaround time using
the preemptive priority scheduling algorithm. Assume highest priority = 1 and lowest
priority = 4.
The Gantt chart for the preemptive priority schedule is as follows:
| P2 (0-1) | P1 (1-11) | P5 (11-16) | P3 (16-18) | P4 (18-19) |

Process | Arrival time | Burst time | Priority | Completion time | Turnaround time | Waiting time
P1 | 0 | 10 | 3 | 11 | 11 | 1
P2 | 0 | 1 | 1 | 1 | 1 | 0
P3 | 3 | 2 | 3 | 18 | 15 | 13
P4 | 5 | 1 | 4 | 19 | 14 | 13
P5 | 10 | 5 | 2 | 16 | 6 | 1

Average turnaround time = sum of turnaround times / no. of processes = 47/5 = 9.4 ms
Average waiting time = sum of waiting times / no. of processes = 28/5 = 5.6 ms

9. Consider the following set of processes:

Process | Arrival time | Burst time
P0 | 0 | 6
P1 | 1 | 3
P2 | 2 | 1
P3 | 3 | 4

Draw Gantt charts and calculate the average waiting time and average turnaround time using
SRTF and non-preemptive SJF.
i.) The Gantt chart for the SRTF schedule is as follows:
| P0 (0-1) | P1 (1-2) | P2 (2-3) | P1 (3-5) | P3 (5-9) | P0 (9-14) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P0 | 0 | 6 | 14 | 14 | 8
P1 | 1 | 3 | 5 | 4 | 1
P2 | 2 | 1 | 3 | 1 | 0
P3 | 3 | 4 | 9 | 6 | 2

Average turnaround time = sum of turnaround times / no. of processes = 25/4 = 6.25 ms
Average waiting time = sum of waiting times / no. of processes = 11/4 = 2.75 ms

ii.) The Gantt chart for the non-preemptive SJF schedule is as follows:
| P0 (0-6) | P2 (6-7) | P1 (7-10) | P3 (10-14) |

Process | Arrival time | Burst time | Completion time | Turnaround time | Waiting time
P0 | 0 | 6 | 6 | 6 | 0
P1 | 1 | 3 | 10 | 9 | 6
P2 | 2 | 1 | 7 | 5 | 4
P3 | 3 | 4 | 14 | 11 | 7

Average turnaround time = sum of turnaround times / no. of processes = 31/4 = 7.75 ms
Average waiting time = sum of waiting times / no. of processes = 17/4 = 4.25 ms

SYNCHRONIZATION
What is synchronization?
Synchronization is the mechanism that ensures the orderly execution of co-operating processes
that share a logical address space (i.e., code and data) or share data through files or messages,
so that data consistency is maintained.
Concurrent access to shared data may result in data inconsistency. To maintain data
consistency, the orderly execution of co-operating processes is necessary.
Suppose that we wanted to provide a solution to the producer-consumer problem that fills
all the buffers. We can do so by having an integer variable counter that keeps track of the
number of full buffers.
Initially, counter = 0.
 counter is incremented by the producer after it produces a new item into the buffer.
 counter is decremented by the consumer after it consumes an item from the buffer.
Shared-data: the bounded buffer and the variable counter.
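The code figure that followed is missing from this copy; in outline, the shared update it describes looks like this (the register names reg1 and reg2 stand for machine registers and are illustrative):

    int counter = 0;      /* shared: number of full buffers */

    /* producer, after placing an item in the buffer: */
    counter++;            /* machine level: reg1 = counter; reg1 = reg1 + 1; counter = reg1 */

    /* consumer, after removing an item from the buffer: */
    counter--;            /* machine level: reg2 = counter; reg2 = reg2 - 1; counter = reg2 */

If the two three-instruction sequences interleave, the final value of counter can be wrong even though each C statement looks atomic; this is exactly the data inconsistency that synchronization must prevent.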
