
Operating System

Study Material

Operating System ( OS )
For

Computer Science & Information Technology

By

Siddharth Shukla

Website: www.igate.guru
By Siddharth S. Shukla (BE, ME, PhD* )
i-GATE , B-713, Street 22, Smriti Nagar, Bhilai- 490020, Contact Mobile 98271-62352
No part of this booklet may be reproduced or utilized in any form without written permission. All rights are reserved.

Copyright © i-GATE Publication
First edition 2021

All rights reserved

No part of this book or parts thereof may be reproduced, stored in a retrieval system or transmitted in any language or by any means, electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher.


Operating System ( OS )

Syllabus –

 Processes
 Threads
 Inter-process communication
 Concurrency
 Synchronization
 Deadlock
 CPU scheduling
 Memory management and virtual memory
 I/O systems

Textbooks – "Operating System"

 Schaum's Outline Series
 Peter Galvin (Operating System Concepts)
 William Stallings (Operating Systems: Internals and Design Principles)

GATE RESULT 2011 in Top 100 Rank (CS/IT)
S.No.  Name                 AIR  %ILE    Selection
1      Chandrahas Dewangan  44   99.97%  IISc Bang.
2      Ankit Dixit          58   99.96%  IISc Bang.
3      Akanksha Patel       85   99.94%  IIT Bom.

GATE RESULT 2012 in Top 100 Rank (CS/IT)
S.No.  Name                 AIR  %ILE    Selection
1      Ayush Dubey          22   99.98%  IISc Bang.
2      Rahul Pawar          36   99.97%  IISc Bang.
3      Kritika Jain         59   99.96%  IIT Bom.

GATE RESULT 2013 in Top 100 Rank (CS/IT)
S.No.  Name                 AIR  %ILE    Selection
1      Supriya Sharma       47   99.98%  IIT Bom.

Operating system –

 An O.S. is an interface between the user and the hardware.


 An operating system is a program that acts as an intermediary between the user of a computer
and the computer hardware. The purpose of an operating system is to provide an environment in
which a user can execute programs in a convenient and efficient manner.
 The operating system provides certain services to programs and to the users of those programs in
order to make their tasks easier. The services differ from one operating system to another.
 Computers are equipped with a layer of software called the operating system, whose job is to
manage all these devices and provide user programs with a simpler interface to the hardware.

Goals of operating system –

 User friendly
 Convenient
 Robustness
 Reliable
 Efficient use of hardware
 Scalability

Functions of operating system –

 CPU scheduling
 Protection
 Security
 Memory management
 Handling errors
 Acting as interface

Differences between operating systems for mainframe computers and personal computers –

 Operating systems for batch systems have simpler requirements than for personal computers.
 Batch systems do not have to be concerned with interacting with a user as much as a personal
computer. As a result, an operating system for a PC must be concerned with response time for an
interactive user. Batch systems do not have such requirements.
 A pure batch system may also not have to handle time sharing, whereas a time-sharing operating system must switch rapidly between different jobs.

Views of an operating system –

 User view
 System view
 Functionality view
 Application programming interface (API)

Modes of Execution –

 User mode
 Kernel mode

The operating system is the portion of the software that runs in kernel mode or supervisor mode. It is protected from user tampering by the hardware. Certain instructions can be executed only when the CPU is in kernel mode. Hardware devices can be accessed only when the program is executing in kernel mode. Control over when interrupts can be enabled or disabled is also possible only when the CPU is in kernel mode.

Compilers and editors run in user mode. The CPU has very limited capability when executing in user mode, thereby enforcing protection of critical resources.

Basic elements –

There are four main structural elements:

 Processor: Controls the operation of the computer and performs its data processing functions.
When there is only one processor, it is often referred to as the central processing unit (CPU).
 Main memory: Stores data and programs. This memory is typically volatile; that is, when the
computer is shut down, the contents of the memory are lost. In contrast, the contents of disk
memory are retained even when the computer system is shut down. Main memory is also referred
to as real memory or primary memory.
 I/O modules: Move data between the computer and its external environment. The external
environment consists of a variety of devices, including secondary memory devices (e.g., disks),
communications equipment, and terminals.
 System bus: Provides for communication among processors, main memory, and I/O modules.
 One of the processor’s functions is to exchange data with memory.
 It typically makes use of two internal registers:
a) Memory address register (MAR), which specifies the address in memory for the next
read or write.
b) Memory buffer register (MBR), which contains the data to be written into memory or
which receives the data read from memory.
 Similarly, an I/O address register (I/OAR) specifies a particular I/O device. An I/O buffer
register (I/OBR) is used for the exchange of data between an I/O module and the
processor.
 A memory module consists of a set of locations, defined by sequentially numbered addresses. Each location contains a bit pattern that can be interpreted as either an instruction or data.

 An I/O module transfers data from external devices to processor and memory, and vice
versa. It contains internal buffers for temporarily holding data until they can be sent on.
 The OS also manages secondary memory and I/O (input/output) devices on behalf of its
users.

Architecture of Operating system –

 Layered structure -


 Kernel structure -

 Client–server structure –

Only the important part of the kernel is loaded; the rest is loaded on request from clients.

Types of Operating system –

A. Batch OS –
 It requires the grouping of similar jobs, which consist of programs, data and system
commands.
 Users have no control over the result of a program.
 Off-line debugging.
B. Multiprogramming OS –
 Simultaneous execution of multiple programs. It improves system throughput and resource
utilization.
 Example: Windows XP, 98

 Multitasking OS –

A running state of a program is called a process or a task. The concept of managing multiple simultaneously active programs, competing with each other for access to the system resources, is called multitasking.
Example: Windows NT, Linux

 Multi-user OS –

It is defined as a multiprogramming OS that supports simultaneous interaction with multiple users.
Example: Linux, UNIX
A dedicated transaction processing system, such as a railway reservation system, is a multi-user OS.

 Multiprocessing OS –

The term 'multiprocessing' means that multiple CPUs perform more than one job at a time. This is in contrast to multiprogramming, in which a single CPU divides its time between more than one job.

C. Time sharing interactive operating system –

 In this type of operating system, the CPU switches rapidly from one user to another; each user is given the impression that he has his own computer, while it is actually one computer shared among many users.
 The time-sharing system uses time slices (Round Robin scheduling algorithm).
 Example: CTSS, Multics, UNIX, etc.

D. Real time operating system –


 It has well-defined, fixed time constraints. Processing must be done within the defined
constraints or system will fail.
 It is characterized by processing activity triggered by randomly occurring external events.
 Example: Harmony, MARUTI, VRTX(Versatile Real-time Executive by Hunter and ready
Inc), HART (Hexagonal Architecture for RTS by University of Michigan)

E. Network operating system –


 It is defined as a collection of software and associated protocols that allow a set of
autonomous computers, interconnected by a computer network, to be used together.
 Example: BSD (Berkeley Software Distribution), MS LAN Manager, Windows NT, UNIX.
 The system has little or no fault tolerance.
 A network OS can be specialized as:
 Client–server OS (MS NT Server, UNIX server)
 Peer-to-peer network OS (Windows 95)

F. Distributed operating system –


 It refers to a collection of autonomous systems, connected to each other through a
LAN/WAN, that are capable of communicating and cooperating with each other through the network.
It provides a virtual machine abstraction to its users.

Comparison of network OS and distributed OS –

S.N.  Network OS                                   Distributed OS
1     Control over file placement must be done     File placement is done automatically
      manually by the users.                       by the system itself.
2     Each machine has its own user-id (UID)       There is a single system-wide mapping
      mapping.                                     that is valid everywhere.

Processor registers –

A processor includes a set of registers that provide memory that is faster and smaller than main
memory. Processor registers serve two functions:

1. User-visible registers: Enable the machine or assembly language programmer to minimize main
memory references by optimizing register use. For high level languages, an optimizing compiler
will attempt to make intelligent choices of which variables to assign to registers and which to main
memory locations. Types of registers that are typically available are data, address, and condition
code registers.
a. Data registers can be assigned to a variety of functions by the programmer. In some cases,
they are general purpose in nature and can be used with any machine instruction that
performs operations on data.

b. Address registers contain main memory addresses of data and instructions, or they contain
a portion of the address that is used in the calculation of the complete or effective address.
These registers may themselves be general purpose, or may be devoted to a particular way,
or mode, of addressing memory.

i. Index register: Indexed addressing is a common mode of addressing that involves adding an index to a base value to get the effective address.
ii. Segment pointer: With segmented addressing, memory is divided into segments,
which are variable-length blocks of words. A memory reference consists of a
reference to a particular segment and an offset within the segment.
iii. Stack pointer: If there is user-visible stack addressing, then there is a dedicated
register that points to the top of the stack. This allows the use of instructions that
contain no address field, such as push and pop.

c. Condition codes (also referred to as flags) are bits typically set by the processor hardware
as the result of operations. For example, an arithmetic operation may produce a positive,
negative, zero, or overflow result.

2. Control and status registers: Used by the processor to control the operation of the processor and
by privileged OS routines to control the execution of programs. A variety of processor registers are
employed to control the operation of the processor. On most processors, most of these are not
visible to the user. Some of them may be accessible by machine instructions executed in what is
referred to as a control or kernel mode.
a. Program counter (PC): Contains the address of the next instruction to be fetched.
b. Instruction register (IR): Contains the instruction most recently fetched

Software -

1. Freeware –
Software that can be downloaded from the web and used freely, with no payment and no restriction on use.
2. Shareware –
Software distributed free of charge on a trial basis; users may share it, and are expected to pay for continued use.
3. Firmware –
Software embedded permanently in a hardware device (for example, in ROM or flash memory).
--------------------◄►--------------------


1. PROCESSES

Process -

 A key concept in all operating systems.


 A program in execution.
 An instance of a program running on a computer.
 A time-shared user program such as a compiler is a process.
 The entity that can be assigned to and executed on a processor
 Associated with each process is its address space, a list of memory locations from some
minimum (usually 0) to some maximum, which the process can read and write. The address space
contains the executable program, the program’s data, and its stack. Also associated with each
process is some set of registers, including the program counter, stack pointer, and other hardware
registers, and all the other information needed to run the program.

The operating system is responsible for the following activities in connection with process
management:

 Creating and deleting both user and system processes


 Suspending and resuming processes
 Providing mechanisms for process synchronization
 Providing mechanisms for process communication
 Providing mechanisms for deadlock handling


Difference between Process and Program –

S.N.  Process                                     Program
1     A process is an execution instance of a     A program is a set of instructions.
      program.
2     It is an active entity.                     It is a passive entity.
3     A process executes in and resides in RAM.   A program resides in secondary memory.
4     A process exists for a limited time.        A program exists for an unlimited time.
5     For example –
      for(i=1 ; i<=10 ; i++)
          prod = prod * i ;
      This program contains one multiplication statement, "prod = prod * i", but the
      process executes the multiplication ten times, once per iteration of the "for" loop.

Abstract view of Process –

Programs consist of data and instruction.

Data →

1. Static data
Example – Variable, data structure (whose size is fixed).
2. Dynamic data
Example – Space allocated during runtime using dynamic memory allocation.

 Static variable memory is allocated at load time, not at edit or compile time.
Example – int a,b; // static data
int *p = malloc(sizeof(int)); // dynamic data
 Runtime stack – An activation record is maintained for each function call.
 Dynamic variable memory is allocated at runtime.
 Every process maintains a runtime stack.

Process state diagram –

As a process executes, it changes state. The state of a process is defined in part by the current activity
of that process.

1. New: A process that has just been created but has not yet been admitted to the pool of executable
processes by the OS. Typically, a new process has not yet been loaded into main memory, although
its process control block has been created.
2. Ready: A process that is prepared to execute when given the opportunity.
3. Running: Instructions are being executed. On a computer with a single processor, at most one process at a time can be in this state.
4. Blocked/Waiting: A process that cannot execute until some event occurs, such as the completion of
an I/O operation or reception of a signal.
5. Exit/ Terminated: A process that has been released from the pool of executable processes by the
OS, either because it halted or because it aborted for some reason.

The possible state transitions, and the events that lead to each, are as follows:

 Null → New: A new process is created to execute a program.


 New → Ready: The OS will move a process from the New state to the Ready state when it is
prepared to take on an additional process.
 Ready → Running: When it is time to select a process to run, the OS chooses one of the processes in
the Ready state. This is the job of the scheduler or dispatcher.
 Running → Exit: The currently running process is terminated by the OS if the process indicates that
it has completed, or if it aborts.
 Running → Ready: The most common reason for this transition is that the running process has
reached the maximum allowable time for uninterrupted execution; virtually all multiprogramming
operating systems impose this type of time discipline.

 Running → Blocked: A process is put in the Blocked state if it requests something for which it must
wait. A request to the OS is usually in the form of a system service call; that is, a call from the
running program to a procedure that is part of the operating system code.
 Blocked → Ready: A process in the Blocked state is moved to the Ready state when the event for
which it has been waiting occurs.
 Ready → Exit: For clarity, this transition is not shown on the state diagram. In some systems, a
parent may terminate a child process at any time. Also, if a parent terminates, all child processes
associated with that parent may be terminated.
 Blocked → Exit: The comments under the preceding item apply.

Process Control Block (PCB) –

Each process is represented in the operating system by a process control block (PCB) also called a
task control block.

It contains many pieces of information associated with a specific process –

 Process state: The state may be new, ready, running, waiting, halted and so on.
 Program counter: The counter indicates the address of the next instruction to be executed for this
process.
 PSW: U/S (user/supervisor mode), EI (Enable interrupt), Interrupt level mask, condition code etc.
 CPU registers: The registers vary in number and type, depending on the computer architecture.
They include accumulators, index registers, stack pointers, and general-purpose registers, plus
any condition-code information.
 CPU-scheduling information: This information includes a process priority, pointers to scheduling
queues, and any other scheduling parameters.
 Link: Pointer to the next PCB in the scheduling queue.
 Memory-management information: This information may include the value of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.



 Accounting information: This information includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers, and so on.
 I/O status information: This information includes the list of I/O devices allocated to the process, a
list of open files, and so on.

In brief, the PCB simply serves as the repository for any information that may vary from process to
process.

Context switching –

When CPU switches from process Pi to process Pj , state of Pi has to be saved and state of Pj has to
be loaded (from PCB).

Diagrammatic representation of context switching –

Process creation –

There are four principal events that cause processes to be created:

1. System initialization.
2. Execution of a process creation system call by a running process.
3. A user request to create a new process.
4. Initiation of a batch job.
 When an operating system is booted, typically several processes are created. Some of these are
foreground processes, that is, processes that interact with (human) users and perform work for
them. Others are background processes, which are not associated with particular users, but
instead have some specific function.
 All processes have a unique process ID; the getpid() and getppid() system calls allow a process to obtain its own and its parent’s process IDs.

System call

It provides the interface between a process and the operating system. These calls are available as assembly-language instructions. System calls for the modern Microsoft Windows platform are part of the Win32 application programmer interface (API).

Some system calls are –

 fork()
 exec()
 signal()
 kill()
 clone()
 vfork()
 wait()
 exit()

Process Suspension –

 A process is swapped out on a temporary basis and later resumed, to increase performance.
 Suspension is done for performance reasons, not for I/O.
 All three states (ready, running, blocked) can be suspended.
 There are two independent concepts here: whether a process is waiting on an event (blocked or
not) and whether a process has been swapped out of main memory (suspended or not).
 To accommodate this 2 × 2 combination, we need four states:
i. Ready: The process is in main memory and available for execution.
ii. Blocked: The process is in main memory and awaiting an event.
iii. Blocked/Suspend: The process is in secondary memory and awaiting an event.
iv. Ready/Suspend: The process is in secondary memory but is available for execution as soon
as it is loaded into main memory.

Important new transitions are the following:

 Blocked → Blocked/Suspend: If there are no ready processes, then at least one blocked process is
swapped out to make room for another process that is not blocked. This transition can be made
even if there are ready processes available, if the OS determines that the currently running
process or a ready process that it would like to dispatch requires more main memory to maintain
adequate performance.
 Blocked/Suspend → Ready/Suspend: A process in the Blocked/Suspend state is moved to the
Ready/Suspend state when the event for which it has been waiting occurs. Note that this requires
that the state information concerning suspended processes must be accessible to the OS.
 Ready/Suspend → Ready: When there are no ready processes in main memory, the OS will need to
bring one in to continue execution. In addition, it might be the case that a process in the
Ready/Suspend state has higher priority than any of the processes in the Ready state. In that case,
the OS designer may dictate that it is more important to get at the higher-priority process than to
minimize swapping.
 Ready → Ready/Suspend: Normally, the OS would prefer to suspend a blocked process rather than
a ready one, because the ready process can now be executed, whereas the blocked process is
taking up main memory space and cannot be executed. However, it may be necessary to suspend a
ready process if that is the only way to free up a sufficiently large block of main memory. Also, the
OS may choose to suspend a lower-priority ready process rather than a higher priority blocked
process if it believes that the blocked process will be ready soon.
Several other transitions that are worth considering are the following:

 New → Ready/Suspend and New → Ready: When a new process is created, it can either be added to
the Ready queue or the Ready/Suspend queue. In either case, the OS must create a process control
block and allocate an address space to the process. It might be preferable for the OS to perform
these housekeeping duties at an early time, so that it can maintain a large pool of processes that
are not blocked. With this strategy, there would often be insufficient room in main memory for a
new process; hence the use of the (New → Ready/Suspend) transition. On the other hand, we
could argue that a just-in-time philosophy of creating processes as late as possible reduces OS
overhead and allows the OS to perform the process-creation duties at a time when the system is
clogged with blocked processes anyway.

 Blocked/Suspend → Blocked: Inclusion of this transition may seem to be poor design. After all, if a
process is not ready to execute and is not already in main memory, what is the point of bringing it
in? But consider the following scenario: A process terminates, freeing up some main memory.
There is a process in the (Blocked/Suspend) queue with a higher priority than any of the
processes in the (Ready/Suspend) queue and the OS has reason to believe that the blocking event
for that process will occur soon. Under these circumstances, it would seem reasonable to bring a
blocked process into main memory in preference to a ready process.

 Running → Ready/Suspend: Normally, a running process is moved to the Ready state when its time
allocation expires. If, however, the OS is preempting the process because a higher-priority process
on the Blocked/Suspend queue has just become unblocked, the OS could move the running
process directly to the (Ready/Suspend) queue and free some main memory.

 Any State → Exit: Typically, a process terminates while it is running, either because it has
completed or because of some fatal fault condition. However, in some operating systems, a process
may be terminated by the process that created it or when the parent process is itself terminated. If
this is allowed, then a process in any state can be moved to the Exit state.

A suspended process has the following characteristics:

1. The process is not immediately available for execution.


2. The process may or may not be waiting on an event. If it is, this blocked condition is independent
of the suspend condition, and occurrence of the blocking event does not enable the process to be
executed immediately.
3. The process was placed in a suspended state by an agent: itself, a parent process, or the OS, for the
purpose of preventing its execution.
4. The process may not be removed from this state until the agent explicitly orders the removal.

Process Termination –

After a process has been created, it starts running and does whatever its job is. However, nothing lasts forever, not even processes. Sooner or later the new process will terminate, usually due to one of the following conditions:

1. Normal exit (voluntary).


2. Error exit (voluntary).
3. Fatal error (involuntary).
4. Killed by another process (involuntary).
 Most processes terminate because they have done their work.
 When a compiler has compiled the program given to it, the compiler executes a system call to tell
the operating system that it is finished.
 This call is ‘exit’ in UNIX and ‘ExitProcess’ in Windows.

The new process has –

 Memory address space.


 Instruction (Copied from parent)
 Data (Copied from parent)
 Stack (empty)

--------------------◄►--------------------


2. THREADS

Thread –

 A thread is a basic unit of CPU utilization.


 It comprises a thread ID, a program counter, a register set, and a stack.
 It shares with other threads belonging to the same process its code section, data section, and other
operating-system resources, such as open files and signals.
 A traditional (or heavyweight) process has a single thread of control.
 If the process has multiple threads of control, it can do more than one task at a time.

Need for Thread -

They play a vital role in RPC (Remote Procedure Call).



Single-threaded versus multithreaded -

 In a single-threaded process model the representation of a process includes its process control
block and user address space, as well as user and kernel stacks to manage the call/return behavior
of the execution of the process.
 While the process is running, it controls the processor registers. The contents of these registers
are saved when the process is not running.
 In a multithreaded environment, there is still a single process control block and user address
space associated with the process, but now there are separate stacks for each thread, as well as a
separate control block for each thread containing register values, priority, and other thread-
related state information.

Types of thread –

There are two broad categories of thread implementation:

User-level threads (ULTs) and kernel-level threads (KLTs)

1. User-Level Threads –
 In a pure ULT facility, all of the work of thread management is done by the application.
 The kernel is not aware of the existence of threads.
 User threads are supported above the kernel and are managed without kernel support.
 The threads library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts.
 User-level threads are fast to create and manage.



2. Kernel-Level Threads –
 In a pure KLT facility, all of the work of thread management is done by the kernel.
 There is no thread management code in the application level, simply an application
programming interface (API) to the kernel thread facility.
 Kernel threads are supported and managed directly by the operating system.
 Windows is an example of this approach.

Advantages –

 It takes less time to create a new thread in an existing process than to create a brand-new process.
 It takes less time to terminate a thread than a process.
 Switching between threads is faster than a normal context switch.
 Threads enhance efficiency in communication between different executing programs, since no kernel involvement is required.

Multithreading Models –

i. Many-to-One Model –
The many-to-one model maps many user-level threads to one kernel thread. Thread management is done by the thread library in user space, so it is efficient; but the entire process will block if a thread makes a blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.



ii. One-to-One Model –


The one-to-one model maps each user thread to a kernel thread. It provides more concurrency
than the many-to-one model by allowing another thread to run when a thread makes a blocking
system call; it also allows multiple threads to run in parallel on multiprocessors. The only
drawback to this model is that creating a user thread requires creating the corresponding kernel
thread. Because the overhead of creating kernel threads can burden the performance of an
application, most implementations of this model restrict the number of threads supported by the
system.

iii. Many-to-Many Model –


The many-to-many model (Figure 5.4) multiplexes many user-level threads to a smaller or equal
number of kernel threads. The number of kernel threads may be specific to either a particular
application or a particular machine.


Concurrent programming –

Assumptions –

 2 or more threads.
 Each executes in parallel.
 We can’t predict exact running speed.
 The threads can interact via access to shared variable.

Example –

 One thread writes a variable.


 The other thread reads from the same variable.
 Problem – nondeterminism: the relative order of one thread's read and the other thread's write determines the result.

Security of threads –

Since there is extensive sharing among threads, there is a potential security problem. It is quite possible for one thread to overwrite the stack of another thread, although this is very unlikely since threads are meant to cooperate on a single task.

Advantages of threads –

 The best advantage is that a user-level threads package can be implemented on an operating system that does not support threads.
 They do not require modification to operating systems.
 They have a simple representation i.e. each thread is represented by a PC, register, stack and small
control block all stored in the user process address space.
 They have a simple management i.e. thread creation, switching between threads and
synchronization between threads can all be done without intervention of the kernel.
 They are fast and efficient, i.e. a thread switch is not much more expensive than a procedure call.

Advantages and disadvantages of threads over multiple processes –

Advantages –

 Context switching.
 Sharing

Disadvantages –

 Blocking.


Disadvantages of Threads –

 There is a lack of coordination between threads and the kernel. Therefore a process as a whole gets one time slice, irrespective of whether it has one thread or 1000 threads within it. Each thread must relinquish control to the other threads.
 They require non-blocking system calls, i.e. a multithreaded kernel. Otherwise the entire process will be blocked in the kernel, even if there are runnable threads left in the process. If one thread causes a page fault, the whole process blocks.

Process scheduling –

 The objective of multiprogramming is to have some process running at all times, to maximize CPU
utilization.
 The objective of time sharing is to switch the CPU among processes so frequently that users can
interact with each program while it is running.
 The aim of processor scheduling is to assign processes to be executed by the processor or
processors over time, in a way that meets system objectives, such as response time, throughput,
and processor efficiency.

Scheduling queue –

Queues are implemented in main memory and are maintained by the process manager.

1. Ready queue.
2. Job queue.
i. Device queue
ii. Event queue
 Job queue consists of all processes in the system.
 The processes that are residing in main memory and are ready and waiting to execute are kept on
a list called the ready queue.
 The list of processes waiting for a particular I/O device is called a device queue.

Queuing diagram –

A common representation for a discussion of process scheduling is a queuing diagram.

 Processes entering the system are put into a job queue.
 Processes in memory waiting to be executed are kept in a list called the ready queue.
 Processes waiting for a particular I/O device are placed in an input/output queue.


Scheduler –

A process migrates among the various scheduling queues throughout its lifetime. The operating
system must select, for scheduling purposes, processes from these queues in some fashion. The
selection process is carried out by the appropriate scheduler.

Types of Scheduler –

Scheduling activity is broken down into three separate functions:

 Long-term scheduling
 Medium-term scheduling
 Short-term scheduling

The names suggest the relative time scales with which these functions are performed.
Operating System

A. Short term scheduling –


 Short-term scheduling is the actual decision of which ready process to execute next.
 The short-term scheduler, or CPU scheduler, selects from among the processes that are
ready to execute and allocates the CPU to one of them.
 It is also known as CPU scheduler.
 The short-term scheduler is invoked whenever an event occurs that may lead to the
blocking of the current process or that may provide an opportunity to preempt a currently
running process in favor of another.
 Examples of such events include -
 Clock interrupts
 I/O interrupts
 Operating system calls
 Signals (e.g., semaphores)
B. Medium - term scheduling -
 Medium-term scheduling is a part of the swapping function.
 This is a decision whether to add a process to those that are at least partially in main
memory and therefore available for execution.

 The key idea behind a medium-term scheduler is that sometimes it can be advantageous to
remove processes from memory and thus reduce the degree of multiprogramming.
 Later, the process can be reintroduced into memory, and its execution can be continued
where it left off. This scheme is called swapping.
 The process is swapped out, and is later swapped in, by the medium-term scheduler.

C. Long - term scheduling –


 Long-term scheduler or job scheduler is performed when a new process is created.
 Long-term scheduler selects processes from this pool and loads them into memory for
execution.
 This is a decision whether to add a new process to the set of processes that are currently active.
 The long-term scheduler executes much less frequently; minutes may separate the creation of one new process and the next.
 The long-term scheduler controls the degree of multiprogramming.



Context Switch –

 Switching the CPU to another process requires saving the state of the old process and loading the
saved state of the new process. This task is known as a context switch.
 The context of a process is represented in the PCB of the process; it includes the values of the CPU registers, the process state, and memory-management information.
 Context-switch time varies from 1 to 1000 microseconds.
 Context-switch times are highly dependent on hardware support.
 On hardware with multiple register sets, a context switch simply requires changing the pointer to the current register set.

Levels of scheduling –

(figure showing the levels of scheduling)

3. INTER - PROCESS COMMUNICATION (IPC) & SYNCHRONIZATION -

Inter – process communication -

 It provides a mechanism to allow processes to communicate and to synchronize their actions


without sharing the same address space.
 IPC is useful in a distributed environment.
 Example - Chat program used on the World Wide Web.
 The basic form of communication between processes or threads in a microkernel OS is messages.
A message includes a header that identifies the sending and receiving process and a body.
 For any two processes to communicate, a medium is required:
 a hardware resource or a software resource.

IPC is best provided by a message-passing system, and message passing systems can be defined in
many different ways –

The function of a message system is to allow processes to communicate with one another without
the need to resort to shared data. An IPC facility provides at least the two operations
send(message) and receive(message).

If processes P and Q want to communicate, they must send messages to and receive messages
from each other; a communication link must exist between them.

Several methods for logically implementing a link and the send()/receive() operations are:
 Direct or indirect communication.
 Synchronous or asynchronous communication.
 Automatic or explicit buffering.

Processes that want to communicate can use either direct or indirect communication.

In direct communication, each process that wants to communicate must explicitly name the
recipient or sender of the communication. In this scheme, the send() and receive() primitives are
defined as:
 send(P, message) — Send a message to process P.
 receive(Q, message) — Receive a message from process Q.

In indirect communication, the messages are sent to and received from mailboxes, or ports.
The send() and receive() primitives are defined as follows:
 send(A, message) —Send a message to mailbox A.
 receive(A, message) — Receive a message from mailbox A.

Buffering –

Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. There are three ways to implement such a queue:

 Zero capacity: The queue has maximum length 0; thus, the link cannot have any messages
waiting in it. In this case, the sender must block until the recipient receives the message.
 Bounded capacity: The queue has finite length n; thus, at most n messages can reside in it. If
the queue is not full when a new message is sent, the latter is placed in the queue and the
sender can continue execution without waiting. The link has a finite capacity, however. If
the link is full, the sender must block until space is available in the queue.
 Unbounded capacity: The queue has infinite length; thus, any number of messages can wait
in it. The sender never blocks.

Process synchronization –

Process synchronization is a mechanism to ensure systematic sharing of resources among concurrent processes.

 Problem due to lack of synchronization –


 Inconsistency of data.
 Loss of data.
 Deadlock.
 There are 2 types of processes –
 Independent –
An independent process cannot affect or be affected by the execution of another process.
 Cooperating –
A cooperating process can affect or be affected by the execution of another process.

Two synchronization problems associated with inter – process communication –

 Race condition
 Deadlock

The situation, where several threads access and manipulate the same data concurrently, and
where the outcome of the execution depends on the particular order in which the access takes
place, is called a race condition.

Avoiding Race Condition –

Critical section –

 A section of code or set of operations in which a process may be changing shared variables, updating a common file or a table, etc.


 C.S. is that part of a program where shared resources are accessed.
 Non-critical section is that part of a program where no shared resources are accessed.

Architecture –

 If several processes operate on common data in a common file, then only one process is allowed to be in the C.S. at a time; once a process starts its operation in the C.S., it completes the entire operation before it exits through the exit section.
 A C.S. environment contain –
 Entry section
 Critical section
 Exit section
 A solution to the critical section problem must satisfy the three requirements.
 Mutual exclusion
 Progress
 Bounded waiting
 Mutual exclusion: No more than one process can execute in its critical section at a time.
 Progress: A process running outside its critical section should not block other processes.
If no process is executing in its critical section and there exist some processes that wish to enter
their critical section, then only those processes that are not executing in the critical section can
participate in the decision of which will enter its critical section next and this selection can’t be
postponed indefinitely.
 Bounded waiting: No process should have to wait forever to enter its critical section.

Synchronization Mechanism –

There are two mechanisms –

 Mutual exclusion with busy waiting (software solution, user mode)
 Mutual exclusion without busy waiting (hardware solution, kernel mode)

Mutual exclusion without busy waiting (hardware solution) –

This method disables interrupts (DI) while a process is modifying a shared variable. It is best suited for a uniprocessor machine.

Mutual exclusion with busy waiting (Software solution) -

1) Strict alternation –
Algorithm 1 –

void P0(void)
{
    while (1)
    {
        non_cs( );
entry:  while (turn != 0);   /* busy wait */
        /* critical section */
exit:   turn = 1;
    }
}

void P1(void)
{
    while (1)
    {
        non_cs( );
entry:  while (turn != 1);   /* busy wait */
        /* critical section */
exit:   turn = 0;
    }
}

Algorithm 2 –

/* Process P0 */
flag[0] = true;
while (flag[1])
{
    BW;   /* busy wait */
}
/* critical section */
flag[0] = false;

/* Process P1 */
flag[1] = true;
while (flag[0])
{
    BW;   /* busy wait */
}
/* critical section */
flag[1] = false;

2) Peterson’s solution / Dekker’s algorithm –

/* Process P0 */
flag[0] = true;
turn = 1;
while (flag[1] && turn == 1)
{
    BW;   /* busy wait */
}
/* critical section */
flag[0] = false;

/* Process P1 */
flag[1] = true;
turn = 0;
while (flag[0] && turn == 0)
{
    BW;   /* busy wait */
}
/* critical section */
flag[1] = false;

Producer & consumer problem -

#define n 100

int Buffer[n];
int count = 0;

void producer(void)
{
    int itemp, in = 0;
    while (1)
    {
        produceitem(itemp);
        while (count == n);        /* buffer full: busy wait */
        Buffer[in] = itemp;
        in = (in + 1) % n;
        count = count + 1;
    }
}

void consumer(void)
{
    int itemc, out = 0;
    while (1)
    {
        while (count == 0);        /* buffer empty: busy wait */
        itemc = Buffer[out];
        out = (out + 1) % n;
        count = count - 1;
        processitem(itemc);
    }
}
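The updates to count above are not atomic, so a producer and a consumer running concurrently can interleave and corrupt count. As a contrast, here is a sketch of the same bounded buffer made safe with POSIX semaphores (two counting semaphores plus a binary mutex); the buffer size, item count, and the run_bounded_buffer harness are illustrative assumptions, not part of the original code:

```c
#include <pthread.h>
#include <semaphore.h>

#define N 10        /* buffer size (illustrative) */
#define ITEMS 100   /* items to transfer (illustrative) */

static int buffer[N];
static int in = 0, out = 0;
static long consumed_sum = 0;

static sem_t empty;   /* counts empty slots, starts at N  */
static sem_t full;    /* counts filled slots, starts at 0 */
static sem_t mutex;   /* binary semaphore guarding the buffer */

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 1; i <= ITEMS; i++) {
        sem_wait(&empty);            /* block while the buffer is full */
        sem_wait(&mutex);
        buffer[in] = i;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);             /* one more item available */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 1; i <= ITEMS; i++) {
        sem_wait(&full);             /* block while the buffer is empty */
        sem_wait(&mutex);
        consumed_sum += buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);            /* one more free slot */
    }
    return NULL;
}

long run_bounded_buffer(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return consumed_sum;             /* 1 + 2 + ... + ITEMS */
}
```

Because every buffer access is bracketed by sem_wait/sem_post, no item is lost or double-counted regardless of how the two threads interleave.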

A new synchronization technique called “SEMAPHORE” was introduced by Dijkstra.

SEMAPHORE –

 A semaphore could have the value 0, indicating that no wakeups were saved, or some positive
value if one or more wakeups were pending.
 A semaphore may be initialized to a nonnegative integer value.
 Semaphore operations execute in kernel mode.
 It is a user-defined data type.
 Based on the range of values, semaphores are divided into 2 categories –



 Binary semaphore – takes only 2 values (range between 0 and 1).
 Counting semaphore – ranges over an unrestricted domain (-∞ to ∞).
 It is an OS resource.
 Semaphore operations are wait (P) and signal (V).
 A semaphore can be accessed only through the wait and signal operations.

 wait(S) : while(S<=0)
{
//keep testing
}
S=S–1
 signal(S) : S = S + 1

Types of semaphore –

A. Binary semaphore –
It is also known as a “mutex”. A binary semaphore is initialized by the O.S. to 1.
The wait operation decrements the value by 1; the signal operation increments the value by 1.
A binary semaphore can take the value 0 or 1.

int S = 1;

wait(S)
{
    while (S <= 0)
    {
        BW;   /* busy wait */
    }
    S--;
}

signal(S)
{
    S++;
}

For an ideal solution, requirements are -


1) Mutual exclusion.
2) Progress.
3) Bounded waiting.

Advantages –

 Its implementation is very easy.


Disadvantages –

 It does not meet the requirement of bounded waiting.



 A process waiting to enter the CS will perform busy waiting, thus wasting CPU cycles.

B. Counting semaphore –

A counting semaphore holds an integer value and a pointer to a process queue; the queue holds the process control blocks (PCBs) of all those processes that are waiting to enter their critical sections. This queue is implemented as a FCFS queue, so that the waiting processes are served in FCFS order.

Implementation –

struct CSEMAPHORE
{
    int value;
    QueueType L;
};

CSEMAPHORE S;
S.value = 4;

DOWN(S);
< C.S >
UP(S);

 The code of DOWN is in the kernel, like a system call.
 The DOWN operation decrements the value of the semaphore.
 If the resulting value is positive or zero, the DOWN operation completes successfully. A negative value indicates the number of blocked processes; in that case the calling process blocks.

 DOWN operation defined as –

DOWN( CSEMAPHORE S )
{
    S.value = S.value - 1;
    if (S.value < 0)
    {
        put this process (PCB) in S.L( ) and block it;
        sleep( );
    }
}

 UP operation defined as –

UP( CSEMAPHORE S )
{
    S.value = S.value + 1;
    if (S.value <= 0)
    {
        select a process from S.L( )
        &
        wakeup( );
    }
}
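The DOWN/UP bookkeeping above can be checked with a sequential simulation; the csem struct, the fixed 16-slot FCFS array, and the integer process ids below are illustrative assumptions standing in for real PCBs and actual blocking:

```c
#define QCAP 16   /* illustrative queue capacity */

typedef struct {
    int value;            /* negative => -value processes are blocked */
    int blocked[QCAP];    /* FCFS queue of blocked process ids */
    int head, tail;
} csem;

void csem_init(csem *s, int v)
{
    s->value = v;
    s->head = s->tail = 0;
}

/* DOWN: returns 1 if pid may proceed, 0 if it blocks (queued FCFS) */
int down(csem *s, int pid)
{
    s->value--;
    if (s->value < 0) {
        s->blocked[s->tail++ % QCAP] = pid;
        return 0;
    }
    return 1;
}

/* UP: returns the pid woken from the FCFS queue, or -1 if none waiting */
int up(csem *s)
{
    s->value++;
    if (s->value <= 0)
        return s->blocked[s->head++ % QCAP];
    return -1;
}
```

Initializing the value to 2 and issuing four DOWNs lets the first two callers proceed, queues the next two, and subsequent UPs wake them in FCFS order.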

Advantages –

 The waiting processes will be permitted to enter their critical sections in FCFS order, so the requirement of bounded waiting is met.
 CPU cycles are saved here, as a waiting process does not perform any busy waiting.

Disadvantages –

 Counting semaphore is more complex to implement, since it involves implementation of a FCFS queue.
 More context switching, so more overhead involved.
--------------------◄►--------------------

4. CONCURRENCY

Concurrency control –

Concurrency –

 Real –
Achieved through multiprocessors, array processors, and vector processors (physical concurrency).
 Pseudo –
Achieved by interleaving processes on a single processor.

A concurrent system can run a collection of processes concurrently. It is based on a multi-programmed O.S. The system must provide mechanisms for process synchronization and communication to support concurrent execution of processes.

Dependency graph –

Example –
S1 : a = b + c ;
S2 : d = e + f ;
S3 : k = a + d ;
S4 : l = k * m ;

1. Flow dependency / data dependency –


I:a=b+c
J:k=a+d

flow dependency : I → J

I:R{b,c},W{a}
J:R{a},W{k}

W( I ) ∩ R( J ) ≠ ∅

2. Anti dependency –

I : b = a + c
J : a = k + l

Anti dependency : I → J

I : W { b } , R { a , c }
J : W { a } , R { k , l }

R( I ) ∩ W( J ) ≠ ∅

3. Output dependency –
I:a=b+c
J:a=d+e

Output dependency : I → J

W( I ) ∩ W( J ) ≠ ∅
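All three dependency tests reduce to asking whether two variable sets intersect (a non-empty intersection means the statements are dependent). A minimal sketch, assuming read/write sets are represented as arrays of variable names:

```c
#include <string.h>

/* returns 1 if the two variable sets share at least one name */
int intersects(const char *a[], int na, const char *b[], int nb)
{
    for (int i = 0; i < na; i++)
        for (int j = 0; j < nb; j++)
            if (strcmp(a[i], b[j]) == 0)
                return 1;   /* non-empty intersection => dependence */
    return 0;
}
```

Flow dependence I → J holds when W(I) and R(J) intersect; anti dependence when R(I) and W(J) intersect; output dependence when W(I) and W(J) intersect. With the example above, W(S1) = {a} and R(S3) = {a, d} intersect, so S1 → S3 is a flow dependence.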

Fork and join construct –

 The fork instruction produces 2 concurrent executions in a program.


 The join instruction recombines two or more concurrent computations into one.
 Join defined as –

join(count)
{
    count = count - 1;
    if (count != 0)
        exit;
}

Parbegin – parend / begin – end / co – begin – co – end –

begin & end

It represents sequential execution.

parbegin & parend (co-begin & co-end)

It represents parallel execution.

--------------------◄►--------------------


5. DEADLOCKS

Deadlock –

 A process requests resources; if the resources are not available at that time, the process enters a
wait state. Waiting processes may never again change state, because the resources they have
requested are held by other waiting processes. This situation is called a deadlock.
 Deadlock is permanent because none of the events is ever triggered.
 A set of processes is deadlocked if each process in the set is waiting for an event that only another
process in the set can cause.
 Another name for deadlock is LOCK – UP.
 Example –

 Pi asks Pj to release Rb , whereas Pj asks Pi to release Ra.


 Process Pi & Pj get blocked, leading to deadlock, where both process goes to infinite blocking.

System model –

Let the resource types be R1 , R2 , R3 . . . . . Rm .

Each resource type Ri has Wi instances. Each process utilizes a resource as follows –

a) Request –
If the resource is not available when it is requested, the requesting process is forced to wait.


b) Assignment –
The operating system will assign to the requesting process an instance of the requested resource,
whenever it is available. Then, the process comes out of its waiting state.

c) Use –
The process uses the assigned resource.

d) Release –
The processes release the resource.

 The request and release of resources are system calls.
 Request and release of resources that are not managed by the operating system can be accomplished through wait and signal operations on semaphores.

Deadlock characterization / condition for deadlock -

a) Mutual exclusion –
 Each resource is either currently assigned to exactly one process or is available.
 Only one process may use a resource at a time. No process may access a resource unit that
has been allocated to another process.
 If another process requests that resource, the requesting process must be delayed until the
resource has been released.

b) Hold and wait –


 A process may hold allocated resources while awaiting assignment of other resources.

c) No preemption –
 Resources previously granted cannot be forcibly taken away from a process. They must be
explicitly released by the process holding them.

d) Circular wait –
 A closed chain of processes exists, such that each process holds at least one resource
needed by the next process in the chain.

Methods of handling deadlock –

1. No deadlock
2. Allow deadlock

 No Deadlock occur –
i. Prevention
ii. Avoidance
 Deadlock occur –
i. Detection & recovery

Deadlock detection –

Resource allocation graph (RAG) –

 The resource allocation graph is a directed graph that depicts a state of the system of resources
and processes, with each process and each resource represented by a node.
 The graph consists of 2 sets, a set of vertices V and a set of edges E.
 The set of vertices V is partitioned into two different types of nodes:
P = {P1, P2, . . . . . . . . , Pn}, the set consisting of all the active processes in the system, and
R = {R1, R2, . . . . . . . . . , Rm}, the set consisting of all resource types in the system.
 A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that process
Pi requested an instance of resource type Rj and is currently waiting for that resource.
 A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an
instance of resource type Rj has been allocated to process Pi.

 A directed edge Pi → Rj is called a request edge & Rj → Pi is called an assignment edge.

Examples –

 One instance of resource type R1.


 Two instances of resource type R2.
 One instance of resource type R3.
 Four instances of resource type R4
 Process states:
 Process P1 is holding an instance of resource type R2 and is waiting for an instance of
resource type R1.
 Process P2 is holding an instance of R1 and R2 and is waiting for an instance of resource
type R3.
 Process P3 is holding an instance of R3.


Note –

 If the graph contains no cycle, then no deadlock occurs.
 If the graph contains a cycle:
 If there is only one instance per resource type, then deadlock occurs.
 If there are several instances per resource type, then there is a possibility of deadlock.
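For the single-instance case, the cycle test above can be sketched as a depth-first search over a wait-for graph; the adjacency-matrix representation and the MAXN limit are illustrative assumptions:

```c
#define MAXN 8   /* illustrative limit on graph size */

int adj[MAXN][MAXN];   /* adj[u][v] = 1 : process u waits for process v */

/* DFS colouring: 0 = unvisited, 1 = on current path, 2 = finished */
static int dfs(int u, int n, int state[])
{
    state[u] = 1;
    for (int v = 0; v < n; v++) {
        if (!adj[u][v])
            continue;
        if (state[v] == 1)
            return 1;                 /* back edge: a cycle exists */
        if (state[v] == 0 && dfs(v, n, state))
            return 1;
    }
    state[u] = 2;
    return 0;
}

/* returns 1 if the n-node wait-for graph contains a cycle (deadlock) */
int has_cycle(int n)
{
    int state[MAXN] = {0};
    for (int u = 0; u < n; u++)
        if (state[u] == 0 && dfs(u, n, state))
            return 1;
    return 0;
}
```

A chain P0 → P1 → P2 → P0 is reported as a cycle (deadlock); removing any one edge breaks it.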

Deadlock prevention –

1. Mutual exclusion
2. Hold and wait
3. No preemption
4. Circular wait

Mutual exclusion –

 If no resource were ever assigned exclusively to a single process, we would never have
deadlocks.
 This must hold for non – sharable resources.

Hold and wait –


 If we can prevent processes that hold resources from waiting for more resources, we can eliminate deadlocks.

 The hold-and-wait condition can be prevented by requiring that a process request all of its
required resources at one time and blocking the process until all requests can be granted
simultaneously.

No preemption –

 If a process is holding some resources and requests another resource that cannot be
immediately allocated to it (that is, the process must wait), then all resources currently being
held are preempted.
 The preempted resources are added to the list of resources for which the process is waiting.
 The process will be restarted only when it can regain its old resources, as well as the new ones
that it is requesting.
 There are two approach –
 Self
 Force

Circular wait –

 One way to ensure that this condition never holds is to impose a total ordering of all resource
types and to require that each process requests resources in an increasing order of
enumeration.
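A minimal sketch of this total-ordering rule: give every resource type a number and accept a process's acquisition trace only if it requests resources in strictly increasing order (the integer-trace representation is an assumption for illustration):

```c
/* returns 1 if an acquisition trace respects the global resource
   numbering (strictly increasing), 0 otherwise */
int respects_ordering(const int trace[], int n)
{
    for (int i = 1; i < n; i++)
        if (trace[i] <= trace[i - 1])
            return 0;   /* out-of-order request could close a circular wait */
    return 1;
}
```

If every process obeys this discipline, no cycle of waiting processes can form, so the circular-wait condition never holds.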

Deadlock avoidance –

 The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
 The deadlock avoidance algorithm dynamically examines the resource allocation state to
ensure that there can never be a circular wait condition.
 Resource allocation state is defined by the number of available and allocated resource and the
maximum demands of the process.

Safe state –

 A safe state is not a deadlocked state.


 A state is safe if the system can allocate resources to each process in some order and still avoid
a deadlock.
 A system is in a safe state only if there exists a safe sequence.
 A sequence of processes <P1, P2, . . . . . . . . . , Pn> is a safe sequence for the current allocation
state if, for each Pi, the resource requests that Pi can still make can be satisfied by the currently
available resources plus the resources held by all Pj , with j < i.
 When they have finished, Pi can obtain all of its needed resources, complete its designated task,
return its allocated resources, and terminate. When Pi terminates, Pi+1 can obtain its needed
resources, and so on.

Resource allocation graph for deadlock avoidance –

 An edge from Pi → Rj indicates that process Pi may request resource Rj at some time in the future.
 This is called claim edge.
 This edge resembles a request edge in direction but is represented by a dashed line.
 When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a request edge.
 Similarly, when a resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to a
claim edge Pi → Rj.

Banker’s algorithm –

 Banker’s algorithm is used to keep the system always in a safe state.
 It is also known as the “Safety algorithm”.

 The banker’s algorithm considers each request as it occurs, and sees if granting it leads to a safe
state. If it does, the request is granted; otherwise, it is postponed until later.
OR
 When a user requests a set of resources, the system must determine whether the allocation of
these resources will leave the system in a safe state. If it will, the resources are allocated;
otherwise, the process must wait until some other process releases enough resources.

 Several data structures must be maintained to implement the banker's algorithm –


Let n be the number of processes in the system and m be the number of resource types.
We need the following data structures:
 Available:
A vector of length m indicates the number of available resources of each type.
If Available[j] = k,
There are k instances of resource type Rj available.
 Max:
An n × m matrix defines the maximum demand of each process.
If Max[i][j] = k,
Then process Pi may request at most k instances of resource type Rj.
 Allocation:
An n × m matrix defines the number of resources of each type currently allocated to each
process.
If Allocation[i][j] = k,
Then, process Pi is currently allocated k instances of resource type Rj.
 Need:
An n × m matrix indicates the remaining resource need of each process.
If Need[i][j] = k,
Then process Pi may need k more instances of resource type Rj to complete its task.
Note that Need[i][j] = Max[i][j] − Allocation[i][j].

 These data structures vary over time in both size and value.

Safety Algorithm –

The algorithm for finding out whether or not a system is in a safe state can be described as
follows:

Step 1 : Let Work and Finish be vectors of length m and n, respectively.

Initialize Work = Available and Finish[i] = false for i = 0, 1, …, n−1.

Step 2 : Find an i such that both

a. Finish[i] == false
b. Needi ≤ Work
If no such i exists, go to step 4.
Step 3 : Work = Work + Allocationi

Finish[i] = true

Go to step 2.

Step 4 : If Finish[i] == true for all i,

then the system is in a safe state.

This algorithm may require an order of m × n² operations to decide whether a state is safe.
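The four steps above can be sketched in Python. This is a minimal illustration (not from the original text); the data passed in at the bottom is the five-process snapshot used in the worked example later in this section, and the function name is my own:

```python
# Sketch of the safety algorithm. Need is derived as Max - Allocation.
def is_safe(available, max_claim, allocation):
    n, m = len(max_claim), len(available)
    need = [[max_claim[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work, finish, seq = available[:], [False] * n, []    # Step 1
    while True:
        for i in range(n):                               # Step 2
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):                       # Step 3: P_i finishes
                    work[j] += allocation[i][j]          # and releases resources
                finish[i] = True
                seq.append(i)
                break
        else:
            break                                        # Step 4: no such i
    return all(finish), seq

ok, seq = is_safe([3, 3, 2],
                  [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
                  [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]])
print(ok, seq)   # True [1, 3, 0, 2, 4]
```

Note that a safe state can admit several safe sequences; this lowest-index-first search finds <P1, P3, P0, P2, P4>, which is as valid as the sequences quoted in the example.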

Resource-Request Algorithm –

Let Requesti be the request vector for process Pi.

If Requesti[j] == k, then process Pi wants k instances of resource type Rj.

When a request for resources is made by process Pi, the following actions are taken:

Step 1 : If Requesti ≤ Needi, go to step 2.

Otherwise, raise an error condition, since the process has exceeded its maximum
claim.

Step 2 : If Requesti ≤ Available, go to step 3.

Otherwise, Pi must wait, since the resources are not available.

Step 3 : Have the system pretend to have allocated the requested resources to process P i by
modifying the state as follows:

Available = Available − Requesti;

Allocationi = Allocationi + Requesti;

Needi = Needi − Requesti;

If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is
allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti, and the
old resource-allocation state is restored.
Example –

Consider a system with five processes P0 through P4 and three resource types A, B, C. Resource
type A has 10 instances, resource type B has 5 instances, and resource type C has 7 instances.
Suppose that, at time T0, the following snapshot of the system has been taken:

Process Allocation Max Available


A B C A B C A B C
P0 0 1 0 7 5 3 3 3 2
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3

The content of the Need matrix is defined to be Max - Allocation and is –

Process Allocation Max Available Need


A B C A B C A B C A B C
P0 0 1 0 7 5 3 3 3 2 7 4 3
P1 2 0 0 3 2 2 1 2 2
P2 3 0 2 9 0 2 6 0 0
P3 2 1 1 2 2 2 0 1 1
P4 0 0 2 4 3 3 4 3 1

We can say that the system is currently in a safe state. Indeed, the sequence <P1, P3, P4, P2, P0>
satisfies the safety criteria. Suppose now that process P1 requests one additional instance of
resource type A and two instances of resource type C, so Request1 = (1,0,2). To decide whether
this request can be immediately granted, we first check that Request1 ≤ Available — that is,
(1,0,2) ≤ (3,3,2), which is true. We then pretend that this request has been fulfilled.

The new state is –

Process Allocation Max Available Need


A B C A B C A B C A B C
P0 0 1 0 7 5 3 2 3 0 7 4 3
P1 3 0 2 3 2 2 0 2 0
P2 3 0 2 9 0 2 6 0 0
P3 2 1 1 2 2 2 0 1 1
P4 0 0 2 4 3 3 4 3 1

The system state is in safe state. The sequence <P1, P3, P4, P0, P2> satisfies our safety requirement.
Hence, we can immediately grant the request of process P1.

Note that when the system is in this state, a request for (3, 3, 0) by P4 cannot be granted, since the
resources are not available. Furthermore, a request for (0, 2, 0) by P0 cannot be granted, even
though the resources are available, since the resulting state is unsafe.
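The resource-request algorithm can be sketched against this post-grant state to verify both claims above: P4's request (3, 3, 0) must wait, and P0's request (0, 2, 0) is refused because the resulting state is unsafe. The helper names `try_request` and `safe` are my own, not from the text:

```python
# Safety check: repeatedly finish any process whose Need fits in Work.
def safe(available, allocation, need):
    work, finish = available[:], [False] * len(need)
    progress = True
    while progress:
        progress = False
        for i in range(len(need)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = progress = True
    return all(finish)

def try_request(pid, req, available, allocation, need):
    if any(r > n for r, n in zip(req, need[pid])):       # Step 1
        raise ValueError("process exceeded its maximum claim")
    if any(r > a for r, a in zip(req, available)):       # Step 2: must wait
        return False
    for j, r in enumerate(req):                          # Step 3: pretend
        available[j] -= r
        allocation[pid][j] += r
        need[pid][j] -= r
    if safe(available, allocation, need):
        return True                                      # grant the request
    for j, r in enumerate(req):                          # unsafe: roll back
        available[j] += r
        allocation[pid][j] -= r
        need[pid][j] += r
    return False

# State after Request1 = (1,0,2) has been granted (tables above).
avail = [2, 3, 0]
alloc = [[0, 1, 0], [3, 0, 2], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [0, 2, 0], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(try_request(4, [3, 3, 0], avail, alloc, need))   # False: not available
print(try_request(0, [0, 2, 0], avail, alloc, need))   # False: state unsafe
```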

Deadlock detection algorithm –

Deadlock detection is used by employing an algorithm that tracks the circular waiting and killing
one or more processes so that the deadlock is removed. The system state is examined periodically
to determine if a set of processes is deadlocked. A deadlock is resolved by aborting and restarting
a process, relinquishing all the resources that the process held.

 For single instance of each resource type

If in the RAG every resource has only one instance (a single instance), then we define a deadlock
detection algorithm that uses a variant of the RAG, called a wait-for graph.

 How can we get this graph from RAG?

We can get this by removing the nodes of type resource and collapsing the appropriate edges.
Wait-for-graph has a cycle then there is deadlock in the system.

To detect deadlocks, the system needs to maintain the wait-for graph and to periodically
invoke an algorithm. The complexity of this algorithm is O(n²), where n is the number of vertices in
the graph.

Consider the RAG –

We draw the wait – for – graph by removing all nodes that represent resources and collapsing their
edges.

The system is in deadlock state –

Cycle → P1, P2, P4, P1

Cycle → P1, P2, P3, P4, P1
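A wait-for graph can be tested for deadlock with a standard depth-first search for a back edge. This sketch is illustrative (not from the original text); the edges are taken from the two cycles listed above:

```python
# DFS cycle detection on a wait-for graph (processes only, no resource nodes).
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}
    def dfs(v):
        color[v] = GRAY                    # v is on the current DFS path
        for w in graph.get(v, []):
            if color[w] == GRAY:           # back edge -> cycle -> deadlock
                return True
            if color[w] == WHITE and dfs(w):
                return True
        color[v] = BLACK                   # fully explored, no cycle via v
        return False
    return any(color[v] == WHITE and dfs(v) for v in graph)

# Edges consistent with the cycles above: P1->P2, P2->P3, P2->P4, P3->P4, P4->P1
wfg = {"P1": ["P2"], "P2": ["P3", "P4"], "P3": ["P4"], "P4": ["P1"]}
print(has_cycle(wfg))   # True: the system is deadlocked
```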

--------------------◄►--------------------

6. CPU SCHEDULING

CPU scheduling -

 CPU scheduling is the basis of multi programmed operating systems.


 Its function is to decide which process to run next.
 By switching the CPU among processes, the operating system can make the computer more
productive.
 When a computer is multi programmed, it frequently has multiple processes competing for the
CPU at the same time. This situation occurs whenever two or more processes are simultaneously
in the ready state. If only one CPU is available, a choice has to be made which process to run next.
The part of the operating system that makes the choice is called the scheduler and the algorithm it
uses is called the scheduling algorithm.

Scheduling Algorithm Goals –

All systems –

 Fairness - giving each process a fair share of the CPU


 Policy enforcement - seeing that stated policy is carried out
 Balance - keeping all parts of the system busy

Batch systems -

 Throughput - maximize jobs per hour


 Turnaround time - minimize time between submission and termination
 CPU utilization - keep the CPU busy all the time

Interactive systems -

 Response time - respond to requests quickly


 Proportionality - meet users’ expectations

Real-time systems -

 Meeting deadlines - avoid losing data


 Predictability - avoid quality degradation in multimedia systems

Terminology –
i. Throughput –
Throughput is the number of jobs per hour / per unit time that the system completes.

ii. CPU utilization –


Keep the CPU as busy as possible.

iii. Arrival time –


The time at which a process moves from the new state to the ready state, i.e., arrives in the ready queue.

iv. Turn - around time –


The interval from the time of submission of a process to the time of completion is the turnaround
time.
TAT = CT – AT
Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.
TAT = waiting time in ready queue + waiting time in waiting queue for I/O + Execution time.

v. Waiting time –
Waiting time is the sum of the periods spent waiting in the ready queue.
WT = TAT – BT
WT = CT – AT – BT
It should be minimum.

vi. Response time –


Amount of time it takes from when a request was submitted until the first response is produced.
Response time should be minimum.

vii. Deadline –
The time by which a process must complete its execution; a scheduler should maximize the number of processes that meet their deadlines.

Types of scheduling –

I. Preemptive
II. Non – preemptive

Difference -

S.N.  Non – preemptive scheduling                   Preemptive scheduling

1     If once a process has been allocated the      In preemptive scheduling the CPU can
      CPU, then the CPU cannot be taken away        be taken away before the completion
      from that process.                            of the process.

2     No preference is given when a higher          It is useful when a higher priority job
      priority job comes.                           comes, as here the CPU can be
                                                    snatched from a lower priority
                                                    process.

3     The treatment of all processes is fair.       The treatment of all processes is not
                                                    fair, as CPU snatching is done either
                                                    due to constraints or due to a higher
                                                    priority process requesting execution.

4     It is a cheaper scheduling method.            It is a costlier scheduling method.
      First come first served is an example.        Round – robin is an example.

Dispatcher –

The dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler. This function involves the following:

 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program

Scheduling algorithms –

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to
be allocated the CPU. There are many different CPU scheduling algorithms.

1. First come – First serve scheduling (FCFS) –

 The process that requests the CPU first is allocated the CPU first.
 First-come, first-served (FCFS) scheduling algorithm is the simplest CPU-scheduling algorithm.

It is a non – preemptive algorithm.


The average waiting time under the FCFS policy is often quite long.

Example – Consider the following processes –


Find average WT , TAT & RT –

Process CPU Burst time (ms)


P1 5
P2 24
P3 16
P4 10
P5 3
Solution –
Suppose all processes arrive at time ‘0’. So, Gantt chart is –

Process AT BT CT TAT WT RT
P1 0 5 5 5 0 0
P2 0 24 29 29 5 5
P3 0 16 45 45 29 29
P4 0 10 55 55 45 45
P5 0 3 58 58 55 55
Total - 192 134 134

Average TAT = 192/5 = 38.4 ms

Average WT = 134/5 = 26.8 ms = Average RT

Here, Average WT = Average RT, because it is a non – preemptive scheduling.
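Since all processes arrive at time 0, the FCFS figures above can be reproduced with a few lines of Python. This is an illustrative sketch (the function name `fcfs` is my own):

```python
# FCFS with all arrivals at t = 0: each process finishes when its burst ends.
def fcfs(bursts):
    t, rows = 0, []
    for bt in bursts:
        t += bt                      # CT of this process
        rows.append((t, t, t - bt))  # (CT, TAT = CT - AT, WT = TAT - BT), AT = 0
    return rows

rows = fcfs([5, 24, 16, 10, 3])      # burst times of P1..P5
avg_tat = sum(r[1] for r in rows) / len(rows)
avg_wt = sum(r[2] for r in rows) / len(rows)
print(avg_tat, avg_wt)   # 38.4 26.8
```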

Advantages –

 Simple and brutally fair.


 It is suitable for batch system.

Disadvantages –

 The average waiting time is not minimal.


 Not suitable for time sharing system like UNIX.
 Convoy effect - All the other processes wait for the one big process to get off the CPU. This
effect results in lower CPU and device utilization.

2. Shortest job first scheduling (SJF) –


It is also known as Shortest Process Next (SPN).
This algorithm associates with each process the length of the process's next CPU burst. When the
CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU
bursts of two processes are the same, FCFS scheduling is used to break the tie.
SJF can be preemptive and non – preemptive.

Example for non – preemptive scheduling – Find average waiting time and average TAT.

Process number CPU BT


1 5
2 24
3 16
4 10
5 3
Solution –
Arrival time for all process = 0
Gantt chart –

Process AT BT CT TAT WT
1 0 5 8 8 3
2 0 24 58 58 34
3 0 16 34 34 18
4 0 10 18 18 8
5 0 3 3 3 0
Total - 121 63

Average WT = 63/5 = 12.6 ms


Average TAT = 121/5 = 24.2 ms
Preemptive SJF algorithm - A preemptive SJF algorithm will preempt the currently executing
process, whereas a non-preemptive SJF algorithm will allow the currently running process

to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-remaining-
time-first scheduling.

Example – Find Average waiting time –

Process Arrival time CPU BT


P1 0 8
P2 1 4
P3 2 9
P4 3 5
Solution –
Gantt chart is –

Process Arrival time CPU BT WT


P1 0 8 10 – 1 = 9
P2 1 4 0
P3 2 9 17 – 2 = 15
P4 3 5 5 -3 = 2

Average waiting time = (9 + 0 + 15 + 2) / 4 = 26/4 = 6.5 ms
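The preemptive trace above can be checked with a tick-by-tick shortest-remaining-time simulation. This is an illustrative sketch, not from the original text:

```python
# Preemptive SJF (SRTF) simulation, one time unit per tick.
def srtf(procs):                     # procs: {name: (arrival, burst)}
    rem = {p: bt for p, (at, bt) in procs.items()}
    ct, t = {}, 0
    while rem:
        ready = [p for p in rem if procs[p][0] <= t]
        if not ready:                # CPU idle until the next arrival
            t += 1
            continue
        p = min(ready, key=lambda q: rem[q])   # shortest remaining time wins
        rem[p] -= 1
        t += 1
        if rem[p] == 0:
            ct[p] = t                # record completion time
            del rem[p]
    # WT = CT - arrival - burst
    return {p: ct[p] - at - bt for p, (at, bt) in procs.items()}

wt = srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(wt, sum(wt.values()) / 4)   # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2} 6.5
```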

Advantages –

 This algorithm gives minimum average waiting time, so it is an optimal algorithm.

Disadvantages –

 It is difficult to know the length of the CPU burst time.


 It cannot be implemented at the level of short term CPU scheduling.
 Longer jobs are waiting for CPU.

3. Priority scheduling –
A priority is associated with each process, and the CPU is allocated to the process with the highest
priority. Equal-priority processes are scheduled in FCFS order.

Priority scheduling can be either preemptive or non preemptive. When a process arrives at the
ready queue, its priority is compared with the priority of the currently running process.

A. Preemptive algorithm (Priority based) –


A preemptive priority scheduling algorithm will preempt the CPU if the priority of the
newly arrived process is higher than the priority of the currently running process.
A non preemptive priority scheduling algorithm will simply put the new process at the
head of the ready queue.
Example –
Process AT Priority BT
P1 0 3 10
P2 1 2 5
P3 2 1 2

Here, 1 = highest priority & 3 = lowest priority.


Find the average waiting time.
Solution –

Process AT Priority BT CT WT
P1 0 3 10 17 8–1=7
P2 1 2 5 8 4–2=2
P3 2 1 2 4 0

Average waiting time = ( 7 + 2 + 0 ) / 3 = 9/3 = 3 ms

B. Non – preemptive algorithm (Priority) –


Here a higher priority job cannot preempt a low priority job as there is no preemption.
Example –Find average waiting time.

Process AT Priority BT
P0 0 5 10
P1 1 4 6
P2 3 2 2
P3 5 0 4

(Here, 0 = highest priority and 5 = lowest priority.)

Solution –
Gantt chart –

Process AT Priority BT CT TAT WT


P0 0 5 10 10 10 0
P1 1 4 6 22 21 15
P2 3 2 2 16 13 11
P3 5 0 4 14 9 5

Average waiting time = ( 0+15+11+5 )/4 = 31/4 = 7.75 ms


Average TAT = ( 10+21+13+9 )/4 = 53/4 = 13.25 ms

C. Non – preemptive algorithm –


Example –Find average waiting time. (Assume arrival time for all process is 0)

Process Priority BT
P1 3 10
P2 1 1
P3 3 2
P4 4 1
P5 2 5
Solution –
Gantt chart –

Process Priority BT CT WT
P1 3 10 16 6
P2 1 1 1 0
P3 3 2 18 16
P4 4 1 19 18
P5 2 5 6 1
Average waiting time = (6+0+16+18+1)/5 = 41/5 = 8.2 ms

Problems –

 Starvation for low priority basis.


 Starvation is where processes with low priority never get a chance to run.
 Ageing is the way of addressing starvation. It increases the priority of a process at run –
time.
4. High response ratio next scheduling (HRRN) –

W S
R
S
Where,
R = response ratio
W = time spent waiting for the processor
S = expected service time

R is also known as normalized turn - around time.


The normalized turnaround time is the ratio of turnaround time to actual service time and is used
as a figure of merit. For each individual process, we would like to minimize this ratio, as well as
its average value over all processes.
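A small sketch of HRRN selection follows; the process data here is made up purely for illustration:

```python
# HRRN: pick the ready process with the highest response ratio R = (W + S) / S.
def hrrn_pick(ready):                # ready: {name: (waited, expected_service)}
    ratio = {p: (w + s) / s for p, (w, s) in ready.items()}
    return max(ratio, key=ratio.get), ratio

pick, r = hrrn_pick({"A": (6, 3), "B": (3, 1), "C": (7, 7)})
print(pick, r)   # B {'A': 3.0, 'B': 4.0, 'C': 2.0}
```

A short job that has waited (B) overtakes a long job with the same wait, which is how HRRN limits the starvation of long jobs while still favouring short ones.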
5. Shortest remaining time first scheduling (SRTF) –
The scheduler always chooses the process that has the shortest expected remaining processing
time.
When a new process joins the ready queue, it may in fact have a shorter remaining time than the
currently running process. Accordingly, the scheduler may preempt the current process when a
new process becomes ready.
Example – Find average turn - around time and relative delay.

Process AT BT
P1 0 3
P2 2 6
P3 4 4
P4 6 5
P5 8 2
Solution –

Gantt chart –
Process AT BT CT TAT
P1 0 3 3 3
P2 2 6 15 13
P3 4 4 8 4
P4 6 5 20 14
P5 8 2 10 2

Average turn - around time = ( 3 + 13 + 4 + 14 + 2 )/ 5 = 36/5 = 7.2 ms


Relative delay = TAT / BT
Relative delay for P1= 3 / 3 = 1
Relative delay for P2= 13 / 6 = 2.17
Relative delay for P3= 4 / 4 = 1
Relative delay for P4= 14 / 5 = 2.8
Relative delay for P5= 2 / 2 = 1

6. Round – Robin scheduling (R-R) –


The round-robin (RR) scheduling algorithm is designed especially for timesharing systems.
It is similar to FCFS scheduling, but preemption is added to switch between processes.
A small unit of time, called a time quantum or time slice, is defined.
A time quantum is generally from 10 to 100 milliseconds.
The ready queue is treated as a circular queue.
The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time
interval of up to 1 time quantum.

Example – what will be the completion time of all processes in round robin algorithm.
( Time quantum 1 )

Process BT
P 4
Q 1
R 8
S 1
Solution – Gantt chart –
Process BT CT WT
P 4 9 5
Q 1 2 1
R 8 14 6
S 1 4 3

Average waiting time = ( 5 + 1 + 6 + 3 ) / 4 = 15/4 = 3.75 ms
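The example can be replayed in Python using a deque as the circular ready queue (an illustrative sketch, not from the original text):

```python
from collections import deque

# Round robin with time quantum tq; all processes arrive at t = 0.
def round_robin(procs, tq=1):        # procs: {name: burst}
    queue = deque(procs)             # ready queue treated as circular
    rem = dict(procs)
    t, ct = 0, {}
    while queue:
        p = queue.popleft()
        run = min(tq, rem[p])        # run for one quantum or until done
        t += run
        rem[p] -= run
        if rem[p]:
            queue.append(p)          # unfinished: back to the tail
        else:
            ct[p] = t                # finished: record completion time
    return ct

ct = round_robin({"P": 4, "Q": 1, "R": 8, "S": 1})
print(ct)   # {'Q': 2, 'S': 4, 'P': 9, 'R': 14}
```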

NOTE –
 If time quantum (TQ) is very small, efficiency = 0
 If TQ is low, more number of context switching is occurring.
 If TQ is high, the FCFS approach is followed.

TQ (Time quantum)–

 Very small – Efficiency = 0.

All the time will be spent in context switching.

 Small – More context switching or CPU overload is increase. (Improve response time).
 Large – Less context switching overhead. (Less interactive response time)
 Very large – Work like FCFS (Very poor response time).

7. Multi – level queue scheduling –


 A multilevel queue-scheduling algorithm partitions the ready queue into several separate
queues.
 The processes are permanently assigned to one queue, based on some property of the
process, such as memory size, process priority, or process type.
 Processes are classified into 2 groups –
Foreground (interactive) processes and Background (batch) processes.
 These two types of processes have different response-time requirements and so might
have different scheduling needs.

8. Multi – level feed-back queue scheduling –


 In a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue
on entry to the system. Processes do not move between queues.
 Multilevel feedback-queue scheduling, in contrast, allows a process to move between
queues.
 The idea is to separate processes with different CPU-burst characteristics. If a process uses
too much CPU time, it will be moved to a lower priority queue.

A multilevel feedback-queue scheduler is defined by the following parameters:


 The number of queues
 The scheduling algorithm for each queue
 The method used to determine when to upgrade a process to a higher priority queue.
 The method used to determine when to demote a process to a lower priority queue.
 The method used to determine which queue a process will enter when that process needs
service.

--------------------◄►--------------------


7. MEMORY MANAGEMENT AND VIRTUAL MEMORY

Memory management –

The organization and management of main memory has been one of the most important factors
influencing the OS design.
 Main memory management is primarily concerned with the allocation of main memory to
requesting processes.
 Protection and sharing of memory are two important memory management functions.

Memory Management Schemes

 Single user case


 Multi-user case

(A) Single user - mono programming

 Simplest memory management approach.


 Memory is divided into two contiguous areas:
 Lower memory addresses area for an operating system (or monitor).
 Second is for the user program

Advantages

 Simplicity
 Small O.S

Disadvantages

 Poor utilization of memory-wasteful.


 Poor utilization of processor.
 User address space may contain information that never used.
 Low flexibility: User’s job limited to the size of available memory.

(B) Multiple users - multi programming

 Operate on more than one job at a time.


 Need to maximize the degree of multi-programming i.e., the number of processes in
memory.

Problems - (a) Relocation (b) Protection


Solution - Usage of base and limit register

A base and a limit register define a logical address space.

Relocation and protection –

Protection –

 If two programs are in memory at the same time there is a chance that one program can
write to the address space of another program.

Relocation –

 Running the same code at different places in memory.


 Binding of program (logical) addresses to actual physical addresses may take place during
i. Compile Time
ii. Load Time or
iii. Run Time

(i) Compile time binding

It generates absolute addresses. Where a process will reside in memory must be known at
compile time itself.

Problem: If the starting address of a process in memory changes, then the entire process must
be recompiled to generate the absolute addresses again.

(ii) Load time binding

Compiler generates re-locatable addresses which are converted to absolute addresses at
load time. This is used when the program's location in memory is not known at compile
time but is fixed once the program is loaded.

(iii) Execution time (Run time binding)

Processes can be moved in memory during execution. Needs good hardware support.
Dynamic relocation is used to achieve it.

Program relocation –

 Relocation is the mechanism to Convert logical (or virtual) address to a physical address.

Effective physical address = Logical address + Contents of Relocation Register
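The translation rule, together with the limit-register protection check described below, can be sketched as follows. The base and limit values used here are illustrative, not from the text:

```python
# MMU-style translation: physical = base + logical, guarded by the limit.
def translate(logical, base, limit):
    if logical >= limit:                 # protection: address beyond process size
        raise ValueError("trap: addressing error")
    return base + logical                # relocation register added on every access

print(translate(346, 14000, 1000))   # 14346
```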

MMU or Memory Management Unit –

MMU or Memory Management Unit is special hardware which performs address binding, uses
relocation scheme addressing, which means that code runs differently when loaded at different
places.

(a) Addresses and address space

Logical address Vs Physical address:

Logical and physical addresses differ in execution time address binding scheme. Logical and
physical addresses are same in compile time and load time address binding schemes.

(b) Logical address (Virtual address)

Address of an instruction or data used by a program. It is generated by the CPU. The logical
address space is depicted below.

(c) Physical address

It is the effective memory address of an instruction or data that is obtained after address binding
has been done i.e., after logical addresses have mapped to their physical addresses.

(d) Logical address space

It is the set of logical addresses.

(e) Physical address space

It is set of physical addresses.



(f) Address binding

The process of mapping logical addresses to physical addresses in the memory is called address
binding.

This causes binding of addresses to instructions and data.

Relocation is necessary at the time of swapping in of a process from a backing store to the main
memory.

Types of Relocation –

(A) Static relocation

 It is relocation performed during the loading of the program into memory by a loader.
 In a system with static relocation, a swapped-out process must be swapped back into the
same partition from which it was removed.

(B) Dynamic relocation

 It implies that mapping from the virtual address space to physical address space at run
time.
Protection

It means providing security from unauthorized usage of memory.

Base Register

It holds the smallest legal physical memory address.

 Limit Register
It contains the size of the process.
Figure shows the hardware protection mechanism with base and limit registers.

 Dynamic Loading and Dynamic Linking


Loading is the process of moving the program or module from secondary storage devices
(disk) to the main memory. There are two types of loading:

(a) Compile Time Loading (static):

All routines are loaded in the main memory during compilation.

(b) Run time loading:


Routines which are loaded in the main memory at the time of execution or running.

Processing of a User Program



Swapping

It is the process of temporarily removing inactive programs from the main memory of a computer
system.

A variant of this swapping policy is used for priority based scheduling algorithms. If a higher
priority process arrives and wants service, the memory manager can swap out the lower priority
and then load and execute the higher priority process. When the higher priority process finishes
the lower priority process can be swapped back in and continued. This variant of swapping called
rolled out, rolled in. Context switching time in swapping system is fairly high.

Example - let us assume that the user process is 10 MB in size and the backing store is a standard
hard disk with a transfer rate of 40 MB per second. Find the actual transfer time of the 10 MB
process to or from main memory.

Solution – 10,000 KB / 40,000 KB per second = 1/4 sec = 250 msec

Assuming no head seeks and an average latency of 8 msec, the swap time is 250 + 8 = 258 msec.

Total swap time (swap out + swap in) = 258 x 2 = 516 msec

Memory Allocation Techniques –

Memory allocation

I. Contiguous storage allocation


i. Fixed partition allocation
ii. variable partition allocation
II. Non-contiguous storage allocation
i. Paging
ii. Segmentation

(I) Contiguous storage allocation


In this allocation, a memory-resident program occupies a single contiguous block of memory.


(i) Fixed/Static partitioning



Memory is divided into a number of partitions whose sizes are fixed at system start-up, prior
to the execution of user programs, and remain fixed thereafter.

Advantages of fixed partition

 Implementation of this scheme is simple.


 Overhead of processing is also low.
 It supports multi-programming.
 No special hardware is required.
 Makes efficient utilization of processor and I/O devices.

Disadvantages

 No single program (or process) may exceed the size of the largest partition in a
given system.
 It does not support a system having dynamic data structure such as stack, queue,
and heap.
 It limits the degree of multi-programming which in turn may reduce the
effectiveness of short term scheduling.
 Wastage of memory by programs that are smaller than their partitions. This wastage
is known as Internal Fragmentation.

Suppose a system supports a page size of P bytes; then a program of size M bytes (M not a
multiple of P) will have internal fragmentation = P – (M % P) bytes
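The formula above can be checked with a short sketch (sizes are illustrative; the zero-remainder case, where the formula would otherwise give P, is handled separately):

```python
def internal_fragmentation(program_size, page_size):
    """Bytes wasted in the program's last page: P - (M % P), or 0 if M is a page multiple."""
    remainder = program_size % page_size
    return 0 if remainder == 0 else page_size - remainder

# A 10,300-byte program with 4 KB (4096-byte) pages fills two pages and part of a third.
print(internal_fragmentation(10300, 4096))  # 1988 bytes wasted in the last page
print(internal_fragmentation(8192, 4096))   # 0 (exact multiple, no waste)
```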

(ii) Variable-partition allocation / Multi-programming with dynamic partitions

The size and the number of partitions are decided during the run time by the O.S.

OS keeps track of status of the memory partition and this is done through a data structure
called partition description table (PDT).

PDT for above figure –


Operating System

Partition number   Starting address of partition   Size of partition   Partition status
1                  0 K                             200 K               Allocated
2                  200 K                           200 K               Free
3                  400 K                           200 K               Allocated
4                  600 K                           300 K               Allocated
5                  900 K                           100 K               Allocated
6                  1000 K                          100 K               Free
The most Common strategies to allocate free partitions to the new processes are:

1. First Fit:
Allocate the first free partition, large enough to accommodate the process.
Executes faster.
2. Best Fit:
Allocate the smallest free partition that meets the requirement of the process.
Achieves higher utilization of memory by searching smallest free partition.
3. Worst Fit:
Allocate the largest available partition to the newly entered process in the system.
4. Next Fit:
Start from current location in the list.
5. Quick fit:
Keep separate lists of free holes for the commonly requested sizes.
• Create partitions dynamically to meet the requirements of each requesting process.
• Neither the size nor the numbers of dynamically allocated partitions need to be
limited.
• Memory manager continues creating and allocating partitions to the requesting
processes until all physical memory is exhausted or maximum allowable degree of
multi-programming is reached.
• OS keeps track of which parts of memory are available and which are not.
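The first three placement strategies can be sketched as follows. Each function returns the index of the chosen free hole (the hole sizes in KB are illustrative, not taken from the PDT above):

```python
def first_fit(holes, request):
    """Return index of the first hole large enough, or -1 if none fits."""
    for i, h in enumerate(holes):
        if h >= request:
            return i
    return -1

def best_fit(holes, request):
    """Return index of the smallest adequate hole, or -1 if none fits."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return min(fits)[1] if fits else -1

def worst_fit(holes, request):
    """Return index of the largest hole, or -1 if none fits."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return max(fits)[1] if fits else -1

holes = [200, 80, 300, 120]          # free hole sizes in KB
print(first_fit(holes, 100),         # 0: first hole >= 100 KB
      best_fit(holes, 100),          # 3: the 120 KB hole fits most tightly
      worst_fit(holes, 100))         # 2: the 300 KB hole is the largest
```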

Compaction

 Compaction is a technique by which the resident programs are relocated in such a way that
the small chunks of free memory are made contiguous to each other and clubbed together
into a single free partition that may be big enough to accommodate more programs.

 Compaction involves dynamic relocation of a program.


 Relocation register is used.
 [Relocation Register] = (Starting address of a process) - (load origin address of process).
 Mainly used in large machines like main frame or supercomputer.

Advantages of dynamic partitioning

 Memory utilization is generally better as partitions are created dynamically.


 No internal fragmentations as partitions are changed dynamically.
 The process of merging adjacent holes to form a single larger hole is called Coalescing.

Disadvantages

 Consumes OS space and time; requires a complex memory-management algorithm.


 Compaction time is very high.

Memory Management

The simplest memory management scheme is to run just one program at a time, sharing the
memory between that program and the operating system.

 The first model (a) was used on mainframes and minicomputers.
 The second model (b) is used on palm top computers and embedded systems.

 The third model (c) is used in personal computers (e.g., running MS - DOS), where the
portion of the system in the ROM is called BIOS (Basic Input Output System).

Fixed Memory Partition

When a job arrives, it can be put into the input queue for the smallest partition large enough to
hold it. Since the partitions are fixed in this scheme, any space in a partition not used by a job is
lost.

Figure: Fixed memory partitions with separate input queues for each partition

Figure: Fixed memory partitions with a single input queue

Memory Management with BitMaps

 With bitmaps, memory is divided up into allocation units.


 Each allocation unit is a bit in the bitmap, which is 0 if the unit is free and is 1 if it is occupied.
 The smaller the allocation unit, the larger the bitmap.
 A bitmap provides a simple way to keep track of memory words in fixed amount of memory
because the size of the bitmap depends only on the size of memory and size of the allocation unit.

Disadvantage

 When it has decided to bring a K unit process into memory, the memory manager must search
the bitmap to find a run of K consecutive 0 bits in the map.

Memory Management with Linked Lists



 Another way of keeping track of memory is to maintain a linked list of allocated and free memory partitions.

Modeling Multi-programming

 Suppose that a process spends a fraction p of its time waiting for I/O to complete. With n
processes in memory at once, the probability that all n processes are waiting for I/O (in
which case the CPU is idle) is p^n, so CPU utilization = 1 - p^n
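A quick numeric check of this formula (the 80% I/O-wait figure is illustrative):

```python
def cpu_utilization(p, n):
    """Probability the CPU is busy: 1 - p^n, with n processes each waiting on I/O a fraction p."""
    return 1 - p ** n

# With 80% I/O wait, one process keeps the CPU only 20% busy; five processes
# together keep it roughly 67% busy -- the payoff of multiprogramming.
print(round(cpu_utilization(0.8, 1), 2), round(cpu_utilization(0.8, 5), 2))
```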

Non-contiguous

 To permit the sharing of data and code among processes, to resolve the problem of external
fragmentation of physical memory, to enhance degree of multiprogramming and to support
virtual memory concept, it was decided to have non-contiguous physical address space of a
process.

Non-contiguous Storage Allocation Methods

Paging –

 Paging is a memory-management scheme that permits the physical-address space of a process to


be noncontiguous.
 It is a memory-management technique that permits a program's memory to be scattered
non-contiguously through physical memory, so a program can be allocated any free frames.


(1) Address mapping in paging –



LA : Logical address
PA : Physical address
PMT : Page map table / Page table
MMU : Memory Management Unit.
Page size and frame size are typically between 512 bytes and 4 KB, depending on the computer
architecture.

(2) Translation Look-aside Buffer (TLB) / content-addressable memory / look-aside memory –
Problem – MMU cannot go to page table on every memory access.
Solution – special, small, fast – look up hardware cache associative, high speed memory.

If page number is found in the TLB then it is a TLB HIT otherwise TLB MISS occurs.

Hit ratio = the percentage of times that a particular page number is found in the TLB.

(3) Effective memory access time –


An 80% hit ratio means the desired page number is found in the TLB 80% of the time.
If it takes 20 ns to search the TLB and 100 ns to access memory, then a mapped memory access
takes 120 ns when the page number is in the TLB.
If page number is not found in TLB (20 ns), then we must first access memory for the page table
and frame number (100 ns) and then access the desired byte in memory (100 ns), for a total of
220 ns.
Effective memory access time (by probability) = 0.80 x 120 + 0.20 x 220 = 140 ns
98% hit ratio, we have
Effective Access Time = 0.98 x 120 + 0.02 x 220 = 122 ns
“Higher the hit ratio the lower the effective access time”

Let the effective memory access time teff in systems with run-time address translation equal the
sum of the address translation time tTR and the subsequent access time needed to fetch the
target from memory tM. Mathematically –
teff = tTR + tM
tTR = h tTLB + (1 – h)(tTLB + tM)
tTR = tTLB + (1 – h) tM      (h = TLB hit ratio)
teff = tTLB + (2 – h) tM
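The worked numbers from the example can be reproduced with a short sketch (timings are those of the example above):

```python
def effective_access_time(hit_ratio, tlb_time, mem_time):
    """Average memory access time with a TLB in front of the page table."""
    hit = hit_ratio * (tlb_time + mem_time)             # TLB hit: one memory access
    miss = (1 - hit_ratio) * (tlb_time + 2 * mem_time)  # miss: page table + target
    return hit + miss

print(round(effective_access_time(0.80, 20, 100), 1))  # 140.0 ns, as in the example
print(round(effective_access_time(0.98, 20, 100), 1))  # 122.0 ns
```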

(4) Structure of a page table –

(5) Sharing in paging –

(a) Shared code


 One copy of read only (reentrant) code shared among processes (i.e. text editors,
compilers, window systems).
 Shared code must appear in same location in the logical address space of all processes.

(b) Private code and data


 Each process keeps a separate copy of the code and data.
 The pages for the private code and data appear anywhere in the logical address space.

Types of the Page Table –

Techniques for structuring the page table –

a. Hierarchical Paging –
 If page table size is large, then this type of page table is used.
 The logical address space is broken up into multiple page tables.

 A logical address (on a 32-bit machine with a 1 KB page size) is divided into a 22-bit page
number and a 10-bit page offset.
 Since the page table itself is paged, the page number is further divided into a 12-bit outer
page number (P1) and a 10-bit inner page number (P2).

 Here P1 is an index into the outer page table, P2 is the displacement within the page of the
outer page table.

Address translation –

b. Hashed page table –


 This type is used for handling address spaces larger than 32 bits is to use a hashed page
table, with the hash value being the virtual-page number.
 Each entry contains a linked list of elements that hash to the same location. Each element
consists of three fields:
(a) The virtual page number.
(b) The value of the mapped page frame, and
(c) A pointer to the next element in the linked list.

 The virtual page number is compared with field a in the first element in the linked list. If
there is a match, the corresponding page frame is used to form the desired physical
address. If there is no match, subsequent entries in the linked list are searched for a
matching virtual page number.

c. Inverted Page Tables –


 The table has one entry for each real page frame of memory.
 Each virtual address in the system consists of a triple <process-id, page-number, offset>.

 Each inverted page-table entry is a pair <process-id, page-number> where the process-id
assumes the role of the address-space identifier.

 Although this scheme decreases the amount of memory needed to store each page table, it
increases the amount of time needed to search the table when a page reference occurs.

If page size is small then –

Pros -

 Less internal fragmentation.


 Better fit for code sections and data structure.
 Less unused program in memory.

Cons -

 Programs need many pages.


 Large page tables.

If page size is large then –

Pros -

 Programs and fewer pages.


 Smaller page table.
 Better I/O throughput.

Cons -

 More internal fragmentation.


 Memory is not utilized as well when storing programs.
 Overhead due to internal fragmentation and page table –

Overhead = (S x e) / P + P / 2

Where,
S → average process size in byte
P → Page size in byte
e → Page-table entry size in bytes

Inverted Page Table structure ( with its elements ) –

 Page number: This is the page number portion of the virtual address.
 Process identifier: The process that owns this page. The combination of page number and process
identifier identify a page within the virtual address space of a particular process.
 Control bits: This field includes flags, such as valid, referenced, and modified; and protection and
locking information.
 Chain pointer: This field is null (perhaps indicated by a separate bit) if there are no chained entries
for this entry. Otherwise, the field contains the index value (a number between 0 and 2^m - 1) of
the next entry in the chain.

Virtual Memory Management –

 Virtual memory is a technique that allows the execution of processes that are not completely in
memory.
 This technique frees programmers from the concerns of memory-storage limitations.

 Separation of user logical memory from physical memory.


 Can implement shared memory.

 It allows programs to be altered and recompiled independently, without requiring the entire set of
programs to be re-linked and reloaded.
 Supports multi-programming.
 Breaking a program into small pieces is called “overlays”.
 Pages are swapped in and out.

Implementation of virtual memory –

 Demand paging.
 Demand segmentation.

Demand Paging –

 A demand-paging system is similar to a paging system with swapping where processes reside on
secondary memory (usually a disk).
 When a process is to be swapped in, the pager guesses which pages will be used before the
process is swapped out again. Instead of swapping in a whole process, the pager brings only those
necessary pages into memory.

 The page table uses the valid-invalid bit scheme.



 When this bit is set to "valid," the associated page is both legal and in memory.
 If the bit is set to "invalid," the page either is not valid or is valid but is currently on the disk.

Page fault –
 If the process tries to access a page that was not brought into memory, then a page fault
trap occurs.
 This trap is the result of the operating system's failure to bring the desired page into
memory.

The procedure for handling page fault –

 Check an internal table (usually kept with the process control block) for this process to
determine whether the reference was a valid or an invalid memory access.
 If the reference was invalid, we terminate the process. If it was valid, but we have not yet
brought in that page, we now page it in.
 Find a free frame.
 Schedule a disk operation to read the desired page into the newly allocated frame.
 When the disk read is complete, we modify the internal table kept with the process and the
page table to indicate that the page is now in memory.
 Restart the instruction that was interrupted by the trap. The process can now access the
page as though it had always been in memory.

Pure demand paging –

 Never bring a page into memory until it is required.

The hardware to support demand paging is the same as the hardware for paging and swapping:-

 Page table: This table has the ability to mark an entry invalid through a valid-invalid bit or special
value of protection bits.
 Secondary memory: This memory holds those pages that are not present in main memory. The
secondary memory is usually a high-speed disk. It is known as the swap device, and the section of
disk used for this purpose is known as swap space.

Performance of Demand Paging –

 Let ma be the memory-access time, which ranges from 10 to 200 nanoseconds.
 If no page faults, the effective access time is equal to the memory access time.
 If, however, a page fault occurs, first read the relevant page from disk and then access the desired
word.
 Let p be the probability of a page fault (0 ≤ p ≤ 1).
 We expect p to be close to zero, that is, only a few page faults.
 The effective access time is then -

Effective access time = (1 − p) × ma + p × page fault time
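The formula can be evaluated for a concrete case. The 200 ns access and the 8 ms fault-service time below are typical textbook figures, not values given in this section; the point they make is how completely the fault-service time dominates:

```python
def demand_paging_eat(p, mem_ns, fault_ns):
    """EAT = (1 - p) * ma + p * page-fault service time (all times in ns)."""
    return (1 - p) * mem_ns + p * fault_ns

# Even one fault per thousand accesses (p = 0.001) with an 8 ms (8,000,000 ns)
# fault service time pushes the average access from 200 ns to ~8200 ns.
print(round(demand_paging_eat(0.001, 200, 8_000_000)))
```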



Page Replacement –

 If no frame is free, we find one that is not currently being used and free it.
 We can free a frame by writing its contents to swap space and changing the page table to indicate
that the page is no longer in memory.

Dirty bit –

 We can reduce this overhead by using a modify bit (or dirty bit).
 This is set by the hardware whenever any word or byte in the page is written into,
indicating that the page has been modified.
 Page replacement is basic to demand paging.
 It completes the separation between logical memory and physical memory.

Two major problems to implement demand paging :-

We must develop -
 A frame-allocation algorithm and
 A page-replacement algorithm.

Segmentation –

 Segmentation is a memory-management scheme that supports user view of memory.



 A logical-address space is a collection of segments.



 Each segment has a unique name and a length.


 No organization or division of physical space.
 The addresses specify both the segment name and the offset within the segment.
 The address space of paging and segmentation are the same.
 The user therefore specifies each address by two quantities: a segment name and an offset.
 Segments are numbered and are referred to by a segment number, rather than by a segment
name.
<segment-number, offset>
 A segment is a logical unit such as –
 Main program
 Procedure
 Functions
 Methods
 Variables (Local & Global)
 Stack
 Symbol table
 Array
 Addressing consist of a segment number and an offset.
 Since segments are not equal, segmentation is similar to dynamic partitioning.
 (Working-set note) If one of the reference bits for a page equals one, the page is in the working set.

Hardware –


Segmentation Example –

Segmentation architecture –

 Logical address consists of a two-tuple <segment-number, offset>


 Segment Table: It maps two-dimensional logical addresses into one-dimensional physical addresses.
 Each table entry has < base, limit > values
 Base: It contains the starting physical address where the segments reside in memory.
 Limit: It specifies the length of the segment.
 Segment Table Base Register (STBR) points to the segments table’s location in memory.
 Segment Table Length Register (STLR) indicates number of segments used by a program.
 Segment number S is legal if S < STLR
 Memory Protection is associated with two bits Protection/ Validation bit.
 If validation bit = 0 then it is an illegal segment. (Read/write/execute privileges are associated
with segments.)
 Code sharing occurs at segment level.
 Dynamic memory allocation problem arises.

Page fault frequency scheme –



The aim is to establish an “acceptable” page fault rate.



 Thrashing has a high page fault rate.


 If the page fault rate is too low, the process loses frames.
 If the page fault rate is too high, the process gains frames.

Prepaging –

 Prepaging brings in all or some of the pages a process will need, before they are referenced.
 If prepaged pages are unused, I/O and memory would be wasted.
 Assume s pages are prepaged and a fraction α of them is actually used.
 The question is whether the cost of the s x α saved page faults is greater or less than the
cost of prepaging the s x (1 - α) unnecessary pages.
 If α is near zero, prepaging loses.

TLB reach –

 TLB Reach is the amount of memory accessible from the TLB.


 TLB Reach = (TLB size) * (page size).
 Page size: Its selection must take into consideration factors such as fragmentation, table size, I/O
overhead and locality.
 Increasing the page size may lead to an increase in internal fragmentation.
 Multiple page sizes will allow applications that require larger page Sizes the opportunity to use
them without an increase in fragmentation.
 Ideally, the working set of each process is stored in the TLB.
 If Δ is too small, it will not encompass the entire locality.
 If Δ is too large, it will encompass several localities.
 If Δ = ∞, it will encompass the entire program.
 D = ΣWSSi is the total demand for frames.
 If D > m (total demand is greater than the total number of available frames) then thrashing
will occur.

Keeping track of working set –


 Approximate with interval timer + a reference bit.


Example: Δ = 10,000
 Timer interrupts after every 5000 time units keeping in memory 2 bits for each page.
 Whenever the timer interrupts, copy the reference bits and then reset them all to 0.

Working Set Model (Solution to thrashing problem):

 Locality of Reference concept is used.


 Let Δ = working-set window, i.e., a fixed number of page references.
 WSSi = working-set size of process Pi
(the total number of pages referenced in the most recent Δ references).

Page replacement algorithm –

Page fault

 Whenever a processor needs to execute a particular page and that page is not available in main
memory, this situation is said to be “page fault”.
 When the page fault occurs the page replacement will be done.
 ‘Page Replacement’ means select a victim page in the main memory, replace that page with the
required page from the backing store (disk).

I. FIFO (First In First Out) algorithm –


 Replace a page which is the oldest page of all the pages of the main memory.
 Focuses on the length of time a page has been in memory rather than how much the page is
being used.

Example: Consider the reference string 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 1;

0* 0 0 3* 3 3 2* 2 2 1* 1 1 1
1* 1 1 0* 0 0 3* 3 3 2* 2 2
2* 2 2 1* 1 1 0* 0 0 3* 3

Here ‘*’ indicates ‘page fault’.


The number of page faults = 12
A page fault occurs whenever the referenced page is not present in memory.
In general, the more frames there are, the fewer page faults occur.

Page fault rate = (Number of page faults) / (Number of page references in the reference string)


Page fault rate = 12/13 = 0.923 = 92.3%
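The fault count in the table can be reproduced with a short FIFO simulation (a sketch in Python):

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.discard(queue.popleft())  # evict the oldest resident page
            frames.add(page)
            queue.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 1]
print(fifo_faults(refs, 3))  # 12 faults, as in the table above
```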


Belady’s Anomaly –

Example – Consider the reference string – 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Number of frames = 4

1* 1 1 1 1 1 5* 5 5 5 4* 4
2* 2 2 2 2 2 1* 1 1 1 5*
3* 3 3 3 3 3 2* 2 2 2
4* 4 4 4 4 4 3* 3 3

The number of page faults = 10


Consider the same reference string with three frames.

1* 1 1 4* 4 4 5* 5 5 5 5 5
2* 2 2 1* 1 1 1 1 3* 3 3
3* 3 3 2* 2 2 2 2 4* 4

Here number of page faults = 9


Here, as the number of frames increases, the number of page faults also increases. This is known
as Belady’s anomaly.
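The anomaly can be demonstrated directly by running FIFO on the same reference string with 3 and then 4 frames (a self-contained sketch):

```python
from collections import deque

def fifo_faults(refs, frames_count):
    """Count page faults under FIFO replacement."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == frames_count:
                frames.discard(order.popleft())  # evict oldest page
            frames.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# 3 frames give 9 faults, yet 4 frames give 10 -- Belady's anomaly.
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10
```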

II. Optimal page replacement –


 Replace the page that will not be used for the longest period of time.

Example: Reference string - 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5. ( Using 4 frames )

1* 1 1 1 1 1 1 1 1 1 4* 4
2* 2 2 2 2 2 2 2 2 2 2
3* 3 3 3 3 3 3 3 3 3
4* 4 4 5* 5 5 5 5 5

Number of page faults = 6



Disadvantage - It requires future knowledge of the reference string, so it is used mainly for comparison studies.
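Although OPT cannot be realized online, it is easy to simulate offline, which is exactly how it is used in comparison studies. A minimal sketch:

```python
def opt_faults(refs, num_frames):
    """Optimal replacement: evict the resident page whose next use is farthest away."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            future = refs[i + 1:]
            # Pages never referenced again sort last (distance len(refs)).
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else len(refs))
            frames[frames.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(refs, 4))  # 6 faults, matching the example
```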

III. LRU (Least Recently Used) algorithm –


 Replace a page that has not been used for the longest period of time.
 It looks backward in time rather than forward.
 It associates with each page the time of that page last use.
 There are two methods to implement LRU:
(a) Counters
(b) Stack

Example: Reference String - 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 ( Number of frames = 4 )

1* 1 1 1 1 1 1 1 1 1 1 5*
2* 2 2 2 2 2 2 2 2 2 2
3* 3 3 3 5* 5 5 5 4* 4
4* 4 4 4 4 4 3* 3 3

Number of page faults = 8
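The LRU count can be verified with a counter-style simulation, recording the last-use time of every page (a sketch of the "counters" implementation mentioned above):

```python
def lru_faults(refs, num_frames):
    """LRU replacement: evict the resident page whose last use is furthest in the past."""
    frames, last_used, faults = set(), {}, 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                # Victim is the resident page with the smallest time stamp.
                victim = min(frames, key=lambda p: last_used[p])
                frames.discard(victim)
            frames.add(page)
        last_used[page] = i  # stamp the page on every reference
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))  # 8 faults, matching the table
```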

THRASHING –

 High paging activity.


 If a process does not have “enough” frames the page fault rate is very high.
 Spending more time on paging than on execution.

Cause of thrashing –
Consider the following scenario: -

The OS monitors CPU utilization. If the utilization too low, we increase the degree of multi-
programming by introducing a new process to the system. A global page replacement algorithm is
used, it replaces pages with no regard to the process to which they belong. Suppose that a process
enters a new phase in its execution and needs more frames. It starts faulting and taking frames
away from other processes. These processes need those pages, however and so they also fault,
taking frames from other processes. These faulting processes must use the paging device to swap
pages in and out. As they queue up for the paging device, the ready queue empties. As processes
wait for the paging device, CPU utilization decreases. The CPU scheduler sees the decreasing CPU
utilization, so it increases the degree of multi-programming. The new process tries to get started
by taking frames from running processes, causing more page faults and a longer queue for the
paging device. As a result, CPU utilization drops even further. The CPU scheduler tries to
increase the degree of multi-programming even more. Thrashing occurs, and the system throughput
plunges. The page fault rate (PFR) increases tremendously. Effective memory access time
increases. No work is getting done because the processes are spending all their time in paging.

Page replacement policies –


 Local Page Replacement
When a process requests for a new page to be brought in and there are no free
frames in the memory, we choose a frame allocated to only that process for
replacement.
 Global Replacement
It allows a process to select a replacement frame from the set of all frames, even if
that frame is currently allocated to some other processes. So, one process can take a
frame from another.

LRU - Approximation page replacement


 Reference bit for a page is set by the hardware whenever that page is referenced (either
read or write to any byte in the page).
 All bits are cleared to 0 by the OS.
 As a user process executes, the bit associated with each page referenced is set (to 1) by the
hardware.
 It is possible to determine which pages have been used and which have not been used by
examining the reference bits.

Additional reference bits algorithm -


 Keep an 8 bit (reference byte) for each page to record reference information for the last 8
time period.
 At regular interval (e.g., 100 ms) OS shifts the reference bit right by 1 and discard the low
order bit.
Example: If a page has not been referenced for a while –

0 0 0 0 0 0 0 0

If a page is continuously referenced –

1 1 1 1 1 1 1 1

LRU algorithm implementation -

 Counter Implementation:
 Every page entry has a counter, every time page is referenced through this entry,
copy the clock (time stamp) into the counter.
 When a page needs to be replaced, look at the counters to find the smallest time
stamp, which identifies the page to replace.

 Stack Implementation:
Keep a stack of page numbers in a doubly linked list.
 When a page is referenced
 Move it to the top
 Requires pointers to be changed.
 No search for replacement.

Problems with both implementations –


 Need additional hardware support.
 Expensive housekeeping is required at each memory reference.
 Interrupt handling overhead.

Fixed allocation algorithm –


 Equal allocation –
If there are 100 frames and 5 processes, give each process 20 frames.

 Proportional allocation –
Allocate according to the size of process.
Si = size of process Pi
S = ΣSi
m = total number of frames
ai = allocation of Pi = (Si / S) × m
Allocation for 64 frames between 2 processes a1 and a2
m = 64
S1 = 10
S2 = 127
S = 10 + 127 = 137

a1 = (10 / 137) × 64 ≅ 5

a2 = (127 / 137) × 64 ≅ 59

The total number of frames = 59 + 5 = 64
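The same computation as a short sketch (function name illustrative):

```python
def proportional_allocation(sizes, m):
    # a_i = (S_i / S) * m, rounded to the nearest whole frame
    S = sum(sizes)
    return [round(si * m / S) for si in sizes]

print(proportional_allocation([10, 127], 64))  # [5, 59]
```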

Priority allocation -
 In both equal and proportional algorithms
 The number of frames allocated also depends on the multiprogramming level, i.e.,
the more processes, the fewer frames each gets.
 No differentiation on the priority of processes.
 We want to allocate more frames to high priority processes to speed up their
execution.
 Proportional + Priority allocation.
 Use a proportional allocation scheme using priorities rather than size.

Counting algorithms –
 Keep a counter of the number of references that have been made to each page.

IV. LFU (Least Frequently Used) algorithm –

 Selects for replacement a page that has not been used often in the past.
 Maintains a reference count for each page and replaces the page with the smallest count.

Allocation of frames –
How does OS allocate the fixed amount of free memory (frames) among the various processes?

 Simple frame allocation algorithm: In a single user system, OS takes some frames, the rest
of frames are assigned to user process.
 Allocate at-least a minimum number of frames for each process.

Global Vs local allocation –

Frame allocation is closely related to page replacement: which page to replace on a page fault.

 Global replacement
Process selects a replacement frame from the set of all frames; one process can take a
frame from another. (The number of frames allocated to a process may change).

 Local Replacement
Each process selects from only its own set of allocated frames. (The number of frames
allocated to a process does NOT change.)

Cache Memory –

Definition -

 A cache memory is a small, fast memory that retains copies of recently used information
from main memory.
 It operates transparently to the programmers automatically deciding which values to keep
and which to overwrite.

Working of cache memory –

 CPU requests contents of memory location. The Cache is checked for the data. If present,
get from cache, otherwise read required block from main memory, then deliver from cache
to CPU.
 The performance of cache memory is measured in terms of hit ratio.
 When the CPU refers to memory and finds the word in the cache, it is called a cache hit.
 If the word is not found in the cache but is in main memory, it is called a cache miss.
Hit ratio = Hits / ( Hits + Misses )

Average Access time = hc + ( 1 – h )m


Where,
h = hit ratio
c = cache access time
m = main memory access time
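A minimal sketch of the formula (names illustrative):

```python
def avg_access_time(h, c, m):
    # hc + (1 - h)m
    return h * c + (1 - h) * m

# e.g., 75% hit ratio, 20 ns cache, 100 ns main memory:
print(avg_access_time(0.75, 20, 100))  # 40.0 ns
```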

Cache design consideration -

 Size
 Mapping function
 Replacement Algorithm
 Write Policy and Block size

Tag fields -

 A cache line contains two fields:


 Data from RAM
 The address of the block currently in the cache
 The part of the cache line that stores the address of the block is called the tag field.
 The tag field specifies the address currently in the cache line.

Cache lines -

 The cache memory is divided into blocks or lines. Data is copied to and from the cache one
line at a time.

Principle of locality -

 Locality model states that a process migrates from one locality to another (with respect to
generated addresses) while it executes, and localities may overlap.
 Program and data references within a process tend to cluster.
 Only few pages/piece of a process will be needed over short period of time.
 Possible to make guesses about which pages will be needed in the future.
 Virtual memory may work efficiently.
 Locality of reference of a process refers to its most recent/active pages.

Mapping -

 The transformation of data from main memory to cache memory is referred to as mapping
process.
 There are three popular methods of mapping addresses to cache locations.
 Direct: Each address has specific place in the cache.
 Set associative: Each address can be in any of a small set of cache locations.
 Fully associative: Search the entire cache for an address.
--------------------◄►--------------------


Hard Disk scheduling algorithm –

 Disk bandwidth and fast access time are to be considered.


 The disk bandwidth is the total number of bytes transferred, divided by the total time between the
first request for service and the completion of the last transfer.

First Come First Serve (FCFS) scheduling –

The disk controller processes the I/O requests in the order in which they arrive, thereby moving
backwards and forwards across the surface of the disk to reach the next requested location each time.

Example – A disk queue has the following requests to read tracks.

87 , 170 , 40 , 150 , 36 , 72 , 66 , 15

Consider the disk head is initially at cylinder 60.

FCFS -

Total head movement = ( 87 - 60 ) + ( 170 - 87 ) + ( 170 - 40 ) + ( 150 - 40 ) + ( 150 - 36 ) +

( 72 - 36 ) + ( 72 - 66 ) + ( 66 - 15 )

= 27 + 83 + 130 + 110 + 114 + 36 + 6 + 51

= 557 cylinders

Average head movement = 557 / 8 = 69.6 cylinders
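The head movement above can be checked with a short sketch (`fcfs_head_movement` is an illustrative name):

```python
def fcfs_head_movement(start, requests):
    # Service requests strictly in arrival order, summing seek distances.
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

queue = [87, 170, 40, 150, 36, 72, 66, 15]
print(fcfs_head_movement(60, queue))      # 557
print(fcfs_head_movement(60, queue) / 8)  # 69.625
```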

Advantages –

 Fair: every request gets a response in a reasonable amount of time.

Disadvantages –

 Involve a lot of random head movements and disk rotations.



 Throughput is not efficient.



 Used in small systems only, where I/O efficiency is not very important.
 Acceptable only when the load on the disk is light. As the load grows, FCFS tends to saturate the
device and response times grow.

Shortest Seek time first (SSTF) scheduling –

 Reduce the number of seeks.


 Select a request with the minimum seek time from the current head position.
 SSTF scheduling is a form of Shortest Job First (SJF) scheduling, may cause starvation of some
request.

SSTF -

Total head movement = ( 66 - 60 ) + ( 72 - 66 ) + ( 87 - 72 ) + ( 87 - 40 ) + ( 40 - 36 ) +

( 36 - 15 ) + ( 150 - 15 ) + ( 170 - 150 )

= 6 + 6 + 15 + 47 + 4 + 21 + 135 + 20

= 254 cylinders

Average head movement = 254 / 8 = 31.75 cylinders
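The SSTF service order can be reproduced with a small sketch (illustrative names):

```python
def sstf(start, requests):
    # Repeatedly service the pending request closest to the current position.
    pending, pos, total, order = list(requests), start, 0, []
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))
        total += abs(nxt - pos)
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return total, order

total, order = sstf(60, [87, 170, 40, 150, 36, 72, 66, 15])
print(order)  # [66, 72, 87, 40, 36, 15, 150, 170]
print(total)  # 254
```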

Advantages –

 Minimize the latency.


 Better throughput than FCFS method.

Disadvantages –

 Starvation may occur if some process has to wait a long time before its request is satisfied.
 SSTF favours requests for tracks that are highly localized around the current head position.

SCAN scheduling –

 Sometimes called the Elevator algorithm, because it services all the requests while going up and,
on reaching the top, goes downward.
 The disk arm starts at one end of the disk, and moves toward the other end, servicing requests as
it reaches each cylinder, until it gets to the other end of the disk. At the other end, the direction of
head movement is reversed, and servicing continues.
 It needs two pieces of information:
 1. Direction of head movement.
 2. Last position of the disk head.

SCAN -

Total head movement = ( 66 - 60) + ( 72 - 66 ) + ( 87 - 72 ) + ( 150 - 87 ) + ( 170 - 150 ) +

( 180 - 170 ) + ( 180 - 36 ) + (36 - 15)


= 6 + 6 + 15 + 63 + 20 + 10 + 144 + 21

= 285 cylinders

Average head movement = 285 / 8 = 35.6 cylinders
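A sketch of SCAN for the example above, assuming the head is initially moving toward higher cylinder numbers and the last cylinder is 180:

```python
def scan(start, requests, max_cyl):
    # Sweep upward to the end of the disk, then reverse and service the rest.
    up = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    order = up + ([max_cyl] if down else []) + down
    total, pos = 0, start
    for r in order:
        total += abs(r - pos)
        pos = r
    return total

print(scan(60, [87, 170, 40, 150, 36, 72, 66, 15], 180))  # 285
```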

Advantages –

 Better throughput than FCFS method.


 Basis for most disk scheduling algorithms.
 Eliminates discrimination between requests.
 No starvation.

Disadvantages –

 Because of the continuous scanning of the disk from end to end, the outer tracks are
visited less often than the mid-range tracks.
 The disk arm keeps sweeping between the two extremes; this may result in wear and tear of the disk
assembly.
 Requests arriving just ahead of the arm position get almost immediate service, but requests that
arrive just behind the arm position have to wait for the arm to return.

C – SCAN (One way Elevator algorithm) scheduling –

 C – SCAN = Circular Scan


 It treats the cylinder as a circular list that wraps around from the final cylinder to the first one.
 C-SCAN moves the head from one end of the disk to the other, servicing requests along the way.
When the head reaches the other end, however, it immediately returns to the beginning of the
disk, without servicing any requests on the return trip.

Advantages –

 Provides a more uniform waiting time than SCAN.

Disadvantages –

 Time taken for the back swing has been ignored.


 Average head movement is more compared to the SCAN algorithm.
 This increases the total seek time because of the long seek from the edge back to the hub.


Look scheduling –

 Unlike SCAN and C – SCAN, the arm goes only as far as the final request in each direction; then it
reverses direction immediately, without going all the way to the end of the disk.
 These versions of SCAN and C-SCAN are called LOOK and C-LOOK scheduling, because they look
for a request before continuing to move in a given direction.


Disk Allocation Methods

a) Contiguous Allocation

 each file occupies a set of consecutive addresses on disk


 each directory entry contains:
o file name
o starting address of the first block
o block address = sector id (e.g., block = 4K)
o length in blocks
 usual dynamic storage allocation problem
o use first fit, best fit, or worst fit algorithms to manage storage
 if the file can increase in size, either
o leave no extra space, and copy the file elsewhere if it expands
o leave extra space


b) Linked Allocation

 each data block contains the block address of the next block in the file
 each directory entry contains:
o file name
o block address: pointer to the first block
o sometimes, also have a pointer to the last block (adding to the end of the file is much
faster using this pointer)

 a view of the linked list

c) Indexed Allocation

 store all pointers together in an index table



o the index table is stored in several index blocks


o assume index table has been loaded into main memory

i) all files in one index

The index has one entry for each block on disk.



 better than linked allocation if we want to seek a particular offset of a file because many links
are stored together instead of each one in a separate block
 SGG call this organization a ``linked'' scheme, but I call it an ``indexed'' scheme because an
index is kept in main memory.
 problem: index is too large to fit in main memory for large disks
o FAT may get really large and we may need to store FAT on disk, which will increase
access time
o e.g., a 500 Mb disk with 1 Kb blocks has 500 K blocks, so the FAT needs 4 bytes * 500 K = 2 Mb of entries

ii) separate index for each file

 index block gives pointers to data blocks which can be scattered


 direct access (computed offset)

a) One index block per file (assumes index is contiguous)

b) Linked List of index blocks for each file

c) Multilevel index

d) Combined scheme (i-node scheme)



Question :

Consider a file currently consisting of 150 blocks. Assume that the file control block (and the index block, in
the case of indexed allocation) is already in memory. Calculate how many disk I/O operations are required for
contiguous, linked, and indexed (single-level) allocation strategies if, for one block, the following conditions
hold. In the contiguous-allocation case, assume that there is no room to grow at the beginning, but there is
room to grow at the end. Assume that the block information to be added is stored in memory.

Assumptions:

 Each I/O operation reads or writes a whole block.


 For linked allocation, a file allocation table (FAT) is not used, i.e., only the address of the starting block
is in memory.
 The blocks are numbered 1 to 150 and the current positions of these blocks are also numbered 1 to
150.
 All preparation of a block (including putting in the data and any link value) is done in main memory
and then the block is written to disk with one write operation.
 The file control block does not have to be written to disk after a change (this is typical where many
operations are performed on a file).
 At most one index block is required per file and it does not have to be written to disk after a change.
 For linked allocation, assume that no I/O operations are necessary to add a freed block to the free list.

a)
The block is added in the middle:

Contiguous: Assume that in the middle means after block 75 and before block 76. We move the last 75
blocks down one position and then write in the new block.

75 reads + 75 writes + 1 write (new block) = 151 I/O operations

Linked: We cannot find block 75 without traversing the linked list stored in the first 74 data blocks. So,
we first read through these 74 blocks. Then we read block 75, copy its link into the new block (in main
memory), update block 75's link to point to the new block, write out block 75, write new block.

74r + 1r + 1w + 1w = 75r + 2w = 77 I/O operations



Indexed: Update the index in main memory. Write the new block.

1w = 1 I/O operation

b)

The block is removed from the beginning.



Contiguous: Simply change the starting address to 2.

0 I/O operations

Linked: Read in block 1 and change the starting address to the link stored in this block.

1r = 1 I/O operation

Indexed: Simply remove the block's address from the index block (which is in memory).

0 I/O operations

Question:

Consider a file system on a disk that has both logical and physical block sizes of 512 bytes. Assume that the
information about each file is already in memory. For the contiguous strategy, answer these questions:

a)
How is the logical-to-physical address mapping accomplished in this system? (For the indexed
allocation, assume that a file is always less than 512 blocks long.)
b)
If we are currently at logical block 10 (the last block accessed was block 10) and want to access logical
block 4, how many physical blocks must be read from the disk?

Answer:

Assumptions: 1. Let L be the logical address and let P be the physical address. 2. The assumption in part (a) is
poorly given. It's more reasonable to simply assume that the index is small enough to fit into a single block. In
fact, a 512 block file will probably require more than a single 512 byte block because block addresses typically
require 3-4 bytes each.

(a) Overview The CPU generates a logical address L (a relative offset in a file) and the file system has to
convert it to a physical address P (a disk address represented by a block number PB and an offset in this
block). For convenience of calculation, we assume that blocks are numbered from 0. In any approach, we can
determine the logical block number LB by dividing the logical address L by the logical block size (here 512).
Similarly, the offset, which will be the same for logical and physical addresses since the block sizes are
identical, is determined by applying modulus. The offset is the same in all approaches.

LB := L div 512
offset := L mod 512

Contiguous: Assume S is the starting address of the contiguous segment. Then a simple approach to mapping
the address is:

P=S+L

If we prefer to consider the block level,

PB = SB + LB
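The mapping can be sketched as (names illustrative):

```python
BLOCK = 512

def contiguous_map(L, S):
    # Logical address L in a file starting at physical block S
    # -> (physical block, offset within the block).
    LB, offset = divmod(L, BLOCK)
    return S + LB, offset

print(contiguous_map(5300, 100))  # (110, 180): logical block 10, offset 180
```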

(b) If we are currently at logical block 10 and we want to access logical block 4 ...

Contiguous: We simply move the disk head back by 6 blocks (from physical block 10 to physical block 4)
because the space allocated to the file is contiguous. Then we read block 4, for a total of one read.

Free Space Management

 Bit Vector:
o Each block is represented by 1 bit.
o If a block is free, then the bit is 1, but if the block is in use, then the bit is 0.
o For example, if the disk had 10 blocks, and blocks 2, 4, 5, and 8 were free, while blocks 0, 1, 3, 6,
7, and 9 were in use, the bit vector would be represented as: 0010110010
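The example above, as a short sketch (function name illustrative):

```python
def bit_vector(n_blocks, free_blocks):
    # 1 = free, 0 = in use (the convention used above)
    return ''.join('1' if b in free_blocks else '0' for b in range(n_blocks))

print(bit_vector(10, {2, 4, 5, 8}))  # 0010110010
```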

"A 1.3-gigabyte disk with 512-byte blocks would need a bit map of over 332Kb to track its free blocks."

Reasoning:

Disk Size: 1.3 G bytes = 1.3 * 2^30 bytes

Block Size: 512 bytes = 2^9 bytes

# of Blocks:
= disk size / block size
= 1.3 * 2^30 bytes / 2^9 bytes = 1.3 * 2^21

# of entries in bit table


= # of blocks
= 1.3 * 2^21

Size of bit table in bits


= 1.3 * 2^21 bits

Size of bit table in bytes


= size in bits / (8 bits/byte)
= 1.3 * 2^21 / 8 bytes
= 1.3 * 2^21 / 2^3 bytes

= 1.3 * 2^18 bytes
= 1.3 * 2^8 * 2^10 bytes
= 1.3 * 256 * K bytes
= 332.8 K bytes

 Free List (Linked List of Free Blocks)


o The address (block number) of the first free block is kept in a designated place in memory.
o The first free block contains a pointer to the next free block, which in turn contains a pointer to the
next free block, and so forth.
o Can add a free block to the beginning of the free list in O(1) time.
o Can remove a free block from the beginning of the free list in O(1) time.

Example File System: Original UNIX File System

o layout of a disk containing one UNIX file system

The super block contains:

o number of i-nodes
o number of data blocks
o start of the list of free blocks
 first few hundred entries
 the rest of the free list is stored in a block that is otherwise free

Example on UNIX:

df = disk free
df -i /u

/u is the directory on hercules where student files are stored

Filesystem Type blocks use avail %use iuse ifree %iuse Mounted
/dev/dsk/dks efs 7654152 2790059 4864093 36% 158252 647230 20% /u

7654152 Kb = total space

2790059 Kb = being used

4864093 Kb = free

i use = no. of i-nodes in use = no. of files

i free = extra i-nodes

Each i-node:

 describes one file


 accounting info (owner and protection bits)
 provides the address information for all blocks in the file
o direct pointers to the first 10 blocks
o indirect pointer to a block containing more pointers
o double indirect pointer to blocks of pointers
o triple indirect
 a block of pointers to blocks of pointers to blocks of pointers to data blocks

Sample calculation of maximum file size

 Assume that there are 10 direct pointers to data blocks, 1 indirect pointer, 1 double indirect pointer,
and 1 triple indirect pointer
 Assume that the size of the data blocks is 1024 bytes = 1Kb, i.e., BlockSize = 1Kb
 Assume that the block numbers are represented as 4 byte unsigned integers, i.e., BlockNumberSize =
4b
 Some data blocks are used as index blocks. They store 1024 bytes / 4 bytes/entry = 256 entries
 Maximum number of bytes addressed by 10 direct pointers is

= Number of direct pointers * Blocksize


= 10 * 1Kb
= 10Kb

 Maximum number of bytes addressed by single indirect pointer is

= NumberOfEntries * BlockSize
= (Blocksize / BlockNumberSize) * BlockSize
= (1Kb / 4b) * 1Kb
109

= 256 * 1Kb
= 256Kb
Page

109
By Siddharth S. Shukla (BE, ME, PhD* )
i-GATE , B-713, Street 22, Smriti Nagar, Bhilai- 490020, Contact Mobile 98271-62352
No part of this booklet may be reproduced or utilized in any form without the written permission. All right are reserved
Operating System

 Maximum number of bytes addressed by double indirect pointer is

= NumberOfEntries^2 * BlockSize
= (Blocksize / BlockNumberSize)^2 * BlockSize
= (1Kb / 4b)^2 * 1Kb
= (2^10 / 2^2)^2 * (2^10b)
= (2^8)^2 * (2^10)b
= (2^16) * (2^10)b
= 2^6 * 2^20 b
= 64 Mb

 Maximum number of bytes addressed by triple indirect pointer is

= NumberOfEntries^3 * BlockSize
= (Blocksize / BlockNumberSize)^3 * BlockSize
= (1Kb / 4b)^3 * 1Kb
= (2^10 / 2^2)^3 * (2^10b)
= (2^8)^3 * (2^10)b
= (2^24) * (2^10)b
= 2^4 * 2^30 b
= 16 Gb

 Maximum file size is 16Gb + 64Mb + 266Kb
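The whole calculation, condensed into a sketch (names illustrative):

```python
def max_file_size(block=1024, ptr=4, direct=10):
    entries = block // ptr                 # 256 pointers per index block
    return (direct * block                 # direct blocks   -> 10 KB
            + entries * block              # single indirect -> 256 KB
            + entries ** 2 * block         # double indirect -> 64 MB
            + entries ** 3 * block)        # triple indirect -> 16 GB

print(max_file_size())  # 17247250432 bytes = 16 GB + 64 MB + 266 KB
```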

--------------------◄►--------------------


GATE 2012

1. A process executes the code


fork ();
fork ();
fork ();

The total number of child processes created is


(A) 3 (B) 4 (C) 7 (D) 8

Answer (C)
Let us put some label names for the three lines

fork (); // Line 1


fork (); // Line 2
fork (); // Line 3

L1 // There will be 1 child process created by line 1


/ \
L2 L2 // There will be 2 child processes created by line 2
/ \ / \
L3 L3 L3 L3 // There will be 4 child processes created by line 3

We can also use a direct formula to get the number of child processes: with n fork statements, there are
always 2^n – 1 child processes.

2. consider the 3 processes, P1, P2 and P3 shown in the table


Process Arrival time Time unit required
P1 0 5
P2 1 7
P3 3 4

The completion order of the 3 processes under the policies FCFS and RR2 (round robin scheduling with a
CPU quantum of 2 time units) is
(A) FCFS: P1, P2, P3 RR2: P1, P2, P3
(B) FCFS: P1, P3, P2 RR2: P1, P3, P2
(C) FCFS: P1, P2, P3 RR2: P1, P3, P2
(D) FCFS: P1, P3, P2 RR2: P1, P2, P3

Answer (C)

3. Consider the virtual page reference string


1, 2, 3, 2, 4, 1, 3, 2, 4, 1
On a demand paged virtual memory system running on a computer system that main memory size of 3
pages frames which are initially empty. Let LRU, FIFO and OPTIMAL denote the number of page faults
under the corresponding page replacements policy. Then
(A) OPTIMAL < LRU < FIFO (B) OPTIMAL < FIFO < LRU (C) OPTIMAL = LRU (D) OPTIMAL = FIFO

Answer (B)
OPTIMAL gives 5 page faults, FIFO 6, and LRU 9.
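The three counts can be verified with a short simulation (an illustrative sketch):

```python
def fifo(refs, k):
    mem, q, faults = set(), [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == k:
                mem.remove(q.pop(0))   # evict oldest arrival
            mem.add(p); q.append(p)
    return faults

def lru(refs, k):
    stack, faults = [], 0
    for p in refs:
        if p in stack:
            stack.remove(p)
        else:
            faults += 1
            if len(stack) == k:
                stack.pop(0)           # evict least recently used
        stack.append(p)
    return faults

def opt(refs, k):
    mem, faults = set(), 0
    for i, p in enumerate(refs):
        if p not in mem:
            faults += 1
            if len(mem) == k:
                # evict the page whose next use is farthest away (or never)
                def next_use(q):
                    rest = refs[i + 1:]
                    return rest.index(q) if q in rest else float('inf')
                mem.remove(max(mem, key=next_use))
            mem.add(p)
    return faults

r = [1, 2, 3, 2, 4, 1, 3, 2, 4, 1]
print(opt(r, 3), fifo(r, 3), lru(r, 3))  # 5 6 9
```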

4. A file system with 300 GByte uses a file descriptor with 8 direct block address. 1 indirect block address
and 1 doubly indirect block address. The size of each disk block is 128 Bytes and the size of each disk block
address is 8 Bytes. The maximum possible file size in this file system is
(A) 3 Kbytes
(B) 35 Kbytes
(C) 280 Bytes
(D) Dependent on the size of the disk

Answer (B)
Total number of possible addresses stored in a disk block = 128/8 = 16
Maximum number of addressable bytes due to direct address block = 8*128
Maximum number of addressable bytes due to 1 single indirect address block = 16*128
Maximum number of addressable bytes due to 1 double indirect address block = 16*16*128
The maximum possible file size = 8*128 + 16*128 + 16*16*128 = 35KB

GATE 2011

1) A thread is usually defined as a ‘light weight process’ because an operating system (OS) maintains
smaller data structures for a thread than for a process. In relation to this, which of the followings is TRUE?
(A) On per-thread basis, the OS maintains only CPU register state
(B) The OS does not maintain a separate stack for each thread
(C) On per-thread basis, the OS does not maintain virtual memory state
(D) On per thread basis, the OS maintains only scheduling and accounting information.


Answer (C)
Threads share the address space of their process. Virtual memory is concerned with processes, not with threads.
2) Let the page fault service time be 10ms in a computer with average memory access time being 20ns. If
one page fault is generated for every 10^6 memory accesses, what is the effective access time for the
memory?
(A) 21ns (B) 30ns (C) 23ns (D) 35ns

Answer (B)

Let P be the page fault rate


Effective Memory Access Time = p * (page fault service time) +
(1 - p) * (Memory access time)
= ( 1/(10^6) )* 10 * (10^6) ns +
(1 - 1/(10^6)) * 20 ns
= 30 ns (approx)

3) An application loads 100 libraries at startup. Loading each library requires exactly one disk access. The
seek time of the disk to a random location is given as 10ms. Rotational speed of disk is 6000rpm. If all 100
libraries are loaded from random locations on the disk, how long does it take to load all libraries? (The time
to transfer data from the disk block once the head has been positioned at the start of the block may be
neglected)
(A) 0.50s (B) 1.50s (C) 1.25s (D) 1.00s

Answer (B)
Since transfer time can be neglected, the average access time is sum of average seek time and average
rotational latency. Average seek time for a random location time is given as 10 ms. The average rotational
latency is half of the time needed for complete rotation. It is given that 6000 rotations need 1 minute. So
one rotation will take 60/6000 seconds which is 10 ms. Therefore average rotational latency is half of 10
ms, which is 5ms.

Average disk access time = seek time + rotational latency


= 10 ms + 5 ms
= 15 ms
For 100 libraries, the total disk access time will be 15 * 100 ms = 1.5 s

4. Consider the following table of arrival time and burst time for three processes P0, P1 and P2.
Process Arrival time Burst Time
P0 0 ms 9 ms
P1 1 ms 4 ms
P2 2 ms 9 ms

The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at arrival or
completion of processes. What is the average waiting time for the three processes?
(A) 5.0 ms (B) 4.33 ms (C) 6.33 ms (D) 7.33 ms

Answer: (A)
Process P0 is allocated processor at 0 ms as there is no other process in ready queue. P0 is preempted
after 1 ms as P1 arrives at 1 ms and burst time for P1 is less than remaining time of P0. P1 runs for 4ms.
P2 arrived at 2 ms but P1 continued as burst time of P2 is longer than P1. After P1 completes, P0 is
scheduled again as the remaining time for P0 is less than the burst time of P2.
P0 waits for 4 ms, P1 waits for 0 ms and P2 waits for 11 ms. So average waiting time is (0+4+11)/3 = 5.
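The schedule can be checked with a unit-by-unit simulation (a sketch; ties are broken by list order):

```python
def srtf_avg_waiting(procs):
    # procs: list of (arrival, burst); preemptive SJF, one time unit per step.
    n = len(procs)
    remaining = [b for _, b in procs]
    finish = [0] * n
    t, done = 0, 0
    while done < n:
        ready = [i for i in range(n) if procs[i][0] <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
            done += 1
    waits = [finish[i] - procs[i][0] - procs[i][1] for i in range(n)]
    return sum(waits) / n

print(srtf_avg_waiting([(0, 9), (1, 4), (2, 9)]))  # 5.0
```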
GATE 2010

1) Let the time taken to switch between user and kernel modes of execution be t1 while the time taken to
switch between two processes be t2. Which of the following is TRUE? (GATE CS 2010)
(A) t1 > t2
(B) t1 = t2
(C) t1 < t2
(D) Nothing can be said about the relation between t1 and t2

Answer (C)
Process switching involves a mode switch, so t1 < t2. Context switching can occur only in kernel mode.

2) A system uses FIFO policy for page replacement. It has 4 page frames with no pages loaded to begin with.
The system first accesses 100 distinct pages in some order and then accesses the same 100 pages but now
in the reverse order. How many page faults will occur? (GATE CS 2010)

(A) 196 (B) 192 (C) 197 (D) 195

Answer (A)
Access to 100 pages will cause 100 page faults. When these pages are accessed in reverse order, the first
four accesses will not cause page faults. All other accesses will cause page faults. So the total number of
page faults will be 100 + 96 = 196.


3) Which of the following statements are true? (GATE CS 2010)



I. Shortest remaining time first scheduling may cause starvation


II. Preemptive scheduling may cause starvation
III. Round robin is better than FCFS in terms of response time
(A) I only
(B) I and III only
(C) II and III only
(D) I, II and III

Answer (D)
I) Shortest remaining time first scheduling is a preemptive version of shortest job scheduling. It may
cause starvation as shorter processes may keep coming and a long CPU burst process never gets CPU.
II) Preemption may cause starvation. If priority based scheduling with preemption is used, then a low
priority process may never get CPU.
III) Round Robin Scheduling improves response time as all processes get CPU after a specified time.

4) Consider the methods used by processes P1 and P2 for accessing their critical sections whenever
needed, as given below. The initial values of shared boolean variables S1 and S2 are randomly assigned.
Method Used by P1
while (S1 == S2) ;
Critica1 Section
S1 = S2;

Method Used by P2
while (S1 != S2) ;
Critica1 Section
S2 = not (S1);
Which one of the following statements describes the properties achieved? (GATE CS 2010)
(A) Mutual exclusion but not progress
(B) Progress but not mutual exclusion
(C) Neither mutual exclusion nor progress
(D) Both mutual exclusion and progress

Answer (A)
It can be easily observed that the Mutual Exclusion requirement is satisfied by the above solution, P1 can
enter critical section only if S1 is not equal to S2, and P2 can enter critical section only if S1 is equal to S2.
Progress Requirement is not satisfied. Let us first see the definition of the Progress Requirement.

Progress Requirement: If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely.



If P1 or P2 wants to re-enter the critical section, it cannot do so immediately; the processes are forced to alternate, even when no other process is inside the critical section.
GATE 2009

1) In which one of the following page replacement policies, Belady’s anomaly may occur?
(A) FIFO (B) Optimal (C) LRU (D) MRU

Answer (A)
Belady’s anomaly proves that it is possible to have more page faults when increasing the number of page
frames while using the First in First Out (FIFO) page replacement algorithm.

2) The essential content(s) in each entry of a page table is / are


(A) Virtual page number
(B) Page frame number
(C) Both virtual page number and page frame number
(D) Access right information

Answer (B)
A page table entry must contain the page frame number. The virtual page number is typically used as an index into the page table to get the corresponding page frame number.

3) Consider a system with 4 types of resources R1 (3 units), R2 (2 units), R3 (3 units), R4 (2 units). A non-
preemptive resource allocation policy is used. At any given instance, a request is not entertained if it cannot
be completely satisfied. Three processes P1, P2, P3 request the sources as follows if executed
independently.
Process P1:
t=0: requests 2 units of R2
t=1: requests 1 unit of R3
t=3: requests 2 units of R1
t=5: releases 1 unit of R2
and 1 unit of R1.
t=7: releases 1 unit of R3
t=8: requests 2 units of R4
t=10: Finishes

Process P2:
t=0: requests 2 units of R3
t=2: requests 1 unit of R4

t=4: requests 1 unit of R1


t=6: releases 1 unit of R3

t=8: Finishes


Process P3:
t=0: requests 1 unit of R4
t=2: requests 2 units of R1
t=5: releases 2 units of R1
t=7: requests 1 unit of R2
t=8: requests 1 unit of R3
t=9: Finishes
Which one of the following statements is TRUE if all three processes run concurrently starting at time t=0?
(A) All processes will finish without any deadlock
(B) Only P1 and P2 will be in deadlock.
(C) Only P1 and P3 will be in a deadlock.
(D) All three processes will be in deadlock

Answer (A)
We can apply the deadlock detection algorithm and see that there is no process waiting indefinitely for a resource.

4) Consider a disk system with 100 cylinders. The requests to access the cylinders occur in following
sequence:
4, 34, 10, 7, 19, 73, 2, 15, 6, 20
Assuming that the head is currently at cylinder 50, what is the time taken to satisfy all requests if it takes
1ms to move from one cylinder to adjacent one and shortest seek time first policy is used?
(A) 95ms (B) 119ms (C) 233ms (D) 276ms

Answer (B)
4, 34, 10, 7, 19, 73, 2, 15, 6, 20
Since shortest seek time first policy is used, head will first move to 34. This move will cause 16*1 ms.
After 34, head will move to 20 which will cause 14*1 ms. And so on. So cylinders are accessed in following
order 34, 20, 19, 15, 10, 7, 6, 4, 2, 73 and total time will be (16 + 14 + 1 + 4 + 5 + 3 + 1 + 2 + 2 + 71)*1
= 119 ms.
GATE 2009
1) In the following process state transition diagram for a uniprocessor system, assume that there are
always some processes in the ready state: Now consider the following statements:

I. If a process makes a transition D, it would result in another process making transition A immediately.
II. A process P2 in blocked state can make transition E while another process P1 is in running state.
III. The OS uses preemptive scheduling.
IV. The OS uses non-preemptive scheduling.
Which of the above statements are TRUE?

(A) I and II (B) I and III (C) II and III (D) II and IV

Answer (C)
I is false. If a process makes a transition D, it would result in another process making transition B, not A.
II is true. A process can move to ready state when I/O completes irrespective of other process being in
running state or not.
III is true because there is a transition from running to ready state.
IV is false as the OS uses preemptive scheduling.

2) The enter_CS() and leave_CS() functions to implement critical section of a process are realized using
test-and-set instruction as follows:
void enter_CS(X)
{
while test-and-set(X) ;
}
void leave_CS(X)
{
X = 0;
}

In the above solution, X is a memory location associated with the CS and is initialized to 0. Now consider the
following statements:
I. The above solution to CS problem is deadlock-free
II. The solution is starvation free.
III. The processes enter CS in FIFO order.
IV More than one process can enter CS at the same time.
Which of the above statements is TRUE?

(A) I only (B) I and II (C) II and III (D) IV only



Answer (A)
The above solution is a simple test-and-set solution that makes sure that deadlock doesn’t occur, but it
doesn’t use any queue to avoid starvation or to have FIFO order.

3) A multilevel page table is preferred in comparison to a single level page table for translating virtual
address to physical address because
(A) It reduces the memory access time to read or write a memory location.
(B) It helps to reduce the size of page table needed to implement the virtual address space of a process.
(C) It is required by the translation lookaside buffer.
(D) It helps to reduce the number of page faults in page replacement algorithms.

Answer (B)
The size of page table may become too big (See this) to fit in contiguous space. That is why page tables
are typically divided in levels.

GATE 2008

1) The data blocks of a very large file in the Unix file system are allocated using
(A) contiguous allocation (B) linked allocation
(C) indexed allocation (D) an extension of indexed allocation

Answer (D)
The Unix file system uses an extension of indexed allocation. It uses direct blocks, single indirect blocks,
double indirect blocks and triple indirect blocks. Following diagram shows implementation of Unix file
system.


2) The P and V operations on counting semaphores, where s is a counting semaphore, are defined as
follows:
P(s) : s = s - 1;
if (s < 0) then wait;
V(s) : s = s + 1;
if (s <= 0) then wakeup a process waiting on s;

Assume that Pb and Vb the wait and signal operations on binary semaphores are provided. Two binary
semaphores Xb and Yb are used to implement the semaphore operations P(s) and V(s) as follows:
P(s) : Pb(Xb);
s = s - 1;
if (s < 0) {
Vb(Xb) ;
Pb(Yb) ;
}
else Vb(Xb);

V(s) : Pb(Xb) ;
s = s + 1;
if (s <= 0) Vb(Yb) ;
Vb(Xb) ;

The initial values of Xb and Yb are respectively


(A) 0 and 0 (B) 0 and 1 (C) 1 and 0 (D) 1 and 1

Answer (C)
Both P(s) and V(s) perform Pb(Xb) as their first step. If Xb is 0, then all processes executing these operations will be blocked. Therefore, Xb must be 1.
If Yb is 1, it may become possible that two processes execute P(s) one after the other (implying 2 processes in the critical section); consider the case when s = 1 and Yb = 1. So Yb must be 0.

3) Which of the following statements about synchronous and asynchronous I/O is NOT true?
(A) An ISR is invoked on completion of I/O in synchronous I/O but not in asynchronous I/O
(B) In both synchronous and asynchronous I/O, an ISR (Interrupt Service Routine) is invoked after
completion of the I/O
(C) A process making a synchronous I/O call waits until I/O is complete, but a process making an
asynchronous I/O call does not wait for completion of the I/O
(D) In the case of synchronous I/O, the process waiting for the completion of I/O is woken up by the ISR
that is invoked after the completion of I/O

Answer (A)
In both synchronous and asynchronous I/O, an interrupt is generated on completion of the I/O. In synchronous I/O, the interrupt is generated to wake up the process waiting for the I/O. In asynchronous I/O, the interrupt is generated to inform the process that the I/O is complete and it can process the data from the I/O operation.

GATE 2008
1) A process executes the following code
for (i = 0; i < n; i++) fork();

The total number of child processes created is


(A) n (B) 2^n – 1 (C) 2^n (D) 2^(n+1) – 1
Answer (B)

F0 // There will be 1 child process created by first fork


/ \
F1 F1 // There will be 2 child processes created by second fork
/ \ / \
F2 F2 F2 F2 // There will be 4 child processes created by third fork
/\ /\/\ /\
............... // and so on

If we sum all levels of above tree for i = 0 to n-1, we get 2^n - 1. So there will be 2^n – 1 child processes.

2) Which of the following is NOT true of deadlock prevention and deadlock avoidance schemes?

(A) In deadlock prevention, the request for resources is always granted if the resulting state is safe
(B) In deadlock avoidance, the request for resources is always granted if the result state is safe

(C) Deadlock avoidance is less restrictive than deadlock prevention


(D) Deadlock avoidance requires knowledge of resource requirements a priori

Answer (A)
Deadlock prevention handles deadlock by making sure that at least one of the four necessary conditions does not occur. In deadlock prevention, the request for a resource may not be granted even if the resulting state is safe.

3) A processor uses 36 bit physical addresses and 32 bit virtual addresses, with a page frame size of 4
Kbytes. Each page table entry is of size 4 bytes. A three level page table is used for virtual to physical
address translation, where the virtual address is used as follows
• Bits 30-31 are used to index into the first level page table
• Bits 21-29 are used to index into the second level page table
• Bits 12-20 are used to index into the third level page table, and
• Bits 0-11 are used as offset within the page

The number of bits required for addressing the next level page table (or page frame) in the page table entry
of the first, second and third level page tables are respectively
(A) 20, 20 and 20 (B) 24, 24 and 24 (C) 24, 24 and 20 (D) 25, 25 and 24
Answer (D)
Virtual address size = 32 bits
Physical address size = 36 bits
Physical memory size = 2^36 bytes
Page frame size = 4K bytes = 2^12 bytes
No. of bits required to address a physical memory frame = 36 - 12 = 24
So in the third-level page table, 24 bits are required in each entry to store a page frame number.
9 bits of the virtual address are used to index the second-level page table, and each page table entry is 4 bytes. So the size of a second-level page table is (2^9)*4 = 2^11 bytes. It means there are (2^36)/(2^11) = 2^25 possible locations to store this page table, so the entry pointing to a second-level page table requires 25 bits. Similarly, the entry pointing to a third-level page table needs 25 bits.
GATE 2007

1) Consider a disk pack with 16 surfaces, 128 tracks per surface and 256 sectors per track. 512 bytes of
data are stored in a bit serial manner in a sector. The capacity of the disk pack and the number of bits
required to specify a particular sector in the disk are respectively:
(A) 256 Mbyte, 19 bits (B) 256 Mbyte, 28 bits

(C) 512 Mbyte, 20 bits (D) 64 Gbyte, 28 bits

Answer (A)

Capacity of the disk = 16 surfaces X 128 tracks X 256 sectors X 512 bytes = 256 Mbytes.

To calculate number of bits required to access a sector, we need to know total number of sectors. Total
number of sectors = 16 surfaces X 128 tracks X 256 sectors = 2^19
So the number of bits required to access a sector is 19.

2) Group 1 contains some CPU scheduling algorithms and Group 2 contains some applications. Match
entries in Group 1 to entries in Group 2.
Group I Group II
(P) Gang Scheduling (1) Guaranteed Scheduling
(Q) Rate Monotonic Scheduling (2) Real-time Scheduling
(R) Fair Share Scheduling (3) Thread Scheduling

(A) P – 3 Q – 2 R – 1 (B) P – 1 Q – 2 R – 3
(C) P – 2 Q – 3 R – 1 (D) P – 1 Q – 3 R – 2
Answer (A)
Gang scheduling, used in parallel systems, schedules related threads or processes to run simultaneously on different processors.
Rate monotonic scheduling is used in real-time operating systems with a static-priority scheduling class.
The static priorities are assigned on the basis of the cycle duration of the job: the shorter the cycle
duration is, the higher is the job’s priority.
Fair Share Scheduling is a scheduling strategy in which the CPU usage is equally distributed among
system users or groups, as opposed to equal distribution among processes. It is also known as
Guaranteed scheduling.

3) An operating system uses Shortest Remaining Time first (SRT) process scheduling algorithm. Consider
the arrival times and execution times for the following processes:
Process Execution time Arrival time
P1 20 0
P2 25 15
P3 10 30
P4 15 45

What is the total waiting time for process P2?



(A) 5 (B) 15 (C) 40 (D) 55

Answer (B)

At time 0, P1 is the only process, P1 runs for 15 time units.



At time 15, P2 arrives, but P1 has the shortest remaining time. So P1 continues for 5 more time units.
At time 20, P2 is the only process. So it runs for 10 time units.
At time 30, P3 is the shortest remaining time process. So it runs for 10 time units.
At time 40, P2 runs as it is the only process. P2 runs for 5 time units.
At time 45, P4 arrives, but P2 has the shortest remaining time. So P2 continues for 10 more time units.
P2 completes its execution at time 55.

Total waiting time for P2 = Completion time - (Arrival time + Execution time)
= 55 - (15 + 25)
= 15

GATE 2007

1) A virtual memory system uses First In First Out (FIFO) page replacement policy and allocates a fixed
number of frames to a process. Consider the following statements:
P: Increasing the number of page frames allocated to a process sometimes increases the page fault rate.
Q: Some programs do not exhibit locality of reference. Which one of the following is TRUE?
(A) Both P and Q are true, and Q is the reason for P
(B) Both P and Q are true, but Q is not the reason for P.
(C) P is false, but Q is true
(D) Both P and Q are false.

Answer (B)
P is true. Increasing the number of page frames allocated to process may increases the no. of page faults.
Q is also true, but Q is not the reason for-P as Belady’s Anomaly occurs for some specific patterns of page
references.

2) A single processor system has three resource types X, Y and Z, which are shared by three processes.
There are 5 units of each resource type. Consider the following scenario, where the column alloc denotes
the number of units of each resource type allocated to each process, and the column request denotes the
number of units of each resource type requested by a process in order to complete execution. Which of
these processes will finish LAST?
           alloc        request
           X Y Z        X Y Z
P0         1 2 1        1 0 3
P1         2 0 1        0 1 2
P2         2 2 1        1 2 0

(A) P0 (B) P1 (C) P2 (D) None of the above, since the system is in a deadlock

Answer (C)
Summing the allocations (5, 4 and 3 allocated units of X, Y and Z respectively), 0, 1 and 2 instances of X, Y and Z are left. Only the needs of P1 can be satisfied, so P1 can finish its execution first. Once P1 is done, it releases its 2, 0 and 1 units of X, Y and Z, leaving 2, 1 and 3 units available. Among P0 and P2, only the needs of P0 can now be satisfied, so P0 finishes next. Finally, P2 finishes its execution.
3) Two processes, P1 and P2, need to access a critical section of code. Consider the following
synchronization construct used by the processes:Here, wants1 and wants2 are shared variables, which are
initialized to false. Which one of the following statements is TRUE about the above construct?
/* P1 */
while (true) {
    wants1 = true;
    while (wants2 == true);
    /* Critical Section */
    wants1 = false;
}
/* Remainder section */

/* P2 */
while (true) {
    wants2 = true;
    while (wants1 == true);
    /* Critical Section */
    wants2 = false;
}
/* Remainder section */

(A) It does not ensure mutual exclusion.


(B) It does not ensure bounded waiting.
(C) It requires that processes enter the critical section in strict alternation.
(D) It does not prevent deadlocks, but ensures mutual exclusion.
Answer (D)
The above synchronization construct does not prevent deadlock. When both wants1 and wants2 become true, both P1 and P2 get stuck forever in their while loops, each waiting for the other to finish.
4) Consider the following statements about user level threads and kernel level threads. Which one of the
following statement is FALSE?
(A) Context switch time is longer for kernel level threads than for user level threads.
(B) User level threads do not need any hardware support.
(C) Related kernel level threads can be scheduled on different processors in a multi-processor system.
(D) Blocking one kernel level thread blocks all related threads.

Answer (D)

Since kernel level threads are managed by the kernel, blocking one thread does not cause all related threads to block. That is a problem with user level threads.

GATE 2006

1) Consider three CPU-intensive processes, which require 10, 20 and 30 time units and arrive at times 0, 2
and 6, respectively. How many context switches are needed if the operating system implements a shortest
remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end.
(A) 1 (B) 2 (C) 3 (D) 4

Answer (B)
Let the three processes be P0, P1 and P2, with arrival times 0, 2 and 6 respectively and CPU burst times 10, 20
and 30 respectively. At time 0, P0 is the only available process so it runs. At time 2, P1 arrives, but P0 has
the shortest remaining time, so it continues. At time 6, P2 arrives, but P0 has the shortest remaining time,
so it continues. At time 10, P1 is scheduled as it is the shortest remaining time process. At time 30, P2 is
scheduled. Only two context switches are needed. P0 to P1 and P1 to P2.

2) A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the virtual address space is of the same size as the physical address space, the operating system designers decide to get rid of the virtual memory entirely. Which one of the following is true?

(A) Efficient implementation of multi-user support is no longer possible


(B) The processor cache organization can be made more efficient now
(C) Hardware support for memory management is no longer needed
(D) CPU scheduling can be made more efficient now

Answer (C)
For supporting virtual memory, special hardware support is needed from Memory Management Unit.
Since operating system designers decide to get rid of the virtual memory entirely, hardware support for
memory management is no longer needed

3) A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-
aside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The
minimum size of the TLB tag is:
(A) 11 bits (B) 13 bits (C) 15 bits (D) 20 bits
Answer (C)
Size of a page = 4KB = 2^12
Total number of bits needed to address a page frame = 32 – 12 = 20
If there are ‘n’ cache lines in a set, the cache placement is called n-way set associative. Since TLB is 4 way
set associative and can hold total 128 (2^7) page table entries, number of sets in cache = 2^7/4 = 2^5.
So 5 bits are needed to address a set, and 15 (20 – 5) bits are needed for tag.
GATE 2006

1) Consider three processes (process id 0, 1, 2 respectively) with compute time bursts 2, 4 and 8 time units.
All processes arrive at time zero. Consider the longest remaining time first (LRTF) scheduling algorithm. In
LRTF ties are broken by giving priority to the process with the lowest process id. The average turn around
time is:
(A) 13 units (B) 14 units (C) 15 units (D) 16 units

Answer (A)
Let the processes be p0, p1 and p2. These processes will be executed in following order.

p2 p1 p2 p1 p2 p0 p1 p2 p0 p1 p2
0 4 5 6 7 8 9 10 11 12 13 14

Turn around time of a process is total time between submission of the process and its completion.
Turn around time of p0 = 12 (12-0)
Turn around time of p1 = 13 (13-0)

Turn around time of p2 = 14 (14-0)


Average turn around time is (12+13+14)/3 = 13.


2) Consider three processes, all arriving at time zero, with total execution time of 10, 20 and 30 units,
respectively. Each process spends the first 20% of execution time doing I/O, the next 70% of time doing
computation, and the last 10% of time doing I/O again. The operating system uses a shortest remaining
compute time first scheduling algorithm and schedules a new process either when the running process gets
blocked on I/O or when the running process finishes its compute burst. Assume that all I/O operations can
be overlapped as much as possible. For what percentage of time does the CPU remain idle?
(A) 0% (B) 10.6% (C) 30.0% (D) 89.4%

Answer (B)
Let three processes be p0, p1 and p2. Their execution time is 10, 20 and 30 respectively. p0 spends first 2
time units in I/O, 7 units of CPU time and finally 1 unit in I/O. p1 spends first 4 units in I/O, 14 units of
CPU time and finally 2 units in I/O. p2 spends first 6 units in I/O, 21 units of CPU time and finally 3 units
in I/O.

idle p0 p1 p2 idle
0 2 9 23 44 47

Total time spent = 47


Idle time = 2 + 3 = 5
Percentage of idle time = (5/47)*100 = 10.6 %

3) The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the
old value of x in y without allowing any intervening access to the memory location x. consider the following
implementation of P and V functions on a binary semaphore .
void P (binary_semaphore *s) {
    unsigned y;
    unsigned *x = &(s->value);
    do {
        fetch-and-set x, y;
    } while (y);
}


void V (binary_semaphore *s) {
    s->value = 0;
}

Which one of the following is true?


(A) The implementation may not work if context switching is disabled in P.
(B) Instead of using fetch-and-set, a pair of normal load/store can be used
(C) The implementation of V is wrong
(D) The code does not implement a binary semaphore

Answer (A)
Consider the operation P(). It points x at s->value, then atomically fetches the old value of *x into y and sets *x to 1. The while loop of a process will continue forever unless some other process executes V() and sets the value of s to 0. If context switching is disabled in P, the while loop will run forever, as no other process will be able to execute V().
4) Consider the following snapshot of a system running n processes. Process i is holding Xi instances of a
resource R, 1 <= i <= n. currently, all instances of R are occupied. Further, for all i, process i has placed a
request for an additional Yi instances while holding the Xi instances it already has. There are exactly two
processes p and q such that Yp = Yq = 0. Which one of the following can serve as a necessary condition to
guarantee that the system is not approaching a deadlock?
(A) min (Xp, Xq) < max (Yk) where k != p and k != q
(B) Xp + Xq >= min (Yk) where k != p and k != q
(C) max (Xp, Xq) > 1
(D) min (Xp, Xq) > 1

Answer (B)
Since both p and q don’t need additional resources, they both can finish and release Xp + Xq resources
without asking for any additional resource. If the resources released by p and q are sufficient for another
process waiting for Yk resources, then system is not approaching deadlock.

GATE2005

1) Normally user programs are prevented from handling I/O directly by I/O instructions in them. For CPUs
having explicit I/O instructions, such I/O protection is ensured by having the I/O instructions privileged. In
a CPU with memory mapped I/O, there is no explicit I/O instruction. Which one of the following is true for a
CPU with memory mapped I/O?


(a) I/O protection is ensured by operating system routine(s)
(b) I/O protection is ensured by a hardware trap


(c) I/O protection is ensured during system configuration


(d) I/O protection is not possible

Answer (a)
Memory mapped I/O means accessing I/O via general memory accesses, as opposed to specialized I/O instructions. An example:

unsigned int volatile * const pMappedAddress = (unsigned int *) 0x100;

So the programmer can directly access any memory location. To prevent such access, the OS (kernel) divides the address space into kernel space and user space. A user application can directly access only user space; to access kernel space, system calls (traps) are needed.

2) What is the swap space in the disk used for?


(a) Saving temporary html pages (b) Saving process data
(c) Storing the super-block (d) Storing device drivers

Answer (b)
Swap space is typically used to store process data.

3) Increasing the RAM of a computer typically improves performance because:


(a) Virtual memory increases (b) Larger RAMs are faster
(c) Fewer page faults occur (d) Fewer segmentation faults occur

Answer (c)

4) Suppose n processes, P1, …. Pn share m identical resource units, which can be reserved and released one
at a time. The maximum resource requirement of process Pi is Si, where Si > 0. Which one of the following
is a sufficient condition for ensuring that deadlock does not occur?

Answer (c)
In the extreme condition, every process has acquired Si - 1 resources and needs 1 more. So the following condition must hold to make sure that deadlock never occurs:

Sum over i of (Si - 1) < m

The above expression can be rewritten as:

Sum over i of Si < (m + n)

5) Consider the following code fragment:


if (fork() == 0)
{ a = a + 5; printf("%d,%d\n", a, &a); }
else { a = a - 5; printf("%d,%d\n", a, &a); }

Let u, v be the values printed by the parent process, and x, y be the values printed by the child process.
Which one of the following is TRUE?
(a) u = x + 10 and v = y (b) u = x + 10 and v != y
(c) u + 10 = x and v = y (d) u + 10 = x and v != y

Answer (c)
fork() returns 0 in child process and process ID of child process in parent process.
In Child (x), a = a + 5
In Parent (u), a = a – 5;
Therefore x = u + 10.
The physical addresses of ‘a’ in parent and child must be different. But our program accesses virtual
addresses (assuming we are running on an OS that uses virtual memory). The child process gets an exact
copy of parent process and virtual address of ‘a’ doesn’t change in child process. Therefore, we get same
addresses in both parent and child.
Mixed Year GATE Questions
1. Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes 1
microsecond. Then a 99.99% hit ratio results in average memory access time of (GATE CS 2000)
(a) 1.9999 milliseconds (b) 1 millisecond
(c) 9.999 microseconds (d) 1.9999 microseconds

Answer: (d)
Explanation:
Average memory access time =
[(% of page miss)*(time to service a page fault) +
(% of page hit)*(memory access time)]/100

So, the average memory access time in microseconds is (note that 10 ms = 10,000 µs):

(99.99*1 + 0.01*10000)/100 = (99.99 + 100)/100 = 199.99/100 = 1.9999 µs
2. Which of the following need not necessarily be saved on a context switch between processes? (GATE CS 2000)
(a) General purpose registers (b) Translation look-aside buffer


(c) Program counter (d) All of the above

Answer: (b)
Explanation:
In a process context switch, the state of the first process must be saved somehow, so that, when the
scheduler gets back to the execution of the first process, it can restore this state and continue.
The state of the process includes all the registers that the process may be using, especially the program
counter, plus any other operating system specific data that may be necessary.
A Translation lookaside buffer (TLB) is a CPU cache that memory management hardware uses to improve
virtual address translation speed. A TLB has a fixed number of slots that contain page table entries, which
map virtual addresses to physical addresses. On a context switch, some TLB entries can become invalid,
since the virtual-to-physical mapping is different. The simplest strategy to deal with this is to completely
flush the TLB.

3. Where does the swap space reside ? (GATE 2001)


(a) RAM (b) Disk (c) ROM (d) On-chip cache

Answer: (b)
Explanation:
Swap space is an area on disk that temporarily holds a process memory image. When physical memory
demand is sufficiently low, process memory images are brought back into physical memory from the
swap area. Having sufficient swap space enables the system to keep some physical memory free at all
times.

4. Which of the following does not interrupt a running process? (GATE CS 2001)
(a) A device (b) Timer (c) Scheduler process (d) Power failure

Answer: (c)
Explanation:
The scheduler process does not interrupt any running process; its job is to select processes. There are three kinds of schedulers:
Long-term scheduler (job scheduler) - selects which processes should be brought into the ready queue.
Short-term scheduler (CPU scheduler) - selects which process should be executed next and allocates the CPU.
Mid-term scheduler (swapper) - present in systems with virtual memory; temporarily removes processes from main memory and places them on secondary storage (such as a disk drive), or vice versa.
The mid-term scheduler may decide to swap out a process which has not been active for some time, or a
process which has a low priority, or a process which is page faulting frequently, or a process which is
taking up a large amount of memory in order to free up main memory for other processes, swapping the
process back in later when more memory is available, or when the process has been unblocked and is no
longer waiting for a resource.

5. Which of the following scheduling algorithms is non-preemptive? (GATE CS 2002)

a) Round Robin b) First-In First-Out


c) Multilevel Queue Scheduling d) Multilevel Queue Scheduling with Feedback

Answer: (b)
Explanation:
Once dispatched, a FIFO process runs until it finishes or blocks; Round Robin and the multilevel (feedback) queue schemes can preempt the CPU, e.g. when a time quantum expires.

6. Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is
4KB, what is the approximate size of the page table? (GATE 2001)
(a) 16 MB (b) 8 MB (c) 2 MB (d) 24 MB

Answer: (c)
Explanation:
A page entry is used to get address of physical memory. Here we assume that single level of Paging is
happening. So the resulting page table will contain entries for all the pages of the Virtual address space.
Number of entries in page table =
(virtual address space size)/(page size)

Using above formula we can say that there will be 2^(32-12) = 2^20 entries in page table.
No. of bits required to address the 64MB Physical memory = 26.
So there will be 2^(26-12) = 2^14 page frames in the physical memory. And page table needs to store the
address of all these 2^14 page frames. Therefore, each page table entry will contain 14 bits address of the
page frame and 1 bit for valid-invalid bit.
Since memory is byte-addressable, each page table entry is rounded up to 16 bits, i.e. 2 bytes.

Size of page table =


(total number of page table entries) *(size of a page table entry)
= (2^20 *2) = 2MB

For the clarity of the concept, please see the following figure. As per our question, here p = 20, d = 12 and
f = 14.

7. Consider Peterson’s algorithm for mutual exclusion between two concurrent processes i and j. The
program executed by process is shown below.
repeat
flag [i] = true;
turn = j;
while ( P ) do no-op;
Enter critical section, perform actions, then exit critical
section
flag [ i ] = false;
Perform other non-critical section actions.
until false;
For the program to guarantee mutual exclusion, the predicate P in the while loop should be (GATE 2001)
a) flag [j] = true and turn = i b) flag [j] = true and turn = j
c) flag [i] = true and turn = j d) flag [i] = true and turn = i

Answer: (b)
Peterson's algorithm guarantees mutual exclusion using two constructs: flag[] and turn. flag[i] signals the willingness of process i to enter its critical section, while turn decides which process may enter when both are willing. So, replacing P with the following,


flag [j] = true and turn = j


process i will not enter its critical section while process j wants to enter and it is process j's turn. The same idea can be extended to more than two processes.

8 More than one word are put in one cache block to (GATE 2001)
(a) exploit the temporal locality of reference in a program
(b) exploit the spatial locality of reference in a program
(c) reduce the miss penalty
(d) none of the above

Answer: (b)
Temporal locality refers to the reuse of specific data and/or resources within relatively small time
durations. Spatial locality refers to the use of data elements within relatively close storage locations.
To exploit the spatial locality, more than one word are put into cache block.

9. Which of the following statements is false? (GATE 2001)


a) Virtual memory implements the translation of a program’s address space into physical memory
address space
b) Virtual memory allows each program to exceed the size of the primary memory
c) Virtual memory increases the degree of multiprogramming
d) Virtual memory reduces the context switching overhead

Answer: (d)
In a system with virtual memory context switch includes extra overhead in switching of address spaces.

10. Consider a set of n tasks with known runtimes r1, r2, … rn to be run on a uniprocessor machine. Which
of the following processor scheduling algorithms will result in the maximum throughput? (GATE 2001)
(a) Round-Robin (b) Shortest-Job-First
(c) Highest-Response-Ratio-Next (d) First-Come-First-Served

Answer: (b)

11. Which of the following is NOT a valid deadlock prevention scheme? (GATE CS 2000)
(a) Release all resources before requesting a new resource
(b) Number the resources uniquely and never request a lower numbered resource than the last one
requested.
(c) Never request a resource after releasing any resource
(d) Request and all required resources be allocated before execution.

Answer: (c)
Explanation:
(a) and (d) break the hold-and-wait condition, and (b) imposes a total ordering on resources that breaks circular wait; (c) rules out none of the four necessary deadlock conditions.

12. Let m[0] … m[4] be mutexes (binary semaphores) and P[0] … P[4] be processes.
Suppose each process P[i] executes the following:

wait(m[i]); wait(m[(i+1) mod 4]);
------
release(m[i]); release(m[(i+1) mod 4]);


This could cause (GATE CS 2000)
(a) Thrashing
(b) Deadlock
(c) Starvation, but not deadlock
(d) None of the above

Answer: (b)
Explanation:
You can easily see a deadlock in the following situation:
P[0] has acquired m[0] and waiting for m[1]
P[1] has acquired m[1] and waiting for m[2]
P[2] has acquired m[2] and waiting for m[3]
P[3] has acquired m[3] and waiting for m[0]

13. A graphics card has on board memory of 1 MB. Which of the following modes can the
card not support? (GATE CS 2000)
(a) 1600 x 400 resolution with 256 colours on a 17 inch monitor
(b) 1600 x 400 resolution with 16 million colours on a 14 inch monitor
(c) 800 x 400 resolution with 16 million colours on a 17 inch monitor
(d) 800 x 800 resolution with 256 colours on a 14 inch monitor

Answer: (b)
Explanation:
Monitor size doesn’t matter here. So, we can easily deduct that answer should be (b) as this has the
highest memory requirements. Let us verify it.
Number of bits required to store a 16M-colour pixel = ceil(log2(16,000,000)) = 24.
Number of bytes required for 1600 x 400 resolution with 16M colours = (1600 * 400 * 24)/8 = 1,920,000 bytes, which is greater than 1 MB.

14. Consider a virtual memory system with FIFO page replacement policy. For an arbitrary page access
pattern, increasing the number of page frames in main memory will (GATE CS 2001)
a) Always decrease the number of page faults
b) Always increase the number of page faults

c) Sometimes increase the number of page faults


d) Never affect the number of page faults

Answer: (c)
Explanation:
Incrementing the number of page frames doesn’t always decrease the page faults (Belady’s Anomaly).

15. Which of the following requires a device driver? (GATE CS 2001)


a) Register b) Cache c) Main memory d) Disk

Answer: (d)

16. Using a larger block size in a fixed block size file system leads to (GATE CS 2003)
a) better disk throughput but poorer disk space utilization
b) better disk throughput and better disk space utilization
c) poorer disk throughput but better disk space utilization
d) poorer disk throughput and poorer disk space utilization

Answer (a)
If block size is large then seek time is less (fewer blocks to seek) and disk performance is improved, but
remember larger block size also causes waste of disk space.
17. Consider the following statements with respect to user-level threads and kernel supported threads
i. context switch is faster with kernel-supported threads
ii. for user-level threads, a system call can block the entire process
iii. Kernel supported threads can be scheduled independently
iv. User level threads are transparent to the kernel
Which of the above statements are true? (GATE CS 2004)
a) (ii), (iii) and (iv) only
b) (ii) and (iii) only
c) (i) and (iii) only
d) (i) and (ii) only
Answer(a)

18. The minimum number of page frames that must be allocated to a running process in a virtual memory
environment is determined by (GATE CS 2004)
a) the instruction set architecture
b) page size
c) physical memory size


d) number of processes in memory

Answer (a)
Each process needs a minimum number of pages determined by the instruction set architecture. Example, IBM 370: 6 pages are needed to handle the MVC (storage-to-storage move) instruction:
the instruction itself is 6 bytes and might span 2 pages,
2 pages to handle the source operand, and
2 pages to handle the destination operand.
19. In a system with 32 bit virtual addresses and 1 KB page size, use of one-level page tables for virtual to
physical address translation is not practical because of (GATE CS 2003)
a) the large amount of internal fragmentation
b) the large amount of external fragmentation
c) the large memory overhead in maintaining page tables
d) the large computation overhead in the translation process
Answer (c)
Since page size is too small it will make size of page tables huge.

Size of page table =


(total number of page table entries) *(size of a page table entry)

Let us see how many entries are there in page table

Number of entries in page table =


(virtual address space size)/(page size)
= (2^32)/(2^10)
= 2^22

Now, let us see how big each entry is.


If size of physical memory is 512 MB then number of bits required to address a byte in 512 MB is 29. So,
there will be (512MB)/(1KB) = (2^29)/(2^10) page frames in physical memory. To address a page
frame 19 bits are required. Therefore, each entry in page table is required to have 19 bits.

Note that page table entry also holds auxiliary information about the page such
as a present bit, a dirty or modified bit, address space or process ID information,
amongst others. So size of page table
> (total number of page table entries) * (size of a page table entry)
> (2^22 * 19) bits
≈ 9.5 MB

And this much memory is required for each process because each process maintains its own page table.
Also, size of page table will be more for physical memory more than 512MB. Therefore, it is advised to
use multilevel page table for such scenarios.
