Operating System
Study Material
Operating System (OS)
By
Siddharth Shukla
Website: www.igate.guru
By Siddharth S. Shukla (BE, ME, PhD* )
i-GATE , B-713, Street 22, Smriti Nagar, Bhilai- 490020, Contact Mobile 98271-62352
No part of this booklet may be reproduced or utilized in any form without written permission. All rights are reserved.
Copyright © i-GATE publication
First edition 2021
Operating System (OS)
Syllabus –
Processes
Threads
Inter-process communication
Concurrency
Synchronization
Deadlock
CPU scheduling
Memory management and virtual memory
I/O systems
Operating system –
User friendly
Convenient
Robustness
Reliable
Efficient use of hardware
Scalability
CPU scheduling
Protection
Security
Memory management
Handling errors
Acting as interface
Differences between operating systems for mainframe computers and personal computers –
Operating systems for batch (mainframe) systems have simpler requirements than those for personal
computers. Batch systems do not have to be concerned with interacting with a user as much as a
personal computer does. As a result, an operating system for a PC must be concerned with response
time for an interactive user; batch systems have no such requirement.
A pure batch system also may not have to handle time sharing, whereas a time-sharing operating
system must switch rapidly between different jobs.
User view
System view
Functionality view
Modes of Execution –
User mode
Kernel mode
The operating system is that portion of the software that runs in kernel mode or supervisor mode. It is
protected from user tampering by the hardware. Certain instructions can be executed only when
the CPU is in kernel mode. Hardware devices can be accessed only when the program is executing in
kernel mode. Control over when interrupts can be enabled or disabled is also possible only when
the CPU is in kernel mode.
Compilers and editors run in user mode. The CPU has very limited capability when executing in user
mode, thereby enforcing protection of critical resources.
Basic elements –
Processor: Controls the operation of the computer and performs its data processing functions.
When there is only one processor, it is often referred to as the central processing unit (CPU).
Main memory: Stores data and programs. This memory is typically volatile; that is, when the
computer is shut down, the contents of the memory are lost. In contrast, the contents of disk
memory are retained even when the computer system is shut down. Main memory is also referred
to as real memory or primary memory.
I/O modules: Move data between the computer and its external environment. The external
environment consists of a variety of devices, including secondary memory devices (e.g., disks),
communications equipment, and terminals.
System bus: Provides for communication among processors, main memory, and I/O modules.
One of the processor’s functions is to exchange data with memory.
It typically makes use of two internal registers:
a) Memory address register (MAR), which specifies the address in memory for the next
read or write.
b) Memory buffer register (MBR), which contains the data to be written into memory or
which receives the data read from memory.
Similarly, an I/O address register (I/OAR) specifies a particular I/O device. An I/O buffer
register (I/OBR) is used for the exchange of data between an I/O module and the
processor.
A memory module consists of a set of locations, defined by sequentially numbered
addresses. Each location contains a bit pattern that can be interpreted as either an
instruction or data.
An I/O module transfers data from external devices to processor and memory, and vice
versa. It contains internal buffers for temporarily holding data until they can be sent on.
The OS also manages secondary memory and I/O (input/output) devices on behalf of its
users.
Layered structure -
Kernel structure -
Only the essential part of the kernel is loaded into memory; the remaining parts are loaded on request.
A. Batch OS –
It requires grouping of similar jobs, which consist of programs, data and system
commands.
Users have no control over the result of a program.
Debugging is done off-line.
B. Multiprogramming OS –
Simultaneous execution of multiple programs. It improves system throughput and resource
utilization.
Example: Windows XP, 98
Multitasking OS –
Multi-user OS –
Multiprocessing OS –
The term 'multiprocessing' means multiple CPUs perform more than one job at a time.
In contrast, 'multitasking' means a situation in which a single CPU divides its time
among more than one job.
Processor registers –
A processor includes a set of registers that provide memory that is faster and smaller than main
memory. Processor registers serve two functions:
1. User-visible registers: Enable the machine or assembly language programmer to minimize main
memory references by optimizing register use. For high level languages, an optimizing compiler
will attempt to make intelligent choices of which variables to assign to registers and which to main
memory locations. Types of registers that are typically available are data, address, and condition
code registers.
a. Data registers can be assigned to a variety of functions by the programmer. In some cases,
they are general purpose in nature and can be used with any machine instruction that
performs operations on data.
b. Address registers contain main memory addresses of data and instructions, or they contain
a portion of the address that is used in the calculation of the complete or effective address.
These registers may themselves be general purpose, or may be devoted to a particular way,
or mode, of addressing memory.
c. Condition codes (also referred to as flags) are bits typically set by the processor hardware
as the result of operations. For example, an arithmetic operation may produce a positive,
negative, zero, or overflow result.
2. Control and status registers: Used by the processor to control the operation of the processor and
by privileged OS routines to control the execution of programs. A variety of processor registers are
employed to control the operation of the processor. On most processors, most of these are not
visible to the user. Some of them may be accessible by machine instructions executed in what is
referred to as a control or kernel mode.
a. Program counter (PC): Contains the address of the next instruction to be fetched.
b. Instruction register (IR): Contains the instruction most recently fetched.
Software -
1. Freeware –
Software that is freely available for download and use.
No restriction on use.
2. Shareware –
Software distributed free of charge on a trial basis; users are expected to pay for continued use.
3. Firmware –
Software embedded into a hardware device by its manufacturer (e.g. the BIOS stored in ROM).
--------------------◄►--------------------
1. PROCESSES
Process -
The operating system is responsible for the following activities in connection with process
management:
Data →
1. Static data
Example – Variables and data structures whose size is fixed.
2. Dynamic data
Example – Space allocated during runtime using dynamic memory allocation.
Memory for static variables is allocated at load time, not at compile time.
Example – int a, b;                      // static data
          int *p = malloc(sizeof(int)); // dynamic data
Runtime stack – An activation record is maintained for each function call that is generated.
As a process executes, it changes state. The state of a process is defined in part by the current activity
of that process.
1. New: A process that has just been created but has not yet been admitted to the pool of executable
processes by the OS. Typically, a new process has not yet been loaded into main memory, although
its process control block has been created.
2. Ready: A process that is prepared to execute when given the opportunity.
3. Running: Instructions are being executed. Assuming a computer with a single processor,
at most one process at a time can be in this state.
4. Blocked/Waiting: A process that cannot execute until some event occurs, such as the completion of
an I/O operation or reception of a signal.
5. Exit/ Terminated: A process that has been released from the pool of executable processes by the
OS, either because it halted or because it aborted for some reason.
The types of events that lead to each state transition for a process; the possible transitions are as
follows:
Running → Blocked: A process is put in the Blocked state if it requests something for which it must
wait. A request to the OS is usually in the form of a system service call; that is, a call from the
running program to a procedure that is part of the operating system code.
Blocked → Ready: A process in the Blocked state is moved to the Ready state when the event for
which it has been waiting occurs.
Ready → Exit: For clarity, this transition is not shown on the state diagram. In some systems, a
parent may terminate a child process at any time. Also, if a parent terminates, all child processes
associated with that parent may be terminated.
Blocked → Exit: The comments under the preceding item apply.
Each process is represented in the operating system by a process control block (PCB) also called a
task control block.
Process state: The state may be new, ready, running, waiting, halted and so on.
Program counter: The counter indicates the address of the next instruction to be executed for this
process.
PSW: U/S (user/supervisor mode), EI (enable interrupt), interrupt level mask, condition codes, etc.
CPU registers: The registers vary in number and type, depending on the computer architecture.
They include accumulators, index registers, stack pointers, and general-purpose registers, plus
any condition-code information.
CPU-scheduling information: This information includes a process priority, pointers to scheduling
queues, and any other scheduling parameters.
Link: Pointer to the next PCB in the same scheduling queue.
Memory-management information: This information may include such information as the value of
the base and limit registers, the page tables, or the segment tables, depending on the memory-
management system used by the operating system.
Accounting information: This information includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers, and so on.
I/O status information: This information includes the list of I/O devices allocated to the process, a
list of open files, and so on.
In brief, the PCB simply serves as the repository for any information that may vary from process to
process.
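The fields listed above can be sketched as a C structure; this is only an illustrative layout with invented field names, not the PCB of any real kernel:

```c
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Hypothetical PCB layout, one struct per process. */
struct pcb {
    int        pid;            /* unique process ID                     */
    proc_state state;          /* new, ready, running, waiting, ...     */
    uint32_t   pc;             /* saved program counter                 */
    uint32_t   regs[16];       /* saved CPU registers                   */
    int        priority;       /* CPU-scheduling information            */
    uint32_t   base, limit;    /* memory-management information         */
    long       cpu_time_used;  /* accounting information                */
    int        open_files[16]; /* I/O status information                */
    struct pcb *next;          /* link to next PCB in a scheduling queue */
};
```

The next pointer is what lets the OS string PCBs together into the ready queue and device queues described later.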
Context switching –
When CPU switches from process Pi to process Pj , state of Pi has to be saved and state of Pj has to
be loaded (from PCB).
Process creation –
1. System initialization.
2. Execution of a process creation system call by a running process.
3. A user request to create a new process.
4. Initiation of a batch job.
When an operating system is booted, typically several processes are created. Some of these are
foreground processes, that is, processes that interact with (human) users and perform work for
them. Others are background processes, which are not associated with particular users, but
instead have some specific function.
All processes have a unique process ID. The getpid() and getppid() system calls allow a process
to obtain its own ID and its parent's ID.
System call
A system call provides the interface between a process and the operating system. These calls are
generally available as assembly-language instructions or as library routines. System calls for the
modern Microsoft Windows platform are part of the Win32 application programmer interface (API).
fork()
exec()
signal()
kill()
clone()
vfork()
wait()
exit()
Process Suspension –
A process is swapped out on a temporary basis and later resumed, to improve performance.
Suspension is done for performance reasons, not for I/O.
All three states (ready, running, blocked) can lead to suspension.
There are two independent concepts here: whether a process is waiting on an event (blocked or
not) and whether a process has been swapped out of main memory (suspended or not).
To accommodate this 2 × 2 combination, we need four states:
i. Ready: The process is in main memory and available for execution.
ii. Blocked: The process is in main memory and awaiting an event.
iii. Blocked/Suspend: The process is in secondary memory and awaiting an event.
iv. Ready/Suspend: The process is in secondary memory but is available for execution as soon
as it is loaded into main memory.
Blocked → Blocked/Suspend: If there are no ready processes, then at least one blocked process is
swapped out to make room for another process that is not blocked. This transition can be made
even if there are ready processes available, if the OS determines that the currently running
process or a ready process that it would like to dispatch requires more main memory to maintain
adequate performance.
Blocked/Suspend → Ready/Suspend: A process in the Blocked/Suspend state is moved to the
Ready/Suspend state when the event for which it has been waiting occurs. Note that this requires
that the state information concerning suspended processes must be accessible to the OS.
Ready/Suspend → Ready: When there are no ready processes in main memory, the OS will need to
bring one in to continue execution. In addition, it might be the case that a process in the
Ready/Suspend state has higher priority than any of the processes in the Ready state. In that case,
the OS designer may dictate that it is more important to get at the higher-priority process than to
minimize swapping.
Ready → Ready/Suspend: Normally, the OS would prefer to suspend a blocked process rather than
a ready one, because the ready process can now be executed, whereas the blocked process is
taking up main memory space and cannot be executed. However, it may be necessary to suspend a
ready process if that is the only way to free up a sufficiently large block of main memory. Also, the
OS may choose to suspend a lower-priority ready process rather than a higher priority blocked
process if it believes that the blocked process will be ready soon.
Several other transitions that are worth considering are the following:
New → Ready/Suspend and New → Ready: When a new process is created, it can either be added to
the Ready queue or the Ready/Suspend queue. In either case, the OS must create a process control
block and allocate an address space to the process. It might be preferable for the OS to perform
these housekeeping duties at an early time, so that it can maintain a large pool of processes that
are not blocked. With this strategy, there would often be insufficient room in main memory for a
new process; hence the use of the (New → Ready/Suspend) transition. On the other hand, we
could argue that a just-in-time philosophy of creating processes as late as possible reduces OS
overhead and allows the OS to perform the process-creation duties at a time when the system is
clogged with blocked processes anyway.
Blocked/Suspend → Blocked: Inclusion of this transition may seem to be poor design. After all, if a
process is not ready to execute and is not already in main memory, what is the point of bringing it
in? But consider the following scenario: A process terminates, freeing up some main memory.
There is a process in the (Blocked/Suspend) queue with a higher priority than any of the
processes in the (Ready/Suspend) queue and the OS has reason to believe that the blocking event
for that process will occur soon. Under these circumstances, it would seem reasonable to bring a
blocked process into main memory in preference to a ready process.
Running → Ready/Suspend: Normally, a running process is moved to the Ready state when its time
allocation expires. If, however, the OS is preempting the process because a higher-priority process
on the Blocked/Suspend queue has just become unblocked, the OS could move the running
process directly to the (Ready/Suspend) queue and free some main memory.
Any State → Exit: Typically, a process terminates while it is running, either because it has
completed or because of some fatal fault condition. However, in some operating systems, a process
may be terminated by the process that created it or when the parent process is itself terminated. If
this is allowed, then a process in any state can be moved to the Exit state.
Process Termination –
After a process has been created, it starts running and does whatever its job is. However, nothing
lasts forever, not even processes. Sooner or later the new process will terminate, usually due to
one of the following conditions:
--------------------◄►--------------------
2. THREADS
Thread –
In a single-threaded process model the representation of a process includes its process control
block and user address space, as well as user and kernel stacks to manage the call/return behavior
of the execution of the process.
While the process is running, it controls the processor registers. The contents of these registers
are saved when the process is not running.
In a multithreaded environment, there is still a single process control block and user address
space associated with the process, but now there are separate stacks for each thread, as well as a
separate control block for each thread containing register values, priority, and other thread-
related state information.
Types of thread –
1. User-Level Threads –
In a pure ULT facility, all of the work of thread management is done by the application.
The kernel is not aware of the existence of threads.
User threads are supported above the kernel and are managed without kernel support.
The threads library contains code for creating and destroying threads, for passing
messages and data between threads, for scheduling thread execution, and for saving and
restoring thread contexts.
2. Kernel-Level Threads –
In a pure KLT facility, all of the work of thread management is done by the kernel.
There is no thread management code in the application level, simply an application
programming interface (API) to the kernel thread facility.
Kernel threads are supported and managed directly by the operating system.
Windows is an example of this approach.
Advantages –
It takes less time to create a new thread in an existing process than to create a brand new process.
It takes less time to terminate a thread than a process.
Switching between threads is faster than a normal context switch.
Threads enhance efficiency of communication between concurrently executing activities; no kernel
involvement is needed.
Multithreading Models –
i. Many-to-One Model –
The many-to-one model maps many user-level threads to one kernel thread. Thread management
is done by the thread library in user space, so it is efficient; but the entire process will block if a
thread makes a blocking system call. Also, because only one thread can access the kernel at a time,
multiple threads are unable to run in parallel on multiprocessors.
Concurrent programming –
Assumptions –
2 or more threads.
Each executes in parallel.
We can’t predict exact running speeds.
The threads can interact via access to shared variables.
Example –
Security of threads –
Since there is extensive sharing among threads, there is a potential security problem. It is
quite possible that one thread overwrites the stack of another thread, although this is very unlikely
since threads are meant to cooperate on a single task.
Advantages of threads –
The best advantage is that a user-level threads package can be implemented on any operating
system.
They do not require modifications to the operating system.
They have a simple representation, i.e. each thread is represented by a PC, registers, a stack and a
small control block, all stored in the user process address space.
They have simple management, i.e. thread creation, switching between threads and
synchronization between threads can all be done without intervention of the kernel.
They are fast and efficient, i.e. thread switching is not much more expensive than a procedure call.
Advantages –
Context switching.
Sharing.
Disadvantages –
Blocking.
Disadvantages of Threads –
There is a lack of coordination between threads and the kernel. Therefore a process as a whole gets one
time slice irrespective of whether it has one thread or 1000 threads within it. Each thread
relinquishes control to the other threads.
They require non-blocking system calls, i.e., a multithreaded kernel. Otherwise the entire process
will be blocked in the kernel, even if there are runnable threads left in the process. If one thread
causes a page fault, the whole process blocks.
Process scheduling –
The objective of multiprogramming is to have some process running at all times, to maximize CPU
utilization.
The objective of time sharing is to switch the CPU among processes so frequently that users can
interact with each program while it is running.
The aim of processor scheduling is to assign processes to be executed by the processor or
processors over time, in a way that meets system objectives, such as response time, throughput,
and processor efficiency.
Scheduling queue –
Queues are implemented as lists stored in main memory and are maintained by the process manager.
1. Ready queue.
2. Job queue.
i. Device queue
ii. Event queue
Job queue consists of all processes in the system.
The processes that are residing in main memory and are ready and waiting to execute are kept on
a list called the ready queue.
The list of processes waiting for a particular I/O device is called a device queue.
Queuing diagram –
Processes entering into the system are put into a job queue.
Processes in memory waiting to be executed are kept in a list called ready queue.
Scheduler –
A process migrates among the various scheduling queues throughout its lifetime. The operating
system must select, for scheduling purposes, processes from these queues in some fashion. The
selection process is carried out by the appropriate scheduler.
Types of Scheduler –
The names suggest the relative time scales with which these functions are performed.
The key idea behind a medium-term scheduler is that sometimes it can be advantageous to
remove processes from memory and thus reduce the degree of multiprogramming.
Later, the process can be reintroduced into memory, and its execution can be continued
where it left off. This scheme is called swapping.
The process is swapped out, and is later swapped in, by the medium-term scheduler.
The long-term scheduler executes much less frequently; minutes may separate the creation
of one new process and the next.
Context Switch –
Switching the CPU to another process requires saving the state of the old process and loading the
saved state of the new process. This task is known as a context switch.
The context of a process is represented in the PCB of the process; it includes the values of the CPU
registers, the process state and memory-management information.
Context-switch time typically varies from 1 to 1000 microseconds.
Context-switch times are highly dependent on hardware support.
On processors that provide multiple register sets, a context switch simply requires changing the
pointer to the current register set.
Levels of scheduling –
IPC is best provided by a message-passing system, and message passing systems can be defined in
many different ways –
The function of a message system is to allow processes to communicate with one another without
the need to resort to shared data. An IPC facility provides at least the two operations
send(message) and receive(message).
If processes P and Q want to communicate, they must send messages to and receive messages
from each other; a communication link must exist between them.
Several methods for logically implementing a link and the send()/receive() operations are:
Direct or indirect communication.
Synchronous or asynchronous communication.
Automatic or explicit buffering.
Processes that want to communicate can use either direct or indirect communication.
In direct communication, each process that wants to communicate must explicitly name the
recipient or sender of the communication. In this scheme, the send() and receive() primitives are
defined as:
send(P, message) — Send a message to process P.
receive(Q, message) — Receive a message from process Q.
In indirect communication, the messages are sent to and received from mailboxes, or ports.
The send() and receive() primitives are defined as follows:
send(A, message) —Send a message to mailbox A.
receive(A, message) — Receive a message from mailbox A.
Buffering –
Zero capacity: The queue has maximum length 0; thus, the link cannot have any messages
waiting in it. In this case, the sender must block until the recipient receives the message.
Bounded capacity: The queue has finite length n; thus, at most n messages can reside in it. If
the queue is not full when a new message is sent, the latter is placed in the queue and the
sender can continue execution without waiting. The link has a finite capacity, however. If
the link is full, the sender must block until space is available in the queue.
Unbounded capacity: The queue has infinite length; thus, any number of messages can wait
in it. The sender never blocks.
Process synchronization –
Race condition
Deadlock
The situation, where several threads access and manipulate the same data concurrently, and
where the outcome of the execution depends on the particular order in which the access takes
place, is called a race condition.
Critical section –
A section of code, or set of operations, in which a process may be changing shared variables,
updating a table, writing a file, and so on.
Architecture –
If several processes operate on common data in a common file, then only one
process is allowed to be in the C.S. at a time, and once a process starts its operation in the C.S. it
completes the entire operation before it leaves through the exit section.
A C.S. environment contains –
Entry section
Critical section
Exit section
A solution to the critical-section problem must satisfy the following three requirements.
Mutual exclusion
Progress
Bounded waiting
Mutual exclusion: No more than one process can execute in its critical section at a time.
Progress: A process running outside its critical section must not block others from entering.
If no process is executing in its critical section and there exist some processes that wish to enter
their critical sections, then only those processes that are not executing in their critical sections can
participate in the decision of which will enter its critical section next, and this selection cannot be
postponed indefinitely.
Bounded waiting: No process should have to wait forever to enter its critical section.
Synchronization Mechanism –
One hardware method is to disable interrupts (DI) while a process is modifying a shared variable, so the modification cannot be preempted.
1) Strict alternation –
Algorithm 1 –

void P0(void)                          void P1(void)
{                                      {
    while (1)                              while (1)
    {                                      {
        non_cs( );                             non_cs( );
entry:  while (turn != 0);             entry:  while (turn != 1);
        /* C.S */                              /* C.S */
exit:   turn = 1;                      exit:   turn = 0;
    }                                      }
}                                      }
Algorithm 2 (flag variables) –

/* process P0 */                       /* process P1 */
flag[0] = true;                        flag[1] = true;
while (flag[1])                        while (flag[0])
{                                      {
    BW;                                    BW;
}                                      }
< C.S >                                < C.S >
flag[0] = false;                       flag[1] = false;

Algorithm 3 (Peterson's solution) –

/* process P0 */                       /* process P1 */
flag[0] = true;                        flag[1] = true;
turn = 1;                              turn = 0;
while (flag[1] && turn == 1)           while (flag[0] && turn == 0)
{                                      {
    BW;                                    BW;
}                                      }
< C.S >                                < C.S >
flag[0] = false;                       flag[1] = false;
#define n 100
int Buffer[n];
int count = 0;

/* Producer */                         /* Consumer */
while (1)                              while (1)
{                                      {
    produceitem(itemp);                    while (count == 0);
    while (count == n);                    itemc = Buffer[out];
    Buffer[in] = itemp;                    count = count - 1;
    count = count + 1;                     consumeitem(itemc);
}                                      }
SEMAPHORE –
A semaphore could have the value 0, indicating that no wakeups were saved, or some positive
value if one or more wakeups were pending.
A semaphore may be initialized to a nonnegative integer value.
Semaphore operations are executed in kernel mode.
wait(S) :   while (S <= 0)
            {
                /* keep testing */
            }
            S = S - 1;

signal(S) : S = S + 1;
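These wait/signal semantics can be sketched with Python's `threading.Semaphore`, where `acquire()` plays the role of wait(S) and `release()` of signal(S); the initial value of 2 is an illustrative assumption:

```python
import threading

S = threading.Semaphore(2)               # initialized to a nonnegative value

got1 = S.acquire(blocking=False)         # wait(S): S becomes 1
got2 = S.acquire(blocking=False)         # wait(S): S becomes 0
blocked = not S.acquire(blocking=False)  # S == 0, so wait(S) would block
S.release()                              # signal(S): S becomes 1 again
got3 = S.acquire(blocking=False)         # a waiter can now proceed
```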
Types of semaphore –
A. binary semaphore –
It is also known as “Mutex”. Binary semaphore is initialized by O.S. to 1.
wait operation decrement the value by 1. signal operation increment the value by 1.
Binary semaphore can take value either 0 or 1.
int S = 1;

wait(S)                                signal(S)
{                                      {
    while (S <= 0)                         S++;
    {                                  }
        BW;
    }
    S--;
}
Advantages –
Disadvantages –
A process waiting to enter its C.S. performs busy waiting, thus wasting CPU cycles.
B. Counting semaphore –
A counting semaphore consists of an integer value and a pointer to a process queue; the queue
holds the process control blocks (PCBs) of all processes that are waiting to enter their critical
sections. The queue is implemented as a FCFS queue, so the waiting processes are served in FCFS order.
Implementation –
struct CSEMAPHORE
{
    int value ;
    QueueType L ;
};
CSEMAPHORE S ;
S.value = 4 ;
DOWN(S) ;
UP(S);

DOWN( CSEMAPHORE S )
{
    S.value = S.value - 1 ;
    if (S.value < 0)
    {
        add the calling process to S.L
        &
        sleep( ) ;
    }
}
UP operation defined as –
UP( CSEMAPHORE S )
{
    S.value = S.value + 1 ;
    if ( S.value <= 0 )
    {
        select a process from S.L( )
        &
        wakeup( ) ;
    }
}
Advantages –
The waiting processes are permitted to enter their critical sections in FCFS order, so the
requirement of bounded waiting is met.
CPU cycles are saved here, as a waiting process does not perform any busy waiting.
Disadvantages –
4. CONCURRENCY
Concurrency control –
Concurrency –
Real –
Achieved through multiprocessors, array processors, and vector processors (physical
concurrency).
Pseudo –
Achieved by interleaving processes on a single processor (logical concurrency).
Dependency graph –
Example –
S1 : a = b + c ;
S2 : d = e + f ;
S3 : k = a + d ;
S4 : l = k * m ;
1. Flow dependency –
flow dependency : I → J
I : R{b, c}, W{a}
J : R{a}, W{k}
W( I ) ∩ R( J ) ≠ ∅
2. Anti dependency –
I : b = a + c
J : a = k + l
Anti dependency : I → J
I : W{b}, R{a, c}
J : W{a}, R{k, l}
R( I ) ∩ W( J ) ≠ ∅
3. Output dependency –
I : a = b + c
J : a = d + e
Output dependency : I → J
W( I ) ∩ W( J ) ≠ ∅
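The three intersection tests can be collected into one small routine. This Python sketch classifies the dependence between two statements from their read/write sets; a dependence exists when the corresponding intersection is non-empty:

```python
def dependences(R_i, W_i, R_j, W_j):
    kinds = []
    if W_i & R_j:
        kinds.append("flow")    # I writes what J reads
    if R_i & W_j:
        kinds.append("anti")    # I reads what J overwrites
    if W_i & W_j:
        kinds.append("output")  # I and J write the same variable
    return kinds

# S1 : a = b + c ;  and  S3 : k = a + d ;  from the example above
d = dependences(R_i={"b", "c"}, W_i={"a"}, R_j={"a", "d"}, W_j={"k"})
```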
join(count)
{
    count = count - 1 ;
    if (count != 0)
        exit ;
}
--------------------◄►--------------------
5. DEADLOCKS
Deadlock –
A process requests resources; if the resources are not available at that time, the process enters a
wait state. Waiting processes may never again change state, because the resources they have
requested are held by other waiting processes. This situation is called a deadlock.
Deadlock is permanent because none of the events is ever triggered.
A set of processes is deadlocked if each process in the set is waiting for an event that only another
process in the set can cause.
Another name for deadlock is LOCK – UP.
Example –
System model –
Each resource type Ri has Wi instances. Each process utilizes a resource as follows –
a) Request –
If the resource is not available when it is requested, the requesting process is forced to wait.
b) Assignment –
The operating system will assign to the requesting process an instance of the requested resource,
whenever it is available. Then, the process comes out of its waiting state.
c) Use –
The process will use the assigned resource.
d) Release –
The processes release the resource.
a) Mutual exclusion –
Each resource is either currently assigned to exactly one process or is available.
Only one process may use a resource at a time. No process may access a resource unit that
has been allocated to another process.
If another process requests that resource, the requesting process must be delayed until the
resource has been released.
b) Hold and wait –
A process holds at least one resource while waiting to acquire additional resources that are
currently held by other processes.
c) No preemption –
Resources previously granted cannot be forcibly taken away from a process. They must be
explicitly released by the process holding them.
d) Circular wait –
A closed chain of processes exists, such that each process holds at least one resource
needed by the next process in the chain.
1. No deadlock
2. Allow deadlock
No Deadlock occur –
i. Prevention
ii. Avoidance
Deadlock occur –
i. Detection & recovery
Deadlock detection –
The resource allocation graph is a directed graph that depicts a state of the system of resources
and processes, with each process and each resource represented by a node.
The graph consists of 2 sets, a set of vertices V and a set of edges E.
The set of vertices V is partitioned into two different types of nodes:
P = {P1, P2, . . . . . . . . , Pn}, the set consisting of all the active processes in the system, and
R = {R1, R2, . . . . . . . . . , Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that process
Pi requested an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an
instance of resource type Rj has been allocated to process Pi.
Examples –
Note –
Deadlock prevention –
1. Mutual exclusion
2. Hold and wait
3. No preemption
4. Circular wait
Mutual exclusion –
If no resource were ever assigned exclusively to a single process, we would never have
deadlocks.
This must hold for non – sharable resources.
Hold and wait –
If we can prevent processes that hold resources from waiting for more resources, we can
eliminate deadlocks.
The hold-and-wait condition can be prevented by requiring that a process request all of its
required resources at one time and blocking the process until all requests can be granted
simultaneously.
No preemption –
If a process is holding some resources and requests another resource that cannot be
immediately allocated to it (that is, the process must wait), then all resources currently being
held are preempted.
The preempted resources are added to the list of resources for which the process is waiting.
The process will be restarted only when it can regain its old resources, as well as the new ones
that it is requesting.
There are two approaches –
Self
Force
Circular wait –
One way to ensure that this condition never holds is to impose a total ordering of all resource
types and to require that each process requests resources in an increasing order of
enumeration.
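Imposing a total order on acquisition is easy to sketch in Python; the resource names R1–R3 and the helper below are illustrative assumptions. Every process sorts its requests by the global order before acquiring, so no cycle of waiting can form:

```python
import threading

locks = {"R1": threading.Lock(), "R2": threading.Lock(), "R3": threading.Lock()}
ORDER = ["R1", "R2", "R3"]        # the imposed total ordering of resource types

def acquire_in_order(names):
    ordered = sorted(names, key=ORDER.index)  # request in increasing enumeration
    for n in ordered:
        locks[n].acquire()
    return ordered

got = acquire_in_order({"R3", "R1"})   # always grabs R1 before R3
for n in got:
    locks[n].release()
```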
Deadlock avoidance –
Simplest and most useful model requires that each process declares the maximum number of
resources of each type that it may need.
The deadlock avoidance algorithm dynamically examines the resource allocation state to
ensure that there can never be a circular wait condition.
The resource-allocation state is defined by the number of available and allocated resources and
the maximum demands of the processes.
Safe state –
A sequence of processes <P1, P2, . . . . . . . . . , Pn> is a safe sequence for the current allocation
state if, for each Pi, the resource requests that Pi can still make can be satisfied by the currently
available resources plus the resources held by all Pj , with j < i.
If the resources that Pi needs are not immediately available, then Pi can wait until all Pj with j < i
have finished. When they have finished, Pi can obtain all of its needed resources, complete its
designated task, return its allocated resources, and terminate. When Pi terminates, Pi+1 can obtain
its needed resources, and so on.
An edge from Pi → Rj indicates that process Pi may request resource Rj at some time in the future.
This is called claim edge.
This edge resembles a request edge in direction but is represented by a dashed line.
When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a request edge.
Similarly, when a resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to a
claim edge Pi → Rj.
Banker’s algorithm –
The banker’s algorithm considers each request as it occurs, and sees if granting it leads to a safe
state. If it does, the request is granted; otherwise, it is postponed until later.
OR
When a user requests a set of resources, the system must determine whether the allocation of
these resources will leave the system in a safe state. If it will, the resources are allocated;
otherwise, the process must wait until some other process releases enough resources.
These data structures vary over time in both size and value.
Safety Algorithm –
The algorithm for finding out whether or not a system is in a safe state can be described as
follows:
Step 1 : Let Work and Finish be vectors of length m and n, respectively.
Initialize Work = Available and Finish[i] = false for i = 0, 1, . . . , n - 1.
Step 2 : Find an index i such that both
a. Finish[i] == false
b. Needi ≤ Work
If no such i exists, go to step 4.
Step 3 : Work = Work + Allocationi
Finish[i] = true
Go to step 2.
Step 4 : If Finish[i] == true for all i, then the system is in a safe state.
This algorithm may require an order of m × n² operations to decide whether a state is safe.
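The safety check can be sketched in Python; the data layout (per-process resource vectors as lists) is an assumption, and the numbers fed in below are the Available, Max, and Allocation values of the worked example that follows:

```python
def is_safe(available, max_, allocation):
    n = len(max_)
    work = list(available)
    need = [[m - a for m, a in zip(max_[i], allocation[i])] for i in range(n)]
    finish = [False] * n
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # pretend Pi runs to completion and returns its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order

safe, seq = is_safe([3, 3, 2],
                    [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
                    [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]])
```

`seq` is one safe sequence (scanning processes in index order); other safe sequences may also exist for the same state.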
Resource-Request Algorithm –
When a request for resources is made by process Pi, the following actions are taken:
Step 1 : If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process
has exceeded its maximum claim.
Step 2 : If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not
available.
Step 3 : Have the system pretend to have allocated the requested resources to process Pi by
modifying the state as follows:
Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti
If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is
allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti, and the
old resource-allocation state is restored.
Example –
Consider a system with five processes P0 through P4 and three resource types A, B, C. Resource
type A has 10 instances, resource type B has 5 instances, and resource type C has 7 instances.
Suppose that, at time T0, the following snapshot of the system has been taken:

Process    Allocation    Max       Available
            A B C        A B C     A B C
P0          0 1 0        7 5 3     3 3 2
P1          2 0 0        3 2 2
P2          3 0 2        9 0 2
P3          2 1 1        2 2 2
P4          0 0 2        4 3 3
We can say that the system is currently in a safe state. Indeed, the sequence <P1, P3, P4, P2, P0>
satisfies the safety criteria. Suppose now that process P1 requests one additional instance of
resource type A and two instances of resource type C, so Request1 = (1,0,2). To decide whether
this request can be immediately granted, we first check that Request1 ≤ Available — that is,
(1,0,2) ≤ (3,3,2), which is true. We then pretend that this request has been fulfilled.
The resulting system state is safe: the sequence <P1, P3, P4, P0, P2> satisfies our safety requirement.
Hence, we can immediately grant the request of process P1.
Note that when the system is in this state, a request for (3, 3, 0) by P4 cannot be granted, since the
resources are not available. Furthermore, a request for (0, 2, 0) by P0 cannot be granted, even
though the resources are available, since the resulting state is unsafe.
Deadlock detection is used by employing an algorithm that tracks the circular waiting and killing
one or more processes so that the deadlock is removed. The system state is examined periodically
to determine if a set of processes is deadlocked. A deadlock is resolved by aborting and restarting
a process, relinquishing all the resources that the process held.
If every resource in the RAG has only one instance (single instance), then we can define a deadlock
detection algorithm that uses a variant of the RAG called a wait-for graph.
We obtain this graph by removing the nodes of type resource and collapsing the appropriate edges.
If the wait-for graph contains a cycle, there is a deadlock in the system.
To detect deadlocks, the system needs to maintain the wait-for graph and to periodically
invoke an algorithm that searches for a cycle. The complexity of this algorithm is O(n²), where n is
the number of vertices in the graph.
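Cycle detection on a wait-for graph is a depth-first search looking for a back edge. A Python sketch, with illustrative edges:

```python
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for u in graph.get(v, []):
            if color[u] == GRAY:        # back edge: a cycle of waiting processes
                return True
            if color[u] == WHITE and dfs(u):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in graph)

deadlocked = has_cycle({"P1": ["P2"], "P2": ["P4"], "P4": ["P1"]})
free       = has_cycle({"P1": ["P2"], "P2": []})
```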
We draw the wait-for graph by removing all nodes that represent resources and collapsing their
edges.
Cycle → P1, P2, P4, P1
Cycle → P1, P2, P3, P4, P1
--------------------◄►--------------------
6. CPU SCHEDULING
CPU scheduling -
All systems – fairness, policy enforcement, and keeping all parts of the system busy.
Batch systems – throughput, turnaround time, and CPU utilization.
Interactive systems – response time and proportionality.
Real-time systems – meeting deadlines and predictability.
Terminology –
i. Throughput –
Throughput is the number of jobs per hour / per unit time that the system completes.
v. Waiting time –
Waiting time is the sum of the periods spent waiting in the ready queue.
WT = TAT – BT
WT = CT – AT – BT
It should be minimum.
vii. Deadline –
The time limit by which a process must complete its execution.
Types of scheduling –
I. Preemptive
II. Non – preemptive
Difference -
Dispatcher –
The dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler. This function involves the following:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program
Scheduling algorithms –
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to
be allocated the CPU. There are many different CPU scheduling algorithms.
1. First Come First Served (FCFS) –
The process that requests the CPU first is allocated the CPU first.
Process AT BT CT TAT WT RT
P1 0 5 5 5 0 0
P2 0 24 29 29 5 5
P3 0 16 45 45 29 29
P4 0 10 55 55 45 45
P5 0 3 58 58 55 55
Total - 192 134 134
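The table can be reproduced in a few lines of Python: under FCFS with every arrival at time 0, the completion time is just the running sum of burst times (burst values taken from the table):

```python
def fcfs(bursts):
    t, rows = 0, []
    for bt in bursts:
        t += bt
        rows.append({"CT": t, "TAT": t, "WT": t - bt})  # AT = 0 for every process
    return rows

rows = fcfs([5, 24, 16, 10, 3])          # P1..P5 from the table above
total_tat = sum(r["TAT"] for r in rows)  # 192
total_wt  = sum(r["WT"]  for r in rows)  # 134
```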
Advantages –
Disadvantages –
2. Shortest Job First (SJF) –
Example for non-preemptive SJF scheduling – Find the average waiting time and average TAT.
Process AT BT CT TAT WT
1 0 5 8 8 3
2 0 24 58 58 34
3 0 16 34 34 18
4 0 10 18 18 8
5 0 3 3 3 0
Total - 121 63
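A sketch of non-preemptive SJF with all arrivals at 0, reproducing the table above: sort by burst time, then accumulate completion times:

```python
def sjf(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    t, ct = 0, [0] * len(bursts)
    for i in order:
        t += bursts[i]
        ct[i] = t
    tat = ct[:]                                   # AT = 0, so TAT = CT
    wt = [tat[i] - bursts[i] for i in range(len(bursts))]
    return ct, tat, wt

ct, tat, wt = sjf([5, 24, 16, 10, 3])   # processes 1..5 from the table
```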
Preemptive SJF algorithm - A preemptive SJF algorithm will preempt the currently executing
process, whereas a non-preemptive SJF algorithm will allow the currently running process to
finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-remaining-time-first
scheduling.
Advantages –
Disadvantages –
3. Priority scheduling –
A priority is associated with each process, and the CPU is allocated to the process with the highest
priority. Equal-priority processes are scheduled in FCFS order.
Priority scheduling can be either preemptive or non preemptive. When a process arrives at the
ready queue, its priority is compared with the priority of the currently running process.
Process    AT    Priority    BT    CT    WT
P1         0     3           10    17    7
P2         1     2           5     8     2
P3         2     1           2     4     0
Process    AT    Priority    BT
P0         0     5           10
P1         1     4           6
P2         3     2           2
P3         5     0           4
Solution –
Gantt chart –
Process Priority BT
P1 3 10
P2 1 1
P3 3 2
P4 4 1
P5 2 5
Solution –
Gantt chart –
Process Priority BT CT WT
P1 3 10 16 6
P2 1 1 1 0
P3 3 2 18 16
P4 4 1 19 18
P5 2 5 6 1
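The completion times in this table follow from non-preemptive priority scheduling where a smaller number means higher priority and equal priorities are served FCFS; a Python sketch (all arrivals assumed to be 0):

```python
def priority_np(procs):                  # procs: list of (name, priority, burst)
    order = sorted(procs, key=lambda p: p[1])  # stable sort keeps FCFS among ties
    t, ct = 0, {}
    for name, _, bt in order:
        t += bt
        ct[name] = t
    return ct

ct = priority_np([("P1", 3, 10), ("P2", 1, 1), ("P3", 3, 2),
                  ("P4", 4, 1), ("P5", 2, 5)])
```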
Problems –
R = (W + S) / S
Where,
R = response ratio
W = time spent waiting for the processor
S = expected service time
Process AT BT
P1 0 3
P2 2 6
P3 4 4
P4 6 5
P5 8 2
Solution –
Gantt chart –
Process AT BT CT TAT
P1 0 3 3 3
P2 2 6 15 13
P3 4 4 8 4
P4 6 5 20 14
P5 8 2 10 2
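The completion and turnaround times in this table correspond to shortest-remaining-time-first (preemptive SJF) scheduling. A tick-by-tick Python sketch that reproduces them, assuming ties go to the earlier-listed process:

```python
def srtf(procs):                       # procs: {name: (arrival, burst)}
    rem = {n: bt for n, (at, bt) in procs.items()}
    ct, t = {}, 0
    while rem:
        ready = [n for n in rem if procs[n][0] <= t]
        if not ready:
            t += 1                     # CPU idle until the next arrival
            continue
        n = min(ready, key=lambda x: rem[x])   # shortest remaining time next
        rem[n] -= 1                    # run for one time unit
        t += 1
        if rem[n] == 0:
            ct[n] = t
            del rem[n]
    return ct

ct = srtf({"P1": (0, 3), "P2": (2, 6), "P3": (4, 4), "P4": (6, 5), "P5": (8, 2)})
```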
Example – What will be the completion time of all processes under the round robin algorithm?
( Time quantum 1 )
Process BT
P 4
Q 1
R 8
S 1
Solution – Gantt chart –
Process    BT    CT    WT
P          4     9     5
Q          1     2     1
R          8     14    6
S          1     4     3
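The completion times can be checked with a short round-robin simulator (time quantum 1, all arrivals at 0):

```python
from collections import deque

def round_robin(procs, tq=1):          # procs: list of (name, burst)
    rem = dict(procs)
    q = deque(n for n, _ in procs)     # ready queue in submission order
    t, ct = 0, {}
    while q:
        n = q.popleft()
        run = min(tq, rem[n])          # run one quantum (or less, to finish)
        t += run
        rem[n] -= run
        if rem[n]:
            q.append(n)                # back to the tail of the ready queue
        else:
            ct[n] = t
    return ct

ct = round_robin([("P", 4), ("Q", 1), ("R", 8), ("S", 1)])
```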
NOTE –
If the time quantum (TQ) is made extremely small, context-switch overhead dominates and efficiency approaches 0.
If TQ is low, more context switches occur.
If TQ is high, the behavior approaches FCFS.
TQ (Time quantum) –
Small – more context switching, so CPU overhead increases (but response time improves).
Large – less context-switching overhead (but less interactive response time).
Very large – works like FCFS (very poor response time).
Example -
--------------------◄►--------------------
7. MEMORY MANAGEMENT
Memory management –
The organization and management of main memory has been one of the most important factors
influencing the OS design.
Main memory management is primarily concerned with the allocation of main memory to
requesting processes.
Protection and sharing of memory are two important memory management functions.
Advantages
Simplicity
Small O.S
Disadvantages
Protection –
If two programs are in memory at the same time there is a chance that one program can
write to the address space of another program.
Relocation –
It generates absolute addresses. It must be known at the Compile time itself that where will
a process reside in the memory.
Problem: If the starting address of a process in memory changes, then the entire process must
be recompiled to generate the absolute addresses again.
Processes can be moved in memory during execution; this needs good hardware support.
Program relocation –
Relocation is the mechanism that converts a logical (or virtual) address to a physical address.
The MMU (Memory Management Unit) is special hardware that performs address binding; with
relocation-based addressing, code can be loaded and run correctly at different places in memory.
Logical and physical addresses differ in execution time address binding scheme. Logical and
physical addresses are same in compile time and load time address binding schemes.
Logical address –
The address of an instruction or data as used by a program; it is generated by the CPU. The logical
address space is depicted below.
Physical address –
It is the effective memory address of an instruction or data, obtained after address binding
has been done, i.e., after logical addresses have been mapped to their physical addresses.
The process of mapping logical addresses to physical addresses in the memory is called address
binding.
Relocation is necessary at the time of swapping in of a process from a backing store to the main
memory.
Types of Relocation –
Static relocation –
It is relocation performed during the loading of the program into memory by a loader.
In a system with static relocation, a swapped-out process must be swapped back into the
same partition from which it was removed.
Dynamic relocation –
It implies mapping from the virtual address space to the physical address space at run
time.
Protection –
Base Register
It contains the smallest legal physical memory address of the process.
Limit Register
It contains the size of the process.
Figure shows the hardware protection mechanism with base and limit registers.
Swapping
It is the process of temporarily removing inactive programs from the main memory of a computer
system.
A variant of this swapping policy is used for priority-based scheduling algorithms. If a higher-priority
process arrives and wants service, the memory manager can swap out the lower-priority process
and then load and execute the higher-priority process. When the higher-priority process finishes,
the lower-priority process can be swapped back in and continued. This variant of swapping is
called roll out, roll in. Context-switching time in a swapping system is fairly high.
Example - Let us assume that the user process is 10 MB in size and the backing store is a standard
hard disk with a transfer rate of 40 MB per second. Find the actual transfer time of the 10 MB
process to or from main memory.
Transfer time = 10 MB / (40 MB per second) = 0.25 second = 250 ms.
Memory allocation –
Fixed partitioning –
Implies the division of memory into a number of partitions whose sizes are fixed in the beginning,
prior to the execution of user programs, and remain fixed thereafter.
Disadvantages
No single program (or process) may exceed the size of the largest partition in a
given system.
It does not support a system having dynamic data structure such as stack, queue,
and heap.
It limits the degree of multi-programming which in turn may reduce the
effectiveness of short term scheduling.
Wastage of memory by programs that are smaller than their partitions. This wastage
is known as internal fragmentation.
Suppose a system supports a page size of P bytes; then a program of size M bytes
will have internal fragmentation = P - (M % P) bytes when M is not an exact
multiple of P (when it is, the fragmentation is 0).
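A small Python sketch of the internal-fragmentation computation; the extra `% P` handles the case where M is an exact multiple of P (no waste), and the example sizes are illustrative:

```python
def internal_fragmentation(M, P):
    # bytes wasted in the final page of an M-byte program with P-byte pages
    return (P - M % P) % P

frag = internal_fragmentation(M=2500, P=1024)   # 2500 % 1024 = 452, waste = 572
```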
Variable (dynamic) partitioning –
The size and the number of partitions are decided at run time by the O.S.
OS keeps track of status of the memory partition and this is done through a data structure
called partition description table (PDT).
Partition    Starting address    Size of      Partition
number       of partition        partition    status
1            0 K                 200 K        Allocated
2            200 K               200 K        Free
3            400 K               200 K        Allocated
4            600 K               300 K        Allocated
5            900 K               100 K        Allocated
6            1000 K              100 K        Free
The most Common strategies to allocate free partitions to the new processes are:
1. First Fit:
Allocate the first free partition, large enough to accommodate the process.
Executes faster.
2. Best Fit:
Allocate the smallest free partition that meets the requirement of the process.
Achieves higher utilization of memory by searching smallest free partition.
3. Worst Fit:
Allocate the largest available partition to the newly entered process in the system.
4. Next Fit:
Start from current location in the list.
5. Quick fit:
Keep separate lists for common sizes.
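First fit, best fit, and worst fit can be sketched against the free partitions of the PDT shown earlier (partitions 2 and 6); the 90 K request size is an illustrative assumption:

```python
def first_fit(free, size):
    # first free partition large enough for the request
    return next((n for n, s in free if s >= size), None)

def best_fit(free, size):
    # smallest free partition that still fits
    fits = [(s, n) for n, s in free if s >= size]
    return min(fits)[1] if fits else None

def worst_fit(free, size):
    # largest available partition
    fits = [(s, n) for n, s in free if s >= size]
    return max(fits)[1] if fits else None

free = [(2, 200), (6, 100)]        # (partition number, size in K) from the PDT
ff = first_fit(free, 90)           # partition 2: first large enough
bf = best_fit(free, 90)            # partition 6: smallest that fits
wf = worst_fit(free, 90)           # partition 2: largest available
```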
• Create partitions dynamically to meet the requirements of each requesting process.
• Neither the size nor the numbers of dynamically allocated partitions need to be
limited.
• Memory manager continues creating and allocating partitions to the requesting
processes until all physical memory is exhausted or maximum allowable degree of
multi-programming is reached.
• OS keeps track of which parts of memory are available and which are not.
Compaction
Compaction is a technique by which the resident program are relocated in such a way that
the small chunks of free memory are made contiguous to each other and clubbed together
into a single free partition that may be big enough to accommodate more programs.
Disadvantages
Compaction consumes CPU time, since programs must be copied within memory, and it requires that processes be dynamically relocatable (e.g., via a base register) so they can be moved at run time.
Memory Management
The simplest memory management scheme is to run just one program at a time, sharing the memory between that program and the operating system.
The first model (a) was used on mainframes and minicomputers.
The second model (b) is used on palmtop computers and embedded systems.
The third model (c) is used in personal computers (e.g., running MS - DOS), where the
portion of the system in the ROM is called BIOS (Basic Input Output System).
When a job arrives, it can be put into the input queue for the smallest partition large enough to
hold it. Since the partitions are fixed in this scheme, any space in a partition not used by a job is
lost.
Figure: Fixed memory partitions with separate input queues for each partition
Disadvantage
When it has decided to bring a K unit process into memory, the memory manager must search
the bitmap to find a run of K consecutive 0 bits in the map.
Another way of keeping track of memory is to maintain a linked list of allocated and free memory partitions.
Modeling Multi-programming
Suppose that a process spends a fraction p of its time waiting for I/O to complete. With n processes in memory at once, the probability that all n processes are waiting for I/O (in which case the CPU is idle) is p^n, so
CPU utilization = 1 - p^n
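The utilization model above is easy to check numerically. The sketch below assumes an 80% I/O-wait fraction purely as an example value.

```python
# CPU utilization with n processes in memory, each waiting for I/O
# a fraction p of the time: utilization = 1 - p**n
# (probability that at least one process is ready to run).
def cpu_utilization(p, n):
    return 1 - p ** n

# With an 80% I/O-wait fraction, adding processes raises utilization:
for n in (1, 2, 4, 8):
    print(n, round(cpu_utilization(0.8, n), 3))
```

This is why increasing the degree of multiprogramming improves utilization, with diminishing returns as n grows.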
Non-contiguous
To permit the sharing of data and code among processes, to resolve the problem of external
fragmentation of physical memory, to enhance degree of multiprogramming and to support
virtual memory concept, it was decided to have non-contiguous physical address space of a
process.
Paging –
LA : Logical address
PA : Physical address
PMT : Page map table / Page table
MMU : Memory Management Unit.
Page size and frame size are equal and typically range from 512 bytes to 4 KB, depending on the computer architecture.
(2) Translation Lookaside Buffer (TLB) / content-addressable memory / look-aside memory –
Problem – the MMU cannot afford a full page-table access in memory on every memory reference.
Solution – a special, small, fast lookup hardware cache: associative, high-speed memory.
If page number is found in the TLB then it is a TLB HIT otherwise TLB MISS occurs.
Hit ratio = the percentage of times that a particular page number is found in the TLB.
Let the effective memory access time teff in systems with run-time address translation equal the sum of the address translation time tTR and the subsequent access time needed to fetch the target from memory tM. Mathematically:
teff = tTR + tM
tTR = h tTLB + (1 - h)(tTLB + tM)
tTR = tTLB + (1 - h) tM        (h = TLB hit ratio)
teff = tTLB + (2 - h) tM
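The final formula above can be evaluated directly; the 10 ns / 100 ns timings below are example values, not figures from the text.

```python
# Effective memory access time with a TLB (formula derived above):
#   t_eff = t_TLB + (2 - h) * t_M
# TLB hit: one TLB lookup + one memory access; TLB miss: TLB lookup
# + page-table access in memory + memory access.
def effective_access_time(t_tlb, t_m, h):
    return t_tlb + (2 - h) * t_m

# e.g. 10 ns TLB lookup, 100 ns memory access, 50% hit ratio:
print(effective_access_time(10, 100, 0.5))  # 160.0 ns
```

At h = 1 every access costs tTLB + tM; at h = 0 every access pays the full two memory accesses.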
a. Hierarchical Paging –
If the page table is too large to store contiguously, this type of page table is used: the page table itself is paged.
On a 32-bit machine with a 1 KB page size, a logical address is divided into a 22-bit page number and a 10-bit page offset.
Because the page table is paged, the 22-bit page number is further divided into a 12-bit outer page number P1 and a 10-bit inner page number P2.
P1 is an index into the outer page table; P2 is the displacement within the page of the outer page table.
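The 12/10/10 split described above is just bit extraction; the sketch below assumes exactly that layout.

```python
# Splitting a 32-bit logical address for two-level paging
# (1 KB pages: 12-bit P1, 10-bit P2, 10-bit offset, as above).
def split_address(addr):
    offset = addr & 0x3FF          # low 10 bits: offset within the page
    p2 = (addr >> 10) & 0x3FF      # next 10 bits: inner page-table index
    p1 = addr >> 20                # top 12 bits: outer page-table index
    return p1, p2, offset

# Build an address with P1 = 4, P2 = 12, offset = 7 and split it back:
addr = (4 << 20) | (12 << 10) | 7
print(split_address(addr))   # (4, 12, 7)
```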
Address translation –
The virtual page number is compared with field a in the first element in the linked list. If
there is a match, the corresponding page frame is used to form the desired physical
address. If there is no match, subsequent entries in the linked list are searched for a
matching virtual page number.
Each inverted page-table entry is a pair <process-id, page-number> where the process-id
assumes the role of the address-space identifier.
Page
78
By Siddharth S. Shukla (BE, ME, PhD* )
i-GATE , B-713, Street 22, Smriti Nagar, Bhilai- 490020, Contact Mobile 98271-62352
No part of this booklet may be reproduced or utilized in any form without the written permission. All right are reserved
Operating System
Although this scheme decreases the amount of memory needed to store each page table, it increases the amount of time needed to search the table when a page reference occurs.
Pros -
Cons -
Pros -
Cons -
Overhead = (S × e) / P + P / 2
Where,
S → average process size in bytes
P → page size in bytes
e → size of a page table entry in bytes
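The overhead expression above (page-table space plus average internal fragmentation of half a page) can be minimized over P; a quick sketch, with a 1 MB process and 8-byte entries as example values:

```python
import math

# Page-table overhead per process (formula above):
#   overhead = S*e/P + P/2
# (space for page-table entries + average internal fragmentation).
def overhead(S, P, e):
    return S * e / P + P / 2

# Setting d(overhead)/dP = -S*e/P**2 + 1/2 = 0 gives P = sqrt(2*S*e).
def optimal_page_size(S, e):
    return math.sqrt(2 * S * e)

# e.g. 1 MB average process size, 8-byte page-table entries:
print(optimal_page_size(1 << 20, 8))   # 4096.0 -> a 4 KB page
print(overhead(1 << 20, 4096, 8))      # 4096.0 bytes at the optimum
```

At the optimal page size the two overhead terms are equal, which is a handy sanity check.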
Page number: This is the page number portion of the virtual address.
Process identifier: The process that owns this page. The combination of page number and process identifier identifies a page within the virtual address space of a particular process.
Control bits: This field includes flags, such as valid, referenced, and modified; and protection and
locking information.
Chain pointer: This field is null (perhaps indicated by a separate bit) if there are no chained entries for this entry. Otherwise, the field contains the index value (a number between 0 and 2^m - 1) of the next entry in the chain.
Virtual memory is a technique that allows the execution of processes that are not completely in
memory.
This technique frees programmers from the concerns of memory-storage limitations.
It allows programs to be altered and recompiled independently, without requiring the entire set of
programs to be re-linked and reloaded.
Supports multi-programming.
Breaking a program into small pieces that are loaded as needed is called overlaying; the pieces are called "overlays".
Pages are swapped in and out.
Demand paging.
Demand segmentation.
Demand Paging –
A demand-paging system is similar to a paging system with swapping where processes reside on
secondary memory (usually a disk).
When a process is to be swapped in, the pager guesses which pages will be used before the
process is swapped out again. Instead of swapping in a whole process, the pager brings only those
necessary pages into memory.
When this bit is set to "valid," the associated page is both legal and in memory.
If the bit is set to "invalid," the page either is not valid or is valid but is currently on the disk.
Page fault –
If a process tries to access a page that was not brought into memory, a page-fault trap occurs.
This trap is the result of the operating system's failure to bring the desired page into memory.
Check an internal table (usually kept with the process control block) for this process to
determine whether the reference was a valid or an invalid memory access.
If the reference was invalid, we terminate the process. If it was valid, but we have not yet
brought in that page, we now page it in.
Find a free frame.
Schedule a disk operation to read the desired page into the newly allocated frame.
When the disk read is complete, we modify the internal table kept with the process and the
page table to indicate that the page is now in memory.
Restart the instruction that was interrupted by the trap. The process can now access the
page as though it had always been in memory.
The hardware to support demand paging is the same as the hardware for paging and swapping:-
Page table: This table has the ability to mark an entry invalid through a valid-invalid bit or special
value of protection bits.
Secondary memory: This memory holds those pages that are not present in main memory. The
secondary memory is usually a high-speed disk. It is known as the swap device, and the section of
disk used for this purpose is known as swap space.
Let ma be the memory-access time, which ranges from 10 to 200 nanoseconds.
If no page faults, the effective access time is equal to the memory access time.
If, however, a page fault occurs, first read the relevant page from disk and then access the desired
word.
Let p be the probability of a page fault (0 ≤ p ≤ 1). We want p to be close to zero, that is, only a few page faults.
The effective access time is then:
effective access time = (1 - p) × ma + p × page-fault service time
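This expression makes the cost of page faults concrete; the 200 ns and 8 ms figures below are example values only.

```python
# Effective access time under demand paging:
#   EAT = (1 - p) * ma + p * page_fault_service_time
def demand_paging_eat(p, ma_ns, fault_ns):
    return (1 - p) * ma_ns + p * fault_ns

# e.g. 200 ns memory access, 8 ms (8_000_000 ns) fault service time:
print(demand_paging_eat(0.0, 200, 8_000_000))    # 200.0 (no faults)
print(demand_paging_eat(0.001, 200, 8_000_000))  # about 8200 ns: ~40x slowdown
```

Even a fault probability of 1 in 1000 slows memory access by roughly a factor of forty, which is why p must be kept extremely small.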
Page Replacement –
If no frame is free, we find one that is not currently being used and free it.
We can free a frame by writing its contents to swap space and changing the page table to indicate
that the page is no longer in memory.
Dirty bit –
We can reduce this overhead by using a modify bit (or dirty bit).
This is set by the hardware whenever any word or byte in the page is written into,
indicating that the page has been modified.
Page replacement is basic to demand paging.
It completes the separation between logical memory and physical memory.
We must develop -
A frame-allocation algorithm and
A page-replacement algorithm.
Segmentation –
Hardware –
Segmentation Example –
Segmentation architecture –
Prepaging –
Prepaging brings into memory all or some of the pages a process will need, before they are referenced.
If the prepaged pages are unused, I/O and memory are wasted.
Assume s pages are prepaged and a fraction α of them is actually used.
The question is whether the cost of the s × α saved page faults is greater or less than the cost of prepaging the s × (1 - α) unnecessary pages.
If α is near zero, prepaging loses.
TLB reach –
The TLB reach is the amount of memory accessible from the TLB: TLB reach = (number of TLB entries) × (page size). Ideally, the working set of a process fits within the TLB reach; otherwise, frequent TLB misses occur.
Page
87
By Siddharth S. Shukla (BE, ME, PhD* )
i-GATE , B-713, Street 22, Smriti Nagar, Bhilai- 490020, Contact Mobile 98271-62352
No part of this booklet may be reproduced or utilized in any form without the written permission. All right are reserved
Operating System
Page fault
Whenever a processor needs to execute a particular page and that page is not available in main memory, this situation is called a "page fault".
When a page fault occurs, page replacement is performed.
"Page replacement" means selecting a victim page in main memory and replacing it with the required page from the backing store (disk).
Number of frames = 3 (frame contents after each reference; * marks a page fault)
0* 0 0 3* 3 3 2* 2 2 1* 1 1 1
1* 1 1 0* 0 0 3* 3 3 2* 2 2
2* 2 2 1* 1 1 0* 0 0 3* 3
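FIFO traces like those above are easy to reproduce with a short simulation; this is a sketch, and the cyclic reference string below is an illustration.

```python
from collections import deque

# FIFO page replacement: evict the page that has been in memory longest.
def fifo_faults(reference_string, n_frames):
    frames = deque()           # oldest page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()       # evict the oldest page
            frames.append(page)
    return faults

# A cyclic string 0,1,2,3,0,1,2,3,... faults on every reference with 3 frames:
print(fifo_faults([0, 1, 2, 3] * 3, 3))   # 12
```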
Number of frames = 4
1* 1 1 1 1 1 5* 5 5 5 4* 4
2* 2 2 2 2 2 1* 1 1 1 5*
3* 3 3 3 3 3 2* 2 2 2
4* 4 4 4 4 4 3* 3 3
1* 1 1 4* 4 4 5* 5 5 5 5 5
2* 2 2 1* 1 1 1 1 3* 3 3
3* 3 3 2* 2 2 2 2 4* 4
1* 1 1 1 1 1 1 1 1 1 4* 4
2* 2 2 2 2 2 2 2 2 2 2
3* 3 3 3 3 3 3 3 3 3
4* 4 4 5* 5 5 5 5 5
Disadvantage (of the optimal algorithm) - It requires future knowledge of the reference string, so it is used mainly for comparison studies.
1* 1 1 1 1 1 1 1 1 1 1 5*
2* 2 2 2 2 2 2 2 2 2 2
3* 3 3 3 5* 5 5 5 4* 4
4* 4 4 4 4 4 3* 3 3
THRASHING –
Cause of thrashing –
Consider the following scenario:
The OS monitors CPU utilization. If the utilization is too low, we increase the degree of multi-programming by introducing a new process to the system. A global page replacement algorithm is used; it replaces pages with no regard to the process to which they belong. Suppose that a process enters a new phase in its execution and needs more frames. It starts faulting and taking frames away from other processes. These processes need those pages, however, and so they also fault, taking frames from other processes. These faulting processes must use the paging device to swap pages in and out. As they queue up for the paging device, the ready queue empties. As processes wait for the paging device, CPU utilization decreases. The CPU scheduler sees the decreasing CPU utilization, so it increases the degree of multi-programming. The new process tries to get started by taking frames from running processes, causing more page faults and a longer queue for the paging device. As a result, CPU utilization drops even further, and the CPU scheduler tries to increase the degree of multi-programming even more. Thrashing occurs, and system throughput plunges. The page fault rate (PFR) increases tremendously, effective memory access time increases, and no work gets done because the processes spend all their time paging.
Counter Implementation:
Every page-table entry has a counter; every time the page is referenced through this entry, the clock (time stamp) is copied into the counter.
When a page needs to be replaced, look at the counters to find the smallest time stamp; that page is the one to replace.
Stack Implementation:
Keep a stack of page numbers in a doubly linked list.
When a page is referenced
Move it to the top
Requires pointers to be changed.
No search for replacement.
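The stack idea above maps naturally onto an ordered dictionary: a referenced page moves to the "top" (most-recent end), and the bottom entry is the LRU victim. A sketch:

```python
from collections import OrderedDict

# LRU page replacement using the stack idea above.
def lru_faults(reference_string, n_frames):
    stack = OrderedDict()      # least recent first, most recent last
    faults = 0
    for page in reference_string:
        if page in stack:
            stack.move_to_end(page)            # referenced: move to the top
        else:
            faults += 1
            if len(stack) == n_frames:
                stack.popitem(last=False)      # evict the least recently used
            stack[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))  # 8
```

As the text notes, this costs pointer updates on every reference but requires no search at replacement time.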
Proportional allocation –
Allocate according to the size of the process.
Si = size of process Pi
S = Σ Si
m = total number of frames
ai = allocation of Pi = (Si / S) × m
Example: allocate m = 64 frames between two processes with S1 = 10 and S2 = 127 (so S = 137):
a1 = (10 / 137) × 64 ≈ 5
a2 = (127 / 137) × 64 ≈ 59
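The proportional-allocation arithmetic is a one-liner to check; the sketch below simply rounds each share to the nearest frame.

```python
# Proportional frame allocation: a_i = (S_i / S) * m.
def proportional_allocation(sizes, m):
    S = sum(sizes)                              # total size of all processes
    return [round(si * m / S) for si in sizes]  # each process's frame share

# Two processes of sizes 10 and 127 sharing 64 frames:
print(proportional_allocation([10, 127], 64))   # [5, 59]
```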
Priority allocation -
In both the equal and proportional algorithms:
The number of frames allocated also depends on the multiprogramming level, i.e., the more processes, the fewer frames each gets.
There is no differentiation based on the priority of processes.
We want to allocate more frames to high-priority processes to speed up their execution.
Proportional + priority allocation: use a proportional allocation scheme based on priorities rather than size.
Counting algorithms –
Keep a counter of the number of references that have been made to each page.
LFU (least frequently used) selects for replacement a page that has not been used often in the past: it replaces the page with the smallest count.
Allocation of frames –
How does OS allocate the fixed amount of free memory (frames) among the various processes?
Simple frame allocation algorithm: In a single user system, OS takes some frames, the rest
of frames are assigned to user process.
Allocate at-least a minimum number of frames for each process.
Frame allocation is closely related to page replacement: which page to replace on a page fault.
Global replacement
Process selects a replacement frame from the set of all frames; one process can take a
frame from another. (The number of frames allocated to a process may change).
Local Replacement
Each process selects from only its own set of allocated frames. (The number of frames allocated to a process does not change.)
Cache Memory –
Definition -
A cache memory is a small, fast memory that retains copies of recently used information
from main memory.
It operates transparently to the programmer, automatically deciding which values to keep and which to overwrite.
CPU requests contents of memory location. The Cache is checked for the data. If present,
get from cache, otherwise read required block from main memory, then deliver from cache
to CPU.
The performance of cache memory is measured in terms of hit ratio.
When the CPU refers to memory and finds the word in the cache, it is called a cache hit; if the word is not found in the cache but is in main memory, it is called a cache miss.
Hit ratio = Hits / (Hits + Misses)
Elements of cache design include:
Size
Mapping function
Replacement algorithm
Write policy and block size
Tag fields -
Cache lines -
The cache memory is divided into blocks or lines. Data is copied to and from the cache one
line at a time.
Principle of locality -
The locality model states that a process migrates from one locality to another (with respect to the addresses it generates) while it executes, and localities may overlap.
Program and data references within a process tend to cluster.
Only few pages/piece of a process will be needed over short period of time.
Possible to make guesses about which pages will be needed in the future.
Virtual memory may work efficiently.
Locality of reference of a process refers to its most recent/active pages.
Mapping -
The transformation of data from main memory to cache memory is referred to as mapping
process.
There are three popular methods of mapping addresses to cache locations.
Direct: each address has one specific place in the cache.
Set associative: each address can be in any of a small set of cache locations.
Fully associative: search the entire cache for an address.
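Direct mapping can be illustrated with plain arithmetic on the address. The geometry below (64 lines of 16 bytes) is an assumed example, not from the text.

```python
# Direct-mapped cache lookup sketch: every address maps to exactly one line.
# Assumed geometry: 64 lines of 16 bytes (4-bit offset, 6-bit index).
LINE_SIZE = 16
N_LINES = 64

def split(addr):
    offset = addr % LINE_SIZE                # byte within the line
    index = (addr // LINE_SIZE) % N_LINES    # which cache line
    tag = addr // (LINE_SIZE * N_LINES)      # identifies the memory block
    return tag, index, offset

print(split(0x1234))   # (4, 35, 4)
```

On a lookup, the index selects one line and the stored tag is compared against the address tag; a mismatch is a cache miss.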
--------------------◄►--------------------
Operating System
The disk controller processes the I/O requests in the order in which they arrive, thereby moving backwards and forwards across the surface of the disk to reach the next requested location each time.
87 , 170 , 40 , 150 , 36 , 72 , 66 , 15
FCFS (head initially at cylinder 60) -
Total head movement = (87 - 60) + (170 - 87) + (170 - 40) + (150 - 40) + (150 - 36) + (72 - 36) + (72 - 66) + (66 - 15)
= 27 + 83 + 130 + 110 + 114 + 36 + 6 + 51
= 557 cylinders
Advantages –
Disadvantages –
Used in small systems only, where I/O efficiency is not very important.
Acceptable only when the load on the disk is low; as the load grows, FCFS tends to saturate the device and response times grow.
SSTF (head initially at cylinder 60; service order 66, 72, 87, 40, 36, 15, 150, 170) -
= 6 + 6 + 15 + 47 + 4 + 21 + 135 + 20
= 254 cylinders
Advantages –
Disadvantages –
Starvation occurs if some processes have to wait a long time until their requests are satisfied.
SSTF services requests for those tracks which are highly localized.
Operating System
SCAN scheduling –
Sometimes called the elevator algorithm, because it services all the requests while going up and then, after reaching the top, goes downward.
The disk arm starts at one end of the disk, and moves toward the other end, servicing requests as
it reaches each cylinder, until it gets to the other end of the disk. At the other end, the direction of
head movement is reversed, and servicing continues.
It needs two pieces of information:
1. Direction of head movement.
2. Last position of the disk head.
SCAN -
= 285 cylinders
Advantages –
Disadvantages –
Because of the continuous scanning of the disk from end to end, the outer tracks are visited less often than the mid-range tracks.
Disk arm keeps scanning between 2 extremes; this may result in wear and tear of the disk
assembly.
Certain requests arriving ahead of the arm position get immediate service, but other requests that arrive behind the arm position have to wait for the arm to return.
Advantages –
Disadvantages –
Operating System
Look scheduling –
In practice, SCAN and C-SCAN are usually implemented so that the arm goes only as far as the final request in each direction; then it reverses direction immediately, without going all the way to the end of the disk.
These versions of SCAN and C-SCAN are called LOOK and C-LOOK scheduling, because they look for a request before continuing to move in a given direction.
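The disk-scheduling totals in this section can be reproduced with a short simulation. A head start of cylinder 60 is assumed here; it is consistent with the 557-cylinder FCFS total computed earlier for this queue.

```python
# Total head movement under FCFS and SSTF for a request queue.
def fcfs(start, queue):
    total, pos = 0, start
    for cyl in queue:
        total += abs(cyl - pos)   # seek to the next arrival in order
        pos = cyl
    return total

def sstf(start, queue):
    pending, total, pos = list(queue), 0, start
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))  # closest request
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

queue = [87, 170, 40, 150, 36, 72, 66, 15]
print(fcfs(60, queue))   # 557
print(sstf(60, queue))   # 254
```

SCAN and LOOK can be added the same way by sorting the pending requests on either side of the head.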
a) Contiguous Allocation
b) Linked Allocation
each data block contains the block address of the next block in the file
each directory entry contains:
o file name
o block address: pointer to the first block
o sometimes, also have a pointer to the last block (adding to the end of the file is much
faster using this pointer)
c) Indexed Allocation
i)
better than linked allocation if we want to seek a particular offset of a file because many links
are stored together instead of each one in a separate block
SGG call this organization a ``linked'' scheme, but I call it an ``indexed'' scheme because an
index is kept in main memory.
problem: index is too large to fit in main memory for large disks
o the FAT may get really large, and we may need to store the FAT on disk, which will increase access time
o e.g., a 500 MB disk with 1 KB blocks has 500 K blocks, so the FAT needs 500 K entries × 4 bytes = 2 MB
ii)
separate index for each file
iii) Multilevel index
Question:
Consider a file currently consisting of 150 blocks. Assume that the file control block (and the index block, in the case of indexed allocation) is already in memory. Calculate how many disk I/O operations are required for contiguous, linked, and indexed (single-level) allocation strategies if, for one block, the following conditions hold. In the contiguous-allocation case, assume that there is no room to grow at the beginning, but there is room to grow at the end. Assume that the block information to be added is stored in memory.
Assumptions:
a)
The block is added in the middle:
Contiguous: Assume that "in the middle" means after block 75 and before block 76. We move the last 75 blocks down one position and then write in the new block.
75 reads + 76 writes = 151 I/O operations
Linked: We cannot find block 75 without traversing the linked list stored in the first 74 data blocks. So we first read through these 74 blocks. Then we read block 75, copy its link into the new block (in main memory), update block 75's link to point to the new block, write out block 75, and write the new block.
75 reads + 2 writes = 77 I/O operations
Indexed: Update the index in main memory. Write the new block.
1 write = 1 I/O operation
b)
The block is removed from the beginning:
Contiguous: Simply update the starting address in the file control block.
0 I/O operations
Linked: Read in block 1 and change the starting address to the link stored in this block.
1r = 1 I/O operation
Indexed: Simply remove the block's address from the linked list in the index block.
0 I/O operations
Question:
Consider a file system on a disk that has both logical and physical block sizes of 512 bytes. Assume that the
information about each file is already in memory. For the contiguous strategy, answer these questions:
a)
How is the logical-to-physical address mapping accomplished in this system? (For the indexed
allocation, assume that a file is always less than 512 blocks long.)
b)
If we are currently at logical block 10 (the last block accessed was block 10) and want to access logical
block 4, how many physical blocks must be read from the disk?
Answer:
Assumptions: 1. Let L be the logical address and let P be the physical address. 2. The assumption in part (a) is
poorly given. It's more reasonable to simply assume that the index is small enough to fit into a single block. In
fact, a 512 block file will probably require more than a single 512 byte block because block addresses typically
require 3-4 bytes each.
(a) Overview The CPU generates a logical address L (a relative offset in a file) and the file system has to
convert it to a physical address P (a disk address represented by a block number PB and an offset in this
block). For convenience of calculation, we assume that blocks are numbered from 0. In any approach, we can
determine the logical block number LB by dividing the logical address L by the logical block size (here 512).
Similarly, the offset, which will be the same for logical and physical addresses since the block sizes are
identical, is determined by applying modulus. The offset is the same in all approaches.
LB := L div 512
offset := L mod 512
Contiguous: Assume S is the starting address of the contiguous segment (and SB its starting block number). Then a simple approach to the mapping is:
P = S + L
PB = SB + LB
(b) If we are currently at logical block 10 and we want to access logical block 4 ...
Contiguous: We simply move the disk head back by 6 blocks (from physical block 10 to physical block 4)
because the space allocated to the file is contiguous. Then we read block 4, for a total of one read.
Bit Vector:
o Each block is represented by 1 bit.
o If a block is free, then the bit is 1, but if the block is in use, then the bit is 0.
o For example, if the disk had 10 blocks, and blocks 2, 4, 5, and 8 were free, while blocks 0, 1, 3, 6,
7, and 9 were in use, the bit vector would be represented as: 0010110010
Reasoning (space needed by the bit vector for a 1.3 GB disk with 512-byte blocks):
Number of blocks = disk size / block size = 1.3 × 2^30 bytes / 2^9 bytes = 1.3 × 2^21 blocks
Bit vector size = 1 bit per block = 1.3 × 2^21 bits = 1.3 × 2^18 bytes ≈ 332.8 KB
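The bit-vector sizing above can be checked directly (one bit per block, eight bits per byte):

```python
# Size of the free-space bit vector: one bit per disk block.
def bitvector_bytes(disk_bytes, block_bytes):
    n_blocks = disk_bytes // block_bytes   # number of blocks on the disk
    return n_blocks // 8                   # 8 bits per byte

# 1.3 GB disk, 512-byte blocks (the example above):
size = bitvector_bytes(int(1.3 * 2**30), 512)
print(size / 1024)   # roughly 332.8 KB
```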
o number of i-nodes
o number of data blocks
o start of the list of free blocks: the first few hundred entries of the free list are kept here; the rest of the free list is stored in blocks that are otherwise free
Example on UNIX:
df = disk free
df -i /u
Operating System
4864093 Kb = free
Each i-node:
Assume that there are 10 direct pointers to data blocks, 1 indirect pointer, 1 double indirect pointer,
and 1 triple indirect pointer
Assume that the size of the data blocks is 1024 bytes = 1Kb, i.e., BlockSize = 1Kb
Assume that block numbers are represented as 4-byte unsigned integers, i.e., BlockNumberSize = 4 bytes
Some data blocks are used as index blocks. They store 1024 bytes / 4 bytes/entry = 256 entries
Maximum number of bytes addressed by the 10 direct pointers:
= 10 × BlockSize = 10 × 1 KB = 10 KB
Maximum number of bytes addressed by the single indirect pointer:
= NumberOfEntries × BlockSize
= (BlockSize / BlockNumberSize) × BlockSize
= (1 KB / 4 B) × 1 KB
= 256 × 1 KB
= 256 KB
Maximum number of bytes addressed by the double indirect pointer:
= NumberOfEntries^2 * BlockSize
= (Blocksize / BlockNumberSize)^2 * BlockSize
= (1Kb / 4b)^2 * 1Kb
= (2^10 / 2^2)^2 * (2^10b)
= (2^8)^2 * (2^10)b
= (2^16) * (2^10)b
= 2^6 * 2^20 b
= 64 Mb
Maximum number of bytes addressed by the triple indirect pointer:
= NumberOfEntries^3 * BlockSize
= (Blocksize / BlockNumberSize)^3 * BlockSize
= (1Kb / 4b)^3 * 1Kb
= (2^10 / 2^2)^3 * (2^10b)
= (2^8)^3 * (2^10)b
= (2^24) * (2^10)b
= 2^4 * 2^30 b
= 16 Gb
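Under the i-node assumptions above (1 KB blocks, 4-byte block numbers, 10 direct pointers, one single/double/triple indirect pointer each), the maximum file size can be verified in a few lines:

```python
# Maximum file size for the i-node layout above.
BLOCK = 1024                 # 1 KB data blocks
ENTRIES = BLOCK // 4         # 256 block numbers per index block

max_bytes = (10 * BLOCK                 # direct pointers: 10 KB
             + ENTRIES * BLOCK          # single indirect: 256 KB
             + ENTRIES ** 2 * BLOCK     # double indirect: 64 MB
             + ENTRIES ** 3 * BLOCK)    # triple indirect: 16 GB

print(max_bytes / 2**30)   # just over 16 GB
```

The triple-indirect term dominates; the direct and single-indirect contributions are negligible at this scale.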
--------------------◄►--------------------
GATE 2012
A process executes the code
fork();
fork();
fork();
The total number of child processes created is
(A) 3 (B) 4 (C) 7 (D) 8
Answer (C)
Let us put labels on the three fork() lines.
We can also use direct formula to get the number of child processes. With n fork statements, there are
always 2^n – 1 child processes.
Consider the 3 processes, P1, P2 and P3, shown in the table.
Process Arrival time Time units required
P1 0 5
P2 1 7
P3 3 4
The completion order of the 3 processes under the policies FCFS and RRS (round robin scheduling with
CPU quantum of 2 time units) are
(A) FCFS: P1, P2, P3 RR2: P1, P2, P3
(B) FCFS: P1, P3, P2 RR2: P1, P3, P2
(C) FCFS: P1, P2, P3 RR2: P1, P3, P2
(D) FCFS: P1, P3, P2 RR2: P1, P2, P3
Answer (C)
4. A file system with 300 GByte uses a file descriptor with 8 direct block address. 1 indirect block address
and 1 doubly indirect block address. The size of each disk block is 128 Bytes and the size of each disk block
address is 8 Bytes. The maximum possible file size in this file system is
(A) 3 Kbytes
(B) 35 Kbytes
(C) 280 Bytes
(D) Dependent on the size of the disk
Answer (B)
Total number of possible addresses stored in a disk block = 128/8 = 16
Maximum number of addressable bytes due to direct address block = 8*128
Maximum number of addressable bytes due to 1 single indirect address block = 16*128
Maximum number of addressable bytes due to 1 double indirect address block = 16*16*128
The maximum possible file size = 8*128 + 16*128 + 16*16*128 = 35KB
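The same sum can be checked quickly in Python (names are illustrative; only the arithmetic matters):

```python
block, addr = 128, 8                 # bytes per disk block / per block address
ptrs = block // addr                 # 16 addresses fit in one block

direct = 8 * block                   # 8 direct block addresses
single = ptrs * block                # 1 indirect block
double = ptrs ** 2 * block           # 1 doubly indirect block
max_file = direct + single + double
print(max_file, max_file // 1024)    # 35840 bytes = 35 KB -> answer (B)
```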
GATE 2011
1) A thread is usually defined as a ‘light weight process’ because an operating system (OS) maintains
smaller data structures for a thread than for a process. In relation to this, which of the followings is TRUE?
(A) On per-thread basis, the OS maintains only CPU register state
(B) The OS does not maintain a separate stack for each thread
(C) On per-thread basis, the OS does not maintain virtual memory state
(D) On per thread basis, the OS maintains only scheduling and accounting information.
Answer (C)
Threads share the address space of their process. Virtual memory is concerned with processes, not with threads.
2) Let the page fault service time be 10ms in a computer with average memory access time being 20ns. If
one page fault is generated for every 10^6 memory accesses, what is the effective access time for the
memory?
(A) 21ns (B) 30ns (C) 23ns (D) 35ns
Answer (B)
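The effective access time works out as follows (a small Python sketch; names are illustrative):

```python
mem_access_ns = 20
fault_service_ns = 10 * 10 ** 6          # 10 ms expressed in ns
accesses_per_fault = 10 ** 6             # one fault per 10^6 accesses

eat_ns = mem_access_ns + fault_service_ns / accesses_per_fault
print(eat_ns)                            # 30.0 -> answer (B)
```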
3) An application loads 100 libraries at startup. Loading each library requires exactly one disk access. The
seek time of the disk to a random location is given as 10ms. Rotational speed of disk is 6000rpm. If all 100
libraries are loaded from random locations on the disk, how long does it take to load all libraries? (The time
to transfer data from the disk block once the head has been positioned at the start of the block may be
neglected)
(A) 0.50s (B) 1.50s (C) 1.25s (D) 1.00s
Answer (B)
Since transfer time can be neglected, the average access time is sum of average seek time and average
rotational latency. Average seek time for a random location time is given as 10 ms. The average rotational
latency is half of the time needed for complete rotation. It is given that 6000 rotations need 1 minute. So
one rotation will take 60/6000 seconds which is 10 ms. Therefore average rotational latency is half of 10
ms, which is 5ms.
Average disk access time = average seek time + average rotational latency = 10 ms + 5 ms = 15 ms
For 100 libraries, the total disk access time will be 15*100 ms = 1.5 s
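The same calculation as a short sketch (names are illustrative):

```python
seek_ms = 10
rotation_ms = 60_000 / 6000          # 6000 rpm -> 10 ms per rotation
latency_ms = rotation_ms / 2         # average rotational latency = 5 ms
per_access_ms = seek_ms + latency_ms # 15 ms, transfer time neglected
total_s = 100 * per_access_ms / 1000
print(total_s)                       # 1.5 -> answer (B)
```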
4. Consider the following table of arrival time and burst time for three processes P0, P1 and P2.
Process Arrival time Burst Time
P0 0 ms 9 ms
P1 1 ms 4 ms
P2 2 ms 9 ms
The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at arrival or
completion of processes. What is the average waiting time for the three processes?
(A) 5.0 ms (B) 4.33 ms (C) 6.33 ms (D) 7.33 ms
Answer: (A)
Process P0 is allocated processor at 0 ms as there is no other process in ready queue. P0 is preempted
after 1 ms as P1 arrives at 1 ms and burst time for P1 is less than remaining time of P0. P1 runs for 4ms.
P2 arrived at 2 ms but P1 continued as burst time of P2 is longer than P1. After P1 completes, P0 is
scheduled again as the remaining time for P0 is less than the burst time of P2.
P0 waits for 4 ms, P1 waits for 0 ms and P2 waits for 11 ms. So average waiting time is (0+4+11)/3 = 5.
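The trace above can be reproduced with a tiny preemptive SJF (SRTF) simulator, a sketch with illustrative names; ties go to the lowest-numbered process, which matches this example:

```python
def srtf_avg_wait(procs):
    """procs: list of (arrival, burst). Returns average waiting time."""
    n = len(procs)
    remaining = [b for _, b in procs]
    finish = [0] * n
    t = 0
    while any(r > 0 for r in remaining):
        ready = [i for i in range(n) if procs[i][0] <= t and remaining[i] > 0]
        if not ready:                                # CPU idle until an arrival
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])   # shortest remaining time
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    waits = [finish[i] - procs[i][0] - procs[i][1] for i in range(n)]
    return sum(waits) / n

print(srtf_avg_wait([(0, 9), (1, 4), (2, 9)]))   # 5.0 -> answer (A)
```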
GATE 2010
1) Let the time taken to switch between user and kernel modes of execution be t1 while the time taken to
switch between two processes be t2. Which of the following is TRUE? (GATE CS 2010)
(A) t1 > t2
(B) t1 = t2
(C) t1 < t2
(D) Nothing can be said about the relation between t1 and t2
Answer: (C)
Process switching involves a mode switch. Context switching can occur only in kernel mode.
2) A system uses FIFO policy for page replacement. It has 4 page frames with no pages loaded to begin with.
The system first accesses 100 distinct pages in some order and then accesses the same 100 pages but now
in the reverse order. How many page faults will occur? (GATE CS 2010)
Answer (A)
Access to 100 pages will cause 100 page faults. When these pages are accessed in reverse order, the first
four accesses will not cause page faults. All other accesses will cause page faults. So the total number of page faults is 100 + 96 = 196.
Answer (D)
I) Shortest remaining time first scheduling is a preemptive version of shortest job scheduling. It may
cause starvation as shorter processes may keep coming and a long CPU burst process never gets CPU.
II) Preemption may cause starvation. If priority based scheduling with preemption is used, then a low
priority process may never get CPU.
III) Round Robin Scheduling improves response time as all processes get CPU after a specified time.
4) Consider the methods used by processes P1 and P2 for accessing their critical sections whenever
needed, as given below. The initial values of shared boolean variables S1 and S2 are randomly assigned.
Method Used by P1
while (S1 == S2) ;
Critical Section
S1 = S2;
Method Used by P2
while (S1 != S2) ;
Critical Section
S2 = not (S1);
Which one of the following statements describes the properties achieved? (GATE CS 2010)
(A) Mutual exclusion but not progress
(B) Progress but not mutual exclusion
(C) Neither mutual exclusion nor progress
(D) Both mutual exclusion and progress
Answer (A)
It can be easily observed that the Mutual Exclusion requirement is satisfied by the above solution, P1 can
enter critical section only if S1 is not equal to S2, and P2 can enter critical section only if S1 is equal to S2.
Progress Requirement is not satisfied. Let us first see definition of Progress Requirement.
Progress Requirement: If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely.
Here, if P1 or P2 wants to re-enter the critical section, it cannot do so even if no other process is in the critical section.
GATE 2009
1) In which one of the following page replacement policies, Belady’s anomaly may occur?
(A) FIFO (B) Optimal (C) LRU (D) MRU
Answer (A)
Belady’s anomaly proves that it is possible to have more page faults when increasing the number of page
frames while using the First in First Out (FIFO) page replacement algorithm.
Answer (B)
A page table entry must contain Page frame number. Virtual page number is typically used as index in
page table to get the corresponding page frame number.
3) Consider a system with 4 types of resources R1 (3 units), R2 (2 units), R3 (3 units), R4 (2 units). A non-
preemptive resource allocation policy is used. At any given instance, a request is not entertained if it cannot
be completely satisfied. Three processes P1, P2, P3 request the sources as follows if executed
independently.
Process P1:
t=0: requests 2 units of R2
t=1: requests 1 unit of R3
t=3: requests 2 units of R1
t=5: releases 1 unit of R2
and 1 unit of R1.
t=7: releases 1 unit of R3
t=8: requests 2 units of R4
t=10: Finishes
Process P2:
t=0: requests 2 units of R3
t=2: requests 1 unit of R4
t=8: Finishes
Process P3:
t=0: requests 1 unit of R4
t=2: requests 2 units of R1
t=5: releases 2 units of R1
t=7: requests 1 unit of R2
t=8: requests 1 unit of R3
t=9: Finishes
Which one of the following statements is TRUE if all three processes run concurrently starting at time t=0?
(A) All processes will finish without any deadlock
(B) Only P1 and P2 will be in deadlock.
(C) Only P1 and P3 will be in a deadlock.
(D) All three processes will be in deadlock
Answer (A)
We can apply the following Deadlock Detection algorithm and see that there is no process waiting
indefinitely for a resource.
4) Consider a disk system with 100 cylinders. The requests to access the cylinders occur in following
sequence:
4, 34, 10, 7, 19, 73, 2, 15, 6, 20
Assuming that the head is currently at cylinder 50, what is the time taken to satisfy all requests if it takes
1ms to move from one cylinder to adjacent one and shortest seek time first policy is used?
(A) 95ms (B) 119ms (C) 233ms (D) 276ms
Answer (B)
4, 34, 10, 7, 19, 73, 2, 15, 6, 20
Since shortest seek time first policy is used, head will first move to 34. This move will cause 16*1 ms.
After 34, head will move to 20 which will cause 14*1 ms. And so on. So cylinders are accessed in following
order 34, 20, 19, 15, 10, 7, 6, 4, 2, 73 and total time will be (16 + 14 + 1 + 4 + 5 + 3 + 1 + 2 + 2 + 71)*1
= 119 ms.
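The SSTF order and total seek time can be reproduced with a short simulator (a sketch; names are illustrative):

```python
# SSTF disk scheduling: repeatedly service the pending request closest
# to the current head position.
def sstf_total_seek(head, requests, ms_per_cylinder=1):
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        total += abs(nearest - head) * ms_per_cylinder
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_total_seek(50, [4, 34, 10, 7, 19, 73, 2, 15, 6, 20]))  # 119 -> (B)
```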
GATE 2009
1) In the following process state transition diagram for a uniprocessor system, assume that there are
always some processes in the ready state: Now consider the following statements:
I. If a process makes a transition D, it would result in another process making transition A immediately.
II. A process P2 in blocked state can make transition E while another process P1 is in running state.
III. The OS uses preemptive scheduling.
IV. The OS uses non-preemptive scheduling.
Which of the above statements are TRUE?
(A) I and II (B) I and III (C) II and III (D) II and IV
Answer (C)
I is false. If a process makes a transition D, it would result in another process making transition B, not A.
II is true. A process can move to ready state when I/O completes irrespective of other process being in
running state or not.
III is true because there is a transition from running to ready state.
IV is false as the OS uses preemptive scheduling.
2) The enter_CS() and leave_CS() functions to implement critical section of a process are realized using
test-and-set instruction as follows:
void enter_CS(X)
{
while test-and-set(X) ;
}
void leave_CS(X)
{
X = 0;
}
In the above solution, X is a memory location associated with the CS and is initialized to 0. Now consider the
following statements:
I. The above solution to CS problem is deadlock-free
II. The solution is starvation free.
III. The processes enter CS in FIFO order.
IV More than one process can enter CS at the same time.
Which of the above statements is TRUE?
Answer (A)
The above solution is a simple test-and-set solution that makes sure that deadlock doesn’t occur, but it
doesn’t use any queue to avoid starvation or to have FIFO order.
3) A multilevel page table is preferred in comparison to a single level page table for translating virtual
address to physical address because
(A) It reduces the memory access time to read or write a memory location.
(B) It helps to reduce the size of page table needed to implement the virtual address space of a process.
(C) It is required by the translation lookaside buffer.
(D) It helps to reduce the number of page faults in page replacement algorithms.
Answer (B)
The size of page table may become too big to fit in contiguous space. That is why page tables
are typically divided in levels.
GATE 2008
1) The data blocks of a very large file in the Unix file system are allocated using
(A) contiguous allocation (B) linked allocation
(C) indexed allocation (D) an extension of indexed allocation
Answer (D)
The Unix file system uses an extension of indexed allocation. It uses direct blocks, single indirect blocks,
double indirect blocks and triple indirect blocks.
2) The P and V operations on counting semaphores, where s is a counting semaphore, are defined as
follows:
P(s) : s = s - 1;
if (s < 0) then wait;
V(s) : s = s + 1;
if (s <= 0) then wakeup a process waiting on s;
Assume that Pb and Vb the wait and signal operations on binary semaphores are provided. Two binary
semaphores Xb and Yb are used to implement the semaphore operations P(s) and V(s) as follows:
P(s) : Pb(Xb);
s = s - 1;
if (s < 0) {
Vb(Xb) ;
Pb(Yb) ;
}
else Vb(Xb);
V(s) : Pb(Xb) ;
s = s + 1;
if (s <= 0) Vb(Yb) ;
Vb(Xb) ;
Answer (C)
Both P(s) and V(s) operations perform Pb(Xb) as the first step. If Xb is 0, then all processes executing these operations will be blocked. Therefore, Xb must be 1.
If Yb is 1, it may become possible that two processes execute P(s) one after the other (implying 2 processes in the critical section). Consider the case when s = 1 and Yb = 1. So Yb must be 0.
3) Which of the following statements about synchronous and asynchronous I/O is NOT true?
(A) An ISR is invoked on completion of I/O in synchronous I/O but not in asynchronous I/O
(B) In both synchronous and asynchronous I/O, an ISR (Interrupt Service Routine) is invoked after
completion of the I/O
(C) A process making a synchronous I/O call waits until I/O is complete, but a process making an
asynchronous I/O call does not wait for completion of the I/O
(D) In the case of synchronous I/O, the process waiting for the completion of I/O is woken up by the ISR
that is invoked after the completion of I/O
Answer (A)
In both Synchronous and Asynchronous, an interrupt is generated on completion of I/O. In Synchronous,
interrupt is generated to wake up the process waiting for I/O. In Asynchronous, interrupt is generated to
inform the process that the I/O is complete and it can process the data from the I/O operation.
GATE 2008
1) A process executes the following code
for (i = 0; i < n; i++) fork();
If we sum all levels of above tree for i = 0 to n-1, we get 2^n - 1. So there will be 2^n – 1 child processes.
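The doubling argument can be sketched directly (illustrative names; each fork statement is executed by every process alive at that point):

```python
def children_after_forks(n):
    procs = 1
    for _ in range(n):
        procs *= 2          # every live process duplicates itself
    return procs - 1        # all processes except the original are children

print([children_after_forks(n) for n in range(5)])   # [0, 1, 3, 7, 15]
```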
2) Which of the following is NOT true of deadlock prevention and deadlock avoidance schemes?
(A) In deadlock prevention, the request for resources is always granted if the resulting state is safe
(B) In deadlock avoidance, the request for resources is always granted if the result state is safe
Answer (A)
Deadlock prevention schemes handle deadlock by making sure that at least one of the four necessary conditions cannot hold. In deadlock prevention, a request for a resource may not be granted even if the resulting state is safe.
3) A processor uses 36 bit physical addresses and 32 bit virtual addresses, with a page frame size of 4
Kbytes. Each page table entry is of size 4 bytes. A three level page table is used for virtual to physical
address translation, where the virtual address is used as follows
• Bits 30-31 are used to index into the first level page table
• Bits 21-29 are used to index into the second level page table
• Bits 12-20 are used to index into the third level page table, and
• Bits 0-11 are used as offset within the page
The number of bits required for addressing the next level page table (or page frame) in the page table entry
of the first, second and third level page tables are respectively
(A) 20, 20 and 20 (B) 24, 24 and 24 (C) 24, 24 and 20 (D) 25, 25 and 24
Answer (D)
Virtual address size = 32 bits
Physical address size = 36 bits
Physical memory size = 2^36 bytes
Page frame size = 4K bytes = 2^12 bytes
No. of bits required to access physical memory frame = 36 - 12 = 24
So in third level of page table, 24 bits are required to access an entry.
9 bits of the virtual address are used to index a second level page table, and each entry is 4 bytes. So the size of a second (or third) level page table is (2^9)*4 = 2^11 bytes. There are (2^36)/(2^11) = 2^25 possible locations where such a table can reside. Therefore a first level entry needs 25 bits to address a second level table, and similarly a second level entry needs 25 bits to address a third level table.
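The bit counts can be verified with a small sketch (names are illustrative):

```python
virt_bits, phys_bits, offset_bits = 32, 36, 12   # 4 KB pages
frame_bits = phys_bits - offset_bits             # 24 bits in a third level entry

# A second or third level table holds 2^9 entries of 4 bytes = 2^11 bytes,
# so an entry that points at such a table needs 36 - 11 = 25 bits.
table_bytes = (2 ** 9) * 4
table_addr_bits = phys_bits - (table_bytes.bit_length() - 1)

print(table_addr_bits, table_addr_bits, frame_bits)   # 25 25 24 -> answer (D)
```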
GATE 2007
1) Consider a disk pack with 16 surfaces, 128 tracks per surface and 256 sectors per track. 512 bytes of
data are stored in a bit serial manner in a sector. The capacity of the disk pack and the number of bits
required to specify a particular sector in the disk are respectively:
(A) 256 Mbyte, 19 bits (B) 256 Mbyte, 28 bits
Answer (A)
Capacity of the disk = 16 surfaces X 128 tracks X 256 sectors X 512 bytes = 256 Mbytes.
To calculate number of bits required to access a sector, we need to know total number of sectors. Total
number of sectors = 16 surfaces X 128 tracks X 256 sectors = 2^19
So the number of bits required to access a sector is 19.
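Both numbers can be checked in a few lines (a sketch; names are illustrative):

```python
surfaces, tracks, sectors, sector_bytes = 16, 128, 256, 512
capacity = surfaces * tracks * sectors * sector_bytes    # 2^28 bytes
total_sectors = surfaces * tracks * sectors              # 2^19 sectors
sector_bits = total_sectors.bit_length() - 1             # exact log2 (power of two)
print(capacity // 2 ** 20, sector_bits)                  # 256 19 -> answer (A)
```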
2) Group 1 contains some CPU scheduling algorithms and Group 2 contains some applications. Match
entries in Group 1 to entries in Group 2.
Group I Group II
(P) Gang Scheduling (1) Guaranteed Scheduling
(Q) Rate Monotonic Scheduling (2) Real-time Scheduling
(R) Fair Share Scheduling (3) Thread Scheduling
(A) P – 3 Q – 2 R – 1 (B) P – 1 Q – 2 R – 3
(C) P – 2 Q – 3 R – 1 (D) P – 1 Q – 3 R – 2
Answer (A)
Gang scheduling for parallel systems that schedules related threads or processes to run simultaneously
on different processors.
Rate monotonic scheduling is used in real-time operating systems with a static-priority scheduling class.
The static priorities are assigned on the basis of the cycle duration of the job: the shorter the cycle
duration is, the higher is the job’s priority.
Fair Share Scheduling is a scheduling strategy in which the CPU usage is equally distributed among
system users or groups, as opposed to equal distribution among processes. It is also known as
Guaranteed scheduling.
3) An operating system uses Shortest Remaining Time first (SRT) process scheduling algorithm. Consider
the arrival times and execution times for the following processes:
Process Execution time Arrival time
P1 20 0
P2 25 15
P3 10 30
P4 15 45
Answer (B)
At time 15, P2 arrives, but P1 has the shortest remaining time. So P1 continues for 5 more time units.
At time 20, P2 is the only process. So it runs for 10 time units
At time 30, P3 is the shortest remaining time process. So it runs for 10 time units
At time 40, P2 runs as it is the only process. P2 runs for 5 time units.
At time 45, P4 arrives, but P2 has the shortest remaining time. So P2 continues for 10 more time units.
P2 completes its execution at time 55
Total waiting time for P2 = Completion time - (Arrival time + Execution time)
= 55 - (15 + 25)
= 15
GATE 2007
1) A virtual memory system uses First In First Out (FIFO) page replacement policy and allocates a fixed
number of frames to a process. Consider the following statements:
P: Increasing the number of page frames allocated to a process sometimes increases the page fault rate.
Q: Some programs do not exhibit locality of reference. Which one of the following is TRUE?
(A) Both P and Q are true, and Q is the reason for P
(B) Both P and Q are true, but Q is not the reason for P.
(C) P is false, but Q is true
(D) Both P and Q are false.
Answer (B)
P is true. Increasing the number of page frames allocated to a process may increase the number of page faults (Belady's Anomaly).
Q is also true, but Q is not the reason for P, as Belady's Anomaly occurs for some specific patterns of page references.
2) A single processor system has three resource types X, Y and Z, which are shared by three processes.
There are 5 units of each resource type. Consider the following scenario, where the column alloc denotes
the number of units of each resource type allocated to each process, and the column request denotes the
number of units of each resource type requested by a process in order to complete execution. Which of
these processes will finish LAST?
        alloc     request
        X Y Z     X Y Z
P0      1 2 1     1 0 3
P1      2 0 1     0 1 2
P2      2 2 1     1 2 0
(A) P0 (B) P1 (C) P2 (D) None of the above, since the system is in a deadlock
Answer (C)
Once all resources (5, 4 and 3 instances of X, Y and Z respectively) are allocated, 0, 1 and 2 instances of X,
Y and Z are left. Only needs of P1 can be satisfied. So P1 can finish its execution first. Once P1 is done, it
releases 2, 1 and 3 units of X, Y and Z respectively. Among P0 and P2, needs of P0 can only be satisfied. So
P0 finishes its execution. Finally, P2 finishes its execution.
3) Two processes, P1 and P2, need to access a critical section of code. Consider the following
synchronization construct used by the processes:Here, wants1 and wants2 are shared variables, which are
initialized to false. Which one of the following statements is TRUE about the above construct?
/* P1 */
while (true) {
wants1 = true;
while (wants2 == true);
/* Critical Section */
wants1 = false;
}
/* Remainder section */
/* P2 */
while (true) {
wants2 = true;
while (wants1==true);
/* Critical Section */
wants2 = false;
}
/* Remainder section */
Answer (D)
Since kernel level threads are managed by kernel, blocking one thread doesn’t cause all related threads to
block. It’s a problem with user level threads.
GATE 2006
1) Consider three CPU-intensive processes, which require 10, 20 and 30 time units and arrive at times 0, 2
and 6, respectively. How many context switches are needed if the operating system implements a shortest
remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end.
(A) 1 (B) 2 (C) 3 (D) 4
Answer (B)
Let three process be P0, P1 and P2 with arrival times 0, 2 and 6 respectively and CPU burst times 10, 20
and 30 respectively. At time 0, P0 is the only available process so it runs. At time 2, P1 arrives, but P0 has
the shortest remaining time, so it continues. At time 6, P2 arrives, but P0 has the shortest remaining time,
so it continues. At time 10, P1 is scheduled as it is the shortest remaining time process. At time 30, P2 is
scheduled. Only two context switches are needed. P0 to P1 and P1 to P2.
2) A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the virtual address space is of the same size as the physical address space, the operating system designers decide to get rid of the virtual memory entirely. Which one of the following is true?
Answer (C)
For supporting virtual memory, special hardware support is needed from Memory Management Unit.
Since operating system designers decide to get rid of the virtual memory entirely, hardware support for
memory management is no longer needed
3) A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-
aside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The
minimum size of the TLB tag is:
(A) 11 bits (B) 13 bits (C) 15 bits (D) 20 bits
Answer (C)
Size of a page = 4KB = 2^12
Total number of bits needed to address a page frame = 32 – 12 = 20
If there are ‘n’ cache lines in a set, the cache placement is called n-way set associative. Since TLB is 4 way
set associative and can hold total 128 (2^7) page table entries, number of sets in cache = 2^7/4 = 2^5.
So 5 bits are needed to address a set, and 15 (20 – 5) bits are needed for tag.
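The tag-size arithmetic, as a short sketch (names are illustrative):

```python
virt_bits, offset_bits = 32, 12        # 4 KB page -> 12-bit offset
vpn_bits = virt_bits - offset_bits     # 20 bits of virtual page number
entries, ways = 128, 4
sets = entries // ways                 # 32 sets
set_bits = sets.bit_length() - 1       # 5 bits to pick a set
tag_bits = vpn_bits - set_bits
print(tag_bits)                        # 15 -> answer (C)
```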
GATE 2006
1) Consider three processes (process id 0, 1, 2 respectively) with compute time bursts 2, 4 and 8 time units.
All processes arrive at time zero. Consider the longest remaining time first (LRTF) scheduling algorithm. In
LRTF ties are broken by giving priority to the process with the lowest process id. The average turn around
time is:
(A) 13 units (B) 14 units (C) 15 units (D) 16 units
Answer (A)
Let the processes be p0, p1 and p2. These processes will be executed in following order.
p2 p1 p2 p1 p2 p0 p1 p2 p0 p1 p2
0 4 5 6 7 8 9 10 11 12 13 14
Turn around time of a process is total time between submission of the process and its completion.
Turn around time of p0 = 12 (12-0)
Turn around time of p1 = 13 (13-0)
Turn around time of p2 = 14 (14-0)
Average turn around time = (12 + 13 + 14)/3 = 13 units
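The LRTF schedule above can be reproduced with a small simulator (a sketch with illustrative names; all processes arrive at time 0 and ties go to the lowest process id):

```python
def lrtf_avg_turnaround(bursts):
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    t = 0
    while any(remaining):
        # longest remaining time first; -j breaks ties toward lower ids
        i = max(range(n), key=lambda j: (remaining[j], -j))
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    return sum(finish) / n      # arrival = 0, so turnaround = finish time

print(lrtf_avg_turnaround([2, 4, 8]))   # 13.0 -> answer (A)
```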
2) Consider three processes, all arriving at time zero, with total execution time of 10, 20 and 30 units,
respectively. Each process spends the first 20% of execution time doing I/O, the next 70% of time doing
computation, and the last 10% of time doing I/O again. The operating system uses a shortest remaining
compute time first scheduling algorithm and schedules a new process either when the running process gets
blocked on I/O or when the running process finishes its compute burst. Assume that all I/O operations can
be overlapped as much as possible. For what percentage of time does the CPU remain idle?
(A) 0% (B) 10.6% (C) 30.0% (D) 89.4%
Answer (B)
Let three processes be p0, p1 and p2. Their execution time is 10, 20 and 30 respectively. p0 spends first 2
time units in I/O, 7 units of CPU time and finally 1 unit in I/O. p1 spends first 4 units in I/O, 14 units of
CPU time and finally 2 units in I/O. p2 spends first 6 units in I/O, 21 units of CPU time and finally 3 units
in I/O.
idle p0 p1 p2 idle
0 2 9 23 44 47
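From the timeline above, the CPU is idle during 0-2 (initial I/O) and 44-47 (final I/O of p2); a quick sketch of the percentage (names are illustrative):

```python
total_time = 47
idle_time = (2 - 0) + (47 - 44)               # 5 units of idle CPU
idle_pct = round(100 * idle_time / total_time, 1)
print(idle_pct)                               # 10.6 -> answer (B)
```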
3) The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the
old value of x in y without allowing any intervening access to the memory location x. consider the following
implementation of P and V functions on a binary semaphore .
void P (binary_semaphore *s) {
unsigned y;
unsigned *x = &(s->value);
do {
fetch-and-set x, y;
} while (y);
}
Answer (A)
Let us talk about the operation P(). Here x points to s->value. fetch-and-set fetches the old value of *x into y and sets *x to 1. The while loop of a process will continue forever if some other process doesn’t execute V() and set the value of s back to 0. If context switching is disabled in P, the while loop will run forever as no other process will be able to execute V().
4) Consider the following snapshot of a system running n processes. Process i is holding Xi instances of a
resource R, 1 <= i <= n. currently, all instances of R are occupied. Further, for all i, process i has placed a
request for an additional Yi instances while holding the Xi instances it already has. There are exactly two
processes p and q such that Yp = Yq = 0. Which one of the following can serve as a necessary condition to
guarantee that the system is not approaching a deadlock?
(A) min (Xp, Xq) < max (Yk) where k != p and k != q
(B) Xp + Xq >= min (Yk) where k != p and k != q
(C) max (Xp, Xq) > 1
(D) min (Xp, Xq) > 1
Answer (B)
Since both p and q don’t need additional resources, they both can finish and release Xp + Xq resources
without asking for any additional resource. If the resources released by p and q are sufficient for another
process waiting for Yk resources, then system is not approaching deadlock.
GATE 2005
1) Normally user programs are prevented from handling I/O directly by I/O instructions in them. For CPUs
having explicit I/O instructions, such I/O protection is ensured by having the I/O instructions privileged. In
a CPU with memory mapped I/O, there is no explicit I/O instruction. Which one of the following is true for a CPU with memory mapped I/O?
Answer (a)
Memory mapped I/O means accessing I/O via general memory accesses, as opposed to specialized I/O instructions. The programmer can therefore reach a device by reading or writing an ordinary memory location. To prevent such access, the OS (kernel) divides the address space into kernel space and user space. A user application can directly access only user space; to access kernel space, it must use system calls (traps).
Answer (b)
Swap space is typically used to store process data.
Answer (c)
4) Suppose n processes, P1, …, Pn, share m identical resource units, which can be reserved and released one at a time. The maximum resource requirement of process Pi is Si, where Si > 0. Which one of the following is a sufficient condition for ensuring that deadlock does not occur?
Answer (c)
In the extreme case, every process has acquired Si − 1 resource units and is waiting for one more. Deadlock can never occur if even this worst case leaves at least one unit free, so the following condition must hold:
(S1 − 1) + (S2 − 1) + … + (Sn − 1) < m
which can be rewritten as
S1 + S2 + … + Sn < (m + n)
Let u, v be the values printed by the parent process, and x, y be the values printed by the child process.
Which one of the following is TRUE?
(a) u = x + 10 and v = y (b) u = x + 10 and v != y
(c) u + 10 = x and v = y (d) u + 10 = x and v != y
Answer (c)
fork() returns 0 in child process and process ID of child process in parent process.
In Child (x), a = a + 5
In Parent (u), a = a – 5;
Therefore x = u + 10.
The physical addresses of ‘a’ in parent and child must be different. But our program accesses virtual
addresses (assuming we are running on an OS that uses virtual memory). The child process gets an exact
copy of parent process and virtual address of ‘a’ doesn’t change in child process. Therefore, we get same
addresses in both parent and child.
Mixed-Year GATE Questions
1. Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes 1
microsecond. Then a 99.99% hit ratio results in average memory access time of (GATE CS 2000)
(a) 1.9999 milliseconds (b) 1 millisecond
(c) 9.999 microseconds (d) 1.9999 microseconds
Answer: (d)
Explanation:
Average memory access time
= (page-fault rate) x (page-fault service time) + (hit ratio) x (memory access time)
= 0.0001 x 10 ms + 0.9999 x 1 microsecond
= 1 + 0.9999 = 1.9999 microseconds
2. Which of the following need not necessarily be saved on a context switch between processes? (GATE CS 2000)
(a) General purpose registers (b) Translation look-aside buffer
(c) Program counter (d) All of the above
Answer: (b)
Explanation:
In a process context switch, the state of the first process must be saved somehow, so that, when the
scheduler gets back to the execution of the first process, it can restore this state and continue.
The state of the process includes all the registers that the process may be using, especially the program
counter, plus any other operating system specific data that may be necessary.
A Translation lookaside buffer (TLB) is a CPU cache that memory management hardware uses to improve
virtual address translation speed. A TLB has a fixed number of slots that contain page table entries, which
map virtual addresses to physical addresses. On a context switch, some TLB entries can become invalid,
since the virtual-to-physical mapping is different. The simplest strategy to deal with this is to completely
flush the TLB.
3. Where does the swap space reside? (GATE CS 2001)
(a) RAM (b) Disk (c) ROM (d) On-chip cache
Answer: (b)
Explanation:
Swap space is an area on disk that temporarily holds a process memory image. When physical memory
demand is sufficiently low, process memory images are brought back into physical memory from the
swap area. Having sufficient swap space enables the system to keep some physical memory free at all
times.
4. Which of the following does not interrupt a running process? (GATE CS 2001)
(a) A device (b) Timer (c) Scheduler process (d) Power failure
Answer: (c)
Explanation:
The scheduler process does not interrupt any process; its job is to select processes at the following three levels:
Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.
Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU.
Mid-term scheduler (swapper) – present in systems with virtual memory; temporarily removes processes from main memory and places them on secondary storage (such as a disk drive), or vice versa.
The mid-term scheduler may decide to swap out a process that has not been active for some time, has a low priority, is page-faulting frequently, or is taking up a large amount of memory, in order to free up main memory for other processes. It swaps the process back in later, when more memory is available or when the process has been unblocked and is no longer waiting for a resource.
Answer: (b)
6. Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is
4KB, what is the approximate size of the page table? (GATE 2001)
(a) 16 MB (b) 8 MB (c) 2 MB (d) 24 MB
Answer: (c)
Explanation:
A page entry is used to get address of physical memory. Here we assume that single level of Paging is
happening. So the resulting page table will contain entries for all the pages of the Virtual address space.
Number of entries in page table =
(virtual address space size)/(page size)
Using above formula we can say that there will be 2^(32-12) = 2^20 entries in page table.
No. of bits required to address the 64MB Physical memory = 26.
So there will be 2^(26-12) = 2^14 page frames in the physical memory. And page table needs to store the
address of all these 2^14 page frames. Therefore, each page table entry will contain 14 bits address of the
page frame and 1 bit for valid-invalid bit.
Since memory is byte-addressable, each page table entry is rounded up to 16 bits, i.e., 2 bytes. The page table size is therefore 2^20 entries x 2 bytes = 2 MB.
For clarity of the concept, see the standard paging figure; as per our question, p = 20, d = 12 and f = 14.
7. Consider Peterson’s algorithm for mutual exclusion between two concurrent processes i and j. The program executed by process i is shown below.
repeat
flag [i] = true;
turn = j;
while ( P ) do no-op;
Enter critical section, perform actions, then exit critical
section
flag [ i ] = false;
Perform other non-critical section actions.
until false;
For the program to guarantee mutual exclusion, the predicate P in the while loop should be (GATE 2001)
a) flag[j] = true and turn = i b) flag[j] = true and turn = j
c) flag[i] = true and turn = j d) flag[i] = true and turn = i
Answer: (b)
Basically, Peterson’s algorithm provides guaranteed mutual exclusion using two constructs: flag[] and turn. flag[] signals the willingness of a process to enter its critical section, while turn records which process has priority when both want to enter. So by replacing P with “flag[j] = true and turn = j”, process i waits only while process j wants to enter and has priority, which guarantees mutual exclusion.
8. More than one word is put in one cache block to (GATE 2001)
(a) exploit the temporal locality of reference in a program
(b) exploit the spatial locality of reference in a program
(c) reduce the miss penalty
(d) none of the above
Answer: (b)
Temporal locality refers to the reuse of specific data and/or resources within relatively small time
durations. Spatial locality refers to the use of data elements within relatively close storage locations.
To exploit spatial locality, more than one word is put into each cache block.
Answer: (d)
In a system with virtual memory, a context switch includes the extra overhead of switching address spaces.
10. Consider a set of n tasks with known runtimes r1, r2, … rn to be run on a uniprocessor machine. Which
of the following processor scheduling algorithms will result in the maximum throughput? (GATE 2001)
(a) Round-Robin (b) Shortest-Job-First
(c) Highest-Response-Ratio-Next (d) First-Come-First-Served
Answer: (b)
Running the shortest jobs first completes the largest number of tasks per unit time, so Shortest-Job-First maximizes throughput.
11. Which of the following is NOT a valid deadlock prevention scheme? (GATE CS 2000)
(a) Release all resources before requesting a new resource
(b) Number the resources uniquely and never request a lower numbered resource than the last one
requested.
(c) Never request a resource after releasing any resource
(d) Request and have all required resources allocated before execution
Answer: (c)
Option (c) places no useful restriction on future requests, so it prevents nothing; the other options break the hold-and-wait or circular-wait conditions.
Operating System
12. Let m[0] … m[4] be mutexes (binary semaphores) and P[0] … P[4] be processes.
Suppose each process P[i] executes the following:
wait (m[i]); wait (m[(i+1) mod 4]);
------
Answer: (b)
Explanation:
A deadlock arises in the following situation:
P[0] has acquired m[0] and waiting for m[1]
P[1] has acquired m[1] and waiting for m[2]
P[2] has acquired m[2] and waiting for m[3]
P[3] has acquired m[3] and waiting for m[0]
13. A graphics card has on board memory of 1 MB. Which of the following modes can the
card not support? (GATE CS 2000)
(a) 1600 x 400 resolution with 256 colours on a 17 inch monitor
(b) 1600 x 400 resolution with 16 million colours on a 14 inch monitor
(c) 800 x 400 resolution with 16 million colours on a 17 inch monitor
(d) 800 x 800 resolution with 256 colours on a 14 inch monitor
Answer: (b)
Explanation:
Monitor size doesn’t matter here. So, we can easily deduct that answer should be (b) as this has the
highest memory requirements. Let us verify it.
Number of bits required to store a 16M colors pixel = ceil(log2(16*1000000)) = 24
Number of bytes required for 1600 x 400 resolution with 16M colours = (1600 x 400 x 24)/8 = 1,920,000 bytes, which is greater than 1 MB (1,048,576 bytes).
14. Consider a virtual memory system with FIFO page replacement policy. For an arbitrary page access pattern, increasing the number of page frames in main memory will (GATE CS 2001)
a) Always decrease the number of page faults
b) Always increase the number of page faults
c) Sometimes increase the number of page faults
d) Never affect the number of page faults
Operating System
Answer: (c)
Explanation:
Incrementing the number of page frames doesn’t always decrease the page faults (Belady’s Anomaly).
Answer: (d)
16. Using a larger block size in a fixed block size file system leads to (GATE CS 2003)
a) better disk throughput but poorer disk space utilization
b) better disk throughput and better disk space utilization
c) poorer disk throughput but better disk space utilization
d) poorer disk throughput and poorer disk space utilization
Answer (a)
If the block size is large, more data is transferred per seek, so fewer seeks are needed and disk throughput improves; but a larger block size also wastes disk space through internal fragmentation in the last block of each file.
17. Consider the following statements with respect to user-level threads and kernel supported threads
i. context switch is faster with kernel-supported threads
ii. for user-level threads, a system call can block the entire process
iii. Kernel supported threads can be scheduled independently
iv. User level threads are transparent to the kernel
Which of the above statements are true? (GATE CS 2004)
a) (ii), (iii) and (iv) only
b) (ii) and (iii) only
c) (i) and (iii) only
d) (i) and (ii) only
Answer (a)
Switching between user-level threads does not involve the kernel at all, so (i) is false: context switches are faster with user-level threads. Statements (ii), (iii) and (iv) are true.
18. The minimum number of page frames that must be allocated to a running process in a virtual memory
environment is determined by (GATE CS 2004)
a) the instruction set architecture
b) the page size
c) the physical memory size
d) the number of processes in memory
Answer (a)
Each process needs some minimum number of page frames, determined by the instruction set architecture. For example, the IBM 370 needs 6 page frames to handle the MVC (storage-to-storage move) instruction:
the instruction is 6 bytes long and might span 2 pages;
2 pages to handle the source (“from”) operand;
2 pages to handle the destination (“to”) operand.
19. In a system with 32 bit virtual addresses and 1 KB page size, use of one-level page tables for virtual to
physical address translation is not practical because of (GATE CS 2003)
a) the large amount of internal fragmentation
b) the large amount of external fragmentation
c) the large memory overhead in maintaining page tables
d) the large computation overhead in the translation process
Answer (c)
Since the page size is very small, the page table becomes huge: with 32-bit virtual addresses and 1 KB pages there are 2^(32-10) = 2^22 page table entries.
Note that a page table entry also holds auxiliary information about the page, such as a present bit, a dirty or modified bit, and address-space or process ID information, amongst others. Assuming 512 MB of physical memory, the frame number alone needs 29 - 10 = 19 bits per entry, so
size of page table > (total number of page table entries) x (size of a page table entry)
> 2^22 x 19 bits, which is about 9.5 MB
And this much memory is required for each process, because each process maintains its own page table. The page table would be even larger for physical memory above 512 MB, since the frame number would then need more than 19 bits. Therefore, it is advised to use a multilevel page table for such scenarios.