Operating System 22mca12 Notes 240224 192133
Concepts
Digital Notes By
RAMESH
KRISHNAGOUDAR
Assistant Professor
Department of Master of Computer Applications
AMCEC, Bangalore
Module I
Module II
Module III
Module IV
Module V
Storage Management-File Systems: A Simple file system – General model of a file system –
Symbolic file system – Access control verification – Logical file system – Physical file system –
allocation strategy module – Device strategy module, I/O initiators, Device handlers – Disk
scheduling, Design of I/O systems, File Management.
REFERENCE BOOK:
1. Operating Systems – Flynn, McHoes, Cengage Learning
2. Operating Systems – Pabitra Pal Choudhury, PHI
3. Operating Systems – William Stallings, Prentice Hall
4. Operating Systems – H.M. Deitel, P. J. Deitel, D. R. Choffnes, 3rd Edition, Pearson
The prime objective of an operating system is to manage and control the various hardware
resources of a computer system.
These hardware resources include the processor, memory, disk space and so on.
The output is displayed on the monitor. In addition to communicating with the hardware,
the operating system provides error handling procedures and displays error notifications.
If a device is not functioning properly, the operating system cannot communicate with that
device.
Providing an Interface
The operating system controls and coordinates the use of hardware among the various
application programs for the various users.
It is a program that directly interacts with the hardware.
The operating system is the first program loaded into the computer, and it remains in memory
at all times thereafter.
System goals
This set of jobs is a subset of the jobs kept in the job pool. The operating system picks and
begins to execute one of the jobs in memory. When that job needs to wait, the operating
system simply switches to and executes another job, and so on. A multiprogramming
operating system is sophisticated because the operating system makes decisions for the
users. If several jobs are ready to run at the same time, the system must choose among
them; this decision is known as CPU scheduling.
Layer   Function
5       User Programs
4       I/O Management
3       Operator-Process Communication
2       Memory Management
1       CPU Scheduling
0       Hardware
Implementation: Although the virtual machine concept is useful, it is difficult to implement since
much effort is required to provide an exact duplicate of the underlying machine. The CPU is being
multiprogrammed among several virtual machines, which slows down the virtual machines in
various ways.
Difficulty: A major difficulty with this approach is regarding the disk system. The solution is to
provide virtual disks, which are identical in all respects except size. These are known as mini disks in
IBM’s VM OS. The sum of sizes of all mini disks should be less than the actual amount of physical
disk space available.
I/O Structure
A general purpose computer system consists of a CPU and multiple device controllers connected
through a common bus. Each device controller is in charge of a specific type of device. A
device controller maintains some buffer storage and a set of special purpose registers. The device
controller is responsible for moving data between the peripheral devices and its buffer storage.
I/O Interrupt: To start an I/O operation, the CPU loads the appropriate registers within the device
controller. In turn, the device controller examines the contents of these registers to determine the
action to be taken. For example, if the device controller finds a read request, the controller will
start the transfer of data from the device to its buffer. Once the transfer of data is complete, the
device controller informs the CPU that the operation has finished. Once the I/O is started, two
courses of action are possible:
In the simplest case, the I/O is started; then, at I/O completion, control is returned to the user
process. This is known as synchronous I/O.
Process scheduling:
This queue is generally stored as a linked list. A ready-queue header contains pointers to the first
and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the
ready queue. The list of processes waiting for a particular I/O device is kept on a list called the
device queue. Each device has its own device queue. A new process is initially put in the ready
queue, where it waits until it is selected for execution and is given the CPU.
The waiting time for process P3 = 0, P1 = 2, P4 = 5, P2 = 9; the turnaround time for process
P3 = 0 + 2 = 2, P1 = 2 + 3 = 5, P4 = 5 + 4 = 9, P2 = 9 + 5 = 14.
Then average waiting time = (0 + 2 + 5 + 9)/4 = 16/4 = 4
Average turnaround time = (2 + 5 + 9 + 14)/4 = 30/4 = 7.5
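The arithmetic above can be reproduced with a short sketch. The burst times (P3 = 2, P1 = 3, P4 = 4, P2 = 5) are inferred from the worked numbers, and all processes are assumed to arrive at time 0:

```python
def sjf_metrics(bursts):
    """Non-preemptive SJF with every job arriving at time 0.
    bursts: dict mapping process name -> CPU burst time."""
    waiting, turnaround = {}, {}
    clock = 0
    # Shortest job first: serve jobs in increasing order of burst time.
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waiting[name] = clock        # time spent waiting in the ready queue
        clock += burst
        turnaround[name] = clock     # waiting time + burst time
    return waiting, turnaround

waiting, turnaround = sjf_metrics({"P1": 3, "P2": 5, "P3": 2, "P4": 4})
print(sum(waiting.values()) / 4)     # 4.0
print(sum(turnaround.values()) / 4)  # 7.5
```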
The SJF algorithm may be either preemptive or non-preemptive. The preemptive SJF algorithm
is also known as shortest-remaining-time-first.
Consider the following example.
Process Arrival Time CPU time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
In this case the Gantt chart will be:

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
The waiting time for process
P1 = 10 - 1 = 9
P2 = 1 – 1 = 0
P3 = 17 – 2 = 15
P4 = 5 – 3 = 2
The average waiting time = (9 + 0 + 15 + 2)/4 = 26/4 = 6.5
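A minimal simulation of shortest-remaining-time-first reproduces these figures; the one-tick loop and tie-breaking by arrival time are implementation assumptions:

```python
def srtf(jobs):
    """Preemptive SJF (shortest-remaining-time-first), one time unit at a time.
    jobs: dict name -> (arrival, burst). Returns dict name -> waiting time."""
    remaining = {n: b for n, (a, b) in jobs.items()}
    finish = {}
    t = 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= t]
        if not ready:
            t += 1           # idle until the next arrival
            continue
        # Run the ready job with the least remaining time for one tick.
        run = min(ready, key=lambda q: (remaining[q], jobs[q][0]))
        remaining[run] -= 1
        t += 1
        if remaining[run] == 0:
            del remaining[run]
            finish[run] = t
    # waiting = turnaround - burst = (finish - arrival) - burst
    return {n: finish[n] - jobs[n][0] - jobs[n][1] for n in jobs}

w = srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(w)                     # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2}
print(sum(w.values()) / 4)   # 6.5
```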
3. Priority Scheduling Algorithm: In this scheduling a priority is associated with each process
and the CPU is allocated to the process with the highest priority. Equal priority processes are
scheduled in FCFS manner. Consider the following example:
Process CPU time Priority
P1 10 3
P2 1 1
P3 2 3
When a philosopher thinks, she does not interact with her colleagues. From time to time a
philosopher gets hungry and tries to pick up the two chopsticks that are closest to her. A
philosopher may pick up one chopstick or two chopsticks at a time, but she cannot pick up a
chopstick that is already in the hand of a neighbor. When a hungry philosopher has both her
chopsticks at the same time, she eats without releasing them. When she has finished
eating, she puts down both of her chopsticks and starts thinking again. This is
considered a classic synchronization problem. In one solution, each chopstick is
represented by a semaphore. A philosopher grabs a chopstick by executing the wait
operation on that semaphore, and releases it by executing the signal operation
on the appropriate semaphore. The structure of philosopher i is as follows:
do{
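The listing above is truncated. As a runnable sketch of the same semaphore idea (names are assumed), the version below has the last philosopher pick up her chopsticks in the opposite order, which is one standard way of avoiding the deadlock that the naive solution risks:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # one per chopstick
meals = [0] * N

def philosopher(i, rounds=3):
    left, right = i, (i + 1) % N
    # Asymmetry breaks the circular wait: the last philosopher
    # picks up her right chopstick first.
    first, second = (left, right) if i < N - 1 else (right, left)
    for _ in range(rounds):
        chopstick[first].acquire()    # wait(chopstick[first])
        chopstick[second].acquire()   # wait(chopstick[second])
        meals[i] += 1                 # eat
        chopstick[second].release()   # signal(chopstick[second])
        chopstick[first].release()    # signal(chopstick[first])

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher finished all her rounds
```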
T0           T1
Read(A)
Write(A)
Read(B)
If transactions overlap, the resulting execution schedule is known as a non-serial or
concurrent schedule, as below:

T0           T1
Read(A)
Write(A)
             Read(A)
             Write(A)
Read(B)
Write(B)
             Read(B)
             Write(B)
4. Locking: This technique governs how locks are acquired and released. There are two types
of locks: shared locks and exclusive locks. If a transaction T has obtained a shared lock (S)
on data item Q, then T can read the item but cannot write it. If a transaction T has obtained an
exclusive lock (X) on data item Q, then T can both read and write the data item Q.
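A toy lock-compatibility sketch (no blocking or queueing; all names are assumed) showing that shared locks coexist while an exclusive lock excludes everything else:

```python
class LockTable:
    """Toy lock manager for one data item Q: shared (S) locks may
    coexist, while an exclusive (X) lock excludes everyone else."""
    def __init__(self):
        self.sharers = set()   # transactions holding S locks on Q
        self.writer = None     # transaction holding the X lock, if any

    def lock_s(self, txn):
        if self.writer is not None and self.writer != txn:
            return False       # another transaction holds X on Q
        self.sharers.add(txn)
        return True

    def lock_x(self, txn):
        if self.writer not in (None, txn) or self.sharers - {txn}:
            return False       # someone else holds any lock on Q
        self.writer = txn
        return True

q = LockTable()
print(q.lock_s("T1"))   # True  - T1 reads Q
print(q.lock_s("T2"))   # True  - shared locks are compatible
print(q.lock_x("T2"))   # False - T1 still holds a shared lock
```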
5. Timestamp: In this technique each transaction in the system is associated with a unique,
fixed timestamp, denoted TS. This timestamp is assigned by the system before the transaction
starts. If a transaction Ti has been assigned timestamp TS(Ti) and later a new transaction
Tj enters the system, then TS(Ti) < TS(Tj). There are two types of timestamps: the
W-timestamp and the R-timestamp. The W-timestamp of a data item denotes the largest
timestamp of any transaction that successfully performed a write on it; the R-timestamp
denotes the largest timestamp of any transaction that successfully executed a read on it.
Deadlock:
In a resource-allocation graph:
Pi requests an instance of Rj: request edge Pi -> Rj
Pi is holding an instance of Rj: assignment edge Rj -> Pi
Example:
Go to step 2
4. If Finish[i] = false for some i, 1 <= i <= n, then the system is in a deadlock state. Moreover, if
Finish[i] = false, then process Pi is deadlocked.
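The detection steps above can be sketched as follows (matrix names follow the usual Allocation/Request/Available convention; the example graph is hypothetical):

```python
def detect_deadlock(allocation, request, available):
    """Deadlock-detection sketch: returns the list of deadlocked
    process indices (an empty list means no deadlock).
    allocation[i][j] / request[i][j]: instances of resource Rj held /
    requested by process Pi; available[j]: free instances of Rj."""
    n, m = len(allocation), len(available)
    work = list(available)
    # Step 1: a process holding nothing cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # Step 2: find Pi with Finish[i] = false and Request_i <= Work.
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                # Step 3: Pi can finish; reclaim its resources, go to step 2.
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progress = True
    # Step 4: everyone still unfinished is deadlocked.
    return [i for i in range(n) if not finish[i]]

# P0 and P1 each hold one resource and request the other's: both deadlock.
alloc = [[1, 0], [0, 1], [0, 0]]
req = [[0, 1], [1, 0], [0, 0]]
print(detect_deadlock(alloc, req, [0, 0]))  # [0, 1]
```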
Memory Management
Memory consists of a large array of words or bytes, each with its own address. The CPU fetches
instructions from memory according to the value of the program counter. These instructions
may cause additional loading from and storing to specific memory addresses.
Memory unit sees only a stream of memory addresses. It does not know how they are generated.
Program must be brought into memory and placed within a process for it to be run.
Input queue – collection of processes on the disk that are waiting to be brought into memory for
execution.
User programs go through several steps before being run.
The run-time mapping from virtual to physical addresses is done by a hardware device called the
memory management unit (MMU).
This method requires hardware support slightly different from the basic hardware configuration.
The base register is now called a relocation register. The value in the relocation register is added
to every address generated by a user process at the time the address is sent to memory.
The user program never sees the real physical addresses. The program can create a pointer to
location 346, store it in memory, manipulate it and compare it to other addresses. The user
program deals with logical addresses. The memory mapping hardware converts logical addresses
into physical addresses. The final location of a referenced memory address is not determined
until the reference is made.
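A minimal sketch of this relocation-register mapping (the relocation value 14000 and the limit are assumed example numbers):

```python
def mmu_translate(logical, relocation, limit):
    """Relocation-register MMU sketch: the logical address generated
    by the user program is checked against the limit register, then
    the relocation (base) register is added to form the physical address."""
    if logical >= limit:
        raise MemoryError("trap: logical address out of bounds")
    return relocation + logical

# With the relocation register set to 14000, the pointer to logical
# location 346 maps to physical address 14346.
print(mmu_translate(346, relocation=14000, limit=1000))  # 14346
```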
Dynamic Loading
Routine is not loaded until it is called.
All routines are kept on disk in a relocatable load format.
The main program is loaded into memory and executed. When a routine needs to call another
routine, the calling routine first checks to see whether the other routine has been loaded. If it
has not, the loader is called to load the desired routine into memory
and to update the program’s address tables to reflect this change. Then control is passed to the
newly loaded routine.
Better memory-space utilization; unused routine is never loaded.
Useful when large amounts of code are needed to handle infrequently occurring cases.
Multiple-partition allocation
o Hole – block of available memory; holes of various size are scattered throughout
memory.
o When a process arrives, it is allocated memory from a hole large enough to
accommodate it.
o Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
o A set of holes of various sizes, is scattered throughout memory at any given time. When
a process arrives and needs memory, the system searches this set for a hole that is large
enough for this process. If the hole is too large, it is split into two: one part is allocated
to the arriving process; the other is returned to the set of holes. When a process
terminates, it releases its block of memory, which is then placed back in the set of holes.
If the new hole is adjacent to other holes, these adjacent holes are merged to form one
larger hole.
o This procedure is a particular instance of the general dynamic storage allocation
problem, which is how to satisfy a request of size n from a list of free holes. There are
many solutions to this problem. The set of holes is searched to determine which hole is
best to allocate. The first-fit, best-fit and worst-fit strategies are the most common ones
used to select a free hole from the set of available holes.
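The three placement strategies can be sketched over a list of (start, length) holes; the hole list and request size below are made-up examples:

```python
def allocate(holes, size, strategy="first"):
    """Pick a hole for a request of `size` from a list of (start, length)
    holes. Returns (start, new_holes) or None if no hole is large enough.
    first-fit: first adequate hole; best-fit: smallest adequate hole;
    worst-fit: largest hole."""
    candidates = [h for h in holes if h[1] >= size]
    if not candidates:
        return None
    if strategy == "first":
        hole = candidates[0]
    elif strategy == "best":
        hole = min(candidates, key=lambda h: h[1])
    else:  # worst fit
        hole = max(candidates, key=lambda h: h[1])
    start, length = hole
    new_holes = [h for h in holes if h != hole]
    if length > size:
        # Split the hole: the remainder is returned to the set of holes.
        new_holes.append((start + size, length - size))
    return start, sorted(new_holes)

holes = [(0, 100), (200, 500), (800, 300)]
print(allocate(holes, 250, "best"))   # best fit picks the 300-unit hole at 800
print(allocate(holes, 250, "worst"))  # worst fit picks the 500-unit hole at 200
```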
When a process arrives in the system to be executed, its size expressed in pages is examined. Each
page of the process needs one frame. Thus, if the process requires n pages, at least n frames must be
available in memory.
Where p1 is an index into the outer page table, and p2 is the displacement within the page of the
outer page table.The below figure shows a two level page table scheme.
Each virtual address in the system consists of a triple <process-id, page-number, offset>. Each
inverted page table entry is a pair <process-id, page-number> where the process-id assumes the role
of the address space identifier. When a memory reference occurs, part of the virtual address,
consisting of <process-id, page-number>, is presented to the memory subsystem. The inverted page
table is then searched for a match. If a match is found say at entry i, then the physical address <i,
offset> is generated. If no match is found, then an illegal address access has been attempted.
Shared Page:
Shared code
Segmentation
Memory-management scheme that supports user view of memory.
A program is a collection of segments. A segment is a logical unit such as:
Main program,
Procedure,
Function,
Method,
Object,
Local variables, global variables,
Common block,
Stack,
Segmentation is a memory management scheme that supports this user view of memory.
A logical address space is a collection of segments. Each segment has a name and a length.
The addresses specify both the segment name and the offset within the segment.
The user therefore specifies each address by two quantities such as segment name and an offset.
For simplicity of implementation, segments are numbered and are referred to by a segment
number, rather than by a segment name.
Logical address consists of a two-tuple:
<segment-number, offset>
Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has:
o Base – contains the starting physical address where the segments reside in memory.
o Limit – specifies the length of the segment.
Segment-table base register (STBR) points to the segment table’s location in memory.
Segment-table length register (STLR) indicates number of segments used by a program;
Segment number s is legal if s < STLR.
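Putting the base/limit checks together, a sketch of segment-table translation (the table values are assumed example numbers):

```python
def segment_translate(segment_table, s, d, stlr):
    """Segmentation address translation sketch.
    segment_table: list of (base, limit) entries; the logical
    address is the two-tuple <s, d>; stlr is the segment-table
    length register."""
    if s >= stlr:
        raise MemoryError("trap: segment number out of range")
    base, limit = segment_table[s]
    if d >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + d  # physical address

# Hypothetical segment table: (base, limit) per segment.
table = [(1400, 1000), (6300, 400), (4300, 1100)]
print(segment_translate(table, 2, 53, stlr=3))  # 4300 + 53 = 4353
```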
Virtual Memory
It is a technique which allows execution of processes that may not be completely in primary
memory.
It separates the user logical memory from the physical memory. This separation allows an
extremely large memory to be provided for program when only a small physical memory is
available.
Virtual memory makes the task of programming much easier because the programmer no longer
needs to worry about the amount of physical memory available.
The virtual memory allows files and memory to be shared by different processes by page
sharing.
It is most commonly implemented by demand paging.
Thus it avoids reading into memory pages that will not be used anyway, decreasing the swap time
and the amount of physical memory needed. This technique needs some hardware support to
distinguish between the pages that are in memory and those that are on the disk. A valid–invalid
bit is used for this purpose. When this bit is set to valid, it indicates that the associated page is in
memory. If the bit is set to invalid, it indicates that the page is either not in the logical address
space or is valid but currently on the disk.
Optimal algorithm: replaces the page that will not be used for the longest period of time.
Example using 4 frames:

Reference #      1  2  3  4  5  6  7  8  9  10 11 12
Page referenced  1  2  3  4  1  2  5  1  2  3  4  5
Frame 1          1* 1  1  1  1  1  1  1  1  1  4* 4
Frame 2             2* 2  2  2  2  2  2  2  2  2  2
Frame 3                3* 3  3  3  3  3  3  3  3  3
Frame 4                   4* 4  4  5* 5  5  5  5  5
(* = page fault)
Analysis: 12 page references, 6 page faults, 2 page replacements. Page faults per number of
frames = 6/4 = 1.5
Unfortunately, the optimal algorithm requires special hardware (crystal ball, magic mirror, etc.)
not typically found on today’s computers
Optimal algorithm is still used as a metric for judging other page replacement algorithms
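The optimal policy can still be simulated offline, since there the full reference string is known in advance. A sketch, reproducing the 6 faults counted above:

```python
def optimal_faults(refs, nframes):
    """OPT sketch: on a fault with full frames, evict the page whose
    next use lies farthest in the future (or that is never used again)."""
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(p)
        else:
            future = refs[i + 1:]
            # Never-used-again pages rank past the end of the future string.
            victim = max(frames, key=lambda q: future.index(q)
                         if q in future else len(future) + 1)
            frames[frames.index(victim)] = p
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 4))  # 6 page faults
```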
FIFO algorithm
Replaces pages based on their order of arrival: oldest page is replaced
Example using 4 frames:
Analysis: 12 page references, 10 page faults, 6 page replacements. Page faults per number of
frames = 10/4 = 2.5
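A FIFO sketch over the same reference string reproduces the 10 faults counted above:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """FIFO sketch: victimize the page that has been resident longest."""
    frames, faults = deque(), 0
    for p in refs:
        if p in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popleft()  # the oldest arrival is replaced
        frames.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 4))  # 10 page faults
```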
LFU algorithm (page-based)
procedure: replace the page which has been referenced least often
For each page in the reference string, we need to keep a reference count. All reference counts
start at 0 and are incremented every time a page is referenced.
example using 4 frames:
Reference #      1  2  3  4  5  6  7  8  9  10 11 12
Page referenced  1  2  3  4  1  2  5  1  2  3  4  5
Frame 1          1* 1  1  1  1  1  1  1  1  1  1  1
Frame 2             2* 2  2  2  2  2  2  2  2  2  2
Frame 3                3* 3  3  3  5* 5  5  5  4* 4
Frame 4                   4* 4  4  4  4  4  3* 3  5*
(* = page fault; a page's reference count persists even while it is paged out)
At the 7th page in the reference string, we need to select a page to be victimized. Either 3 or 4
will do, since they have the same reference count (1). Let’s pick 3.
Likewise at the 10th page reference: pages 4 and 5 have been referenced once each. Let’s pick
page 4 to victimize. Page 3 is brought in, and its reference count (which was 1 before we paged
it out a while ago) is updated to 2. Page 5 is then victimized at the 11th reference and page 3 at
the 12th, since they hold the smallest counts at those points.
Analysis: 12 page references, 8 page faults, 4 page replacements. Page faults per number of
frames = 8/4 = 2
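A simulation of page-based LFU with the tie-breaks used in the walkthrough (lowest page number assumed as the final tie-break) yields 8 faults on this string:

```python
from collections import Counter

def lfu_faults(refs, nframes):
    """Page-based LFU sketch: reference counts belong to pages and
    survive eviction; ties are broken by lowest page number."""
    frames, count, faults = [], Counter(), 0
    for p in refs:
        count[p] += 1          # the reference itself bumps the count
        if p in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(p)
        else:
            # Victim: resident page with the smallest reference count.
            victim = min(frames, key=lambda q: (count[q], q))
            frames[frames.index(victim)] = p
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lfu_faults(refs, 4))  # 8 page faults
```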
LFU algorithm (frame-based): replace the page in the frame whose reference count is smallest;
a frame's count is reset to 1 when a new page is brought into it. Example using 4 frames:

Reference #      1  2  3  4  5  6  7  8  9  10 11 12
Page referenced  1  2  3  4  1  2  5  1  2  3  4  5
Frame 1          1* 1  1  1  1  1  1  1  1  1  1  1
Frame 2             2* 2  2  2  2  2  2  2  2  2  2
Frame 3                3* 3  3  3  5* 5  5  5  4* 4
Frame 4                   4* 4  4  4  4  4  3* 3  5*
(* = page fault; counts are kept per frame and reset on page-in)
At the 7th reference, we victimize the page in the frame which has been referenced least often --
in this case, pages 3 and 4 (contained within frames 3 and 4) are candidates, each with a
reference count of 1. Let’s pick the page in frame 3. Page 5 is paged in, and frame 3’s reference
count is reset to 1.
At the 10th reference, we again have a page fault. Pages 5 and 4 (contained within frames 3 and
4) are candidates, each with a count of 1. Let’s pick page 4. Page 3 is paged into frame 4, and
frame 4’s reference count is reset to 1. Pages 5 and 3 are victimized in turn at the 11th and 12th
references, since their frames hold the smallest counts.
Analysis: 12 page references, 8 page faults, 4 page replacements. Page faults per number of
frames = 8/4 = 2
LRU algorithm
Replaces the page that has not been referenced for the longest time: replace the page with the
greatest backward distance in the reference string.
Example using 4 frames:
Reference #      1  2  3  4  5  6  7  8  9  10 11 12
Page referenced  1  2  3  4  1  2  5  1  2  3  4  5
Frame 1          1* 1  1  1  1  1  1  1  1  1  1  5*
Frame 2             2* 2  2  2  2  2  2  2  2  2  2
Frame 3                3* 3  3  3  5* 5  5  5  4* 4
Frame 4                   4* 4  4  4  4  4  3* 3  3
(* = page fault)
Analysis: 12 page references, 8 page faults, 4 page replacements. Page faults per number of
frames = 8/4 = 2
One possible implementation (not necessarily the best):
o Every frame has a time field; every time a page is referenced, copy the current time into
its frame’s time field
o When a page needs to be replaced, look at the time stamps to find the oldest
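One compact way to realize this recency bookkeeping is an ordered dictionary instead of explicit per-frame time fields; a sketch:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """LRU sketch using an OrderedDict as a recency list rather than
    per-frame time stamps: the most recently used page sits at the end."""
    frames, faults = OrderedDict(), 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)       # freshen its "time stamp"
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)  # evict the least recently used
        frames[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))  # 8 page faults
```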
Thrashing
• If a process does not have “enough” pages, the page-fault rate is very high
– low CPU utilization
– OS thinks it needs increased multiprogramming
– adds another process to system
• Thrashing is when a process is busy swapping pages in and out
• Thrashing results in severe performance problems. Consider the following scenario, which is
based on the actual behaviour of early paging systems. The operating system monitors CPU
utilization. If CPU utilization is too low, we increase the degree of multiprogramming by
introducing a new process to the system. A global page replacement algorithm is used; it
replaces pages with no regard to the process to which they belong. Now suppose that a
process enters a new phase in its execution and needs more frames.
Tree: In this organization, a file consists of a tree of records of varying lengths. Each record
contains a key field. The tree is sorted on the key field to allow rapid searching for a
particular key.
Access methods: Basically, access method is divided into 2 types:
Sequential access: It is the simplest access method. Information in the file is processed in
order, i.e. one record after another. A process can read all the data in a file in order, starting
from the beginning, but cannot skip ahead and read arbitrarily from any location. Sequential
files can be rewound. This method was convenient when the storage medium was magnetic
tape rather than disk.
Direct access: A file is made up of fixed-length logical records that allow programs to read
and write records rapidly in no particular order. This method can be used when disks are
used for storing files, and it is used in many applications, e.g. database systems. If an
airline customer wants to reserve a seat on a particular flight, the reservation program must
be able to access the record for that flight directly, without reading the records before it. In a
direct-access file, there is no restriction on the order of reading or writing. For example, we
can read block 14, then read block 50 and then write block 7, etc. Direct-access files are very
useful for immediate access to large amounts of information.
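The contrast between the two methods can be sketched with ordinary file seeks; the record length and file contents below are made-up:

```python
import os
import tempfile

RECORD = 32  # fixed logical record length in bytes (assumed for the sketch)

# Build a toy file of 100 fixed-length flight records.
path = os.path.join(tempfile.mkdtemp(), "flights.dat")
with open(path, "wb") as f:
    for i in range(100):
        f.write(f"flight-{i:03d}".ljust(RECORD).encode())

with open(path, "rb") as f:
    # Sequential access: records are read in order from the beginning.
    first = f.read(RECORD)
    # Direct access: seek straight to record 50 without reading 1..49.
    f.seek(50 * RECORD)
    fifty = f.read(RECORD)

print(first.decode().strip())  # flight-000
print(fifty.decode().strip())  # flight-050
```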