OS Notes
(Cover figure: User -> Shell -> Kernel -> Hardware.)
Dedicated To,
ACKNOWLEDGEMENT
This study of the Operating System is an outcome of the encouragement, guidance, help and assistance provided to me by my colleagues, senior faculty, friends and my family members. I take this opportunity to express my deep sense of gratitude to everybody whose role was crucial in the successful completion of this study report, especially my senior students.
I owe special thanks to Prof. Dr. B. G. Prasad, whose permission enabled me to submit this study to the premier educational organization whose advanced IT curriculum provides a platform for students to make their dreams come true.
My primary goal is to provide a sufficient introduction to and details of the Operating System so that students can gain an efficient knowledge of it. The guide presupposes knowledge of basic principles and concepts. While reading it, students may find the going somewhat tough in spite of my treating the topics in depth but in a simple manner. In that case, reread the guide; if that still does not do the trick, only then may you need the help of an expert. Any remaining errors and inaccuracies are my responsibility, and any suggestions in this regard are warmly welcomed!
Last but not least, I pay my sincere regards to my mother and my daughter, who have ever been obliging in bearing with me from time to time.
PREFACE
Operating systems are an important aspect of any computer system; likewise, the study of OS is an essential part of any computer-science education, and of course of the BCA (Voc.) of Patna University, Patna, Bihar, India. This study is intended as a text for an introductory OS course for junior (1st-year) and senior (2nd- and 3rd-year) students. It explains fundamental concepts of OS that are applicable to various brands and versions of operating systems.
I have attempted to wash out every error in this first edition of the guide after review, yet I understand that bugs shall remain, and I shall warmly welcome suggestions from students that may lead to improvements in the next edition in the near future. I would be glad to hear suggestions from you. Any correspondence should be sent to:
CONTENTS
Unit Topic Pg No
1 Acknowledgement 05
2 Preface 06
3 Introduction to various categories of software 07
4 Introduction to Operating System 07
Need For Operating System
Linker, Loader, Macro And Utilities
Interaction of OS with Hardware and Users.
Historical Prospects of Operating System
Types of Operating System
Functions of Operating System
Components of Operating System
Services of Operating System
5 Memory Management 12
Hierarchy of storage
The OS and Memory management
Allocation & Deallocation of memory space
First, Best and Worst Fit
Program Execution Life Cycle-IBM Based PC
Brief Overview of Registers & Data Transfer
Memory Management strategies
Contiguous Memory Management
Memory Interference
Memory Fragmentation
Internal Fragmentation
External Fragmentation
Differences b/w Internal & External Fragmentation
Memory Compaction
Partitioned Memory Management
Fixed Partitioned Management
Variable Partitioned Management
Paged Memory Management
Virtual Memory
Advantages & Disadvantages of Virtual Memory.
Paging & Demand Paging
Logical, Physical & Virtual Address
Conversion of Logical to Physical Address
Page Thrashing
Page Replacement Algorithm
Least Recently Used
First In First Out
Cache Memory.
Type of Cache Memory
S/W & H/W Implemented
SPOOLING & Buffer
Types of Buffering - Anticipatory & Demand
Buffer & Cache Memory
Function Of Cache Memory.
Hit Ratio & Locality of Reference
Security of Main Memory
Swapping
6 Input/Output Device Management 30
Input/Output Organization
Bus & types of Buses
System Vs I/O Bus
Shared Vs Multipoint Bus
Devices-Block & Character Devices
Synchronous & Asynchronous Devices
Data Transfer Methods
Software Polled
Interrupts Driven
Types Of Interrupts
Trap & Its Function
Interrupt-Maskable & Non-Maskable
Operation of Interrupt
ISR & IVT
Interrupts Vs. Traps
Exceptions
DMAC
Device Controller & device Driver
Types of DMA Transfer
DMAC Architecture & Its Functions
7 CPU Management 40
Introduction to Process
Types of Process
I/O Vs. CPU Bound Process
State of Process & Process Control Block Register
Introduction to Kernel & System Calls
Mode of CPU Operation & Process Status Word Register
Privilege & User Modes
Process Communication
Critical Section Problem & It’s Solutions
Process serialization
Process Synchronization
Semaphore & It’s Types
Monitors
Process Creation
Process Flow Graph
CoBegin-CoEnd Construction
Multitasking & Timesharing
Types of Multitasking
Multitasking Vs Multiprogramming
Time quantum
Timer Interrupt
Features of the OS, Needed for Multiprogramming
Requirements of Multitasking OS
Process Status Register & Process Descriptor
The CPU Scheduling
Components of CPU Scheduling
Dispatcher
Scheduler
Scheduling Algorithm & Evaluation
Types of Scheduling Queue
I/O, Process Waiting & Process Ready Queue
Decision Mode
Preemptive & Non-Preemptive
Priority Function
Arbitration Rule
Types of CPU Scheduling Algorithms
First Come First Serve
Shortest Job First
Priority-Based
Round robin Scheduling
Single Level Process waiting Queue
Multilevel Feedback Queue
Selection Criteria for the CPU scheduling Algorithm
Total & Average Waiting Time of different algorithms
Types of Scheduler
Short-Term, Mid-Term & Long-Term Scheduler
Long-Term Vs Short Term scheduler
8 Disk/File Management 61
Functions of OS
File-System
Directory Structure
Single, Double, Tree & Acyclic Structure
Disk Scheduling Algorithms
First Come First serve
Shortest Seek Time First
Scan / Elevator
File System Concerns
Read-Ahead and Read-Behind
9 Protection & Security of Resources 67
Type of Resources
Logical & Physical Resources
Protection
Deadlock
Deadlock Environment
Conditions for a Deadlock situation
Mutual Exclusion, Hold & Wait, No Preemption, Circular wait
Prevention from Deadlock
Recovery from Deadlock
Security
10 Distributed System 71
Loosely Coupled System
Networking
Types of Distributed System
Introduction to various categories of Software
SOFTWARE
|-- System Software
|     |-- Operating System
|     |-- Translating Programs: Assembler, Interpreter, Compiler, Special Compiler
|     `-- Utilities
`-- Application Software
      |-- Standard Software
      `-- Customized Software
(In the original figure, the tree is flanked by HARDWARE on one side and HUMANWARE on the other.)
The Operating System is the logical set of programs that efficiently manages the overall operations of the computer's resources/hardware and provides an interface for the user(s) to interact with the computer with a minimum amount of intervention.
User A, User B, User C, ... User N
Operating System
Hardware
Abstract view of a system's components.
Interaction of OS with Hardware and User.
Need for an Operating System:
1. User Interaction (bridging the gap between the user and the hardware).
(Figure: Memory – CPU – Communication Subsystem.)
5. Security & Protection – Main memory is divided into a number of blocks, and every block has a memory Protection Key (PK). A PK is a small integer value of only three bits, like 000, 001, 010, etc. A PK is assigned by the kernel2 when a block is allotted to a program, e.g. 001 for Read, 010 for Write, 100 for Execute. The OS maintains the PKs3 of all allotted blocks in a Protection Key Area inside main memory. Hence a block can only be accessed by any instruction/program according to its assigned PK. For example, a block containing a constant of a program has the protection key value 001, so it cannot be modified by any other instruction of the program. Whenever any instruction of a program needs to access a block, the block can only be accessed according to its protection key as maintained inside the PKA4. (A toy sketch of this check follows.)
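A toy sketch of the protection-key check described above; the 3-bit key values (001 read, 010 write, 100 execute) follow the text, while the block count and key assignments are illustrative assumptions:

/* Toy protection-key check; keys follow the text: 001 read, 010 write, 100 execute. */
#include <stdio.h>

#define PK_READ  01   /* 001 */
#define PK_WRITE 02   /* 010 */
#define PK_EXEC  04   /* 100 */

/* Protection Key Area: one key per memory block (illustrative, 8 blocks). */
static unsigned char pka[8] = {PK_READ, PK_READ | PK_WRITE, PK_EXEC, 0};

/* The kernel grants an access only if the block's key allows it. */
int access_block(int block, unsigned char wanted) {
    if ((pka[block] & wanted) == wanted)
        return 0;                       /* access permitted */
    return -1;                          /* protection violation: trap */
}

int main(void) {
    printf("%d\n", access_block(0, PK_READ));   /*  0: block 0 is readable  */
    printf("%d\n", access_block(0, PK_WRITE));  /* -1: block 0 is read-only */
    return 0;
}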
1 A part of the computer system (S/W & H/W) that directly communicates with the user, e.g. printer, hard disk, file, cache, I/O devices. (Main memory is not counted as a resource here.)
2 A program running at all times during the session.
3 Protection Key.
4 Protection Key Area.
6. Low-Level Activity Controller – Linker / Loader.
Linker – Collects all the translated programs/functions (in object code) and links them together properly for execution as one unit, called the Absolute Load Module.
Loader – Loads the Absolute Load Module from the hard disk into main memory for further execution.
Address binding of instructions and data to memory addresses can happen at three different
stages.
Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
Load time: Relocatable code must be generated if the memory location is not known at compile time.
Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This needs hardware support for address maps (e.g. base and limit registers).
Dynamic Loading
Routine is not loaded until it is called.
Better memory-space utilization; unused routine is never loaded.
Useful when large amounts of code are needed to handle infrequently occurring cases.
No special support from the operating system is required; implemented through program
design.
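As a concrete illustration of dynamic loading, POSIX systems expose it through the dlopen()/dlsym() interface. In this sketch the shared library libstats.so and its function mean() are hypothetical names; link with -ldl on Linux:

/* Minimal dynamic-loading sketch: the routine is not brought into memory
 * until we explicitly ask for it. "libstats.so" and "mean" are hypothetical. */
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    void *lib = dlopen("./libstats.so", RTLD_LAZY);   /* load on demand */
    if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Locate the routine by name and call it through a function pointer. */
    double (*mean)(const double *, int) =
        (double (*)(const double *, int))dlsym(lib, "mean");
    if (mean) {
        double v[] = {1.0, 2.0, 3.0};
        printf("mean = %f\n", mean(v, 3));
    }
    dlclose(lib);                                     /* unload when done */
    return 0;
}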
Dynamic Linking
Linking postponed until execution time.
Small piece of code, stub, used to locate the appropriate memory-resident library routine.
Stub replaces itself with the address of the routine, and executes the routine.
The operating system is needed to check whether the routine is already in another process's memory space.
There is a macro processor (provided by the OS) that is responsible for replacing every occurrence of a defined macro name with the defined instructions. Whenever the assembler/compiler encounters a macro definition, it immediately stores the definition in a table for later statement substitution in the program.
Functions of Operating System
The functions of an operating system can be classified into two categories: resource sharing and resource utilization.
Resource Sharing – Managing the computer's resources among programs and users:
  Processor management
  Memory management
  Storage management
  Device management
Resource Utilization – Providing the applications with an easy/abstract way to use the hardware resources of the computer, without the applications having to know all of those details:
  Application interface
  User interface
The user interface provides a link between the user and the computer, which is one of the main functions of an OS.
Components Of Operating System
The major components5 of an OS are as follows:
a) Storage Manager – Controls the utilization of main memory by different users' programs. Its specific tasks are:
1) Allocation/deallocation of memory to programs.
2) Protection of program areas in main memory from unauthorized access.
3) Organizing memory usage by means of swapping and demand paging.
4) Monitoring the memory blocks.
b) Process Manager – Keeps track of each program regarding:
a) the process state;
b) the priority associated with the process;
c) and controls the job-scheduling techniques.
c) File Manager – A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data. The operating system is responsible for the following activities in connection with file management:
– File creation and deletion.
– Directory creation and deletion.
– Support of primitives for manipulating files and directories.
– Mapping files onto secondary storage.
– File backup on stable (nonvolatile) storage media.
d) I/O Control System – The logical IOCS provides the access method and data organization for the data stored in files. The physical IOCS is responsible for carrying out device-level I/O operations to implement data transfer, and it optimizes device performance by scheduling the jobs waiting for a device.
e) Console Manager – Responsible for changing the operating mode, i.e. from supervisor to user mode and vice versa.
5 Component design varies from OS to OS.
Services of Operating Systems
Program execution – the system's capability to load a program into memory and to run it.
(Figure: basic instruction cycle – a process from the waiting/input queue has its instruction loaded into memory (RAM), the instruction is decoded, intermediary or final results are stored in a temp register, and output is produced.)
I/O Operations – since user programs cannot execute I/O operations directly, the operating system must
provide some means to perform required I/O operations.
File-System Manipulation – program capability to read, write, create, and delete files.
Communication – exchange of information among the processes executing either on the same system or on
different systems tied together by a network. It is implemented via shared memory or message passing.
Error Detection – ensures correct computing by detecting errors in the CPU, memory hardware, I/O devices, or
user programs.
Resource allocation – allocates available resources among multiple users or multiple jobs running at the same
time.
Accounting – keeps track/record of users' activities, i.e. which user uses how much of what kinds of computer resources, for account billing or for accumulating usage statistics.
Note: The functions of operating system are classified into Resource Sharing & Resource Utilization.
Memory Management
Registers
Cache
Main Memory
HD / FD / CD / Magnetic Tape / Electronic Disk (DRAM)
The Storage Hierarchy (fastest and smallest at the top).
The main memory is a vital resource of the computer system and needs to be managed effectively, owing to its limited availability and high cost, against the demand for larger main memory for the smooth functioning of large programs. Hence the OS has memory management as one of its prime functions, which includes:
1. Allocation of space in main memory for the processes kept in the process waiting/input queue for further processing. The OS decides how to satisfy a request by a process (of size n) from the list of available free blocks. The algorithms for memory allocation are as follows:
a) First Fit – The OS chooses the first available block from the free block list6 that is capable (equal or larger) of storing/allocating a process from the process waiting/input queue7. The scan for a suitable block (capable of storing the process of the requested size) begins from the beginning of the list and continues until such a block is found. First Fit may leave a large memory gap if the size of the program is much less than the allocated block.
Example – Process waiting/input queue: P8, P65, P20, P60, P40. Free block list (unsorted): A=10, B=60, C=80, D=30, E=70. Blocks A, B and D are already allocated; hence C (80) is allocated to process P40, although the most suitable candidate would be E (70).
When an N-word block is needed, the free-space list is scanned for the first block that holds N or more words; it is then split into an N-word block, and the remainder is returned to the free-space list.
b) Best Fit – The OS chooses the smallest block from the (ordered) free block list that is capable of storing/allocating the process from the process waiting/input queue. The scan for the smallest capable block begins from the beginning of the list and continues until the block is found. Best Fit leads to small memory gaps, resulting in minimum internal fragmentation.
Example – Process waiting/input queue: P8, P65, P20, P60, P40. Free block list (sorted): A=18, B=38, C=48, D=68, E=78. Since the list is sorted, the smallest block capable of storing process P40 is C (48).
When an N-word block is needed, the free-space list is scanned for the block with the minimum number of words greater than or equal to N. The block is allocated as a unit if it has exactly N words, or else it is split and the remainder returned to the free-space list.
6 Contains the set of free blocks that may be allocated against requests made by processes; it may be sorted/ordered or unsorted/unordered.
7 The collection of processes on disk that are waiting to be brought into memory for execution.
c) Worst Fit – The OS chooses the largest available block from the (ordered) free block list that is capable of holding the process from the process waiting/input queue. The scan for the largest block begins from the beginning of the list and continues until it is found. Worst Fit may leave the largest memory gap if the size of the program is much less than the allocated block, and it may lead to external fragmentation.
Example – Process waiting/input queue: P8, P65, P20, P60, P40. Free block list (sorted): A=18, B=38, C=48, D=68, E=78. Since the list is sorted, the largest block is E (78); hence it is selected for process P40.
Note: Best and Worst Fit are better than First Fit in terms of decreasing both (a) the time to search for a block in the list (on average8 (N+1)/2, (N+1)/2 and N comparisons respectively, where N is the total number of available blocks) and (b) wasted storage space. A selection sketch for the three strategies follows.
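A minimal sketch, under the assumption of the sorted block sizes used in the examples above, of how the three strategies pick a block (the helper names are invented for illustration):

/* First, Best and Worst Fit selection over a free-block list. */
#include <stdio.h>

#define NBLOCKS 5

int first_fit(const int b[], int n, int need) {
    for (int i = 0; i < n; i++)
        if (b[i] >= need) return i;              /* first block big enough */
    return -1;
}

int best_fit(const int b[], int n, int need) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (b[i] >= need && (best < 0 || b[i] < b[best]))
            best = i;                            /* smallest block big enough */
    return best;
}

int worst_fit(const int b[], int n, int need) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (b[i] >= need && (worst < 0 || b[i] > b[worst]))
            worst = i;                           /* largest block big enough */
    return worst;
}

int main(void) {
    int blocks[NBLOCKS] = {18, 38, 48, 68, 78};  /* blocks A..E from the text */
    printf("first=%d best=%d worst=%d\n",
           first_fit(blocks, NBLOCKS, 40),
           best_fit(blocks, NBLOCKS, 40),
           worst_fit(blocks, NBLOCKS, 40));      /* first=2 best=2 worst=4 (C, C, E) */
    return 0;
}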
2. Deallocation of space after a process that was currently running completes its execution, so the same memory can be allocated to other processes (kept in the process waiting/input queue). The term liberation is used for the process of freeing an allocated block of storage, using a liberation algorithm.
3. Provision of virtual memory.
4. Linking (creating the Absolute Load Module) and loading (loading the Absolute Load Module) of a process.
(Figure: the loader produces the binary image of a process; dynamically loaded system libraries are attached through dynamic binding.)
8 Linear search: (N+1)/2 on average; binary search requires log2 N.
6. Protection of memory locations containing system and user programs from interference by other programs.
Register – A high-speed storage device in the CPU, made of flip-flops, that is used to store data temporarily. The types of registers vary from processor to processor, but two kinds are most frequently used:
a. General purpose
b. Special purpose
MAR – Contains the address of an operand.
MDR – Contains the content/value of an operand.
Temporary Register – For temporary tasks.
Decoder – Decoding is the process of extracting the address of the operands in memory and the operation to be performed on the operands; it is controlled by the instruction decoder.
Data Transfer – Transferring data from one register to another is performed over a single data bus that may be shared by several registers; this arrangement is known as GATING. Whenever a source register sends data, gating signals on the common bus activate a register for input from the bus.
Memory Management Strategies
In contiguous memory management, the instructions and data of a program are stored contiguously/sequentially in main memory.
1. Single Partition MMS – One partition for the OS and the other partition for the user's program. There is no concept of memory fragmentation or memory interference10 here.
User Program Area
Operating System
2. Fixed Equal Multiple11 Partition MMS – In this scheme the OS occupies low memory, and the rest of the memory, available for users' programs, is divided into fixed partitions of equal size, which varies from OS to OS. E.g. size of main memory = 6 MB; 1 MB is occupied by the OS, and the remaining 5 MB is divided into five partitions of 1 MB each (5 x 1 MB).
Job12  Size of Job  Partition ID  Size of Partition      Allotment       Unused Space
OS     1024 KB      P1            1 MB = 0000-1024 KB    OS              0000 KB
A      0800 KB      P2            1 MB = 1024-2048 KB    A               0224 KB
B      1000 KB      P3            1 MB = 2048-3072 KB    B               0024 KB
C      1024 KB      P4            1 MB = 3072-4096 KB    C               0000 KB
D      1200 KB      P5            1 MB = 4096-5120 KB    External Frag.  1024 KB
E      0624 KB      P6            1 MB = 5120-6144 KB    E               0400 KB
Assuming the size of memory is 6 MB = 6144 KB.
Total internal fragmentation = 0 + 224 + 24 + 0 + 400 = 648 KB.
The wastage of the entire partition P5 (1024 KB), because no remaining process fits in it, is known as external fragmentation (= 1024 KB).
External Fragmentation – If an entire partition is wasted because the jobs are larger than the size of the partition, it is said to be external fragmentation. Therefore the total external fragmentation = 1024 KB.
Internal Fragmentation – The unused space of the partitions that are occupied by jobs is known as internal fragmentation: some memory in an allotted partition may remain unused, causing internal fragmentation. Many small multiprogramming OSs use this approach because of its simplicity, as OS overheads are low: no memory-allotment actions are necessary during system operation. Therefore the total internal fragmentation = 0 + 224 + 24 + 0 + 400 = 648 KB (a computational sketch follows).
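A small sketch, using the illustrative job and partition sizes from the table above, that recomputes the two fragmentation totals:

/* Internal and external fragmentation for fixed equal partitions of 1024 KB. */
#include <stdio.h>

int main(void) {
    int part_size = 1024;                              /* each partition is 1 MB */
    int jobs[] = {1024, 800, 1000, 1024, 1200, 624};   /* OS, A, B, C, D, E */
    int njobs = 6, internal = 0, external = 0;

    for (int i = 0; i < njobs; i++) {
        if (jobs[i] <= part_size)
            internal += part_size - jobs[i];  /* unused space inside partition */
        else
            external += part_size;            /* job too big: whole partition wasted */
    }
    printf("internal = %d KB, external = %d KB\n", internal, external);
    /* prints internal = 648 KB, external = 1024 KB */
    return 0;
}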
3. Fixed Variable Partition MMS – In this scheme the OS occupies low memory, and the rest of the memory, available for users' programs, is divided into fixed partitions of unequal sizes (capable of storing the biggest step of a program).
Job  Size of Job  Partition ID  Size of Partition         Allotment       Unused Space
OS   1024 KB      P1            1024 KB = 0000-1024 KB    OS              0000 KB
A    0590 KB      P2            0700 KB = 1024-1724 KB    B               0110 KB
B    0550 KB      P3            0600 KB = 1724-2324 KB    E               0050 KB
C    2000 KB      P4            1700 KB = 2324-4044 KB    External Frag.  1700 KB
D    1200 KB      P5            1300 KB = 4044-5344 KB    A               0100 KB
E    0900 KB      P6            0820 KB = 5344-6144 KB    External Frag.  0820 KB
Assuming the size of memory is 6 MB = 6144 KB. The arrival order of the jobs is A -> B -> C -> D -> E.
10 The situation when an instruction or data of one program attempts to occupy space in main memory that is already occupied by another program's instructions or data.
11 A type of partitioned MMS.
12 A set of computational steps packaged to run as a unit.
As per First Fit, allotting job A (590 KB) to partition P3 would be more efficient (wasting less space), but it is allotted to P2, where the wasted space is internal fragmentation. The scheduling of jobs from the ready queue is based upon their arrival times in the ready queue.
Total internal fragmentation = 0 + 110 + 50 + 100 + 20 = 280 KB.
The wastage of the entire partitions P4 and P6 (1700 + 820 = 2520 KB), because no process fits in them, is external fragmentation = 2520 KB.
4. Dynamic Partition Memory Management – In this scheme the OS occupies low memory, and the rest of the memory available for user programs is called the big hole. Partitions are created dynamically, so that each process is loaded into a partition of exactly the same size as that process. A partition is created as per the size of the process to be loaded, subject to the available memory; otherwise the process has to wait until a big enough partition becomes available or is deallocated (by the compaction process) from some other process, which may increase the throughput time of the process. This scheme never raises the situation of internal fragmentation, but it suffers from external fragmentation.
To keep a record of allocated and deallocated memory partitions, the OS tracks the memory partitions in a table known as the Partition Table:
Partition ID  Size
P1            1000 KB
P2            1460 KB
Pn            0456 KB
13 Please see the notes on Virtual Memory.
b) External Fragmentation – Whenever a job claims a memory block/partition of either fixed or variable size for allocation and does not get the claimed partition (in the case Partition Size < Job Size), the entire unused memory space of that partition is known as external fragmentation. It is generally found in fixed equal/variable/dynamic multiple-partition MMS. Implementing virtual memory, memory compaction or a paging scheme can handle it.
14 Please see the notes on Fixed Partition Memory.
15 Please see the notes on Variable Partition Memory.
16 Please see the notes on Virtual Memory.
Internal Fragmentation vs. External Fragmentation
1. Internal: free space wasted within an allocated partition/block of main memory. External: wastage of an entire partition in main memory, or availability of many small non-contiguous free blocks.
2. Internal: more complicated and dangerous; causes poor performance of process execution and of the system. External: less complicated and dangerous compared to internal fragmentation.
3. Internal: generally caused by internal resources, i.e. the process currently loaded in memory, and by the paging (page-break) scheme. External: generally caused by external resources, i.e. a program expected to be loaded next that requires more memory than the size of the partition, or by segmentation, where the wastage of an entire partition is external fragmentation.
4. Internal: generally found in fixed equal/variable multiple-partition MMS. External: generally found in fixed equal/variable/dynamic multiple-partition MMS.
5. Internal: washed out by implementing a segmentation scheme, memory compaction and dynamic-partition MMS. External: washed out by implementing virtual memory, memory compaction and a paging scheme.
Memory Compaction/Relocation is the technique of collecting all the free spaces of internal and external fragmentation together into one block, to obtain a larger block as a unit. The mechanism involved is to shift processes so as to make a larger contiguous room, allowing a larger program from the waiting queue to be loaded as a unit instead of in fragments. It is also known as memory relocation, where the logical address17 is relocated dynamically at execution time.
The process of compaction can act upon fixed and variable partition MMS.
Example (fixed variable partition memory management):
Job  Size of Job  Partition ID  Size of Partition         Allotment         Unused Space
OS   1024 KB      P1            1024 KB = 0000-1024 KB    OS                0000 KB
A    0550 KB      P2            0700 KB = 1024-1724 KB    A                 0150 KB
B    0590 KB      P3            0600 KB = 1724-2324 KB    B                 0010 KB
C    1000 KB      P4            1200 KB = 2324-3524 KB    External Frag.18  1200 KB
D    0600 KB      P5            1000 KB = 3524-4524 KB    D                 0400 KB
E    1450 KB      P6            1600 KB = 4524-6144 KB    E                 0150 KB
Assuming the size of memory is 6 MB = 6144 KB.
Total internal fragmentation = 0 + 150 + 10 + 400 + 150 = 710 KB.
Total external fragmentation (due to job C) = 1200 KB.
The internally and externally fragmented memory space can be collected together by relocating the boundary of each partition; the memory is then said to be compacted, giving a partition of 1910 KB, which is enough for the allocation of memory for job C. The compaction process is an additional overhead upon the OS.
In non-contiguous memory management, the instructions and data of a program are stored scattered (non-contiguously) in main memory.
a) Paged Memory Management:
Virtual Memory – Whenever a large program does not fit inside any one partition of main memory, the program/user job is divided into fixed-size blocks known as overlays/pages and stored on the hard disk.
Job  Size of Job  Partition ID  Size of Partition       Allotment
OS   1024 KB      P1            0000-1024 = 1024 KB     OS
A    1163 KB      P2            Free Area               Unable
B    1050 KB      P3            Free Area               Unable
17 Also known as the virtual address.
18 Fragmentation.
Assuming the size of memory is only 2 MB = 2048 KB, both job A and job B are unable to be allocated using any contiguous memory-allocation strategy.
(Figure: virtual memory on the hard disk, divided into blocks at addresses 0, 4, 8, ..., 4096, mapped into main memory. Assumption: the capacity of each block/page is only 4 KB and the capacity of main memory is 4 MB.)
As per requirements, a particular overlay19/page is brought into main memory for execution under the control of the OS. This method of handling overlays is called Virtual Memory. Virtual memory guarantees to rule out the problem of external fragmentation, but it is unable to rule out internal fragmentation. Virtual memory can be implemented by demand paging or by demand segmentation, where the segments consist of pages/overlays.
E.g. suppose the size of a page (P1) of program Pr is 4100 B and the size of the respective frame (Frx) inside main memory is 4000 B. The remaining 100 B crosses the boundary of frame Frx and is placed into another frame (Fry), leaving 3900 B as an internal fragment. More programs can be executed at the same time, increasing CPU utilization and throughput, and less I/O is required than swapping each process into memory entirely, which improves execution performance.
Advantages: less requirement of physical memory; supports time-sharing systems; does not suffer from external fragmentation; supports virtual memory; sharing of common code (shared pages) between two or more processes is possible.
Disadvantages: no improvement in the response or turnaround time; additional load on the OS to manage the virtual memory.
Paging20 – The technique that automatically does memory management (allocation) of overlays/pages from virtual memory (secondary memory) into arbitrary (random) places in main memory, as per the memory words available in the system. (A word-addressable machine of 16 bits can have altogether 2^16 = 65536 locations.)
Word Address   Byte Addresses
0              0 1 2 3
4              4 5 6 7
8              8 9 10 11
12             12 13 14 15
Word addresses are typically 0, 4, 8, ...; addresses such as 2, 5, 7 are rarely used by machines.
Page – In the virtual-storage technique, the logical address space is divided into fixed-size blocks; each block is used as a page.
Paging is an efficient MMS using a non-contiguous memory-allocation method. The basic idea behind paging is that physical memory is divided into fixed-size blocks called frames, where the size of a page in virtual memory is the same as the size21 of a frame in main memory. The OS maintains a data structure, the Page Table22, for the mapping purpose. The page table specifies useful information: which frame is allocated, which frame is available, how many total frames there are, and so on.
19 The technique of repeatedly using the same area of internal storage during different stages of a program.
20 A high paging rate may cause a high I/O rate.
21 Generally the frame and page size is 4 KB, but it may vary from OS to OS.
22 Each OS has its own method of storing page tables; most OSs allocate a page table for each process.
Logical memory for Job 1: Page 0, Page 1, Page 2, Page 3.
Page Map Table for Job 1 (page number -> frame number): 0 -> 3, 1 -> 6, 2 -> 2, 3 -> 1.
Main memory frames: 0 = OS; 1 = Page 3 (J1); 2 = Page 2 (J1); 3 = Page 0 (J1); 4 and 5 free; 6 = Page 1 (J1).
Logical Address – Generated by the CPU; the user program can deal only with logical addresses. It consists of a page/segment number along with an offset value/displacement.
Logical Address Space – the set of all logical addresses.
Physical Address – A memory location available in main memory (e.g. 0 to 65535 for a 16-bit word-addressable machine) is known as a physical address. It is kept in the memory address register and seen by the memory unit.
Physical Address Space – the set of all physical addresses.
Q. Explain paged memory management & the concept of virtual memory?
Segment – A logical grouping of instructions, such as a subroutine, array or data area, of which every program consists. Dividing a program into such segments and managing them is known as segmentation. Each segment has a name and a length (of fixed or variable size).
Address of a location = segment number + offset within the segment.
The segment map table is used for address relocation (a translation sketch follows the footnote below):
Seg No.  Base Address  Limit23
0        1500          1000
1        0900          150
2        3000          400
3        4200          550
Segment Map Table
23 The limits need not be the same.
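A minimal sketch, using the illustrative segment map table above, of logical-to-physical translation with a limit check (the function names are invented):

/* Segment translation: physical address = base + offset, if within the limit. */
#include <stdio.h>

struct segment { int base; int limit; };

int translate(const struct segment smt[], int seg, int offset) {
    if (offset >= smt[seg].limit)      /* offset must be within the segment */
        return -1;                     /* addressing error: trap to the OS  */
    return smt[seg].base + offset;     /* physical address = base + offset  */
}

int main(void) {
    struct segment smt[] = {{1500, 1000}, {900, 150}, {3000, 400}, {4200, 550}};
    printf("%d\n", translate(smt, 2, 53));   /* 3000 + 53 = 3053 */
    printf("%d\n", translate(smt, 1, 200));  /* -1: beyond segment 1's limit */
    return 0;
}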
Q. A 2500-byte-long segment begins at physical address 9010. Assuming that a location can hold one byte, what will be the physical addresses of byte 187 and byte 3004 of a file named seg.txt?
Given that the segment size is 2500 bytes:
Segment 1 begins at 9010. Physical address of byte 187 = 9010 + 187 = location 9197, in the same segment.
Segment 2 begins at 9010 + 2500 = 11510. Physical address of byte 3004 = 9010 + 3004 = location 12014, which falls in the next segment.
(Figure: dynamic relocation – the CPU's limit register and base register mediate between the process waiting/input queue and RAM.)
24 Memory is divided into a number of blocks of similar capacity known as segments, and the process is known as segmentation. Segmentation is the memory-management scheme that supports the user's view of memory: a program is a collection of segments, where a segment is a logical unit such as a main program, procedure, function, local variables, global variables, common block, stack, symbol table or arrays.
b) Limit Register – Contains the length of every segment, i.e. how long a segment is; the logical address (offset) must be less than the length of the respective segment.
Hence each segment in main memory has corresponding base and limit register entries, kept in the segment register/table, specifying the starting address and length of each segment of main memory.
Every page of the virtual memory has a logical address that consists of two parts:
a) 20-bit segment number – Contains the position of the segment (say 1, 2, 3, 4, ...) inside main memory where the corresponding page is expected to be loaded from virtual memory; it is used as an index into the segment table.
b) 12-bit offset – Contains the position of the page inside a particular segment of main memory, which can be at most 2^12 for a 32-bit addressable machine.
Whenever a page is loaded from virtual memory into a particular segment of main memory, the contents of the concerned segment's base register are added to the offset of the page to obtain the physical address. Hence a segment can have several disk pages, which are differentiated, in the matter of their physical addresses, by using the offset and the limit register of a program.
Q. Assuming the size of the logical address space to be 2^32 and the page size to be 2^12, find the bits required for the page number and the page offset in the logical address.
Logical address space = 2^m = 2^32, hence m = 32; page size = 2^n = 2^12, hence n = 12.
Number of bits for the page number = high-order bits = m - n = 32 - 12 = 20 bits.
Number of bits for the page offset = low-order bits = n = 12 bits.
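A small sketch of splitting a 32-bit logical address into the 20-bit page number and 12-bit offset computed above; the example address is arbitrary:

/* Split a logical address into page number (high 20 bits) and offset (low 12 bits). */
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 12
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)   /* low 12 bits */

int main(void) {
    uint32_t logical = 0x00ABC123;              /* an arbitrary example address */
    uint32_t page    = logical >> OFFSET_BITS;  /* high 20 bits: page number */
    uint32_t offset  = logical & OFFSET_MASK;   /* low 12 bits: page offset  */
    printf("page=0x%05X offset=0x%03X\n",
           (unsigned)page, (unsigned)offset);   /* page=0x00ABC offset=0x123 */
    return 0;
}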
Virtual Address – The compile-time and load-time address-binding methods generate identical logical and physical addresses, whereas the execution-time address-binding scheme generates differing logical and physical addresses. In such a situation the logical address is called a virtual address.
Demand Paging – Whenever a referenced page is not available in main memory, a Page Fault occurs; the OS treats it by reading the required page from secondary storage, mapping the memory and making entries in the page table, and then repeating the instruction that caused the page fault. This method of handling main memory in such a situation is known as demand paging. E.g. when the execution of a program begins, a page fault occurs, since the required page is not yet available in memory; the OS demands and loads the required page into main memory, replacing an existing page if necessary by using a specific page replacement algorithm. The OS performs the task of demand paging using its Page Fault Handler program.
Belady's Anomaly – Belady devised page-reference sequences (e.g. 1,2,3,4, 1,2,5,1, 2,3,4,5, ...) for which the page-fault rate increases as the number of frames increases. Belady's anomaly occurs under the FIFO page-replacement policy.
Q. Explain with diagram segmentation & demand paging?
(Figure: swapping – process Px is swapped out of the user area of RAM to the backing store, and process Py is swapped in, under the control of the operating system and the process waiting/input queue.)
Swapping requires a backing store that must be large enough to accommodate copies of all memory images for all users, and it must provide direct access to these memory images.
Q. Explain the swapping management scheme?
Page Thrashing25 – Physical memory is divided into a number of fixed-size blocks called frames, while logical memory is likewise divided into a number of blocks of the same size, known as pages. The backing store is also divided into fixed-size blocks of the same size as the memory frames. A poor paging algorithm may cause thrashing.
Thrashing is a situation in which two (or more) processes, A and B, are being executed concurrently (multiprogramming). If the number of frames required by process A exceeds the number of available free frames in memory, A encounters a page fault and swaps out an occupied page of the other process B to make room for its own. The context then switches to process B, which also encounters a page fault and swaps out a page of process A from its occupied frames.
Because page faults are encountered again and again at very short intervals, the effective level of multiprogramming decreases. Such high and frequent occurrence of page faults requires more time for swap-out and swap-in operations than for the execution of the processes; this causes thrashing.
Example – total available frames = 6:
1. Initially, process B occupies 5 frames.
2. Process A requires 2 frames, but only one frame is available; hence process A incurs a page fault.
3. One frame of process B is swapped out.
4. Process A gets loaded and executed.
5. The context switches to process B, which encounters a page fault; a page of process B is loaded and executed.
6. A page of process A is swapped out.
When swap-in and swap-out of processes occur repeatedly on each context switch, thrashing results. The situation in which system storage-management routines, such as the garbage collector, are executing almost all the time is also called thrashing.
Consequently:
CPU utilization decreases, because the CPU is engaged in swapping in/out operations rather than in the execution of processes.
Note: the effective memory-access time increases due to the high frequency of page faults.
25 High paging activity.
How does the OS do Demand Paging? Whenever a page fault occurs in main memory, the Page Fault Handler (a small program) of the OS checks the Page Table (which contains the list of all pages currently in main memory, maintained by the OS), determines the page to be read, and assigns an available free page frame, obtained from the Free Page Frame List (which contains the list of all available free page frames in memory, maintained by the OS), to the disk driver. The disk driver reads the required page from virtual memory and puts it into the page frame; hence the page-fault situation is ruled out and the concept of demand paging is employed.
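A schematic sketch (not a real OS implementation) of the flow just described — page-table lookup, free-frame allocation, and a stand-in for the disk driver; all sizes and names are illustrative:

/* Schematic demand-paging flow: fault -> take free frame -> disk read -> map. */
#include <stdio.h>

#define NPAGES  8
#define NFRAMES 4

int page_table[NPAGES];        /* page -> frame, or -1 if not resident */
int free_frames[NFRAMES], nfree = NFRAMES;

/* Stand-in for the disk driver reading a page into a frame. */
void disk_read(int page, int frame) { printf("load page %d -> frame %d\n", page, frame); }

int access(int page) {
    if (page_table[page] >= 0)          /* page resident: no fault */
        return page_table[page];
    /* Page fault: take a free frame (replacement policy omitted here). */
    if (nfree == 0) { printf("no free frame: run page replacement\n"); return -1; }
    int frame = free_frames[--nfree];
    disk_read(page, frame);             /* bring the page in from backing store */
    page_table[page] = frame;           /* record the mapping in the page table */
    return frame;
}

int main(void) {
    for (int i = 0; i < NPAGES; i++) page_table[i] = -1;
    for (int i = 0; i < NFRAMES; i++) free_frames[i] = i;
    access(3); access(3); access(5);    /* the first touch of 3 and of 5 faults */
    return 0;
}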
(Figure: physical memory model – user pages and page frames, plus the OS memory area holding the page table, the free page-frame list, the modified page-frame list and the page-replacement algorithm.)
Paging Scheme vs. Segmentation26 Scheme
1. Paging: main memory is partitioned into fixed, equal-size frames/blocks. Segmentation: main memory is partitioned into variable-size segments.
2. Paging: the logical address space is divided into pages by the compiler or the memory-management unit. Segmentation: the logical address space is divided into segments, specified by the programmer.
3. Paging: suffers from internal fragmentation, or page break. Segmentation: suffers from external fragmentation.
4. Paging: avoids external fragmentation. Segmentation: avoids internal fragmentation.
5. Paging: the OS maintains a free-frames list and need not search for a free frame. Segmentation: the OS maintains the particulars of the available memory.
6. Paging: the OS maintains a page table for mapping between frames and pages. Segmentation: the OS maintains a segment map table for the mapping purpose.
7. Paging: does not support the user's view of memory. Segmentation: supports the user's view of memory.
8. Paging: the processor uses the page number and displacement/offset to calculate the absolute/physical address. Segmentation: the processor uses the segment number and displacement/offset to calculate the absolute/physical address.
9. Paging: multilevel paging is possible. Segmentation: multilevel segmentation is possible but of little use.
10. Paging: sharing of pages is allowed. Segmentation: sharing of segments is allowed.
11. Compaction is applicable to both.
Page Replacement Algorithm – The OS replaces an existing page in main memory with the page from virtual memory whenever a page fault occurs and demand paging is in use. An effective page-replacement algorithm is one that is capable of producing the fewest page faults and is simple. The situation of page replacement occurs only when there is no more free room/frame available in main memory for demanding and loading the required page from virtual memory. A few page-replacement algorithms frequently used by most OSs are as follows:
1) Least Recently Used (LRU) Page Replacement – In the LRU algorithm, the page that has been least recently used is replaced by the demanded page from virtual memory. But how can we determine which page has been least recently used? By LRU approximation, the process of determining and deciding the least recently used page among all the available pages inside main memory. The main objective of the following algorithms is to perform this LRU approximation; the LRU page is then replaced to load the new page. The LRU-approximation algorithms most frequently used by OSs are as follows:
26 Segmented memory can be paged.
a) Counter Approach – Most OSs use the counter approach for LRU approximation. Here every page inside main memory has a corresponding counter (an integer value), initialized to 0 whenever the page is created/loaded in main memory. Whenever a page is referenced, the counter of that page is re-initialized to 0 and the counters of all other pages are incremented by 1 by the daemon process27 of the algorithm. Hence, whenever a page fault occurs, the page having the highest counter value is the one to be replaced.
b) Reference Bit Approach – Here every page inside main memory has a reference bit, containing 1 for a used and 0 for an unused page. If a page is currently referenced, its reference bit is set to 1 and the other pages' reference bits are set to 0. Pages whose reference bit is 0 are considered candidates for replacement. This scheme is rather crude, so most OSs do not prefer it compared to the counter-based LRU approximation.
2) First In First Out (FIFO) Page Replacement – In the FIFO algorithm, the page that was loaded first among the several pages kept in main memory is the one replaced by the page from virtual memory. All blocks inside main memory have their physical addresses in ascending order. Suppose there are four blocks in main memory addressed a1 < a2 < a3 < a4, and four pages in virtual memory named p1, p2, p3 and p4, to be loaded into main memory in the sequence p1-p2-p3-p4. The first page is p1, hence block a1 is assigned to p1, and so forth: a1 <- p1, a2 <- p2, a3 <- p3, a4 <- p4. Consequently, the first page to be replaced is p1, because it was first in and hence will be first out; thereafter p2, then p3, then p4.
Q. Consider the following page references during a given time interval for a memory consisting of 3 frames: B, C, A, A, C, D, B, E, F, A, B, D, A, F, A. Using (a) FIFO and (b) LRU replacement strategies, compare the results.

FIFO replacement: replaces the page that entered main memory first (^ marks a page fault).
Ref:     B C A A C D B E F A B D A F A
Frame-1: B B B B B D D D F F F D D D D
Frame-2:   C C C C C B B B A A A A F F
Frame-3:     A A A A A E E E B B B B A
Fault:   ^ ^ ^     ^ ^ ^ ^ ^ ^ ^   ^ ^
Total references = 15; page faults = 12; page hits = 3.

LRU replacement28: replaces the page that has not been used for the longest period of time.
Ref:     B C A A C D B E F A B D A F A
Frame-1: B B B B B D D D F F F D D D D
Frame-2:   C C C C C C E E E B B B F F
Frame-3:     A A A A B B B A A A A A A
Fault:   ^ ^ ^     ^ ^ ^ ^ ^ ^ ^   ^
Total references = 15; page faults = 11; page hits = 4.
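A small simulation, written to match the worked example above, that counts page faults for FIFO and LRU with 3 frames:

/* Count page faults for FIFO and LRU over the reference string above. */
#include <stdio.h>
#include <string.h>

#define NFRAMES 3

static int simulate(const char *refs, int n, int lru) {
    char frame[NFRAMES]; int stamp[NFRAMES], loaded[NFRAMES];
    int faults = 0;
    memset(frame, 0, sizeof frame);                   /* 0 = empty frame */
    for (int t = 0; t < n; t++) {
        int victim = 0, found = -1;
        for (int i = 0; i < NFRAMES; i++)
            if (frame[i] == refs[t]) found = i;
        if (found >= 0) { stamp[found] = t; continue; }   /* hit */
        faults++;
        for (int i = 0; i < NFRAMES; i++) {               /* pick a victim */
            if (frame[i] == 0) { victim = i; break; }     /* use an empty frame */
            int vi = lru ? stamp[i] : loaded[i];          /* LRU: last use; FIFO: load time */
            int vv = lru ? stamp[victim] : loaded[victim];
            if (vi < vv) victim = i;
        }
        frame[victim] = refs[t]; stamp[victim] = t; loaded[victim] = t;
    }
    return faults;
}

int main(void) {
    const char *refs = "BCAACDBEFABDAFA";
    int n = (int)strlen(refs);
    printf("FIFO faults = %d\n", simulate(refs, n, 0));   /* 12 */
    printf("LRU  faults = %d\n", simulate(refs, n, 1));   /* 11 */
    return 0;
}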
Cache Memory – It is by now clear that memory is the most dominant unit of the computer system affecting the execution time of a program's instructions. Hence, to speed up execution, a fast memory known as cache memory is placed between the CPU and main memory. The speed factor of cache is 5 to 10 on a standard scale compared to main memory. It is fast static RAM, in which data is stored as a voltage level, which is more stable.
27 Executed periodically at regular intervals.
28 Page replacement is used whenever a page fault happens.
(Figure: DISK <-> RAM transfers are software-controlled caching; CACHE <-> CPU transfers are hardware-implemented.) E.g.:
The cache is divided into a number of blocks known as cache lines, with sizes varying from 16 to 64 bytes. Suppose there are two memories: a slow one with an 80 ns access time (the time taken to access an instruction) and a fast one with a 10 ns access time, and suppose a program requires 1000 memory accesses. Using the slow memory, the time taken for 1000 accesses is 80 us (80 ns x 1000); using the fast memory, it is 10 us (10 ns x 1000). Now suppose slow and fast memory are mixed in the system in the proportion 20% : 80%. The time taken to perform the 1000 memory accesses is then 24 us (200 x 80 ns + 800 x 10 ns). The straightforward conclusion is that by combining slow and fast memory, the entire execution of a program can be enhanced.
a) S/W Controlled – A portion of main memory acts as cache memory, which is definitely fast with respect to the hard disk; its functioning is controlled by the OS. It is mainly used during communication between the disk and the cache for data read/write operations.
b) H/W Implemented – One or more dedicated cache devices (cache slots) are used, which are definitely fast with respect to RAM; their functioning is controlled by the cache controller. It is used during communication between RAM and the CPU.
Reading – When the kernel needs a particular piece of information, it looks in the cache first. If the information is available in the cache, the kernel gets the benefit of locality of reference. Otherwise the kernel looks in RAM, brings a copy of the information from RAM into the cache, and continues the read operation; failing that, it looks on the hard disk and, if the information is available there, brings a copy from disk to RAM and then to the cache.
Writing – The inverse of the operations explained above is performed for a writing operation, while safekeeping the data on the disk. (Figure: sequence of a writing operation – CPU -> cache (1.1, hardware-implemented) -> RAM (1.2) -> disk (1.3, software-controlled).)
Function of Cache – The CPU requests a required block from cache memory. If the request finds the block in the cache, a Hit is said to occur; if not, a Miss has occurred. The performance of the cache is determined by the hit ratio:

Hit Ratio = No. of Hits / (No. of Hits + No. of Misses)

The hit ratio can be at most 1 (e.g. 1/(1 + 0): a single request that hits, with a miss count of 0). In general it is h/(h + m), where h is the number of hits, m the number of misses, and h + m the total number of requests. Needless to say, a cache of small capacity has a higher possibility of misses, and a cache of big capacity has a lower possibility of misses and an enhanced possibility of hits.
Q. Does the number of page hits always increase if the number of frames is increased? Give reasons for your answer.
No – because it depends upon the size of the process and its respective number of pages to be loaded into the frames. A computation of the hit ratio and its effect on access time follows.
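A tiny worked example, with assumed request counts and the 10 ns / 80 ns access times used earlier, of the hit ratio and the resulting effective access time:

/* Hit ratio and effective access time for a cache in front of main memory. */
#include <stdio.h>

int main(void) {
    int hits = 800, misses = 200;                 /* assumed request counts */
    double hit_ratio = (double)hits / (hits + misses);
    double t_cache = 10.0, t_mem = 80.0;          /* access times in ns */

    /* On a miss we pay the main-memory access time instead. */
    double eat = hit_ratio * t_cache + (1.0 - hit_ratio) * t_mem;
    printf("hit ratio = %.2f, effective access time = %.1f ns\n",
           hit_ratio, eat);                       /* 0.80 and 24.0 ns */
    return 0;
}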
(Figure: the cache sits at distance d1 from the CPU and main memory at distance d2, with d1 < d2.)
Hence the execution time of a program can be enhanced, if it contains several repeatable instructions, by placing them in cache memory instead of main memory. It is recommended to cache all data that is supposed to be shared among several applications and expected to be executed very frequently and repeatedly. Owing to locality of reference, as well as to its own elementary structure of static RAM, the cache performs better than RAM; the cache can also improve I/O performance. In virtual memory, the page-fault frequency is reduced when locality of reference applies to the process.
Q. What do you mean by SPOOLING?
Buffer – A temporary storage area, reserved for use in performing I/O operations, into which data is read and from which data is written. It is used to compensate for a difference in the rate of flow of data between mechanical and electronic devices (transfers between two devices, or between a device and an application), thereby increasing the data-transfer rate. It reduces disk overhead and utilization.
Types of Buffering:
a) Anticipatory Buffering – A buffer-management technique whereby data is read into a buffer before it is actually requested by the program.
b) Demand Buffering – A buffer-management technique whereby data is read into a buffer only when it is actually requested by the program.
Single Buffering – Only one buffer is used to transfer the data between two devices; the data-transfer rate is very slow, because the producer29 has to wait while the consumer30 consumes the data. (Figure: producer -> single buffer, inside the OS, -> consumer.)
Double Buffering – Two buffers are used in place of one to transfer the data between two devices: the producer fills one buffer while, at the same time, the consumer consumes the other, so the producer need not wait for a buffer to drain. It is a partial solution to the single-buffer problem. (Figure: producer -> buffer-1 / buffer-2, inside the OS, -> consumer.)
29 All the input devices.
30 All the output devices.
Circular Buffering – More than two buffers are used to transfer the data between two devices; the collection of buffers is itself referred to as a circular buffer. The producer fills one buffer while, at the same time, the consumer consumes another, so the producer need not wait for a buffer to drain. (Figure: buffer-1 ... buffer-n arranged in a ring inside the OS.) A ring-buffer sketch follows.
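A minimal sketch of such a ring buffer; the slot count and the integer payload are illustrative assumptions:

/* Circular (ring) buffer: the producer and consumer chase each other around. */
#include <stdio.h>

#define NBUF 4                       /* number of buffer slots in the ring */

static int ring[NBUF];
static int head = 0, tail = 0, count = 0;

int produce(int item) {              /* producer fills the next free slot */
    if (count == NBUF) return -1;    /* ring full: producer must wait */
    ring[tail] = item;
    tail = (tail + 1) % NBUF;        /* wrap around the ring */
    count++;
    return 0;
}

int consume(int *item) {             /* consumer drains the oldest slot */
    if (count == 0) return -1;       /* ring empty: consumer must wait */
    *item = ring[head];
    head = (head + 1) % NBUF;
    count--;
    return 0;
}

int main(void) {
    int v;
    produce(1); produce(2); produce(3);
    while (consume(&v) == 0) printf("%d\n", v);   /* prints 1 2 3 in order */
    return 0;
}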
Buffer vs. Cache – If the cache memory is implemented as software-controlled, then the buffer and the cache can both share portions of RAM, one right after the other, each controlled by the OS.
Buffer: used and implemented in the communication between disk and RAM. Cache: used and implemented in the communication between CPU and RAM.
Buffer: can hold data that is used either once only or again and again. Cache: holds only data that is used repeatedly, and that is supposed to be shared among multiple applications.
Buffer: recommended for single-tasking. Cache: recommended for multitasking.
Buffer: less costly and slower. Cache: costly and faster.
Buffer: placed far from the CPU. Cache: placed near the CPU.
Buffer: gets its data from the disk. Cache: gets its data from RAM.
(Figure: placement of buffer and cache in the memory hierarchy – DISK -> BUFFER (in RAM, software-controlled) -> CACHE -> CPU.)
Protection & Security of Main Memory – Every page inside main memory has a corresponding small 3-bit integer known as its protection key. It is mainly used to guard and protect the page from unauthorized access by the instructions of a program. The possible combinations of the three bits and their respective meanings are as follows:
000 – empty page
001 – read access
010 – write access
100 – execute access
101 – execute and read access
110 – execute and write access
011 – read and write access
All such protection keys are maintained in the page table, kept in a reserved area of main memory. Whenever an instruction wants to access a particular page, the OS first checks the page table for the current status of the page and acts accordingly. In this way the OS maintains and protects memory locations from unauthorized access by the instructions of a program.
I/O Management
Input/Output Organization – Every I/O device has a corresponding port31 that provides an interface for communicating with the device. I/O from a processor is carried out by reading data from an input port or by writing data to an output port.
(Figure: PC bus architecture – the processor and cache sit on the system bus; a SCSI bus connects several disks; the PCI bus connects the monitor and further disks; relatively slow devices hang off the expansion bus.)
Controller – A collection of electronics that can operate a port, a bus, or a device. It has one or more registers for data and control signals, and the processor communicates with the controller by reading and writing bit patterns in those registers.
Serial Port Controller – An example of a simple device controller: a single chip in a computer that controls the signals on the wires of a serial port.
Port and Bus – A port is a connection point where a device communicates with the system; if one or more devices use a common set of wires, the connection is called a bus.
31 A channel through which data is transferred between a device and the microprocessor. The port appears to the microprocessor as one or more memory addresses that it can use to send or receive data. An input/output port is also called an I/O port.
PCI Bus – Connects the processor-memory subsystem to the fast devices, while an expansion bus connects relatively slow devices such as the keyboard and the serial and parallel ports.
(Figure: the system bus links the CPU, memory and the I/O controllers; a shared I/O bus from one controller serves Hard Disk 1 and Hard Disk 2, while a multipoint I/O bus from another serves a floppy disk and a CD-ROM.)
I/O Communication Techniques / Data Transfer Methods / Organization of I/O Devices
The data transfer between the CPU and I/O devices may be handled by a variety of techniques. Some techniques use the CPU as an intermediate path; others transfer the data directly to and from memory. Data transferred to and from an I/O device is said to be I/O communication. The organization of I/O devices and their respective data-transfer methods may use any one of the following:
Software-Polled / Programmed I/O method
Interrupt-Driven method
Direct Memory Access Controller (DMAC) method
A) Software-Polled / Programmed I/O – In software-polled I/O operations (read/write), data transfer from an I/O device to memory requires the execution of a control program. The CPU stays in a program loop until the I/O unit indicates that it is ready for data transfer, which is a time-consuming process for the CPU. It is suitable for small, low-speed computers.
Hence we can say that all I/O operations on the corresponding I/O devices of the concerned program are managed under the control of the control program. Because of the intervention of the control program, the CPU has to wait for the ready signal from the I/O device; the transmission of large amounts of data therefore becomes very slow, which affects the performance of the CPU, since it is an indirect approach to transferring data among devices, memory and the CPU. A polling-loop sketch follows.
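A schematic busy-wait polling loop of the kind just described; the device-register addresses and the ready bit are invented for illustration (on real hardware they come from the device's datasheet):

/* Busy-wait (software-polled) read from a memory-mapped device. */
#include <stdint.h>

#define STATUS_REG ((volatile uint8_t *)0x40000000)  /* hypothetical address */
#define DATA_REG   ((volatile uint8_t *)0x40000004)  /* hypothetical address */
#define READY_BIT  0x01

/* Read n bytes from the device by polling its status register. */
void polled_read(uint8_t *buf, int n) {
    for (int i = 0; i < n; i++) {
        while ((*STATUS_REG & READY_BIT) == 0)
            ;                        /* busy-wait: the CPU does no useful work */
        buf[i] = *DATA_REG;          /* device ready: read one byte of data */
    }
}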
B) Interrupt-Driven I/O – In interrupt-driven I/O, a device, when ready for input or output, generates a signal known as an interrupt to the CPU, bringing/asking for the attention of the CPU towards the corresponding device that generated the interrupt.
Types of Interrupt
1) Traps – Traps are a sort of interrupt (software interrupts) tied to processor instructions and mainly caused by the internal resources of the CPU, in response to program errors or to requests for operating-system services, say the printer's spooler service, the scheduler, etc.
Operation of a Trap – Whenever an application wants the services offered by the kernel, the application makes a request along with the proper argument (specifying a particular service); the system call32 accepts the request and checks it for validity, and only a valid request is passed to the kernel; simultaneously the system call executes a software instruction known as a trap that identifies the required kernel service, say Event Log, Scheduler, RPC, Computer Browser, Printer Spooler, Net Logon, or Messenger.
(Figure: functioning of system calls – a process, via the shell, invokes a system call into the operating system; the system call executes a special S/W instruction (trap) that identifies the requested kernel service, and the kernel executes the service for the application.)
2) Exceptions – Exceptions are a sort of interrupt generated/caused by faults (illegal instructions of a program – external resources), such as divide-by-zero, arithmetic overflow, page faults, etc.
3) Timer – Generated by the clock in the processor. Whenever the time slot of the processor expires, an interrupt signal is sent to the processor to suspend the current job.
4) Hardware Interrupt – Interrupts generated by various hardware devices to request service or to report problems to the CPU. A hierarchy of interrupt priorities determines which interrupt request will be handled first if more than one request is made. A program can temporarily disable some interrupts if it needs the full attention of the processor to complete a particular task.
5) I/O Interrupt – Signals caused/generated by the I/O devices or by a program's instructions to bring the attention of the CPU towards the device or the program. In other words, by using interrupts the devices acknowledge to the CPU their willingness for a
32 System calls are low-level routines, kept in the kernel, that provide the interface between a process and the OS. Generally they are available as assembly-language instructions, but some can also be written in a high-level language. System calls, e.g. fork(), exec(), are also invoked by a software interrupt (trap).
read or write operation, and the CPU immediately responds to the encountered interrupt, interleaving the task that it is currently processing, and looks up the IVT to search for and execute the ISR.
Most CPUs have two interrupt lines that the CPU checks after executing every instruction. A device, controller, or program puts the interrupt signal on these lines.
[Figure: The CPU has a maskable interrupt wire and a non-maskable interrupt wire.]
Disabled Interrupt – A condition usually created by the OS during which the processor will
ignore interrupt request signals of a specific class.
[Flowchart: Interrupt-driven I/O, step by step.]
1. The CPU issues a Read command to the I/O device (to obtain the status of the device) and goes on to do other useful work.
2. When the I/O device is ready, it sends an interrupt signal to the processor to announce its readiness.
3. When the CPU receives the interrupt signal from the I/O device, it checks the status: if the status is ready, the CPU reads the word from the I/O device; otherwise an error is produced.
4. The CPU then writes/loads the word into memory, and the cycle repeats until the transfer is done.
Operation of Interrupts – Whenever a device generates an interrupt to get the CPU's attention, the processor receives the interrupt, suspends its current operations, saves the status of its work, and transfers control to the Interrupt Vector Table33 to find and execute a special routine known as an interrupt handler or Interrupt Service Routine34, which contains the instructions for dealing with the particular situation that caused the interrupt. E.g., suppose interrupt number 12 is generated by an output device, say the monitor. Then the probable location of the ISR address for interrupt 12 is location 12*4=48 of the IVT in main memory, because ISR addresses are generally 4 bytes long. The ISR address therefore occupies not only location 48 but also locations 49, 50 and 51 of the IVT.
IVT Location (bytes)    ISR Address
0 1 2 3                 ADD-00   Address of 1st ISR  = 0x4  = 0
4 5 6 7                 ADD-01   Address of 2nd ISR  = 1x4  = 4
8 9 10 11               ADD-02   Address of 3rd ISR  = 2x4  = 8
- - - -                 ADD--    ...
48 49 50 51             ADD-12   Address of 12th ISR = 12x4 = 48
- - - -                 ADD-N
Interrupt Request (IRQ) – Hardware lines over which devices can send signals to get the attention of the processor when the device is ready to accept or send information. Interrupt request (IRQ) lines are numbered from 0 to 15, and each device must have a unique IRQ line.
Intel Pentium processor event vector table (Interrupt Vector Table):
Vector Number   Event
0               Divide Error
1               Debug Exception
2               Null Interrupt
3               Breakpoint
4               INTO-detected Overflow
5               Bound Range Exception
6               Invalid Opcode
7               Device Not Available
8               Double Fault
9               Coprocessor Segment Overrun
10              Invalid Task State Segment
11              Segment Not Present
12              Stack Fault
13              General Protection
14              Page Fault
15              Intel Reserved
16              Floating-Point Error
17              Alignment Check
18              Machine Check
19-31           Intel Reserved
32-255          Maskable interrupts & device-generated interrupts
Q. What is the difference between a trap and an interrupt? What is the use of each function?
33
The IVT contains the memory addresses of the Interrupt Service Routines. The IVT, along with all probable ISRs and their addresses, are routines of the ROM-BIOS and are supplied with the BIOS. The IVT is loaded into memory during the execution of the ROM-BIOS routines at system start-up.
34
The ISR is the interrupt handler that handles the interrupt generated by an I/O device or an external program.
INTERRUPTS VS TRAPS
A) Interrupts are special H/W instructions/signals that are generally caused by external resources, e.g. an I/O device or an instruction of a program.
A) Traps are special S/W instructions/signals that are generally caused by internal resources, e.g. a processor instruction placed on a register.
Due to the intervention of interrupts and the ISR, the transmission of large amounts of data is still very slow, because this too is an indirect approach to transferring data among devices, memory, and the CPU.
C) Direct Memory Access Controller – Once again, due to the intervention of program-controlled instructions (S/W polling) and ISRs (interrupts), the transmission of large amounts of data among the CPU, I/O devices and memory is very slow: both of the above approaches transfer data indirectly. So there should be a proper method for direct data transfer among memory, I/O devices and the CPU, and that method is the DMAC. Before proceeding to the DMAC we should have a proper understanding of the device controller and the device driver.
DMA (Direct Memory Access) provides memory access that does not involve the microprocessor, and hence avoids burdening the main CPU with programmed I/O. DMA is frequently used for high-speed data transfer directly between memory and a peripheral device such as a disk drive.
Device Controller – is the hardware responsible for converting electronic signals inside the computer into a form that the device can understand.
Device Driver – is a program (S/W) that handles the hardware-specific details of the device controller. It is the part of the OS specific to a particular computer system. It mainly handles and hides the details of the device controller and devices from the rest of the OS.
[Figure: Device → Device Controller → Device Driver → Operating System.]
Role of the Device Controller – Needless to say, the speed of the CPU is quite high compared to the speed of I/O devices such as the hard disk, keyboard, and printer. Hence the device controller accepts data at the speed at which it is sent from the CPU, holds the data in a buffer, and thereafter sends it on to the corresponding device, say the hard disk, so as to free up the CPU and fill the gap b/w the speeds of the I/O devices and the CPU. So we can say "the device controller is responsible for the task of data buffering and takes care of the differences in speed b/w slower and faster devices."
I/O Scheduling –
[Figure: The I/O software stack — the user's application program sits on top of the operating system, which uses a device driver (software within the OS) to reach the hardware.]
DMAC – The data-transfer speed inside the computer system is enhanced and overall performance increases because there is no need for the intervention of control instructions or interrupts using the CPU while transferring data from device to memory, or vice-versa, or from memory to memory.
The DMA controller takes over the buses to manage the transfer directly b/w the I/O device and memory, without any control by the CPU.
The DMAC, as a controller device, has multiple DMA channels so that it can communicate directly with several I/O devices and with memory. Functionally, the DMAC provides read/write controls for devices and memory. Typically there are several types of DMA transfers;
a) Device To Memory b) Memory To Device
c) Memory To Memory d) Device To Device
[Figure: The DMAC connected to several devices, e.g. a printer and a disk.]
Functioning of the DMAC – The DMAC has 8 DMA channels that can be used for data transfer b/w memory and peripheral devices. A channel is a device that connects the CPU and main memory with the I/O control units. The channels are numbered 0,1,2…7; on the x86 family they are conventionally assigned as follows:
0 - DMA Refresh
1 - Network Card
2 - FD Controller
3 - Scanner Card
4 - Cascade Channel
5 - SCSI Controller
6 - Tape Backup Controller
7 - Proprietary Printer Card
1. The device controller puts a signal on the DMA-Request wire when a word is to be transferred.
2. The signal causes the DMA controller to reserve the memory bus and place the required address on the memory-address wires.
3. The DMAC places a signal on the DMA-Acknowledge wire to the device controller.
4. As soon as the device controller receives the ACK signal, it immediately transfers the data to memory and removes the DMA-Request signal.
5. The DMA controller interrupts the CPU when the entire transfer of data is over.
Note: The DMA transfer scheme where one data word is transferred at a time is called Cycle Stealing.
Before an I/O operation begins, a path b/w main memory and the device must be established. If the channel, the control unit or the addressed device is busy, the construction of the pathway must wait. To avoid such waits, the DMAC has several channels, which are fast processors. The responsibilities of a channel are as follows;
1. Test I/O to determine whether or not the pathway to the addressed device is busy.
2. Start I/O on a particular device.
3. Halt I/O on a particular device.
The CPU first initializes the DMA; the CPU must send some essential information to the DMAC, which includes;
1. The starting address of the memory block where data are available for read or write.
2. The word count, i.e. the number of words in the block.
3. The mode of transfer, such as read or write.
4. A control signal to start the DMA transfer.
When the DMAC receives the above information, the DMA starts and continues to transfer data b/w memory and the I/O devices until the entire block is transferred. When its assigned job is done, it informs the CPU that the transfer has terminated.
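As a minimal sketch of that initialization sequence — assuming a hypothetical memory-mapped DMAC whose register layout and 0xD0000000 base address are invented here purely for illustration — the CPU-side setup might look like this:

    #include <stdint.h>

    /* Hypothetical memory-mapped DMAC registers; the layout and base
     * address are assumptions for illustration only. */
    typedef volatile struct {
        uint32_t block_addr;   /* 1. starting address of the memory block */
        uint32_t word_count;   /* 2. number of words in the block         */
        uint32_t mode;         /* 3. transfer mode: read or write         */
        uint32_t control;      /* 4. writing START kicks off the transfer */
    } dmac_regs_t;

    #define DMAC ((dmac_regs_t *)0xD0000000u)
    #define DMA_MODE_READ  0u
    #define DMA_MODE_WRITE 1u
    #define DMA_START      1u

    void dma_start_transfer(uint32_t addr, uint32_t words, uint32_t mode) {
        DMAC->block_addr = addr;   /* where in memory the data lives */
        DMAC->word_count = words;  /* how many words to move         */
        DMAC->mode       = mode;   /* direction of the transfer      */
        DMAC->control    = DMA_START;
        /* The CPU is now free; the DMAC raises an interrupt when done. */
    }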
[Figure: The CPU (step 1) initializes the DMA controller with the block address, word count and control information; (step 5) the DMAC sends an interrupt signal to the CPU when done.]
All the channels communicate through interrupts. Whenever a program requests an R/W operation on a device, the following steps are taken;
1. It interrupts the respective I/O controller.
2. The I/O controller then builds a channel program in main memory.
3. The channel program35 is executed by the addressed channel.
4. Appropriate signals are transmitted to the addressed control unit.
5. These signals are interpreted by the control unit and used to control the device operations.
6. An interrupt is issued by the channel to signal continuation of the program's execution.
7. Finally, control returns to the program.
35
The instructions that are executed by a channel to access devices and control the pathway of data. A channel program controls a single read/write operation to only one device at a time.
Process Management
                Program                                       Process
Consists of     Instructions in a programming language        Instructions executing in machine code
Is a            Static object                                 Dynamic object
Resides in      Secondary storage devices                     Main memory (RAM)
Span of time    Unlimited                                     Limited
Entity          Passive                                       Active
Process Control Block (PCB) – Information associated with each process:
Process state
Program counter
CPU registers
CPU scheduling information
Memory-management information
Accounting information
I/O status information
The operating system is responsible for the following activities in connection with process management:
Process creation and deletion.
Process suspension and resumption.
Provision of mechanisms for:
Process synchronization
Process communication
36
The word process is loosely referred to as task, which is acceptable in much of the literature on OS. A program is passive whereas a process is an active entity. Another difference is that a program is treated as a static object while a process is treated as a dynamic object. The number of programs may be unlimited, but the number of processes is limited on any specific machine.
Types of Process – A process may be classified in two ways: by resource consumption, and by resource sharing and communication;
Based on resource consumption:
a) I/O Bound Process – A process that mainly requires I/O operations, using the I/O processor, is known as an I/O bound process.
b) CPU Bound Process – A process that mainly requires computational operations, using the CPU, is known as a CPU bound process.
Based on resource sharing and communication:
a) Independent process – cannot affect or be affected by the execution of another process.
b) Cooperating process – can affect or be affected by the execution of another process.
Advantages of process cooperation:
• Information sharing
• Computation speed-up
• Modularity
• Convenience
States of a Process – A process may pass through the following states while execution is going on;
a) New – The process is being created.
b) Running – The instructions of the process are currently being executed by the CPU.
c) Waiting – The process is waiting for some event (e.g. an I/O operation) to complete.
d) Ready37 – The process has completed its I/O operation and is waiting to participate in the next execution cycle of the CPU.
e) Terminated – The process has finished execution.
The traffic controller (a part of the OS) constantly checks the processor and the status of processes.
[Diagram of process states: NEW –admitted→ READY –scheduler dispatch→ RUNNING; RUNNING –interrupt→ READY; RUNNING –I/O or event wait→ WAITING/BLOCKING –I/O or event completion→ READY; RUNNING –exit→ TERMINATED. A companion UNIX figure shows the fork (1), exec (2), wait and kill calls around this lifecycle.]
Role of the Kernel – The kernel of the OS is responsible for virtualizing the physical CPU38 among multiple processes, creating the illusion of a separate CPU for each process; it can also provide virtual memory for processes. Hence the kernel hides the physical CPU from the user by performing the functions associated with CPU allotment and management.
37
The Ready Queue is implemented as a linked list.
38
It is the responsibility of the KERNEL to divide all the available processor time among all the active processes so as to provide/create a virtual CPU for every active process.
System Programs – System programs provide a convenient environment for program development and execution. They can be divided into:
– File manipulation
– Status information
– File modification
– Programming-language support
– Program loading and execution
– Communications
Most users' view of the operating system is defined by system programs, not by the actual system calls.
System Calls – There are primitive functions built into the kernel of the OS and invoked by S/W interrupts, known as system calls, which are responsible for;
1) Creating a Process 2) Starting a Process
3) Stopping a Process 4) Destroying a Process
All the library functions and utilities are written using system calls. A system call is a basic routine, defined in the kernel, which lets a user remain transparent to all the complex processes in the system (UNIX). Interaction through system calls represents an efficient means of communication with the system. It is the interface between a process and the OS.
Generally, system calls are available/written in either assembly language or a high-level language.
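As a minimal sketch of invoking a system call from a C program on UNIX (write() is the real POSIX call; the message text is just an example), note that the library wrapper traps into the kernel on the process's behalf:

    #include <unistd.h>   /* write(): POSIX system call wrapper */

    int main(void) {
        /* The C library issues a trap; the kernel performs the I/O
         * in privileged mode and returns the byte count to the process. */
        const char msg[] = "hello from a system call\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        return 0;
    }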
[Figure: Interface and functioning of system call(s) — the same shell → system call → trap → kernel service flow shown earlier.]
Types of System Calls
1. Job-control system calls.
2. File-manipulation system calls.
3. Information-maintenance system calls.
Q. Explain System Calls, OS Commands and their uses?
Q. Differentiate between a System Call and a System Process?
Q. What is a user program? Explain.
Modes of CPU Operation – The CPU may operate in two modes while processing, as follows;
a. Privileged/Kernel Mode – The CPU runs in privileged mode for the execution of system programs (the OS); it has full control to access the control registers of the CPU, the memory table(s) and the I/O devices.
b. User Mode – The CPU runs in user mode for the execution of the instructions of a user's program, which has no control to access the control registers of the CPU, the memory table(s), or the I/O devices.
Hence, any resource allocation and sharing among multiple users is totally controlled by the OS. The mode of the CPU is kept as a bit (0 for privileged mode, 1 for user mode) inside the Processor Status Word (PSW) register of the CPU. Whenever an H/W interrupt is delivered to the CPU, the mode of the CPU changes from user mode to privileged mode and the status bit is changed from 1 to 0. All the I/O devices are entirely controlled by the OS in kernel mode.
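A minimal sketch of that mode bit, assuming a hypothetical PSW layout in which bit 0 carries the mode exactly as described above (real processors place it differently):

    #include <stdint.h>

    #define PSW_MODE_BIT 0x1u   /* assumed: bit 0 of the PSW; 0 = kernel, 1 = user */

    /* Entering the kernel on an interrupt: clear the mode bit (1 -> 0). */
    uint32_t enter_kernel_mode(uint32_t psw) { return psw & ~PSW_MODE_BIT; }

    /* Returning to the user program: set the mode bit (0 -> 1). */
    uint32_t enter_user_mode(uint32_t psw)   { return psw | PSW_MODE_BIT; }

    int in_user_mode(uint32_t psw)           { return (psw & PSW_MODE_BIT) != 0; }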
Process Communication – While working with several processes, two or more processes can communicate with each other by sharing common data or by sending signals/messages to each other, but there may be a critical-section problem between two processes while sharing common data or resources. Whenever a critical-section39 problem arises, it may lead to an unwanted/unexpected result. E.g., suppose there are two processes, A and B, both sharing a common data item X having the value 10. Both processes want to increase the value of X by 1; each reads X=10, increments it, and writes back 11, so the final value is 11 instead of the expected 12.
The result is unexpected due to the critical-section problem: process B attempted to access the common variable X while it was already being accessed by process A.
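A minimal sketch of that race using POSIX threads (the thread and iteration counts are arbitrary choices for the demonstration); without any locking, the two increments can interleave and updates get lost:

    #include <pthread.h>
    #include <stdio.h>

    long x = 10;   /* the shared data item from the example above */

    void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            x = x + 1;          /* read-modify-write: NOT atomic */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;         /* processes A and B, modeled as threads */
        pthread_create(&a, NULL, increment, NULL);
        pthread_create(&b, NULL, increment, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Expected 200010, but lost updates often make it smaller. */
        printf("x = %ld\n", x);
        return 0;
    }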
Solution of the Critical-Section Problem – Design the processes in such a way that if one process is in its critical section, no other process may be in its critical section for the same common resource.
1. Do not let a process remain in its critical section while its remaining instructions are waiting for execution.
2. Ensure that only one process among the others is in its critical section at a time.
3. Mutual exclusion for interprocess communication must be preserved.
4. If any one process requires continuation for a moment, it must be allowed.
39
A critical section is a sequence of program statements within a task where the task is operating on some data object shared with another task. Critical sections may be implemented in tasks by associating a semaphore with each shared data object.
Process Serialization – We may prevent the critical-section problem by designing the execution of processes in such a way that only one process gets access to the common resource (e.g. X=10) at a time, then another, and so on, serially. In other words, the processes must be scheduled serially.
Process Synchronization – It is the task of acquiring a lock on a resource by the process that is currently accessing that resource, so as to prevent simultaneous access by another process and hence to avoid the situation of deadlock.
During the concurrent execution of several tasks, each task proceeds asynchronously with the others; that is, each task executes at its own speed, independently of the rest. Thus, when task A has executed 10 statements, task B, which was initiated at the same time, may have executed only seven statements, or no statements, or may have already run to completion and terminated. In order for two tasks running asynchronously to coordinate their activities, there must be a means of synchronization, so that one task can pass a signal to the other when it completes the execution of a particular section of its code.
The signal sent between the tasks allows them to synchronize their activities so that the second does not start processing the data before the first has finished, and vice-versa. The OS may adopt any of several synchronization methods, such as interrupts, monitors, and semaphores, but the most commonly used method is the semaphore.
Semaphores – Semaphores are non-negative integer variables, mainly used by the OS for process synchronization among several processes. A semaphore maintains the status of a resource (available/not available) such as a memory location, file or I/O device, so as to overcome the critical-section problem.
Types of semaphore:
Binary Semaphore – A semaphore that can contain only the two binary values, 0 and 1.
General Semaphore – A semaphore variable can be accessed by only one process at a time; i.e. the OS holds a lock on the semaphore variable so as to prevent simultaneous access and deadlock.
The OS maintains semaphore variables for the prominent resources of the system, keeping the status of those resources with binary semaphores. As soon as a process is directed to a resource of the system, the corresponding semaphore is consulted to check the status of that particular resource. It is the task of the OS to maintain the status of the semaphore variable as the situation changes. Altogether, we can say that no two processes can share or attempt to access a resource simultaneously; hence the OS overcomes critical-section problems.
Q. Differentiate between symmetric & asymmetric interprocess communication. What is a mailbox?
Q. Why are special H/W instructions necessary for solving the critical-section problem in a multiprocessor environment?
40
A common mailbox.
41
Process P1 has its own separate mailbox, identified uniquely by M20 and assigned by the OS for the lifetime of P1. P1 gets a message from its mailbox as soon as the message arrives there.
42
The receiver.
43
The sender.
Deadlock Management
Deadlock – The OS uses the concept of LOCKING to protect resources44 so that they can be accessed by only one process at a time, to avoid simultaneous access and to escape the critical situation of deadlock45.
Deadlock = infinite waiting of process(es) for resource(s) locked/consumed by some other process(es).
A set of blocked processes, each holding a lock on a resource and waiting to acquire a lock on some other resource already locked by another process in the set, waits infinitely.
Example – A system has 2 hard-disk drives. Pr1 and Pr2 each hold a lock on one disk drive, and each needs access to the other one.
2) Hold & Wait – A process holds lock on one resource and looking for other resource that
is already locked by another process (Linear Wait.).
X Linear Wait Y
R1 R2
3) No Preemption – A process continues to hold its lock on a resource until completion of its task; no other process can access or preempt the locked resource while the first process is working with it.
Resource Preemption Method (for deadlock recovery):
Select an eviction victim.
Rollback – return to some safe state and then restart the process from that state.
Starvation – the same process may always be picked as the victim.
44
A part of the computer system (S/W & H/W) that the user directly communicates with, e.g. printer, H/D, file, cache, I/O devices. Main memory, however, is not such a resource; it is managed by a sophisticated locking mechanism of its own.
45
A critical situation that occurs whenever a process (X) acquires a lock on a resource (R1) and tries to access another resource (R2) that is locked by another process (Y), while process (Y) is trying to access the resource (R1) that is locked by process X.
Q. Explain necessary conditions for Deadlock?
Q. Mention the necessary conditions for Deadlock Situation?
Q. Discuss the criteria for a Deadlock Situation?
4) Circular Wait – A process (X) holds a lock on one resource (R1) and is looking for another resource (R2) that is already locked by another process (Y), while (Y) is in turn looking for access to the resource (R1), in a cycle.
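As a minimal sketch of this circular wait with POSIX threads (real pthread_mutex calls; r1 and r2 stand for the resources R1 and R2), the two threads acquire the locks in opposite orders and can deadlock:

    #include <pthread.h>

    pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource R1 */
    pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource R2 */

    void *process_x(void *arg) {
        (void)arg;
        pthread_mutex_lock(&r1);   /* X locks R1 ...        */
        pthread_mutex_lock(&r2);   /* ... then waits for R2 */
        pthread_mutex_unlock(&r2);
        pthread_mutex_unlock(&r1);
        return NULL;
    }

    void *process_y(void *arg) {
        (void)arg;
        pthread_mutex_lock(&r2);   /* Y locks R2 ...                           */
        pthread_mutex_lock(&r1);   /* ... then waits for R1: deadlock possible */
        pthread_mutex_unlock(&r1);
        pthread_mutex_unlock(&r2);
        return NULL;
    }

    int main(void) {
        pthread_t x, y;
        pthread_create(&x, NULL, process_x, NULL);
        pthread_create(&y, NULL, process_y, NULL);
        pthread_join(x, NULL);   /* may never return if the cycle forms */
        pthread_join(y, NULL);
        return 0;
    }

Imposing a single global lock order (always R1 before R2) breaks the cycle, which is exactly the "linear order of resource types" prevention rule below.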
Methods of handling deadlock:
1.) Prevention – Ensure that at least one of the necessary conditions can never occur/hold, e.g. by imposing a linear order on all resource types and letting processes request resources only in increasing order of that enumeration.
2.) No circular wait – Serialize/arrange the requests made by processes (safe state) in the same order as the resources are ordered, avoiding cross-references among the processes' requests for the resources.
3.) Avoidance – Most deadlocks occur due to a lack of available resources. To prevent this, each process must declare in advance the maximum number of resources of each type that it requires. With such prior information, the developer may design an algorithm that ensures the system will never enter a deadlock state; this is known as a Deadlock-Avoidance algorithm.
4.) Recovery – Abort one process at a time until the deadlock cycle is eliminated, so as to reduce overhead, and re-submit the aborted processes for execution even if some of them had partially completed.
Note: Aborting/rolling back a process is costly, especially when the aborted process had fully or partially completed.
Bridge-Crossing Example – Traffic flows in only one direction.
Each section of the bridge can be viewed as a resource.
If a deadlock occurs, it can be resolved if one truck backs up (preempting resources and rolling back).
Several trucks may have to back up if a deadlock occurs.
Starvation is possible.
Detection of Deadlock -
Q. What are the various methods of concurrency control? Which of these may lead to deadlock and why?
Process Design & Creation
System Structure – Layered Approach
The operating system is divided into a number of layers (levels), each built on top of lower
layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.
With modularity, layers are selected such that each uses functions (operations) and services of
only lower-level layers. Layered Structure of the THE OS. A layered design was first used in the
THE operating system. Its six layers are as follows:
Process Creation – In the early days of the OS, only the OS could create, maintain and destroy a process, but nowadays the user has the capability to do it too.
A parent process creates child processes, which in turn create other processes, forming a tree of processes.
Resource sharing, in one of three forms:
Parent and children share all resources.
Children share a subset of the parent's resources.
Parent and child share no resources.
Execution:
Parent and children execute concurrently.
Parent waits until children terminate.
Address space:
Child is a duplicate of the parent.
Child has a program loaded into it.
UNIX examples (see the sketch below):
– The fork system call creates a new process.
– The exec system call is used after a fork to replace the process's memory space with a new program.
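A minimal sketch of those UNIX calls (fork, execlp and wait are the real POSIX APIs; the choice of running ls is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();           /* create a child: duplicate of the parent */
        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {
            /* Child: replace its memory space with a new program. */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");         /* reached only if exec fails */
            exit(1);
        } else {
            wait(NULL);               /* parent waits until the child terminates */
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }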
Process creation can be done either statically or dynamically using the following process constructs;
A) Process Flow Graph –
B) Cobegin-Coend Construct – The process flow graph is not capable of properly representing interleaved processes, where the first operation of one process is started before the last operation of the other process has completed.
[Figure: Interleaved processes A and B, where A = A1 + A2 and B = B1 + B2.]
The Cobegin-Coend construct is simply an extension of the process flow graph; additionally, it properly describes nested process flow.
Cobegin
  S1;
  Begin
    S2;
    Cobegin
      S3; S4; S5;
    Coend
  End
  S6;
Coend
For example, the evaluation of the expression (a+b)*(c+d)-(e/f) can be expressed as;
Begin
  Cobegin
    T1 = a+b;
    T2 = c+d;
    T3 = e/f;
  Coend
  T4 = T1*T2;
  T5 = T4-T3;
End
[Figure: The corresponding process flow graphs — Start → S1 → S2 → {S3, S4, S5 concurrently} → S6 → End, and Start → {T1, T2, T3 concurrently} → T4 → T5 → End. Cobegin/Coend denote concurrent nested process flow.]
Q. Under what circumstances can a parent terminate the execution of its child(ren)?
A process executes its last statement and asks the operating system to delete it (exit); output data pass from child to parent (via wait), and the process's resources are deallocated by the operating system. A parent may terminate the execution of its children (abort) when:
The child has exceeded its allocated resources.
The task assigned to the child is no longer required.
The parent is exiting, and the operating system does not allow a child to continue if its parent terminates (cascading termination).
Multitasking, Multiprogramming, Multiprocessing:
Time Sharing (Fair Share) – It is a facility provided by the OS that gives a virtual-machine view to the user: the user has the feeling that the entire set of resources of the system is dedicated to that particular user at a given instant of time.
The CPU switches from one virtual machine to another for other users. Altogether, we can say that CPU time is shared among the various processes of the user(s), and this feature is called Time Sharing. Multitasking is based on the concept of Time Sharing.
Time Quantum – The time a process occupies the CPU in one go is known as the quantum time for the process that is currently being run by the CPU.
Multitasking – A concept adopted by the OS for the virtually concurrent execution of several processes, achieved by operating the CPU as a multitasking CPU.
The OS enables multitasking by dividing the CPU's execution time among several small time frames known as Time Slices47. In multitasking, the CPU executes the instructions of one process in a given time slice and then switches to another process, one after another in cyclic mode. The cycle is repeated until the completion of the processes.
[Figure: CPU time divided into time slices TS1, TS2, TS3 … TSn, assigned to processes P1, P2, P3 … Pn in turn.]
Single-User Multitasking OS – An OS that allows several processes to execute concurrently (virtually, by time sharing) for a single user is known as a single-user multitasking OS. E.g. Windows 9x.
Multi-User Multitasking OS – An OS that allows several processes to execute concurrently (virtually, by time sharing) for multiple users is known as a multi-user multitasking OS. E.g. UNIX, Linux.
Multitasking Vs Multiprogramming –
Needless to say, a program may comprise several processes, or several processes can be put together to build a program; equally, a program can have only one process.
Multiprogramming                                 Multitasking / Time Sharing
Serves an entire program at a time               Logical extension of multiprogramming; serves an individual process at a time
47
The CPU time is divided into a number of small time frames known as Time Slices.
Features of the OS, Needed for Multiprogramming
1. I/O routine supplied by the system.
2. Memory management – the system must allocate the memory to several jobs.
3. CPU scheduling – the system must choose among several jobs ready to run.
4. Allocation of devices.
Multiprogramming: Multiprogramming is the technique of running several programs at a time using timesharing.
It allows a computer to do several things at the same time; multiprogramming creates logical parallelism.
The concept of multiprogramming is that the operating system keeps several jobs in memory simultaneously. The operating system selects a job from the job pool and starts executing it; when that job needs to wait for any I/O operation, the CPU is switched to another job. The main idea here is that the CPU is never idle.
Multitasking: Multitasking is the logical extension of multiprogramming. The concept of multitasking is quite similar to multiprogramming, but the difference is that the switching between jobs occurs so frequently that the users can interact with each program while it is running. This concept is also known as time-sharing. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the time-shared system.
Multithreading: An application is typically implemented as a separate process with several threads of control. In some situations a single application may be required to perform several similar tasks: for example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have several clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the amount of time that a client might have to wait for its request to be serviced could be enormous.
So it is more efficient to have one process that contains multiple threads serving the same purpose. This approach multithreads the web-server process: the server creates a separate thread that listens for client requests, and when a request is made, rather than creating another process it creates another thread to service the request (see the sketch below).
Multithreading is thus used to gain responsiveness, resource sharing, economy, and utilization of multiprocessor architectures.
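A minimal sketch of that thread-per-request idea with POSIX threads (the handle_request function and the fixed loop of five simulated requests are illustrative stand-ins for real network accept/serve logic):

    #include <pthread.h>
    #include <stdio.h>

    /* Illustrative request handler: a real server would parse and reply here. */
    void *handle_request(void *arg) {
        int id = *(int *)arg;
        printf("serving request %d\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t workers[5];
        int ids[5];
        for (int i = 0; i < 5; i++) {        /* stands in for the accept() loop */
            ids[i] = i;
            /* One lightweight thread per request instead of one
             * heavyweight process per request. */
            pthread_create(&workers[i], NULL, handle_request, &ids[i]);
        }
        for (int i = 0; i < 5; i++)
            pthread_join(workers[i], NULL);
        return 0;
    }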
Threads - A thread (or lightweight process) is a basic unit of CPU utilization; it consists of:
Program counter
Register set
Stack space
A thread shares with its peer threads:
Code section
Data section
Operating-system resources collectively known as a task.
In a multithreaded task, while one server thread is blocked and waiting, a second thread in the same task can run.
Cooperation of multiple threads in the same job confers higher throughput and improved performance.
Applications that require sharing a common buffer (i.e., producer-consumer) benefit from thread utilization.
Threads provide a mechanism that allows sequential processes to make blocking system calls while also achieving parallelism.
Kernel-supported threads (Mach and OS/2).
User-level threads: supported above the kernel, via a set of library calls at the user level (Project Andrew from CMU).
A hybrid approach implements both user-level and kernel-supported threads (Solaris 2).
A multiprocessor system is a parallel system with more than one CPU in close communication; it allows the same computer to have multiple processors. Multiple-processor configurations are very efficient, but only on some applications.
A multiprocessing OS cannot be implemented on hardware that does not support address translation.
Tightly coupled system – the processors share memory and a clock; communication usually takes place through the shared memory.
Advantages:
Increased throughput.
Economical.
Increased reliability.
Graceful degradation.
Fail-safe system.
Types of multiprocessing:
Symmetric multiprocessing – Each processor runs an identical copy of the operating system; many processes can run at once without performance deterioration.
Asymmetric multiprocessing – Each processor is assigned a specific task; a master processor schedules and allocates work to slave processors. More commonly used in extremely large systems.
A program that must interact with input-output devices or other tasks within some fixed time constraints is said to be operating in Real Time. For example, a program that is used to monitor the pressure within a nuclear reactor may be required to receive and process pressure information from an external pressure sensor attached to the reactor every 100 ms. A program that controls the rocket engines on a spacecraft may be required to produce start and stop commands as needed within intervals of ¼ second.
When tasks are executing concurrently, their interactions are often subject to timing constraints if the overall computer system is being used for real-time processing. For example, a task A that wishes to rendezvous with another task B may not be able to delay more than a fixed length of time before proceeding, even without starting the rendezvous. In a real-time computer system, failure of part of the hardware or of an external input-output device often leads to a task being abruptly terminated. If other tasks wait on such a failed task, the entire system of tasks may deadlock and cause a crash of the system.
The special demands of real-time systems require some explicit notion of time that can be used in the control and synchronization of tasks.
Hard real-time: critical data are stored in short-term memory or read-only memory (ROM). This conflicts with time-sharing systems and is not supported by general-purpose operating systems.
Soft real-time: useful in applications (multimedia, virtual reality) requiring advanced operating-system features.
Q. Explain multi-user, multitasking, multiprocessing and real time?
Q. Explain multiprogramming and time sharing techniques of improving CPU utilization?
Timer Interrupt – A timer interrupt denotes that the time slice assigned to a process has expired. The timer interrupt is responsible only for maintaining the duration of the particular time slice assigned to a process: as soon as the duration of that time slice expires, it saves the state and status of the process in the process status register (containing the process table) and invokes the dispatcher to bring another process into the scheme.
Process Status Register & Process Descriptor – The process status register is a register that contains the process table, which holds an entry, known as a process descriptor, for each running, waiting and ready-to-run process. The OS uses the process descriptor while managing processes.
Dispatcher & Scheduler
D I S P A T C H E R – It is a low-level routine (built into the OS), or internal procedure of the OS, that brings a process in the ready-to-run state into its running state by loading the process state from the process control register, where the state of the process was saved in the previous turn of the cycle by the timer interrupt. The selection of the process to be dispatched is based upon some fair scheduling discipline and priority. It gives control of the CPU to the process selected by the short-term scheduler.
As soon as the dispatcher gets the signal from the timer interrupt, it dispatches a process based on the scheduling algorithm and priority and loads the state of the process that was previously saved. It is responsible for providing the following services;
Context switch.
Switching from privileged to user mode.
Jumping to the proper location in the user program.
Dispatch Latency – the time taken by the dispatcher to stop one process and start another running (context switch).
Scheduling a process by the scheduler among the multiple active processes available in the ready queue may be based upon multiple scheduling algorithms and policies. All the scheduling algorithms have one motive: to decide/evaluate the priority of the processes at a given instant of time. The process that has the higher priority at that instant is scheduled, provided there is no priority conflict48. Such scheduling algorithms are based on the following three criteria;
A) Decision Mode – Specifies the instant of time at which the priorities of processes are compared, evaluated and decided for scheduling. The decision mode can be of two types;
1) Non-Preemptive Decision Mode – In non-preemptive mode, once a process is started it is allowed to run to completion, and only after completion of a process can the scheduling decision be made to assign the CPU time slice to another process. E.g. batch processing.
2) Preemptive Decision Mode – In preemptive mode, the process currently being run may be halted by a priority interrupt, and the decision can be made to schedule another process of higher or equal priority and assign it the CPU time slice. It is used in all the circumstances of time-sharing processing. The user considers neither the resources that a certain task will consume, nor the completion of one task before starting another. E.g. WIN NT.
B) Priority Function – It is responsible for evaluating and deciding the priority of a process based on the following process parameters;
1) Number of instructions.
2) Memory requirement.
3) System involvement load.
4) Total service time.
A process has higher priority only if it has lower values of all the above process parameters.
C) Arbitration Rule – Whenever two processes have similar priorities, there is a conflict in deciding and selecting a process for scheduling. To resolve such conflicts, the arbitration rule allows processes to be selected on a round-robin or cyclic basis.
CPU scheduling decisions may take place when a process:
1. Switches from the running to the waiting state.
2. Switches from the running to the ready state.
3. Switches from the waiting to the ready state.
4. Terminates.
Q. Explain different scheduler and their usages?
Q. Differentiate between Short and Long Term Scheduler?
Q. What do you understand by Short and Long Term Scheduler?
Types of Scheduler
A process resides in a queue throughout its lifetime. It is the responsibility of the OS to select a process from the queue using a specific low-level routine known as the Scheduler. Most commercially used schedulers are of three types;
a) Long-Term Scheduler (Job Scheduler) – It is most appropriate for, and used in, batch-processing OSs where several processes are collectively submitted and sequentially executed one after another. In a batch-processing system, all the submitted processes are spooled to a mass-storage device, say the disk, where they are kept for later execution. It is the responsibility of the long-term scheduler (job scheduler) to select a process from this pool and load it into memory for execution. Consequently, we can say that the LTS selects a process from the pool kept on the disk and loads it into memory for execution.
It can also control a stable degree of multiprogramming, where the average rate of process creation equals the average rate of process departure from the system. Thus the LTS is usually invoked when a process leaves the system entirely, and during the (long) interval of process execution the LTS has enough time to select a process from the process pool kept on disk as a queue. SJF scheduling can be preemptive or non-preemptive and frequently uses the long-term scheduler.
b) Short-Term Scheduler (CPU Scheduler) – It is most appropriate for, and used in, time-sharing OSs where several processes are collectively submitted and executed concurrently in context-switch mode, using the available time slices of the CPU. It is the responsibility of the short-term scheduler (CPU scheduler) to select a process from the ready queue and hand it to the CPU for execution. Consequently, we can say that the STS selects a process from the ready queue.
[Figure: Ready Queue → CPU → End, with I/O waiting queue(s) feeding back into the ready queue.]
Long-Term Scheduler (Job Scheduler) Vs Short-Term Scheduler (CPU Scheduler)
The difference b/w the LTS49 and the STS50 is based upon the frequency with which the corresponding scheduler is called to bring a process into its execution state.
The STS executes very frequently, at least once every 100 ms, because it has to bring in another process as soon as the brief duration of time (the time slice) expires for the current process. The LTS executes far less frequently, roughly once per successful completion of a process. Consequently, the execution frequency of the STS is much greater than that of the LTS.
Frequent execution of the STS has a disadvantage: wastage of CPU time during the scheduling of processes. If the STS takes 10 ms to decide and dispatch a process that then executes for 100 ms, then 10/(100+10) = 9% of the CPU is being used by the STS, and that is wasted CPU time. Such wastage of CPU time is not significant with the LTS, because it executes far less frequently than the STS.
Short-Term Scheduler     Selects jobs from the ready queue and connects them to the CPU.
Medium-Term Scheduler    Selects jobs from the blocked/waiting queue and moves them to the ready queue.
Long-Term Scheduler      Selects jobs from the pool of jobs and loads them into the ready queue.
Suppose there is a process Pr-A with a total execution time of 1000 ms. It can be processed in either environment: batch processing (non-preemptive) using the LTS, or time-sharing processing (preemptive) using the STS. We compare the wasted CPU time in both cases.
49
Long Term Scheduler
50
Short Term Scheduler
Scenario 1: Time Sharing using the STS
Let's suppose process Pr-A runs in 10 time slices of 100 ms quantum time each, and the execution time of the STS is 1 ms.
Hence, the total number of STS executions = 10, so the total STS execution time = 10 ms.
Total time for Pr-A = actual execution time + scheduler execution time = 1000 + 10 = 1010 ms.
Wasted CPU time = total execution time including scheduler - actual execution time of Pr-A = 1010 - 1000 = 10 ms.
Scenario 2: Batch Processing using the LTS
Let's suppose the execution time of the LTS is 1 ms; it executes only once for processing Pr-A.
Total time for Pr-A = actual execution time + scheduler execution time = 1000 + 1 = 1001 ms.
Wasted CPU time = 1001 - 1000 = 1 ms.
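A minimal sketch of that arithmetic in C (the burst, quantum and scheduler-cost figures are the ones from the two scenarios above):

    #include <stdio.h>

    /* Wasted CPU time = number of scheduler invocations x scheduler cost. */
    int wasted_ms(int burst_ms, int quantum_ms, int sched_ms) {
        int invocations = (burst_ms + quantum_ms - 1) / quantum_ms; /* ceiling */
        return invocations * sched_ms;
    }

    int main(void) {
        /* Scenario 1: STS, 1000 ms burst in 100 ms slices, 1 ms per dispatch. */
        printf("STS waste: %d ms\n", wasted_ms(1000, 100, 1));   /* 10 ms */
        /* Scenario 2: LTS, scheduled once for the whole 1000 ms job. */
        printf("LTS waste: %d ms\n", wasted_ms(1000, 1000, 1));  /*  1 ms */
        return 0;
    }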
Scheduling Algorithms
The following types of scheduling algorithm can be selected to schedule a process among several processes;
1.) First Job First Serve Scheduling – In the FJFS scheduling algorithm, the process that arrives first among the several processes at a particular instant of time is given the higher priority, i.e. it is selected to be scheduled for execution.
Process ID   Arrival Time   CPU Burst Time (ms)
P1           T0             10
P2           T1             7
P3           T2             23
P4           T3             12
where T0 < T1 < T2 < T3 are the arrival times of processes P1, P2, P3 and P4.
[Figure: The processes queue up for the CPU in arrival order — Pn … P3, P2, P1 → CPU — alongside a bar chart of their burst times.]
2.) Short Job First Serve Scheduling – Whenever the scheduling of a process cannot be decided by FJFS because more than one job arrived at the same instant of time, the scheduler opts for SJFS. In the SJFS scheduling algorithm, the process that has the least load (evaluated from its burst time) among the several processes is given priority, i.e. it is selected to be scheduled for execution. SJF yields the least waiting time. It can be preemptive or non-preemptive, and it may lead to indefinite blocking.
3.) Priority-Based Scheduling – It is a special case of SJF where a priority is associated with each process and the CPU is allocated according to the highest priority. Equal priorities are scheduled on an FCFS basis. It can be preemptive or non-preemptive, and it may lead to indefinite blocking.
4.) Round Robin Scheduling Algorithm – Whenever the scheduling of a process cannot be decided by FJFS or SJFS because more than one job arrived at the same instant of time with equal load/priority, the scheduler opts for Round Robin scheduling. In the Round Robin algorithm, a process is selected arbitrarily (randomly) among the several processes that have equal load and the same arrival time. Execution is scheduled for all the processes one after another in time-sharing mode; RR is mainly designed for time-sharing systems, being preemptive b/w processes. In RR scheduling, the ready queue is treated as a circular queue; it is most suitable for a time-shared OS and is the preemptive version of the FIFO algorithm. Its performance depends highly upon the quantum time and varies from one quantum time to another on the same system. Round Robin scheduling is implemented by using a process waiting queue, which can be of the following types;
a) Single-Level Process Waiting Queue – In a single-level queue, a process whose time quantum has expired is inserted at the back of the process ready queue, and another process is called for execution from the same queue. The cycle is repeated in first-in-first-out manner until the completion of all processes.
[Figure: P1 → P2 → P3 → … → Pn → P1 circulating through one queue shared by interactive and batch processes.]
b) Multi-Level Queue – The ready queue is partitioned into separate queues, e.g. one for interactive (foreground) processes and one for batch (background) processes.
c) Multi-Level Feedback Queue – There might be two types of processes in a single process waiting queue: one having a large load, which gets low priority, and another having less load, which gets high priority. This scheme partitions the process ready queue into several queues according to different CPU burst times. If a process requires less CPU time (low CPU burst) it is assigned to a higher-priority queue, and one that requires more CPU time is assigned to a lower-priority queue at a lower level. Without feedback, a high-priority process might have to wait for a low-priority process; this waiting problem may cause starvation, so the OS maintains separate queues, one for the high-priority processes and another for the low-priority processes. The high-priority queue contains only processes having higher priority, i.e. less load, and the low-priority queue contains the processes having a large load. As soon as the high-priority queue becomes empty after completing its processes, processes from the low-priority queue are promoted to the high-priority queue for execution, which avoids the problem of starvation (infinite waiting).
Note:
Preemptive: Round Robin, Priority-Based scheduling algorithms.
Non-Preemptive: FIFO, SJF scheduling algorithms.
Q. State an advantage of having different time-quantum sizes for different levels of a multilevel queuing system?
The priority of a queue is based upon its assigned quantum time, which varies from level to level, so that a process can;
a. Get the CPU quickly, according to its burst time, and finish its CPU burst quickly.
b. Be protected from starvation, by promotion from the lowest-priority to the highest-priority queue.
c. -------------------------
d. -------------------------
Hence high-priority processes need not wait for the processes of the low-priority queue.
If a process in a queue at any level completes its execution, then the matter is settled; otherwise the process is preempted as its quantum expires and demoted to the end of the immediately lower-level queue. If at the lower level the quantum again expires and the process has not completed, it is demoted to the last queue, which works on an FCFS basis.
If queue 0 is empty then the execution of queue 1 begins, and if queue 0 and queue 1 are both empty then queue 2 begins its execution based on FCFS.
If queue 0 is empty and the execution of queue 1 is currently scheduled, and meanwhile a process belonging to queue 0 arrives, then the execution of queue 1 is preempted immediately. The same applies between queue 2 and queue 1/queue 0.
Selecting a CPU scheduling algorithm for a particular system – The selection criteria are based upon the following factors;
a) CPU Utilization – The percentage of time that the processor is busy. We want to utilize the CPU as much as possible because it is a costly resource of the system; utilization may range from 0-100%.
b) Response Time (Rt) – For an interactive system, the time from the submission of a request until the first response is produced. It is the amount of time it takes to start responding to a submitted process, not the time it takes to output the complete response.
Response Time (Rt) of a process (P1) = First Response Time (P1) - Arrival Time (P1)
Average Response Time (ARt) = sum of the response times of all processes / number of processes.
c) Waiting Time (Wt) – The time a process spends waiting in the ready queue to get into memory for execution.
Total Wt = sum of all time previously spent waiting in the ready/waiting/I/O queues.
The CPU scheduling algorithm, RR especially, does not affect the amount of time during which a process is executed by the CPU or does I/O operations.
Note: The waiting time of a process does not count the execution time of that process during the cyclic execution by the CPU in RR.
Waiting Time (Wt) for process (P1) = Starting Time (P1) - Arrival Time (P1)
Average Waiting Time (AWt) = sum of the waiting times of all processes / number of processes.
d) Throughput – It is the measure of work: the number of processes completed per unit of time, e.g. 10 processes/second. It measures the performance of the CPU.
e) Turnaround Time (Tt) – The interval from the submission of a process to its time of completion. Mostly preferred in batch processing.
The turnaround time of a process counts the execution time of that process during the cyclic execution, along with the waiting time.
Turnaround Time (Tt) = waiting time in the waiting/ready/I/O queues + execution time + context-switch time.
Turnaround Time (Tt) of a process (P1) = Finish Time (P1) - Arrival Time (P1)
Average Turnaround Time (ATt) = sum of the turnaround times of all processes / number of processes.
f) Context-Switch Time (Ct) – The time to switch the CPU from the current process to another by saving the state of the old process and loading the saved state of the new process from the process control block (PCB) register. Context-switch time is pure overhead, because the system does no useful work while switching from one process to another. The context-switch51 time is highly dependent upon hardware support.
51
It is a part of interrupt handling.
E.g., assume we have a workload of 5 processes as given below, along with their CPU burst times and priorities. All 5 processes arrive at time 0, in the given order, with the given lengths of CPU burst time in milliseconds. A larger CPU burst time causes a process to have lower priority. We shall evaluate the best scheduling algorithm for the given set of processes using the average waiting time;
Process ID   Arrival Time   CPU Burst Time (ms)   Priority
P1           0              5                     3
P2           0              14                    1
P3           0              13                    3
P4           0              7                     1
P5           0              11                    4
1. First Come First Serve (FCFS) – The FCFS scheduling algorithm for the given set of processes is represented by the following Gantt chart;
P1 | P2 | P3 | P4 | P5
0    5    19   32   39   50
Waiting Time of Process   Completed Processes Before It   Duration in ms
P1                        (none)                          0
P2                        P1                              0+5=5
P3                        P1+P2                           0+5+14=19
P4                        P1+P2+P3                        0+5+14+13=32
P5                        P1+P2+P3+P4                     0+5+14+13+7=39
Total Waiting Time (P1+P2+P3+P4+P5)                       0+5+19+32+39=95 ms
Avg Waiting Time = Total Waiting Time / Number of Processes   95/5=19.00 ms
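A minimal sketch of this FCFS waiting-time computation in C (burst times taken from the table above, all arrivals at time 0):

    #include <stdio.h>

    int main(void) {
        int burst[] = {5, 14, 13, 7, 11};        /* P1..P5, FCFS order */
        int n = 5, elapsed = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            /* Under FCFS a process waits for all earlier bursts to finish. */
            total_wait += elapsed;
            printf("P%d waits %d ms\n", i + 1, elapsed);
            elapsed += burst[i];
        }
        printf("total %d ms, average %.2f ms\n",
               total_wait, (double)total_wait / n);   /* 95 ms, 19.00 ms */
        return 0;
    }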
2) Short Job First Serve / Shortest Job First (SJF) Scheduling – It is of two types, non-preemptive and preemptive. Non-preemptive SJF for the given set of processes;
P1 | P4 | P5 | P3 | P2
0    5    12   23   36   50
Waiting Time of Process   Completed Processes Before It   Duration in ms
P1                        (none)                          0
P4                        P1                              0+5=5
P5                        P1+P4                           0+5+7=12
P3                        P1+P4+P5                        0+5+7+11=23
P2                        P1+P4+P5+P3                     0+5+7+11+13=36
Total Waiting Time (P1+P4+P5+P3+P2)                       0+5+12+23+36=76 ms
Avg Waiting Time = Total Waiting Time / Number of Processes   76/5=15.20 ms
Preemptive SJF (Shortest Remaining Time First) – Preempts the process currently being executed when a new process arrives at the ready queue with a burst time / remaining time shorter than that of the current one.
Process ID   Arrival Time   CPU Burst Time (ms)
P1           0              6
P2           1              2
P3           2              7
P4           3              3
At time 1 the remaining time of P1 (5 ms) is larger than P2's burst, so P2 preempts P1; likewise P4 runs before P1 resumes. But P1's remaining time (5 ms) is smaller than P3's burst (7 ms), hence P1 executes prior to P3. Represented by the Gantt chart;
P1 | P2 | P4 | P1 | P3
0    1    3    6    11   18
3) Priority-Based Scheduling – It is a special case of SJF where a priority number (an integer) is associated with each process and the CPU is allocated to the process with the highest priority. Equal priorities are scheduled on an FCFS basis. It may be preemptive or non-preemptive.
Q. What is the problem with the priority scheduling algorithm? Also state the solution?
Problem: Starvation – low-priority processes may never execute, or wait indefinitely.
Solution: Aging – as time progresses, the priority of a waiting process is incremented.
Priority Evaluation – There is no general standard/scale for priority representation, i.e. whether 0 represents the highest or the lowest priority; this can lead to confusion. In these notes, high numbers represent high priority and low numbers represent low priority. A larger CPU burst time causes a process to have lower priority. Processes of equal priority are scheduled on a first-come-first-serve basis.
[Chart: priority evaluation — no priority, low priority, high priority.]
The priority-based scheduling algorithm for the given set of processes, arranged in descending order of associated priority so as to evaluate the highest-priority process, is represented by the Gantt chart below;
Process ID   CPU Burst Time (ms)   Priority
P5           11                    4
P1           5                     3
P3           13                    3
P4           7                     1
P2           14                    1
P5 P1 P3 P4 P2
0 11 16 29 36 50
4) Round Robin Scheduling (with quantum time = 10 ms) – Each process gets a small unit of CPU time (the time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
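A minimal sketch of a round-robin waiting-time simulation in C (bursts from the table below, quantum 10 ms, all arrivals at 0; waiting time counted as finish time minus burst time):

    #include <stdio.h>

    int main(void) {
        int burst[]  = {5, 14, 13, 7, 11};   /* P1..P5, all arriving at t = 0 */
        int remain[] = {5, 14, 13, 7, 11};
        int n = 5, q = 10, t = 0, left = n, total_wait = 0;

        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (remain[i] == 0) continue;
                int run = remain[i] < q ? remain[i] : q;  /* at most one quantum */
                t += run;
                remain[i] -= run;
                if (remain[i] == 0) {             /* process finishes          */
                    total_wait += t - burst[i];   /* waiting = finish - burst  */
                    printf("P%d waits %d ms\n", i + 1, t - burst[i]);
                    left--;
                }
            }
        }
        printf("total %d ms, average %.2f ms\n",
               total_wait, (double)total_wait / n);   /* 132 ms, 26.40 ms */
        return 0;
    }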
Process ID   CPU Burst Time (ms)   Priority
P1           5                     3
P2           14                    1
P3           13                    3
P4           7                     1
P5           11                    4
[Chart: the RR schedule for the given set of processes with a 10 ms quantum, shown as a Gantt chart. TTt = total turnaround time; TWt = total waiting time; Pn denotes the last/final turn of a process just prior to its completion.]
Comparison & Conclusion Now, we can easily compare the best suitable scheduling
algorithm using Total Waiting & Average Waiting Time as given to left;
According to total waiting time, the algorithms rank as follows;
Type of Scheduling   Total Waiting Time   Average Waiting Time
SJFS                 76.00 ms             15.20 ms
PB                   92.00 ms             18.40 ms
FCFS                 95.00 ms             19.00 ms
RR                   132.00 ms            26.40 ms
[Charts: bar charts of the total waiting times (76, 92, 95, 132 ms) and average waiting times (15.2, 18.4, 19, 26.4 ms) for SJFS, PB, FCFS and RR.]
Self-Practice – Compute the average turnaround time using each of the following scheduling policies:
a) FCFS
c) RR with time quantum = 5 units
Disk/File Management
The OS is responsible for providing the services52 to perform the following tasks;
52
A file system: a set of routines that manage directories, device accesses and buffers, enabling programs to access files without being concerned with the details of physical storage characteristics and device timing.
Types Of Directory Structure
Single Level - Each file must have a unique name, because there is no concept of multiple directories; hence a filename can belong to only one user (single user) at a time. [Diagram: root directory Rd containing files F1, F2, F3.]
Double Level - A separate directory for each user (multi-user), each holding that user's own files along with their attributes (H+R+W+A). [Diagram: root directory Rd containing per-user subdirectories S1 ... Sn, each holding files F1 ... Fn.]
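As a small illustrative sketch (user and file names are hypothetical), a two-level directory can be modelled as a nested dictionary, which shows why the same filename may exist under different users:

    # Two-level directory: root -> per-user directory -> files with attributes.
    # Attribute letters follow the H/R/W/A flags mentioned above.
    root = {
        "user1": {"F1": {"R", "W"}, "F2": {"R"}},
        "user2": {"F1": {"H", "R", "W", "A"}},  # same name "F1", different user
    }

    def lookup(user, filename):
        # Resolve a file by (user, name); returns its attribute set or None.
        return root.get(user, {}).get(filename)

    print(lookup("user2", "F1"))  # {'H', 'R', 'W', 'A'} (set order may vary)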
3) Providing File System Structure - A disk may be partitioned into one or more logical disks. A file system occupies one logical disk and contains the following information; e.g. FAT16, FAT32, NTFS, i-node, etc.
A. Method for accessing disk contents
B. Available disk space
C. File(s) Dir(s) Information
D. Method for creating data blocks.
E.g. the time to transfer 4 KB from a disk requiring 40 ms to locate a track, making 3000 revolutions per minute, and with a data transfer rate of 1000 KB per second is:
Access Time = 40 ms (seek) + 10 ms (average rotational latency, half of the 20 ms per revolution at 3000 rpm) + 4 ms (transfer of 4 KB at 1000 KB/s) = 54 ms.
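A small sketch of the same calculation in Python (the helper name is my own):

    def disk_access_time_ms(seek_ms, rpm, transfer_kb_per_s, size_kb):
        latency_ms = 0.5 * (60_000 / rpm)            # avg latency = half a revolution
        transfer_ms = size_kb / transfer_kb_per_s * 1000
        return seek_ms + latency_ms + transfer_ms

    print(disk_access_time_ms(40, 3000, 1000, 4))    # 54.0 ms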
4) Disk Scheduling - It is the responsibility of the OS to use the disk drive and disk controller efficiently, so as to achieve fast access time and better disk bandwidth. The OS improves the access time and disk bandwidth by scheduling the servicing of I/O requests. Whenever a process needs an I/O operation to or from the disk, it issues a system call to the OS with the following arguments (Mop, Dadd, Madd, Nby):
a) Mode of operation, i.e. whether it is input or output.
b) The disk address for the data transfer.
c) The memory address for the data transfer.
d) Number of bytes to be transferred.
1) First Come First Serve Scheduling - It is the simplest form of disk scheduling, where the request that arrives first is processed/serviced first. Suppose disk requests kept in the disk queue come to the disk driver for I/O operations to and from the tracks in the order:
32 40 70 67 90 84 32 77
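A minimal sketch of FCFS head movement in Python; the initial head position is an assumption (the notes use track 66 in the SCAN example below):

    requests = [32, 40, 70, 67, 90, 84, 32, 77]
    head = 66                      # assumed starting track
    movement = 0
    for r in requests:             # service strictly in arrival order
        movement += abs(r - head)
        head = r
    print("Total head movement:", movement)  # 201 tracks under this assumption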
3) Scan / Elevator53 Algorithm - The arm of the disk starts I/O operations at one end of the disk and moves towards the other end, servicing all the requests in the disk queue for I/O operations on the underlying cylinders54. At the other end, the direction of the head is reversed, and it services the underlying requests on the way back towards the beginning of the disk; servicing continues in this manner. Suppose disk requests kept in the disk queue come to the disk driver for I/O operations to and from tracks in the order:
32 40 70 67 90 84 32 77
The disk head is initially at track 66, so it first moves towards the end, servicing track 67, then 70, then 77, then 84, then 90; thereafter the arm/head reverses from the end towards track 0, servicing track 40, then 32, then 32.
[Chart: SCAN servicing order 66 -> 67 -> 70 -> 77 -> 84 -> 90 -> 40 -> 32 -> 32; cylinder number vs. disk request.]
SCAN is an improvement in performance over FCFS, but not better than SSTF.
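A minimal SCAN sketch in Python (the function name is my own); it reproduces the servicing order above:

    def scan(requests, head):
        # Elevator: service everything at or above the head while moving up,
        # then reverse and service the remaining tracks on the way down.
        up = sorted(r for r in requests if r >= head)
        down = sorted((r for r in requests if r < head), reverse=True)
        return up + down

    print(scan([32, 40, 70, 67, 90, 84, 32, 77], head=66))
    # [67, 70, 77, 84, 90, 40, 32, 32]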
4) C-Scan Algorithm - The arm of the disk starts I/O operations at one end of the disk and moves towards the other end, servicing all the requests in the disk queue for I/O operations on the underlying cylinders55. At the other end, the head immediately returns to the beginning of the disk without servicing any request along the way; the cylinders are treated as a circular list. The average head movement in this algorithm is more than in the SCAN algorithm, but it provides a more uniform waiting time. Suppose disk requests kept in the disk queue come to the disk driver for I/O operations to and from tracks in the order:
32 40 70 67 90 84 32 77
The disk head is initially at track 66, so it first moves towards the end, servicing track 67, then 70, then 77, then 84, then 90; thereafter the arm/head returns from the end to track 0 without servicing, and then services track 32, then 32, then 40.

Q. Disk requests come into the disk driver for tracks in the order 55, 58, 39, 18, 90, 160, 150, 38, 184. The disk arm is initially placed at track 100. A seek takes 5 ms per track moved. Compare the average seek lengths.

53 Like an elevator: first come to the ground floor (0) and then move/elevate to the top floor. C-SCAN is a variation of SCAN.
54 A set of tracks that are at one arm position forms a cylinder.
55 A set of tracks that are at one arm position forms a cylinder.
[Chart: C-SCAN servicing order 66 -> 67 -> 70 -> 77 -> 84 -> 90 -> 32 -> 32 -> 40; cylinder number vs. request number.]
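The C-SCAN variant differs from the SCAN sketch only in the wrap-around; a minimal version (again, my own helper):

    def c_scan(requests, head):
        # Service upward to the end, jump back to track 0 without servicing,
        # then continue upward through the remaining tracks (circular list).
        up = sorted(r for r in requests if r >= head)
        wrapped = sorted(r for r in requests if r < head)
        return up + wrapped

    print(c_scan([32, 40, 70, 67, 90, 84, 32, 77], head=66))
    # [67, 70, 77, 84, 90, 32, 32, 40]

The same helpers can be applied to the practice question above (requests 55, 58, 39, 18, 90, 160, 150, 38, 184 with the head at track 100) by summing the track-to-track distances and multiplying by 5 ms per track.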
File System Concern - Several constraints bind a file system to a particular OS; hence the following characteristics vary from one OS to another:
Size - The size of files and of the entire file system varies from one OS to another.
Performance - Read/write access time also varies from one file system to another, and hence from one OS to another, because the R/W access methods adopted by the file systems differ.
Data Availability - Recovering data/files from a disk after a failure is known as data availability. It also varies from one file system to another, because the method adopted for it (say checksums, mirroring, etc.) varies from one FS to another.
Read Ahead & Read Behind - In Read Ahead, the read of the next block from the FS is initiated while one block is still being processed by the CPU, whereas in Read Behind, the read of the next block from the FS is not initiated until the block currently being processed by the CPU has been completed.
These concepts apply only to read operations and are provided by either the OS or the disk controller.
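A small sketch of the read-ahead idea in Python, using a background thread to fetch the next block while the CPU works on the current one; read_block and process are hypothetical placeholders:

    from concurrent.futures import ThreadPoolExecutor

    def read_block(n):
        return f"block-{n}"   # placeholder for a real disk read

    def process(block):
        pass                  # placeholder for CPU work on one block

    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(read_block, 0)       # start fetching block 0
        for n in range(1, 8):
            block = future.result()               # wait for the block in flight
            future = pool.submit(read_block, n)   # read ahead: fetch the next block
            process(block)                        # ...while processing this one
        process(future.result())                  # last block

Read Behind corresponds to the plain sequential loop: fetch a block, process it completely, then fetch the next.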
The CPU scheduler and the disk scheduler can be compared as follows:
CPU Scheduling: While the CPU is busy, the processes are kept in the waiting/ready queue maintained by the OS. As soon as the CPU becomes free, the OS schedules a process from the ready queue.
Disk Scheduling: While the drive and controller are busy, the requests are kept in the disk queue maintained by the OS. As soon as the drive and controller become free, the OS schedules a request from the queue.
CPU Scheduling: The selection of a process among several processes for execution is based on the CPU-scheduling algorithms: First Come First Serve, Shortest Job First, Priority Based, and Round Robin.
Disk Scheduling: The selection of a request among several requests for input or output is based on the disk-scheduling algorithms: FCFS, SSTF, SCAN, C-SCAN, and LOOK.
CPU Scheduling: The context-switch time and the execution time of the scheduler (long/short term), as well as maintaining the Process Control Block and registers, are overheads.
Disk Scheduling: There is no concept of context-switch time, scheduler execution time, or maintaining a Process Control Block.
[Diagram: a protection domain D1 with its objects and access rights.]
The association of process and domain can be of two types:
Static - The domain contains a larger set of access rights, enough to fulfil all the requirements of a process in one place/domain. It is more complex than the dynamic type but simpler to use.
Dynamic - The domain contains only a limited set of access rights, which may not satisfy every requirement of a process, so the process is free to migrate from one domain to another as required. It is simpler than the static type but more complex in operation.
Domain Qualifiers -
Each user may be a domain, where the accessible objects are identified by the user's identity.
Each process may be a domain, where the accessible objects are identified by the process's identity.
Each procedure may be a domain.
Access Matrix - A matrix consisting of rows and columns, where a row represents a domain and a column represents an object. Furthermore, it is a way to define the access rights of domains over objects for a process/user. It specifies a variety of policies (what will be done) and provides an appropriate mechanism (how it will be done) for defining and implementing strict control over both static and dynamic association of process and domain, as well as over the owner of an object.
Access Matrix (the row for D1 constitutes the capability list of Domain D1):

Domain \ Object   O1                   O2   Printer   File X   D1       D2
D1                Owner, Read, Write   X    Print     R, W              Switch
D2                                                    W                 Switch
D3                X                              

     W*       Switch   Switch
D4                R, X                      Print     R
Capability56 List - Specifies the objects and the access rights/permissible operations allowed for a particular domain; it is represented by a row of the access matrix. It is maintained by the OS rather than by the process, but can be accessed by the user. E.g. Domain D1 has the capability list that specifies:
• The domain D1 can execute the object O2.
• The domain can access the printer to print.
• The domain can read and write from/to file X.
• The process associated with domain D1 can migrate to D2 as per requirement, i.e. dynamic association.
A process having access to D1 has the capability to exercise the access rights of all the objects that belong to domain D1.
56 The physical name of an object is known as a capability.
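A minimal sketch of an access matrix and the derived capability list in Python; only the D1 row, spelled out above, is filled in:

    # Access matrix as a dict of dicts: rows = domains, columns = objects.
    access_matrix = {
        "D1": {"O2": {"execute"}, "Printer": {"print"},
               "File X": {"read", "write"}, "D2": {"switch"}},
    }

    def allowed(domain, obj, right):
        # Policy check: does this domain hold this right on this object?
        return right in access_matrix.get(domain, {}).get(obj, set())

    def capability_list(domain):
        # A capability list is simply that domain's row of the matrix.
        return access_matrix.get(domain, {})

    print(allowed("D1", "File X", "write"))  # True
    print(capability_list("D1"))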
                  Access List                                  Capability List
Belongs To        The object(s)                                The domain(s)
Represented By    The column(s)                                The row(s)
Maintained By     The process                                  The OS
Specifies         Access to an object and the permissible      All the objects and the permissible
                  operations by all the domains                operations belonging to the domain
Objective         Object specification                         Domain specification
Security - The OS is responsible for providing security for the resources of the single/multiple users. The OS ensures the following things:
1) Ownership of resources must be respected.
2) Disk files must be protected from users other than the owner.
3) Use of passwords to control access.
4) Encryption of files.
5) Encryption of data sent across a network.
In support of security, some OSs (say Win 2000) maintain an Access Control List57, containing the permissions on each resource for the corresponding users.
Encryption
Types of Encryption
a) Symmetric - Uses the same key for encryption and decryption.
b) Asymmetric - Uses a pair of different keys: a public key for encryption and a private key for decryption.
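A toy sketch of the symmetric case in Python (XOR with a repeating key, purely illustrative and not a real cipher): the defining property is that one shared key both encrypts and decrypts.

    from itertools import cycle

    def xor_crypt(data: bytes, key: bytes) -> bytes:
        # XOR is its own inverse, so the same call encrypts and decrypts.
        return bytes(b ^ k for b, k in zip(data, cycle(key)))

    msg = b"top secret"
    ct = xor_crypt(msg, b"key")            # encrypt with the shared key
    assert xor_crypt(ct, b"key") == msg    # decrypt with the very same key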
Distributed System
Distributed Systems - A collection of processors, each having its own local memory; the processors communicate with each other through common lines such as high-speed buses or telephone lines. The computation is distributed among several physical processors.
Loosely coupled system - Each processor has its own local memory; the processors communicate with one another through various communication lines, such as high-speed buses or telephone lines.
Tightly coupled system - Several processors share a common memory and a clock; the communication is performed through the common memory.
Advantages of distributed systems:
• Resource sharing
• Computation speed-up - load sharing
• Reliability
• Communication