
Operating System (MU - Sem 4 - IT)                                              Table of Contents

Module I

Chapter 1 :  Overview of Operating System                                           1-1 to 1-30

Syllabus : Introduction; Operating System Structure and Operations; Process, Memory and Storage
management, Protection and security, Distributed and special purpose Systems; System Structure :
Operating system services and interface, System calls and its types, System programs, Operating
System Design and Implementation, OS structure, Virtual machines, OS debugging and generation,
System boot.

     Syllabus Topic : Introduction .................................................... 1-1
1.1       Introduction to Operating System (Dec. 14, June 15, Nov. 15) .................. 1-1
     Syllabus Topic : Operating System Structure ..................................... 1-1
1.2       Operating System Structure .................................................... 1-1
1.2.1     Monolithic Systems ............................................................ 1-2
1.2.2     Layered Systems ............................................................... 1-2
1.2.3     Client-Server Model ........................................................... 1-3
1.2.4     Monolithic Kernel vs. Microkernel (June 15, Nov. 15, Dec. 16) ................. 1-4
     Syllabus Topic : Operating System Operations .................................... 1-5
1.3       Operating System Operations ................................................... 1-5
1.3.1     Dual Mode Operation ........................................................... 1-5
1.3.2     Timer ......................................................................... 1-6
     Syllabus Topic : Process Management ............................................. 1-6
1.4       Process Management ............................................................ 1-6
     Syllabus Topic : Memory Management .............................................. 1-7
1.5       Memory Management ............................................................. 1-7
     Syllabus Topic : Storage Management ............................................. 1-7
1.6       Storage Management ............................................................ 1-7
1.6.1     File Management ............................................................... 1-8
1.6.2     Mass Storage Management ....................................................... 1-8
1.6.3     Caching ....................................................................... 1-8
1.6.4     I/O Systems ................................................................... 1-9
     Syllabus Topic : Protection and Security ........................................ 1-9
1.7       Protection and Security ....................................................... 1-9
1.7.1     Threats ....................................................................... 1-10
     Syllabus Topic : Distributed System ............................................. 1-11
1.8       Distributed System ............................................................ 1-11
     Syllabus Topic : Special Purpose Systems ........................................ 1-13
1.9       Special Purpose Systems ....................................................... 1-13
1.9.1     Real-Time Embedded Systems .................................................... 1-13
1.9.2     Multimedia Systems ............................................................ 1-14
1.9.3     Handheld Systems .............................................................. 1-14
     Syllabus Topic : Operating System Services ...................................... 1-15
1.10      Operating System Services (May 16) ............................................ 1-15
1.11      Objectives of Operating System (June 15, Nov. 15) ............................. 1-16
1.11.1    Functions of Operating System (Dec. 14, June 15, Nov. 15) ..................... 1-16
     Syllabus Topic : Operating-System Interface ..................................... 1-17
1.12      User Operating-System Interface ............................................... 1-17
     Syllabus Topic : System Calls ................................................... 1-18
1.13      System Calls (June 15, Nov. 15, May 16) ....................................... 1-18
     Syllabus Topic : Types of System Calls .......................................... 1-19
1.13.1    Types of System Calls (June 15, Nov. 15) ...................................... 1-19
1.13.2    Some Examples of System Calls ................................................. 1-20
     Syllabus Topic : System Programs ................................................ 1-20
1.14      System Programs ............................................................... 1-20
1.14.1    Comparison between System Program and Application Program ..................... 1-21
     Syllabus Topic : Operating System Design and Implementation ..................... 1-22
1.15      Operating System Design and Implementation .................................... 1-22
1.15.2    Separating Policies from Mechanisms ........................................... 1-22
1.15.3    Implementation ................................................................ 1-22
     Syllabus Topic : Virtual Machines ............................................... 1-23
1.16      Virtual Machines .............................................................. 1-23
1.16.1    History ....................................................................... 1-23
1.16.2    Benefits ...................................................................... 1-24
1.16.3    Simulation .................................................................... 1-25
1.16.4    Para-Virtualization ........................................................... 1-25
1.16.5    Implementation ................................................................ 1-25
1.16.6    Examples ...................................................................... 1-25
1.16.6(A) VMware ........................................................................ 1-25
1.16.6(B) The Java Virtual Machine ...................................................... 1-26
1.16.6(C) The .NET Framework ............................................................ 1-26
     Syllabus Topic : Operating System Debugging ..................................... 1-27
1.17      Operating System Debugging .................................................... 1-27
1.17.1    Failure Analysis .............................................................. 1-27
1.17.2    Performance Tuning ............................................................ 1-27
1.17.3    DTrace ........................................................................
     Syllabus Topic : Operating System Generation .................................... 1-28
1.18      Operating System Generation ................................................... 1-28
     Syllabus Topic : System Boot ..................................................... 1-28
1.19      System Boot ................................................................... 1-28
1.20      Exam Pack (University and Review Questions) ................................... 1-29
Chapter 2 :  Process Management

Syllabus : Process Concept : Process Scheduling, Operation on processes and Interprocess
communication; Multithreading : Multithreading models and thread libraries, threading issues;
Process Scheduling : Basic concepts, Scheduling algorithms and Criteria, Thread Scheduling and
Multiple Processor Scheduling.

2.1       Introduction ......................................................... 2-1
     Syllabus Topic : Process Concept .................................... 2-1
2.2       Process Concept (Dec. 14) ............................................ 2-2
2.3       Context Switch ....................................................... 2-2
     Syllabus Topic : Operation on Processes ............................. 2-2
2.4       Operations on Processes .............................................. 2-2
2.4.1     Process Creation ..................................................... 2-2
2.4.2     Process Termination .................................................. 2-3
2.5       Process Control Block (Dec. 14) ...................................... 2-3
2.6       Process States and Process State Transition Diagram .................. 2-4
2.7       Process vs. Thread ...................................................
     Syllabus Topic : Process Scheduling ................................. 2-6
2.8       Process Scheduling ................................................... 2-6
2.8.1     Scheduling Queues and Schedulers ..................................... 2-6
2.8.1(A)  Long-term Scheduler .................................................. 2-7
2.8.1(B)  Short-term Scheduler ................................................. 2-7
2.8.1(C)  Medium-term Scheduler ................................................ 2-7
2.8.1(D)  Comparison of Three Schedulers ....................................... 2-7
     Syllabus Topic : Interprocess Communication ......................... 2-8
2.9       Interprocess Communication ...........................................
2.9.1     Message Passing ......................................................
2.9.2     Shared Memory ........................................................
     Syllabus Topic : Multithreading ....................................
2.10      Multithreading ....................................................... 2-9
2.11      Types of Threads ..................................................... 2-9
     Syllabus Topic : Process - Multithreading Models ....................
2.12      Multithreading Models ................................................
     Syllabus Topic : Thread Libraries ...................................
2.13      Thread Libraries .....................................................
2.13.1    POSIX Pthreads ....................................................... 2-12
2.13.2    Win32 Threads ........................................................ 2-12
2.13.3    Java Threads ......................................................... 2-12
     Syllabus Topic : Threading Issues ................................... 2-12
2.14      Threading Issues .....................................................
2.14.1    The fork() and exec() System Calls ...................................
2.14.2    Cancellation .........................................................
2.14.3    Signal Handling ......................................................
2.14.4    Thread Pools .........................................................
2.14.5    Thread-Specific Data .................................................
2.14.6    Scheduler Activations ................................................
     Syllabus Topic : Process Scheduling - Basic Concepts ................
2.15      Process Scheduling ...................................................
2.15.1    Scheduling Decisions .................................................
2.15.2    Types of Scheduling ..................................................
     Syllabus Topic : Scheduling Criteria ................................
2.16      Scheduling ...........................................................
2.16.1    Scheduling Criteria ..................................................
     Syllabus Topic : Scheduling Algorithms ..............................
2.16.2    Scheduling Algorithms ................................................
2.16.2(A) First In First Out (FIFO) ............................................
2.16.2(B) Shortest Job First (SJF) .............................................
2.16.2(C) Priority Scheduling ..................................................
2.16.2(D) Round Robin Scheduling ...............................................
2.16.2(E) Multilevel Queue Scheduling ..........................................
2.16.2(F) Multilevel Feedback-Queue Scheduling .................................
2.17      Examples on Uniprocessor Scheduling Algorithms .......................
     Syllabus Topic : Thread Scheduling ..................................
2.18      Thread Scheduling ....................................................
2.18.1    Contention Scope .....................................................
2.18.2    Pthread Scheduling ...................................................
     Syllabus Topic : Multiple Processor Scheduling ......................
2.19      Multiple-Processor Scheduling ........................................
2.19.1    Approaches to Multiple-Processor Scheduling ..........................
2.19.2    Processor Affinity ...................................................
2.19.3    Load Balancing .......................................................
2.19.4    Multicore Processors .................................................
2.19.5    Virtualization and Scheduling ........................................
2.19.6    Other Multiprocessor Scheduling Approaches ...........................
2.19.6(A) Load Sharing .........................................................
2.19.6(B) Gang Scheduling ......................................................
2.19.6(C) Dedicated Processor Assignment .......................................
2.19.6(D) Dynamic Scheduling ...................................................
          Exam Pack (University and Review Questions) .........................
Operating System (MU - Sem 4 - IT)                                              Table of Contents

Module III

Chapter 3 :  Process Coordination                                                    3-1 to 3-27

Syllabus : Synchronization : The Critical Section Problem, Peterson's Solution, Synchronization
Hardware and Semaphores, Classic problems of synchronization, Monitors, Atomic transactions;
Deadlocks : System Model, Deadlock Characterization, Methods for Handling Deadlocks, Deadlock
Prevention, Deadlock Avoidance, Deadlock Detection, Recovery from Deadlock.

     Syllabus Topic : Synchronization ..................................... 3-1
3.1       Background (May 16) .................................................. 3-1
3.2       Interprocess Communication (May 16) .................................. 3-2
3.3       Race Condition ....................................................... 3-3
     Syllabus Topic : The Critical Section Problem ....................... 3-4
3.4       The Critical Section Problem (Dec. 14, June 15) ...................... 3-4
3.5       Mutual Exclusion (June 15, Nov. 15, May 16) .......................... 3-5
     Syllabus Topic : Peterson's Solution ................................ 3-5
3.6       Peterson's Solution .................................................. 3-5
     Syllabus Topic : Synchronization Hardware ........................... 3-6
3.7       Synchronization Hardware ............................................. 3-6
     Syllabus Topic : Semaphores ......................................... 3-7
3.8       Semaphores ........................................................... 3-7
     Syllabus Topic : Classic Problems of Synchronization ................ 3-8
3.9       Classic Problems of Synchronization .................................. 3-8
3.9.1     Producer Consumer Problem (Dec. 16) .................................. 3-8
3.9.2     Producer Consumer Problem Using Semaphore (June 15) .................. 3-8
3.9.3     Readers/Writers Problem .............................................. 3-9
3.9.4     Dining Philosopher Problem (Dec. 14, Nov. 15) ........................ 3-10
     Syllabus Topic : Monitors ........................................... 3-11
3.10      Monitors ............................................................. 3-11
     Syllabus Topic : Atomic Transactions ................................ 3-12
3.11      Atomic Transactions .................................................. 3-12
3.11.1    System Model of Atomic Transactions .................................. 3-12
3.11.2    Log-Based Recovery ...................................................
3.11.3    Checkpoints .......................................................... 3-13
3.11.4    Concurrent Atomic Transactions ....................................... 3-13
3.11.4(A) Serializability ...................................................... 3-13
3.11.4(B) Locking Protocol ..................................................... 3-13
3.11.4(C) Timestamp-Based Protocols ............................................ 3-14
     Syllabus Topic : Deadlocks .......................................... 3-14
3.12      Deadlocks (June 15, Nov. 15, May 16, Dec. 16) ........................ 3-14
     Syllabus Topic : System Model ....................................... 3-14
3.12.1    System Model of Deadlocks ............................................ 3-14
     Syllabus Topic : Deadlock Characterization .......................... 3-15
3.13      Deadlock Characterization ............................................ 3-15
3.13.1    Conditions (June 15, Nov. 15, May 16, Dec. 16) ....................... 3-15
3.13.2    Resource Allocation Graphs (Dec. 16) ................................. 3-15
     Syllabus Topic : Deadlock Prevention ................................ 3-16
3.14      Deadlock Prevention (June 15, Nov. 15) ............................... 3-16
     Syllabus Topic : Deadlock Avoidance ................................. 3-17
3.15      Deadlock Avoidance (June 15, Nov. 15) ................................ 3-17
3.15.1    Deadlock Avoidance Algorithms (May 16, Dec. 16) ...................... 3-18
3.15.1(A) Resource-Allocation Graph Algorithm .................................. 3-19
3.15.1(B) Banker's Algorithm (Dec. 16) ......................................... 3-19
3.15.1(C) Resource-Request Algorithm ........................................... 3-19
3.15.1(D) Safety Algorithm ..................................................... 3-20
3.16      Solved Problems ...................................................... 3-21
     Syllabus Topic : Deadlock Detection ................................. 3-23
3.17      Deadlock Detection (June 15) ......................................... 3-23
     Syllabus Topic : Recovery from Deadlock ............................. 3-25
3.18      Deadlock Recovery .................................................... 3-25
3.18.1    Process Termination (Kill a process) ................................. 3-25
3.18.2    Resource Preemption .................................................. 3-25
3.19      Exam Pack (University and Review Questions) .......................... 3-26

Module IV

Chapter 4 :  Memory Management                                                       4-1 to 4-35

Syllabus : Memory Management Strategies : Background, Swapping, Contiguous Memory Allocation,
Paging, Structure of the Page Table, Segmentation; Virtual Memory Management : Demand Paging,
Copy-on-Write, Page Replacement, Allocation of Frames, Thrashing, Memory-Mapped Files,
Allocating Kernel Memory, Other Considerations.

     Syllabus Topic : Memory Management Strategies ....................... 4-1
4.1       Memory Management Strategies ......................................... 4-1
4.1.1     Monoprogramming ...................................................... 4-1
4.1.2     Multiprogramming ..................................................... 4-2
4.1.3     Dynamic Loading ...................................................... 4-2
4.1.4     Overlays .............................................................
4.1.5     Relocation ........................................................... 4-3
4.1.6     Logical and Physical Address Space ................................... 4-3
     Syllabus Topic : Swapping .............................................. 4-3
          Swapping ............................................................. 4-3
     Syllabus Topic : Contiguous Memory Allocation .......................... 4-4
4.2       Contiguous Memory Allocation .......................................... 4-4
4.2.1     Multiprogramming with Fixed and Variable Partitions ................... 4-4
4.2.2     Dynamic Partition Technique ........................................... 4-5
4.2.3     Compaction ............................................................ 4-6
4.2.4     Memory Allocation Strategies (May 2016) ............................... 4-6
     Syllabus Topic : Paging ................................................ 4-7
4.3       Paging (June 15) ...................................................... 4-7
4.3.1     Basic Operation ....................................................... 4-8
4.3.2     Memory Protection and Sharing ......................................... 4-9
4.3.3     Translation Lookaside Buffer ..........................................
4.3.4     Effect of Page Size on Performance .................................... 4-9
4.3.5     Hardware Support for Paging (Dec. 16) ................................. 4-9
     Syllabus Topic : Structure of the Page Table ........................... 4-10
4.4       Structure of Page Tables (Dec. 14) .................................... 4-10
4.4.1     Hierarchical Paging ................................................... 4-10
4.4.2     Hashed Page Table .....................................................
4.4.3     Inverted Page Table ...................................................
     Syllabus Topic : Segmentation ..........................................
4.5       Segmentation ..........................................................
4.6       Segmentation with Paging ..............................................
     Syllabus Topic : Virtual Memory Management .............................
4.7       Virtual Memory (May 16) ...............................................
     Syllabus Topic : Demand Paging .........................................
4.8       Demand Paging ......................................................... 4-15
     Syllabus Topic : Copy-on-Write .........................................
          Copy-on-Write .........................................................
     Syllabus Topic : Page Replacement ......................................
4.9       Page Replacement ......................................................
4.9.3     Least Recently Used Page Replacement Algorithm (LRU) .................. 4-17
4.9.4     LRU-Approximation Page Replacement ....................................
4.9.4(B)  Second-Chance Algorithm ............................................... 4-18
4.9.4(C)  Enhanced Second-Chance Algorithm ......................................
4.9.4(D)  Clock Page Replacement ................................................ 4-18
4.9.5     Counting-Based Page Replacement .......................................
4.9.5(A)  Not Frequently Used (LFU) Page Replacement Algorithm ..................
4.9.5(B)  Most Frequently Used Page Replacement Algorithm (MFU) .................
4.10      Examples on Page Replacement ..........................................
     Syllabus Topic : Allocation of Frames ..................................
4.11      Allocation of Frames ..................................................
     Syllabus Topic : Thrashing .............................................
4.12      Thrashing (June 15) ...................................................
4.13      Locality (Working Set Model) ..........................................
     Syllabus Topic : Memory-Mapped Files ...................................
4.14      Memory-Mapped Files ...................................................
4.15      Memory-Mapped I/O .....................................................
     Syllabus Topic : Allocating Kernel Memory ..............................
4.16      Allocating Kernel Memory ..............................................
4.16.1    Buddy System ..........................................................
4.16.1(A) Operation of Buddy Algorithm ..........................................
4.16.2    Slab Allocation .......................................................
     Syllabus Topic : Other Considerations ..................................
4.17      Other Considerations ..................................................
4.17.1    Prepaging .............................................................
4.17.2    Page Size .............................................................
4.17.3    TLB Reach .............................................................
4.17.4    Inverted Page Tables ..................................................
          Program Structure .....................................................
4.18      Exam Pack (University and Review Questions) ...........................

Module V

Chapter 5 :  Storage Management

Syllabus : File System : File Concept, Access Methods, Directory and Disk Structure, File-System
Mounting, File Sharing and Protection; Implementing File System : File-System Structure,
Implementation, Directory Implementation, Allocation Methods, Free-Space Management,
Efficiency, NFS; Secondary Storage Structure : Overview, Disk Structure, Disk Attachment, Disk
Scheduling, Disk Management, RAID Structure, Tertiary-Storage Structure; I/O Systems : Overview,
I/O Hardware, Application I/O Interface, Kernel I/O Subsystem.
Operating System (MU - Sem 4 - IT)                                              Table of Contents

     Syllabus Topic : File System - File Concept ............................ 5-1
5.1       File System ........................................................... 5-1
5.1.1     File Concept .......................................................... 5-1
5.1.2     File Attributes ....................................................... 5-1
5.1.3     File Operations ....................................................... 5-2
5.1.4     File Naming ........................................................... 5-3
5.1.5     File Types ............................................................ 5-4
5.1.6     File Structure ........................................................ 5-5
5.1.7     Access Methods (June 15) .............................................. 5-6
     Syllabus Topic : Directory Structures .................................. 5-7
5.1.8     Directory Structures .................................................. 5-7
5.1.8(A)  Single-Level Directory Systems ........................................ 5-7
5.1.8(B)  Two-Level Directory Systems ........................................... 5-7
5.1.8(C)  Hierarchical Directory Systems ........................................ 5-8
5.1.9     Path Names ............................................................ 5-9
5.1.10    Directory Operations .................................................. 5-9
5.1.11    File System Mounting .................................................. 5-10
5.1.12    Working of Files ...................................................... 5-11
     Syllabus Topic : File Sharing .......................................... 5-12
5.1.13    File Sharing .......................................................... 5-12
5.1.13(A) Multiple Users ........................................................ 5-12
5.1.13(B) Remote File System .................................................... 5-13
     Syllabus Topic : Protection ............................................ 5-14
5.1.14    Protection ............................................................ 5-14
5.1.14(A) Type of Accesses ...................................................... 5-14
5.1.14(B) Protection Domains .................................................... 5-15
5.1.14(C) Access Control ........................................................ 5-15
     Syllabus Topic : File System Implementation ............................ 5-16
5.2       File System Implementation ............................................ 5-16
5.2.1     File System Structure ................................................. 5-17
     Syllabus Topic : Implementing File System .............................. 5-17
5.2.2     Implementing File System .............................................. 5-17
5.2.2(A)  File System Layout .................................................... 5-18
5.2.2(B)  Virtual File System ................................................... 5-18
5.2.3     Directory Implementation .............................................. 5-19
5.2.3(A)  Linear List ........................................................... 5-19
5.2.3(B)  Hash Table ............................................................ 5-21
     Syllabus Topic : Allocation Methods .................................... 5-21
5.2.4     Allocation Methods (Dec. 14, May 16) .................................. 5-21
5.2.4(A)  Contiguous Allocation ................................................. 5-21
5.2.4(B)  Linked List Allocation ................................................ 5-22
5.2.4(C)  Linked List Allocation using a Table in Memory ........................ 5-23
5.2.4(D)  Indexed Allocation .................................................... 5-23
5.2.4(E)  I-nodes ............................................................... 5-25
     Syllabus Topic : Free Space Management ................................. 5-25
5.2.5     Free Space Management ................................................. 5-25
5.2.5(A)  Bit Map or Bit Vector ................................................. 5-25
5.2.5(B)  Linked List of Disk Blocks ............................................ 5-26
5.2.5(C)  Grouping .............................................................. 5-26
5.2.5(D)  Counting .............................................................. 5-26
     Syllabus Topic : Efficiency and Performance ............................ 5-26
5.2.6     Efficiency and Performance ............................................ 5-26
5.2.6(A)  Efficiency ............................................................ 5-26
5.2.6(B)  Performance ........................................................... 5-27
     Syllabus Topic : Recovery .............................................. 5-28
5.2.7     Recovery .............................................................. 5-28
5.2.7(A)  Consistency Checking .................................................. 5-29
5.2.7(B)  Backup and Restore .................................................... 5-29
5.2.7(C)  Log-Structured File Systems ........................................... 5-29
     Syllabus Topic : NFS ................................................... 5-30
5.2.8     NFS ................................................................... 5-30
5.2.8(A)  NFS Architecture ...................................................... 5-30
5.2.8(B)  NFS Protocols ......................................................... 5-30
5.2.8(C)  NFS Implementation .................................................... 5-31
     Syllabus Topic : Secondary Storage Structure ........................... 5-31
5.3       Secondary Storage Structure ........................................... 5-31
     Syllabus Topic : Overview of Mass-Storage Structure .................... 5-31
5.3.1     Overview of Mass-Storage Structure .................................... 5-31
5.3.1(A)  Magnetic Disks ........................................................ 5-31
5.3.1(B)  Magnetic Tapes ........................................................ 5-32
     Syllabus Topic : Disk Structure ........................................ 5-32
5.3.2     Disk Structure ........................................................ 5-32
     Syllabus Topic : Disk Attachment ....................................... 5-33
5.3.3     Disk Attachment ....................................................... 5-33
5.3.3(A)  Host-Attached Storage ................................................. 5-33
5.3.3(B)  Network-Attached Storage (NAS) ........................................ 5-33
5.3.3(C)  Storage-Area Network (SAN) ............................................ 5-33
5.3.4     Disk Scheduling (Nov. 15) ............................................. 5-34
5.3.4(C)  SCAN Scheduling Algorithm (Dec. 16) ................................... 5-35
5.3.4(D)  C-SCAN Scheduling Algorithm (Dec. 16) ................................. 5-36
5.3.4(E)  LOOK Scheduling Algorithm (Dec. 16) ................................... 5-36
5.3.5     Examples on Disk Scheduling Algorithms ................................ 5-37
     Syllabus Topic : Disk Management ....................................... 5-42
5.3.6     Disk Management ....................................................... 5-42
5.3.6(A)  Disk Formatting ....................................................... 5-42
5.3.6(B)  Boot Block ............................................................ 5-43
5.3.6(C)  Bad Blocks ............................................................ 5-43
     Syllabus Topic : RAID Structure ........................................ 5-43
5.3.7     RAID Structure ........................................................ 5-43
5.3.8     Stable Storage ........................................................
5.3.9     Tertiary-Storage Structure ............................................
5.3.9(A)  Tertiary-Storage Devices ..............................................
5.3.9(B)  Operating-System Support ..............................................
5.3.9(C)  Performance ...........................................................
     Syllabus Topic : Swap-Space Management ................................. 5-49
5.3.10    Swap-Space Management ................................................. 5-49
5.3.10(A) Swap-Space ............................................................ 5-50
5.3.10(B) Swap-Space ............................................................ 5-50
     Syllabus Topic : I/O Systems ........................................... 5-50
5.4       I/O Systems ........................................................... 5-50
5.4.1     Overview .............................................................. 5-50
5.4.1(A)  I/O Devices ........................................................... 5-50
5.4.1(B)  Differences between I/O Devices ....................................... 5-50
     Syllabus Topic : Overview I/O Hardware ................................. 5-51
5.4.2     I/O Hardware .......................................................... 5-51
5.4.2(A)  Device Controllers .................................................... 5-51
5.4.2(B)  Polling ............................................................... 5-51
5.4.2(C)  Interrupt Handler ..................................................... 5-52
5.4.2(D)  Interrupt Service Routine (ISR) ....................................... 5-53
5.4.2(E)  Direct Memory Access (DMA) ............................................ 5-53
     Syllabus Topic : Application I/O Interface ............................. 5-54
5.4.3     Application I/O Interface ............................................. 5-54
5.4.3(A)  Block or Character Devices ............................................ 5-55
5.4.3(B)  Network Devices ....................................................... 5-55
5.4.3(C)  Clocks and Timers ..................................................... 5-55
5.4.3(D)  Blocking and Non-blocking I/O ......................................... 5-55
     Syllabus Topic : Kernel I/O Subsystem .................................. 5-56
5.4.4     Kernel I/O Subsystem ..................................................
5.4.4(A)  I/O Scheduling ........................................................
5.4.4(B)  Buffering .............................................................
5.4.4(C)  Caching ............................................................... 5-57
5.4.4(D)  Spooling and Device Reservation ....................................... 5-57
5.4.4(E)  Error Handling ........................................................
5.4.5     I/O Protection ........................................................ 5-58
5.4.6     Kernel Data Structures ................................................ 5-58
     Syllabus Topic : STREAMS ............................................... 5-58
5.4.7     STREAMS ............................................................... 5-59
     Syllabus Topic : Performance ........................................... 5-59
5.4.8     Performance ........................................................... 5-60
5.5       Exam Pack (University and Review Questions) ........................... 5-60

Module VI

Chapter 6 :  Distributed Systems

Syllabus : Distributed Operating Systems; Distributed File System : Naming and Transparency,
Remote file access, Stateful versus Stateless service, File Replication; Distributed
Synchronization : Mutual exclusion, Concurrency control and Deadlock handling.

     Syllabus Topic : Distributed Operating Systems .........................
6.1       Distributed Systems ...................................................
6.1.1     Definition ............................................................
6.1.2     Motivation ............................................................
6.1.2(A)  Resource Sharing ......................................................
6.1.2(B)  Computation Speedup ...................................................
6.1.2(C)  Reliability ...........................................................
6.1.2(D)  Communication .........................................................
6.2       Types of Distributed Operating Systems ................................
     Syllabus Topic : Network Based OS ......................................
6.2.1     Network Operating System (NOS) ........................................
6.2.1(A)  Remote Login ..........................................................
6.2.1(B)  Remote File Transfer ..................................................
6.2.2     Distributed Operating Systems (DOS) ...................................
6.2.2(A)  Data Migration ........................................................
6.2.2(B)  Computation Migration .................................................
6.2.2(C)  Process Migration .....................................................
     Syllabus Topic : Network Structure .....................................
6.3       Network Structure .....................................................
6.3.1     Local-Area Networks (LANs) ............................................
6.3.2     Wide-Area Networks (WANs) .............................................
     Syllabus Topic : Network Topology ......................................
6.4       Network Topology ......................................................
     Syllabus Topic : Communication Structure ...............................
6.5       Communication Structure ...............................................
6.5.1     Naming and Name Resolution ............................................
6.5.2     Routing Strategies ....................................................
6.5.3     Packet Strategies .....................................................
6.5.4     Connection Strategies .................................................
6.5.5     Contention ............................................................
6.6       Communication Protocols ...............................................
     Syllabus Topic : Distributed File Systems ..............................
6.7       Distributed File Systems ..............................................
Operating System (MU - Sem 4 - IT)                                              Table of Contents

6.8       Naming and Transparency ............................................... 6-10
6.8.1     Naming Structures ..................................................... 6-10
6.8.2     Naming Schemes ........................................................ 6-11
6.8.3     Implementation Techniques ............................................. 6-11
     Syllabus Topic : Remote File Access .................................... 6-12
6.9       Remote File Access .................................................... 6-12
6.9.1     Basic Caching Scheme .................................................. 6-12
6.9.2     Cache Location ........................................................ 6-12
6.9.3     Cache-Update Policy ................................................... 6-12
6.9.4     Consistency ........................................................... 6-13
6.9.5     A Comparison of Caching and Remote Service ............................ 6-13
     Syllabus Topic : Stateful Versus Stateless Service ..................... 6-14
6.10      Stateful Versus Stateless Service ..................................... 6-14
     Syllabus Topic : File Replication ...................................... 6-14
6.11      File Replication ...................................................... 6-14
     Syllabus Topic : Distributed Synchronization ........................... 6-15
6.12      Distributed Synchronization ........................................... 6-15
6.12.1    Mutual Exclusion ...................................................... 6-15
6.12.1(A) Centralized Approach .................................................. 6-15
6.12.1(B) Fully Distributed Approach ............................................ 6-15
6.12.1(C) Token-Passing Approach ................................................ 6-16
     Syllabus Topic : Concurrency Control ................................... 6-16
6.13      Concurrency Control ................................................... 6-16
6.13.1    Locking Protocols ..................................................... 6-16
6.13.1(A) Nonreplicated Scheme .................................................. 6-16
6.13.1(B) Single-Coordinator Approach ........................................... 6-16
6.13.1(C) Majority Protocol ..................................................... 6-17
6.13.1(D) Biased Protocol ....................................................... 6-17
6.13.1(E) Primary Copy .......................................................... 6-17
6.13.2    Timestamping .......................................................... 6-17
6.13.2(A) Generation of Unique Timestamps ....................................... 6-17
6.13.2(B) Timestamp-Ordering Scheme ............................................. 6-18
     Syllabus Topic : Deadlock Handling ..................................... 6-18
6.14      Deadlock Handling ..................................................... 6-18
6.14.1    Deadlock Prevention and Avoidance ..................................... 6-18
6.14.2    Deadlock Detection .................................................... 6-19
6.14.2(A) Centralized Approach .................................................. 6-19
6.14.2(B) Fully Distributed Approach ............................................ 6-19
6.15      Exam Pack (Review Questions) .......................................... 6-20

-    Lab Manual .................................................... L-1 to L-26
CHAPTER 1                                                                               Module I

                              Overview of Operating System

Syllabus : Introduction; Operating System Structure and Operations; Process, Memory and
Storage management, Protection and security, Distributed and special purpose Systems; System
Structure : Operating system services and interface, System calls and its types, System programs,
Operating System Design and Implementation, OS structure, Virtual machines, OS debugging and
generation, System boot.
                    Syllabus Topic : Introduction

1.1   Introduction to Operating System                    (Dec. 14, June 15, Nov. 15)

      What is OS?

Q.    What is operating system?                           (June 2015, Nov. 2015, 2 Marks)

-   An operating system is system software which manages, operates and communicates with the
    computer hardware and software. To complete its execution, a user program needs many
    resources, and the main job of the operating system is to manage these resources for the user
    program. So without an operating system, a computer would be useless.
-   An operating system acts as an interface between the user and the hardware of the computer,
    and also controls the execution of application programs.
-   An operating system is also called a resource manager. It performs tasks such as identifying
    and accepting input from devices such as the keyboard and mouse, sending output to devices
    such as the monitor and printer, keeping track of files and directories on the disk, and
    controlling peripheral devices such as printers, scanners and audio mixers.
-   The heart of the computer system is the processing unit, called the CPU. A user program needs
    the processing unit to complete its execution, and the computer system should offer the service
    of allocating the processing unit to the user's program.
-   The operating system allocates memory to the user program as per its need.
-   In the same way, user programs interact with the user through devices like a keyboard, a
    mouse or even a joystick.

                    Syllabus Topic : Operating System Structure

1.2   Operating System Structure

-   To keep the CPU and I/O devices busy all the time, the operating system supports
    multiprogramming.
-   In multiprogramming, the operating system organizes the jobs in such a way that the CPU
    always gets one job to execute. Hence, multiprogramming improves CPU utilization.
-   Multiple jobs remain in memory so that the CPU does not sit idle when one job waits for I/O.
-   Multiple users are served through time sharing. A time-shared system is capable of completing
    each user's request within a small time slot : the CPU is allocated to one user program for a
    slot, and after expiration of the slot the system gets allocated to another user program.
-   So the allocation of the computer's resources is done in a time-sharing manner. Consider the
    example of multiple users logged in to the same server.
-   The resources in the server machine, i.e. the CPU, memory and devices, are allocated to each
    user on a time-sharing basis, giving each user the feeling that only he or she is using the
    server machine exclusively.

Designs of the operating system

-   Following are some of the designs of the operating system that have been tried practically.

        Designs of the operating system :
        1.  Monolithic Systems
        2.  Layered Systems
        3.  Client-Server Model

                    Fig. C1.1 : Designs of operating system

1.2.1   Monolithic Systems

Q.    Explain monolithic system.

-   This type of operating system can be treated as having no structure.
-   The operating system is constructed as a set of procedures, and each procedure can call any
    other one whenever needed.
-   Each procedure in the system has a precise interface in terms of parameters and results.
-   Every procedure can call any other one, if the called one offers some useful computation that
    the calling procedure needs.
-   In this technique, all the individual procedures, or the files containing the procedures, are
    compiled and then bound together into a single executable file.
-   Every procedure is able to be seen by every other procedure, so there is no information hiding.
-   A little structure is possible to impose on monolithic systems.
-   The system calls offered by the operating system are requested by putting the parameters in a
    well-defined place (e.g., on the stack) and then executing a trap instruction.
-   Due to this instruction, the machine gets switched from user mode to kernel mode and control
    is transferred to the operating system.
-   The operating system then fetches the parameters to find out which system call is to be
    executed, and invokes the service procedure that carries out the system call demanded.
-   This suggests a basic structure for the operating system : a main program that invokes the
    requested service procedure, a set of service procedures that perform the system calls, and a
    set of utility procedures that help the service procedures.
-   In this representation, there is a single service procedure for each system call which takes
    care of it.
-   The utility procedures perform things that are needed by several service procedures, such as
    fetching data from user programs.

                    Fig. 1.2.1 : A simple structuring model for a monolithic system
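To make the trap mechanism concrete, here is a minimal sketch in C of a user program requesting
the write system call. This is an illustration, not material from this book, and it assumes Linux
with glibc : the library wrapper write() and the generic syscall() both end up placing the
parameters in the agreed place and executing the trap instruction that switches the CPU into
kernel mode, where the service procedure for write runs.

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/syscall.h>   /* SYS_write : the system-call number (Linux-specific) */
    #include <unistd.h>        /* write() and syscall() wrappers                      */

    int main(void)
    {
        const char *msg = "hello from a system call\n";

        /* Usual form : the C library loads the parameters and executes the trap. */
        write(STDOUT_FILENO, msg, strlen(msg));

        /* Same request made explicitly : pass the system-call number yourself.
           The kernel uses this number to dispatch to the service procedure
           that carries out write().                                              */
        syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));

        return 0;
    }

Either call returns to the instruction after the trap once the service procedure finishes, which is
exactly the flow described above.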
1.2.2   Layered Systems

Q.    Explain layered systems.

-   Another approach is to organize the operating system as a hierarchy of layers, each one
    constructed upon the one below it.
-   The first system constructed in this way was the THE system, built at the Technische
    Hogeschool Eindhoven in the Netherlands by E. W. Dijkstra (1968) and his students.
-   The THE system had in total 6 layers, as shown in Fig. 1.2.2.
-   Layer 0 : This layer was responsible for allocation of the processor, switching context between
    processes when interrupts occurred or timers expired. It was thus responsible for offering the
    basic multiprogramming of the CPU.
-   Layer 1 : Memory management was handled by this layer. It allocated space for processes in
    main memory. If there was no space in main memory, it also allocated space on a 512K-word
    drum used for holding parts of processes (pages). The layer 1 software made sure that needed
    pages were brought into memory whenever they were required.

        Layer 5 : The operator
        Layer 4 : User programs
        Layer 3 : Input/output management
        Layer 2 : Operator-process communication
        Layer 1 : Memory and drum management
        Layer 0 : Processor allocation and multiprogramming

                    Fig. 1.2.2 : Structure of the THE operating system

-   Layer 2 : This layer handled communication between each process and the operator console.
    Above this layer, each process effectively had its own operator console.
-   Layer 3 : The management of the I/O devices and buffering of the information streams to and
    from them was handled by this layer. Each process could deal with abstract I/O devices with
    good properties, instead of real devices with many peculiarities.
-   Layer 4 : The user programs resided in this layer. They did not have to worry about process,
    memory, console, or I/O management.
-   Layer 5 : The system operator process was located in layer 5.

-   In general, an operating system layer is an implementation of an abstract object made up of
    data and the operations that can manipulate those data.
-   A characteristic operating system layer consists of data structures and a set of routines that
    can be invoked by higher-level layers. The same layer can in turn invoke operations on
    lower-level layers.
-   As shown in Fig. 1.2.3, the operating system is broken up into a number of layers. The bottom
    layer (layer 0) is the hardware; the highest layer (layer N) is the user interface.

        (Figure : user programs on top of layer N, layer N-1, ..., down to layer 0 and the hardware.)

                    Fig. 1.2.3 : A layered operating system

Advantages

-   It keeps much better control over the computer and over the applications that make use of it.
-   Implementors have more liberty in changing the internal workings of the system and in
    creating modular operating systems.

Disadvantages

-   The main difficulty with the layered approach is to define the various layers.
-   As a layer can use only lower-level layers, cautious planning is necessary.
-   Layered implementations are inclined to be less efficient than monolithic ones.

1.2.3   Client-Server Model

Q.    Explain the client-server model.

-   A large part of the traditional operating system code can be moved up into higher layers. A
    trend in modern operating systems is to take this idea of moving code up even further and to
    remove as much as possible from kernel mode, leaving a minimal microkernel.
-   The usual approach is to implement most of the operating system in user processes.
-   To request a service, such as reading a block of a file, a user process (the client process) sends
    the request to a server process, which then does the work and sends back the answer.
-   In this model, shown in Fig. 1.2.4, the kernel handles all the communication between clients
    and servers.
-   By splitting the operating system up into parts, each of which only handles one facet of the
    system, such as file service, process service, terminal service, or memory service, each part
    becomes small and manageable.
-   As all servers run as user-mode processes, and not in kernel mode, they do not have direct
    access to the hardware.
-   As a consequence, if a bug in the file server is triggered, the file service may crash, but this
    will not usually bring the whole machine down.
-   Another advantage of the client-server model is its adaptability to use in distributed systems.
    Clients and servers communicate by sending messages. When a client sends a message to a
    server, it is not necessary for the client to know whether the message is processed on its own
    local machine or is sent to a server on a remote machine in the network.
-   Whether the server is present on the client's machine or on a remote machine in the network,
    from the client's point of view the same thing happens : a request was sent and a response
    came back.
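As a hedged illustration of this request/response style (not part of the book's text), the sketch
below uses a POSIX socketpair() and fork() so that a client process sends a small request message
and a server process sends back an answer. The message layout and the names client() and
server() are invented for the example.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* A toy request/response message pair; real systems define richer formats. */
    struct request { int op; int block; };
    struct reply   { int status; char data[32]; };

    static void server(int fd)            /* runs as an ordinary user process   */
    {
        struct request rq;
        while (read(fd, &rq, sizeof rq) == sizeof rq) {
            struct reply rp = { .status = 0 };
            snprintf(rp.data, sizeof rp.data, "contents of block %d", rq.block);
            write(fd, &rp, sizeof rp);    /* send back the answer               */
        }
    }

    static void client(int fd)
    {
        struct request rq = { .op = 1 /* "read block" */, .block = 7 };
        struct reply rp;

        write(fd, &rq, sizeof rq);        /* send the request message           */
        read(fd, &rp, sizeof rp);         /* wait for the response              */
        printf("client got: %s\n", rp.data);
    }

    int main(void)
    {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);   /* kernel-mediated channel   */

        if (fork() == 0) {                /* child plays the server process     */
            close(sv[0]);
            server(sv[1]);
            _exit(0);
        }
        close(sv[1]);
        client(sv[0]);                    /* parent plays the client            */
        close(sv[0]);                     /* closing ends the server's loop     */
        wait(NULL);
        return 0;
    }

The client code would not change if the server ran on another machine and the kernel forwarded
the message over the network, which is exactly the point made above.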
1.2.4   Monolithic Kernel vs. Microkernel

Q.    Differentiate between monolithic kernel and microkernel.
                                                          (June 15, Nov. 15, Dec. 16)

-   The majority of CPUs support two modes of operation, i.e. kernel mode and user mode.
-   In kernel mode, all instructions are allowed to be executed, and the entire memory and all
    registers are accessible throughout the execution.
-   On the contrary to this, in user mode, register access is restricted. The CPU gets switched to
    kernel mode while executing operating system code.
-   The only means to switch from user mode to kernel mode is through system calls as
    implemented by the operating system.
-   The operating system can be kept in full control because system calls are the only basic
    services an operating system provides, and the hardware helps to restrict memory and register
    access in user mode.
-   If virtually the entire operating system code is executed in kernel mode, then it is a monolithic
    program that runs in a single address space.
-   The disadvantage of this approach is that it is difficult to change or adapt operating system
    components without doing a total shutdown and perhaps even a full recompilation and
    reinstallation.
-   Monolithic operating systems also have drawbacks from the viewpoint of openness and
    software engineering. A microkernel can provide more flexibility.
-   In a microkernel design, a set of modules for managing the hardware is kept, and these
    modules can equally well be executed in user mode. For example, the memory management
    module keeps track of allocated and free memory and allocates space to the processes.
-   A small microkernel contains only the code that must execute in kernel mode, for example the
    code by which the registers of the Memory Management Unit (MMU) are set.
-   Actually a microkernel needs to contain only the code for setting up device registers, switching
    contexts, manipulating the MMU, and capturing interrupts.
-   The microkernel also contains the code to pass system calls on to the proper user-level
    operating system modules and to return their results.
-   The figure below shows the organization of the system with this approach : user applications
    and the operating-system modules run as user processes on top of the microkernel.

        (Figure : user applications and operating-system modules running as user-level processes
        on top of a small microkernel.)
Difference between monolithic kernel and microkernel

Q.    Differentiate between monolithic and microkernel.   (June 15, Nov. 15, 5 Marks)

1.  Definition
    Monolithic kernel : If virtually the entire operating system code is executed in kernel mode,
    then it is a monolithic program that runs in a single address space.
    Microkernel : A set of modules for managing the hardware is kept, and these modules can
    equally well be executed in user mode. A small microkernel contains only the code that must
    execute in kernel mode; it is the core part of the operating system.

2.  Address space
    Monolithic kernel : User services and kernel services run in the same address space.
    Microkernel : There is a separate address space for user services and kernel services.

3.  Size
    Monolithic kernel : Larger than a microkernel.
    Microkernel : Microkernel size is smaller than a monolithic kernel.

4.  Execution speed
    Monolithic kernel : Faster.
    Microkernel : Execution speed is slower than a monolithic kernel.

5.  Flexibility
    Monolithic kernel : It is not flexible as compared to a microkernel. Its components cannot be
    changed without doing a total shutdown and perhaps even a full recompilation and
    reinstallation.
    Microkernel : It is more flexible; any module can be replaced.

6.  Crash
    Monolithic kernel : After a crash of a service, the complete operating system fails.
    Microkernel : A crash of one service does not bring down the rest of the system or the
    microkernel.

7.  Communication
    Monolithic kernel : Modules communicate through direct procedure calls.
    Microkernel : Communication is through message passing.

8.  Drawbacks
    Monolithic kernel : Drawbacks from the viewpoint of openness and software engineering.
    Microkernel : Communication overhead between the different modules.
                    Syllabus Topic : Operating System Operations

1.3   Operating System Operations

Q.    Explain operating system operations.

-   Modern operating systems are interrupt driven.
-   The operating system sits idle and waits if there are no processes to execute, no I/O devices to
    offer service to, and no user to whom a response is to be given; it simply waits for something
    to happen. Events are signalled by the occurrence of an interrupt or a trap.
-   A condition arising in the system during program execution that is detected by the CPU is
    called a trap. Traps occur at exactly the same point of the program's execution each time.
    Software interrupts are traps or exceptions; an example is division by zero.
-   After receiving an interrupt, the operating system carries out some housekeeping so that it
    can resume the interrupted work later. After this, a search is carried out in the interrupt
    vector or interrupt table.
-   This table remains in kernel-memory space and includes the address of the code in the device
    driver that services the interrupt.
-   The interrupt handler is then executed. When the handler finishes, control of the CPU is
    returned to the code that was executing before the interrupt occurred. Interrupt service
    routines (ISRs) deal with interrupts.
-   The design of the operating system should be such that one erroneous or malicious program
    cannot affect other programs.

1.3.1   Dual Mode Operation

-   The majority of CPUs support at least two modes of operation, i.e. kernel mode and user mode.
-   In kernel mode, all instructions are allowed to be executed, and the entire memory and all
    registers are accessible throughout the execution. On the contrary to this, in user mode,
    register access is restricted. The CPU gets switched to kernel mode while executing operating
    system code.
-   The only means to switch from user mode to kernel mode is through system calls as
    implemented by the operating system.
-   The hardware contains a mode bit which indicates kernel mode when 0 and user mode when
    set to 1. When a user application is running, the system is in user mode. When the user
    application requests a service from the operating system, a transition from user mode to
    kernel mode takes place.
-   During booting of the system, the hardware is in kernel mode. Once the operating system is
    loaded and starts executing user applications, the system goes into user mode.
-   Due to dual mode operation, protection from misbehaving users is achieved. This protection
    can be provided by allowing the execution of privileged instructions in kernel mode only.
-   Any attempt to execute these privileged instructions in user mode causes a trap to the
    operating system.
-   The instruction to enter user mode, instructions for I/O control, timer management, and
    interrupt management are some of the examples of privileged instructions.
-   From user mode, control is switched to the operating system through an interrupt, a trap, or a
    system call.
-   The interface between the OS and user programs is defined by the set of system calls that the
    operating system offers. A system call is a call for the operating system to perform some task
    on behalf of the user's program.
-   In kernel mode, the operating system can take over to perform its housekeeping tasks and
    then hand the control back to the user application.
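A small demonstration of that trap, under the assumption of an x86 processor running Linux (this
is not from the book) : CLI is a privileged "disable interrupts" instruction, so executing it in user
mode makes the CPU trap into the kernel, which terminates the offending process instead of
letting it disturb others.

    #include <stdio.h>

    int main(void)
    {
        printf("about to execute a privileged instruction in user mode\n");

        /* CLI (clear interrupt flag) may only be executed in kernel mode.
           In user mode the CPU raises a general-protection fault; the kernel
           handles the trap and kills this process (typically with SIGSEGV).  */
        __asm__ volatile ("cli");

        printf("this line is never reached\n");
        return 0;
    }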
1.3.2   Timer

-   Computer timers are built using clocks containing a crystal oscillator, a counter and a holding
    register. By electronic circuitry, the base signal from the crystal can be divided by a small
    integer to get the required frequencies.
-   The computer contains at least one such circuit, which produces a synchronizing signal that is
    given to many circuits in the computer.
-   This synchronizing signal is fed into the counter to make it count down to zero. When the
    counter gets to zero, it causes a CPU interrupt.
-   In one-shot mode, after the clock is started, the holding register value gets copied into the
    counter, and each pulse from the crystal then decrements the counter. When the counter value
    becomes zero, it causes an interrupt, which is handled by the software.
-   In square-wave mode, after getting to zero and causing the interrupt, the holding register is
    automatically copied back into the counter and the whole process repeats. These periodic
    interrupts are called clock ticks.
-   Every computer has a battery-powered backup clock so that the time of day is not lost when
    the machine is switched off. Universal coordinated time is used to synchronize the clocks of
    the machines.
-   The operating system uses the timer interrupt to regain control of the CPU at regular
    intervals.
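For a user-level feel of clock ticks, the sketch below (an illustration assuming a POSIX system, not
material from this book) asks the kernel's interval timer to deliver a SIGALRM every second; the
signal handler plays the role of the clock-tick interrupt handler.

    #include <signal.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    static volatile sig_atomic_t ticks = 0;

    /* Runs each time the kernel delivers the periodic "tick". */
    static void on_tick(int sig)
    {
        (void)sig;
        ticks = ticks + 1;
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = on_tick;
        sigaction(SIGALRM, &sa, NULL);

        /* First expiry after 1 s, then re-armed every 1 s (square-wave style). */
        struct itimerval period = {
            .it_value    = { .tv_sec = 1, .tv_usec = 0 },
            .it_interval = { .tv_sec = 1, .tv_usec = 0 },
        };
        setitimer(ITIMER_REAL, &period, NULL);

        while (ticks < 5)
            pause();                    /* sleep until the next tick arrives */

        printf("counted %d timer ticks\n", (int)ticks);
        return 0;
    }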
                    Syllabus Topic : Process Management

1.4   Process Management

Q.    Explain the process management functions of the operating system.

-   A program resides on disk. On disk it does not require any resources. A program gets executed
    in main memory, so it has to be transferred from disk to main memory.
-   To complete its execution, a program needs many resources such as CPU time, memory and
    files. Once it is loaded in memory and starts executing, it becomes a process. From the
    computational point of view, a process is defined by its CPU state, memory contents and
    execution environment (its execution context).
-   The process manager implements the process management support for multiprogramming. In
    multiprogramming, a single CPU is shared among processes : while some processes remain
    busy completing I/O, the CPU is allocated to only one process at a given point of time.
-   Some policy is required to decide which process the CPU is allocated to; this is called CPU
    scheduling.
-   If multiple users are working on the system, the operating system switches the CPU from one
    user process to another. Each user gets the illusion that only he or she is using the system.
-   A process synchronization mechanism is required to ensure that only one process at a time
    uses a shared resource.
-   Suspension and resumption of processes and synchronization of processes are some of the
    other activities performed in process management. The program counter of a process contains
    the address of the next instruction to be executed.
-   The process management activities involve :
    1.  Providing controlled access to shared resources like files, memory, I/O and the CPU.
    2.  Controlling the execution of user applications.
    3.  Creation, execution and deletion of user and system processes (see the sketch following
        this list).
    4.  Resuming a process execution or cancelling it.
    5.  Scheduling of a process.
    6.  Synchronization, interprocess communication and deadlock handling for processes.
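As a sketch of activity 3 above (process creation and deletion) under a POSIX assumption rather
than anything specific to this book : fork() creates a new process, exec() replaces its image with a
program, and wait() lets the parent collect the terminated child.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                /* create a new (child) process      */

        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* Child : replace this process image with the "ls" program.        */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");              /* reached only if exec failed       */
            _exit(127);
        }

        /* Parent : wait until the child terminates (process deletion).         */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d finished with status %d\n", (int)pid, WEXITSTATUS(status));
        return 0;
    }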
                    Syllabus Topic : Memory Management

1.5   Memory Management

Q.    Explain the memory management functions of the operating system.

-   Memory is an important resource of the computer system that needs to be managed by the
    operating system. To execute a program, the user needs to keep the program in main memory.
-   The main memory is volatile. Therefore a user needs to store his program in some secondary
    storage, which is non-volatile.
-   Every process needs main memory, since a process's code, stack, heap (dynamically allocated
    structures) and data (variables) must all reside in memory.
-   The management of main memory is required to support multiprogramming. Many executable
    processes exist in main memory at any given time, and different processes in main memory
    have different address spaces.
-   The memory manager is the module of the operating system that handles the management of
    main memory.
-   Programs move out of main memory after completion of execution, or processes are suspended
    while waiting for I/O to complete. When main memory cannot hold all the processes, swapping
    between main memory and secondary memory is done : the memory manager moves processes
    back and forth between memory and disk during execution.
-   So it is required that the operating system has some strategy for the management of memory.
    The memory management activities handled by the operating system are :
    1.  Allocation of memory to the processes.
    2.  Freeing the memory from a process after completion of its execution.
    3.  Reallocation of memory to a program after a used block becomes free.
    4.  Keeping track of memory usage by the processes.

                    Syllabus Topic : Storage Management

1.6   Storage Management

-   The operating system abstracts from the physical properties of its storage devices to provide a
    logical view of information storage, the file. Main memory is volatile, so storage devices are
    used for the permanent storage of information.
-   The mapping of files onto physical media is carried out by the operating system, and these
    files are then accessed by the operating system via the storage devices.

1.6.1   File Management

-   Following are the necessities for long-term information storage :
    o   It must be possible to store a very large amount of information.
    o   The information should not be lost after termination of the process using it.
    o   Several processes must be able to access the information simultaneously.
-   In order to fulfil the above requirements, it is necessary to store information on disks and
    other secondary storage in units called files.
-   A file is a named collection of related information that is recorded on secondary storage.
-   The data cannot be written directly to secondary storage unless they are within a file.
    Processes can then read information from the file, and can also write new information into
    the file if needed.
-   After process termination, the information in a file should remain retained and should not
    vanish. A file should only vanish when its holder explicitly deletes it.
-   The operating system manages the files, and the file management activities consist of :
    1.  Creation and deletion of files and directories.
    2.  Providing access to files (see the sketch following this list).
    3.  Allocation of storage space for files.
    4.  Writing back-up of files on stable storage media.
    5.  Protection of files and directories.
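A hedged sketch of activity 2 (file access through the operating system), assuming a POSIX
system rather than anything specific to this text : the program creates a file, writes a record into
it, reads it back, and removes it, with every step going through a system call. The file name is
hypothetical.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *name = "example.txt";      /* hypothetical file name      */
        char buf[64];

        /* Create the file (read/write for the owner) and write a record.      */
        int fd = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }
        write(fd, "record #1\n", 10);
        close(fd);

        /* Re-open the same file and read the record back.                     */
        fd = open(name, O_RDONLY);
        ssize_t n = read(fd, buf, sizeof buf - 1);
        close(fd);
        if (n > 0) {
            buf[n] = '\0';
            printf("read back: %s", buf);
        }

        unlink(name);                           /* delete the file again       */
        return 0;
    }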
1.6.2   Mass Storage Management

-   Primary magnetic storage devices are diskettes, hard disks (both fixed and removable) and
    high-capacity floppy disks. Primary optical storage devices are Compact Disk Read Only
    Memory (CD-ROM), Digital Video Disk Read Only Memory (DVD-ROM) and CD-Recordable
    disks.
-   The hard disk is used as the main storage device in a computer. Within one hard disk unit,
    many physical disks are present : several metal platters coated with a special magnetic
    material.
-   The platters rotate many thousands of times per minute, and magnetic read/write heads move
    over the surfaces of the platters to access the data.
-   The material used for making the platters is aluminium, glass, or ceramic, and two read/write
    heads are present per platter, one for the upper and one for the lower surface.
-   The platters are arranged in a stack, because of which the read/write heads move together;
    the set of tracks under the heads at one position is referred to as a cylinder.
-   A hard disk can be partitioned, and each partition can be used as a separate drive. The storage
    capacity of a hard disk can be in gigabytes or terabytes.

1.6.3   Caching

Q.    Explain how caching improves performance.

-   The memory system can be viewed as a hierarchy of layers. The top layer contains the
    registers, which are inside the CPU.
-   As the material used to build the registers and the CPU is the same, these registers are as fast
    as the CPU and can be accessed quickly. For these registers, the typical access time is around
    1 nanosecond.
-   The second layer of the hierarchy is cache memory. The frequently used data and instructions
    are cached in this memory, which is controlled by the hardware.
-   Primary memory is divided into cache lines of 64 bytes, having addresses 0 to 63 in cache line
    0, 64 to 127 in cache line 1, and so on. The most frequently used cache lines are kept in a
    high-speed cache located inside or very close to the CPU.
-   If the cache hardware finds that the line needed by the program is in the cache, then it is a
    cache hit. If it is not in the cache, then it is treated as a cache miss, and more time must be
    paid because the request is satisfied from main memory.
-   For the cache, the typical access time is around 2 nanoseconds and the storage capacity is
    around 4 MB. For a magnetic disk, the typical access time is around 10 milliseconds and the
    storage capacity is around 1 to 4 TB.
-   In main memory, a buffer reserved for disk sectors is called the disk cache. The disk cache
    keeps a copy of the data in some of the sectors on the disk.
-   To fulfil an I/O request for a particular sector, the disk cache is first checked to see whether
    the sector is in the disk cache. If it is present, then the request is fulfilled via the cache. If it
    is not present, then the required sector is read into the disk cache from the disk.
-   The disk cache improves performance because some requests are satisfied from it. The
    property of locality of reference is used : when a block of data is brought into the cache to
    fulfil a single I/O request, it is expected that the same block will be needed again in the near
    future.
-   The recently read data remain in the disk cache. Sometimes adjacent data which are expected
    to be accessed next are also held by the disk cache, and some disk caches offer write caching
    as well.
-   Due to the disk cache mechanism, the time required to read from and write to the hard disk is
    improved. Nowadays the disk cache is part of the hard disk itself.
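The effect of cache lines and locality of reference can be felt from user code. The sketch below (an
illustration under the assumption of a typical 64-byte cache line, not material from this book)
sums the same array twice : once walking consecutive elements, and once jumping one cache line
per step, so the second pass has far more cache misses and runs noticeably slower.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)            /* 16 Mi ints, far larger than the cache  */
    #define STRIDE 16              /* 16 * 4 bytes = one 64-byte cache line  */

    static double elapsed(clock_t a, clock_t b)
    {
        return (double)(b - a) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        int *a = malloc((size_t)N * sizeof *a);
        if (!a) return 1;
        for (int i = 0; i < N; i++) a[i] = 1;

        long sum = 0;
        clock_t t0 = clock();
        for (int i = 0; i < N; i++)          /* sequential: good locality      */
            sum += a[i];
        clock_t t1 = clock();
        for (int j = 0; j < STRIDE; j++)     /* strided: a new line per access */
            for (int i = j; i < N; i += STRIDE)
                sum += a[i];
        clock_t t2 = clock();

        printf("sum=%ld sequential=%.3fs strided=%.3fs\n",
               sum, elapsed(t0, t1), elapsed(t1, t2));
        free(a);
        return 0;
    }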
1.6.4   I/O Systems

-   I/O is one of the main functions of an operating system : the operating system controls all of
    the computer's input/output devices.
-   Basically it has to issue commands to the devices, catch interrupts, and handle errors.
-   It should also provide an interface between the devices and the rest of the system that is
    simple and easy to use.
-   The I/O code represents a significant fraction of the total operating system. Computers
    operate with many kinds of devices : storage devices (disks, tapes), transmission devices
    (network cards, modems), and human-interface devices (screen, keyboard, mouse).
-   Following are the components of the I/O subsystem :
    o   Buffering, caching, and spooling, which are memory management components.
    o   The device driver interface.
    o   Drivers for specific devices.

                    Syllabus Topic : Protection and Security

1.7   Protection and Security

-   Protection mechanisms refer to the particular operating system mechanisms which are used to
    protect information, files and resources in the computer.
-   Policy means whose data should be protected from whom; mechanism is how the system puts
    these policies into effect.
-   In some systems, a program called the reference monitor is used to impose protection. Any
    attempt to access a resource is verified by the reference monitor to check whether it is legal
    or not. The reference monitor makes its decision on the basis of a policy table.
-   Modern protection systems were developed to improve the reliability of any complex system in
    which the use of shared resources is involved.
-   There are many reasons to offer protection. The most obvious is the need to avoid mischievous,
    deliberate violation of access restrictions. More generally, it is needed to make sure that each
    program component in the system uses system resources as per the defined policies, to ensure
    the reliability of the system.
-   Policies for making use of the resources of the computer system are put into effect by the
    mechanisms which are offered by protection.
-   Some of the policies are included in the design of the system. Other policies are decided by the
    management of a system. Many policies are defined by the individual user of the system to
    protect their own files and programs.
-   A protection system should be flexible enough to offer and put into effect different types of
    policies.
-   Applications may want to state their own policies about resource use, and the protection
    mechanisms designed should allow this. The resource use of applications can change over a
    period of time. Hence, instead of relying totally on the operating system, an application
    programmer should also use protection mechanisms to protect the resources the application
    creates against misuse.
-   The system is secure if its resources are used and accessed as intended under all conditions.
    Total security is impossible to achieve; we must have mechanisms to make security breaches
    an exceptional occurrence, rather than the rule.
-   The challenge in developing operating system security is to design security mechanisms that
    safeguard process execution and the generated data in an environment with complex
    interactions.
-   While protection of the system relates to the internal environment, security, on the other
    hand, requires consideration of the external environment as well; security encompasses
    information security against outside threats.
-   The overall security problem covers the nature of the threats, the nature of intruders, and
    accidental data loss.
RPT Operating System (MU - Sem 4 - IT)
— - - verview of Operating System
■4 3. System availability
Process often switches from user area to kernel area,
- System should be usable at any time. Nobody should tnat is, from one domain to other.
disturb the system to make it unusable. The kernel part has access to a different objccls

- Denial of service attack is increasingly common to


make system unavailable.
Protecting individuals from misuse of information _ _ Sy,1abua Topic : Distributed Sya~
about them is called privacy.
1.8 Distributed System
- This quickly gets into many legal and moral issues.
System should also ensure the privacy to individual A computer network is defined as a set of
user. communicating devices that are connected together by
communication links.
The system contains many hardware objects such as
These devices include computers, printers and other
CPUs, memory segments, disk drives, printers,
devices capable of sending and/or receiving
magnetic tapes and many software objects such as
information from other devices on the network.
processes, files, databases, or semaphores.
These devices often called as node in the network. So
Each object can be referenced by its unique name. On computer network is interconnected set of autonomous
every object a finite set of operations can be performed. computers.
For example, WAIT and SIGNAL on semaphores and A distributed system is defined as set of autonomous
READ and WRITE on files. computers that appears to its users as a jingle coherent
A system should enforce a mechanism, to restrict the jys/em. Users of distributed system feel that, they are
processes from accessing the needed objects for which working with a single system.
they are not unauthorized. Distributed system is like multicomputers spread
- The mechanism should also ensure to restrict processes worldwide. Each node in distributed system is having
to a subset of the legal operations when that is needed. its own CPU, RAM, network board. OS, and disk for
paging.
For example, process P has a permission to read file F
but not of performing write operation on it. Main characteristics of distributed system
A set of object and rights pair is called domain. Each Q. What are the characteristics of distributed system?
pair denotes an object and some subset of the Explain.
operations that can be performed on it,
- A distributed system comprises computers with distinct
- One domain corresponds to one user and specify the
architecture and data representation. These
permissions to user for certain activities. Consider the
dissimilarities and the ways al) these machines
following three domains.
communicate are hidden from users.
- It is possible for the same object to be in multiple
- The manner in which distributed system is organized
domains. [Read, Write, execute j rights are available on
internally is also hidden from the users of the
each object. At a particular time of execution, each
distributed system.
process executes in some protection domain.
- In that domain there is some set of objects it can access, The interaction of users and applications with
and for each object it has some set of rights shown in distributed system is in consistent and identical way, in
square brackets, spite of where and when interaction occurs.

- During execution, processes can go from one domain to A distributed system should allow for scaling it.
other domain. The rules for domain switching are very Distributed system should support for availability. It
much depends on and varies from system to system should be always available to the users and applications
- In Unix every process is defined by user-id and group- in spite of failures.
id(uid.gid). Failure handling should be hidden from users and
- Two processes with the similar (UID, GID) applications.
combination will have access to precisely the same set
of objects.

Scanned by CamScanner
eratinq s

JdPF Operating System (MU - S e T


ex’amp* of co'XX one
sepor01 n
Comparison of muttlprocaaaoh muttlcomp * 'i7 internet communicate by excha,,
Machines
and distributed ayatem packets.
MulUcornput«< l i n s address of source and destiny
Multlprocewx
Parenwlara Each P** et CO1
routing tables. It extracts addr« s .
Router contain;
CPU, RAM, net
Comp*’** i n c 0 ming Packet ano look — up — the table to find
Node CPU cfxnput ___ I on and thus to
interface
configuration
outgoing line to send the P
pef
exC Full s*
Shared
Al stared
Node node
Maybe disk This'prccess is repeated until the
penpherafs
Possibly
Same room
Same rack
Location worldwide are very dynamic
X uZg° ““ ““
Dedicated
Traditional tinuousfy as routers and links go down and
Internode Shared H A M
network back up and as traffic conditions change.
interconnect
communication

Multiple. same
Possibly aN Protocol is set of rules by which Compaq
One, shared
Operating
different communicates with each other.
systems
Each node Many protocols are present such as router.
One, shared
One, shared
Filesystems protocols, host-host protocols, and others. Prot«;01
has own
stack layers different protocols on top o f one another
Many
Administration One organization f One This protocol stack is used by all modem network
organization
Different layer in protocol stack deal with differ
issues.
Network contains computers with different
architectures (heterogeneous). Most of the distributed systems use the Internet as a
base. Hence, these system uses two important Internet
this heterogeneous environment, a middleware layer is protocols: IP (Internet protocol) and TCP
often placed between higher layer that comprises of (Transmission Control Protocol),
user and applications and lower layer comprising the I P (Internet Protocol) is a datagram protocol. In IP, a
operating system and communication facilities. sender sends datagram of up to 64 KB over the network
- This organization of distributed system is called as and no guarantees are given for its delivery.
middleware is shown in Fig. 1.8.1.
Computer A Computer B Computer C
The datagram may be fragmented into smaller packets
and travel independently, possibly along different
routes.
Distributed Applications

Middlewana Services * assembled in correct order as per sequence number and


delivered to the application.
Local 0$ Local O S Local OS
IP protocol has two versions, v4 and v6. Version 4(V4)
are currently in use and v6 is up and coming. IP v*
Fig. 1A1 : Distributed system organized as middleware packet starts with a 40-byte header that contains eat
In Fig. 1.8. J . applications are tunning on three different 3 2 bit source and destination address with other fields.
machines A, B and C in network These are called IP addresses and routing is carried
As distributed systems are built on top of computer out using these addresses.
networks, which are of two types LANs (LocaX
Networks), and WANs (Wide Area Network
intereT t ‘ ° ffer re iable
‘ unication in *
LAN covers one room, building or campus WAN rehable
(Transmi
J** — i° Control Protocol),
communication, TO
i s prese nt on top"
Ole world Ethemet is

Scanned by CamScanner
Overview of Operating Systern
\ 1?t |nq System (MU -Sent 4 - I T )

IP t o always Types of &


yCP mikes
n• use of process
icmo|c
offer connection-oriented
listen the incoming Special Purpose Systems |
c ion on port number.
1, Real-Time Embedded Systems

and port number. 2. Multimedia Systems


--------------- i
Sender first estat to come out 3. Handheld Systems
over that conneci
( hc other end und
Fig. CL3 : Special purpose systems
C
TCP gives' this S UEirantee b y u s i n g s e q u e n c ?
jILimOLTStchecksums, and retransmissions of
. 1 .9.1 Real-Time Embedded Systems
incorrectly received packets.
Embedded computers are the most common form of
r . machine is identified by IP address and it is difficult
computers. These devices are used in every field of our
manage such list of huge number of IP addresses,
(Domain Name System) was invented as a life, from car engines and manufacturing robots to
DNS
database that maps ASCII names for hosts onto their IP VCRs and microwave ovens.
addresses. - These devices perform very specific tasks. These
This naming system pemtits the mail program on the embedded systems differ significantly.
sending machine to search the destmation host s IP . These can be general-purpose computers, running
address in the DNS database, establish a TCI standard operating systems such as UNIX or Linux
connection to the mail daemon process there, and send
with special-purpose applications to implement the
the message as a file. functionality.
The user-name is sent along to identify which mailbox
- Some of the systems are hardware devices with a
to put the message in. special-purpose embedded operating system providing
A tightly-coupled operating system is called as a just the functionality desired.
distributed operating system (DOS), and it manages
multiprocessors and homogeneous multicomputer. - The embedded systems are becoming more
multifaceted and complex today. Also these systems
The main objective of distributed operating system •* <°
will affect our life with more involvement.
hide the facts of managing the underlying hardware
such that it can be shared by multiple processes. _ This means they will bear more and more
responsibilities on their shoulders to solve real lime
problems to make our life easier.
systems. - So real time operating system needs to be effective to
- Management of the underlying hardware is an manage more complex real time applications.
important issue for a network operating system.
- Real time operating systems must respond quickly.
- The difference from traditional operating systems These systems are used in an environment where a
comes from the fact local services are made available large number of events (generally external) must be
to remote clients. accepted and processed in a short time.
- Real time processing necessitates quick dealing and
Syllabus Topic : Special Purpose Systems
characterized by providing instant response. For
example, a measurement from a petroleum refinery
1,9 Special Purpose Systems
indicating that temperature is getting too htgh and
might demand for immediate attention to avoid an
explosion.
Apart 'from general purpose computer system, some _ In real time operating system swapping of programs
classes of computer systems have limited functions and ns from primary to secondary is not frequent.
objectives are deal with limited computation domains. These
_ Most of the time, processes remain in primary memory
are shown in Fig. C l .3.
in order to provide quick response, therefore, memory.

Scanned by CamScanner
Qveryi

ini _ _ __ ____ . s delivered to de sltt


a 8I,p, O
MU - Sern4 - ’ Multin-“ ‘X“ d directed toward
D,p
demanding personal « ““ As M(J cellar telephones as We|J

management in rea! time


S,em Sh
compared io other system operfltW g sysIem ‘“‘Toneme * ° Uld
The design t* ,„,™.dia systems.
The primary functions of the real
are : 9
Handheld Sy*™"
Jj CPU and other resources
Managementprrcments
of the of an application. .S andMjdsystenm.
fulfill the irqui Write m

:
Synchronization' —
witfl and responding to the _J* for example pabi

system event. ng jesses and

o Efficient movement or □
out coordination among these processes.

In addition to these primary T


operating systems.
accond-y functions that am no. compd -y b
included to enhance die performance. Tbeseam
I. to provide an efficient management of KA ..
amount of memory, slow processors, and smal. d isplay

2. To provide an exclusive access to die computer screens.


resources. Physical memory size of these devices is bet Wee „
ftw more examples of real lime processing are : 512 KB and 128 MB.
1. Airlines reservation system. Therefore it is a job of operating system
2. Air traffic control system. ,! applications to manage manor efficiently.
I. Systems that provide immediate updating. j

4. the speed of the processor used in the devices.


on stock prices. Handheld devices require faster processors with
5. Defense application systems like as RADAR.
compare to PC.

-> 19.2 Multimedia Systems Faster processors need more power. So it is obvious
that, handheld devices require large battery size and
Apart from conventional data, now days operating
will occupy more space for battery.
system should be able to handle multimedia data.
Therefore most handheld devices use smaller, slower
Multimedia data comprises audio and video files as
well as conventional files.

Multimedia data should be delivered to the application As a result, the operating system and applications must
in defined time constraint. E.g. Video frames must be be designed not to toll the processor.
streamed as 25 frames per second.
The final issue deal with program designers fof
Multimedia depicts a broad range of applications that
handheld devices is I/O. A short of physical space
are in well-liked use today.
restricts input methods to small keyboard: , handwriting
These comprise audio files such as MP3 DVD movies
vdeo conferencing, and short video clips of movie recognition, or small screen-based keyboards.
previews or news stories downloaded over the Internet rnall display screens also provide constraints on
Multimedia applications may also , output options.
broadcasting over the World Wide Web Fn
— u! display for a handheld device limits to 3
broadcasting of speeches or sporti „ g evems
Multimedia applications include comhinw ,

"" p’8” ■“
b
audio and video. For example, a movie m a”" °*h
separate audio and video tracks. * C °" S ' S ‘ Of
•*

Scanned by CamScanner
Overview of Operating System
/gTcoerabng (MU - S e n 4 : IT
This involves the use of I/O. During execution,
Ayllnbus Topic: Operating System Services program requires to perform I/O operations for input or
output.
Until I/O is completed, program goes in waiting state.
-> (May 16) I/O can be performed in three ways i.e. programmed
I/O. Interrupts driven I/O and I/O using DMA. The
• Torvices provided by operating system.
Q.
M U ■ M a y 2016. 5 M a r k s
underlying hardware for the I/O is hidden by operating
system from users.
Following are the six services provided by operating These I/O operations make it convenient to execute the
system for efficient execution of users application and to user program.
Operating system provides this service to user program
f Operating System | for efficient execution.
Services |
-4 4, File System Manipulation
1. User interface - Program takes input, processes it and produces the
output. The input can be read from the files and
■ 2. Program Execution produced output again can be written into the files.
This service is provided by the operating system. All
files are stored on secondary storage devices and
4. File System Manipulation manipulated in main memory,
- The user does not have to worry about secondary
t 5. Communications storage management.
- Operating system accomplishes the task of reading
a f 6. Error Detection
from files or writing into the files as per the command
specified by user,

1. User Interface - Although the user level programs can provide these
services, it is better to keep it with operating system.
User interface is essential and all operating systems
- The reason behind this is that, program execution speed
provide it. Users either interface with the operat.ng
is fundamental to VO operations.
system through command-line interface or graphical
5, Communications
Cooperating processes communicates with each other.
Command interpreter executes next user-specified
If communicating processes are running on different
command. A GUI offers the user a mouse-based machines then messages are exchanged among them
and they gets transfer through network.
■4 1 Program Execution This communication can be realized by making the use
- The operating system provides an environment to run of user programs by customizing it to the hardware
that facilitates in transmission of the message.
users programs efficiently.
Customization of the user programs can be done by
- The resources needed Io the programs to complete means of offering the service interface to operating
execution are provided by operating system ensuring system to realize communication among processes
optimum utilization of computer system. distributed system.

- Memory allocation and deal location, processor In this way the communication service will be provided
allocation and scheduling, multitasking etc functions by operating system and user programs from will be
are performed by operating system. free from taking care of communications.

- The operating system has all rights of resource -4 6. Error Detection

management. User program does not given these rights. - In order to prevent the entire system from
malfunctioning, the operating system continually keep
-4 3. I/O Operations
watch on the system for detecting the errors.
- Each program requires cannot produce output without
taking any input.

Scanned by CamScanner
Overview of Operatii

l r Operating System (MU • SernjjJT


Functions of Operating System
User programs kept free from such error detection to
improve the performance.
-------E x p | a i n different functions
On the contrary, if this right would have been kept 5, No-
MU ■
with user programs, most of the user program would
have wasted time in error detection and actual work
would be minimized resulting in performance having its own collection of defined inputs and Qutp
degradation.
These different modules or components of I
Operating system needs to carry out complex tasks system cany out specific tasks to offer the M
C0

Such task comprises deal location of many resources


such as processor, memory etc.
If these tasks are again kept with user programs, then it
Functions of Operating I
System |

1.11 Objectives of Operating System

2. Memory Management
Q. Explain different objectives of operating system.

5
M U - J u n e 2015. N o v 2015. 4 M a r k s 3. File Management

Following are the three objectives of Operating System, 4. Device Management

Three objectives -♦jiTprotection and Security


of Operating System

» e Jser Interface or Command Interpreter "


1. Convenience
7. Booting the Computer
2. Efficiency

8, Performs basic computer tasks


3. Ability to evolve

Fig. Cl .5 : Objectives of operating system Fig. C l .6 : Functions of the operating system


1. Convenience Process Management
Computer system can be conveniently used due to
operating system.
2. Efficiency I. 1 o provide control access to shared resources lib
file, memory, VO and C P U .

AH these resources are utilized by users application in


Creation, execution and deletion of user it
3. Ability to evolve system processes.

developmentjesting. Scheduling of a process.


It is also supports for flexibility by allowing for 6. Synchronization, interposes communication
addition of new system fu n e t l o n s without deadlock handling for processes.
b
with service. 2. Memory Management
Following memory management related function
carried out by OS :

Scanned by CamScanner
Overview of Operating System

L It allocates the primary memory as well as Performs basic computer tasks


secondary memory to the user and system The management of various peripheral devices such as
processes. the mouse, keyboard and printers is carried out by
2 Reclaim the allocated memory from all the operating system.
processes that have finished its execution. Today most of the operating systems are plug and play.
These operating systems automatically recognize and
3 Once used block becomes free, OS allocates it
configure the devices with no user interference.
again to the processes.
4 Monitoring and Keeping track of how much Syllabus Topic : Operating-System Interface
memory used by the process.
-► 3. File Management 1.12 User Operating-System Interface
The file management activities of operating system
| Q. Write short note on user-OS interface.
consist of :
1 Files and directories are created and deleted by User Interface
OS. User interface is essential and all operating systems
2 OS offer the service to access the files and also it provide it.
allocates the storage space for files by using Users either interface with the operating system
different methods of allocation, through command-line interface OF graphical user
3. It keeps back-up of files. interface or GUL
4. It offers the security for files. Command interpreter executes next user-specified
4, Device Management command.
The device management tasks include : A GUI offers the user a mouse-based window and
1. Device drivers are opened, closed and written by menu system as an interface.
OS.
2 Keep an eye on device driver. Communicate, Interpreter
control and monitor the device driver.
User interacts with computer system through operating
5. Protection and Security
system. Hence OS act as an interface between the user
The resources of the system are protected by the and the computer hardware.
operating system.
This user interface offered through set of commands or
rInyS Xorder to s offer
use of the
userneeded protection,fde operating
authentication, auributes a Graphical User Interface (GUI).
Through this interface user makes interaction with the
such as read, write, encryption, and back-up o applications and the machine hardware.

through operating GUI stands for graphical user interface. « J


nf icons or other visual indicators to interact wit
system. Hence OS act as an intent
2>d the computer hardware. Th.s user mterface o.fere
through set of commands or a 8 raph * ,“’“ nteraction
CUI itd ,Sis not necessary to remember
Due to GUI. ofuse user needthe
to
(GUI)- Through this interface user commands. It provi • interaction with
with the applications and the machine har ware.
know any programming language for inter*
.4 7 Booting the Computer computer. os and

_ The process of starting or restarting the computer ts


0
known as booting. o0
r * OS L ' dleZraung systems that

_ If computer is ' Jm’booting is the offer GUL


then it is called cold booting. the
process of using the operattng system
computer.

Scanned by CamScanner
Overview
======= =s===S=of Operating
= ==S5Si:5s |
1-18 1 1 1
____== = l]
— to t be calling program r—
Ing System (MU - Sem After this, control returns
==
persists executing. _ _________ _________ I
Syllabus Topic : System Calls

1.13 System Calls

q. What are system calls?

— ~ hardware" *’'™* davi


” se
7 use
O. Write short note on system calls-
MU - May 2016. 5 Mark
Fig. 1.13.1: The kernel

The interface between OS and user programs is defined


by the set of system calls that the operating system As shown in Fig. 1-13-1 the kernel is a central
of most computer operating systems. Its
responsibilities are to shown in Fig. C l .7 .

Therefore system calls make up the interface between Main re»pon»ibilrties of


processes and the operating system. Kernel

The system calls are functions used in the kernel itself.


1 Act as a standard interface to the system hardware

normal C function call.


2. Manage computer resources
Due to system call, the code is executed in the kernel so
that there must be a mechanism to change the process —» 3. Put into effect isolation between processes
mode from user mode to kernel mode.
4. Implement multitasking
In the UNIX operating system, user applications do not
have direct access to the computer hardware. Fig. C l -7 : Responsibilities of kernel
Applications first request to the kernel to get hardware
access and access to computer resources.
hardware
During execution when application invokes a system
call, it is interrupted and the system switches to kernel For example, while reading a file, application does
space. have to be aware of the hard-drive model or physi
geometry as kernel provides abstraction layer to
The kernel then saves the process execution context of
hardware.

The kernel warily makes sure that the request is valid 2. Manage computer resources
and that the process invoking the system calls has As several users and programs shares machine and i
devices, access to those resources must
synchronized.
kernel implements and ensures a fair access
If the whole thing is fine, the
-quest in kernel mode and can access The 7 SUCh Pr0CeSS
device dZiceT ° r’ thC memOly
dnvers in charge of c on tro U i n g the hardwarc

data of the calling process can . into effect isolation between processes
d
modified by kernel, as it has access to m “
space
' But - “ not execute any codeT' 7
“ ** process cannot
application, for clear security rcaS o n s . * ““
When the kernel finishes the ■
881 18
memory
-stores the process execution * * °f request
’ h

when the system call was invoked"'”' ' hat saved


n
* ’* J ' ernen t multitasking
PWcess gets the

Scanned by CamScanner
Overview of Operating System
E Oosratinq System (MU ■ Sam 4 - IT) 1-19

Some operating systems favor the block or stack


Actually. *everal Processes COD,pe
*e cons,antl ’' f ” method, because those methods do not limit the number or
system resources 1 length of parameters being passed.
scene
process for each pi
to switch from the user Syllabus Topic : Types of System Calls

There are two way’ for ** proceSS ,o !wi


*ch fr
°m
user mode to the kernel mode. These a r e :

A user process can explicitly request to enter in kernel q. Explain any five system calls.
mode by issuing a system call. MU - J u n e 2015. N o v . 2015. 6 M a r k s
During the execution of user process kernel can take These are
over to carry out some system housekeeping task.
The kernel mode is both a software and hardware slate.
Modem processors offer a advantaged execution mode, Types of System Calls |

called as Supervisor Mode in which only kernel runs.


The privileged operations are such as modifying special 1 . Process control
registers, disabling -interrupts, accessing memory
management hardware or computer peripherals. 2. Device manipulation

- ]f it is not in supervisor mode, the processor will reject 3. Communications


these operations.
4. File manipulation
_ System calls take place in different ways, depending on
the computer in use. Apart from the identity of the
desired system call, more information is needed.
- The precise type and amount of information differ Fig. C l . 8 : Types of system calls
according to the particular operating system and call.
Group Examples
Consider the example of getting the input. Sr.
No.
- We may require specifying the source, which can be
1. Process control end, abort, load, execute,
file or device. create process, terminate
- We also need to specify the address and length of the process, get process attributes, 1
memory buffer into which the input should be read. set process attributes, wait for 1
■ The device or file and length may be implicit in the • time, wait event, signal event,
allocate and free memory
call.
2. Device request device, release device,
Ways ol parameter passing to operating
manipulation read, write, reposition, get
system
device attributes, set device 1
Parameters can be passed to the operating system by attributes, logically attach or
following three different ways ; detach devices _______________|

1. Pass the parameters in registers. 3. Communications create, delete communication


connection, 1
2. If there are more parameters than registers, then the
send, receive messages,
parameters are generally stored in a block, or table, in
transfer status information,
memory, and the address of the block is passed as a
attach or detach remote
parameter in a register. This is the approach taken by devices. ____________________
Linux and Solaris.
4. File manipulation create file, delete file, open,
3. Program can place the parameters onto the stack which close, read, write, reposition,
then popped off the stack by the operating system. get file attributes, set file
attributes

Scanned by CamScanner
Overview of Operate.
1-20
4. Write
&
Sr. Group Examples
No. Writef ) is similar to read() system call, On]y
the bytes instead of reading them.
Information get time or dale, set time or
maintenance | date, It returns the number of bytes actually written
get system data. set system almost invariably "size". ’
data, 5. Create process
get process, file, or device
S3 *
attributes.
using create process system call. L
set process, file, or device
attributes The process which creates new process js
parent process, and the new processes are l
1.13.2 Some Examples of System Calls children of that process.
Soma examples of Newly created processes may in turn create .
nt
•ya fem calle
r r!
processes, creating a tree of processes. -’
1. Open
Syllabus Topic : System Programs
2. Close
1.14 System Programs •
3. Read
Q. Explain various system programs tha i ' "
4, Write ar
associated with operating system. '

5. Create process System programs offer a suitable envi roninent


Fig. Cl.» : Examples of system calls development of application programs and its esecufo.

1. Open Out of these system programs, a number of are on

Open system call request to the operating system for user interfaces to system calls and rest of the Sy
using a file.
The files path is used as argument to specify the file, Operating systems are supp | ied wjth

pie 'flags' and ’mode' arguments to this call specifies


o FH* management t System programs in
On, successful approval by operating system, it returns
category usually are responsible for rnanipulstit.
a f ,e
‘ descriptor” which is a positive integer.
of files and directories. These programs general),
create, delete, copy, rename, print, dump and li
und ’eimo” needs to be checked to get reason o f denial"
the files.
* X Close
o Status information : The status information i

lhe sys,em 1S for


example: date, time, total fra
memory or disk space in hand, number of used

Present in the system, etc.


Read
system programs simply inquire for tii
ReadfJ specifies to the OS the number of (s iae y hvf information. Other gives the information regard"!
fo read from the file opened in file descriptor "2 Plete performance, logging, and debugs
h also specify .
10 pu( the
the locati
“c programs have more complex d®’ 1

Pointed to by W. °n
Characteristically, these p rograms carries
formatting and . ,a

Scanned by CamScanner
1-21 Overview of Operating System

files or other output devices or display it in a Systems required for debugging the higher-level
window of the graphical user interface. languages or object code arc also offered.
o Communications : Some of the system programs
o File modification : System program such as text
offer the way for creating virtual connections
editor is used to create and modify data or
information in files stored on disk or other storage among processes, users, and computer systems.
They permit users to exchange messages, to
devices. Some particular commands are used to
browse web pages, to send electronic-mail
look for contents of fifes or carry out conversions
messages, remote login and transferring the files
of the text.
from one machine to another.
o programming-language support : The language
translators such as compilers, assemblers, Just like the system programs, many applications too

interpreters come with operating system to are the part of operating system.

translate the user programs to object code. These programs are called as system utilities or

o Program loading and execution : After application programs.


translation of the user program in object code it It comprises the applications such as web browsers,
needs to be loaded into memory for carrying out word processors and text formatters, spreadsheets,
the execution. The different types of loaders such database systems, compilers, plotting and statistical-
as absolute loaders, relocatable loaders, linkage analysis packages, and games.
editors, and overlay loaders are offered by the
system in order to load the program in memory.

Comparison between System Program and Application Program

Q. Compare system and application program.


Application Program/Software
Parameters System Prog ram/Sy stem Software
Sr.
No.
These software helps the user to carry
Definition System programs are designed to manage and control
1. a specific tasks as per users
computer hardware to help the users program for efficient
requirement.

Its purpose is specific.


Purpose It is general-purpose.
Environment required for application
Environment System software is itsen — program is created by system
environment to execute itself and other application software.
programs. -------------------------- --------------------------------------
ft controls particular task for which it
Responsibility ft is responsible to control and manage entire computer was intentionally designed.
system. _________________ ________ __________
Its execution is as per the need of
Execution ft executes all the time when system is on to offer the user. ______—
services to user programs.
Word processors or any software
Examples Operating system, compilers, loaders, text editors. designed to perform specific task.-------
6.

Scanned by CamScanner
1'22
MU -

1.15 Operating System Design and


Implementation ________
SyS
There are many '6 er example of the mechanism for J
implementation of opending fl— ™
nolicy and mechanism
lf
- “‘Xel’y *en mechanism can be used to
1.15.1 Design Goal* S P
Jon that I/O-bound jobs shou ld J
d

Explain design goals of oporadng system Jv over CPU-bound jobs or to support the

initial problem that needs to be tacen


free
goals and specifications. tem The essential set of P<>>i P rimiti « bujj

microkernel design.
_ Consequently, it allows to add more
mechanisms and policies by means of userJ
kernel modules or by means of user pregr j
themselves.
system. The allocation of resources should be on the bad
i. These
policy decision.
requirements are categorized in two fundamental
1,15.3 Implementation
groups: user goals and system goals.
q. Explain implementation of operating sys
use, simple to learn and to make use or u. reiwuis, design.
and protected and having high performance.
Traditionally assembly language was used
Due to lack of general agreement on how to accomplish implement the operating system.
these goals, the above properties are not useful in the Now high level languages arc used to implement I
design of system. operating system.
Person responsible to design, create, maintain, and Due to this code is easy to read, understand and del
operate the system also desires the properties like: and implementation process became faster.
Simple to design, implement, and maintain, flexibility, After recompilation operating system can be ported
reliability, error free, and efficient. Such requirements other machine.
are unclear and may be inferred in different ways. On the other side, if high level languages are use(
Single solution to the problem of defining the implement the operating system the speed will
requirements for an operating system does not exist. reduced and storage required would increase.
The different systems in existence show that different These issues are negligible as assembly W
requirements can leads to a large diversity of solutions routines can be developed for large progrnins
for different environments.
compilers can optimize the code.
1.15.2 Separating Policies from Mechanisms The operating system routines responsible
bottleneck can be replaced by assembly W
- MecW sms decide the way of doing something
whereas policies determine what will be done. In this way performance can be improved. Use of*
It is necessary to keep policies
data structures and algorithms also impro
performance.
mechanisms to achieve the flexibility
system performance.

Scanned by CamScanner
1-23 Overview of Operating System
0 Sem4
I TooeratingSy - IT)
1,16.1 History
Bottlenecks can be identified by monitoring system
pc rformance.
Internet standards virtual machines were also present
n atinc system should include the code to calculate
earlier. Around I960, IBM developed two hypervisors
Xdisplay measures of system behaviour.
SIMMON and CP-40 which was research project.
In many systems, the operating system carry out this
CP-67 was reimplemented version of CP-40. CP-67
iob by producing trace listings of system behaviour.
AU necessary events are logged with their time and formed the CP/CMS which was virtual machine OS for
important parameters are written to a file. the IBM System/360 Model 67. In 1972, CP-67's
reimplemented version was launched as VM/370 for
Syllabus Topic: Virtual Machines the System/370 series.
In 1990 IBM replaced System/370 by System/390. In
all this journey, although machines were renamed but
1,16 Virtual Machines underlying architecture remained unchanged to
maintain backward compatibility.
q. What is virtual machine? There was better improvement in hardware technology
and newer machines were bigger and faster as well.
By means of CPU scheduling and virtual machine
Over the period of time, although hardware technology
(VM) techniques, host OS can produce illusion that
improved, but virtualization was there and supported
each process has its own processor and memory. by all the machines.
- This memory is virtual memory. The VM offers an In 2000, IBM released the . z-series which was
interface that is the similar to the underlying bare backward compatible with the System/360. Z-series
hardware. had supported 64 -bit virtual address spaces.
The processes running are the guest processes and each All of these systems supported virtualization decades
process gets virtual copy of the underlying computer. before it became popular on the x86.
This guest process is a operating system. The early releases of OS/360 were strictly batch
In this way single machine runs the multiple operating systems. But, due to requirement, many 360 users,
systems in parallel, each in its own VM, decided to have timesharing, so various groups, both
From availability and security point of view, using inside and outside IBM decided to write timesharing
separate computer to put each service is more systems for it.
preferable by organizations. If one server fails, other The TSS/360, the first time sharing system, arrived late
will not be affected. and it was so big and slow that few sites converted to it.
It is also beneficial in case if organizations need to use It was finally neglected. And it its development cost
different types of operating system. was very high around some $50 million (Graham,
However, keeping separate machines for each service is 1970).
costly. Virtual machine technology can solve this But a group at IBM’s Scientific Center in Cambridge,
problem. Although this technology seems to be modem Massachusetts, produced a radically different system
but idea behind it is old. that IBM eventually accepted as a product, and which
- The VMM (Virtual Machine Monitor) creates the is now widely used on its remaining mainframes.
illusion of multiple (virtual) machines on the same
This system, originally called CP/CMS and later
physical hardware.
renamed VM/370 (Seawright and MacKinnon, 1979),
- A VMM is also called as a hypervisor. Using virtual
was based on an astute observation :
machine, it is possible to run legacy applications on
o A timesharing system provides
operating systems no longer supported or which do not
work on current hardware. o Multiprogramming and
Virtual machines permits to run at the same time o An extended machine with a more convenient
applications that use different operating systems. interface than the bare hardware.
Several operating systems can run on single machine The essence of VM/370 is to completely separate these
without installing them in separate partitions. two functions.
Virtualization technology plays major role in cloud
computing.

Scanned by CamScanner
Pe
Operating Sys tarn (MU - Sem 4 - IT] 1-24 Overview of ° gtin-

- The heart of the system, known as the virtu-1 machine monitor, runs on the h-rdwam
multiprogramming, providing not one, but several virtual machines to y up, ds ,
Fig. 1.16.1, \
Virtual 370s

' \ __ System Galla here

I O Instructions CMS CMS I Trap here


-* . CMS
hflre
T
Trap here VMT370

370 Bare hardware ... - ■

Fig. 1.16.1

However, unlike all other operating systems, these


1.16.2 Benefits
virtual machines are not extended machines, with files
and other nice features. Q. What are the benefits of virtual machine?
Instead, they are exact copies of the bare hardware,
iimes
including kemel/user mode, I/O, interrupts, and
Several virtual machines are protected f roni
everything else the real machine has.
and also host system is protected f rotn
To maintain the backward compatibility, the defects i n
machines.
Intel 386 were automatically carried forward into new
CPUs for 20 years.
damage this operating system but not other
Hence virtualization has been a problem on x86
operating system. Also host operating system
architecture. 11
affected by this v i r u s .
The same instructions behave differently when
The direct sharing of resources is not present Ths f
executed in kernel mode with compare to their L
sharing is possible.
execution i n user mode, for example, instructions
carrying out I/O and changing MMU settings. The network of VMs is possible where infamfc
All these instructions are called as sensitive sharing can be done over virtual commu ,

instructions. Some other instructions cause trap i f run network. The research and development in opera l
i n user mode called as privileged instructions. system is possible due to virtual machine concept.

If sensitive instructions are subset of privileged System remains unavailable till changes made rr
instructions then machine is virtualizable. tested in operating system.
In 2005, Intel and AMD added virtualization in their l i m e required for this is system development time.:
CPUs. On the Intel CPUs it is cal ied case of virtual machine, system development is done
(Virtualization Technology); on the AMD CPUs it is virtual machine instead of on a physical machine.
called SVM (Secure Virtual Machine).
The virtualized workstation permits for quick potM
and testing of programs in changeable enviromnatt

container, it carry on running there unti) it causes , . he same Wa


y* Quality-assurance engineers can '
ir appJications
exception and traps to the hypervisor, for example b“ r* in several environments
ymg, powering. and maintaining a computer fot
executing an I/O instruction. ’
environment.
The set of operations that trap is controlIed

hardware bitmap set by the hypervisor. by a


rCC
VM°h ° PUmizall0n is
possible by creating
y combining two separate physical machine

Scanned by CamScanner
nrwating System ( M U Sem 4 - IT) i -25
Overview of Operating System

In this manner, two lightly loaded system can be - The creation of exact copy of underlying machine
converted in one heavily loaded system. needs most of the work,

For extensive acceptance, the design of virtual - This underlying machine has user mode and kernel
machines must be standardized with the intention that mode, Since VM software is OS, it can run in kernel
mode.
any virtual machine will run on any virtualization
Platform. The VM itself can run in user mode. It is necessary to
have virtual user mode and virtual kernel mode.

be successful in combining virtu al -machine formats. Both of these modes must run in physical user mode.
Just like user mode to kernel mode transition on real
1,16.3 Simulation machine, there should be transition from virtual user
mode to virtual kernel mode on VM.
Because of virtualization, guest OS and applications
believes to be executing on native hardware. This transfer can be carried out as follows :
Simulation is other system emulation methodology like - System call by program running on VM in virtual user
virtualization in which host system has one architecture mode causes transfer to the VM monitor in the real
machine.
and compilation of guest system was carried out on
other architecture. VM monitor then alter the register content and program
counter for VM to simulate the effect of system call.
This is just like running instructions on new computer
system that were compiled on the old computer system. - The VM is the restarted which is now in virtual kernel
mode. Virtual I/O may take less or more time than real
- The programs could be executed in an elmulalor that
I/O. As CPU is being multiprogrammed among many
converts each of the old systems instructions into the
VM*s, they can be further slows down. For
native instruction set of the new computer system.
virtualization, hardware support is also important. All
1,16.4 Para-Virtualization general purpose CPU supports for virtualization.

Para-virtualization offers to the guest similar but not 1.16.6 Examples


identical preferred system o f the guest.
Following two, VMware workstation and java virtual
mrwiifv inc
the guest to run on Para-
11 is necessary to mooiiy g machine are the examples of VMs.
virtualized hardware. Examples
of Virtual machine* [
Due to this resources can be utilized more efficiently. A
, matl virtualization layer is required. Solans 0
(A) VMware
operating system contains zones or contamers
produce virtual layer between OS and appheattons. ~ (B) The Java Virtual Machine |

In this case, hardware


virtualized so that processes

within containers believe that only they arc Fig. C L I O : Examples of virtual machine

system. -> 1 .16.6(A) VMware _____________


Containers can be more than one each having its own
Explain VMware in detail. ----------------------------------1
applications, network stacks, network address
----------------L ~ □ atablished marketable
ports user accounts, and so on. _ VMware workstat.cn is a establ. he

CPU resources are partitioned among the containers


tvi ,machines
cSXhXJ -
and system wide processes. VMware Workstation tuns as an application on a host

1 16.5 Implementation OS Examples of host OS are Widows or Linux.

is difficult.

Scanned by CamScanner
Overview of Operas
---- --- »
1*26
tern (MU - Sem 4_J
pointer arithmetic, w ic
to run a number of
This permits this host system
in parallel as
different guest operating systems
his verification is successful then it i s
Java interpreter- The JVM also proves
Appikalfon
host OS. Three collection automatically-
- I n Fig, 1.162. Linux is running Windows
as NT. and The implementation of JVM can be carried
operating systems, free
software on the top of host OS or as a pan
Windows X P are running a-s
App|icatK>n browser.
Application
Appltoiton
The interpreter interprets bytecode one at a M
Guest OS
Guest O S faster interpretation is achieved by just in A
Guest O S (Windows X P ) -t)
(Windows NT)
(free BSD) Virtual CPU. compiler.
Virtual CPU.
Virtual CPU, Memory
Memory
Memory and devices
and devices and devices
and subsequent invocation of the methods a,c
Virtualization Layer

It avoids bytecode interpretation in SU j


Host OS (Linux)
invocation.
Hardware

Fig. 1.16.2 : VMware Architecture out in chip which is faster than

The virtualization layer offer abstraction of physical


hardware into isolated VMs which are running as guest
operating systems. .class files of Java Class Loader Java api
Program
As shown in Fig. 1.16.2, each VM has its o w n virtual
memory, CPLF, devices and network interfaces etc.

The guest file is the copy of host OS file in file system. Java
Interpreter
The guest instance of file is separate copied file due to
which protection of guest instant against disaster is
achieved. •
Host operating system
-> 1.16.6(B) The Java Virtual Machine (Windows. Linux, etc)

0, Write short note on JVM.


Fig, 1 . 1 6 3 : The java virtual machine
1be Java is object oriented programming language in
which program contains one or more classes. 1.16.6(C) The .NET Framework
The compiler produces architecture independent
The .NET Framework includes set of class libraries
bytecode for each class file.
an executing environment which creates platform
This bytecode can run on any implementation of the
developing the software.
java virtual machine.
A program written for .NET framework does
The JVM has class loader and java interpreter. The java
interpreter runs the architecture independent bytecodes
The execution of the program can be carried
successfully by any architecture which imp
NET.
<3

Once the loading of the class is done It k ' k <


ST 2

squired for this execution is abstra


‘ J 1* ” ' on environment and VM i s provided
alld Java
b ode and does.......... J as intimidate between execution environ
derlymg architecture.

Scanned by CamScanner
1-27 _____ Overview ot Operating System

1 ,17.1 Failure Analysis


The NET framework contains common language
- Operating system writes error information in log file or
The programs are written i n C# and VB.NET and takes core dump in case of failure of process.
compiled into Microsoft Intermediate Language The core image is maintained in file for afterward
analysis.
language. Debugger tool probes the executing programs or core
The compiled files (assemblies) having extensions as dump. It is challenge to debug the user level process.
,EaC nr DLL contain MS LL instructions and As kernel is more complex and having large size, its
metadata. debugging is complex task as user level debugging
its
The CLR loads assemblies into the Applicatton tools are not available. When kernel crashes,
Domain upon program execution.
for
As executing program requires instructions, the CLR The tools for OS debugging are different than
translates the MS-IL instructions inside the assemblies process debugging.
not
into native code that is precise to the underlying I f failure occurs in file system code, then it is
possible for kernel to write log in file in file system
before rebooting. In this case, memory slate is saved on
The instructions then will carry on running as native
code for the CPU.
After
The architecture of the CLR for the .NET fmmework is
saved

C# Source Code
to
I t is necessary to m<
discover bottlenecks.
MS-IL Assembly

displaying the measures of system behavior.

Operating system produces the fisting °f system


CLR behavior, A log is ms
with thei

This log
______ an analysis program to

know system performan' and to recognize bottlenecks


Host System and inefficten recommended improved system

- The simulation input


. CLR architecture for .NET framework can be carried out with
Traces also can facilitate people to locate

- y-Tbus TOPIC : OperaTigjB S

17 operating SystemDguggl — -
the system to find bottlenecks.
debugging in detail. _______ ________ _

1 17.3 DTrace
Debugging includes finding hardware
system. Debugging activity is cam performance can _ DTrace ■ts the
the facility
I ? that permits
p r nroce
to kernel.
sses and add probes
dynamically in running user p
and software. By fixing the bugs tn system, pe
be improved,

Scanned by CamScanner
Overview of Operating „

128
The
Thc available devices, their device n Urtl > 1
rating S tem (MU* o and model of device .
using D program™ g
By giving query™™--; , hc kernel, -be mteX'pt number and characteristics.
am
language. surpd«"« irjc5 C an he determined, The preferred OS options, parameters
sysiem sure, and proves. _____ n user |evd ° used The number of buffers and size t 0 ‘
by values, the required CPU schedulmg a

CTdeWi
rhe maximum number of proce SSes
“*"‘'™' "'rbo-h the and
These (ootserN understands supported and so on.
ta mea. the rarera of lhe
A system administrator can use this infor
toolset should * aH
modify os source code.
andshoaMa. J av y i m t
The compilation of modified OS is then carried
produce output-object version as pe r
description.
ZZdl -se requirement, and oner safe,
Io another approach, tables are created and mod
dynamic, low imp«t debugging environment.
selected from precompiled library.
'Syllabm Topic : Oporatlng Systom GeneroUon__
next approach, complete table driven system Cj#
1,18 Operating System Generation created.

[ g, Explain operating system generation. _J


or link time*
- Operating system can be designed for different
machines at diversity of sites with varied peripheral Syllabus Topic : System Boot
configuration.

- System then configured for particular computer site


1.19 System Boot
called as system generation (SYSGEN).

- The distribution of OS is usually done on disk, on CD- Q. Explain booting process of the system in detail
ROM or DVD-ROM. or as an "ISO" image.

This distributed OS is a file tn the format of a CD-


use by hardware computer cannot be ready to use
ROM or DVD-ROM.

The SYSGEN program reads from a given file, or The process of starting a computer by loading
request the operator of the system for information
relating to the exact configuration of the hardware A small program called as the bootstrap program <
system, or query the hardware straight way to know
about components present there.
memory, and starts its execution.
The following information is needed to determine.
O n some computer systems, for
o The CPU is to be used. Options installed like
two-step process.
nSttUC,i floatin
.T 2 "“ 8 Point arithmetic In first step simple bootstrap loader obtains a«
case of multiple implex boot program from disk, and this
CPU system.

The way of fu..., auullK Program then loads the kernel. -s

of sections, or "partitions " in t» e i v e s a reset


. event, for example, Q
and what will go into each partition? “
U p Or rebooled
The totaJ men
loaded ’ the instruction regi '
° ’ory available. This is d L
mferencmg the ntentoiy j done
wecution Sta rts d th e defl,le<1 memOty l0Ca,i0
"'
address is generated ' dJegaJ
Cj

PragrJ u inReea L f ° , " P pragram


<"'Only Memorv fROMl.

Scanned by CamScanner
Overview of Operating System

- When the system starts up, the RAM initially is in an unknown state. ROM is suitable because it needs no initialization and cannot be infected by a computer virus.

- The bootstrap program runs diagnostics to decide the state of the machine. If the diagnostics pass, the program can carry on with the booting steps.

- It also initializes all parts of the system, from the CPU registers to the contents of main memory.

- It then starts the operating system. Several systems like cellular phones, PDAs, and game consoles store the complete operating system in ROM.

- If the operating system is small in size, storing it in ROM is appropriate, for simple supporting hardware and rugged operation.

- A difficulty with this approach is that changing the bootstrap code needs changing the ROM hardware chips.

- This problem is solved by some systems using Erasable Programmable Read-Only Memory (EPROM). EPROM is read only apart from when clearly given a command to become writable.

- The characteristics of ROM fall somewhere between those of hardware and those of software. Therefore all forms of ROM are also known as firmware.

- For large operating systems like Windows, Mac OS X and UNIX, or for systems that change often, the bootstrap loader is stored in ROM, and the operating system is on disk.

- In this case, the bootstrap runs diagnostics and has a small piece of code that can read a single block at a fixed location (say block zero) from disk into memory and execute the code from that boot block.
1.20 ExamPack (University and Review Questions)

Syllabus Topic : Introduction

Q.  What is operating system? Explain.
    (Refer section 1.1) (2 Marks) (Dec. 2014, June 2015, Nov. 2015)

Syllabus Topic : Operating System Structure

Q.  Explain monolithic system. (Refer section 1.2.1)

Q.  Explain layered system in detail. (Refer section 1.2.2)

Q.  Explain client-server model in detail. (Refer section 1.2.3)

Q.  Differentiate between monolithic and microkernel.
    (Refer section 1.2.4) (5 Marks) (June 2015, Nov. 2015)

Q.  What is Kernel? Describe briefly the approaches of designing Kernel.
    (Refer section 1.2.4) (5 Marks) (Dec. 2016)

Syllabus Topic : Operating System Operations

Q.  Explain operating system operations. (Refer section 1.3)

Syllabus Topic : Process Management

Q.  Explain process management function of operating system. (Refer section 1.4)

Syllabus Topic : Memory Management

Q.  Explain memory management function of operating system. (Refer section 1.5)

Syllabus Topic : Storage Management

Q.  Explain file management in OS. (Refer section 1.6.1)

Q.  Write note on mass storage management. (Refer section 1.6.2)

Q.  Explain how caching is implemented? (Refer section 1.6.3)

Syllabus Topic : Protection and Security

Q.  Explain protection and security mechanism of OS. (Refer section 1.7)

Syllabus Topic : Distributed System

Q.  What are the characteristics of distributed system? (Refer section 1.8)

Syllabus Topic : Special Purpose Systems

Q.  Explain various special purpose systems. (Refer section 1.9)

Q.  Compare system program and application program. (Refer section 1.14.1)

Q.  Write note on handheld systems. (Refer section 1.9.3)

Syllabus Topic : Operating System Services

Q.  Explain services provided by operating system.
    (Refer section 1.10) (5 Marks) (May 2016)

Q.  Explain different objectives of operating system. (Refer section 1.11)

Q.  Explain different functions of operating system. (Refer section 1.11.1)

Syllabus Topic : Operating-System Interface

Q.  Write short note on user-OS interface. (Refer section 1.12)

Syllabus Topic : System Calls

Q.  What are system calls?
    (Refer section 1.13) (5 Marks) (May 2016)

Syllabus Topic : Types of System Calls

Q.  Explain any five system calls.
    (Refer section 1.13.1) (5 Marks) (June 2015, Nov. 2015)

Syllabus Topic : System Programs

Q.  Explain various system programs that are associated with operating system.
    (Refer section 1.14)

Syllabus Topic : Virtual Machines

Q.  Explain virtual machines. (Refer section 1.16)

Q.  Explain VMware in detail. (Refer section 1.16)

Q.  Write short note on JVM. (Refer section 1.16.8)

Syllabus Topic : Operating System Debugging

Q.  Explain operating system debugging. (Refer section 1.17)

Syllabus Topic : Operating System Generation

Q.  Explain operating system generation. (Refer section 1.18) (May 2016)

Syllabus Topic : System Boot

Q.  Explain booting process of the system in detail. (Refer section 1.19)
CHAPTER 2

Process Management
Process concept : Process Scheduling, Operation on process and Interprocess communication;
Multithreading, Process : Multithreading models and thread libraries, threading issues; Process
Scheduling; Basic concepts, Scheduling algorithms and Criteria, Thread Scheduling and Multiple
Processor Scheduling,

2.1 Introduction

- The process manager implements the process management functions. In multiprogramming, a single CPU is shared among many processes. If many processes remain busy in completing I/O, the CPU is allocated to only one process at a given point of time.

- Here some policy is required to allocate the CPU to a process, called CPU scheduling. If multiple users are working on the system, the operating system switches the CPU from one user process to the other.

- The user gets the illusion that only he or she is using the system. A process synchronization mechanism is required to ensure that only one process at a time uses a critical section.

- Process communication, deadlock handling, suspension and resumption of processes, and creation and deletion of processes etc. are some of the activities performed in process management.

Syllabus Topic : Process Concept

2.2 Process Concept
(Dec. 14)

Q. What do you mean by process?                MU - Dec. 2014, 3 Marks
Q. Explain the concept of process.

- A program or application under execution is called a process. A process includes the execution context. A program resides on the disk; on disk it does not require any resources.

- A program gets executed in main memory. So it should be transferred from disk to main memory.

- To complete execution, a program needs many resources and competes for them. Now it becomes a process.

- From the computational point of view, a process is defined by its CPU state, memory contents and execution environment.

- A CPU state is defined by the contents of the various registers such as the Instruction Register (IR), Program Counter (PC), Stack Pointer (SP) and general purpose registers.

- A small amount of data is stored in CPU registers. Memory contains the program code and its predefined data structures.

- The heap is a reserved memory area for dynamic allocation of memory to the program at run time.

- In the stack, program local variables are allocated and return values of function calls are stored. Some register values are also saved in the stack.

- The execution environment includes open files, communication channels to other processes etc.

- Following are the components of the process :

o   The object code that is to be executed.

o   Resources required by the program to complete the execution.

o   The data on which the program will operate.

o   Program execution state.

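The stack and heap areas mentioned above can be seen in a small C sketch; the variable names are only illustrative :

#include <stdio.h>
#include <stdlib.h>

/* 'count' lives on the stack frame of main(); the buffer returned by
   malloc() lives on the heap and stays allocated until free() is called. */
int main(void)
{
    int count = 3;                                 /* local variable: stack */
    int *values = malloc(count * sizeof *values);  /* dynamic data: heap    */

    if (values == NULL)
        return 1;

    for (int i = 0; i < count; i++)
        values[i] = i * i;                         /* program data on heap  */

    printf("last value = %d\n", values[count - 1]);

    free(values);                                  /* release heap memory   */
    return 0;                                      /* stack frame discarded */
}

The local variable disappears with the stack frame when main() returns, while heap memory persists until it is explicitly freed.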
2.3 Context Switch

Q. What is context switch?

- When the CPU switches from one process to another, a context switch occurs. A context switch is the switching of the CPU (Central Processing Unit) from one process or thread to another.

- When a context switch occurs, the information of the currently executing process is saved for later use and the information of the process scheduled next is restored.

- The context of a process is represented in the Process Control Block (PCB) of a process.

- The information that needs to be saved and restored includes the address of the next instruction to be executed (program counter), CPU register contents, pointers to the memory allocated to the process, scheduling information, the changed process state, I/O state information, accounting information etc.

- While a context switch takes place, the system does not perform any useful work, so it is pure overhead on the system.

- The speed of context switching depends on the number of registers that must be copied and on the memory speed, so context switch speed varies from system to system.

Syllabus Topic : Operation on Processes

2.4 Operations on Processes

Q. What are the operations performed on process?

Operations on processes

1. Process creation

2. Process Termination

Fig. C2.1 : Operations on processes

2.4.1 Process Creation

Q. Explain process creation.

Processes are created because of the following four principal events. These are shown in Fig. C2.2.

1. System initialization : After booting, the OS creates many processes.

2. Execution of a process creation system call by a running process : The currently executing process creates new processes by issuing the create-process system call. It needs these processes to assist it in its work. Creating new processes is mainly helpful when the work to be carried out can easily be formulated in terms of several related, but otherwise independent, processes.

3. A user request to create a new process : In interactive systems, users can start a program by giving a command.

4. Starting of a batch job : In this case users can submit batch jobs to the system, possibly from a remote system. When the OS decides that it has the resources to execute another job, it creates a new process to execute the next job from the input queue.

- During its execution, a process can create new processes using the create-process system call.

- The process which creates a new process is called the parent process, and the new processes are called children of that process.

- Newly created processes may in turn create new processes, creating a tree of processes.

- In UNIX or the Windows family of operating systems, processes are identified by a unique process identifier (or pid), which is typically an integer.

- The ps command is used in UNIX to obtain complete information for all processes currently active in the system.
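On UNIX, the parent–child relationship described above can be tried out with the fork(), exec() and wait() system calls (exit() and wait() are discussed further in the next sub-section). A minimal sketch, with error handling kept to a bare minimum :

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>      /* fork(), execlp() */
#include <sys/wait.h>    /* wait()           */

int main(void)
{
    pid_t pid = fork();               /* create a child process             */

    if (pid < 0) {                    /* fork failed                         */
        perror("fork");
        exit(1);
    } else if (pid == 0) {            /* child: load a new program image     */
        execlp("/bin/ls", "ls", NULL);
        perror("execlp");             /* reached only if exec fails          */
        exit(1);
    } else {                          /* parent: wait until the child ends   */
        int status;
        pid_t done = wait(&status);   /* returns the terminated child's pid  */
        printf("child %d finished\n", (int)done);
    }
    return 0;
}

The child starts as a duplicate of the parent; execlp() then loads a new program into it, illustrating both address-space possibilities discussed in this section.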
- A process requires certain resources like CPU time, memory, files and I/O devices to complete its task.

- The subprocess also needs these resources. A subprocess can get its needed resources directly from the operating system, or it may be restricted to a subset of the resources of the parent process.

- The parent can share the resources among its children or it can divide the resources to allocate to its children.

- If a child process is restricted to a subset of the parent's resources, overloading the system by creating too many subprocesses can be avoided.

- The initialization data may be passed along by the parent process to the child process after creation of the process.

- After creation of a new process, two possibilities exist related to execution :

- The parent and its children both execute in parallel.

- Until a few or all of the children terminate, the parent will wait and will not be terminated.

- With respect to the address space of the new process, again two possibilities exist :

- Program and data of the child and parent process are the same. It means the child is a duplicate of its parent.

- Program and data of the child and parent process are different. It means the child process has a new program loaded into it.

2.4.2 Process Termination

Q. Explain process termination.

- When a process finishes the execution of its last statement, it terminates. After this it requests the operating system to delete it by using the exit() system call. Just then, the process may return an integer status value to its parent process, which receives it through the wait() system call.

- Then all resources allocated to the process, like physical and virtual memory, open files, and I/O buffers, are freed by the operating system. Termination can happen in other situations also.

- By executing the suitable system call, any process can terminate another process. Normally, the parent of the process to be terminated executes this system call. Otherwise, users could randomly kill each other's jobs.

- It is necessary that a parent knows the identities of its children. Thus, when a process creates a new process, the identity of the child process is passed to the parent.

- Execution of any of the child processes can be terminated by the parent for a diversity of reasons, such as these :

o   The allocated resource usage is exceeded by the child. (The parent must have a means to examine the state of its children.)

o   There is no longer a requirement of the task allocated to the child.

o   In many systems, if the parent is terminated then the OS does not permit a child to carry on execution. For example, the VMS system does not permit a child to carry on execution after its parent process is terminated.

- If all children of a particular parent are terminated then it is called cascading termination. After normal or abnormal termination of the parent, cascaded termination is carried out in many systems. It is initiated by the OS.

- In UNIX, the exit() system call is used to terminate the process. The wait() system call is used by the parent to wait until a child is terminated.

- The terminated child's process identifier is returned by the wait() system call. Because of this process identifier, the parent process comes to know which child is terminated.

- After the termination of the parent, the init process is assigned as parent to all its terminated children.

- This init process collects all terminated children's status and execution statistics after termination of their parent.

2.5 Process Control Block

Q. Explain role of process control block.            MU - Dec. 2014, 5 Marks
Q. Explain process control block.

- Any process is identified by its process control block (PCB). The PCB is the data structure used by the operating system to keep track of the processes.

- All the information associated with a process is kept in its process control block. There is a separate PCB for each process.

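The PCB fields detailed below can be pictured as one C structure. The sketch is purely illustrative — the field names, sizes and types are hypothetical and are not taken from any particular operating system :

#include <stdint.h>

enum proc_state { NEW, READY, EXECUTING, WAITING, TERMINATED };

struct pcb {
    struct pcb     *next;            /* pointer used to chain PCBs in lists  */
    int             pid;             /* process identifier                   */
    enum proc_state state;           /* current state                        */
    int             priority;        /* scheduling priority                  */
    uint64_t        program_counter; /* address of next instruction          */
    uint64_t        registers[16];   /* saved general purpose registers      */
    uint64_t        cpu_time_used;   /* accounting information               */
    void           *page_table;      /* memory allocation information        */
    int             waiting_event;   /* event the process is blocked on      */
    int             open_files[16];  /* list of open file descriptors        */
};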
- After a context switch, the status of the previously executing process is saved in its PCB and the status of the newly scheduled process is loaded from its PCB, so that each process resumes from the instruction at which it was interrupted.

- The PCBs of the processes which are in the same state (ready, executing, waiting) are linked together in a chain, giving a specific name to each list, such as the ready list.

- The OS checks these lists while scheduling; it acts like a traffic controller for the processes. If a device becomes free, the process which is waiting for that device is again placed in the ready list.

The typical contents of a PCB are shown in Fig. 2.5.1.

Pointer
Current state
Process ID
Priority
Program counter
Registers
Accounting information
Memory allocation
Event information
List of open files

Fig. 2.5.1 : Process control block

Pointer

This field points to other processes' PCBs. The scheduling list is maintained by the pointer.

Current state

Currently the process can be in any of the states new, ready, executing, waiting etc. as described above.

Process ID

Identification number of the process. The operating system assigns this number to the process to distinguish it from other processes.

Priority

Different processes can have different priorities. The priority field indicates the priority of the process.

Program counter

It holds the address of the next instruction to be executed for this process.

Registers

The saved contents of the CPU registers of the process.

Accounting information

This information is used for calculating the process's usage relative to other processes. It may include the amount of CPU time used, real time used, time limits, process numbers and so on.

Memory allocation

This information may include the values of the base and limit registers. It includes paging or segmentation register information depending on the memory system, the address space allocated to the process etc.

Event information

For a process in the blocked state this field contains the event for which the process is waiting.

List of open files

Files opened by the process.

- After creation of the process, hardware registers and flags are set as per the details supplied by the loader or linker. Any time the process is blocked, the processor register contents are generally placed on the stack and the pointer to the respective stack frame is stored in the PCB. In this fashion, the hardware state can be restored when the process is scheduled and resumes execution again.

2.6 Process States and Process State Transition Diagram

Q. Draw and explain process state transition diagram.

- During execution, a process changes its state. The state reflects the current activity of the process. A process can be in one of five states.
- Each process remains in one of these five states. There is a queue associated with each state of the process. A process resides on that queue as per the state in which it is.

States of process model

1. New state
2. Ready state
3. Executing state
4. Waiting (blocked) state
5. Terminated state

Fig. C2.3 : States of Process Model

1. New state

The new process is being created.

2. Ready state

A process is ready to run but it is waiting for the CPU to be assigned to it.

3. Executing state

A process is said to be in the running state if the CPU is currently allocated to it and it is executing.

4. Waiting (blocked) state

A process cannot continue the execution because it is waiting for an event to happen, such as I/O completion. The process is able to run again when the external event happens.

5. Terminated state

The process has completed execution.

The process state transition diagram is shown in Fig. 2.6.1.

[Figure : states New, Ready, Executing, Waiting and Terminated, with transitions labelled Submitted, Dispatch, Interrupt, I/O or event wait, I/O or event completion, and Finished]

Fig. 2.6.1 : Process state transition diagram

- When the process is created, it remains in the new state. After the process is admitted for execution, it goes in the ready state.

- A process in this state waits in the ready queue. The scheduler dispatches the ready process for execution, i.e. the CPU is now allocated to the process.

- When the CPU is executing the process, it is in the executing state. After a context switch, the process goes from executing to ready state.

- If the executing process initiates an I/O operation before its allotted time expires, the executing process voluntarily gives up the CPU.

- In this case the process transits from executing to waiting state. When the external event for which a process was waiting happens, the process transits from waiting to ready state.

- When the process finishes the execution, it transits to the terminated state.

2.7 Process vs. Thread

- A thread is a single sequence stream within a process. Threads are also called lightweight processes as they possess some of the properties of processes. Each thread belongs to exactly one process.

- In operating systems that support multithreading, a process can consist of many threads.

- These threads run in parallel, improving the application performance. Each such thread has its own CPU state and stack, but they share the address space of the process and the environment.

- Threads can share common data so they do not need to use interprocess communication.

- Like processes, threads also have states like ready, executing, blocked etc. Priority can be assigned to threads just like processes, and the highest priority thread is scheduled first.

- Each thread has its own Thread Control Block (TCB). As for a process, a context switch occurs for the thread and register contents are saved in the TCB.

- As threads share the same address space and resources, synchronization is also required for the various activities of the threads.

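Threads of the kind introduced in section 2.7 can be created with the POSIX Pthreads library (see section 2.13.1). A minimal sketch of two threads sharing the process's global data — which is exactly why the mutex is needed :

#include <stdio.h>
#include <pthread.h>

int shared_counter = 0;                    /* shared: one copy per process  */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock);         /* synchronization is required   */
        shared_counter++;                  /* because the address space is  */
        pthread_mutex_unlock(&lock);       /* shared by all threads         */
    }
    return arg;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);    /* each thread gets its own */
    pthread_create(&t2, NULL, worker, NULL);    /* stack and register state */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);   /* prints 2000              */
    return 0;
}

Build with the pthread library, for example cc file.c -lpthread on Linux.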
ratin em MU ■ Thread OP 6

o- Difference between Afte


process exec
Sr. Parameters Thread i» P* 1 Of prOCeSS
' h
“ ’‘so rum
It is
No* called as P r«
MS
' flight wtigiPP
Unt
ion
Program in ««’"
Definition Thread context switch takes l ess bev
sys
compared W P rocess cont
ext
Process context ' . "Lcuse it sor
Context switch because it needs only interrupt to
Th
needs interface of operating only- scl

New thread creation takes less ti sc

nation takes more time as compared to new process creation.


New Process creation
Creation compared to newthread creatio j
New thread termination takes less
New Process termination takes more tune as compared to new process termination
Termination compared to new thread termination.

All threads can share same set of


eTexecutes the same code but has its
Execution files, child processes.
own memory and file resources.

In multithreaded server implementaij


implementation If implementation is process based, then
one
blocking of one process cause the blocking of
other server process until the first process
unblocked and these are not allowed to execute
until blocked process i s unblocked.

Multiple Multiple redundant processes use more resources With compare to multiple red
resources than multiple threaded process. process, multiple threaded processes n
fewer resources.
Address space Context switch flushes the MMU (TLB) No need to flush TLB as address
registers as address space of process changes.
remains same after context switch bee

Syllabus Topic : Process Schedul ing

Ready queue
Process Scheduling CPU

l/o I/O queue


I/O request

execution. Processes ready f or


Time slice
expired

System places ail the ?nt


Child
A sses Foika
” “>e processes in the « job queue. executes
y le Slde child
~ arere d f; « >" job queue.
CPU. they kept in ««uti on and wait
Interrupt
queue Watt for an
AW from lhe job ; - occurs
interrupt

2?
— -k =Pt«respecti vedevjce * Pedlar
8 281 1 Uin8 <liaBran
Of“n
of ‘ ’
scheduling

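The ready queue and device queues shown in the queueing diagram are typically kept as linked lists of PCBs. A minimal FIFO sketch, using a stripped-down PCB that holds only a pid :

#include <stdio.h>

struct pcb {
    int pid;
    struct pcb *next;                   /* link used to chain PCBs          */
};

struct queue { struct pcb *head, *tail; };

/* Add a PCB at the tail of a queue (for example, the ready queue). */
void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* Remove and return the PCB at the head, or NULL if the queue is empty. */
struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

int main(void)
{
    struct pcb a = {1, NULL}, b = {2, NULL};
    struct queue ready = {NULL, NULL};
    enqueue(&ready, &a);
    enqueue(&ready, &b);
    printf("dispatched pid %d\n", dequeue(&ready)->pid);   /* prints 1 */
    return 0;
}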
- When process state transition take place from new to
/ ' After allocation of CPU to the process start the
ready, then there long term scheduler come into picture
I I execution. While executing the process, one of the
for scheduling purpose.
N I numbers of events could occur.
j
I I the process finishes the execution, it travels 2.8.1(B) Short-term Scheduler
I I between various scheduling queues. The operating
Q. Explain short-term scheduler.
j I system selects the processes from queues based on
I some policy. - Processes which are in ready queue wait for CPU. A
I I - This selection is carried out by the program called as short term scheduler chooses the process from ready
I scheduler. There are three types of schedulers to queue and assigns it to the CPU based on some policy.
| schedule a process. These are shown in Fig. C2.4. - These policies can be First Come First Served (FCFS),
Shortest Job First (SJF), priority based and round robin
F Three type* •ch»duteri»”l
I to Bchedute a procw | etc, Main objective is increasing system performance
by keeping the CPU busy.
Lj (A) Long-term Scheduler - It is the transition of the process from ready state to
running state. Actual allocation of process to CPU is
I ' L> (B) Short-term Scheduler
done by dispatcher.
! L+ (C) Medium -term Scheduler I Short term scheduler is faster than long tern scheduler
and should be invoked more frequently compare to
I Fig. CZ4 : Types of scheduler long term scheduler.

-> 2.8.1(A) Long-term Scheduler ■4 2.8.1(C) Medium-term Scheduler


I j~q. Explain long-term scheduler. __________ _ Q. Explain medium-term scheduler.

When programs are submitted to the system for the - If the degree of the multiprogramming increases,
purpose of processing, long term scheduler comes to medium-term scheduler swap out the processes from
know about it. main memory.
- Then its job is to choose processes from the queue and - The swapped out processes again swapped in by
place them into main memory for execution purpose. medium-term scheduler.
CPU bound processes require more CPU time and less This is done to control the degree of multiprogramming
I/O time until execution completes. or to free up a memory.
- On the contrary, I/O bound processes use up more time - This is also helpful to balance the mix of different
in doing I/O and require less CPU time for processes, some time sharing operating system have
computation. this additional scheduler.
The main job of the long term scheduler is to provide a
2.8.1(D) Comparison of Three Schedulers
balance mix of I/O bound and CPU bound jobs.
- The number of processes in memory for execution and Q. Compare the functions of different types of
degree of multiprogramming is related to each other. schedulers.

More number of processes in memory for execution Sr. Long-term Short-term I Medium-term ’
indicates degree of multiprogramming is high. Long No. Scheduler Scheduler Scheduler
term scheduler controls the degree of
1, Selects Chooses the Swap in and
multiprogramming.
processes from process from out the
If the average rate of new process creation and average
the queue and ready queue processes from
departure rate of processes leaving the system is equal
loads them and assigns it to memory.
then degree of multiprogramming is steady.
into memory the CPU. .
The Jong term scheduler is not present in timesharing for execution.
operating systems.

2*0 ePa8S n9
r ’ ad 1
_ IO u
■ «tion and communication
a
IJ
Sbort-tenn S ynch ' “ shoold satisfied wtletl
tequiremenU other Synch
Long-term
Sr. Scheuler
Scheduler is in
No. Speed
very both '"“““Tfe requued to achieve the mu W
between pr0CeSSe
Speed is and term . nrocesses do communic 1
fast short _ Independen P proce sses may need to
than the short and
invoiced
scheduler other but coope Coopemdve processes
term than term
frequent
long
information-
scheduler.
communicates through shared memory Or '
term
long scheduler.
scheduler.
passing-
of No process
of
Transition
Message naming provides both functions, u
Transition state state transition'
passing ■has
process u the further benefit that it lends
process state
to
from Ready nation in distributed systems
from Ne* t0 implemei.—
Executing. multiprocessor and un
Ready- shared-memory
in Present in time
Minimal
Not present in
systems-
sharing sharing system.
time .. „ „ are
are the two primitives used io
time sharing Following
system.
system.
Processes are
passing : |
Select a new
Supply
10 swapped in and 1 1 send (destination, message) s
reasonable process
out for 2. receive (source, message) j
allocate to CPU
mix of jobs,
frequently. balanced
such as I/O This is the minimum two operations reqjjJ
process mix.
bound and processes to send and receive the messages.
CPU bound sends data in the form of a message to another pn
It has control Reduce the indicated by a destination. A process receives di
It controls
over degree of degree of executing the receive primitive, indicating the :
degree of

multiprogram multiprogramm multiprogramm and the message.

ming through ing as it ing by - Communication by sending and receiving mess


placing allocates swapping the require synchronization. The receiver cannot tea
processes in processes to processes in message until it has been sent by another process,
ready queue. CPU for and out. The sending process is blocked until the mess
execution. received, or it is not after the send primitive ei
by process. Similarly, when a process issues a
It is also called It is also called It is used for
primitive, there are two possibilities :
as job as CPU swapping.
scheduler. scheduler. o Previously sent message is received and e
continues.
Syllabus Topic : interprocew CommuSZT
° there is no waiting message, then either 01
process is blocked until a message arrived 4
-J»~MssComm tion the process continues to execute, abando -l
attempt to receive, ■
rasa- n
* both the sender
°nblocking.
and receiver can be blo

'■Message passiimg

blnad
2,
Particul °? ° nS comraon
-
q_ Shared memory
wil1 usuaU have only one
«®*wi X ilrr
Onns
iplemented: y ‘
W>CtS8
, Modus of Interpret <■
Om
---------------- --------------Z2 '"unicati( , ri

o The blocking send and blocking receive : Until In operating system that support multithreading,
the message is handed over, the sender and process can consist of many threads.
receiver both gets blocked. These threads run in parallel improving the application
o The nonblocking send and blocking receive : performance. Each such thread has its own CPU state
After sending the message, the sender continues and stack, but they share the address space of the
its work but receiver remains blocked until process and the environment.
message is arrived to it. This permits a process to —
Threads can share common data so they do not need to
send one or more messages to a multiple use interprocess communication. Like the processes,
destinations as quickly as possible. Here receiver threads also have states like ready, executing, blocked
is in need of message so that it can resume the etc. priority can be assigned to the threads just like
execution . So it gets blocked until message process and highest priority thread is scheduled first.
arrives. — Each thread has its own Thread Control Block (TCB).
o Nonblocking send, nonblocking receive : Like process context switch occurs for the thread and
Neither party is required to wait. register contents are saved in TCB.
o The nonblocking send and non blocking receive: — As threads share the same address space and resources,
Both sender and receiver will not wait and both synchronization is also required for the various
will continue the work. activities of the thread.
Message passing system should give guarantee that
messages will be correctly received by receiver. 2.11 Types of Threads
Receiver sends acknowledgement to sender after Q. Explain user level and kernel level threads with
receiving the message. advantages and disadvantages.

- If acknowledgement not received in defined time then


sender resend the message. It also offers authentication Threads can be implemented in two ways.
service. Types of Thread* |

2.9.2 Shared Memory 1 ____________


1 . User Level Threads
- Cooperating processes require an Interprocess I .
--. . --. ......
Communication (IPC) mechanism that will allow them L* 2. Kernel Level Threads |
to exchange data and information.
Fig. C2.6 : Types of Threads
In the shared-memory model, a region of memory that
is shared by cooperating processes is established. 1. User Level Threads
Processes can then exchange information by reading — In user level implementation, kerne) unaware of the
and writing data to the shared region. thread. In this case, thread package entirely put in user
In the message passing model, communication takes space. Java language supports threading package.
place by means of messages exchanged between the User can implement the multithreaded application in
cooperating processes. java language.
Kernel treats this application as a single threaded
Syllabus Topic : Multithreading application. In a user level implementation, all of the
work of thread management is done by the thread
2,10 Multithreading package.
Thread management includes creation and termination
- A thread is a single sequence stream within a process.
of thread, messages and data passing between the
Threads are also called as lightweight processes as it
threads, scheduling thread for execution, thread
possess some of the properties of processes. Eac synchronization and after context switch saving and
thread belongs to exactly one process.
restoring thread context etc.

:
"' ; d
less time 3,14 ' oftboopp" 1
;mory
t0 process- a are ge0 erBlly TT more
S
--------destroying 316
.. Creation and «s r cost o
(nci
“ v ''
f allow * (he memory
iocaung _ —I threads
Kernel m * the
than he user threads.
1
iscW n k and deal
cream “’ d t n , ’ k erneH
BvelthrBa
J
XX threads
Up a ,a9
the S <r Ad**° hcdule another thread of I
while destroying Icss tin*- ’ .. Tifias
The Tough one thread in a process
- Both op - XX. no process even *0“ does not block lhe M
b l 0 ,B
' ' “’ t Xmainssa , n = ’ ftcl rfiead, and can
address 5p* e exin 1
? BJoc S °f ’
lcveldirc
° nal Pr0CeSS ' —Itaneously schedule multiple tu
h fhToX
nehieve high <* F Kernel can sim u < onrnultipI e processes,
rlevel ds
s h O wsthe»« level «”«
U9er from the same p thrMds

, DlMdv-nWfl’’ “ s ternei interven tion.

_ contest swttc raUy requires more


Tread Nbnsfy

U3&ra««*
_ Kernel threads
create and manage
3
Kernel a
Syllabus T o p g -
process

2 ,1 2 MultithreadingModels
3 03
Explain van’ * *
Q.
, . fTLBand „ Hering the advantages of user level and fe,
" Xi treads, a hybrid threading model using
- Thread switching does register X of threads can be implemented.
doing CPU accounting. On y
need to be stored and reloaded again.
The Solaris operating system supports this hybr
. user level threads are platform independent, ttey can
model.
I n this implementation, all the thread manage™
. Scheduling can be as per need of application. functions are carried out by user level thread package
Thread management are at user level and do y user space. So operations on thread do not reqs
thread library. So kernels burden is taken by threading kernel intervention.
package. Kernels time is saved for other acuvittes

" Disadvantages of user level threads - the applications are multithreaded then it can I
advantage of multiple CPUs if they are available. 0
If one thread is blocked on I/O, entire process gets threads can continue to make progress even if
blocked. kernel blocks one thread in a system function,
The applications where after blocking of one thread
other requires to run in parallel, user level threads are
of no use.
Three types of
Z Kernel Level Threads multithreading models
In this, threads are implemented in operating system’s
kernel. The thread management is carried out by kernel. 1 , One to one
All these thread management activities are carried out
in kernel space. So thread context and process context 2. Many to one
switching becomes same.
3, Many to many

Fig. C2.7 : Types of multithreading models

-♦ 1. One to one model
Therefore concurrent execution of threads cannot be
- In this model relationship between user level thread and achieved.
kernel level thread is one to one. This type of relationship facilitates an effective

- It means that there is mapping of a single user-level context- switching environment, easily implementable
even on simple kernels with no thread support
thread to a single kernel-level thread.
3- Many to Many Model
Because of such type of relationship multiple threads
Many to many association exist between user level
executes in parallel leading to more concurrency.
thread and kernel level thread in this model. It means
However, since it is needed to create kernel thread for that more number of user-level threads are allied to
every new creation of user thread. equal or less number of kernel-level threads.
So application performance will be degraded. Windows The necessity of altering code in both kernel and user
series and Linux operating systems try to minimize this spaces leads to a level of complexity not present in the
problem by restricting the expansion of the thread one tone and many to one model.
count. OS72, Windows NT and windows 2000 use one
Like many-to-one model, this model offers an efficient
to one relationship model,
context-switching environment as it repel from system
jJ |J |J User level th roads calls.
The keen complexity offers the potential for priority
inversion and suboplima! scheduling with minimum
coordination between the user and kernel schedulers.

User level threads


Kernel level threads

Fig. 2.12.1

-F 2. Many to one model


- In this model relationship between user level thread and
KJ ( K j f K ) Kernel level threads
kernel level thread is many to one.
It means that there is mapping of many user- level
threads to a single kernel-level thread. Fig. 2.12.3 : Many to Many model

In this model management is done in user space. When Syllabus Topic : Thread Li brads*
one thread makes a system call for blocking, the entire
process gets blocked.
2.13 Thread Libraries
User level threads
q Write note on thread libraries.

Programmers can create and manage the threads using


API that are provided by thread library.
- Thread library can be implemented at user level where
Kernel level threads
it runs in user space without kernel support. Hence
calling a function in library is local function call m user
space.

At a time only one thread can access the Kernel thread - Kernel level library <s supported by operating system
at a time, so many other threads cannot execute in The data structures and code remains in kernel space.
parallel on multiple processors.

= s“ s=S=SS=
VfindoWS systems. Java threads
HC0Ce
System (M_U» ’ ° n ted using the W i 0 3 2 and
*0
of function in library » and Linux systems. <4
Hence any invocation , t a lo calhmctioncll.
i n use “ h set of f* atureS * aVallable jav
’ k
, bread libraries that
The
I [he main Ihre . API to manage and create the tfe
Following are
8ndjaVa
n r ) method in java program i s
today. Type* * I The
Thread Llt>r*[j2L— J Csingie threati i n J V M . <

least single thread is supp


HenCe b
’ In first approach of thread creati ’
S siscmatedwhichis ;
_ _I

Thread Class and then overrides its ru n ( )


3. Java
Icond approach a class is defined that
Fig, C2.8 : Thread libraries Runnable interface. ________

.» 2.13.1 POSIX Pthreads llabtis Topic : Threading Imim,

level library.

. Pthreads define API for the creation of threads an Q. What are different threading issues?
synchronization. As it is a specification, the des.gn
Following are the threading issues that needs J
of operating can choose their own way or
implementation of specification. considered with multithreaded programs.

- Solaris. Linux, Mac OS X, and Tru64 UNIX are the


systems that implements Pthreads specification. For all Threading Issues
the types of Windows operating systems,, a shareware
implementation- is available in public domain.
1. The forkO and exec() System Calls
2.13.2 Win32 Threads
2. Cancellation
It is kernel-level library available on windows system.
The approach for creation of threads using W i n 3 2 is 3. Signal Handling
similar to Pthreads.

- Just as Pthreads, in Win32 CreateThread( ) function is 4. Thread Pools


used to create the threads.
5. Thread-Specific Data
The required set of attributes for the thread is passed to
this function. Security information, stack size, and a
■*> 6. Scheduler Activations

suspended state are the examples of these attributes.


Fig. C2.9 : Threading issues i
-» 2.13.3 Java Threads
2.14.1 The fork() and exec() System Cails
JaVa CrcatiOn
’ S
threads can J?
be dmeclly carried out in Java programs Of 'he A separate duplicate process is created with

calls is different in multithreaded program.


usual1
IwleawWbynukmg p, *” “ *
existi
on the host system. ng cal| s0 ?o re . UNIX Vcrsions - w hen single thread of
( ) then it duplicate all the threads of
UN
LtLT.”" “. -r »-pi— «*
When thread invokes exec( ) system call, the whole - The delivered signal should be handled
process including all the threads will be replaced by Following are the examples of synchronous signals.
program that is given as parameter to exec( ) call. - Illegal memory access.
The use of particular version of forkf) is carried out as Division by zero.
per application need. If the executing program carries out above two actions
If it is necessary to call exec( ) just after forking then then generated signals are delivered to the same process
no need to duplicate all the threads. If exec( ) is not which performed above actions. Hence, these signals are
called just after forking then duplication of all the called as synchronous signals. When signals are due to
threads is required. events of external process then the executing process receive
asynchronous signals
-♦ 2.14.2 Cancellation
Following are examples of asynchronous signals.
Termination of thread before it completes the execution
Terminating process by applying some specific
is thread cancellation. If simultaneously many threads keystrokes.
are searching through database and one thread return Timer expires.
the result then cancellation of remaining threads is Both types of signals are handled either by default
done. signal handler or user defined handler. If program is
The thread chosen for cancellation is called as target multithreaded program then following options are available
thread. Following two scenarios arc there to cancel the for delivery of these signals to the process..
target thread. Signal can be delivered to the thread to which it is
o Asynchronous cancellation. Instant termination applicable.
of the target thread is done by one thread Signal can be delivered to every thread in the process.
o Deferred cancellation. The periodic check is - Signal can be delivered to specific threads in the
carried out by target thread to decide whether it process.
to
should terminate, permitting it an opportunity
- Allocate any one particular thread to take delivery of
all signals for the process
in
The asynchronous cancellation is troublesome The way signals are delivered depends on types of
to
situation if cancellation of thread is carried out
signals generated.
which system resources have been allocated.
2.14.4 Thread Pools
Asynchronous cancellation is also difficult in case
thread is in middle stage of updating data that it i There should be bound on number of threads active in
system. If unlimited number of threads exists in system
will reclaim the resources from cancelledI thread some
then many resources will be consumed by them.
resources will be there with cancelled thread and
cannot be reclaimed. Thread pool is solution on this issue. When process

In deferred cancellation, thread to be cancelled checks starts up then some number of threads are created.

the flag to know whether it should be cancelled or not. These threads are then placed in pool and they waits or
work.
If server comprising these threads receives request it

In UNIX, process is
Once thread completes the requested work, it returns to
event is occurred. The sign f()r event
pool and waits for other work.

til] thread will be available.

. undelivered to the process.


- This genei

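A minimal sketch of the thread-pool idea using Pthreads. A real pool would block on a condition variable waiting for new requests; this simplified version only drains a fixed batch of tasks, but it shows the key point made in this discussion: a bounded number of threads, created once at start-up, service many requests. The task contents are hypothetical.

#include <stdio.h>
#include <pthread.h>

#define POOL_SIZE 4
#define NUM_TASKS 10

static int next_task = 0;                 /* index of the next pending task */
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

static void do_task(int id)               /* the "request" being serviced   */
{
    printf("task %d handled by thread %lu\n", id,
           (unsigned long)pthread_self());
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        int id = (next_task < NUM_TASKS) ? next_task++ : -1;
        pthread_mutex_unlock(&qlock);
        if (id < 0)                        /* no work left: the worker ends */
            break;
        do_task(id);                       /* otherwise service the request */
    }
    return NULL;
}

int main(void)
{
    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)            /* threads created once, */
        pthread_create(&pool[i], NULL, worker, NULL);   /* at start-up      */
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);
    return 0;
}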
process

Ruling =
get 11 opt
e<Julin Act
scn 1
dis
3
rente thread* set
MduU rg .
co
,re made in following situati
_ T!
Decision is n -
- A thread pool p vrdcs the * * . SCh
number O'
that caist at ony ofic P
’' )rtrg c A fter cr
.ystems that caano< a
' ch005 ingei ParC
threads. eSS K
When any P
fic
-» 2.14.5 ThrMd-Speel rf [hat
:keu of
_ Always threads of a Pnl d la offers beitefi'*' °
When process
p cess. Surely, -his shanng new process from ready q y
multithreaded programming. other reason. ■
selected.
Q
- On the other hand, i. , da(a is a
require IB own copy « f « rta

_,M Type selwdullng


the -------- - Tbetween preemptive and
tils import to consider common. » °- preemptive scheduling?
kernel and the thread library, whtcb may be
also
the nw-to-nw “d ** ’
Types of scheduling
model of threads. algorithms
This coordination permits the number of kernel t
CO be dynamically adjusted to guarantee the best 1 . Non-Preemptive
performance.

2. Preemptive
or user and kernel level model of threads place an
intermediate data structure between the user and kernel Fig. C2.10 : Types of scheduling algorithms
threads.
■4 1. Non-Preemptive
- This data structure is called as lightweight process, or
LWP. For user level library L W P is virtual processor to Non-preemptive algorithms are designed so that onct
schedule the application to run. process is allocated to CPU, it does not free CPU un
LWP is attached with kerne) thread and OS schedules
it completes its execution.
LWPs as kernel threads to run on physical processor. 2. Preemptive
With blocking of kernel thread, LWP also blocks and
Preemptive algorithms allow taking away CPU fro®
tace user lhread assoeiated w . h ih .s Lwp
process during execution. If highest priority process!
blocks.
arnves in the system, CPU from currently execute
Scheduler activation is the method f™
low priority process is allocated to it. It ensures
always highest priority process will be executing.

Dispa,cher
' and dispatch latency
schedule user threads on[o ' 6 appllcatlon
can hod term scheduler allocates CPU to ready pw6 '
an
Processor. available virtual

P"UC». I. ,s the trattsltlodolttef


suae , o r u n n i n g s u t e .

Operating System ( M U - Sern 4 - IT)
2-15

- Actual allocation of process to CPU js done by

dispatcher. Short term scheduler is fas t e r than long |em Syllabus Topic; Scheduling Algorithms
scheduler and should be invoked more frequently 2-1 S 2
- Scheduling Algorithms
compare to long term scheduler.
Scheduling algorithm. I
one process and starts running of other process is called
as dispatch latency.
(B) Shortest Job First (SJF)
Syllabus Topic : Scheduling Criteria
(C) Priority Scheduling

2.16 Scheduling (D) Round Robin Scheduling

2.16.1 Scheduling Criteria (E) Multilevel Queue Scheduling

Q. What are the criteria for evaluation of scheduling


algorithm performance?
Fig. C2J 1 : Scheduling algorithms
In multiprogramming, many programs remain in
memory at the same time.
Processes carry out I/O operations while performing Q. Explain FIFO scheduling algorithm.

This is a Non- preemptive scheduling algorithm, FIFO


more time to accomplish with compare to CPU
strategy allocates the CPU to processes in the order of
their arrival.
CPU to another ready process whenever a any process
— This algorithm treats ready queue as FIFO. A process
invokes an I/O operation.
does not give up,
Short term scheduler allocates inc precedes iv v i
CPU until it either terminates or performs I/O, If the
per some policy called as scheduling algorithms. longer job assigned to CPU then many shorter jobs has
The main aim of the scheduling is to improve to wait. As long processes can hold the CPU, this
algorithm gives fewer throughputs.

Criteria for the for performance evaluation of the


scheduling strategy is :

o CPU Utilization : It is amount of time CPU useful in such situations.


remains busy. FIFO algorithm is inappropriate for interactive systems;
Throughput ; Number o f jobs processed per unit large fluctuations in average turnaround time are
possible.
time.
If we assume that arrival time is zero. Consider the
o Turnaround time : Time elapsed between
submission of job and completion of its execut.on.
Process Burst time
o Waiting time : Processes waits in ready queue to
C „r limes scent in ready queue is Pl 24

P2 03 ___

o Response Time : Time from submission till the


P3 ._JC— J
t P2, and
Assume that processes
o Fairness : Every process should get fair share

the CPU time.

lhe following examptf,
ga n SJF) algonth
Consi der
_
' fiS*
, iob first
--- -
< I
m.
~" -
Burst tiir
pree P!
P3 ____10 Turner
32 0 Tum*u
P£ _ _ _ _5
Pl 2? Averts
P2 _____
24 s 3 2. W
0 (3 - 0 )
P3 15 W
Turnaround time for P2 3
<8 -O) = 8
Turnaround lime
P4 V
88 32
(32-0) V
P4
Turnaround tin* for AV'
Turnaround time Pl
23
Average Jum >5
s 0
Wailing tin* ° r = (10 -0)=10
= 3 0 for P l
Waiting tin* for ** Turnaround
=r 8 for P2
Por P Turnaround time (23 - 2 ) = 2 1
Waiting tin* * 3 + 8) / 3 = 3 ‘66 forP3
Turnaround tune
Average Waiting tin* Turnaround time for P4 (38 - 3 ) = 35
is Signify 1 re t,me
'
waiting
T* BSU " ’Ttiw "and average „ Tunuu
Average Turnaround time
average ' un ’ aroU " . arrival varies = 0
vanes -order of job
Waiting time
= (10-l) = 9
r
. 2 16.2(B) Shortest J o b F W j ------------------ Waiting time f° P2
= (15-2)=13
Waiting time for P3
Q. = ( 2 3 - 3 ) = 20
= (0 + 9 + 1 3 + 2 0 ) 7 4 * 1
Average waiting time

]n P reem e
exe cution time with compare
- —
(SJ F) improves the averag , currenriy executing process, then the CPU will be g
- Ready queue is maintai it in the ready
i erfSc When a job comes m, insert newly arrived process.
Ton ite length. SJF minimizes the average
The preemptive Shortest Job First (SJF) is called

Consider the above example for preemptive S


processes.
While it minimizes average wait time, it may punish
P2 P3 Pl P4
processes with high execution time. Pl

If shorter execution time processes are in ready list, 0 1 14 23


then processes with large service times tend to be left in
P l arrives at time 0. so
the ready list while small processes receive service.
in the queue. At time 1, process P2 arrives hai
It may happen in extreme case that always short
execution time processes will be served and large
execution time 5 (less than remaining time 9 of pl)-
execution time processes will wait indefinitely. P l is preempted and P2 is scheduled. At time 2,
This starvation of longer execution time processes is P3 arrives having execution time 8 which is
the limitation of this algorithm. than remaining time of P2.
Consider the example discussed for FCFS algorithm. In So P2 will complete the execution. After P2,

If the order of arrival is P2, P3, P l , the order of


execution time will be 3, 5, and 24. There is significant 9. So it is scheduled next. Finally P4 will cornp
reduction in average turnaround time and average execution.
waiting time.
Turnaround time for P l = ( 2 3 - 0 ) = 23

Operating System (MU - Sem 4 - FT) 2-17
Process Manage ment
Turnaround time for P3 = (14 — 2) = 1 2
Turnaround time for P4 (38 - 3) = 35 P4 P3 P5 Pl P2
Average Turnaround time (23 + 5 + 12 + 35) /4 = 18.75 0 4 9
Waiting time for P l ( 1 4 - 1 ) = 13 12 24 34

Waiting time for P2 (i-l) = O ( 2 4 - 0 ) = 24


Turnaround time for P2 >
(6-2) = 4 (34-0) = 34
Waiting time for P3 Turnaround time for P3
(9 — 0) = 9
Waiting time for P4 s (23 - 3) = 20 Turnaround time for P4
(4-0) = 4
Average Waiting time = (13 + 0 + 4 + 20) / 4 = 18.75 Turnaround time for P5
( 1 2 - 0 ) = 12
2.16.2(C) Priority Scheduling Average Turnaround time

I q. Describe priority scheduling algorithm.


Waiting time for P l
16.6
12
In priority scheduling, each process has a priority Waiting time for P2 24
which is a integer value assigned to it.
Waiting time for P3 4
- Smallest integer is considered as highest priority and
Waiting time for P4 0
largest integer is considered as lowest priority. Always
highest priority process gets the CPU. Waiting time for P5 9
In some system, largest number is treated as a highest Average Waiting time
priority and it depends on implementation.
Jf priorities are internally defined then some
measurable quantity such as time limits, memory Q. With the help of example, explain Round Robin
requirements, the number of open files, and the ratio of Scheduling algorithm.

average I/O burst to average CPU burst are used to Round Robin Scheduling is designed especially for
compute the priorities. time-shari ng systems where many processes get CPU
External priorities are assigned on the basis of factors on time sharing basis.
such as importance of the process, the type and amount In this algorithm, a small unit of time called time
of funds being paid for computer use, the department quantum is defined.
sponsoring the work, and other, often political, factors. CPU is allocated to each process for this time quantum
All these factors are not related to the operating system. period of time. To implement this scheduling, ready
Preemptive and non preemptive SJF is a priority queue treated as FIFO queue.
scheduling where priority is the shortest execution time The new processes go to the tail of queue and each time
CPU chooses the process from head of queue.
In this algorithm, low priority processes may never When time quantum expires, context switch occurs and
execute. This is called starvation CPU switches to other process which is scheduled next.
I
Solution to this starvation problem is aging. In aging as The time quantum is fixed and then processes are
time progresses, increase the priority of the process so scheduled such that no process get CPU time more than
that lowest priority processes gets converted to highest one time quantum.
priority gradually. If process is executing and request for I/O the process
goes in waiting (blocked) state.
The Gantt chart shows the result. After the completion of I/O, process again gets added at
the tail of ready queue. The time quantum should not
| Process Burst time Priority
be very small or very large.
1 _ _P l 12 4
If the time quantum is very large, the algorithm will
P2 10 5 behave just like FCFS.

P3 5 2 A smaller time quantum increases context switches


leading to performance degradation (less throughput) as
P4 4 1
context switches elapses more time.
P5 3 3

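The behaviour of Round Robin can be checked with a small simulation. The sketch below assumes all processes arrive at time 0, which keeps the ready-queue order fixed so a simple cyclic scan over the processes is equivalent to Round Robin; the burst times and quantum are hypothetical, not those of the worked example that follows.

#include <stdio.h>

#define N 4
#define QUANTUM 2

int main(void)
{
    int burst[N] = {5, 7, 6, 3};          /* hypothetical CPU bursts        */
    int remaining[N], completion[N];
    int time = 0, done = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    /* Cycle over the processes in FIFO order, giving each at most one
       time quantum per turn, until every process finishes.              */
    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                completion[i] = time;
                done++;
            }
        }
    }

    double tat = 0, wt = 0;
    for (int i = 0; i < N; i++) {
        int turnaround = completion[i];   /* arrival time is taken as 0     */
        int waiting    = turnaround - burst[i];
        tat += turnaround;
        wt  += waiting;
        printf("P%d: turnaround=%d waiting=%d\n", i + 1, turnaround, waiting);
    }
    printf("average turnaround=%.2f average waiting=%.2f\n", tat / N, wt / N);
    return 0;
}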
sf hc five 9 ueuCS and its

Let US e° oskler “scheduling algorithm. P "M


t (MU -
operating -------- ,imc ultileV q
, lf we • "’ "written below in the o
queues are Number 1 indi Cates
\ : „ d to them. V + 2 A 6-2
ass
quantum of 4 mm •
pri*
the resull. ____ -------r E
gural lime 1.
proce** a
I I eracdve P r— S queue
PI tn i
7 pen
P2
Student prases queue que
15
P3 j car
4 bui
P4 I
Ever es unless and until
P3 * ononty <i - Tb
P2 P3 PI
P3 P4 PI Pr
PJ P2 34 3? XZ become empty, processes in 4
20 23 27 30 b
8 J2 1* queue cannot run. . I
0 4 t
( 3 0 - 0 ) = 30
Turnaround time for P 1 L example, processes tn tnteracuve editi 1
( 2 3 - 0 ) = 23 could not execute unless and until '.1
Turnaround lime for P2 SyStem
Turnaround rime for P3 (37 - 0) = 37 Zesses in the queueS
p3
( 1 6 - 0 ) = 16 interactive processes finish the execution
=
Tur7iarou „d time for P4 + 37 + ]6) / 4 = 26.5
( queues becomes empty. 1
Average Turnaround time = ° 20) = l9

Waiting time for P l - 0 +


* If process in lower priority queue is executed
Waiting time for P2 = 4 + ( 2 0 - 8 ) = 16 the same time other process belonging to 1
Waiting time for P3 = 8 + (23- 12)*(3O-27)-22
Waiting time for P4 = 1 2 , in lower priority queue should be preempted. If s ,
process entered the ready queue while a student p
Average Waiting time = (19 + 4 + 22 + 12)/4 -

q. Eqilain multilevel queue | In this algorithm, a fixed amount of CPU tlme K


- Sometimes it is necessary to categorize the processes
into different groups. For example, separation is made from this queue are scheduled. |
between interactive processes and batch processes. For example, the interactive process queue caul
- The response-time need of these two processes can be given 70 percent o f the CPU time for round j
dissimilar. So these processes can have different
scheduling among its processes, whereas the h
scheduling requirement. Also, interactive processes
queue get 30 percent of the CPU to give to its procs
may have higher priority over batch processes.
on an FCFS basis.
In multilevel queue scheduling algorithm, there are
multiple ready queues. The allocation of process to the Highest priority
particular queue depends on property of that process
I

System processes
such as memory size, priority of the process, or its type.
Scheduling algorithm for different queues can be
Ir IF
I I

Interactive processes r
different. The round robin scheduling algorithm can be
used for interactive processes queue and first come first
serve scheduling algorithm can be used for batch Interactive editing processes
processes queue.
Ii t fF

Batch process
I

IS used for scheduling Student processes


between the different q Ueues .

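The fixed-priority selection between the queues shown above can be sketched in a few lines of C; the queue levels and contents here are hypothetical illustrations only.

#include <stdio.h>

#define NUM_QUEUES 5   /* 0: system ... 4: student; highest priority first  */
#define MAXP 8

/* A fixed-size FIFO of process ids per priority level. */
static int queue[NUM_QUEUES][MAXP];
static int count[NUM_QUEUES];

static void add(int level, int pid) { queue[level][count[level]++] = pid; }

/* Always serve the highest-priority non-empty queue; a lower queue can run
   only when every queue above it is empty (no movement between queues).    */
static int pick_next(void)
{
    for (int level = 0; level < NUM_QUEUES; level++) {
        if (count[level] > 0) {
            int pid = queue[level][0];
            for (int i = 1; i < count[level]; i++)      /* shift the FIFO   */
                queue[level][i - 1] = queue[level][i];
            count[level]--;
            return pid;
        }
    }
    return -1;                                          /* nothing ready    */
}

int main(void)
{
    add(3, 7);      /* a batch process        */
    add(1, 4);      /* an interactive process */
    printf("next pid = %d\n", pick_next());   /* 4: the interactive process */
    return 0;
}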
Operating System (MU - Sam 4 - 1
2-19
ueue Process Management
Scheduling
pr<XlB
queue- 2, Proce ’ * Pled is placed into
Q- Explain multilevel feedback queue schedUii?? -
basis but are run nni on an FCFS
algorithm. *
y qUCUeS 0 and
the pTOcess * I ate empty .
In multilevel queue scheduling processes are not less then this burs
\ of 8
or
permitted to transfer from one queue to the other Also priority to it tk U ing
8°™ gives highest
SS WU1 ate
queue assigned to the processes are permanent and C P U . finish i™T “ * the
bUKU nd
cannot be changed, leading to the low scheduling cost, burst * 8° °ff its next UO
but it is not flexible.
CSSe
mm \ lha are
milhseconds * reqoire
also * bu. below 24
The multilevel feedback-queue scheduling algorithm
8
processes can move from one queue to other. The CPU- lower priority than shorter pr e Y ’’ with

bound processes use more CPU time and hence it will


be transferred to a lower- priority queue. over from queues
0 and 1.
I/O-bound processes use less CPU time. Keeping CPU
bound processes in low priority queue automatically
Quantum = 8 * Queue - 0
leads to keeping the I/O bound and interactive
processes in the higher-priority queues.
There is a chance of starvation because of processes Quantum = 1 6 * Queue • 1

waiting for longer period of time in a lower-priority


queue. It is avoided by aging; by transferring these
waiting processes in a higher-priority queue. FCFS “* Queue ■ 2

Consider the example of a multilevel feedback-queue


Fig. 2.16.2 : Multilevel feedback queues
scheduler with four queues, queue-0, queue- 1, queue-2
and,queue-3. Initially the scheduler starts executing all 2.1 7 Examples on Uniprocessor
processes in queue-0. Scheduling Algorithms
Once all the processes in queue-0 finish the execution, Example 2.17.1 M U - Dec. 2014, 1 0 Marks
and queue-0 becomes empty, scheduler will consider Consider the following set of processes, with the length of
the processes in queue-1 for execution. After queue-1 the CPU burst given in milliseconds:
becomes empty then scheduler will consider the Process Burst Time Priority
PI 10 3

In the same way, processes in queue-3 will only be P2 3


A process arriving for queue- 1 will preempt a process
P3

P4
2
1
2—
4 I

in queue-2. In the same way process that arrives for


queue-0 will preempt a process in queue- 1.
P5 _ _ 5
I 2
I

The processes are assumed to have arrived in the order Pt,


Similarly if process arrives for queue-2 preempts al1 at tlme 0 Draw ,0Ur Gantt Cha
P2 P 3 P4, P5 ' 1S **
process in queue-3.A process from the ready queue is illustrate the execution of these processes using toe
placed in queue-0. following scheduling algorithms: FCFS. WF
Multilevel feedback queue scheduling with 3 queues priority (a smaller priority number implies a hrgher pnort
are shown in figure 3.9. If the process does not finish and R R (quantum = 1) and also calculate turnaround W

the execution within allocated time quantum of in average waiting time.

queue-0, it is moved to the tail of queue- 1• Solution :


(1) FCFS
The process at the head of queue- 1 is given a quan p
P2 P3 t P4 | 5
I *P1

Scanned by CamScanner
Process

2-20 Sc
priority
(*) Pl L_ 4
P5
P2 16 18
is 0. proc*
t,rne
TAT WMiti 0 Finish TAT
Pl
Finish 0 85 priory time
priority proce
process 10
10 16 F
to
Pl 11 _3
H 1 P
P£ 1
P2
13
13
P2
1 18 r Ave
13 18_ _
P3 14 3
14 19 19
14 P3
P4 19 4
19 P4 6 Avei
9.6 6
P5 134 2
P5 13.4 («>Pr
Average
F<
emp,i
SiF(N
(2) t_ nprt ’' <’ )
is0
,| l i m e f O r a llp<«e '
Arrival
P5 _ _ _£
P4 P3 _ _ _[ 19
P2 9
4
1
0 Waiting
Finish tat
Process Priority time Finish TAT
time Process
Priority
time _ U(r,p
19 9
19
Pl 19 19
0 Pl
P2 2
2 P2
P3 3 7 7
P3
2 4 4
P4 4
P4
9 4
2 9 14 14 (iu
P5 P5 2
7 3.2
Average Average 9.2

(3) SJF (Preemptive)


Example 2.17.2
Since all processes arrive at same time 0, the solution is 0
Calculate average waiting time and average turnaroundt
same as nonpreemptive SJF.
for the following situation. Assume quantum of 2 ms.

P2 P4 P3 . P5 Pl Process Burst T i m e Priority Arrival time i ]


0 1 2 4 9 19 1
P1 5 1 Ii
P2 7 3 5
J’
Process Priority Finish TAT Waiting
P3 6 2 0 ___I
time time Preemptive SJF
L 3 19 Preemptive priority
19 9
_ 1 (Hi) RR
1 1 0
Solution :
Lzlj 3 4 4 2 (0 Preemptive SJF
P4 4 2 2 1 Following is the Gantt chart
P5 2 9 9 4
Average
7 3.2 J P3 | P2
0 1 6 10
11

Scanned by CamScanner
Operatii ig System 1MU - Sem 4 - IT )
2-21
Process Management
1 Process Priority TAT Waiting time
Process Arrival time
I (6-l)=5 Burst time
Pl (1'0 =0 P1
3 (18 - 5 ) = 1 3
1P2 (1 1 - 5 ) = 6 P2
J " --
P3 2 (11 - 0 ) = 11 (6- ]) = 5 P3

Average j 9.6 3.66 P4

Average TAT = (5 + 1 3 + 1 1)73 = 9.6 Solution :


Average waiting time = (0 + 6 + 5 ) 7 3 = 3.66 (0 FCFS

Following is tbc Gantt chart


(ii) Preemptive priority

0
26
6 18

Priority TAT Waiting time Process TAT


Process Watting time 1
Pl (8’0) = 8 0
p 1 (6-1) = 5 (l’l) =0

( 1 8 - 5 ) = 13 P2 (i2-l)=U (8-l) = 7
F 3 (11 - 5 ) = 6
I-------------- P3 (21 - 2 ) = 19
2 ( 1 1 - 0 ) = 11 ( 6 - 1) = 5 ( 1 2 - 2 ) = 10 1
P3
P4 (26 - 3) = 23 ( 2 1 - 3 ) = 18 j
Average 9.66 3.66

Average 15.25 | 8.75 ________j
Average TAT = (5 + J 3 + 1 1)/3 = 9.6
Average waiting time = (0 + 6 + 5)/3 = 3.66 Average TAT = ( 8 + 1 1 + 19 + 23)/4 = 15,75 ms.
Average waiting time = (0 + 7 + 10 + 18)74 = 8.75 ms.
(iii) RR
(ii) RR ( slice = 4 ms)
Following is the Gantt chart
Following is the Gantt chart
P3 P1 | P3 ( P1 P2 P3 P1 P2 P2 P2
P1 P2 P3 P4 P1 P3 P4 | P3
0 2 4 6 fl 10 12 13 15 17 18
0 4 8 12 16 20 24 25 26

Process Priority TAT Waiting time

P1 1 ( 1 3 - 1 ) = 12 ( 2 - 1) + ( 6 - 4 ) + ( 1 2 - 8 ) = 7
I Process TAT Waiting time
P2 3 (18-5) = 13 ( 8 - 5 ) + ( 1 3 - 10) = 6
P1 20 (16 - 4 ) = 12
P3 2 ( 1 2 - 0 ) = 12 (4-2) + (10-6) = 6
P2 (8-1) =7 ( 4 - 1 ) = 3 _________

| Average ] 12.33 6.33 ___________ (26-2) = 24 (8 - 2) + (20 - 12) + (25 - 24) = 15


P3

Average TAT = ( 1 2 + 13 + 12)/3 = 12.33 P4 (25-3) = 22 (12 - 3 ) + (24 -16) = 17

Average waiting time = (7 + 6 + 6)/3 = 6.33 11.75


Average 18.25

Example 2.17,3 Average TAT = (20 + 7 + 24 + 22)74 — 18.25ms


Consider the four processes P1, P2 t P3 and P4 with length
Average waiting time = (12 + 3 + 15 + 17)74 ~ 1 1,75ms
of CPU burst time. Find out average waiting time and
average turnaround time for the following algorithm. (iii) Preemptive SJF

(!) FCFS Following is the Gantt chart


00 RR (slice = 4 ms)
W SJF

Scanned by CamScanner
p3
L— Q 9
13
system 0

TAT Waith
p4
i— — — i— pfT (8-0) = 8 0
f P1 I - — L— — 10 0
L -------------J 5 Pl
1
0 2
WaitlngtimS (l OAl (9 0.4)
tat ( 9 - 1) = 8
Process (10-1)*9
0 =0
Pl (1- 9.53
Average
P2 (5-
07 2) 126
(26 Averag *
P3 <5 Average waiting tin*
(10-3)jL2
P4
13 13)
Average = 13 nis. Exampl® 2.17.5
Average i a i : 6.5 ms. consider the foliowing set ot processes having
millisecond
Average waiting time burst time 22ri—
Process CPU Burst time Arrival In*
Example 2.17.4 execution at
10 0
suppose mat the following process amve P1
cate 5
P2
Burst time
| Process Arrival time 2 2
P3
8
Pt 0,0 ____
Calculate average waiting time using tolkswin
0.4 4
J P2 scheduling algorithms.
1.0 1 FCFS
[ P3 (D
(2) SJF
Calculate average waiting time anu -------
using SRTF and SJF . (3)
2 t P3 = 3as
Solution :
(4) RR (slice
(i) Preemptive SJF is SRTF (Shortest Remaining Time
Next) Solution :

Following is the Gantt chart (1) FCFS


Following is the Gantt chart

P1 P3 P2
l o 10 15 17
TAT Waiting time
Pi
Process Waiting time
Pl __________0 _________
P3
P2 (10-l) = 9 \
6.33
P3
Average TAT = (13 + 5 + l)/ 3 - fi „ ( 1 5 - 2 ) = 13
Average 7.33 _
2 ““'
W Non - preemptive SJF

(2) Preemptive SJF


Following is the Gantt chart
Follnu/Jtvrt

Scanned by CamScanner
2-23
Process Management

(1) FCFS
P2 ____ PI
2 8 (2) SJF
0
(3) RR (slice - g)
process Waiting time Solution :
Pl (8 - I ) = 7 (1) FCFS : Following is the Gantt chart
P2 (4-2-i)=l
P1
| P2 | P3 | P4 | P5 |

w
n
o
I
P3
0
3 8 * 10 15 20
Average 2.67
Process TAT Waiting time 1
Average waiting time = (7 + 1 + 0)/3 = 2.67 ms. Pl 0
(3-3) = 0 1
Prinritv scheduling having priority range from 1 to 3.
I
( P2 (8-l) = 7 (3-l) = 2 1
respectively for process P l = 3, P2 = 2, P3 = 3 as
P3

oo

03
(10-2) = 8

ii
I
given.
P4 ( 1 5 - 3 ) = 12 (10 - 3 ) = 7
Folio w i ng i s the Gantt chart
P5 ( 2 0 - 4 ) = 16 1 ( 1 5 - 4 ) = 11
p3 P3
[ P1 | I |
Average 9.2 1 5.2
n 10 15 17
Average turnaround time = (0 + 7 + 8 + 12 + 16)/5
Process Waiting time
= 9.2 ms.
________0 ___________
Pl
Average waiting time = (0 + 2 + 6 + 6 + l l ) / 5
P2 ( 1 0 - 1) = 9
= 5.2 ms.
P3 _ ( 1 5 - 2 ) = 13
________ 7.33 ________ (2) SJF : Following is the Gantt chart
Average

Average waiting time = (0 + 9 + i 3)/3 P1 | P2 ' P3 I M I -i


= 7.33 ms. (Same as FCFS) 3 5 10 15 20

(4) RR (slice = 2)
Process TAT Waiting time . ]
Following is the Gantt chart
P2 P1 P2 P1 P1 (3 - 3) = 0 0
1 P l ’ P2 P3 P1 Pl
o ' 2 4 6 8 10 12 Id lb 7
P2 (10-l)= 9 (5-l) =4

Process Walting time __________ (5-2) = 3 (3-2) = 1 1


P3
PI (6 - 2) + ( 10 - 8) + ( 1 3 - 1 2 ) = 7 _ (15 - 3 ) = 12 (10-3) =7 |
P4
10) = 7
P2 (2 - 0) + (8 - 4 ) + (12 - ( 2 0 - 4 ) = 16 (15 - 4 ) = 11 I
P5
P3 __________ (4 - 2) = 2 ----------------- 8.6 | 4.6 _______]
Average
Average 5.33 ____________
Average turnaround time - (0 + 9 + 3 + 12 + 16)75
Average waiting time = (7 + 7 + 2)/3 = 8.6 ms.

Example 2.17.6 Average waiting time


4.6 ms.
Arrival time
Process CPU Burst time
(3) RR (slice = 2)
0
PI 3
Following is the Gantt chart
P2 ' 5
2 ______
P3 2(
0 2 4 6
P4 5

P5 5 each
time for
Calculate average waiting and
algorithm.

Scanned by CamScanner
rithms
5** doling al9° *° «**!» J

tting Sjgtg
proc
------(io5iL— — eemptive SJF
eandnor
p2,p«v
_P2,

jw*!*.. -—
LS-
Ls
’ J20-4KJ1
—- i_ ____. &
Average 2 ___

Average 3___
12.8 ms
12 4
wai il time
(8+ ‘
Average ' ’8
8.8 ms.
Solution •

(I) Hf°
Mtowinfl jobs to execute with one
Assume '**

Arrival time
CPU Burst time
Job
0
75 Hnlah
TAT
0 Process Priority
10 time
50
1
10 8 _ j8-0) = B
25
2 PI
80 9 (9-1) = B
20
3 P2 ____
85 12 ( 1 2 - 2 ) = 10
45 __________
P3 ____
id robin with quantum of 1 5. 14 ( 1 4 - 3 ) = 11
Suppose system uses rount P4
(i) Draw Gantt chart 20 ( 2 0 - 4 ) = 16
P5 ____
(it) And average wait and tumarou nd time
(53/5) = 106
Average
Soln. : The Gantt chart is,
P4 PC
| - | p t |p2|H>pi|pz|P3|P4|ro|P<|ral L ___ (ii) SJF( Preemtive)
o” 15 30 45 W re M 100 115 1 30 145 ISO 165 180 185 200 215

P1 P2 P3 P4 P5 Pi

Proem TAT Wafting time

PO ( 2 1 5 - 0 ) = 215 (45 - 1 5 ) + (115 - 60) + (165 - 30) + (200 - 1 8 0 )


Process Priority Finish time TAT Waiting
= 140

PI ( 1 8 5 - 10) - 1 7 5 (15 - 10) + (60 - 30) + (130 - 75) + (180 - 145) PI 3 20 ( 2 0 - 0 ) = 20 (13-1)J
= 125
P2 1 2 (2-!)=l (1-lWJ
P2 (85- 10) = 75 (30 - 1 0 ) + (75 - 45) = 5 0
P3
bJ

2
II

5 (2-a i
1

P3 ( I 5 0 - S 0 ) = 70 (85 -80) +(145 - 1 0 0 ) = 50

P4 (200 - 85) = 115 (100 - 8 5 ) + ( 1 5 0 - 115) + (1B5 - 165) = 70 P4 3 7 (7-3) = 4 (5-g |


1 Average | 130 __________ 87 ________
P5 4 13 (13 - 4 > = 9 (7- d

Average (17/5) |
= 130 ms. (37/5) = 7.4
Average waiting time = (14Q +
- ■ JV t JU + /{jys
=
87 ms. r~m P4 P3
0 8

Scanned by CamScanner
System (MU - Sam 4 - I'
2-25 ____________Process Managsm t
Finish tin» r ' '
priority TAT Watting Unw
Ptocwm Proc+ca TAT Walting Uma 1
(& — 0) - B 0
P1 Pl (29-0) 29 (B ■ 2) + (11 - B) +{16 - 13) +{16 - 17) . 1 0 I
(9-i) -a (8-1}*7 PZ (iB-o)-ia (2 - 0 ) + ( 6 - 4 ) +{13 - 1 0 ) +(17 -15) = 1 1 I
12 ( 1 4 - 2 ) = 12 (11 - 2 ) = 9 P3 (ii - q . i t I (4-0) +(10-6) > B
14 (11-3) = 8 (9 - 3 ) = 6
P4 ] W3 ■ 18.13 1 29/3 =9.66
20 ( 2 0 - 4 ) = 16 (14 - 4 ) = 10
PS Example 2.17.10 MU -Dre. 2016. 1 0 Marks
(52/5) = 10.4 (32/5) = 6.4
Average Assume the following processes arrive tor execution at the
tune i n d i c a t e d and the length of epu burst time given in
Priority (Pr» P* lw> > .. msec.
(iv) P3 I P1
P4
PI P2 Job Burst time Priority Arrival time l
5 12 14 20
T
0 Pi __ 8 3 3
Finish time TAT Waiting time
Proc*®*
rtorfty Pa 1 1 I A
20 ( 1 2 - 0 ) = 12 (5-1)= 4 Ps 3 i 2 I ? |
3
PI
(2-1) = 1
P4 2 3 I 3
1 2 (1-1)= 0
P2 P5 6 4
|

r * I
■fS
!£>

co
II
1

5
m

it
l

0
P3 2 For the above process parameters, find average waiting
7 ( 1 4 - 3 ) = 11 (12-3) = 9 times and average turnaround times for the following
P4 3
scheduling algorithms. First Come First Serve, Shortest Job
4 13 (20 - 4 ) = 16 (14 - 4 ) = 10 First, non preemptive priority and Round Robin (assume
P5
quantum = 2 Units)
(43/5) = 0.6 (23/5) = 4.6
Average J _J Solution :

Example 2 J 7.9
FCFS (First Solution)

J _ _ _r S
processes with CPU burst time in ms. Assume that all
1 2 5 13 15 21
processes arrive at time 0.
I Job Priority Arrival Finish TAT Waiting time I
Time time

ii) Round Robin (Quantum = 2ms) PI 3 3 13 (13=3) = 10 (53) = 2 I

Solution : P2 1 1 1 2 (2-1) - 1 ' (1-1) = 0 I


P3 2 5 i (5-2) = 3 (2-2) = 0 1
(i) FCFS : Following is Gannt chart 2

P4 3 3 15 I (15-3) = 12 (13=3) = 0 |
P1 P3 P2
P5 4 . 4 21 I (21-4) = 17 | (15-4) = 11 I
0 7 10 29
Average | (43/5) = 8.6 | (23/5) = 4.6 |

I Process TAT Waiting time


FCFS (Second Solution) _____ _ _
w
PI (29 - 0) = 29 (10-0)3=10 P2 P3 M r
_1 M I
21
1 P2 (7-0)»7 ___0 1 2 5 7 15_

Arrival Finish 1 TAT I Watting time


(7-0) = 7 Job Priority
j P3 ( 1 0 - 0 ) = 10
Time time 1

1 Average 46/3 = 15.33 17/3 = 5.66 (15-3)=12 I


PI 3 3 15 P-3) -4

(ii) Round Robin (RR) with quantum =2 P2 1 1 2 J (2-1)=1 I (1-1) =0

P3 2 2 5 I (5-2)=3 I (2-2)=0
Following is Gannt chart 3 3 7 (7-3)=4 (5-3) =2
P4
P1 4 (21-4)=17 | (15-4) =1
P2 P3 PI P2 P3 P1I | P2 | P1 | P2 | I P5 4
29 (37/5) = 7,4 ' (17/5) =3
13 15 17 18 Average I
0 2 4 6 8 10

Scanned by CamScanner
Process

24S

(MU - a set of Passes, with |en


WS
eoorKfe - _
No" Arrival Time
Burst Tim*
(This .elution Is sain*
0
P5
21 1
P2 ___[ 13
P2
2
5 2
3
P3
3
TAT 2 3
Anri*
flnW> P4
Priori*
TW* (13-3) ■ 10
P5
(2f4) = «
21
PI J1-D Draw
(2-1)

(S2)-3
(2-2) a 0
_ and RR (a u a n t u r n 2 ) t u

is the turnaround time of each proceas


(7-31-4 («)
(74) - 3
algorithm'?
(1M*9
13 wat is waiting time of each process for eatb
(15/5J-3 (hi) dl
ra&S)-?
above algorithms?
a*w
Nonpmwn PHority: (Samea.fi— of
(iv) Which algorithm results in minimum ave
time?
FCFS)
Solution :
P4
J P3 j P> I I FIFO
1 2 ------
,— « « *
P3 P5
LZ
9 12 14
0 8
Flniah TAT Waging
Job Priority | Arrival
Time time time Process Priority Finish time TAT

PI / 3 3 13 (13-3) = 10 (5-3) - 2 PI 3 8 (8-0) = 8 0 "

P2 1 1 2 (2-1) = 1 (1-1) - 0 P2 1 9 (9-1) = 8 1

P3 2 2 5 (5-2) = 3 (2-2) = 0 P3 2 12 ( 1 2 - 2 ) = 1Q
P4 3 3 15 (15-3) = 12 (13-3) = 10 P4 2 14 ( 1 4 - 3 ) = 11
P5 4 21 (21-4) = 17 (15-4) = 11 P5 4 20 (20-4) = 16
Average (43/5) =8.6 Average
(23/5) = 4.6 (53/5) = 10.6 (33/5) = t6
Round Robin - quantum - 2 units
SJF (Preemtive)
Job Priority Arriwl Anieh TAT Waiting time
Time time i P2 P3 P4 _ P5 P1
PI
21 0 1 2 5 7 13 20
18 13) + (19-15) = 12
P2
2 (2-1) = 1 Proce.1 Priority
(M) 0 Finish time TAT
P3 2 WaNngta
1 (11*2) = 9
(2-2) + (104) = 6 Pl 3
P4 3 20
a (3-3) = 5
( 2 0 - 0 ) = 20 ( 1 3 - 1 ) = 12
(6-3) = 3
P5 P2 1
19 (19-4) (2-11= 1 (1-11=0
15 P3
Average 2
5
ro
1
ll

(48/5) (2-2)=0_
005) = 6 P4
9.6 3
7
P-3) = 4 (5 -3) = 2
P5
13
(13-4) = 9 (7-4).3_
Average

Scanned by CamScanner
system <MU-Sem 4 -IT>
2-27
Process Management
JjHN nP ' t ' Ve)
MU - N o y . 2015. 1 0 M a r k s
Consider following set of processes with their CPU burst
time.

Process Burst time | Arrival time


Finish lime TAT Waiting time
ftforW P1
------- 10 1 _______I
3 8 (8-0) - 8 0 P2 4 2 'I
|P1_
1 9 £9 - 1) = 8 (8-1)» 7 P3 3
5
fe-— 12 ( 1 4 - 2 ) = 12
I
2 (11 - 2 ) = 9 [ P4 I 3 4
I
-

_ _ _i
14 ___ ( 1 1 - 3 H 8

e
CO
<0
D r a w Gantt chart FCFS, SJF preemptive and round

II
3 J
20 ( 2 0 - 4 ) = 16 robin (quantum w 3), Calculate average waiting time
4 (14 - 4) = 10
Pfi_ _ and average turnaround time.
(52/5) = 104 (32/5) = 6.4
Avsmg® («> Explain which scheduling policy adopted by Linux?
Solution :
priority (Preempt)
Process TAT Walting time 1
2 P3 | P4 P5
r rp* I I
1
-------"T 2 5 12 14 20 Pl (23- 1) = 22 ' ( 1 4 - 2 ) = 12 \
1
0

P2 (6 - 2 ) = 4 0
Finish time TAT Waiting time I 1
Priority
Procs*
P3 | ( 1 4 - 3 ) = 11 1 (9-3) = 6 1
3 . 20 ( 1 2 - 0 ) = 12 (5 - 1 ) = 4
PI

VI
P4 1 (6-4) = 2 1

II
1
1 2 (2-1) = 1 (1-1)= 0
P2
Average 42/4=10.5 | 20/4 - 5 |
SL
CM

co

5
ii

II
1

P3 2
FCFS Scheduling
3 7 (14 - 3) = 11 (12 - 3 ) = 9
P4 _]

4 13 ( 2 0 - 4 ) = 16 ( 1 4 - 4 ) = 10
P5 [

(43/5) = 8.6 (23/5) = 4.6 .... .


Average [
Process I TAT Waiting time |
RR (Quantum 2)
Pl (11- l)=10
| P1 P2 | P3 ~ P 4 [ P5 P 1 [ P3 [ P 5 IP1 P5 P1 |
(11 - 2 ) = 9 _ _ _1
P2 ( 1 5 - 2 ) = 13
5 7 9 11 12 14 16 18 20
0 2 ’ 3
P3 (20 - 3 ) = 17 (15 -3) = 12 ___|

P4 ( 2 3 - 4 ) = 19 (20-4)=_16 _ _ _
Waiting time
Praceu Priority Finish time TAT
Average 59/4=14.75 | 37/4=9.25
( 2 0 - 0 ) = 20 0+(9’2)+(14-1lH18-16) = 12
(H) SJF Preemptive Scheduling
(3-1)= 2 (2-0 = 1

7
(12- 2) = 10 (3-2M11-5) =

(7-3} = 4 (5-3) = 2

(18 - 4 ) = U

f P 1 I P9 I P3 I P 4 | Pi I P2 | P 3 | P1 I T U
Average 1 4 7 10 1 3 16 17 19 22 23
lives minimuia average waiting lieie.

Scanned by CamScanner
Operating System (MU • S errl *—S
pthread Scheduling
Waiting time
TAT Following are the pthread ““«““»■> value,
12 PTHREAD_SCOPE_PROCESS is u
(23- I )=22
(4- threads using PCS scheduling.
( 1 7 - 2 ) = 15
PTHREAD-SCOPEJSYSTEM is Used

(19- 3) = I 6 (7-
threads using SCS scheduling. to
(13-4) = 9 ( ro- 4) = 6
40/4= 10 PTHREAD_SCOPE„PROCESS to sched u
62/4 =15. 5
available light weight processes. Thread
Syllabus Topic : Thread Scheduling the number of LWPs. Scheduler activati Ons c

this purpose. The PTHREAD_SCOPE jy s _s


2.78 Thread Scheduling and bind an LWP for each user-level thr ea(J
many systems, successfully mapping threads
2.18.7 Contention Scope
the one-to-one policy.
I Q. Write note on thread scheduting J In order to get and set contention sco
P
- Scheduling of the user level and kernel level threads is Pthread IPC offers following two functions t Wo

different. pthread_attr_setscope(pthread_attr_t *attr jnt

If system implements many to one or many to many pthread_attr_getscope(pthread_attr_t *attr ’ t


model of multithreading then thread library schedules
user-level threads to run on an available LWP. Syllabus Topic : Multiple Processor
- This scheme of scheduling is called as process-
conteiftion scope (PCS) as threads belonging to the 2.19 Multiple-Processor Schedmin,
same process competes for processor.
[ Q. Explain multiprocessor schedulingh f
The operating system has to actually schedule the I
kernel thread on physical processor to run a user thread Here we assume the homogeneous multiorom
P
scheduled by thread library on available LWPs. i processors are identical.

Kernel uses system-contention scope (SCS) to schedule


2.19.1 Approaches to Multiple-Processor
the threads on processor when there are many threads
uhng
competing for the physical processor. I

Windows XP, Solaris 9; and Linux use one-to-one In asymmetric multiprocessing, all sd
model and schedule threads using only system- decisions, I/O processing, and other system t
contention scope.

pr,ont AH other processors execute in user mode. J


rhe scheduler chooses ses
the
the
7 y-
.. , runnable thread withh the
u. system data structures are accessed by single J
highest priority to run.
Pressor leading to minimizing lhe data shanog l
Programmers set the priorities nr

whicharenotaheredbyrhXa .r symmetric multiprocessing, scheduling is earned'


thread libraries may y each processor.
alter the priority of a thread Prammer to
KS Wi
“ aatura lly preempt the thread
Present for a higher-priority thread- On th “
p'«X V h““ e hc
*
there is no assurance of time slid ’ ° ther hand
>
ng amon
equal priority. g threads of

Scanned by CamScanner
2’29
Process Management
2.19.4 Multicore Processors
s
Some “PP° n a* affinity scheduling in
Which same thread again and again gets scheduled on days, it is possible to place multiple processor
cores on the same physical chip.
same CP U. Tins makes better use of previously cached
This is called multicore processor, Each of the cores is
blocks and increases the cache hits.
e
as separate physical processor by OS as there is
Also, the TLB may also have the right pag es , reducing
register in core to set its architectural state.
7L0 faults. For creation of affinity a two-level
SMP systems using multicore processors consume less
scheduling algorithm is used.
power and efficient.
After a ttread 1S
created, it is assigned to a lightly
j ded CPU which is a top level of the algorithm. It Scheduling is complex for multicore system. The
ensures that, each CPU acquires its own collection of processor accessing memory has to wait for data as it is
not available. So processor spends 50 % of its time in
threads*
waiting.
_ The bottom level of the algorithm performs real 4
- As a solution over this, hardware designer provided
scheduling- It is carried out by each CPU separately, on
two or many hardware threads for each core.
the basis of priorities or some other means.
If one thread has to wait for memory then core switch
_ By trying to keep a thread on the same CPU for its
to other thread. OS treats each hardware thread as
entire lifetime, cache affinity is maximized.
logical processor for S/W thread.
, In case if a CPU has no threads to run, it takes one from
- The course grained and fine grained multithreading can
another CPU instead of remaining idle.
be used to multithread a processor.
Approximately equal load distribution over the
Multithreaded multicore processor requires two level of
available CPUs, cache affinity and minimizing
scheduling. OS schedules software thread on hardware
contention for the ready lists as CPU frequently uses thread using any scheduling algorithm.
same ready list are three benefits offered by two-level
Second level of scheduling involves how each core
scheduling algorithm.
make a decision about which hardware thread to
Soft affinity means same process will be scheduled on execute.
same processor and keep running on it. This can be
policy but cannot be guaranteed by OS. 2.19.5 Virtualization and Scheduling
Process migration on other CPU may happen. In hard - With virtualization, single CPU system works Like
affinity process can not be migrated to other processor. multiprocessor system.
The virtualization software offers one or more virtual
2.19.3 Load Balancing
processors to each of the VMs running on the system
The main objective of load balancing in symmetric and then schedules the use of the real CPUs among the
multiprocessing systems (SMP) is to keep the load VMs. VMs are created and managed by host operating
balanced among all the processors. system. Each V M has guest OS and applications
running within it.
Load is evenly distributed among all processors.
Otherwise some processors may remain idle and other The guest OS scheduling algorithm that expects a
will be heavily loaded. convinced amount of progress in a given amount of
time will be negatively impacted by virtualization.
In push migration approach of load balancing, a
specific process periodically check load of processors. 2.19.6 Other Multiprocessor Scheduling
If it finds the processor with light load then it migrates Approaches
process from heavily loaded processor to lightly loaded
Following are the four general approaches used for
or idle processor.
scheduling of threads on multiprocessor system.
In pull migration approach, idle processor pu
waiting task from heavily loaded processor.

Scanned by CamScanner
2-30
T d harinfl approaches

TT on -ftrst-served (FCF$)

bef
7~ iiesTriu o f threads tirg

I r pmGanfl __ ~~

,J
Fig. C2.13 ° ad sharin
K “PProa ch(ai

LJ fD) -------* ! irs»-« m


e-n rs
'- ser
’’ ed
(FCFS)
F

fiR C Z U : Appn-chesnsed for scheduling of th"-* The threads of the newly arrived j ob
R
* C onmultlpnrcessor system consecutively at the end of shared q Ueue
processor selects next ready thread and cx '
(A) Load Sharing it finishes execution or gets blocked.
eue
single global qu

and each processor sdects

it is idle. If the jobs are having smallest number O f


+ (B) Gang Scheduling
a set
ready queue is like priority queue and equ j
of processors all at once, on a one-to-one twb. jobs are ordered according to jobs arrived firsi
to FCFS, thread executes until it finishes exerJ
(C) Dedicated Processor Assignment
gets blocked.
In this approach, during execution of program the
number of processors assigned i s equal to number of
threads available in program. A l l the processors returns Job having smallest n u m b e r o f unscheduled
to pool of processors after program complete the
execution.
number of threads with compare to executing
(D) Dynamic Scheduling it preempts the threads o f scheduled job.
The number of threads tn process may changes during Disadvantages |
the execution of that process.
- A single shared ready queue may occupy the J
+ 2.1 9.6(A) Load Sharing memory portion and if many processors simuitanej
Advantages access it then it may become bottleneck.

In this approach, load is equally distributed among all Preempted threads may not get same processal
the processors so that no one remains idle. resume execution. The caching becomes lessefficJ
There is no need of centralized scheduler The processors have cache memory.
scheduling routine of the operating system executes on It is unlikely to get access to a l l the threads J
available processor to choose the next thread.
program at the same t i m e . If more coordination
maintained global queue can be accessed by using equired between thread t h e n process switch may is
schemes used in uniprocessor scheduling such J to poor performance.

Z'’ ■* 2.19.6(B) Gang Scheduling


v
' Advantages

parallel there id
performance * more incltaii

Scanned by CamScanner
■rating System (MU - S e m 4 - 1
Process Management
Sch eduling overhead can be minimized as a sinele
decision affects a number of processors and procesLs
&h a set of nm-time library routines).
This approx may
Gang scnc . vh anon whose any be appropriate for some
applications and not for all. Applications can be
part is flOt * tfUJ,ng ant other 1S raad
y for execution. implemented to take advantage of this feature of
en,cnted in many ,nukl
lt i s ijnp' Processor operating operating system.
' systems- rn this scheduling process switches are Operating system is only responsible to processor
cd rcJatcd threads
ini ran in parallel. Hence, a ocatinn and performs following tasks when job demands
nnanceisgpod-
one or more processors.
threads that require coordination run in parallel, Allocate the idle processor to satisfy the request.
thcy can access file without additional overhead and
If job is new arrival and need processor then take the
source allocation can be done with less overhead.
processor from currently allocated job to which more
than one processor is allocated and allocate to newly
processors and M applications with N or fewer threads amved job.
are present then each application could be given 1/M of In case if jobs request cannot be satisfied then it
available time on the N processors, using time
slicing- or the job withdraws the request.
After release of one or more processors

In this approach, during execution of program the Search the queue of requests that are mot satisfied for
number of processors assigned is equal to number of processors. Allocate a single processor to each job in
threads available in program. the list that are waiting new arrivals and currently has
no processors.
All the processors returns to pool of processors after
program complete the execution. In this approach, if Then search the list again, allocating the remaining
processors on an FCFS basis.
thread of an application is blocked waiting for I/O or
for synchronization with other thread then processor
remains idle. It can be addressed as following : 2.20 Exam Pack
o If tens or hundreds of processors present in system
(University and Review Questions)
then processor utilization does not matter for
Syllabus Topic : Process Concept
performance,
Q. What do you mean by process ?
o As process switching i s avoided, it results in better
improvement in performance, (Refer section 2.2) (3 Marks) (Dec. 2014)
Q. Explain the concept of process,
2.19.6(D) Dynamic Scheduling
(Refer section 2.2)
- In some applications, the number of threads changes Q. What is context switch?
' during execution. (Refer section 2.3)

The operating system then can adjust the load to Syllabus Topic : Operation on Processes
improve performance. Both OS and application can What are the operations performed on
Q.
carry out scheduling task. process? (Refer section 2.4)
The operating system partitions the processors to Q. Explain process creation.
allocate to the jobs, (Refer section 2.4. 1)
■ The runnable task of each job is mapped to threads and Explain process termination
Q.
is given to the processors for execution at present in its (Refer section 2.4.2)
partition,
Q. Explain role of process control block.
A proper decision, about which subset to run, in (Refer section 2.5) (5 Marks) (Dec. 201 4)
addition to which thread to suspend when a process i_

Scanned by CamScanner
2 32
4
System (MU - Sem 'L Syllabus Topic : Process Sch T
Q. Explain process control blocn. Basic Concepts

(Refer sect fen 2. 5) Q.


What is difference between
J. Draw and explain process state tra

diagram. (Refer section 2.6) Syllabus Topic : Scheduling c
" Syllabus Topic : Process Scheduling *1*
Q. What are the criteria tor eva
Explain long-term scheduler algorithm performance? fR e fer >
(Refer section 2.8. 1(A)) Syllabus Topic : Scheduling a1o
rl
Explain short-term scheduler. %u '
Q. Explain FIFO scheduling algorithm
(Refer section 2. 8. 1(B))

Q.
Q. Explain SJF and SRTN sghe
(Refer section 2.8. 1(C))

Q. Compare the functions of different types of


Q. Describe priority scheduling algorithm
schedulers. (Refer section 2.8, 1(D))
(Refer section 2. 16.2(C))
Sy/iabus Topic : Multithreading
Q. With the help of example, explain
Q, Explain user level and kernel level threads with
Scheduling algorithm. (Refer section 2
advantages and disadvantages.
Explain multilevel queue scheduling
(Refer section 2. 1 1)
(Refer section 2. 16.2(E))
Syllabus Topic : Process - Multithreading
Models Q. Explain multilevel feedback queue

Q. Explain various multithreading models.


(Refer section 2. 12) Example 2.17.1 ( 1 0 Marks)
(Dec
Syllabus Topic : Thread Libraries
(Dei
O. Write note on thread libraries.
(He
(Refer section 2. 13)
Syllabus Topic : Thread Scheduling
Syllabus Topic : Threading Issues
Write note o n thread scheduling.
Q- What are different threading issues?
(Refer section 2.18.1)
(Refer section 2. 14)
Syllabus Topic : Multiple Processor Sc

Q. Explain multiprocessor scheduling in deta


(Refer section 2. 13)

Scanned by CamScanner
I CHAPTER

131
[ Module in
s Process Coordination
•'X
I
sp
\„
Syllabus Topics
— —J
9
X Synchronization : The critical Section P hi

X) 0 SSZss S-z
l6 Syllabus Topic ; Synchro n ization how the operating system handles interrupts, and the
% ,
scheduling policies of the operating system.
3,1 Background The difficulties that arise
1. Global resources sharing. For example, suppose two
(May 16)
processes use the same global variable simultaneously
Q. Explain the process synchronization in brief.
and both carry out read and write operations on that
MU - May 2016, 5 Marks
variable, then various reads and writes execution
In a single r proccssor multiprogramming system. ordering is serious.
2. Optimal management of the resources is not easy for
the appearance o f concurrent execution, a fixed time the operating system.
r'ng slot is allocated to each process. After utilization of this Programming error tracing is not easy as results arc
time slot, CPU gets allocated to other process. Such typically non- deterministic and not reproducible.
switching of C P U back and forth between processes is Only a single user is supported by single processor
called as context switch. multiprogramming system at a lime. While working on
At a lime single process gets executed so parallel one application, user can switch to other application in
tScJ
processing cannot be accomplished. Also there is a this system. The keyboard for input purpose and screen
ieti!. definite amount o f overhead drawn in switching back for output purpose used by each application is same.
The reason is, each application needs to use the
and forth between processes. Apart from above

application, memory can be saved by keeping single


copy of procedure in memory portion which is global
In a multiple processor system, interleaving and for all sharing applications.
overlapping the execution of multiple processes is
Example
achievable. Both interleaving and overlapping
correspond to basically diverse modes of execution and void echof)

and overlapping can be treated as illustration of chinput = gelcharQ;

concurrent processing and both the techniques address choutput = chinput;

the similar problems. putchar(choutput);

The comparative
depends speed
on activities of execution
and behavior of process
of other P -s .

Scanned by CamScanner
3-2 , i f is necessary to use shared 8 . L
at a time.
among
. dose c by

0
processes, sharing Pn sequence :

I- initially process # a MU - Ma
getcharo returns its v stnugh »
chinput. Process Pi gc*® At Hus
xored in --Ration and communication
—XS “ Xi ents should be satisfied w
the mos‘ recenuj
q
variable chinput. o f process P 2 nnicate with each other. Synchr
passes is required to achieve the mutual
2.
it call up the ec on monitor. Independent processes do not comm unicate

displays lhe single* eAecution and the


3th er but cooperating processes may need to
I. now again, p»ce» , bJ c h i n put and as information. Cooperative processes
m ets ovel n
value 8 J jjy e 'chjnput holds value n, communicates through shared memory »

* “ X re variable choutput and displayed, passing.


which is transfe second
Cooperating processes require an J,
Communication (IPC) mechanism that win

to exchange data and information. There at

Two fundamental models of


Interprocess communication
fbove example both P, and P 2 are allowed Io
shared global variable chinput. However, if only one
process at a time is allowed to be in procedure, above
1 . Shared memory
discussed sequence would result in the following : ;
2. Message passing
(i) Initially process P ( calls the echo procedure. The I
getcharf) returns its value and stores it in input
Fig. C3.1 : Fundamental models of interprocess
variable chinput. Process P] gets interrupted
communication
straight away after returned value gets stored in
chinput At this instant, the most recently entered 1. Shared memory
character, m, is stored in variable chinput. ;
In the shared-memory model, a region of menoy
(ii) . After P, gets interrupted, there is turn of process
P 2 and it call up the echo procedure. At this is shared by cooperating processes is estabEi

situation, process P; is in suspended state and it is Processes can then exchange information by
still inside the echo procedure. The process P 2 is and writing data to the shared region.
not allowed to enter inside the echo() procedure.
In the message passing model, communication
Therefore, P 2 is suspended until P, comes out of
place by means of messages exchanged bet##11
the echo procedure.
cooperating processes.

execution of echo procedure. The output ♦ 1 Message Passing


displayed is the character m.
Message passing provides both functions.
(iv) When P, comes oui of echo() procedure p
2
passing has the further benefit that it lends
becomes active. Now process p 2
call the echo() procedure. y implementation in distributed systems as
shared-memory multiprocessor and

Scanned by CamScanner
Operating System (MU - a . m
__ 3-3
- Follow, ng are the two primitives used in I Process Coordination
passing : * I Message passing system should give guarantee that
messages will be correctly received by receiver. Receiver
sends acknowledgement to sender after receiving the
message, if acknowledgement not received in defined time
This is the minimum two operations required for 1 en sender resend the message. It also offers authentication
processes to send and receive the messages. A process service.
sends data in the form of a message to another process 3.3 Race Condition
indicated by a destination. A process receives data by
executing the receive primitive, indicating the source , Q- What Is race condition? Explain with example.
and the message. A race condition takes place when more than one
Communication by sending and receiving messages process write data items so that the final result depends on
require synchronization. The receiver cannot receive a the order of execution of instructions in the multiple
message until it has been sent by another process. processes. Consider two processes, P $ and Pj, which share
the global variable “x”. At the time of execution, process P t
The sending process is blocked until the message is
updates x to the value 1 and at the time of execution of
received, or it is not after the send primitive executed
another process, P 2 updates “x” to the value 2. Thus, the two
by process. Similarly, when a process issues a receive tasks are in a race to write variable “x”. In this example the
process that modifies x value lastly decides the final value
o Previously sent message is received and execution of"x”.
continues. Role of Operating System
o If there is no waiting message, then either (a) the 1. The operating system should be capable of keeping
process is blocked until a message arrives, or track of the different processes.
(b) the process continues to execute, abandoning Allocation and reclamation of various resources should
the attempt to receive. be done by operating system.
Thus, both the sender and receiver can be blocking or Each process’s data and physical resources should be
nonblocking. Three combinations are common, protected by operating system against unintentional
interfering by other processes.
one or two combinations implemented : The operations performed by the process and the
O The blocking send and blocking receive : Until output that it generates should not depend on the speed
the message is handed over, the sender and of execution compare to the speed of other concurrent
receiver both gets blocked. processes.

O The nonblocking send and blocking receive : Definition of process interaction


After sending the message, the sender continues Process Interaction can be defined as :
its work but receiver remains blocked until
The processes not aware of each other
message is arrived to it. This permits a process to
- The processes in a straight way or indirectly aware
send one or more messages to a multiple
destinations as quickly as possible. Here receiver of each other

is in need of message so that it can resume the The conflict occurs between the concurrently executing
execution . So it gets blocked until message
arrives. resource. More than one process may require the same
resource while they are executing. The existence of the
Nonblocking send, nonblocking receive :
particular process is not known to other process. The
Neither party is required to wait.
The nonblocking send and nonblocking receive: competing processes do not pass the information to each
other.
-
Both sender »'« " “
will continue the work.

Scanned by CamScanner
4
Tnoeratinq System (MU SgL- '

mating

> TheCritJcal SectJonProb 161


’'
Mutt
1 . Mutual Exclusion

--— ■ M U - Dc
nnrwri 2. Progress Wha

Q. Whi
Q. Explain cndca) -MU
— - June on 15 10 Blgi
solutions
ou Fig . C3-2 : Conditions of critical section At the
, it i5 necessary <o «"<' ' variable, al
i Mutual exclusion
lai the sar
some means to disallow more th® n,on of
I process h?
section (CS). K currently Process P 5 i a e

CS then other processes should not ?l o n . In thi


(variable;
mus
conditions and ««>«. «* ‘ * ■ time. Or
the codes in Critical Sections in each thread. The type 2. Progress Icritical s
p mX of the code that comprises Criticai Section am as If process’s CS is free, means it is not exe 1 Th
follows : CS. In this situation suppose some processes J process
executi
- These codes reference one or more variables in a “re - go into their critical sections for execution, foj
update-write" way. At the same time, any of those with e
take the decision about who will now enter n j
paralk
variables maybe changed by another thread.
only processes that are not executing in their
- These codes modify one or more variables that can be sections are allowed to take part to make this
1 R
referenced in “read- update- write” fashion by another This decision should be taken in some - 1

thread. and should not delayed indefinitely, 1


- Any part of data structure used by the codes can be
3. Bounded waiting
modified by another thread.
A limit should be set on the number of times lbw
- Codes modify any part of a data structure that is
processes are permitted to go into their critical sc
currently in use by another thread.
after a process has made a request to go into its c
When one process is currently executing shared section and before that request is approved.
modifiable data in its critical section, no other process is to
Semaphore and monitor are the solutions to i
be permitted to execute in its critical section. Hence, the
mutual exclusion.
execution of critical sections by the processes is mutually
exclusive in time.
A synchronization variable taking positive ina

In a code called as critical section, only one process


values is called semaphore. Binary semaphore only hl -
values 0 or L Hardware does not supply the semaiJ
executes at a time. In a critical section problem an Semaphore offer the solution to critical section ptoM
algorithm needs to be designed which permits at most one semaphore variable takes value greater than 1 then W
process tnto the cntical section at a time, with no deadlock called as counting semaphore. Like semaphore, a 0*4
" ciidcal section's problem solution must obey mutual
also solves critical section problem. It is a 4
exclusion, progress and bounded waiting. Any process component which contains one or more procedut l
Erectly cannot enter in critical section. First process has to initialization sequence and local data. Following ri
obtain permission for its entry in its critical ° components of monitors : 1
ment of code which implements this appeal of proce Shared data declaration
1
the entry section. When process conies
Shared data initialization
section after completing its execution there it has
Operations on shared data

Synchronization statement

Scanned by CamScanner
irating System (MU » Sem 4 - IT) 3-5 Process Coordination
3,5 Mutual Exclusion Following 4 conditions should hold to obtain a excellent
solution for the critical section problem (mutual exclusion) :
(June 15, Nov, 15, May 16)
1. Two processes should not be inside their critical
q What is mutual exclusion ?
sections at the same time.
MU - J u n e 2U15- N o v 2015. 5 Rfia r k s
2. Comparative speeds of processes or number of CPUs
What is mutual exclusion ? Explain its
M U - M a y 2016, 5 M a r k s
in the systems are not assumed.
3. Process outside its critical section should not block
At the same time as one process executes the shared other processes.
variable, all remaining processes wishing to accomplish so 4. Any process should not wait for longer amount of time
a t the same instant should be kept waiting; when that arbitarily to enter its critical section.
process has over executing the shared variable, one of the
processes waiting to perform so should be permitted to cany Syllabus Topic : Petarson’s Solution
on . In this manner, each process executing the shared data
(variables) keep outs all others from doing so at the same 3.6 Peterson’s Solution
time. Only one process should be allowed to execute in
critical section. This is called Mutual Exclusion. Q. Explain Peterson solution to mutual exclusion.
Q. Give software approach for mutual exclusion.
The mutual exclusion is put into effect only if
processes access shared changeable data - If during Peterson's solution involves only two processes. It
execution, the operations of the processes do not conflict
gives a good means of algorithmic explanation of work
with each other, they should be permitted to proceed in out the critical-section problem and exemplifies some
parallel.
of the difficulties concerned in designing software that
<r Requirement of Mutual Exclusion addresses the requirements of mutual exclusion,

It is obligatory to enforce a mutual exclusion: I f many progress, and bounded waiting.


processes want to execute in critical section then only The two processes Pi and Pj perform alternate
one process should be allowed to execute in that critical
sections (remaining code of the process). Petersons
solution requires the two processes to share two data
items :
critical section) then it is allowed to do so without
interfering with other p r o c e s s e s .
boolean y{2];

The variable x decides who will enter in its critical


should not be delayed for an indefinite period: that is it
section, if x=i, then process Pi is permissible to
execute in its critical section. The y array is used to
If the critical section is empty (any process not
specify whether process w ready to enter its critical
in critical section), then if any process that
section. For example, if y[i] is true, this value indicates
requests entry to its
enter without delay.
Is or number In order to enter the critical section, process ft first sets
yli] equal to tree and then sets variable x to the value J.
of processors are not made.
Pi x =j because, if the other (Pj) process desires to
Any process will not remain inside its
enter the critical section, it can do so. If both processes
for indefinite time. It should stay t ere or a
attempt to enter simultaneously, x will be set to both t
only.
and j at approximately the same time. Only one of these
Mutual Exclusion Conditions assignments will last; the other will occur but will be

A race condition can be avoided if we J me . overwritten straight away.


processes in their critical section

Scanned by CamScanner
_- - - Prt . 1
9 pi al entCr cri,ic
" " " — i then * ““ «?\l
’oporatiraSystemjMU; lfX
ft will enter the critical section.
•ratin'
fts critical section, it win res M
e
*’ owjng ft to enter its critical section j Following are
y fO = TRUE; *<r yv LU
t i l to tru ’ it 6
alsft must
MOVX
pj resets 1.
&& x== A ft does not change the value of A Return
while (ylfl
Considt
while executing the while statement.
Asquint
critical Berben Critical section (progress) after at most
(bounded waiting). So stq
execut
= SynchronSg , Pj gets
the Cl

synchronization Hardware ] the ct


} while (TRUE* 7
3 . So
The final value of x decides We can prove 011 Cons
. . to c n ( e r its critical s — T iThardware ut* to
Ever

The simple hardware instructions sche


_ Mutual exclusion condition is satisfied. and
many systems can be used successfully ih
_ The progress requirement is satisfied. critical section problem. Hardware Aft-

_ Tbe bounded-warring requirement also achieved. any programming task easier and get to e

Mutual exclusion efficiency. | - If


be
Every Pi go into its critical section if and only i f either If we restrict the interrupts from taking 1
by
y (j] = false or x=i. Also note that, if both Pt and shared variable was being modified, in J
- Si
ft runs in critical sections simultaneously, then y[0] — environment the critical -section problcmj
C
y [1J =tme. It implies that Pi and Pj could not have
solved.
successfully executed their while statements at about 1
Many systems have special instructions called] progn
the same time, since the value of variable x can be
either i or j but cannot be both. Set Lock called as TSL instruction. It is having) requii
“TSL ACC, X”. In this instruction ACC is busy
Hence, one of the processes say, Pi must have
successfully executed the while statement, whereas Pj; register and X is symbolic name of memory loc
had to execute at least one additional statement holds character to be used as flag.

TSL is indivisible instruction means that it c 3.8


However, at that time, y [j J = true and x == j, and this
interrupted in between. After instruction gets i
condition will continue as Jong as Pi is in its critical Q.
following action take place.
section. Therefore we can conclude that mutual
exclusion is preserved. o Content of X is copied to ACC

Progress and bounded waiting o Content of X becomes N

Process Pi can be prohibited from entering the critical


i s not interrupted. If we assume value of X=N initii
condition y Q] =tnte and x= j; this )oo p is the onl meaning of N is critical region is not free and F w
one possible.

1. TSL ACC, X
t0 g t0 the critical section
(Content of X is copied to
m-ir-false,
W T P, can ° go
and “ j nto ils eritjcaj - 'y
Content of X becomes N)
has set y[j] to true and is also ninn - •
2. CMP ACC, “F*
(See if critical region is ft*1
3. BU 1
J (If critical region not
Depending on the value of x either P, „ F DJ - W l . 1*enter
~ in

the critical section. back.)
4,
Return •iff)

Scanned by CamScanner
.rating System (MU - Sem 4 - IT) " 3-7 Process Coordination

Definition
MOV X, “F* (F gets copied to X) A semaphore S is integer variable whose value can be
Return (Return to caller) accessed and changed only by two operations wait
(P or sleep or down) and Signal (V or wakeup
or up). Wait and Signal are atomic operations.
Assume initially scheduler schedules Pi and X = "F*.
Binary Semaphores does not assume all the integer
So step 1 makes ACC= F and X= N . Here Pl after values. It assumes only two values 0 and 1. On the
executing step 4, prepare to enter CR (critical region).
contrary, counting semaphores (general semaphores)
can assume only nonnegative values.
the CR* Pj executes step 1 and X = N. In step 2, Pj fails The wait operation on semaphores S , written as wait(S)
the comparison and loop backs. It cannot execute step or P(S), operates as follows :

wait(S): IF S > 0 I
THEN S: = S — 1 I
Even though context switch occurs and Pj gets
_ _ _ _ _E15E (wait on S) ____ __________ ‘1
and X values are N (busy waiting). The signal operation on semaphore S , written as

to exit the CR ans X — **F . aignal(S): IF (one or more process are waiting on S)
If Pj scheduled after this, it execute step I and ACC
THEN (let one of these processes proceed)
becomes F as X was F in above step. Now due to step I
ELSE S: = S + 1

The two operations, wait and signal are done as single


CR. indivisible atomic operation. It means, once a
semaphore operation has initiated, no other process can
access the semaphore until operation has finished.
progress and bounded wailing. Since special hardware is
Mutual exclusion on the semaphore S is enforced
required cannot be generalized to all machines. Also due to
within wait(S) and signal(S).
If many processes attempt a wait(S) at the same time,
Syllabus Topic : Semaphores only one process will be permitted to proceed. The
other processes will be kept waiting in queue. The
implementation of wait and signal promises that
3.8 Semaphores
processes will not undergo indefinite delay.
Q, What is semaphore? Semaphores solve the lost-wakeup problem.
cr Disadvantages of Semaphore
- E.W. Dijkstra (1965) abstracted the key notion of
mutual exclusion in his concepts of semaphores. 1. Semaphores are fundamentally shared global variables.
- The solutions of the critical section problem 2. From any location in a program it can be accessed
represented in the section are not easy to generalize to during course of execution.
more complex problems. 3. A lack of command over it or assurance of proper
usage.
To avoid this complicatedness, we can use a
synchronization tool call a semaphore. A semaphore 4, A lack of proper linking between the semaphore and
the data to which the semaphore controls access.
is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: 5 They provide two functions, mutual exclusion and
scheduling constraints
wait and signal. These operations were firstly termed P
(for wait) and V (for signal). Mutex is the concept related to binary semaphore. A
key difference between the two is that the process that locks
the mutex (sets the value to zero) must be the one to unlock

Scanned by CamScanner
for the producer

I".
process

extern ir"? hie for one


tratir
i, me «m.pb»« - ________
consum
|ock
process to
the but 1
unlock it.
ARRAY SIZE) cr' Initial!
Set fu!

, f l extP«»diiced;

« -ss£ ‘‘ jit 53
v rl
. n % ARRAY-SIZE;
Sete

For
counter •I' * .— ■—- ' “

consumer code for the consumer process


Produc
follows : ------------------------------------------------------------
glg
Explain an ° [ «MU - Pf£. 2 0 1 WtUL
P™“ em
' T Tuial buffer is while (true)

- inpnxlu -rp there is a finite


bounded buffer. W , buffcr . While
while (counter = - 0)
numbers of slots a" is full the producer
producing the items, if .be buffer « /* do nothing */
process should be suspe • jBms
nextConsumed = array [out];
in the same ' ' “Ztsumer should be oat = (o „ I + l)%AHRAY_SIZE : Co
X -nsurttbatonlyone W
counter—;
bUffer
' “
conditions or Jost updates.
Producer and consumer functions are
_ SIee UP system calls is used in this situation to
considered separately, they may not J
avoid race condition. Producer process should go to
when executed concurrently.
sleep when buffer is lull and consumer process should
wale up when producer will put data in buffer. In the Race condition also occurs in this approach justi
same way, consumer goes to sleep until the producer came across in previous approaches.
puts some data in buffer and wakes u p . to consume this
As constraint is not imposed on accessing the
data.
variable, the race condition i s occurred.
- Here two processes, producer and consumer share a
common, fixed-size (bounded) buffer. The producer 3.9.2 Producer Consumer Problem
puts data into the buffer and the consumer takes data Semaphore
out.
Difficulty arises when the producer wants to put a new
data in the buffer, but there is no space in buffer. That Q. Explain how Readers/writers problem cm
is, n is already full. Way out to this problem i s that, solved with semaphores ?
producer goes to sleep and to be awakened when the
M U - J u n e 2015. 10Ito
consumer has removed data.
In previous section, we have seen to
It may also happen that, consumer wants to take out
consumer problem. The Solution to producer
data from the buffer but buffer is already empty Way
to this problem is that, consumer goes to sleep unt J
mutex. 1
the producer puts some data in buffer and take ,F I

consumer up. wakes The semaphore ’fuff is used for counting J


SOts
in the buffer that are full. The
counting the number of slots
that are
semaphore ‘mutex’ to make sure that the

Scanned by CamScanner
rating System (MU - Sem 4 - rp
3-9
consumer do not access modifiable shared section of Process Coordination
me buffer simultaneously. While defining the reader/ writers problem, it is
assumed that, many processes only read the file
Initialization
(readers) and many write to the file (writers). File is
- Set full buffer slots to 0. shared among a many number of processes. The
i.e. semaphore Full = 0.
- Set empty buffer slots to N.
to
i.e. semaphore empty = N . many readers.
Writing to the Hie is allowed to only one writer at
- For control access to critical section set mutex to I
the same time.
i.e. semaphore mutex = 1.
Readers are not allowed to read the file while
Producer ( ) writer are writing to the file,
WHILE (true) In this solution, the first reader accesses the fde by
produce-item ( ); performing a down operation on the semaphore file.
P (empty); Other readers only increment a counter, read count
P (mutex); When readers finish the reading, counter is
enter-item () decremented.
V (mutex) When last one ends by performing an up on the
V(fuil); semaphore, permitting a blocked writer, if there is one,
to write. Suppose that while a reader is reading file,
Consumer ( )
another reader comes along. Since having two readers
WHILE (true) at the same time is not a trouble, the second and later
P(fiill) readers can also be allowed if they come down.
P (mutex); After this assumes that a writer wants to perform write
remove-item ( ) ; on the file. The writer cannot be allowed to write the
V (mutex); file, since writers must have restricted access, so the
V (empty);
writer is suspended. The writer will suspended until no
reader is reading the file. If a new reader arrives
conaume-Item (Item)
continuously with very short interval and perform
Busy Waiting reading, the writer will never obtain the access of the
■> (Nov. 15, Dec. 16) file.

What do you mean by busy waiting? What is To stop this circumstance, the program is written in a
wrong with it ? different way: when a reader comes and at the same
M U - N o v . 2015, Dec, 2016, 5 Marks time a writer is waiting, the reader is suspended instead
of being allowed reading immediately.
Now writer will wait for readers that were reading and
If critical section is not free (process is already executing in
about to finish but does not have to wait for readers that
it) then other process trying to enter in critical section
came along after it. The drawback of this solution is
that it achieves less concurrency and thus lower
(continual looping) is the problem as far as performance. Following is the solution given
multiprogramming environment is considered. In this case a
single CPU is shared among many processes. This busy typedef int semaphore;

waiting waste CPU cycles which would have been used by semaphore mutex = 1 * controls access to ’readcount’*/
other processed effectively. semaphore file = 1;/* controls access to file ♦/

3.9.3 Readers/Writers Problem int readcount = 0;

void reader(void)
Q. Explain reader-writer problem.

Scanned by CamScanner
Pl
3-10

■rating System (MU -SgT j, either side of the plate. He


spaghetti-
while (TRUK) 1 P rrpea. r- puntc' */ Algorithm should be designed w ,
do wn(&mu ir xI : /* v h m' c c b h «
philosophers to eat. The algori W

if(rc . d ™, nl ==l) fiH=/’ if


"................ fork at the same time) while avoidi S {

, . .u’reudeount’*/
up{*ntu<«)y* rrleiue e x c l u d e acces
The dini n 8 philosopher’s problem c ’
r»d fik()/*nMd the file*/
example of problems related to
downW) exclusive to ' coun resources, which may occur whei)

ree.koi.nt = readreunt - ! ; / • « * fewer


**
includes concurrent threads of executj * n
roblem is a
ia ,he
way, <h« P ‘wtca X
i f (readout = = 0) upf&iW
approaches to synchronization. *t
reader... */

upfOrmutextV release

uee-datA-rradtV* noncritical region */

r <t[]ri

fork on the right side. Once the Philos


h v o forks are replaced rm

void wrilerf void)


This solution, unfortunately, leads to de i
(
the philosophers are hungry simul taileo

pick up the fork on their left, and right si


(hink up data(); /* noncritical region not be there. In this unseemly position, all
down(&file);/* get exclusive access */ starve.
write_fiIeQ; /* write the file */ Additional five forks ( a more clean solution ,
upl&file); /• release exclusive access * the problem or use of just one fork to eat the
can be a solution. Following program sh J
solution.

/• program dining philosophers ♦/


3.9.4 Dining Philosopher Problem
semaphore fork [5] = { 1 } ;
-> (Dec. 14, Nov. 15)
int i;
Q- Explain dining philosopher problem and solution to
void philosopher (int i )
it MU - Dec 2014. Nov. 2015. 10 Marks
Q. Explain Dining philosopher problem using
semaphores.

3.9.4 Dining Philosopher Problem
-> (Dec. 14, Nov. 15)

Q. Explain dining philosopher problem and solution to it.
MU - Dec. 2014, Nov. 2015, 10 Marks
Q. Explain Dining philosopher problem using semaphores.

- The problem is stated as : The work of five philosophers is thinking and eating at a table laid for them, and all agreed that the only food that contributed to their thinking efforts was spaghetti. Each philosopher requires two forks to eat spaghetti.
- The philosophers sit around a table with a big serving pot of spaghetti, five plates, and five forks. One plate is given to one philosopher, and a philosopher has to use the two forks on either side of the plate to eat the spaghetti.
- An algorithm should be designed which allows the philosophers to eat. The algorithm must ensure that no two philosophers use the same fork at the same time, while avoiding deadlock and starvation.
- The dining philosopher's problem is considered a classic example of problems related to exclusive access to limited resources, which may occur when a program includes concurrent threads of execution. In this way, the problem is representative of many approaches to synchronization.
- In an obvious first attempt, each philosopher first picks up the fork on the left side and then the fork on the right side. Once the philosopher finishes eating, the two forks are replaced on the table.
- This solution, unfortunately, leads to deadlock if all the philosophers become hungry simultaneously and pick up the fork on their left : the fork on the right side will not be there. In this unseemly position, all philosophers starve.
- Additional five forks (a more clean solution to the problem) or use of just one fork to eat the spaghetti can also be a solution. The following program shows the first attempt.

/* program dining philosophers */
semaphore fork[5] = {1};
int i;

void philosopher(int i)
{
    while (true) {
        think( );
        wait(fork[i]);                /* pick up left fork  */
        wait(fork[(i + 1) mod 5]);    /* pick up right fork */
        eat( );
        signal(fork[(i + 1) mod 5]);
        signal(fork[i]);
    }
}
void main( )
{
    parbegin (philosopher(0), philosopher(1), philosopher(2),
              philosopher(3), philosopher(4));
}

- One way to avoid the deadlock is to allow at most four philosophers at a time into the dining room. With at most four seated philosophers, at least one philosopher will have access to two forks. Following is the solution using semaphores.

/* program dining philosophers */
semaphore fork[5] = {1};
semaphore room = {4};
int i;

void philosopher(int i)
{
    while (true) {
        think( );
        wait(room);                   /* enter only if fewer than four are seated */
        wait(fork[i]);                /* pick up left fork  */
        wait(fork[(i + 1) mod 5]);    /* pick up right fork */
        eat( );
        signal(fork[(i + 1) mod 5]);
        signal(fork[i]);
        signal(room);
    }
}

void main( )
{
    parbegin (philosopher(0), philosopher(1), philosopher(2),
              philosopher(3), philosopher(4));
}

Syllabus Topic : Monitors

3.10 Monitors

Q. What is monitor? How it is used to achieve mutual exclusion? Explain.

- If semaphores are used incorrectly, it can produce timing errors that are hard to detect. These errors occur only if some particular execution sequences result, and these sequences do not always occur. In order to handle such errors, researchers have developed a high-level language construct called as monitor.
- A monitor is a set of procedures, variables, and data structures that are all grouped together in a particular type of module or package. Processes may call the procedures in a monitor if required, but direct access to the monitor's internal data structures from procedures declared outside the monitor is restricted.

monitor example
    integer i;
    condition c;
    procedure producer( );
    ...
    end;
    procedure consumer( );
    ...
    end;
end monitor;

- Monitors can achieve the mutual exclusion : only one process can be active in a monitor at a time. As monitors are a programming language construct, the compiler manages the calls to monitor procedures differently from other procedure calls.
- Normally, when a process calls a monitor procedure, it is checked whether any other process is currently executing within the monitor. If so, the calling process will be blocked until the other process has left the monitor. If no other process is using the monitor, the calling process may enter.

Characteristics of a monitor
- Only the monitor's procedures can access the local data variables. External procedures cannot access them.
- A process enters the monitor by calling one of its procedures.
- Only one process may be running in the monitor at a time; any other process that calls the monitor is blocked, waiting for the monitor to become available.
- Conditional variables are contained within monitors. These conditional variables can be accessed only within the monitor.
- cwait( ) and csignal( ) carry out the operations on conditional variables, and these are created as a special data type in monitors.

- The operations cwait and csignal work on the condition variables of the monitor as follows :
1. cwait(c) : On executing cwait(c) the calling process is paused on condition c. The process remains suspended until some other process issues csignal(c) for the same condition.
2. csignal(c) : Resumes exactly one of the processes that were blocked after a cwait on the same condition. Only one of the suspended processes is selected to resume execution; if no process is blocked on c, the signal is lost.
- When one process is executing in the monitor, processes that try to enter the monitor join a queue of processes blocked waiting for monitor availability.
- Within the monitor, a process may provisionally block itself on condition x by issuing cwait(x); it enters the monitor again when the condition alters, and resumes execution at the point in its program following the cwait(x) call.
- If the execution of a process is going on in the monitor and during execution it notices an alteration in condition variable x, it issues csignal(x), which signals the related condition queue that the condition has altered.
- It is possible for the producer to put characters into the buffer only by way of using a procedure append within the monitor; the producer does not have direct access to the buffer.
- This procedure initially checks the condition notfull to conclude whether there is space existing in the buffer. If space is not available, then the process executing in the monitor is blocked on that condition.
'0fl3
method to achieve i t .
3
J2_Atomlc Transact I one
this method, a log i s maintained by system 011*
storage and each log record described single of®*
atomic
i? Explain,
y transaction write. Following are the fid* *
record. °
o
ended sLZ Xu XXT' Transaction name : This name of «•*
doo
SOI O
®* ‘ write operation is unique. 1
unknown order. For data base syste'T “ »e
: This name
is wri,!*"" "*"* of 1
S Wn ,en s
o
* ' also unique.
°M value ■

o New value : Value of the data item after the write operation is carried out.

- Other log records are also there to record the significant events during processing of the transactions. These are, for example, start of a transaction and commit or abort of transactions. Using these logs the data is easily recovered in case of failures. The recovery algorithm uses the following two procedures :
o undo(Ti) : The data items updated by transaction Ti are restored to their old values by the undo(Ti) procedure.
o redo(Ti) : This procedure sets the values of the data items to their new values. These data items are the ones updated by transaction Ti.
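- A write-ahead log record carrying the fields listed above could be declared as in the sketch below. This is only an illustrative layout (field sizes and integer values are assumptions), not a prescribed format.

/* One write-ahead log record; every field named in the text appears here. */
struct log_record {
    char transaction_name[16];   /* unique name of the writing transaction */
    char data_item_name[16];     /* unique name of the data item written   */
    int  old_value;              /* value before the write (used by undo)  */
    int  new_value;              /* value after the write  (used by redo)  */
};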
3.11.3 Checkpoints

- Searching of the log is carried out in case of failure. It is necessary to search the log to know the transactions to be redone and undone after failure. This searching is time consuming, and modifications on updated data will cause recovery to get longer. Checkpointing is the solution over this problem. Along with maintaining the write-ahead log, the system from time to time takes checkpoints, which involve the following sequence of actions :
- Write all log records at present residing in main memory onto stable storage.
- Write all modified data residing in volatile storage to the stable storage.
- Write a log record <checkpoint> onto stable storage.
- As the <checkpoint> record is present in the log, it permits the system to make its recovery procedure more efficient.

3.11.4 Concurrent Atomic Transactions

- Each transaction is atomic. There can be many transactions executing concurrently. This concurrent execution is equivalent to a serial execution of the transactions in some order. This is called as the serializability property. This property can be maintained by just executing each transaction within a critical section.

3.11.4(A) Serializability

Q. Write note on Serializability.

- Let X and Y be two data items that can be read and written by two transactions T0 and T1. Both transactions execute atomically in the order T0 followed by T1. This sequence of execution is called as a schedule. In a serial schedule each transaction is executed atomically. A serial schedule contains a sequence of instructions from different transactions in which the instructions of a particular transaction appear together.
- If two transactions overlap their execution then the resultant schedule is no longer serial. A nonserial schedule does not essentially mean an incorrect execution. The Fig. 3.11.1 shows the serial schedule in which T0 is followed by T1. The Fig. 3.11.2 shows a concurrent serializable schedule.

        T0              T1
        READ(X)
        WRITE(X)
        READ(Y)
        WRITE(Y)
                        READ(X)
                        WRITE(X)
                        READ(Y)
                        WRITE(Y)

Fig. 3.11.1 : A serial schedule in which T0 is followed by T1

        T0              T1
        READ(X)
        WRITE(X)
                        READ(X)
                        WRITE(X)
        READ(Y)
        WRITE(Y)
                        READ(Y)
                        WRITE(Y)

Fig. 3.11.2 : A concurrent serializable schedule

- Let OP0 and OP1 be two operations from transactions T0 and T1 respectively. If OP0 and OP1 access the same data item and at least one of the operations is a write, then OP0 and OP1 conflict.

3.11.4(B) Locking Protocol

Q. Explain two-phase locking protocol.

- A lock is associated with each data item to ensure the serializability. In this case it is necessary for each transaction to follow the locking protocol that defines how locks are acquired and released. Following two modes can be used.
o Shared : If a transaction Ti has acquired a shared-mode lock on data item A, then Ti can read this item but cannot write A.

o Exclusive : If a transaction Ti has acquired an exclusive-mode lock on data item A, then Ti can both read and write A.

- The two-phase locking protocol ensures serializability. In this protocol it is needed that each transaction should issue its lock and unlock requests in two phases :
o Growing phase : In this phase, a transaction may acquire locks but may not release any locks.
o Shrinking phase : In this phase, a transaction may release locks but may not acquire any new locks.
- At the start a transaction is in the growing phase. The transaction acquires locks as required. After releasing a lock, it enters the shrinking phase, and no more lock requests can be issued.
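- The two phases can be seen in a small sketch (illustrative only; pthread mutexes stand in for exclusive-mode locks on two data items A and B): every lock is acquired before any lock is released.

#include <pthread.h>

pthread_mutex_t lock_A = PTHREAD_MUTEX_INITIALIZER;   /* lock protecting item A */
pthread_mutex_t lock_B = PTHREAD_MUTEX_INITIALIZER;   /* lock protecting item B */

void transaction_T(void)
{
    /* Growing phase : acquire every lock the transaction needs,
       release nothing.                                           */
    pthread_mutex_lock(&lock_A);
    pthread_mutex_lock(&lock_B);

    /* ... read and write data items A and B here ... */

    /* Shrinking phase : release locks, acquire no new ones. */
    pthread_mutex_unlock(&lock_A);
    pthread_mutex_unlock(&lock_B);
}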
- Conflict serializability is ensured by the two-phase locking protocol, but it does not ensure freedom from deadlock. Also, for a set of transactions, there are conflict-serializable schedules that cannot be obtained by using the two-phase locking protocol. To get better performance over two-phase locking, we either need added information about the transactions or have to impose some structure or ordering on the set of data.

3.11.4(C) Timestamp-Based Protocols

- In the above two-phase locking protocol, the order followed by pairs of conflicting transactions is decided at execution time. The other method selects the order in advance; a timestamp ordering scheme is used. The system assigns a unique fixed timestamp to each transaction before it starts the execution. If transaction T0 has timestamp TS(T0) and later T1 enters the system with timestamp TS(T1), then TS(T0) < TS(T1).
- The system clock cannot serve as the timestamp if transactions occur on separate systems whose processors do not share a clock; in that case this method will not work.
- A logical counter can be used as the timestamp. In this method, the timestamp of a transaction is the value of the counter when the transaction arrives in the system. The counter is incremented after each new timestamp is assigned.
- The timestamps decide the serializability order : if TS(T0) < TS(T1), then the schedule must be equivalent to a serial schedule in which transaction T0 appears before T1.

Syllabus Topic : Deadlock

3.12 Deadlocks

-> (June 15, Nov. 15, May 16)

Q. What is deadlock ?

- Deadlock occurs in a system when a process waiting for resources never gets the resources, as these are held by other waiting processes. So in this scenario, several processes wait for each other indefinitely.

Syllabus Topic : System Model

3.12.1 System Model of Deadlocks

- We know that processes need different resources in order to complete the execution. So in a multiprogramming environment, many processes may compete for a limited number of resources. In a system, resources are finite. So with a finite number of resources, it is not possible to fulfill the resource requests of all processes.
- When a process requests a resource and if the resource is not available at that time, the process enters a waiting state. In a multiprogramming environment this may happen with many processes. There is a chance that the waiting processes will remain in the same state and never again change state, because the resources they have requested are held by other waiting processes. Such a situation is a deadlock.
- A system consists of a finite number of resources which are partitioned into several resource types, and each resource type may have several instances.

- Suppose the process requests an instance of a resource. If any instance of the resource type is allocated to it, then the request will be fulfilled. If it is not possible to fulfill the request by allocating some other instance of the same type, then the instances are not identical, and the resource type classes have not been defined appropriately.
- As a rule, first a process must request a resource and then use it. After the use of this resource, it should release it. A process may request any number of resources which are necessary to complete its execution.
- Following is the sequence in which a process may utilize a resource :
1. Request : If it is not possible to fulfill the request instantly, then the requesting process must wait until it can get the resource.
2. Use : The process uses the resource to accomplish the task.
3. Release : The process frees the resource after use.

Syllabus Topic : Deadlock Characterization

3.13 Deadlock Characterization

- In deadlock, processes never finish execution as resources are held up by other processes. Due to this unavailability, new processes are prevented from starting the execution. Following are the features that characterize the deadlock.

3.13.1 Conditions

- In deadlock, processes never finish execution as resources are held up by other processes. Due to this unavailability, new processes are prevented from starting the execution.

Necessary Conditions
-> (Dec. 16)

Q. Explain necessary and sufficient conditions for deadlock to occur.
MU - May 2016, Dec. 2016

Four conditions of deadlock
1. Mutual exclusion
2. Hold and wait
3. No preemption
4. Circular wait

Fig. C3.3 : Necessary conditions of deadlock

1. Mutual exclusion
    This condition ensures that a resource is used by a single process at a time and it remains with the using process in non-sharable mode. If the same resource is needed by another process, then the requesting process must be postponed from using this resource. The requesting process will get the resource after it is released by the holding process.
2. Hold and wait
    The process holds minimum one resource and waits to obtain the remaining needed resources that are at present being held by other processes.
3. No preemption
    Preemption of the resource will not be done; the process holding the resource releases it on its own after the completion of its chosen task.
4. Circular wait
    In this condition, a collection of waiting processes, say {P0, P1, ..., Pn}, exists such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

3.13.2 Resource Allocation Graphs

Q. Write note on : Resource Allocation Graph.
MU - June 2015, Nov. 2015, Dec. 2016, 5 Marks

- A directed graph called a system resource-allocation graph can explain the deadlock precisely.
- In this graph the set of vertices V is partitioned into two different types of nodes. These are,
o The set consisting of all the active processes P = {P1, P2, ..., Pn}.
o The set consisting of all resource types R = {R1, R2, ..., Rn}.
- A directed edge from process Pi to resource type Rj (Pi -> Rj) indicates that process Pi has requested an instance of resource type Rj (request edge). If there is an edge in the direction from resource type Rj to process Pi (Rj -> Pi), then it means an instance of resource type Rj has been allocated to process Pi (assignment edge).
- If the resource allocation graph does not contain a cycle, then the processes are not deadlocked.
- If the system contains a single instance of all the resources, then all processes involved in a cycle are deadlocked. So in this case, a cycle in the resource allocation graph is both a necessary and a sufficient condition for deadlock, as each resource type represents a single instance of that resource.
- A cycle in the following graph shows the deadlock situation when resource instances are single. P1 has requested the resource held by P2 and P2 requests the resource held by P1. P3 has requested resource R3.

Fig. 3.13.1 : Resource Allocation Graph

- If more than one instance of each resource type is present in the resource allocation graph, then a cycle indicates that a deadlock may have occurred. In this case a cycle is a necessary condition for the existence of deadlock but not a sufficient one.
- The reason behind the cycle not being the sufficient condition is that a process can release the held resource although the cycle exists. This released resource can then be obtained by a waiting process, so the cycle can be broken. Hence the system need not be in a deadlock state. In case the cycle cannot be broken, the processes involved are deadlocked.

Syllabus Topic : Deadlock Prevention

3.14 Deadlock Prevention

Q. Explain deadlock prevention.
MU - June 2015, 2 Marks
Q. What is difference between deadlock avoidance and deadlock prevention?

- As described above, for a deadlock to occur, the four necessary conditions : mutual exclusion, hold and wait, no preemption and circular wait must hold simultaneously in the system. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a deadlock. So deadlock prevention can be achieved by denying the holding of at least one of the following four conditions.
1. Mutual exclusion        2. Hold and wait
3. No preemption           4. Circular wait

The approach is explained as below.

1. Mutual exclusion
    Non-sharable resources cannot be shared by many processes simultaneously. If resources are non-sharable then the mutual-exclusion condition must hold for them.
    For example, it is not possible to use a printer by many processes at the same time. The printer is non-sharable, but a read-only file is a sharable resource and does not require mutually exclusive access.
    Therefore a sharable resource such as a read-only file cannot be involved in a deadlock. We cannot, however, prevent deadlocks by denying this condition, because some resources are inherently non-sharable, such as the printer.

2. Hold and wait
    Here we have to ensure that hold and wait never occurs in the system. There should be an assurance that a process requests a resource only if it does
hould
- — P 2 requests a resource only if

Scanned by CamScanner

not hold any other resource. Following two protocols can be used.
    - Allocate all the resources needed by a process before it starts the execution. This protocol will keep the allocated resources unused. The reason is that, even though some of the resources are needed by the process only in the last stage of execution, it will hold them from the start of execution.
    - The second protocol allows the process to request resources only when it has released the previously allocated resources. Again the disadvantage of the first protocol appears here. Suppose the important resource for which many processes compete is needed only at the start and end of the execution by this process. If the process initially releases the resource and at the end requests for the same, then it may not be possible to get the same resource immediately. So we must request all resources at the beginning.
    Apart from the limitations of the above two protocols, another limitation can be starvation. In this, processes will keep significant and popular resources with them, forcing other processes which require these resources to wait indefinitely.

3. No Preemption
    We have to ensure that the no-preemption condition does not hold. Following protocols can be used.
    - If a process at present keeps some resources with it and requests another resource that cannot be immediately allocated to it, then all resources at present being held should be preempted.
    - A list of the resources for which the process is waiting is maintained. The preempted resources from the process are included in this list. The process now restarts the execution provided that it can get back its old preempted resources, in addition to the new ones that it is requesting.
    - If the resources needed by the process are available, then allocate them to the process requesting for them. On the other hand, if they are not available, then check whether they are allocated to a process which is itself waiting for additional resources. If it is the case, then withdraw the resources from the waiting process and allot them to the requesting process.
    - The process which requests the resources should wait in case the resources are neither available nor held by another waiting process. In this situation, if some other process makes a request for the resources held by this waiting process, then they should be preempted from the waiting process.

4. Circular wait
    To prevent the occurrence of the circular-wait condition, inflict a total ordering of all resource types. In this case each process should request the resources in an increasing order of the numbers assigned to the resources.
    Consider the collection of resource types R = {R1, R2, ..., Rn}. To each resource type, a unique integer number is assigned. As resources are now recognized by their numbers, it is easy to compare them. It is also easy to know if one resource precedes the other in our ordering. Assignment of numbers to resources is a one-to-one function F : R -> N, where N is the set of natural numbers.
    In order to prevent the deadlock, the following two protocols can be used (see the sketch after this discussion).
    o Each process requests resources in an increasing order of the numbers assigned to the resources.
    o Whenever a process requests an instance of resource type Rj, it should first release any resource Ri such that F(Ri) >= F(Rj).
    If these two protocols are used, the circular-wait condition cannot hold. If process Pi waits for resource Ri which is held by process Pi+1, we get F(Ri) < F(Ri+1) for all i. It implies that F(R0) < F(R1) < F(R2) < ... < F(Rn) < F(R0). By the transitive relation, we get the contradictory statement F(R0) < F(R0). Hence circular wait does not hold.
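- As an illustration of the ordering protocol (this sketch is not from the text; the resource numbers and names are assumptions), every process acquires locks strictly in increasing order of the number assigned to each resource, so a cycle of waits cannot form.

#include <pthread.h>

#define NRES 3                      /* assumed number of resource types        */
pthread_mutex_t resource[NRES] = {  /* resource i carries the number F(i) = i  */
    PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER
};

/* A process that needs two distinct resources a and b always locks the one
   with the smaller number first, so F(first) < F(second) holds for every
   process and the circular-wait condition can never arise.                 */
void acquire_two(int a, int b)
{
    int lo = (a < b) ? a : b;
    int hi = (a < b) ? b : a;
    pthread_mutex_lock(&resource[lo]);
    pthread_mutex_lock(&resource[hi]);
}

void release_two(int a, int b)
{
    pthread_mutex_unlock(&resource[a]);
    pthread_mutex_unlock(&resource[b]);
}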
Syllabus Topic : Deadlock Avoidance

3.15 Deadlock Avoidance

-> (June 15, Nov. 15)

Q. Explain deadlock avoidance.
MU - June 2015, 2 Marks
Q. What is difference between deadlock avoidance and deadlock prevention?
MU - Nov. 2015, 10 Marks
- An alternative method for avoiding deadlocks is to require additional information about how resources will be requested. If the sequence in which processes will request the resources and will release them is known in advance, then we can decide for each request whether or not the process should wait.
- To avoid the circular-wait condition completely, a deadlock-avoidance algorithm inspects the resource allocation state in a dynamic way. Following are the factors which define and describe the resource allocation state.
o The number of available resources in the system.
o The number of allocated resources in the system.
o The maximum demands of the processes.
- Dijkstra's Banker's algorithm is used for deadlock avoidance. Each process declares in advance the maximum number of resources it needs.
- After that, the OS acts like a sincere small-town banker. The OS will allocate the resources to processes only if it has a sufficient number of resources available to fulfill every possible demand. This is considered as a safe state.

Safe State

- If the system is able to allocate resources to each process up to its maximum need in some order and still avoid a deadlock, then the system is in a safe state.
- If a safe sequence of all processes present in the system exists, the system is considered to be in a safe state.
- Sequence <P1, P2, ... Pn> is considered as a safe sequence for the present allocation state if, for each process Px, the resources which Px can still request can be fulfilled by,
o the at present available resources in the system
plus
o the resources held by all the processes that come before Px in the sequence.
- If the needed resources are not immediately available, process Px can wait until the earlier processes complete the execution and free the resources.
- If the system is in a safe state, there can be no deadlock. If the system is in an unsafe state, there is the possibility of deadlock.

Example

Consider a system with 10 disk drives. The maximum needs of the processes at time t0 are as follows.

    Process     Maximum needs
    P0          8
    P1          4
    P2          7

- Total 10 disk drives are available in the system, and <P1, P0, P2> is a safe sequence. Process P0 requires 8 disk drives, process P1 requires at most 4 disk drives, and process P2 may require up to 7 disk drives. At time t0, process P1 is holding 2 disk drives, process P0 is holding 4 disk drives and process P2 is holding 2 disk drives. (Thus 2 disk drives are free in the system.)
- At time t0, the system is in a safe state. The sequence <P1, P0, P2> satisfies the safety condition : P1 can immediately be allocated its disk drives and returns them after completion, so 4 disk drives become available. P0 then gets all the disk drives it needs; after completion it returns them and now 8 disk drives are available in the system. Finally P2 can be allocated all the needed disk drives and returns them after completion. (The system will then have all 10 disk drives available.)
- Fig. 3.15.1 shows the safe, unsafe and deadlock state spaces.

Fig. 3.15.1 : Safe, unsafe, and deadlock state spaces (the deadlock region lies inside the unsafe region; the safe region lies outside it)

- At time t1, if some process requests additional resources (disk drives in our example) and, after the allocation, another process could not be allocated its resources due to unavailability and could not complete its execution, then the system goes into an unsafe state.

3.15.1 Deadlock Avoidance Algorithms

-> (May 16)

- Following algorithms are used in the avoidance of deadlock.

Deadlock Avoidance Algorithms
(A) Resource-Allocation Graph Algorithm
(B) Banker's Algorithm
(C) Resource-Request Algorithm
(D) Safety Algorithm

Fig. C3.4 : Deadlock avoidance algorithms

3.15.1(A) Resource-Allocation Graph Algorithm

- If the system has a resource allocation system with only one instance of each resource type, then only this algorithm is applicable.
- A future request edge in the resource allocation graph is represented by a claim edge (dotted edge).
- The claim edge is transformed to a request edge when a process requests a resource.
- The assignment edge is changed back to a claim edge after a process frees the resource.
- When a requested resource is allocated to the process, the request edge is converted to an assignment edge. If this assignment leads to a cycle in the resource allocation graph, then the system is in an unsafe state and the request is not granted. If not so, then the system is in a safe state.
- After each assignment of a resource, cycle detection is carried out; cycle detection requires O(n²) operations.
- The unsafe state in a resource allocation graph is shown in Fig. 3.15.2.

Fig. 3.15.2 : Resource-allocation graph for deadlock avoidance

3.15.1(B) Banker's Algorithm

Q. Explain the banker's algorithm in detail.
MU - Dec. 2016, 10 Marks

- It is applicable to a resource allocation system with multiple instances of each resource type.
- It is less efficient than the resource-allocation graph algorithm.
- A newly entered process should declare the maximum number of instances of each resource type which it may require.
- The request should not be more than the total number of resources in the system.
- The system checks if allocation of the requested resources will leave the system in a safe state. If it will, the requested resources are allocated.
- If the system determines that the resources cannot be allocated because it will go into an unsafe state, the requesting process should wait until other processes free the resources.

Data structures used in Banker's algorithm
-> (June 15)

Q. Explain data structures used in banker's algorithm.
MU - June 2015, 5 Marks

A[m] : Array A of size m shows the number of available resources of each resource type.
M[n][m] : Two dimensional array M shows the maximum requirement of the resources by each process. M[i][j] = k indicates process Pi can request at the most k instances of resource type Rj.
C[n][m] : Two dimensional array C shows the current allocation status of resources to each process. C[i][j] = k indicates process Pi is currently allocated k instances of resource type Rj.
N[n][m] : Two dimensional array N shows the remaining possible need of each process, i.e. N[i][j] = M[i][j] - C[i][j]. N[i][j] = k indicates process Pi may need additional k instances of resource type Rj.
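- The data structures A, C and N above translate directly into arrays. The sketch below is an illustrative implementation (the numbers of processes and resource types are fixed as small assumed constants); it performs the safety check that the Banker's algorithm runs before granting a request.

#include <stdbool.h>

#define NPROC 5     /* n : number of processes      */
#define NRES  3     /* m : number of resource types */

/* Returns true if some ordering lets every process finish, given the
   currently available vector A, the allocation matrix C and the
   remaining-need matrix N (N[i][j] = M[i][j] - C[i][j]).             */
bool is_safe(int A[NRES], int C[NPROC][NRES], int N[NPROC][NRES])
{
    int  W[NRES];                 /* work vector, starts as a copy of A */
    bool F[NPROC] = { false };    /* F[i] = true once Pi can finish     */

    for (int j = 0; j < NRES; j++)
        W[j] = A[j];

    for (;;) {
        bool progress = false;
        for (int i = 0; i < NPROC; i++) {
            if (F[i]) continue;
            bool can_finish = true;
            for (int j = 0; j < NRES; j++)
                if (N[i][j] > W[j]) { can_finish = false; break; }
            if (can_finish) {
                for (int j = 0; j < NRES; j++)
                    W[j] += C[i][j];   /* Pi finishes and returns what it holds */
                F[i] = true;
                progress = true;
            }
        }
        if (!progress) break;          /* no remaining process can be satisfied */
    }

    for (int i = 0; i < NPROC; i++)
        if (!F[i]) return false;       /* some process can never finish : unsafe */
    return true;
}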

3.15.1(C) Resource-Request Algorithm

- This algorithm decides whether requests for resources can be safely granted so that the system will remain in a safe state.
- Let R[n][m] be the two dimensional array that describes the current outstanding requests of all processes. R[i][j] = k indicates process Pi wants k instances of resource type Rj to accomplish the execution.
1. If R[i][j] <= N[i][j], go to step 2; otherwise, raise an error condition as the process is requesting more resources than it needs.
2. If R[i][j] > A[j], then the process should wait as the needed resources are not available.
3. Otherwise, go to step 4.
4. Allocate the requested resources. The state is altered as :
    A[j] = A[j] - R[i][j]
    C[i][j] = C[i][j] + R[i][j]
    N[i][j] = N[i][j] - R[i][j]
5. Once the resources are allocated in this way, check whether the resulting system state is safe. If it is unsafe, the process must wait and the old resource-allocation state is restored.

3.15.1(D) Safety Algorithm

- This algorithm finds out whether the system is in a safe state or not.
1. Let W be an integer array of length m, initialized to A (the available number of resources). Let F be a boolean array of length n, initialized to false (it determines if a process has finished or not).
2. Find an i such that both :
    F[i] = false
    N[i] <= W
   If no such i exists, go to step 4.
3. W = W + C[i]; F[i] = true; go to step 2.
4. If F[i] = true for all i, then the system is in a safe state, otherwise unsafe.
Run-time complexity : O(m x n²)

- To demonstrate the working of the Banker's algorithm, consider the following snapshot of the system at time t0. Processes are numbered P0 to P4 (total 5 processes).
- Resources are P, Q and R (total 3 resources). Resource type P has 12 instances, resource type Q has 9 instances, and resource type R has 6 instances.

    Process     Current allocation (P, Q, R)     Maximum requirement (P, Q, R)
    P0          3, 0, 1                          5, 2, 1
    P1          0, 2, 1                          3, 2, 2
    P2          2, 0, 0                          3, 2, 2
    P3          2, 1, 0                          7, 3, 1
    P4          4, 0, 2                          5, 0, 3

- With this allocation, (1, 6, 2) instances are currently available in the system. At time t0 the system is in a safe state : the sequence <P2, P1, P0, P4, P3> satisfies the safety condition. In the check below, each process is granted its remaining need N, finishes, and then returns everything it holds, i.e. its maximum requirement M.

Process P2 :
    N <= A : (1, 2, 2) <= (1, 6, 2) is true
    A = A - N : (1, 6, 2) - (1, 2, 2) = (0, 4, 0)
    A = A + M : (0, 4, 0) + (3, 2, 2) = (3, 6, 2) available in the system after P2 finishes.

Process P1 :
    N <= A : (3, 0, 1) <= (3, 6, 2) is true
    A = A - N : (3, 6, 2) - (3, 0, 1) = (0, 6, 1)
    A = A + M : (0, 6, 1) + (3, 2, 2) = (3, 8, 3) available in the system after P1 finishes.

Process P0 :
    N <= A : (2, 2, 0) <= (3, 8, 3) is true
    A = A - N : (3, 8, 3) - (2, 2, 0) = (1, 6, 3)
    A = A + M : (1, 6, 3) + (5, 2, 1) = (6, 8, 4) available in the system after P0 finishes.

Process P4 :
    N <= A : (1, 0, 1) <= (6, 8, 4) is true
    A = A - N : (6, 8, 4) - (1, 0, 1) = (5, 8, 3)
    A = A + M : (5, 8, 3) + (5, 0, 3) = (10, 8, 6) available in the system after P4 finishes.

Process P3 :
    N <= A : (5, 2, 1) <= (10, 8, 6) is true
    A = A - N : (10, 8, 6) - (5, 2, 1) = (5, 6, 5)
    A = A + M : (5, 6, 5) + (7, 3, 1) = (12, 9, 6) available in the system after P3 finishes.

- So the required resources can be allocated to the processes in the sequence <P2, P1, P0, P4, P3>, and the system is in a safe state.
- Suppose at time t1, P2 requests one
jnstsnce of resource type P and two instances of resource (c) Yes, request can be granted immediately* After
type Q, so Request R2 - ( 1 , 2, 0) and it is not greater than granting this, Avaflabte will become ( l t 1, 0, 0).One of
available resources A. the safe sequence is (P0, P2, P3, P l , and P4).
So step 4 of the resource request algorithm indicates Allocation for P l becomes ( l t 4, 2, 0) and Need for P l
that request can be allocated. After allocation of these becomes (0, 3 , 3, 0),
sources, now safety algorithm is executed to determine Example 3.16,2
whether system remains in safe state. If any such sequence
Consider the following snapshot of the system.
exists, then system will be in safe state.
Process Allocation Maximum Requirement
3 16 Solved Problems A B C A B C

Example 3.1 8.1 fa [*11 It I'll P0 0 0 1 1 1 1


rrtnsidor the following snapshot of a /stem :
P1 2 0 0 3 2 3
Allocation Max Available
P2 1 3 2 4 3 1
ABCD ABCD ABCD
P3 1 0 1 0 0 1
P0 0012 0012 1520
P4 0 0 1 ! 3 2 1
P1 1000 1750
2356 Let ABC be the resources with instances of A is 7 , B is 3 and
P2 1354
C has 6 instances.
P3 0632 0652
(I) What is the content of matrix need ?
P4 j 0014 0656
(ii) Is the system in a safe state? Prove it.

With reference to banker's algorithm Solution :

(a) Find need matrix (i) Need = Max - Allocation

(b) Is the system in a safe state?

(c) If a request from process P1 arrives for (0, 4, 2, 0),


can the request be granted immediately ?

Solution :
(a) Need matrix is, ____________
- 1
Need

A B C D
(ii) A = 7, B = 3 a n d C = 6 are total instances of resources.
P0 0 0 0 0
By adding the Columns of Allocation matrix
Pl 0 7 5 0 We get, A = 4 , B = 3 and C = 5,

P2 1 0 0 2 Now Available number of resources = Total - Allocation


• Resource A available = 7 - 4 = 3
P3 0 0 2 0
Resource B available = 3 - 3 = 0
P4 0 6 4 2
Resource C available = 6 - 5 - 1

(b) System is in safe state because many safe sequences By using above available number of resources and
exist such as (TO; P2; P l ; P3; P4)- As avadab'e tt, < 1 . 5 . applying bankers algorithm, sequence
2. 0). either process P0 or « can complete the <P2, P3, P4t PO, P l > is safe sequence.
execution. Once process P3 finishes, tt release °
Hence, system is in safe state.
resources which permit all other existing processe.
execute.

Max
R1 R2 R3
Ri
3 3 6 b Operator
t0 p4 2 2
pO P1 3 4 3 3
2 0
P2 3 4
Consider the An rthe
2
B c P3
0 c
0 0 current allocation a sate atete?
19
0 0 i) CU
request be flT *
o IIt h1°
0 M

0 1 0 ill wou
* (lU) The 5
P0 0 2
0
2 stat®7 , finist
2 0
P1 0 0 * P 1 requ®5 ' (1

0
0 3 Exampte
3 0
P2 1 0 on 2 3
Sotut* ncecPh? ’** * Conside
2 1
P3 0 2
0 0)
0 2 “: d
:,x— is
-
0
P4 instar** 5
caUon column
adding AH° ’
ABC be MaxlPU
RequesUPU = ' "'““Wil
NoW available in system are ( 7 , 7 . 10)

The request of P2 satisfied as av


10) and then of P3 is satisfied, the
at time *1 is in safe state.
Solution : _ q q o)
rpQi<= Available (0 (il) Request of

3
Example -1 6 5

I «>..« 101
Consider follow* An;

.yto P0, Let the P2 request for resources. Using Banker’s algorithm answer the following (a)

Request (P0)< "Available (0 0 0 < = 0 1 0 ) (b’


(ii) What are the content of need matrix ?
Hence request of P2 is satisfied. P2 free the resources
(c
(3 0 3) and Available becomes (3 1 3). pil) Find if system is in safe state? If it is, find
sequence.
Available - Available + Allocation {P2|.
S<
Process Max Allocation
(a
all their requested resources and safe sequence A B C D B C D

exist < P0, P2, P3, P l , P4>. Therefore system is i n safe


PO 6 0 1 2 4 0 0 1
state.
P1 1 7 5 0 1 1 0 0
(ii) If at time t t P2 makes additional request for an instance

of type C then Request [F0]< = Available (0 0 1 < = 0 P2 2 3 5 6 1 2 5 4


1 0) is false and resource C cannot be granted to P2.
P3 1 6 5 3 0 6 3 3
System will be in deadlock state.
P4 1 6 5 6 0 2 1 2
Example 3,164 " -------------

It is proposed to use bankers algorithm for handling Solution :


deadlock Totai number of resources available for allocation «) A-9; B-13 ; C-10 ; D - l l
fe 7, 7 and 10 respectively. The vocation (H) Need[i, j] = Max[i j] - Allocation jl 50
state is shown as below.
Need matrix is

Scanned by CamScanner
Operating System (MU - Ssm 4 . n")
3-23
Process Coordination
A B C D
PO 2 0 1 1 M U - N o v . 2 0 1 5 . 1 0 M a i ks
Pl- 0 6 5 0 Consider the snap of the system.
P2 I 1 0 2 Process Allocation Max Available
P3 1 0 2 0
P4 1 4 4 4
ABCD abcd | ABCD
P0 0212 0322 2532
(jii) The system is in a safe state as the processes can be
finished in the sequence PO, P2, P4, PI and P3, P1 1102 2752

Example 3.16.6 M U - J u n e 2015. N o v . 2015. 10 M a r k s P2 2254 2376

Consider the following snapshot of th© system P3 0 3 12 1642

Allocation Max P4 2414 J _ _3 6 5 8


Available
Answer the following questions using bankafs algorithm.
A B C A B C A B C
(I) What is the content of Matrix need?
CO

CM

P0 0 1 0 7 5 3 (ii) I s the system in safe state?


*


(iii) If a request from process PI arrives for (1 .3,2,1) can
Pt *2 0 0 3 2 2
the request be granted immediately ?
CM
o

cv
o
ft

Solution :

P3 2 1 1 2 2 2 (I) Needfi, j ] = Max[i jl - Allocation!. j]

So content of Need matrix is


P4 0 0 2 4 3 3
A B c D
Answer following questions using banker's algorithm P0 0 I 1 0
(a) What is the content of need matrix? Pl I 6 5 o
(b) Is the system in a safe state? P2 0 I 2 2

(c) If a request from process P1 arrives for (1, 0, 2), can P3 1 3 3 0


the request be granted immediately ? P4 1 2 4 4

Solution :
(ii) Yes, System is safe and safe sequence (PO, P2, P l ,
(a) Need = Max- Allocation P4, P3) exist.
(iii) As (3, 3, 4, 8 ) remains available in system, P l can be
granted (1,3,2,1) immediately
PO
Sylhbus°Top7c : Deadlock Detection
Pl

P2 3.17 Deadlock Detection

P3

P4 MU - June 2015. 2 Marks

(b) System is in safe state because safe sequence exist and


If the system does not make use of deadlock prevention
it is ( P l ; P3; P4; P2;PO)-
or deadlock detection algorithm, then system will observe
(c) Yes, request can be granted immediately The the
is Request <= Available that is ( 1, 0, ) <- system should have .

3-24 sSf=s=S=S
7\ c the two dimensional
Ssrn 4 - RI
“’ C req
““‘’ °f
r«=l. svstetnWii ,em cofte want k
Apr /npurgtrngj nrocess Pi 'Stances JpJ
or n 0t
:un«* . iishtheexecutioQ
r «m p ' s
Syl
deadlock i Ajfiorlth - -- ------
the o1 peadlocK Detecgj
3-1®!

nin-tinx cost ot W be a« integ£r °f a


U
A (Available number of reso Urces)

J It n
which « e "* tunJ ... manure* W® Se'
I b ocess finishcd ot not)
w- Single instant* nf « of a wai ,.for (£ X» " lltii tha
t ,en to i
_ *•*»***!* graph. Wait for For all i , i f <*] ’= *
Ur
F[iJ = tr“ e -
F ind an i such that both Tli
frc
F[i] = false // Pi « currently
tv.
deadlock
allocation graph y e(Jges This is ju
W w
R(i]<=
«*““ l allLtion graph Fig- “ o
shown in resource w rj which If no such i exists, go to step 4
MDuree
shows that, process r p
// reclaim the resources of process Pi
is held by process P2. true; Go to ste 3.18
W = W + C(iJ; FM = P2
Pi
r2 If F[i] = false for some i,

then the system i s in a deadlocked state. Mod kill o


Fig. 117*1 * Resource allocution graph F[i ] = false, then process Pi is deadlocked, | acqui
is algor
Above situation in resource allocation graph Run-time complexity : O(m x n ) : |
the i
represented as show
Invocation o f detection algorithm depends on: I proc

y - Frequency of occurring the deadlock. If it is (J


Fig. 3.17.2 : Wait for graph of above resource allocation
graph frequently then algorithm will be called frequemhl

- The number o f processes that will be inflmJ


- In this way system maintains the wait-for graph and if
it contains the cycle, the deadlock exists in the system. deadlock when it occurs. I
By invoking cycle detection algorithm on wait-for
If the request of any process cannot be fulfilled!
graph, the deadlock can be detected. If n is number of
vertices in wait-for graph, n 2 operations are required to there are chances of deadlocks to occur. I
detect the cycle. The detection algorithm i s invoked in followfiil
Many instances of each resource type cases. |

Extreme : Invoke the algorithm every time a


• .« .««. B . « b.» denied
wh
•nvesugates evely «* amply
UMce for
Alternative : Invoke the algorithm at less fre
Pmcesses which remain to be 'he
The intervals :
used are as follows. structures
A[m] ; Array A of Qi? Once per hour
s
available resources. °ws the number of Whenever CPU utilization < 40%

vantage • Cannot determine

Process ’caused’ .

ggT Operating System ( M U - Sem 4 - IT)
3*25
Process Coordination
Syllabus Topic : Recovery from Deadlock

3.18 Recovery from Deadlock
Invocation of the detection algorithm after killing each
process is considerable overhead. We must re-run
algorithm after each kill.
Killing a process is not easy. We should kill only those
processes whose killing will incur minimum cost.
It needs to recover from deadlock when it is detected. Different parameters that need to be consider to ensure
Several options exist after detection algorithm detects the minimum cost arc :
that a deadlock exists in the system. One possibility is
o Priority of the process.
to inform operator and let them decide how to deal with
o How much computation process has finished and
ft manually.
how much more time does it need to finish the
The second option is to allow the system to recover execution.
from the deadlock on its own (automatically). There are
o The number of the resources process has used.
two solutions exist to break a deadlock. One solution is
o The type of resources the process has used.
just to abort one or more processes so that circular wait
will be broken. The second option includes preemption o The number of the resources the process will need
of some of the resources from one or more of the to complete the execution.
processes which are involved in deadlock. o The number of processes needs to be killed.
o Whether the process is interactive or batch.
3,1 8 J Process Termination (Kill a process)
3.18.2 Resource Preemption
When deadlock occurs, the operating system decides to
kill one or more processes. This will reclaim the resources Incrementally preempt the resources from the process
acquired by that killed processes. The deadlock detection and reallocate resources to other processes until the circular
algorithm detects whether deadlock exist or not after killing wait is broken. ________________________
the process. Following two methods are used for killing the Resource Preemption I
process :
u. 1 . Selecting a victim
Two methods used for
killing the process
M 2. Rollback

1 , Kill all deadlocked processes L» 3. Starvation *

Fig. C3.6 : Resource preemption

Fig. C3.5 : Two methods used for killing process 1. Selecting a victim
Which process and which resources should be selected
-Fl. Kill all deadlocked processes
for preemption to minimize the cost? The process
- This method is simple and effective to eliminate the which has finished almost all computation and only
deadlock and clearly will break the deadlock cycle, but few amount of computation is remained should not be
at a huge cost. selected as a victim.
- If all these killed processes have been performed die 2, Rollback
computation for longer period of time, then the part y
After preempting the resources from the particular
computed result would be wasted. Recomputation will
process, it cannot continue the execution as needed
be done and is veiy costly.
resources are preempted. It is required to rollback the
2. Kill one process at a time
time and process to safe state and restarts the execution from
In this method, one process i s killed whether that state. As safe state is difficult to define, total
detection algorithm is in v ° e t0
rollback will be the simple solution. _____________
deadlock is still exist or not in die system

3 2

flfTI

— ____ 1 K, Se ' edn,a,,y


riterproWem
Explain «ad - 'S
Q.
It may W ' jesouttes-
the (Refer
times io p««P‘ s)

3.19 Exam pa . an dRey!?*3 u


- ' explain < 9 philosopher
Rsfer section MW
problem
Marfa)

r <DeCJ0 1
svHabutToP ’ ’\
Explain 0Wn9 philosopher
Q. 5 Q.
SIES * semaphores. (Refer sectron 3.9.4) *1
. Toole : Monitors
I j Syllabus Topi
Q.
with example- YYtrat is monitor? How it is used to ach .
„ . — — «“ Q.
exclusion? Explain. (Refer section 3. l0) >

Syllabus Topic : Atomic Transaction,


Syllabus <°P'

Explain critical section probl (Dec. 2014) What is atomic transactions? Explain.
Q,
a (Refer section 3,11)
(Rg fer section 3.4) (5 Marks)
intri With its different
a. Q.
(Refer section 3.11. 4(A))
What is mutual exclusion?
a. a. Explain two-phase locking protocol.
<»*-*“ 3.9(5

Syllabus Topic : Deadlocks


Q. What is mutual exclusion ? Explain its significance.

(Refer section 3.5) (5 Martis) (May 201 6) a. What is deadlock ? (Refer section 3.

Syllabus Topic : Peterson’s Solution (June 2015, Nov. 2015, May 2016,Dec.?

Q. Explain Peterson solution to mutual exclusion. Syllabus Topic : Deadlock Characterization


(Refer section 3.6)
Q. Explain necessary and sufficient condh
Q. Give software approache for mutual exclusion.
deadlock to occur. (Refer section 3.13,1)
(Refer section 3.6)
r
(June 2015, Nov. 2015, May 2016, Dec 3
Syllabus Topic : Synchronization Hardware
I
Q* Write note on : Resource Allocation Graph.
Explain hardware solution to mutual exclusion.

(Refer section 37) (Refer section 3. 13.2) ( 5 Marks) (Dec.U


Syllabus Topic : Semaphores
Syllabus Topic : Deadlock Prevention

° WhatBsemaphorelflefersectionS
Explain deadlock prevention.
*■ Syllabus Topic ; Classic Prnh teffl . of
0 ems (Refer section 3. 14) (2 Marks) (June*
Synchronization of
Q. * Explain an algorithm for ,„ What is difference between deadlock avoids
arorll

deadlock prevention?
Q. Explain how Realtor r section 3.14) (WMarks) (W*
, ,bU,T
, " O«k»k »«*!.«.
1 Ew a
“" ~<ta.»ad.„c..
__ n«20l5)
(Refer . ..

gT Operating i System(MU - Sem 4 - IT) 3-27 Process Coordination

What is difference between deadlock avoidance and


Q. Example 3.1 6.6 (10 Marks) (June 2015, Nov. 2015)
deadlock prevention?
Example 3.16.7 (10 Marks) (Nov. 2015)
(Refer section 3.15) { I Q Marks) (Nov. 2015)
Syllabus Topic : Deadlock Detection
Suggest techniques to avoid deadlock.
Q.
(Refer section 3.15.1) ( 1 Mark) Q. Explain deadlock detection.
(May 2016, Dec. 2016)
(Refer section 3. 17) (2 Marks) (June 2017)
Explain the banker's algorithm in detail. Algorithms".
Q. Write note on “Deadlock Detection
(Refer section 3. 15. 1(B)) (1 0 Marks) (Dec. 2016)
(Refer section 3. 17))
q Explain data structures used in banker's algorithm. Syllabus Topic : Recovery from Deadlock
(Refer section 3.15.1(B)) (5 Marks) (June 201 5)
a. Explain deadlock recovery in detail
Example 3.16.1(10 Marks) (Dec. 2014) (Refer section 3.18) ----------□qq’

Module iv

CHAPTER
Memory Management

““""X St
Memory, Other Considerations. - ----------------
0, e inmemory for other processes.

Svllabu* Topic : Memory Management Strategies


Rerkqrouno , New processes are required to be loaded
memory.
Memory Management Strategies [f available main memory is not sufficient to hold ajj
processes, swapping between main memory
Background _ secondary memory is done.
Q. What are different requirements for memory Memory managers move processes back and
management ? Explain. _____ _ _
between memory and disk during execution.
computer
Memory is an important resource of the So it is required that operating system should have
the operating
system that needs to be managed by some strategy for the management of memory.
system.
Monoprogramming
To execute the program, user needs to keep rhe
program in main memory. The main memory is
In early computer systems and early microcomputa
volatile.
operating systems, mono-programming concept was
Therefore a user needs to store his program in some
used.
secondary storage which is non volatile.
Only one program was residing in main memory al
It is required to have process code, stack, heap
given point of time.
(dynamically-allocated structures), and data (variables)
to be in primary memory. Operating system was residing in some portion of

Therefore memory must be allocated to every process.


memory and other portion was fully devoted to li*
The management of main memory is required to single executing process.
support for multiprogramming. This system was simple to
design but was 001
Many executables processes exist in main memory at supporting multiprogramming. In this appro#*1
any given time. relocation was needed as program could be dwa}5
- Different processes in main memory have different loaded at the same location.
address space.
User program
- Memory manager is the module of the operating system
responsible for managing memory.
- Programs after completion of execution move out of Operating
system
main memory or processes suspended for completion
:
Monoprogramming

Scanned by CamScanner
Operating System (MU - Sem 4 - IT)
4-2 Memory Man agement

41,2 Multiprogramming ~ It is required to have process in main memory for


execution and hence physical memory should
- Multiprogramming is required to support multiple
processes simultaneously. Since multiple processes are accommodate the process.
resident in memory at the same time, it increases - The process size can be larger Chan the amount of
processor utilization if the processes are I/O bound, memory allocated to it.
CPU does not remain idle. - The overlays allows to have needed modules and data
- CPU utilization increases as it executes multiple in main memory at any given time.
processes on time sharing basis. - The data and modules which are not required for
_ Multiprogramming gives illusion of running multiple execution at that time are swapped out of memory.
processes at once and provides users with interactive - Next time, the needed modules are loaded into space
response to processes. which is freed now by swapping the modules not
required for execution.
4.1.3 Dynamic Loading
- An overlay handler exists in another area of memory
- Dynamic loading ensure the better memory space and is responsible for swapping overlay pages or
utilization. Unless and until called, routine is not loaded overlay segments.
in main memory in dynamic loading. ~ Overlays are implemented by user; no special support
- All routines remain stored on disk in a relocatable load needed from operating system, programming design of
format. overlay structure is complex.
- The main program is loaded into main memory and is Consider the example of two pass assembler. As we
executed. know that pass 2 is executed after pass 1, it is not
- The advantage of dynamic loading is that, memory necessary to keep both modules of pass 1 and pass 2 in
space is utilized only for the routines which are memory simultaneously.
currently need to be executed and other unused routines During pass 1 symbol table is constructed and pass 2
reside on disk. generates machine-language code.

- If user designs the program which uses many library Symbol table and common routines are required in both
routines then operating system will itself support for passes. Consider the memory requirement of pass 1
dynamic loading. No special support require from module, pass 2 module, symbol table and common
operating system. routines.

4.14 Overlays Pass, 1 Module 100 K


Pass 2 Module 90 K
In the early days, as Large programs were too large to fit
into their partitions, programmers created overlays to Symbol Table 30 K
solve this problem.
Common 30 K
Rather than loading an entire program into memory at Routines
once, we can divide it into those modules that are
Total 250 K
always required and those that are required only at
certain times during program execution. Total 250 K memory is needed to run the assembler. If
Overlays involve moving data and program segments only 200 K memory is available then overlays can be
in and out of memory which helps for reusing the area defined as overlay A and overlay B.
in main memory. Overlay A contains pass 1 module, symbol table and
Overlays were physical divisions of the program that common routines. Overlay B contains pass 2 module,
werc symbol table and common routines.
established by the programmer. To complete the
execution, it is required chat, entire program and data of Overlay driver of 20 K is loaded in memory. Initially
a overlay A is loaded in memory which requires 160 K of
Process must be in physical memory.
memory.

Memory Ma n

4*3
.Minn System (MU allocat
O
and logical adtte of i ogical 4
in o f ‘
Once overlay A finishes execution, overlay address JP®* a physical address J*'
requir
generated ny correspond ’
0VM lay B.B Overlay
overlay in pace of
B requires IM K o . sto g
collection ot an k r
jf we
the8el 1 f instruct ons d to after
4.1.5 Relocation _______— — — - 1 °bmdirtg » ’ “ "tens.1
swap
- Xe t at rfe time or load
a —
- ---------------------- Turnon
-------------------
as follows.
a
rX.cal addresses art same. If the same blrw
Consider the pactions of m e > « W _ «curs at run time then logical and physreal add It if
effec
100K (Operating system) are different. • » x I
First partition-
Second partition: 100K ™ location register (base register) contains
- IX) Physical address (5200) can be calcul * if sc
Third partition: 20dK prio’
Zws. Consider logical address 200 of
Founh partition: 300K * disk
program.
Consider veiy first instn ‘“ " fci file created by Aft £
Physical address (5200) = logical address (offtet) 1
absolute address 100 within the binary + content of base register (5000)
mat
Manning from virtual to physical addresses is dorei,
UlXioads the program in first partition at address Sw
the memory management unit (MMU) at nin time. Th
KJOK. then instruction will find OS at absolute address dat
user program deals with logical addresses. Tbt
100. cat
memory-mapping hardware converts logical address
If instead of first partition, loader loads the program in foi
second partition, the instruction will jump to address Th
Memory protection is done by using two registers,
100K+100. mi
usually a base and a limit, as shown in Fig. 4.1.2. The
For third partition instruction will jump to address base register holds the smallest legal physical memory T1
200K+100 and so on. This problem is called as address (in our example 5000). The limit register gives pf
relocation problem. the size of the range of physical addresses. For I
The solution to this problem is to in fact modify the example, if the base register holds 5000 and the limit I b;
instructions as the loading of program is done in register is 1000, then the program can legally access all I
memory. addresses from 5000 through 6000. I
il
If programs are loaded in first partition then 100K is r
added to each address, programs loaded into second t
1000
have 200K added to addresses, and so on.
]
To carry out relocation at the time of loading like this, 2000

the binary program must have linker included list or


3000
bitmap indicating which addresses are to be relocated
and which are opcodes, constants, or other items not 4000
needed to relocate.
5000 5000 Base
4.1 .6 Logical and Physical Address Space Si
6000 ------- 1000 Limit
Q. With the help of exempts, dearly differentiate
Fig. 4.1.2 : Memory Protection with base and limit
between logical and physical address space.
registers
An address generated by CPU is called logical address
ot virtual address and an address generated by Memory ------- Syllabus Topic : Swapping
Management Unit (MMU) is called physical address.
Q
4,1 7
If size of program written by user i s 1000 K and - Swapping
loaded in main memory from address 5000 to 6000. Q
Systems in which it js „ mov e
Presses between disk and main memory

Scanned by CamScanner
■stem (MU - Sem 4 - IT)
4-4
-   In a time sharing operating system, the system's memory is allocated to multiple processes.
-   In order to keep sufficient memory free, it will be required to swap data between memory and secondary storage (disk).
-   If we use the round robin CPU scheduling algorithm, then after expiry of the time quantum the memory manager swaps out the finished process and swaps in another process.
-   Swapping increases the degree of multiprogramming and ensures effective management of memory among many processes.
-   If scheduling is priority based, then on arrival of a high priority process a low priority process is swapped out to disk.
-   After completion of the high priority process, the low priority process is swapped back into memory by the memory manager.
-   Swapping needs a backing store to hold the swapped data. The backing store is usually a fast disk, with capacity enough to store copies of all memory images for all users.
-   This backing store must offer direct access to these memory images.
-   The operating system maintains a ready queue for the processes which are ready to execute.
-   These processes have their memory images on the backing store or in memory.
-   The CPU scheduler calls the dispatcher program when it decides to run a process. If the dispatcher does not find the next process in the queue in main memory, it swaps in the desired process.
-   If no memory region is free to bring in the process from the backing store, the dispatcher swaps out a process currently in memory.
-   It then reloads registers and transfers control to the selected process. The context-switch time in such a swapping system is quite high.

Syllabus Topic : Contiguous Memory Allocation

4.2  Contiguous Memory Allocation

Q.  Explain memory management with fixed partitions. What is internal fragmentation and external fragmentation? Explain with example.

-   In this scheme memory is divided into a number of fixed-sized partitions. Each partition may contain exactly one process.
-   The number of partitions and their sizes would be defined by the system administrator when the system starts up.
-   The degree of multiprogramming will be high if the number of partitions created is more.
-   A process selected from the input queue is always placed into a free partition. When the executing process finishes execution and terminates, the partition becomes free for another process.
-   The operating system maintains a table to keep track of free and occupied partitions.
-   Partitions can be of fixed size or of variable size. Fixed sized partitions are relatively simple to implement. The problems with fixed size partitions are :
-   If the program size is larger than the partition size, the program cannot be stored in the partition.
-   The second problem is internal fragmentation. If the partition size is larger than the program size, some space of the partition will remain unused within that partition.
-   Fig. 4.2.1 shows fixed size partitions (equal partitions of 8 M each, with the first region occupied by the OS).

Fig. 4.2.1 : Fixed size partitions

-   As the name suggests, in a variable-sized partitioning scheme, partitions of different sizes are available; memory is divided into partitions of unequal sizes.
-   The best-fit strategy is used to allocate the partitions to the processes. In best-fit, that partition is chosen to fit the process so that the least amount of memory is wasted.
-   It means the smallest partition big enough to accommodate the process is chosen.
-   A queue is organized for each size of partition, where processes wait for that partition.
-   Partitions are allocated to the processes with the best-fit policy. Best-fit ensures allocation of the partition that wastes the least space. Therefore variable partitions minimize internal fragmentation.

-   However, such an allocation may reduce performance, as processes would queue up for a partition of one particular size even though partitions of other sizes are free; for example, processes may wait in the queue of the 8 M partition while other partitions stand empty.
-   Paging and segmentation, discussed later in this chapter, address these drawbacks of the contiguous memory allocation technique: paging is a solution to the external fragmentation problem, while segmentation minimizes internal fragmentation.
-   Fig. 4.2.2 shows variable size partitions (unequal partitions of 1 M, 2 M, 4 M and 8 M, with the OS occupying the first region).

Fig. 4.2.2 : Variable size partitions

Difference between internal fragmentation and external fragmentation

Q.  Compare internal and external fragmentation.

Sr. No. | Parameter | Internal Fragmentation | External Fragmentation
1. | Partitioning technique | Internal fragmentation takes place in the fixed partitioning technique. | External fragmentation takes place in the variable partitioning technique.
2. | Definition | If the partition size is larger than the program size, some space of the partition remains unused within that partition; this is called internal fragmentation. | In both fixed and variable size partitions it may happen that partitions remain empty even though processes are waiting in the queue; this leads to external fragmentation.

-   In the dynamic partitioning technique, a variable number of partitions of variable size is determined dynamically.
-   Initially, the whole of user memory is considered as one large block of available memory, a hole.
-   When a process arrives and needs memory, a hole large enough to fit this process is searched for.
-   This technique ensures efficient use of main memory; there is no internal fragmentation.
-   However, if many small holes are scattered, they cannot be allocated to one large process. Consider the following example.
-   Initially 64 MB of main memory is available. 8 MB is allocated to the operating system as shown in Fig. 4.2.3(a).
-   For user processes 56 MB of memory is available and is considered as one large single hole.
-   Processes P1, P2 and P3 are loaded, and sufficient memory space can be allocated to these processes, creating a hole of size 4 MB at the end of memory (Figs. 4.2.3(b), (c) and (d)).
-   At some point of time none of the allocated processes is ready.
-   The operating system swaps out P2, and in its place process P4 of size 18 MB is swapped in (Fig. 4.2.3(e)). A hole of 2 MB is created.
-   Again, after some time, none of the processes in main memory is ready and only P2 is in the ready-suspend state.
-   To swap in P2 there is no sufficient space available, so the operating system swaps P1 out and swaps P2 back in, as shown in Fig. 4.2.3(f).
-   This example shows that many small scattered holes are created in memory.

-   Memory cannot be allocated to a process of larger size even though the total size of these holes is more than the size of the process. This is called external fragmentation.

Fig. 4.2.3 : Creation of scattered holes in dynamic partition technique

4.2.3  Compaction

-   The solution to the above discussed problem is compaction.
-   Compaction is a technique to convert many small scattered holes into one large continuous hole. It changes the location of the various processes so as to accumulate the free space in one place.
-   Compaction results in efficient use of memory. Fig. 4.2.4 shows the memory after compaction.
-   Fig. 4.2.4(a) shows the scattered small holes. When processes P2, P4 and P3 are moved up, one large hole is created at the end of memory, as shown in Fig. 4.2.4(b).
-   If instead the processes are moved downward, we get one large hole of 8 MB at the start of user memory. However, in this case 48 MB of processes need to be moved, compared to 28 MB for the arrangement shown in Fig. 4.2.4(b). Fig. 4.2.4(d) shows the 28 MB shifting of processes.
-   Compaction should ensure the moving of processes having minimum total size. Compaction increases the degree of multiprogramming.
-   This is because processes can be loaded into the memory which is made available by combining many scattered holes.
-   If relocation is done at compile time (static), compaction is not possible.
-   It is possible only if relocation is dynamic and done at execution time.

Fig. 4.2.4 : Compaction

4.2.4  Memory Allocation Strategies
-> (May 2016)

Q.  Discuss partition selection algorithms in brief.  (MU - May 2016, 4 Marks)
Q.  Explain different memory allocation strategies.

Memory allocation strategies :
1.  First-fit
2.  Best-fit
3.  Worst-fit

Fig. C4.1 : Memory allocation strategies

1.  First-fit

-   In this strategy of memory allocation, the first hole that is big enough to hold the process is allocated.
-   Searching can start either at the beginning of the set of holes or at the location where the previous search stopped. Searching ends as soon as a free hole that is large enough is found.

2.  Best-fit

-   In this strategy of memory allocation, the smallest hole that is big enough to hold the process is allocated.
-   Unless the list of holes is ordered by size, the entire list must be searched. This strategy produces the smallest leftover hole.

3.  Worst-fit

-   In this strategy of memory allocation, the largest hole gets allocated to the incoming process. Again, if the list of holes is not kept in sorted order of size then the complete list needs to be searched. This approach leads to creation of the largest leftover hole, which can be used again to allocate the next incoming process.

Example 4.2.1

Given memory partitions of 100 K, 500 K, 200 K, 300 K and 600 K (in order), how would each of the First-fit, Best-fit and Worst-fit algorithms place the processes of 212 K, 417 K, 112 K and 426 K (in order)? Which algorithm makes the most efficient use of memory?

Solution :

First-fit
-   212 K is put in the 500 K partition.
-   417 K is put in the 600 K partition.
-   112 K is put in the 288 K partition (new partition 288 K = 500 K - 212 K).
-   426 K must wait.

Best-fit
-   212 K is put in the 300 K partition.
-   417 K is put in the 500 K partition.
-   112 K is put in the 200 K partition.
-   426 K is put in the 600 K partition.

Worst-fit
-   212 K is put in the 600 K partition.
-   417 K is put in the 500 K partition.
-   112 K is put in the 388 K partition (600 K - 212 K).
-   426 K must wait.

In this example, Best-fit turns out to be the best.

Example 4.2.2  (MU - May 2016)

Discuss partition selection algorithms in brief. Given memory partitions of 150 K, 500 K, 200 K, 300 K and 550 K (in order), how would each of the first-fit, best-fit and worst-fit algorithms place the processes of 220 K, 430 K, 110 K and 425 K (in order)? Which algorithm makes the most efficient use of memory?

Solution :

First-fit
-   220 K is put in the 500 K partition.
-   430 K is put in the 550 K partition.
-   110 K is put in the 150 K partition.
-   425 K must wait.

Best-fit
-   220 K is put in the 300 K partition.
-   430 K is put in the 500 K partition.
-   110 K is put in the 150 K partition.
-   425 K is put in the 550 K partition.

Worst-fit
-   220 K is put in the 550 K partition.
-   430 K is put in the 500 K partition.
-   110 K is put in the 330 K partition (550 K - 220 K).
-   425 K must wait.

In this example, Best-fit turns out to be the best.
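The three placement strategies can also be checked with a short simulation. The following Python sketch is illustrative only (the function names place and simulate are not from the text); it treats the partitions as a list of holes and splits a hole when a process is placed, which reproduces the placements of Example 4.2.1.

```python
# A minimal sketch of first-fit, best-fit and worst-fit placement.
# Holes are sizes (in K); a placed process shrinks its hole, leaving the remainder.

def place(holes, request, policy):
    """Return the index of the chosen hole, or None if the request must wait."""
    candidates = [i for i, h in enumerate(holes) if h >= request]
    if not candidates:
        return None
    if policy == "first":
        return candidates[0]                            # first hole big enough
    if policy == "best":
        return min(candidates, key=lambda i: holes[i])  # smallest hole big enough
    return max(candidates, key=lambda i: holes[i])      # worst-fit: largest hole

def simulate(partitions, processes, policy):
    holes = list(partitions)
    for p in processes:
        i = place(holes, p, policy)
        if i is None:
            print(f"{policy:5}: {p} K must wait")
        else:
            print(f"{policy:5}: {p} K put in {holes[i]} K hole")
            holes[i] -= p                               # the leftover becomes a smaller hole

if __name__ == "__main__":
    # Data of Example 4.2.1
    for policy in ("first", "best", "worst"):
        simulate([100, 500, 200, 300, 600], [212, 417, 112, 426], policy)
```

Running the same function on the partitions and processes of Example 4.2.2 shows that only best-fit places all four processes, which is why it is the most efficient there as well.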


Syllabus Topic : Paging

4.3  Paging
-> (June 15)

Q.  What is paging ?  (MU - June 2015, 5 Marks)

-   Paging is a memory management technique which permits the physical address space of a process to be noncontiguous.
-   In one sense, the paging mechanism is similar to reading a book. When we read a book, we open only the current page that we want to read.

-   The other pages are not visible to us at that time. In the same manner, even if we have a large program available, the processor needs only a small set of instructions to execute at a time, and all the instructions which the processor needs to execute are within a small proximity of each other.
-   This proximity is like a page which contains all the statements which we are currently reading.
-   In this way paging allows us to keep just the parts of a process that we are using in memory and the rest on the disk.

4.3.1  Basic Operation

Q.  What is a Page Table? Explain the translation of Virtual Address to Physical Address in Paging with example.

-   The problem of external fragmentation is avoided by using paging. Paging is implemented by integrating the paging hardware and the operating system.
-   In this technique, logical memory is divided into blocks of the same size called pages.
-   The physical memory is divided into fixed-sized blocks called frames (the size is a power of 2, between 512 bytes and 8192 bytes; larger sizes are also possible in practice).
-   The page size and frame size are equal. Initially all pages remain on secondary storage (backing store).
-   When a process needs to be executed, its pages are loaded into any available memory frames from the secondary storage.
-   The following basic operations are done in paging :
-   The CPU generates a logical address and it is divided into two parts : a page number (p) and a page offset (d).
-   The page number is used as an index into a page table.
-   The base address of each page in physical memory is maintained by the page table.
-   The combination of the base address with the page offset defines the physical memory address that is sent to the memory unit.
-   The physical location of the data in memory is therefore at offset d in page frame f.
-   Because of paging, the user program sees the memory as one single contiguous space; it gets the illusion that memory contains only one program.
-   But the user program is actually spread throughout main memory (physical memory). The logical addresses are translated into physical addresses.
-   Fig. 4.3.2 shows the operation of the paging hardware.

Fig. 4.3.2 : Paging hardware

-   Consider the example of Fig. 4.3.3. Let the page size be 4 K; logical memory consists of a few pages, and the page table maps each logical page number to a frame number in the 32 K physical memory.

Fig. 4.3.3 : Paging example for a 32 K memory with 4 K pages
4-9 '1 . memory is presented with an item
FT Operating ------- - ------ The associauve m at same time. If there jj
“TZoil frame nZbeTTpage 1 is in frame numbe compared with E i n g value field is returned.

7 PageOisin frame number! >* Tme ure is expensive, the search


. Although tn , n thjs way n; very faR
mechanism suppo nufflber rf entries.
mapped to frame 7 . So phy offset 15 2 ,
Normally. TL» M Md 1,024.
offset 0). Logical address 10 B i n page an ranging the numbers tie
Page 2 is mapped to frame 2.
So physical address 10 = ((2x 4) + offset ).
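The translation rule used in this example — divide the logical address by the page size to get a page number and an offset, look up the frame for that page, and add the offset to the frame's base — can be sketched in a few lines. The page-table entries other than page 2 → frame 2, and the function name translate, are illustrative assumptions rather than values taken from the figure.

```python
# Minimal sketch of logical-to-physical address translation in paging.
PAGE_SIZE = 4                            # page size of the Fig. 4.3.3 example
page_table = {0: 5, 1: 6, 2: 2, 3: 7}    # page number -> frame number (illustrative)

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)  # split into page number and offset
    frame = page_table[page]                           # index the page table
    return frame * PAGE_SIZE + offset                  # frame base + offset

# Logical address 10 lies in page 2 with offset 2; page 2 maps to frame 2,
# so the physical address is 2 * 4 + 2 = 10.
print(translate(10))
```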
431

4.3.2 Memory Protection and Sharing


on performance.
the frame number. So for the Q.
Pa8 Z?i - June 2015. Nov. jOl DeoW
■ ' ’aW 'fTp
reference
n
ie
of the page, page table is verified to get
correct frame number. —7- portant factor to decide the perform
Pforectiou bit is associated with each page to denote the
Xe size is
2dL in many numbers of pages leading to have a
to write read only page, a hardware P g
large page table.
operating system.
On some machine, it is required to ioad the page table
- A legal page remains in logical address space of the
in hardware register every time when context occurs.
process and in page table this page is marked with vahd
Hence more time will be required to load the larger sue
(v) bit. Otherwise page is illegal and marked with
page table due to small size of the pages.
invalid (i) bit
More number of transfers to and from disk will be
- Paging also provides advantages of sharing of common
code. The re-entrant code cannot be modified. If many
required due to more number of page faults.
users share this code, only one copy needs to be kept in Hence more time will be elapsed in seek and rotational
main memory. delay. If page size is large then last page will remain
- The page table of different user points to same physical somewhat empty leading to internal fragmentation. Due
memory (frames). In this case data used by different to large size of pages, most of the unused part of
users can be different so data pages are mapped io program can remain in memory.
different frames.
The size of the memory divided by the page size is
By simply having page table entries for different equal to the number of frames. If the size of page is
processes point to the page frame, the operating system
increased then the number of frame is decreases.
can create regions of memory that are shared by two or
more processes.
Having a fewer frames will increase the number of
page faults because of the lower freedom y
4.3.3 Translation Look aside Buffer replacement choice.
In paging, in order to access a byte first page-table Large pages would also waste space by Intemw
entry needed to be accessed and then byte is accessed. Fragmentation.
Therefore, two memory accesses are needed to be On the other hand, a larger page-size would draw in
performed. It results in slowing down memory access by a more memory per fault; so the number of fault tnay
factor of 2. This delay would be unbearable under most decrease if there is limited contention.
situations.
Larger pages also reduce the number of TLB misses-
- The above problem can be resolved by using translation
look-aside buffer which is fast searching hardware 4-3.5 Hardware Support for Paging
cache. The TLB is associative, high-speed memory.
— _____ _________ (De<>- 16)

ding System (MU - Sem 4 - IT)
4-10 Memory Management
Each operating system has its own methods for storing
Solution :
page tables.
TLB hit ratio, RAM access time t and TLB access time
Most allocate a page table for each process. A pointer T is given in problem.
to the pa£ e ublc 1S stored wi
tb the other register values
(1) Effective memory access with TLB (Ea)
(like the instruction counter) in the process contra!
Ea = {(TLB hit ratio) x ( T + t)) + (1 -TLB hit ratio) x ( 2 T + t)
block.
= (0.9 x (100 + 20) + ( 1 - 0 . 9 ) x ( ( 2 x 1 0 0 ) + 20)
When the dispatcher is told to start a process, it must = 130 ns.
the user registers and define the correct (2) Effective memory access without TLB (Ewt)
hardware page-tabie values from the stored user page Ewt . = 2T = 200 ns
table. (3) Reduction in effective access time*
, The hardware implementation of the page table can be = (Ewt - Ea) x (T/Ewt)
done in several ways. = (200 - 130) x (100/200) = 35 %

, In the simplest case, the page table is implemented as a


Syllabus Topic : Structure of the Page Table
set of dedicated registers. These registers should be
built with very high-speed logic to make the paging
4.4 Structure of Page Tables _____
address translation efficient.

, Every access to memory must go through the paging


map, so efficiency is a major consideration. Q. Discuss various techniques for structuring page
- The CPU dispatcher reloads these registers, just as it tables. MU - Dec. 2014, 10 M a r k s
reloads the other registers. Instructions i<£ load or
Following are the most common techniques for
modify the page-tabk registers are, of course,
structuring the page table.
privileged, so that only the

- The use of registers for the page table is satisfactory if Techniques for Structure
of Page Table*
the page table is reasonably small (for example,
256 entries). 1 . Hierarchical Paging
” Most contemporary computers, however, allow the
2. Hashed Page Table
page table to be very large (for example, 1 million
entries).
3. Inverted Page Table
For these machines, the use of fast registers to
implement the page table is not feasible. Rather, the Fig, C4.2 : Techniques for structuring page tables
P®ge table is kept in main memory, and a page-table
4.4.1 Hierarchical Paging
base register (PTBR) points to the page table.

Changing page tables requires changing only this one If the logical address space is very large such as 2 33 or

register, substantially reducing context- switch time.The 2m. the page table itself becomes extremely large.
Problem with this approach is the time required to - In this case each process may require may large
a user memory location. If we want to access physical address space for the page table alone.
Nation /, we must first index into the page table, using - So it would not be advantageous to allocate the page
016
value in the PTBR offset by the page number cable contiguously in main memory. Solution to this
E problem is to split the page table into smaller pieces.
**f*4A1
One method to achieve this i s to use a two-level paging
J’*’ Paged system TLB hit ratio is 0.9. Let the RAM access
tb algorithm, in which the page table itself is also paged.
** « 2 0 n s . and TLB access time T be 100 ns. Find out
Consider the system with 32 bit logical address space
' Effective memory access with TLB
with page size 4 K B . A 32 bit logical address is divided
Effective memory access without T L B in :
Reduction in effective access time. o 20 bit page number and

Mei

4-11
Table
System (MU - Sem 4 U} 7T<ashe« PaS* 4
o 12 bit page offset ’ , hc address spaces are larger than 32 bits then %

Since paging of the page tables are done, a ' page table is used in which hash value „ used w
page number is again subdivide page number-
O 10 bit page number and The elements which are hashed at same loe *
' V a M is maintained at each entry in the ha*
o 10 bit page offset to aV id C0UiSi
is divided as folio* 5 : Jt is rey-y
necc3«“j ° “* ° n ' E** w-u
Thus the 32 bit logical address consistsof three fields:
Page offset
page number
d 1. The virtual page number
P2
10 12 The value of the mapped page frame
10
A pointer to the next element in the linl

The working of the algorithm is as follows :


where.
Virtual address contains the virtual page number. -ft,
P, is on index into the outer page table and
virtual page number is hashed into the hash table.
P2 is the displacement within the page of the outer page
table. The comparison of virtual page number and the field |
in the first element in the linked list is performed
The address-translation scheme for this architecture is
shown in following Hg. 4.5.2. As address translation If match occurs then the corresponding page fim
works from the outer page table inward, this scheme is (field 2) is used to form the desired physical address.
also known as a forward-mapped page table. If there is no match then succeeding entries in de
iac*o linked list are searched for a matching virtual w
number. This scheme is shown in Fig. 4.4.4.
- A variant of this method that is good for 64-bit addra
4
Outer IMO* spaces has been proposed.
| I

Pao*tjia*o*
tab* In this, each entry in the hash table refers to a number
r

of pages (such as 16) rather than a single page just Jilt


paging architecture hashed page tables.
Due to this, a single page-table entry can store tv
mappings for many physical-page frames. A gnW
0 page tables are mainly useful for address spaces, wbeff
memory references are non-contiguous and scatic.J
throughout the address space.

Hash

4-4.4 : Hashed Page Table
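A hashed page-table lookup of the kind described above can be sketched as a hash table of small chains, where each chain entry holds the virtual page number and the mapped frame and the chain is searched for a match. The bucket count and names below are illustrative assumptions, not from the text.

```python
# Sketch of a hashed page table: each bucket is a chain of
# (virtual page number, frame number) entries.
NUM_BUCKETS = 16
buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, frame):
    buckets[hash(vpn) % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    for entry_vpn, frame in buckets[hash(vpn) % NUM_BUCKETS]:
        if entry_vpn == vpn:       # match on field 1 -> use field 2 (the frame)
            return frame
    return None                    # no match anywhere in the chain: page fault

insert(0x12345, 7)
print(lookup(0x12345), lookup(0x99999))   # 7 None
```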

Fig. 4.43 : A two-level page-table scheme
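For the 32-bit logical address with 4 KB pages described above, the outer index p1, the inner index p2 and the offset d can be extracted with simple bit operations. The sketch below uses the 10 + 10 + 12 split from the text; the function name is illustrative.

```python
# Splitting a 32-bit logical address for two-level (forward-mapped) paging:
# 10-bit outer page number p1, 10-bit inner page number p2, 12-bit offset d.
def split_two_level(addr):
    d  = addr & 0xFFF            # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into the inner page table
    p1 = (addr >> 22) & 0x3FF    # top 10 bits: index into the outer page table
    return p1, p2, d

print(split_two_level(0x12345678))   # -> (72, 837, 1656)
```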

ing System (MU - Sem 4 -
4-12
Inverted Page Table
Memoiy Management
. Generally the operating sy stem CQ
Page 1 rcfeienCe p,ace
mfercnce into a physical memory address virtual adlZ” '?' - P“t of the

"“"tbens. is given to°X. IStU18 ° f sub<proce “' id ' Pa 8 e "


- Because the tabie is sorted by virtual' of the inv e 2 «™- Searching
pa
epemting system rs capable of computing ZT’ , * e tab ' e ' s performed for a match.
able the associated physical address entrv I. i
and to use that value directly. y ,ocated < offset ' na<Ch ently i tflen
‘ “* P h Xsicai address
PrOdUCed
egai address
illegal Z: access has' been tried. “ -* »
- Tbe disadvantages of this scheme are that each ™

°f "*■”'*> «s‘-
P ge
table may contain millions of entries.

g
m ““ “eissaved
- “““ usc
amounts of physical to sea h . | »'t the amount of time needed
the table when a page reference occur
0 001 physicai
ZZTZZi ??* " “
Th el
tables.
page ' «irc table might need to be searched to get a
C
' s search would consume more time.
- There is one entry for each real
To ease this problem, a hash can be used limit the
i nory in inverted page table . h “
search to one-or at most few-page-table entries.
aMess of the page stored in that real metnory loca(ion

is present. Syllabus Topic : Segmentation


Along with this virtual address, entry also contains the
uformauon about the process containing this page. 4.5 Segmentation
B
““ se ° f this onl
y one Pago table i s in the system, Q- What is segmentation? Explain ft with example. |
and it has only one entry for each page of physical
In segmentation, a user program and its associated data
Page table. can be subdivided into a number of segments. Different

segments of the programs can have different length.

Although there is a maximum segment length, the


length depends on purpose of the segment in the
Physical
address program.

Segments are arbitrarily-sized contiguous ranges of the


address space.

They are a tool for structuring the application. As such,


the programmer or compiler may decide to arrange the
address space using multiple segments.

For example, this may be used to create some data


Page table segments that are shared with other processes, while
F<g- 4.4.5 : Inverted Page Table other segments are not.

°tder to explain this approach, example of the Compiler cart create the different segment for global
variables, code, thread stack, library etc.
page table used in the IBM RT is
ttfonstraied. In segmentation, a program may occupy more than one
£Vifi partition, and these partitions need not be contiguous.
virtuaj addre ss in the system is composed of a
Internal fragmentation is eliminated in segmentation
«. ’ < Pr °cess-id f page-number, offset>.
but, like dynamic partitioning, it suffers from external
fragmentation. In segmentation :
a
pair <process-id. page-number>.
o Logical address it is divided into two parts: a
cany out the work as a address-space segment number (s) and offset (d) into that
segment.

Memory

4-13
noting System (MU - Sem 4 1 m ====g==aBSE ==

Each entry of segnient table contains se8mentbase therefore physical a


o

and segment limit. of s segment lis present in segment table.


Se nt base indicate stamngp c
o X) therefore segment fault and trap to Os.
segment m main memory and segme to calculate physical address

denotes length of the segment.


Se nt number is used as index tq the segme Se
(lv) 3 222 ■ 8 ment 3 iS PrcSen
‘ ta SegnBnt
*
o
¥
222*302 therefore physical address = 498 + 222 =

o X (d) always between 0 and segment limit 0 111 : Segment 0 is present in segment table.
Xwie « will occur indicating a«em P tmg [ 1 1<124 therefore physical address = 330 + 1 [ ] - 44
access beyond the end of the segment Fig. . .
indicates 4 segments of the program having Example 4.5 J
different sizes. Segment table contains base an

limit of each segment.
Segment Base Length
500
Limt
1 Sapnuril
0 800 500 1300 0 219 600
Sfl»-wo sc*- 1200
1
1200 3300
1700
2 wo -1
MOO SefnsntS 1 2300 14
3 1400 1700
Sirt-OM SUM > 1 4 0 0 3100
3 1327 580
3300
S gmnrt T«Ub
Se msri 1
4 96
4500

MOO
J 1952
Segrwit?
5500
What are physical addresses of following logical addresses.
Pftytic*) Memory (i) 0. 430 (ii) 1 , 1 0 (iii) 2, 500 (iv) 3, 4000 (v) 4, 112
Solution :
Fig. 45.1 : Segmentation example
(i) 0, 430 : Segment 0 is present in segment table. No*
Segment 3 is of 1400 bytes long and its base is 1700.
we have to check (offset < limit) or not If offset is less
Reference to byte 89 of sgment 3 is calculated as
than limit then physical address — base + offset
1700 + 89 = 1789.
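The same check — compare the offset with the segment limit and, if it is legal, add the segment base — can be expressed as a short sketch. Only the entry for segment 3 (base 1700, length 1400) is taken from Fig. 4.5.1; the function name and the exception used here are illustrative.

```python
# Sketch of segmentation address translation: the offset must be smaller
# than the segment limit, otherwise the reference traps to the OS.
segment_table = {3: (1700, 1400)}     # segment -> (base, limit); from Fig. 4.5.1

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise Exception("segment fault: trap to OS")
    return base + offset

print(translate(3, 89))               # 1700 + 89 = 1789
```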
430<600 therefore physical address ~ 2 J 4 + 430 - 6 0
Example 4.5.1
(«) 1, 10 : Segment 1 is present in segment table. KK [4
On a system using simple segmentation, compute the
physical address for each of following the logical address If
Therefore physical address = 2300 + 10 = 2310
address generates a segment fault than indicate so. ( 0 2, 500 , Segment 2 is present in segment table. 5®
5
8O (false) therefore segment fault and trap to OS
Segment Base Length
Possible to calculate physical address
0 330 124
1 676 125 an/ 00 1 Se
Bm «nt 3 is present in segment
8
2 99
° fore physical address = 1329 +
1 1 1 = 1729
3 [ 498 302
Scgmcnt 4 is
(I) 0 , 9 9 (II) 2, 7B (Hi) 1 , 2 6 8 (h/) 3, 222 (v)o ; i 1 1 (fJ Present in segment table- I l 3
Solution : faubanduaptoOS
S WetOC
Length given is limit of the segment. ’ physical address.

(i) 0 f 99 : Segment 0 is present in


“Smeot table. N 0Wwe

have to check (offset < limit)


Orn
‘»«« Kt i s l e ss
than limit then physical address = ba 0)
» + offset
99<124 therefore physical address ~ 33O 99
+
’428
M u . Dec.

i

Operating System (MU Se

parameters
Sr-
Ko- Mei Mana
Logical P(J
C generates P
address lo
Paging and seemem'* ’g“ g
’ advantages of both
« lcaI address
and 11
is divided into t Wo mah!
« me effect, ’e ° n ™ ple,nen,e<l
t°8«l*r >0
Pans: a
into two parts: a gment
page number ( p j number
and a page offset d Sharing
inTg “eXr “
segment.
11 5egIwnls div
multiple pages dT “ ‘d« i
>n
2. Table index The
Page number f~— - ■- - ths. wgnl
e re
t
1S 3,50
P’ge lable
maintained
Segment number
is
used as an us
into a p a2e “ ed as index t0 part of the logical address (containing
table. gment table. offsetand r —
«gment number) is 8plit „
Base address number and page offset.
1 uaac
aoaress
Segment
of each page in base Evcr
y segment table (
indicate starting indudcS segmajt basc
Physical memory segment limit -
physical — . and
ess of page table of that
is maintained by address particular segment.
of the
page table. segment in
main
memory and
F
segment base comnn ? *
portents '8 4 6 1 1
Segment
logical add
ress
Number has (SN)
three
denotes length of
the segment. Page Number (PN) and page offset.
Physical Memory , management unit uses segment number as an
'The combination
Segment base -ndex into segment table in order to locate the address
address of base <address
’ ‘ indicate starting
with page offset of page table.
physical address
defines the
of the segment in Then PN belonging to logical address is attached to
physical memory main memory and
h i g h order end of address of page table and it is used as
address that is segment base
sent to the an index to page table to locate entry in page table.
denotes length of
memory unit. the segment
5
Fragmentation to calculate
Suffer through Suffer through physical address.
internal external This frame number is attached to high order end of
fragmentation fragmentation
page offset to obtain physical address.

segmentation with paging.

Segment Pag®
number number

Physical
[CPU SN PN|Pageoff r|}iXM Address
Page table 7 ] Page offset]
address

C3C3

page table Physical


memory
Segment
table
itatlon with paging
Fig. 44.1* Segment

4-15
needed «o execute a process,
-dnn System (MU -8em4J2
W , fl nWry
\ s wapit * '
naM cn,ire bto
’ swapping ““
J f
|1r , >rv a a ttiet1
h . 1T. J -
Syllabus ij.iliiitl t ~' M n 9° — tas,ead
, ° DaJ! ing *ll° w s t0 SWaP
“ ° ‘n y
demand pag for ej[ecut i On at that time.

7 virtual Memory M a n a g e m e n t - . WUdl


failure in accessing the page, if it j,
The,e
n dy Xped to a frame. This wiU cause 4
4.7.1 Virtual Memory lt which is a special type of interrupt.
-> (May 16 )

T , handle this interrupt, disk operation is initi „


Write short note on Vtrtt-I T..J
Memonh
7 ,72016. 5 Mark!
O S handler to read the required page in memory.
a»« this mapping of page to free frame is carried M
There are many eases where entire is
all this operation the process remains bl ed.
is needed, it a c nage becomes available, the blocked
As soon as
In many cases even though entire program
may not all be needed at (he same time. process during disk operation is now unblocked, «
feel,ng
placed on the ready queue.
Application programs always gel the °
availability of contiguous working address space due When scheduler schedules this process, it restarts m
the idea of virtual memory. execution with the instruction that caused the w
. Actually, this working memory can be physically fault.
fragmented and may even overflow on to disk storage.
. This technique makes programming of large in memory This approach o f fetching pages as they an
applications easier and use real physical memory more needed for execution i s called demand paging.
efficiently than those without virtual memory.
Before swapping in the process in memory, the pager
- Although executing process is not entirely present i n program makes sure about which pages will be used
memory, it can complete its execution due to virtual
before the process is swapped out again.
memory technique,
The complete process is not swapped in. Instead, tht
- Virtual memory permits execution of larger size
pager brings only those required pages into memory-
programs although smaller size physical memory is
available. This would restrict'the swapping in of pages in tnemon
It means larger size programs than available physical which are not needed for execution.
memory can complete the execution. o Page 0 0 0
Virtual memory concept separates the user logical
1 Page 1
memory from actual physical memory. 1
2 Page 2
- This separation offers very large virtual memory to 2 2 Page 4
programmers although actual physical 3 Page 3 3 3
Page 4
Syllabus TopIcTpeman agin """ 4 4
5 p
age 5
5 page 5
4.7.2 Demand Paging 5
6 Pages
6
6
7 Page?
16) 7 PageO
Q. Write short note on Demand Paging, 7
J . Dec. 2014. PeT Memory
p
age Table 8
Q, What is demand
9
- A demand paging is paging system
Wll
where pages are brought in main nrem a « Physical M e r < '
fr m
secondary storage on demand. °
" B* tab
le with valid and invalid Pap *

- It saves swap time and the amounTZ **
p
needed. Ysical memory ------ Memory Management
T
sting the1 rv*
_ in demand paging, valid and invalM h -
8 needed
differentiate between those pages that t0 ">W Page L ' °?7"" page into an
rcstaitin
mcttl0l instruction. g the faulting
and those pages that are on the disk y
Pr0CCdure done b
- Demand paging keep more processes in for handlingg tthe
hepa p y operating system
Inemor
the sum of their memory n e e d y than
cn
- utilization as high as possible. * $uring CPU
Fig. 4.7.1 shows page table with val d
pages. Those pages which are in main ' nValid

* valid. Pages on secondary storage are marked


8
X *' in 4
Ul1
........ .......... - rage 1 a n c i 3 are nor ’
memory so marked with invalid (i) bit list ,hlS PaSe find emp[3/ from frae framc

- Page 1, page 3, Page 5, page 6, and page 7are on If dtsk is busy operating system has to wait. If not busy
are On
secondary storage, * search the page on disk and read it into main memory.
- Continuous allocution of memory to the pages is not reading of the page from secondary storage
P
necessary. The reason is , My page _ be J “ - comp etes, internal table and page table of the process
any available page frame. gets updated indicating page i s now in maw memory

- The page table maintains the information regarding Restart the execution of interrupted instruction from
newly transferred page.
the impression of having one contiguous block of
Syllabus Topic : Copy-on-Write
memory.

4.8 Copy-on- Write


land paging.
The fork() system call is used to create child process
Provides large size virtual memory. which is a duplicate of its parent. Usually, fork()
system call worked by creating a copy of the parent's
address space for the child.
Support for degree of multiprogramming is very good.
- While doing so it also duplicates the pages belonging to
the parent. On the other hand, if we assume, many child
I L. disadvantage of demand paging. processes call upon the exec() system call straight away
after creation, the copying of the parent’s address space
Pa
8e interrupts requires more number of table handling may be needless.
Processor overhead with compare to simple paged
n ana - Copy-on- Write allows parent and child to share the
’ gcment techniques.
same pages. If parent or child writes to shared pages
■3 Page Fault and Instruction Restarts then it is marked as copy -on-w rite and its copy is
created.
All unmodified pages can be shared by the parent and
frame in rnain memory, a page fault occurs. (It is a child processes.
operating system)
page fauR QCCUfS operat i n g system is responsible executable code can be shared by the parent and child.
This technique used by Windows XP, Linux, and
Eithcr Solaris.
killing the process in the case of an invalid
reference, or As soon as it is. known that a page is going to be
duplicated using copy-on-write, it is significant to make

Memory Manat

a==ssa!
=sS == \7r e placement algorithm and wo*
Th e simple Pa S t (FIFO). It throws out
Operating
dte basis of Xchthey i were brought i,
3 P00
pages in jgted with each page when it
‘ °f
allocated. Several
Th6 t nem
free pages for such request _____________ ______“ i1?intomain' ° ry -
bt0USh
t rithm always chooses oldest page fc,
Syllabus i <” 1V ;
replacement- a ma jntili

PageWP!?£?"!?5 '
------- -> (Dec. 1 — ■ M h 1
1 h
doesn’t care about which pages
_____ ----------“ ------------ replacement This algorithm and which are not. However, it is
Q '“ W r ite short note on vanou_ e c 2014. 5 M a r k s accessed
MU ■ D
policies’ ws
mt strategies J j in wtn<lo 2UUV.
“ der the reference strings. 0.2. 3.0. 1.3.4.M,
T ... accommodate the new
? Ater page fault will remain always iLd consider 3 available frames.
srsrsstM-**-
f

Reference string
2 0 3 4 3
0
o 2 3
5 5 5 3
3
3_ —1
2 2 2
0_ J_ £
. X- -anheevatua ymnmn _o
4 4 4 0
2
s
Ct num of p Z » should be minimum to
get the performance.

The generation of the reference siring » — Since first three pages were initially not tn main
artificially.
memory, references (5, 0, 2) causes page faults and
The reference siring can also be generated by listing the
address of each memory reference by tracing the
brought into these 3 free frames.
system. Reference to page 3 again causes the fault. To bring
The important factor to determine number of page page 3 in memory as per FIFO policy, page 5 is
faults is the number of page frames available.
These numbers of page faults are for a particular memory.
reference string and for page replacement algorithm for
To bring page 1 in memory again as per FIFO policy
given available number of page frames.
page 0 is replaced. Page 3 is already in meWOC1
The number of frames available is inversely
proportional to number of page faults.
Reference to page 4 again causes the fault.
2
It means that if more frames are available less number
To bring page 4 in memory as per FIFO policy, P
of faults will occur. is replaced. This process continues as sho

4.9.1 FIFO Algorithm


Whenever faults occurred, it is shown that which p
— ™ ® 3 ftames
- There are in total 11. faults occult
Q. together.

5UfferS
«

l rate wai increase even


otXX f
frames increases.
al
Q* -
Compare ggeRgpl ace ment Algor
0
tnent
J !,’ LRU
onthms nd FIFO
with Illustration.

gj ratrng SystemlMU - Sem 4 - 1
4-18
Memory Management
cw npare to all other page-replacement algorithms, m ,
gjgorithni has the lowest page-fault rate. The time of page's last use is associated with each
Page.
This algorithm replaces that page which will not be
ujed for the longest period of time . When a page must be replaced, LRU chooses that page
that was used farthest back in the past.
This delays the unavoidable page fault as much as LRU is a good approximation to the optimal algorithm.
possible, and thus decreases the total number of page
This algorithm looks backward in time while optimal
faults. replacement algorithm Looks forward in time.
algorithm replace the page on policy suggests that replace a page whose last
usage is farthest from current time.
This algorithm can be implemented with some
_ The optimal algorithm is unrealizable because it is
hardware support and is considered to be a good
impossible to determine the number of instructions that
solution for page replacement.
will be executed in the future before the page will be
This algorithm does not suffer through Belady’s
referenced.
anomaly.
- This algorithm does not suffer through Belady's We will again consider the same reference string 5, 0,
anomaly. 2, 3, 0, I, 3, 4, 5, 4, 2, 0, 3, 4, 3. Following is the result of
applying LRU page replacement algorithm.
, Again consider the same reference string 5, 0, 2, 3, 0, I
5 0 2 3 0 1 3 5 4 2 0 3 4 3
r—
5 £ £ £ 3 7 £ £ 3
5 0 2 3 0 1 3 4 5 4 2 0 3 4 3 0 0 £ 1 2 £ £
£ £ £ £ £ £ 2 2 2 4 4 0 0 0
0 £ £ 7 _4 4 £
2 3 3 3 3 3 Fig. 4.9,3 : Least Recently Used page replacement
algorithm
Fig. 4.9.2 : Optimal page replacement algorithm
Since first three pages were initially not in main
- Since first three pages were initially not in main memory, references (5, 0, 2) causes page faults and
memory, references (5, 0, 2) causes page faults and brought into these 3 free frames. Reference to page 3
brought into these 3 free frames. again causes the fault.
Reference to page 3 again causes the fault. To bring To bring page 3 in memory as per optimal replacement
P®ge 3 in memory as per optimal replacement policy, policy, page 5 is replaced because it is the page which
Pa ge 2 is replaced because it is the page which will not is used least recently.
be used for longest period of time. Next reference is to page 0 and it is already in memory.
Next reference is to page 0 and it is already in memory. To bring page 1 in memory again as per LRU
To bring page 1 in memory again as per optimal replacement policy, page 2 is replaced. Next reference
placement policy, page 0 is replaced. is to page 3 and it is already in memory. This process
continues as shown in Fig. 4.9.3 causing in total 11
page faults.
fetence to page 4 again causes the fault. Since page
<r Implementation of LRU
*ill no longer be referenced, it is replaced. This
continues as shown in Fig. 4.9,2 causing in Using Stack
8 page faults. Initially keep all the page numbers on the stack.
Recently Used Page - Remove the page from stack whenever it is referenced
Replacement Algorithm (LRU) and place it on the top of the stack.
C<M
We Optimal, LRU and HFO £*9® - Any time top of stack shows latest page number that is
re
q. placenwnt algorithms with Illustration. referenced and bottom shows the page which is not
FIFO and LRU pafiP replacement used for longest period of time.
____

Memory Maria<

Operating System (MU -Sem,4 ; m


IT) 4~1

Using Counters

“« T'.:""' * -
. • M — « » «• ■ ““ w
“ l
““ ■ replacement, its reference bit B cheeked
incremented. the reference bit is 0 (old unused m

- Copy clock register content to time of use fiel - page is replaced.

_ Time of use field will show the time of last reference I


the page.
(■)
- Replace the page having smallest time value.
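FIFO, LRU and Optimal replacement can all be simulated by keeping the set of resident pages and choosing a different victim under each policy. The sketch below is illustrative (the names simulate, resident and order are not from the text); on the reference string 5, 0, 2, 3, 0, 1, 3, 4, 5, 4, 2, 0, 3, 4, 3 with three frames it gives 11, 11 and 8 faults, matching the worked figures for FIFO, LRU and Optimal.

```python
# Illustrative simulator for FIFO, LRU and Optimal (OPT) page replacement.
def simulate(refs, frames, policy):
    """Return the number of page faults for the given policy."""
    resident, order, faults = set(), [], 0        # order = FIFO arrival / LRU recency list
    for i, page in enumerate(refs):
        if page in resident:                      # hit
            if policy == "LRU":
                order.remove(page)
                order.append(page)                # most recently used moves to the back
            continue
        faults += 1                               # miss
        if len(resident) == frames:               # choose a victim
            if policy == "OPT":
                future = refs[i + 1:]
                victim = max(resident,
                             key=lambda p: future.index(p) if p in future else len(future))
            else:                                 # FIFO evicts oldest arrival, LRU least recent
                victim = order[0]
            resident.remove(victim)
            order.remove(victim)
        resident.add(page)
        order.append(page)
    return faults

refs = [5, 0, 2, 3, 0, 1, 3, 4, 5, 4, 2, 0, 3, 4, 3]
for policy in ("FIFO", "LRU", "OPT"):
    print(policy, simulate(refs, 3, policy))      # FIFO 11, LRU 11, OPT 8
```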

4.9.4 LRU-Approximation Page


Replacement ______ (SI

q, Explain LRU approximation approach. 7ZZ]


- Not all the system provides hardware for LRU If ihe reference bit is set to 1 , a second chance is given
replacement
to the page and then preceded on to select the «n
- If hardware support not available then FIFO algorithm FIFO page.
is used.
After a page gets a second chance, its reference bit h
The reference bit is associated with each page and cleared. Now this page is considered as arrived
stored in page table. If page is referenced for read or
currently and its arrival time is reset to the current time.
write then this bit is set. Initially this bit is set to 0 by
Because of this, page that got a second chance will not
OS, Once page is referenced then set to 1.
be replaced until all other pages have been replaced (or
- This information is used for LRU-Approximation Page
given second chances).
Replacement.
- In addition, if a page is used often enough to keep its
4.9.4(A) Additional-Reference-Bits Algorithm reference bit set, it will never be replaced.
In this algorithm, 8 bit byte is associated with page and - Page A was initially loaded at time 0 and given i
maintained in table in memory. second chance. Its arrival time is now set to current
After regular interval, OS is, interrupted which then time 20.
shifts the reference bit for each page into the high-order
bit of its 8-bit byte. 4.9.4(C) Enhanced Second-Chance Algorithm

Other bits are shifted right by I and low order bit is This algorithm considers pair of reference bit and
discarded. This 8 bit shift register shows the history of modify bit. We get 4 combinations with two bits as:
page use m last 8 time period. For example, following
values of shift register shows the use of page as shown ° It is the best page for replacement which is
below. neither recently used nor modified.
o 00000000- Not used for last 8 time period (0, ! ) _ H jn( i icates page is g00 d foe
1 1 1 1111 1 - Used at least once in each period replacement as it is not referenced but modified- fl
to
o 110001 00 - This page is uscd more recently than be written out before replacement*
one with a shift register value of 01 1 101 11 ) This page may as it sb0 * 5
The least recently used page is the lowest number ntly usedbutnotmodificd
when this shift register value is considered as unsiLS
8ne<1 ThlS Page is rc
integer. THrL ently used and modify
The number of bits used is hardwarec
. . pendent. If

ng System ( MU_- Sem 4 - IT)

Clock Page Replacement - _ __ Memory Management

Clock l»« e replacement algorithm maintains all ,k, Suniv?See SS COunter


value should be replaced.
Process P has not made use of page 2 and
' frames on a circular list in the form of a clock. pages have a count of usage like 3, 4 or even 5
clock hand points to the oldest page.

After the occurrence of page fault, the pa ge neJd d as compared


Sutnent
to is
thethat these
page at 2.pages may2 still
So page shouldbe
pointed to by the hand is checked. If its ref crence bh * replaced which is having less count.
0, the page is replaced and the new page is inserted into pie limitation of the algorithm is that, if initially page
tjje Jock in its place. is used frequently and then there is no reference to this
P a ge, still it will remain in memory.
After this insertion of new page, the hand is advanced
one position. If reference bit is 1, it is cleared and the 4.9.5(B) Most Frequently Used Page
hand is advanced to the next page. Replacement Algorithm (MFU)
_ This process is repeated until a page is found with It is based on the assumption that, page having less
reference bit equal to 1.
count just arrived in memory and it is yet to be used.

4,9.5 Counting-Based Page Replacement 4.10 Examples on Page Replacement

4.9.5(A) Not Frequently Used o r Least Algorithms _____________ ________


Frequently Used Page Replacement
Example 4.10.1
Algorithm (NFU o r LFU)
For the page reference string 7. 0, 1 , 2, 0, 3, 0 , 4, 2, 3 , 0, 3,

- This policy assumes that, those pages which are 2, 1. 2, 0. 1. 7, 0, 1. Calculate the Page Faults applying (i)
referenced for more number of times will be needed Optimal (ii) LRU and (iii) FIFO Page Replacement Algorithms
again. for a Memory with three frames.
- A counter is used for each page and incremented with
each memory reference.
Solution :

Optimal Algorithm
7 0 1 2 0 0,3 4 2,3 0,3 2 1,2,0,! 7 OJ

Tat
alPage faults = 09
nF0
Algorithm : String = 7, 0, 1, 2, 0, 3, 0, 4, 2. 3, 0, 3, 2, 1, 2. 0. 1. 7,0, 1

Page faulta = 15
2,0. 1.7,0, 1
A1
8orithm : String = 7, 0J , 2, 0- 3 ’ 4> 2 3
’ ! 3
7 0 1 *£
n _4
n a

Mei
4-21
Operating System (MU - Sem 4 - J ■

Example 4.10.2
1 , 2, 3. 4, 2, 1, s. "• " Mr all frames are initially empty, so * l

occur tor the following replacement algorithms, assuming tour l


unique pages wilt all cost one fault each.

(1) LRU replacement (2) FIFO replacement

Solution :

( 1 ) LRU
6 2 1 2 3 6 3
String 1 2 3 4 2 1 5
1 1 1 1 1 1 1 1 6 6
2 2 2 2 2 2 2 2 2
3 3 5 5 3 3 3 3
4 4 6 6 7 7 1
Page faults for LRU = 10

(2) FIFO
String 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3
1 1 1 1 5 5 5 5 3 3 3 3 1 1
2 2 2 2 6 6 6 6 7 7 7 7 3
3 3 3 3 2 2 2 2 6 6 6 6
4 4. 4 4 1 1 1 1 2 2 2
Page faults for FIF O = 14

Example 4.10 J

Consider following page reference string

4.3,2,1,4AM, 3,2.1 ,5
Assume frame size = 3, How many page
Solution :
(1) FIFO page replacement algorithm

3 5 4,3 2 1 5
7 7 5 T
7 4_ 2_ £
3 3 3 1
Total number of page faults - 9

(2) Optimal page replacement algorithm

3 2
3
5 4,3 2
7[|7
3 3
7 7 7 2
1 5

2
7 7 7 1
t
Total number of page faults = 7
5
7 7
(3) LRU page replacement algorithm

4
7 4 3
5 4,3 2
1 5
3 _3 3 7 7 7 7
7 7 7
4
ft 7 7 7
3
7" 7 7

System (MU ~ Sem 4 --rn

4. 10 M 3*?
Hit and Miss using (flu, Memory Management

raptao
p8ge(ramesi zeis3.0,4,3 >2,M,6,3,0.8, 9 ,3 i8>5 *'W Policies for , he

sider frame size = 3

(1) tB0 0 4 3 2 1 4 6
0 £ 0 £ 7 £ £ £ 3
6 = Hit
7 7 1 7 1 7
3 3 3 4 7 4

Number of hits = 01, Miss - ] 3


(2) Optimal

£ 4
3-Hit 0
0 2 £
6
9 3-Hrt 8=Hrt 5

4
£
3
Number of hits = 04, Miss = 1 0
(3) FIFO
8 = Hit 5

Number of hits = 01, Miss = 1

Ewnple 4.10.5 M U - M_________________


ay 2016, 1 0 Marks
Cafctrlate hit and miss percentage for the following string using page replacement policies FIFO, LRU and Optimal. Compare

Solution:
eSize = 3
Page Replacement
Number
of Hits - 8 Miss _ 12

Frame 2 0 0 0 5 0
0 0
0 0 0 0 0 0 0 0 0
2 2 2
1 0 0 0 0 2 2
0 0 0 2 2 2 2 7 7
2 0
PF
M Page Replacement
n er
£ Hits = j 3 d Miss = 7
0 0 0
2 0
Frame 0 5
5
2
0 0 0 0 0 0 0 0 0 0
0 0
i 0 0 £
3

4 23

‘-’''"’ Repl.ecment
HJaruf M i s s ® f n
L Frame 12 0 2 7 5 0 7
© |_4 2 5
0 7 0
2 2 4 0 0 2 7 7 7_ 7 7
0 3 3 3 0 0 0_ 0 7
0 0 0 0 3 o
12___ 3 2 2 2 2 5 5 5
£
Y Y S
Fmmc Size » 4
Wo Page Replacement
for
N“mhcrc>fHits= l andM .M = g
(10
f F me 2_ O 2 0 2 2 2 5 £
rp 0
2 2 2 2 2 2 2 2 7 7 7 2 _7_
[j 2_ _2_ 2 2 2_ 2
7
J
0 0 0 0 0 0 0 0 0 _0 2
2 2 2 £
_2 J
3 3 3 3 3 3 3 £ 3 3 0 0_
2 £ £
3 4 4 4 4 5_ 5 5
4 4 4 4 4 4 5 5 5
PF Y Y Y
Frame Size = 4
Optimal Page Replacement

Number of Hits = 14 and Miss = 6


I Frame 0 0 0 0 5 0
0
0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 E
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 C
0 0
ti
1
5
PF Y
Frame Size = 4 t LRU Page Replacement
Number of Hits = 14 and Miss = 6

Frame 0 0 5 Q
0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

When frame size = 4 then number of hits are mote all algorithms for given page string.

MU - J u n e 20 T 5. 10 Marks

What is paging. Explain LRU t FIFO, and OPT page replacement policy for the given page sequences. Page frame size is*
2 f 3, 4, 2, 1, 3, 7, 5, 4, 3, 2, 3, 1
Calculate page hit and miss.

Management
2 3 4 2 1
7 5 3

~ 3* T
7
Sit 2 3 1
7
1
Zlr?
4 7

4 and Miss = 9 7
L.

2 3 4 2 1 3 5 3 2
I
7
~2
3 7 3
I

7
4
1
f £r LRUHit = 4 and Miss = 9

(jj)Optimal
5 4 3 2 3 1
2

£
4
5
fa Optimal Hit = 6 and Miss = 7

Ewnpte <<10.7 M U ~ Dec- 2016. 10 M a r k s - - — “™~


Calculate hit and miss for the following string using page replacement policies-HFO, IRC and Optimal. Compare it for the
wme size 3 & 4. * T:
12
32 1 5 2 1 6 2 5 6 3 1 3 6 1 2 4 3
SoWon:
fame Si/f - 3
Wo

5 2 2

2
3 2 2
3 3 3
PF

2
5
5

2 6

Mai
4-25
Operating S y s t s m (MU - Se m 4 J raj*
p,e
i«j upumai

Frame 1 2 3 2 1 5 2 r 6 2 r
1 1 1 1 6 6 G r
0 I 1 1 1
2 2
2 2 2 2 2
1 2 Z 2 2
3
5 5 5 5
2 3 3 3 5 5

PF - - Y - - Y ! The
Y Y Y
Frame Size = 4 l K lJ
) FIFO ------r
6 _3_J 1 3 6 i 1 £T
2 1 6 2 5 4
Frame 1 2 3 2 1 5 0
6 6 6 6 6 6
I 6 6 T 6
0 1 1 1 1 1 1 1 1 6 6

2 2 1 ’ 1 ' 1 1 1 1 1 1
1 2 2 2 2 2 2 2 2 2 2 I

2 3 3 3 3 3 3 3 3 3 3 3 3
13 3 , 3 J PF
5 1 5 5
3 5 5 5 5 5 5 5 5
H—
5
1 5
i 1

PF Y - . Y - - - Y - I- -
Y Y • • Y - i Y
Fn
(it) L R U 0

Frame 1 2 3 2 1 5 2 1 6 2 5 6 3 1 3 6 1
14 L3 ' 1

1
0 1 1

2
1

2
I

2
1

2
1

2
1

2
1

2
1

2
1

2
J

2
1

2
3

2
3

1
3

1 1
3

1 1
3

1
P-EEi i
F1 uTii l

!
2 3 3 3 3 3 3 6 6 6 6
16 6 6 6
16 W
3 5 5 5 5 5 5 5 5 5 5
5 5
PF Y Y Y - - Y - • Y - 1_ • Y Y - - - Vh
Frame 1 2 2 6 2 6 3
0 1 I 6 6 6 6 6 6 6 6
1 2 2 2 2 2 2 2
2
2
\1 2 2 2 2
3 3 3 3 3 3 3 3 3 3
3
5 5 5 5 5
PF Y Y

Frame Size
Hit Miss Frame Size = 4
Hit Miss
FIFO 6 14
FIFO II 9
LRU 9 11
LRU 10 10
OPTIMAL 11 9
OPTIMAL 1 1 3 1 7

-non System (MU - Sem 4 - IT)
4'26
MU - Dec 2015 10 Mar k s Memory Management

the followios replacement algorithms, assuming three


res
f£ L ' ' 6 ’ 3 ' *■ 1 • *• 3 - 6 How many page faults would
' flv9 ’ r «™»forLRU, FIFO and optimal pagerepiacemanty
SOW**1 1

LRU 15 page faults ii) FIFO, 16 page faults iii) Optimal. n faute
••Y” in bottom rows denote the page fault

fu)H?O

Frame

PF

ffl Optimal
5
Frame

PF Y

If frame size = 5 then


LRll gives 8 page faults ii) FIFO, 10 page faults iii) Optimal, 7 page faults

Boldpage number shows the fault. 1 2 3 6


3 3
® LRU • i , 2, 3, 4, 2, L 5, 6, 2, U * * 2 3 6
®> FIFO : j , 2, 3. 4, 2, 1, 5, 6, 2, 1. 2, 3. ’’ * * * ’’ 3> 6

®> Optima]. 1, 2, 3, 4, 2, 1, 5. 6, 2, 1, 2, 3. • ’ --------------- ------------------—

-oiacement potoes FIFO. Optimal and LRU tor given reference


ate number of page faults and page Hits fw umjng three frame srza).
6 1
«■ 0, 5, 2, o. 3, 0, 4, 2, 3. 0. 3, 2. 5. 2. 0, 6, «■ .

Pullon:
Sitta

4-27
im

FIFO Page Replacement


0 5
5
Number of Hits = 5 and Miss = 1 5
0 0 0 6
Frame 0 0 0
0
5 0 0
0 2
0 0 0 0 2 2 5
3
0 0 0
5

Optimal Page Replacement

Number of Hits = 12 and M 5 0 5


6 0 0

2 2
2 2 2
0 0 0
0 0 0 0 0
0 0 0 0
0 0 0 0
5

PF

LRU Page Replacement

Number of Hits = 8 and Miss = 12

Frame 6 0 5 2 0 3 0 4 2 3 0 3 2 5 2 0 5 6 0 5

0 6 6 6 2 2 2 2 4 4 4 0 0 0 5 5 5 5 5 5 5

1 0 0 0 0 0 0 0 0 3 3 3 3 3 0 0 0 0 0
2 5 5 5 3 3 3 2 2 2 2 2 2 2 2 2 6 6 6
PF Y Y Y Y - Y • Y Y Y Y - - Y - Y - Y - -

When the free-frame list was worn out, a P0 ®6


Syllabus Topic : Allocation of Frames
replacement algorithm would be used to select one cf
the 93 in-memory pages to be replaced with the 94 ,
4-11 Allocation of Frames
and so on.
Opentiag system maintains free frame list. Let us take If the more number of frames allocated to process, the
example of single-user system with 512 KB of a
P gc~fault rate decreases.
memory consisting of pages 4 KB in size
If less number of frames allocated to each process»
This system has 1 28 frames each of 4 KB in size The
operating sys i era may occunv t s f„

S
If Pa gc fault occurs prior to an executing instnKtio11
93 frames are for the user process. “ d rcmaini »g
If we consider the pure demand paging all <n niplete, the instruction must be started again- As
would at the start be put on the free-ftatt fet We must
sufficient frames to hold aU
ferem pages instruction
l
“ r ercnce.
Skate
? ie> for allocation of frames
W0Uld fKe
[ erent strategies for allocation of I

MU - Sem 4 - 1
4-28
Me g ana men
for A process should have some minimum number of
location °< Tramea
frames to support active pages which are in memory- It
helps to reduce the number of page faults.
t . Equal allocation
If these numbers of frames are not available to the
2, Proportional allocation | process then it will quickly page fault.
To handle this page fault, it is necessary to replace the
3. Global vs local allocation existing page from memory.

: a oca on
Since all the pages of the process are active, it will also
C43 ®f frames
need in future. So any replaced page will cause page
b gqual allocHti° n fault again and again.
p number of processes and q number of - Since in paging, pages are transferred between main
In this case every process will get p/q frames, memory and disk, this has an enormous overhead.
to cs ample suppose 5 processes and 93 frames Because of this frequent movement of pages between
Then 93/5, that is each process will get 18 main memory and disk, system throughput reduces.
and 3 will remain at free-frame list. - This frequent paging activity causing the reduction in
* t Proportional allocation system throughput called as thrashing.

h this allocation, frames are allocated to the processes Although many processes are in memory, due to
thrashing CPU utilization goes low.
as per their needs. Suppose 60 free frames available.
Process Pl has size 10 KB and P2 has size 128 KB. When operating system monitors this CPU utilization,
And if page size is I KB. Then (10/(10+128) ) it introduces new process in memory to increase the
♦60-4.34, say nearly 4 frames allocated to P l . P2 degree of multiprogramming.
win get (128/(10+128) ) * 60 = 55.65, that is 55 Now it is needed that the existing pages should be
frames. replaced for this new process.
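The proportional split just computed can be written directly. The numbers below are those of the example (60 free frames, processes of 10 KB and 128 KB with 1 KB pages); the function name is an illustrative assumption.

```python
# Proportional allocation: frames are divided in proportion to process size.
def proportional_allocation(sizes, total_frames):
    total = sum(sizes)
    return [size * total_frames // total for size in sizes]

print(proportional_allocation([10, 128], 60))   # [4, 55] frames for P1 and P2
```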

4 X Global vs Local allocation If global page replacement algorithm is used, it


replaces the pages of other process and allocates frames
* Pace replacement algorithm is categorized as global
to this newly introduced process. So other processes
replacement if it can choose frame for replacement
whose pages are replaced are also causes page faults.
fom all frames although currently it is allocated to
AU these faulting processes go in wait state and waits
other process also.
for paging device. In this case again CPU utilization
h means processes can take frames from other
goes low.
Processes. Local replacement needs to replace frame by
When CPU scheduler observes the low CPU utilization,
Excess only from its own set of allocated frames.
it introduces new processes again in system to improve
The difficulty with a global replacement algorithm is degree of multiprogramming. This also results in page
process cannot control its own page- fault rate, faults of running processes.
set of pages in memory for a process depends n Again the waiting queue of paging device becomes
'W on the paging actions of that process but also on long causing further drop in CPU utilization.
paging actions of other processes.
In this way page fault rate further increases and CPU
oe-ndy, the same process may perform fahty utilization decreases. There is no actual work getting
trendy. done and processes spend time only in pagtng.
This thrashing can be limited by using local page
„ placement algorithm instead of global page
replacement algorithm.
page replacement algorithm replaces the pages of
lL process instead of other process. Therefore
f s of the other processes will not be taken and
processes wffl not thrash.
Junc
is thrashing ? MU -

4-29
| > .._ r . . r Mil -Sent 4 -111 .. --------------—

rr SlwmSetMoae!) -------- ged by


«rnted by providing sufficient


- Thrashing can be prevented y
frames to the process as pen store for the memory region 1024 K to 1 152 K.

- Locality model of proces process B


Thus a read from 1024 K causes a page fault
in page Oof the file.

currently using. I- ™ to the system that supports segmentatron. file


tOgether
works better. In such a system, each file can be
' .« moves tom locality to
onto its own segment so that byte n in the fife 8
byte n in the segment.
Suppose that this process copies file fllei to fife
references.
First it maps the source file, say fllel onto a segn
All the pages in most recent page references are th
Then it creates an empty segment and maps it onto ife
working set. Suppose working set window uze defm
destination file.
is g. At time tl if the page references are
At this point the process can copy the source segment
frames are required. into the destination segment using an ordinary copy
At this time if demand of the process is more that
loop. No read or write system calls are needed.
working set, thrashing will occur. Operating system When it is all done, it can execute the unmap system
will monitor working set of the process and allocate call to remove the files from the address space and then
needed frames.

If sufficient frames are not available i t will suspend the had been created in the conservative way.
process. In this way working set model helps to prevent
Advantage
the thrashing
I/O is not needed programming makes easier.
Syllabus Topic : Memory-Mapped Files
Disadvantages

4.14 Memory-Mapped Files


the output file.
Q. Explain memory-mapped files.
Lt can happen that, if file is mapped in by one process
With compare to accessing ordinary memory, accessing and opened for usual reading by another and if the fiN
files is burdensome.
process updaie a page, that change will not be imitate
To overcome the problem, files can be mapped to the in the file on disk until the page is expelled.
address space of the executing process.
- The system has to take great care to make sure the t*°
- Two system calls, map and unmap, can be conceptually
processes do not see inconsistent versions of the file-
used for this purpose.
If file is bigger than a segment, or even bigger than
map the < entire virtual address space then alternative is arrant
“ at a Evirtual
specified file into a address space
systcm to
ntap
address.
the map system call to be able to map a portion of a
Consider the file of length 12g KB which is manned
file, instead of the entire file.
mto the virtual address starting at address 1024 K.
Then any machine Instruction that reads th*- ♦
Although this works, it is clearly less acceptable
the byteat 1024 K gets byte 0 of the fife, and “ ** mapping the entire file.

Similarly, a write to address 1024 K + inno
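The copy-through-mapping idea described above can be tried on any POSIX system using the standard mmap and munmap calls (the generic map/unmap calls mentioned above). The following is a minimal sketch, assuming the source file exists; the file names are illustrative and error handling is abbreviated.

/* copy_mmap.c - illustrative sketch: copy a file using memory mapping,
 * with no read()/write() calls for the data itself. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int src = open("file1", O_RDONLY);                       /* source file          */
    int dst = open("file2", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (src < 0 || dst < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(src, &st);                                          /* length of the source */
    if (ftruncate(dst, st.st_size) < 0) { perror("ftruncate"); return 1; }

    /* Map both files into the process address space. */
    void *s = mmap(NULL, st.st_size, PROT_READ,  MAP_PRIVATE, src, 0);
    void *d = mmap(NULL, st.st_size, PROT_WRITE, MAP_SHARED,  dst, 0);
    if (s == MAP_FAILED || d == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(d, s, st.st_size);                                 /* ordinary copy loop   */

    munmap(s, st.st_size);                                    /* unmap both mappings  */
    munmap(d, st.st_size);
    close(src);
    close(dst);
    return 0;
}

The data moves only through page faults and the page cache; the program itself never issues a read or write system call for the file contents.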


4.15 Memory-Mapped I/O

Q. Write note on memory-mapped I/O.
- Each device controller has a few control registers used for communicating with the CPU. By writing into these registers, the operating system may instruct the device to deliver data, accept data, switch itself on or off, and so on.
- By reading the registers available with the device controller, the operating system can know the device's state. This state can be: whether the device is ready to accept a new command, and so on.
- Besides the control registers, many devices are provided with data buffers that the operating system can read and write.
- Thus the CPU interacts with the device control registers and also with the device data buffers. Two options are available to achieve this.
- In the first method, an I/O port number is allotted to each control register; the port number is an 8- or 16-bit integer. The I/O port space is formed by the collection of all I/O ports and is protected, so that other than the OS, ordinary user programs cannot access it.
- The second approach, introduced with the PDP-11, is to map the control registers into the memory space. Each control register is assigned a unique memory address to which no physical memory is assigned.
- This system is called memory-mapped I/O. In most systems, the assigned addresses are at or near the top of the address space.
- A hybrid scheme, with memory-mapped I/O data buffers and separate I/O ports for the control registers, is shown in Fig. 4.15.1(c).

Fig. 4.15.1 : (a) Separate I/O and memory space (b) Memory-mapped I/O (c) Hybrid

- As an example, the way computers display pixels on the screen is possible because of the video RAM, a memory-mapped data buffer that offers the facility to programs or the operating system to write into it.
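The practical difference between the two options can be sketched in C. The register addresses, port number and helper declarations below are illustrative assumptions, not taken from any particular device; on x86 Linux, port I/O is done through the inb()/outb() wrappers (normally from <sys/io.h>), while a memory-mapped register is reached through an ordinary volatile pointer.

#include <stdint.h>

/* Hypothetical device register locations (illustrative only). */
#define DEV_STATUS_PORT   0x3F8          /* port-mapped: an I/O port number   */

/* Port-mapped I/O needs special I/O instructions; on x86 the kernel (or a
 * privileged process) would use wrappers such as inb()/outb(). */
extern uint8_t inb(uint16_t port);
extern void    outb(uint8_t value, uint16_t port);

static uint8_t read_status_portmapped(void)
{
    return inb(DEV_STATUS_PORT);         /* explicit I/O instruction          */
}

/* Memory-mapped I/O uses ordinary loads and stores through a pointer.
 * 'volatile' stops the compiler from caching or reordering the access. */
static uint8_t read_status_memmapped(volatile uint8_t *reg)
{
    return *reg;                         /* plain memory read                 */
}

With memory-mapped I/O no special instructions are needed, and normal memory-protection hardware can keep user programs away from the device registers.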
Syllabus Topic : Allocating Kernel Memory

4.16 Allocating Kernel Memory

- Following are the two approaches for managing free memory that is assigned to kernel processes : the "buddy system" and slab allocation.

4.16.1 Buddy System

Q. Explain Buddy system.

- The kernel produces and wipes out small tables and buffers repetitively during the course of execution, each of which requires dynamic memory allocation.
- In SVR4, several objects are dynamically allocated by the kernel as per requirement; examples of these objects are proc structures, v-nodes, and file descriptor blocks.
- Several of these blocks have a considerably small size compared with the usual machine page size. As a result, a paging system alone seems inefficient for kernel memory allocation.
- Hence, in SVR4 a revised buddy system is used. Kernel memory management requires fast processing of allocation and free operations.
- The disadvantage of the buddy system is the time needed to fragment and combine blocks.
- To avoid this unnecessary combining and splitting, the lazy buddy system postpones combining until it seems likely that it is needed, and then combines as many blocks as possible.
- The lazy buddy system uses the following parameters :
  o Ni - Current number of blocks of size 2^i.
  o Ai - Current number of blocks of size 2^i that are allocated (occupied).
  o Gi - Number of blocks of size 2^i that are currently globally free; these blocks are qualified for combining. If the companion (buddy) of such a block turns out to be globally free, then the two blocks will be combined into a globally free block of size 2^(i+1). If a standard buddy system contains free blocks (holes), then all of these could be thought of as globally free.
  o Li - Number of blocks of size 2^i that are currently locally free; these blocks are not qualified for combining. Although the companion of such a block becomes free, the two blocks are not combined. Instead, the locally free blocks are kept awaiting a future demand for a block of that size.
- The following relationship holds :
  Ni = Ai + Gi + Li
- The lazy buddy system attempts to keep a pool of locally free blocks and only calls coalescing if the number of locally free blocks goes above a threshold value. If too many locally free blocks are kept, it may happen that, at the next request for a larger block, a sufficiently large free block will not be available to fulfill it.
- The criterion used is that the number of locally free blocks of a particular size should not exceed the number of allocated blocks of that size (Li <= Ai). This is a practical rule for limiting the growth of locally free blocks, and it works well when the number of blocks of a particular size changes slowly. Experiments in [BARK89] show that this method results in remarkable savings.
- To decide when coalescing is required, the following parameter is used :
  Di = Ai - Li = Ni - 2Li - Gi
- Usually, coalescing does not occur when a block is freed, which leads to minimum accounting and operational costs.

4.16.1(A) Operation of Buddy Algorithm

Q. Explain working operation of buddy algorithm.

- The buddy algorithm manages a chunk of memory as follows. Initially memory consists of a single contiguous piece, 32 pages, as shown in Fig. 4.16.1.
- Any request for memory is first rounded up to a power of two, say 8 pages. The full memory chunk is then divided in half.
- Since each of these pieces is still too large, the lower piece is divided in half again until an 8-page chunk is obtained and allocated. Now suppose that a second request comes in for 4 pages. The smallest available chunk is split and half of it is claimed.

Fig. 4.16.1 : Operation of the buddy algorithm
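A minimal sketch can make the splitting behaviour concrete. The code below is an illustrative user-space toy (not the SVR4 or Linux implementation): it manages a 32-page region with per-order free counts, rounds each request up to a power of two, and splits larger blocks until a block of the requested order is available. Freeing and buddy coalescing are omitted to keep the sketch short.

#include <stdio.h>

#define MAX_ORDER 5                      /* 2^5 = 32 pages in total          */

/* free_count[k] = number of free blocks of size 2^k pages.  A real
 * allocator keeps linked lists of block addresses; counts are enough
 * to show the splitting behaviour. */
static int free_count[MAX_ORDER + 1] = { [MAX_ORDER] = 1 };

static int order_for(int pages)          /* round request up to power of 2  */
{
    int order = 0;
    while ((1 << order) < pages)
        order++;
    return order;
}

static int buddy_alloc(int pages)        /* returns allocated order, -1 if none */
{
    int want = order_for(pages);
    int k = want;
    while (k <= MAX_ORDER && free_count[k] == 0)
        k++;                             /* find smallest big-enough block   */
    if (k > MAX_ORDER)
        return -1;                       /* out of memory                    */
    free_count[k]--;
    while (k > want) {                   /* split repeatedly: one half stays */
        k--;                             /* free, the other half is split    */
        free_count[k]++;                 /* further or handed out            */
    }
    return want;
}

int main(void)
{
    printf("8-page request -> order %d\n", buddy_alloc(8));   /* 32 -> 16 + 16, 16 -> 8 + 8 */
    printf("4-page request -> order %d\n", buddy_alloc(4));   /* 8 -> 4 + 4                 */
    printf("free blocks: 16:%d  8:%d  4:%d\n",
           free_count[4], free_count[3], free_count[2]);
    return 0;
}

After the two requests the free pool holds one 16-page block and one 4-page block, exactly the state shown in Fig. 4.16.1.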


4.16.2 Slab Allocation

Q. What is slab allocation ? Explain.

- Usually main memory page frames are allocated to user-space processes, dynamically allocated kernel data, static kernel code, and the page cache.
- The Linux kernel memory manager manages physical memory page frames. Its main function is to allocate and deallocate frames for particular uses.
- As in the virtual memory scheme, a buddy algorithm is used so that memory for the kernel can be allocated and deallocated in units of one or more pages. The minimum amount of memory that can be allocated in this way is one page.
- The kernel, however, needs many small short-term memory chunks in sizes much smaller than a page, so a page allocator alone would be inefficient. In order to hold such small chunks, Linux uses a scheme known as slab allocation within an allocated page.
- On a Pentium/x86 machine, the page size is 4 Kbytes, and chunks of sizes such as 128, 252, 508, 2,040, and 4,080 bytes are maintained within a page.
- Linux also maintains a set of linked lists, one for each size of chunk. Chunks may be split and combined in a way similar to the buddy algorithm and moved between lists accordingly.
- Memory is allocated to kernel data structures using slabs. A slab comprises one or more physically contiguous pages. A cache contains one or more slabs.
- A cache is assigned to each unique kernel data structure; for example, a cache for process descriptors, a cache for file objects, a cache for semaphores, and so on.
- The objects that are instances of the kernel data structure are kept in the cache which represents this data structure. This means the cache representing process descriptors stores instances of process descriptor objects.
- The relationship among slabs, caches, and objects is shown in Fig. 4.16.2. In the figure, two kernel objects of 3 KB size and three objects of 7 KB size are shown; these objects are stored in the respective caches for 3 KB and 7 KB objects.

Fig. 4.16.2 : Slab allocator in Linux

- In the slab-allocation algorithm, caches are used to obtain kernel objects. A number of objects are assigned to the cache after the cache is created. The size of the associated slab decides the number of objects in the cache.
- As an example, a 16-KB slab (made up of four contiguous 4-KB pages) could store eight 2-KB objects. At the start, all the objects in the cache are marked as free.
- When a new object is required for a kernel data structure, the allocator can allot any free object from the cache to fulfill the request. The assigned object from the cache is then marked as used.
- In Linux, at any given point of time, a slab may be in one of three possible states :
  1. Full : All objects in the slab are marked as used.
  2. Empty : All objects in the slab are marked as free.
  3. Partial : The slab consists of both used and free objects.
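A rough user-space sketch of the cache idea follows (an illustrative toy, not the Linux kmem_cache code). Here a "slab" is one contiguous allocation carved into fixed-size objects, and a used-flag array plays the role of the used/free marking described above.

#include <stdlib.h>

/* Toy object cache: one slab, fixed-size objects, a used flag per object. */
struct toy_cache {
    size_t  obj_size;     /* size of each object                     */
    size_t  nr_objs;      /* number of objects in the slab           */
    char   *slab;         /* contiguous memory holding the objects   */
    char   *used;         /* used[i] != 0 means object i is in use   */
};

static int cache_init(struct toy_cache *c, size_t obj_size, size_t nr_objs)
{
    c->obj_size = obj_size;
    c->nr_objs  = nr_objs;
    c->slab = malloc(obj_size * nr_objs);      /* stands in for contiguous pages */
    c->used = calloc(nr_objs, 1);              /* all objects start out free     */
    return (c->slab && c->used) ? 0 : -1;
}

static void *cache_alloc(struct toy_cache *c)
{
    for (size_t i = 0; i < c->nr_objs; i++) {
        if (!c->used[i]) {                     /* first free object wins */
            c->used[i] = 1;
            return c->slab + i * c->obj_size;
        }
    }
    return NULL;                               /* the slab is Full */
}

static void cache_free(struct toy_cache *c, void *obj)
{
    size_t i = ((char *)obj - c->slab) / c->obj_size;
    c->used[i] = 0;                            /* object goes back to the cache */
}

A real slab allocator keeps many slabs per cache, tracks Full, Partial and Empty slabs on separate lists, and can construct objects once and reuse them; the sketch keeps only the allocate-a-free-object idea.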
Syllabus Topic : Other Considerations

4.17 Other Considerations

4.17.1 Prepaging

- Demand paging leads to a large number of page faults when a process starts execution. This is due to the attempt to get the initial locality into memory.
- The same situation can occur at other times also. For example, consider restarting a swapped-out process having all its pages on the disk; each page then has to be brought in by its own page fault.
- Prepaging is nothing but trying to stop this high level of initial paging. The policy is to bring all needed pages into memory at one time. Some operating systems, for example Solaris, prepage the page frames for small files.
- The working set model permits keeping, with each process, a list of the pages in its working set. If it is required to suspend a process, because of an I/O wait or a lack of free frames, its working set is remembered.
- For the resumption of such a process, once its I/O has finished or sufficient free frames have become available, it is possible to bring its entire working set back into memory before restarting it.
- Prepaging may offer a benefit in some cases. The question is simply whether the cost of using prepaging is less than the cost of servicing the corresponding page faults.

4.17.2 Page Size

- Page size is an important factor deciding performance. If the page size is small, then any program will be divided into a larger number of pages, leading to a large page table.
- On some machines, the page table must be loaded into hardware registers every time a context switch occurs. Hence more time will be required to load the larger page table that results from small pages. More transfers to and from disk will also be required due to a larger number of page faults, so more time will be spent in seek and rotational delay.
- If the page size is large, then the last page will remain partly empty, leading to internal fragmentation. Also, due to the large size of pages, much of the unused part of a program can remain in memory.

4.17.3 TLB Reach

- A metric related to the TLB hit ratio is the TLB reach. It refers to the total memory accessible from the TLB, which is the number of TLB entries multiplied by the page size.
- Ideally, the TLB stores the working set of the process. Otherwise the process will require more time resolving memory references through the page table rather than the TLB.
- If the number of entries in the TLB is increased, the TLB reach also increases. This, however, is not attractive, as associative memory is both expensive and power consuming.
- Another scheme to increase the TLB reach is to increase the size of the page or to offer multiple page sizes.
- A larger page size may lead to fragmentation, and support for multiple page sizes requires the operating system to manage the TLB instead of letting the hardware manage it.

4.17.4 Inverted Page Tables

- The inverted page table has already been discussed. The reason behind this type of page management is to lessen the amount of physical memory required to track virtual-to-physical address translations.
- This saving is achieved by creating a table that has one entry per page of physical memory, indexed by the pair <process-id, page-number>.
- As the inverted page table maintains the information regarding which virtual memory page is stored in each physical frame, it lessens the amount of physical memory required to store this information.
- On the other hand, the inverted page table does not include complete information regarding the logical address space of a process, and that information is necessary if a referenced page is not presently in memory.
- In order to process page faults, demand paging needs this information. An external page table must therefore be maintained per process. Each external page table is similar to the conventional per-process page table and includes information on where each virtual page is located.
- As external page tables are referenced only when a page fault occurs, they do not need to be available quickly. They may themselves be paged in and out of memory, so a page fault may generate another page fault as the system pages in the external page table it needs to locate the virtual page on the backing store. In the kernel this situation is handled carefully.
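To make the <process-id, page-number> indexing concrete, here is a small illustrative sketch; the field names, table size and the linear search are assumptions made for clarity (real systems hash the pair instead of scanning).

#include <stdint.h>

#define NUM_FRAMES 1024            /* one entry per physical frame (assumed size) */

struct ipt_entry {
    int      valid;
    uint32_t pid;                  /* process-id           */
    uint32_t vpn;                  /* virtual page number  */
};

static struct ipt_entry ipt[NUM_FRAMES];

/* Returns the frame number holding (pid, vpn), or -1 meaning a page fault. */
static int ipt_lookup(uint32_t pid, uint32_t vpn)
{
    for (int frame = 0; frame < NUM_FRAMES; frame++)
        if (ipt[frame].valid && ipt[frame].pid == pid && ipt[frame].vpn == vpn)
            return frame;          /* page is resident in this physical frame   */
    return -1;                     /* not resident: consult the external table   */
}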

4.17.5 Program Structure

- If the user or the compiler is aware of the underlying demand paging, then it is possible to improve system performance.
- If data structures and programming structures are carefully selected, locality can be increased, and hence the page-fault rate and the number of pages in the working set can be decreased.
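As an illustration (a standard example, with the array size chosen arbitrarily): a two-dimensional array in C is stored row by row, so the loop order decides whether the program walks through memory page by page or jumps across pages on every reference.

#define N 1024
int data[N][N];                      /* stored in row-major order in C        */

void zero_row_major(void)            /* touches memory sequentially: roughly   */
{                                    /* one page fault per page of the array   */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            data[i][j] = 0;
}

void zero_column_major(void)         /* jumps N*sizeof(int) bytes between       */
{                                    /* successive references: poor locality,   */
    for (int j = 0; j < N; j++)      /* many more page faults when memory is    */
        for (int i = 0; i < N; i++)  /* scarce                                   */
            data[i][j] = 0;
}

Both functions do the same work, but the first one has far better locality and therefore a much smaller working set.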
4.18 Exam Pack (University and Review Questions)

Syllabus Topic : Memory Management Strategies - Background

Q. What are different requirements for memory management ? Explain. (Refer section 4.1)
Q. Write short note on Relocation. (Refer section 4.1.5)
Q. With the help of example, clearly differentiate between logical and physical address space. (Refer section 4.1.6)

Syllabus Topic : Contiguous Memory Allocation

Q. Explain memory management with fixed partitions. (Refer section 4.2)
Q. What is internal fragmentation and external fragmentation ? Explain with example. (Refer section 4.2.1)
Q. Compare internal and external fragmentation. (Refer section 4.2.1)
Q. What is compaction ? Explain with example. (Refer section 4.2.3)
Q. Discuss partition selection algorithm in brief. (Refer section 4.2.4) (4 Marks)
Q. Explain different memory allocation strategies. (Refer section 4.2.4) (5 Marks) (May 2016)

Syllabus Topic : Paging

Q. What is paging ? (Refer section 4.3) (5 Marks)
Q. What is Page Table ? Explain the conversion of Virtual Address to Physical Address in Paging with example. (Refer section 4.3) (June 2015, Nov. 2015, Dec. 2016)
Q. Explain the hardware support for paging. (Refer section 4.3.5) (10 Marks) (Dec. 2016)

Syllabus Topic : Structure of the Page Table

Q. Discuss various techniques for structuring page tables. (Refer section 4.4) (10 Marks) (Dec. 2014)

Syllabus Topic : Segmentation

Q. What is segmentation ? Explain it with example. (Refer section 4.5)
Q. Explain the difference between paging and segmentation. (Refer section 4.5.1) (5 Marks) (Dec. 2016)
Q. Explain segmentation with paging. (Refer section 4.6)

Syllabus Topic : Virtual Memory Management

Q. Write short note on Virtual Memory. (Refer section 4.7.1) (5 Marks) (May 2016)

Syllabus Topic : Demand Paging

Q. Write short note on Demand Paging. (Refer section 4.7.2) (5 Marks) (Dec. 2014, Dec. 2016)
Q. What is demand paging ? (Refer section 4.7.2)
Q. List advantages of demand paging. (Refer section 4.7.2)
Q. List disadvantages of demand paging. (Refer section 4.7.2)

Syllabus Topic : Page Replacement

Q. Write short note on various page replacement policies. (Refer section 4.9) (5 Marks) (Dec. 2014)
Q. Explain the various page replacement strategies. (Refer section 4.9)
Q. Write short note on Belady's anomaly. (Refer section 4.9) (5 Marks) (May 2016)

Q. Compare Optimal, LRU and FIFO page replacement algorithms with illustration. (Refer section 4.9.1)
Q. Compare FIFO and LRU page replacement algorithms. (Refer section 4.9.1)
Q. Compare Optimal, LRU and FIFO page replacement algorithms with illustration. (Refer section 4.9.2)
Q. Compare Optimal, LRU and FIFO page replacement algorithms with illustration. (Refer section 4.9.3)
Q. Compare FIFO and LRU page replacement algorithms. (Refer section 4.9.3)
Q. Explain LRU approximation approach. (Refer section 4.9.4)
Example 4.10.4 (10 Marks) (Nov. 2014)
Example 4.10.5 (10 Marks) (May 2016)
Example 4.10.6 (10 Marks) (June 2015)
Example 4.10.7 (10 Marks)
Example 4.10.9 (10 Marks)

Syllabus Topic : Allocation of Frames

Q. Explain different strategies for allocation of frames. (Refer section 4.11)

Syllabus Topic : Thrashing

Q. What is thrashing ? (Refer section 4.13) (5 Marks)

Syllabus Topic : Memory-Mapped Files

Q. Explain memory-mapped files. (Refer section 4.14)
Q. Write note on memory-mapped I/O. (Refer section 4.15)

Syllabus Topic : Allocating Kernel Memory

Q. Explain Buddy system. (Refer section 4.16.1)
Q. What is slab allocation ? Explain. (Refer section 4.16.2) (5 Marks) (Dec. 2016)

Chapter 5 : Storage Management

Module V

Syllabus : File System : File Concept, Access Methods, Directory Structures, File System Mounting, File Sharing, Protection; Allocation Methods, Free-Space Management; Secondary Storage Structure : Overview of Mass-Storage Structure, Disk Structure, Disk Scheduling, Disk Management, RAID Structure, Disk Attachment, Swap-Space Management, Tertiary-Storage Structure

Syllabus Topic : File System - File Concept

5.1 File System

5.1.1 File Concept

- Every application needs to access and store information. During execution, a process can keep the information in its address space. This address space is sufficient to store information for some types of applications, but not for others which require a huge amount of information.
- After finishing the execution, the process terminates and the information also vanishes. For many applications, it is required that information should be retained forever. Even though the computer crashes or the process is killed, the information should remain retained. Most of the time, information is sharable among many processes.
- If information is kept in the address space of one process, other processes will not be able to access it. It is necessary to keep information independent of the processes.
- Following are the necessities for long term information storage :
  o It must be possible to store a very large amount of information.
  o The information should not be lost after termination of the process using it.
  o Several processes must be able to access the information simultaneously.
  o In order to fulfill the above requirements, it is necessary to store information on disks and other secondary storage in units called files.
  o A file is a named collection of related information that is recorded on secondary storage. The data cannot be written directly to secondary storage unless they are within a file. Processes can then read information from the file.
  o A process also can write new information into the file if needed. After process termination, information in the file should remain retained and should not vanish. A file should only vanish when its holder explicitly removes it. The operating system manages the files.
  o The file system describes how files are structured, named, accessed, used, protected and implemented.

5.1.2 File Attributes

- A file is identified by its name, which is a string of characters. Every file has a name and its data. All operating systems associate other information with each file, for example, the date and time of file creation and the size of the file.

- Such extra information associated with a file by the operating system is called its attributes. The list of attributes is not the same for all systems and varies from one operating system to another.
- No single existing system supports all of these attributes, but each one is present in some system. Some of the attributes and their meanings are listed below.

Q. Explain various file attributes in brief.

Attribute : Meaning
- Protection : Access-control information; determines who can do reading, writing, executing, and so on.
- Size : The current size of the file.
- Type : Information needed for systems that support different types of files.
- Creator : ID of the person who created the file.
- Owner : Current owner of the file.
- Password : Password needed to access the file.
- Identifier : A unique number which identifies the file within the file system.
- Hidden flag : 0 for normal; 1 for do not display in listings.
- Read-only flag : 0 for read/write; 1 for read only.
- ASCII/binary flag : 0 for ASCII file; 1 for binary file.
- Lock flags : 0 for unlocked; nonzero for locked.
- Key length : Number of bytes in the key field.
- Time of last access : Date and time the file was last accessed.
- Time of last change : Date and time the file was last changed.
- Maximum size : Number of bytes the file may grow to.
- System flag : 0 for normal files; 1 for system file.

5.1.3 File Operations

Q. Explain various file operations in brief.

- Files are used to store information which can be retrieved later on. For storage and retrieval purposes, different systems offer different operations. For operations such as create, write, read and reposition, the operating system offers system calls. The most common system calls relating to files are listed in Fig. C5.1.

Fig. C5.1 : Common system calls relating to files
1. Create  2. Delete  3. Open  4. Close  5. Read  6. Write  7. Append  8. Seek  9. Get attributes  10. Set attributes  11. Rename

1. Create
- The new file is created without data. Space for this created file must be located in the file system. The intention of the call is to declare that the file is created and to set some of its attributes.

2. Delete
- A file needs to be deleted to release the disk space occupied by it when it is no longer required.

3. Open
- Before making use of a file, a process must first open it. To allow quick access in future, all the attributes and disk addresses are brought into main memory.

4. Close
- After the use of the file completes, all accesses are finished. The fetched attributes and disk addresses in memory are no longer required, so the file should be closed.

- The disk is divided in blocks and is written in blocks; closing a file completes the writing of the file's last block.

5. Read
- Data is read from the file. Generally, the bytes are fetched from the current position; the system maintains a pointer indicating where the reading is to begin. The caller must state the amount of data desired and must also offer a buffer to put it in.

6. Write
- For the write call, it is required to specify the file and the information to be written to it. Writing is generally carried out at the current position. The size of the file will increase if writing starts from the end of the file. If the current position is in the middle of the file, the previously existing data will be overwritten and vanish permanently.

7. Append
- With the help of the append system call, data gets appended only to the end of the file. Systems offering a smaller number of system calls do not generally provide append.

8. Seek
- The seek system call is used to place the pointer at a particular location in the file. It is needed for random access files to specify from where to obtain the data. After this call has completed, data can be read from, or written to, that position.

9. Get attributes
- Processes frequently need to read file attributes to complete their designated work. The get attributes system call is used for this purpose.

10. Set attributes
- The user can change some attributes of a file after its creation. This system call makes that possible.

11. Rename
- The user often needs to change the name of an existing file. This system call makes that possible.
access files to specify from where to obtain the data.
After this call has completed, data can be read from, or Extension Meaning Type
Witten to, that position.
CompuServe Graphical
Get attributes
gif Interchange Format Image
frocesses frequently need to read file attributes to image
c
°nipfefe theij- designated work. Get attributes system
World Wide Web
is used for this purpose.
html Hypertext Markup Web page
101
Set attributes
Language document
can change the some attributes of file after its
Complied machine
<1 crca
h°O- This system call makes that possible. obj, o Object
language not linked
111
Rename
Ready to execute
needs to change the name of an existing t de. This
exe, bin, machine language Executable
' Stern call makes that possible. com program ____

Naming txt,doc Text, document, data Text

cess
&ves the name to the file when it creates it Libraries of routines Library
lib. a, dfi
co ess
C* «ipletio n of the execution, even though pw for programmers ____
file still exist and other processes can
---------------- —
Slha
\ 'filebythe same name. __

- arc, zip, tar : Grouped files together, sometimes compressed, for archiving or storage : Archive
- mp3, avi, mpg : Binary files containing audio or audio/video information : Multimedia
- pdf, ps : Portable document format and postscript; its content cannot be changed : Portable document

5.1.5 File Types

- Several operating systems support many types of files. Following are the different types of files.

Fig. C5.2 : Different types of files
1. Regular files  2. Directories  3. Character special files  4. Block special files

1. Regular files
- Regular files contain user information. The byte sequence, record sequence and tree structured files are examples of regular files.

2. Directories
- These are system files for maintaining the structure of the file system.

3. Character special files
- Character special files are related to input/output and are used to model serial I/O devices such as terminals, printers, and networks.

4. Block special files
- These are used to model disks.

- All regular files are normally either ASCII files or binary files. ASCII files contain lines of text. Each line is terminated by either the carriage return character or the line feed character; MS-DOS uses both. The lines can be of different length.
- The great benefit of ASCII files is that they can be displayed and printed as they are, and any text editor can edit an ASCII file. Furthermore, if a large number of programs use ASCII files for input and output, it is easy to connect the output of one program to the input of another, as in shell pipelines. Binary files are different from ASCII files.
- Listing a binary file on the printer gives an incomprehensible listing. Usually, binary files have an internal structure known to the programs that use them.
- One example of a UNIX binary file is an archive. It consists of a collection of library procedures compiled but not linked.
- Another example comes from the TOPS-20 operating system. There, recompilation of the source file is carried out automatically when the user wants to execute the object file and the source has been modified since. This guarantees that the user all the time runs an up-to-date object file, and it saves the waste of time in executing an old object file.
- To achieve this automatic recompilation of a modified source file, the operating system must have the capability to make a distinction between the source file and the object file. The OS should also be able to check the creation time and the time at which a file was last modified. In order to choose the right compiler, the OS must also have the ability to determine the language of the source program.
- Fig. 5.1.1 demonstrates the UNIX version of an executable binary file. Although this file is just a sequence of bytes, the operating system can execute it only if it has the proper format. The file is divided into five sections: header, text, data, relocation bits, and symbol table.
- The very first field in the header is the magic number, which identifies the file as an executable file. It facilitates preventing the unintentional execution of a file which is not in this format. After the magic number come the sizes of the various pieces of the file, the address at which execution starts, and some flag bits. Following the header are the text and data of the program itself, which are loaded into memory and relocated using the relocation bits.

- The symbol table is used for debugging.

Fig. 5.1.1 : An executable file (header with magic number, text size, data size, BSS size, symbol table size, entry point and flags, followed by the text, data, relocation bits and symbol table)

5.1.6 File Structure

Q. Explain the different techniques to structure the files.

- Following are the three ways in which files can be structured.

Fig. C5.3 : Three ways of file structure
1. Unstructured sequence of bytes  2. Sequence of fixed-length records  3. Tree structure

1. Unstructured sequence of bytes
- In this structuring, the operating system treats the content of a file as a sequence of bytes. Any meaning must be imposed by user-level programs.
- Both UNIX and Windows use this approach. This structuring provides maximum flexibility: any type of content can be kept in the file and, as per the convenience of the user, any name can be given to the file.

2. Sequence of fixed-length records
- In this approach a file is treated as a sequence of fixed-length records. Each record has its own internal structure. A read operation on the file returns one record and a write operation appends or overwrites one record.
- Many operating systems based their file systems on files consisting of 80-character records when punched cards were in use. Programs read input from files in units of 80 characters and wrote output in units of 132 characters (the width of line printers), keeping the remaining 52 characters blank.

3. Tree structure
- In this organization, a file consists of records organized in a tree structure. All the records do not necessarily have the same length. Each record has a key field in a fixed position.
- The records in the tree are arranged in the sorted order of the key field, to permit quick searching for a particular key.
- The main aim of this type of organization is to search for a record on a particular key; the next record can also be searched. The operating system decides where to add a new record; the user is not concerned with this operation. This type of file is broadly used on large mainframe computers and is still used in some commercial data processing.

- Figs. 5.1.2(a), (b) and (c) show the above three kinds of files. Fig. 5.1.2(c) shows the tree of records of a student file arranged in sorted order of roll numbers.

Fig. 5.1.2 : (a) Byte sequence (b) Record sequence (c) Tree

Disadvantage of supporting multiple «'• Opc


® this type of file access, process reads ajj the
structures
m a We in order one record after other, starting
If multiple file structures si „. For at the beginning-
operating system Ita sU uctures, the
while accessing, skipping of any record or readmg
SlngVsS- — oul of order is not possible. This access method
convenient for storage medium such as magw
wpe to a certain extent than disks.
Zonally, it is «
operating system supjre (he smJcni re of The reading of the next portion of the file is carried oat
. read operation and automatically file pointet is
.SonTXs X-itcan.eadto
moved forward. The appending of new information to
severe problems. L end of the file is carried out by write operation «
„, advances to the end of the newly write
Opening systems like UNB. M
support file structures which are least portion.
mffl treats each file as a sequence of 8-b.t bytes an 2. Random or Direct access _________________
operating system does not cany out the understanding -- Access (Random acc 7|
Q.
of these bits. As a result of this approach, maximum
flexibility but little support is offered. When use of disk started for storing files, it became
order to interpret an input file to the suitable possible to read the bytes or records of a file out of

structure, application should have included its own order. Il is because; disks allow random access to any
file block. *
code. On the other hand, all operating systems must
support minimum one structure of an executable file. It also became possible to access records by its key

Due to this support, the system will be capable of


be read in any order are called random access files.
loading and running the programs.
They are required by many applications such as
Sy 11abus Topic : Access Methods ______ database systems.

If railway customer calls u p and wants to reserve a seal


f 5.1.7 Access Methods on a particular train, the reservation program must be

(June 15) able to access the record for that train directly instead
of reading hundreds of records of other trains first.
Q. Explain different file access methods.
MU - June 2015. 10 Marks For the random access method, the file operations must
be modified to include the block number as a
Following are the different access methods for files.
parameter. Thus, we have read n and write n, w erc n

Different access methods is the relative block number. Actual absolute


for files '
address of the block is different.

1 . Sequential access The beginning of the relative block number is fr0 ®


address 0. Then next block number is 1 and so 00
2. Random or Direct access although the absolute disk address of the first blo i
14045 and second block is 3 1 9 L
3. Other access methods
The relative block numbers permits the opera
Fig. C5.4 : Different access methods for files system to take the decision about location of
the file and facilitate the user to stop from acces5
*4 L Sequential access
file system portions which are other than b*
Q. Write short note on File Access (sequential portion.
QCCGSS/e

Scanned by CamScanner
Si >m (MU - Sem 4 - it
rati
5-7
« read operation gives the pc Storage Management

tory Systems

s«k. U>e file can be read sequentially f rom


a
struct n e l C v e l directory is the simplest directory
- P° sltlon
- all th<*refiles. This
*)S directory structure,is one
single directory alsodirectory contains
called as a root
Other access methods
w Since on early personal computers, only one user
me h<xL i can hu
working, this system was more general. Fig. 5.1.3
* * ' ** "' ™ top of a random- ows single-level directory system containing five files,
tecess method- These methods generally involve the owned by three different users P, Q, and R User P has two
construction of an index for the file. files, User Q has two files and R has one file in the
index has pointers to the various blocks. The directory.
in the file is searched by searching the index

Root directory
tbc
jpd to find desired record.

$evend factors are important in choice of file


organization- These are QJ ( R

0 Minimum access time


Fig- 5.13 : Single-level directory system
0 Ease of update
Advantages
0 Economy of storage
- It is simple to implement.
□ Simple maintenance
Locating files become faster as there is only one place
o Reliability
to look.

Syllabus Topic : Directory Structures Limitations


If single user has huge number of files kept in single
5.1.8 Directory Structures directory, it becomes difficult to remember the name of
each file.
0. Explain different methods for defining the logical
If more than one user keeps the files in a single
Structure of a directory.
directory, then different users may give the same
■ Directories keep track of files. Directories are itself names to their files, violating the rule of uniqueness of
files in many system. Systems store huge number o f names.
files on large capacity disk. In order to manage all 4 5.1 .8(B) Two-level Directory Systems
these data, we need to organize them.
Two-level Directory System overcomes the limitations
This organization involves the use of directories.
of Single-level directory. In this directory system, a
Following are the most common methods for defining
private directory is given to each user. When a user
logical structure of a directory. refers to a particular file, only his own directory is
searched.

As different users directories are different, the same


name given to the files do not interfere each other.
There is no problem in giving the same name to the
files in different directories.

Single user directory have a compulsion of having all


files unique name. While creating the file for particular
the operating system makes confirmation about
U
hether another file of that name exists in the same
: Common methods for defining the JopcM, directory or not. To know this existence of file
structure of ■ directory

Scanned by CamScanner
Storage Mi
5-8
rating System (MU - Semjjn

the user name ano u


oseIS with a k
I. is not satisfactory “®e of
zrx “ -
flles . Even on a singled personal computer, it
convenient.

-♦ 5.1 8(C) Hierarchical Directory Systems

directories and its files.


wa
r 1 Root directory their files together in logical Ys - A
student fa
example* might have a collection of files related

User d recto
S
P Q A First collection can be the group of flles related to one
subject a second collection of files related to second
X X /"< ( s i (?) * ----- User files
pj © © © 09 © subject and so on. Here, some way is required to group

Root directory
Fig. 5.1.4 : Two-level Directory Systems

Above design multiuser computer or on a simple


network of personal computers that shared a common
S User directory
file server over a local area network. In this system
some sort of login procedure is needed.
s $ 1 4— User aJbdrectoiy
- When a user attempts to open a file, the system knows Q

which user it is in order to know which directory to


search. Files are system programs such as loaders, S
Q I S S
assemblers, compilers, utility routines, libraries.
Loader reads these files and executes when the right User subdirectory

commands are given to the operating system. s S - tl»r fite

- These commands are treated by command interpreters


Fig. 5.1.5 : Hierarchical Directory Systems
as the name of a file in order to load and execute. In
case of two level directory systems, this file name A tree of directories is the solution to store such group
would be searched in the current user directory. of files. A tree is the most common directory structure-
If system files are kept in each user directory then In this approach , each user can have as naty
problem can be resolved. This solution leads to directories as are needed so that files can be groups
wastage of memory as each user directory contains the
together in expected ways.
copied system files. If special user directory containing
all system files is defined the above problem can be
The tree has a root directory, and every file 1°
solved. system has a unique path name. The approach is

Advantages
and S which belongs to different users. Users Q $
- Solve name collision problem as every user has have created the subdirectories.
separate directory.
As user can create random number of subdirectories-’
- Independent user gets isolated from each other.
offers a commanding structuring tool for us
Limitations organize their work. This i s the reason, nearly 311
If the users are co-operative, that is, working OQ modem file systems are organized in this approneb-
common task, then some system does not allow Advantage
accessing the other user’s files. In systcm ,
With a hierarchical directory system, in addih01’
permitted, one user must have the ability to name a fife
their files, users can access the files of other us ‘

specifying its pathname

Scanned by CamScanner
m (MU “ Sem 4 .
if _____ -~ _ _ ?_ 5-g
psth Naffi®® * ........ __ Storage Management

* 0" fi*e SyStem . ° rEanize '1 “ ee of dlr directn * f


° V e 311 w
°ddng of system calls for
sse d by specifying the path name. Paft «*»• « Consider _ _ ____i
_ _ _. r n rrv
1
* f typesThesearesh0WDinFi 8-C5.6 can
Directory operation*

Type* of path name»~~~| 1 . Create

2. Delete
-♦ 1. Absolute Path name
3. Opendir

2. Relative Path name


4. Closedir

5. Readdir
Fig. C5.6 : Types of path names
6. Rename
I. Absolute path name
7 . Link
[t consists of the path from the root directory to the
file. . 0. Unlink

, The meaning of the path /usr/myfolder/myfile is that, Fig. C5.7 : Directory operations
[he root directory contains a subdirectory user, which
"fr 1. Create
in turn contains a subdirectory myfalder, which
contains the file my file. Empty directory is created excluding dot and dotdot
(. and ..), which are placed there automatically, by the
4 1 Relative path name
operating system,
- ft consists of the path from the current working “4 2. Delete
directory to the file.
If only single dot and double dot (. and is present in
- A user can assign one directory as the current working directory then it is treated as empty, single dot and
directory, in which case all path names not beginning double dot within directory cannot usually be deleted.
at the root directory are taken relative to the working
-> 3. Opendir
directory.
Directory can be read to list all the files from it
' If the current working directory is /usr/myf older, then Directory should be opened before reading just
the file whose absolute path usr/myfolder/myfile can opening and reading the file,
he referenced simply as myfile.
-> 4. Closedir
to UNIX operating system the elements of the path are
After reading completes, a directory should be closed
separated by /. In Windows the separator is t The same
to free up inner table space.
Pat h /usr/myfolder/myfile in UNIX, is specified
5. Readdir
Widows as \usr\myfolde myfile- * t h c fir ‘ chara€tcr
of
Path is separator then path is absolute path. If during It returns the next entry in an open directory, readdir
Wor
Lng, program always needs a particular ft always returns one entry in a standard format
should use absolute path to access that file froin irrespective of possible directory structures is being
CUntnt used.
working directory.
6. Rename
directory Operations __ Directories can be renamed just like files. Rename
Q/ 2' ----------------- —-----------
renames the directory.
VWdifferenl directory operatio __— -----—
<+ 7. Link
"“wage the directories, with
Due to linking, a file appears in more than one
a C n ore
* ' dissimilarity from syste .. directory. Link system call specify an existing file and
to
Wm calls for files.

Scanned by CamScanner
Storage
5-10
Operating System (MU
, implementation of mounVunmourrt
its path name, and creates a link File systems can be on same or different devices.
to the name specified by the path. In this fashion t . rhrsc file system can be viewed as single Fll
the mount these j &ie
system Files can be moved between different director of
8. Unlink different file systems thereby appearing as a singfe *

Unlink removes the directory entry. If the


flle system. The mounting can be realized in UNIX
unlinked only exist in one directory then it ts deleted mount table. Along with other information, the mount

from file system. contains following information.


I-node of root node of the file system to be mounted.
Syllabus Topic : File System Mounting
I-node of the directory in which mounting is to
carried out. This directory is called as mount point.
5.1.11 File System Mounting
The mount table allows the kernel to traverse from one
As discussed earlier, a ms* file system to other. The user remains unaware about it
partitions and each partition can have its own file system or under the feeling of single file system.
there can be multiple disks connected to system. So t ere implementation of link/unlink
can be multiple file systems. It is possible to copy a file

with file systems as shown in ri the Link needs to be created from R l to P12. To this shared
and rectangles shows directories. file now two absolute pathnames exist to this shared file as
Disk 1 Disk 2 shown in Fig. 5.1.8. These are /P/P1/P12 and /R/RUP12 In
directory R l the same file can be called by different name
sav R I . In this case, pathnames /P/P1/P12 and /R/R1/R1
would be the same.

p) CO)

Fig. 5.1.6 : Two file systems on two disks

Suppose we have to copy file R from disk-2 to a P1 P2


directory A under disk-1. After mounting directory structure
becomes as shown in Fig. 5.1.7.
PU)
Disk 1

Fig. 5.1.8 : Linking

in directory R L The same file is P12 in P l as show" *


Fig. 5.1.8, The linking is done by kernel with link sv
call as follows.
Fig. 5.1.7 : Directory structure after mounting

Now the command cp/B/R/A/R copies the resultant file execute link system call. Error messages w®
R. In UNIX a facility is given to mount or unmount a file at ■outputted if user is not super user.
pre specified time. The mount and unmount system calls are The pa* name /P/P1/P12 is parsed to obtain
used for this purpose. A system administrator does the
mounting at the time of booting of UNIX. A large number
i-node record is accessed and number of lb* ’
of file system are supported by UNIX for mounting.
incremented.

Scanned by CamScanner
Operating System (MU - Sem 4 - FT) ___
, Storage Management
in order to have disk consistency, the i-node
n
back to disk. the file, then the new blocks will be listed only in the
The path name /R/Rl is parsed to obtain the i. no<fe „ F directory of the user, who is actually doing the append.
the directory. The content of the directory is then read The changes will not be visible to the ocher user, thus
free entry in directoty is located and it makes .sore that overwhelming the intention of sharing. Following are
two methods to handle this problem.
file with name S I not present in directory. If pre
enor message is outputted and exit operation is ° Instead of listing the disk blocks in directories, it
performed. is listed in data structure related to the file, just
like i-node in UNIX. The directories would then
The entry of the Jinked file is done in /R/R( W jth S1
point just to this data structure.
fife name and i-node number stored in step 2.
jj directories are then written back. ° When Q link to S’s file, system create a new file
a
of type LINK and enters this file in Q’s directory.
The new file contains the path name of the file to
5 1 12 Working of Fifes
which linked. In our example, it is S . When Q
_ While working on a common task, many users need to read from linked file, the operating system note
share their files. If these shared file appears that it is type LINK file and search the name of
simultaneously in different directories belonging to the file and read it This is called symbolic link
different users, then it will be convenient to all users. approach.
, Fig. 5.1.9 shows that one file of user S appears in the I The disadvantage of first method is that, when Q links
to the shared file of user S, i-node records owner of file
as S and link count becomes 2 which was initially 1.
Now system knows 2 directory entries pointing to the
Root directory
file. If S removes the file and clears the i-node, then
Q’s directory entry will point to ihvalid i-node. If later
on i-node is assigned to other file then Q will point to
the wrong file.
Q R S
Due to link count in i-node, operating system will
come to know that file is in use but unable to find all
the directory entries of that file. The solution to this
Q R S S S problem is to remove S’j directory entry, but do not
clear the i-node and keep it with count set to 1.
Now Q is the only user having a directory entry for a
Q Q Q S file owned by S. Here system will assume that file
belongs to S until Q decides to remove it. When file is
Link S S deleted, count goes to 0 and no longer belongs to any
user.
Shared file
The disadvantage of symbolic link is that, extra
Fig. 5.1.9 : File system combining a shared file operating cost required. The LINK file having the path
must be read, and then parsing of path is done and
Problem In sharing the files
followed, component by component, until the i-node is
Suppose directories actually do contain disk addresses searched.
then, in above example, copy disk addresses of S s
All of this activity may require a considerable number
directory should be made available in Q’s directory of additional disk accesses. Every symbolic link
hen the file is linked. requires additional i-node, as is an additional disk
The user who actually perform append operation, the block to store the path.
appended block appears only in his directory* In Although path name is short, the system could store it
e
mple, If either Q or S perform append operation to in the i-node itself for optimization purpose.

Scanned by CamScanner
Store Mai
5-12.
Qnnmtino System (MU - S e m j J T ) Execute
Syllabus Topic : File Sharing
but copying it is not allowed.
Re4id
4.
5.1.13 File Sharing
a rend access right allows the use to teed the flle
q ( Write nota on ‘access rights’ C.y work, together with copying and execute

5. Appe nd
The user can only add data to the file at the end.

6. Update .
5.1 .1 3(A) Multiple Users — — t update
The upo 1
access right allows
. file’s thedata.
user to

7 Changing P rolec,,on
the access rights assigned to ote,
The file system shou. J users . It
permitting widespread file s * Th
offcr a many ‘ “ Normally this right is associated only with

is also a responsibility “ ' he way of file accessing. In X of the file. The owner is allowed to extend
alternatives in order to co ( Kss permissions Xht to other users in some systems
general, it important to grant partly g. Delete
The user is allowed to delete the file from the fik |
OT n£W
,S I0
T L P
Xess rights has been used. The

y
to user form a hierarchy, wih
These rights assigned to usei

t None access right

2. Knowledge ■JI &


The owner is granted all of the access rights and may put
rights to others.
3. Execute
The different categories of users to which access c
■+[ 4. Read given are : -

5. Append
Specific user : Individual users having its own us
- User groups : A group of users who belong 6
6, Update *
particular user group. The system keeps tract
7, Changing protection membership of user groups.
AU : Authenticated users of this system/
8. Delete
<r Simultaneous Access
' . .a fro
Fig. C5.8 : Examples of access rights
- It is necessary to protect the share *
-► 1. None access right simultaneous updating from more than one u
more than one user granted access to appen
To put into effect this access limit, the user is restricted a file, the operating system or file managed

from reading the user directory that contains this file. must implement some way to restrict it.
A brute-force scheme permits a user to
X Knowledge
complete file when it is to ,be updated,
With knowledge access right, the user can find out
possible to lock individual records during upd

the owner for added access rights.

Scanned by CamScanner
3(B jRemote File Sy s t em 5-13

Explain in tietail operafo ,


a For
Sharing of data in the form of fife PS name Systcm
P ss b l name to
* name to (prp , ■ <DNS )
(0communication over the n e t w w k X ° ' * du e
NS etn address Befo
’ ai]Jwftor ft p W a , ' « wide use of
For P*asuse7to
netted implemented is transfer «Ns r ltst hines m ,n exchange files between
»etwork. Undc..
0 distri Systefns use
using ftp <
* 8
"dually b”Jtfcd-in f Ormjir . a many types
Int 051 lnclhoi , Ye((ow
rodu ccd 5 s methods Yellow pages are
fhe second method is distributed fii e MiCrTbcuo . t_
ote (DFs te " wMiCro5y3tern
- - - - j as network
known
PFS, it rectories are abj e >- in use. i t ’*'» WS and iindustries adopted its
' « nlral SWw of
local machine. In World Wide Web (WWW " ■ use r names, host
bK>Wscr
is required to get access to the remote n authem- information, and others. But
Hes
essence a wrapper for ftp j s Uscd tQ ’ and in me hOdS arc not secured.
NIS + ‘ than USW1
uX"more secure NIS ” y

supports both anonymous and authenticated
indUSUy
_ jn anonymous access, users do not have D°1
Access as a secure
remote machine, they can transfer file? * 0
™ °n di Protocol (LDAP)
‘ tnbure d naming method.
makes use of anonymous file CXch Active directory is based
°n light weight-directory
exclusively.
SU Microsystems
with ' " comprise LDAP
. A tighter integration i s required in DFS bet ween the S and is used for user authentication in
Edition to system-wide retrieval of information, for
machine accessing the remote fifes the
c example availability of printers.
providing the files.
Organizations can use single distributed L D A P to store
* The Client-Server Model all user and resource information for all their

- Remote file system permits m o u n t i n g of one or more computers. This allows secure single sign on for all the
users.
file system from one or more remote computers.
Machine storing the file i s server and machine from Failure Modes
which file is accessed is client. As there can be failures in local file system due to

A server can offer the s e r v i c e to multiple clients, and a various reasons, there are more failure modes for
remote file system. This is due tq the interference in
client can request to multiple servers. This depends on
communication over network and complexity of
tow client-server facility is implemented.
network systems.
The server generally states the available files on a
There can be interruption i n network due to hardware
volume or directory level.
failure, poor hardware configuration, or networking
Client is specified by network name or IP address. As implementation issues.
tose can be spoofed, unauthorized user can access the
A single failure can interrupt the flow of DFS
server. commands.

Secured authentication is required using encrypted Due to crash or failure in server remote file system
fe will be unable to reach. In this case, system can either
y>- During exchange of these keys, unsecured access
10 terminate all operations to the crashed server or delay
*• also possible.
,n use the operations until the server is again reachable.
UNIX and its network file system (NFS), the “
Implementation of the failure semantics is a part of the
IDs
on the client and server must remote-fte-system protocol.
’“‘hentication. In mismatch case, the server w
o f data can be resulted due to termination of
able
to decide access rights to files.
Chuted Information Sy»»n« from failure requires maintaining cras the
Recovery hesstate
but

bibuted naming service is also used ieffl01e


information at
rt it has remotely mounted exported file
“ftifted access to information to suPP°

"‘“’Putation.

Scanned by CamScanner
Storage

6-14
Operating System (MU

systems and opened files. NFS use stress DFS


But this stateless DFS is not secure. 1
,
table.
5.1.13(C) Consistency Semantics

- File systems supporting file sharing are evaluated on an important criterion, namely consistency semantics of shared data : when one user makes a modification in a file, when the other users sharing it are able to see it.

UNIX Semantics

Following consistency semantics are used by the UNIX file system.
- Other users who have the file open can immediately view writes to this open file by a user.
- One mode of sharing permits users to share the file pointer; if one user advances the pointer, it affects all sharing users. Here, a file has a single image that interleaves all accesses, in spite of their origin.

Session Semantics

Following consistency semantics are used by the Andrew file system (AFS).
- Other users who have the file open cannot immediately view writes to this open file by a user.
- Once a file is closed, the modifications done in it are visible only in sessions starting later on. Already open instances of the file do not reflect these modifications.

Immutable-Shared-Files Semantics

- For immutable shared files, if the creator of a file declares it as shared, then changes cannot be made in it. The name of an immutable file may not be reused and its contents may not be changed. Therefore, the name of an immutable file indicates that its contents are fixed.

Syllabus Topic : Protection

5.1.14 Protection

Q. Explain file system protection in detail.

- Protection mechanisms refer to the particular operating system mechanisms which are used to protect information, files and resources in the computer. A protection mechanism has to be developed for any complex system in which sharing of resources is involved.
- Policy means deciding whose data should be protected from whom. Protection is needed against the mischievous, deliberate violation of access restrictions by a user. It is also necessary to make sure that every component running in the system uses system resources only as per defined policies, to ensure reliability of the system.
- Policies for making use of the resources of a computer system are put into effect by the mechanisms offered by protection. Some of the policies are defined by the individual users of the system to protect their own files and programs.
- Different installations put into effect different types of policies, and different types of applications have different types of resource use. The policies designed should allow for this resource-use need, and the resource use of an application can change over a period of time. Hence, instead of relying totally on the operating system, the application programmer should also use protection mechanisms to protect the resources created against misuse.

5.1.14(A) Types of Accesses

Protection mechanisms give controlled access by restricting the types of file access that can be done. Access is permitted or denied depending on a number of factors; an example of such a factor is the type of access requested. Following are the types of access that may be controlled :
- Read : Indicates reading from the file.
- Write : Indicates writing or rewriting the file.
- Execute : Indicates that the file must be loaded into memory and then executed.
- Append : Write new content at the end of the file.
- Delete : Delete the file and reuse the freed space.
- List : List the name and attributes of the file.
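As a small illustration of controlled access of this kind (purely a sketch with hypothetical names, not the mechanism of any particular operating system), the access types listed above can be encoded as bits of a rights mask, and a request granted only when every requested type has been granted :

    #include <stdio.h>

    /* Hypothetical bit flags for the access types listed above. */
    enum {
        ACC_READ    = 1 << 0,
        ACC_WRITE   = 1 << 1,
        ACC_EXECUTE = 1 << 2,
        ACC_APPEND  = 1 << 3,
        ACC_DELETE  = 1 << 4,
        ACC_LIST    = 1 << 5
    };

    /* A request is permitted only if every requested bit is present
       in the rights granted on the file. */
    static int access_permitted(unsigned granted, unsigned requested)
    {
        return (granted & requested) == requested;
    }

    int main(void)
    {
        unsigned granted = ACC_READ | ACC_LIST;   /* rights held on some file */

        printf("read   : %s\n", access_permitted(granted, ACC_READ)   ? "allowed" : "denied");
        printf("append : %s\n", access_permitted(granted, ACC_APPEND) ? "allowed" : "denied");
        return 0;
    }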

- Other operations, for example renaming a file, can be controlled in the same way.
- Protection can be provided at every level of the system. For example, many systems implement protection by means of a user-id and group-id (UID and GID); all processes started by a user carry the same UID and GID and hence receive the same rights to objects.
- Note that a user having read access to a file can also cause its contents to be copied, printed, and so on, so controlling read access indirectly controls those operations as well.
- The kernel part of the system has access to a different set of objects than the user part, so copying data from the user area to the kernel area moves it into a different protection context.
5.1.14(B) Protection Domains
- A system contains many hardware objects, such as memory segments, disk drives, printers and magnetic tapes, and many software objects, such as processes, files, databases or semaphores. Each object is referenced through the operations that can be performed on it; for example, WAIT and SIGNAL on semaphores and READ and WRITE on files.
- A system should enforce a mechanism to restrict processes from accessing objects for which they are not authorized. The mechanism should also ensure that processes are restricted to a subset of the legal operations when that is needed.
- A set of (object, rights) pairs is called a domain. Each pair denotes an object and some subset of the operations that can be performed on it. One domain may correspond to one user and specify the permissions given to that user for certain activities. Consider the three domains shown in Fig. 5.1.10.
- It is possible for the same object to be in multiple domains, with a different subset of the [Read, Write, eXecute] rights permitted on it in each.

Fig. 5.1.10 : Domains (Domain 1 : File A [RW], File B [R]; Domain 2 : File C [R], File D [RWX], File E [RW]; Domain 3 : File F [RW], Printer A [W], File G [RWX])

- At a particular time, every process executes in some protection domain. In that domain there is some set of objects it can access, and for each object it has some set of rights, shown in square brackets.
- During execution, a process may switch from one domain to another; the rules for domain switching very much depend on the system.
- The system keeps track of which object belongs to which domain, and of whether access to a given object in a particular manner from a specified domain is permissible, by maintaining a matrix with one row for every domain and one column for every object. The matrix for the above domains is shown in Fig. 5.1.11. To check a request, the system makes use of this matrix and the current domain number.
- It is possible to switch domains in the matrix model by considering that a domain is itself an object, with an Enter operation on it. For example, processes in domain 1 may be allowed to go into domain 2, but once in domain 2 they cannot go back to domain 1. In this model, executing a SETUID program in UNIX corresponds to a domain switch.
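A minimal sketch of how such a matrix could be consulted (the rights filled in below are invented for illustration and do not reproduce Fig. 5.1.11; treating a domain as an object with an Enter right models domain switching) :

    #include <stdio.h>

    #define NDOMAINS 3
    #define NOBJECTS 4          /* objects 0..2 are files, object 3 stands for "domain 2" */

    enum { R_READ = 1, R_WRITE = 2, R_EXEC = 4, R_ENTER = 8 };

    /* matrix[d][o] holds the rights that domain d+1 has over object o. */
    static const unsigned matrix[NDOMAINS][NOBJECTS] = {
        /* File A            File B   File C    Domain 2 */
        {  R_READ | R_WRITE, R_READ,  0,        R_ENTER },   /* domain 1 */
        {  0,                0,       R_READ,   0       },   /* domain 2 */
        {  0,                0,       R_WRITE,  0       },   /* domain 3 */
    };

    /* A request is checked against the row of the current domain. */
    static int allowed(int domain, int object, unsigned right)
    {
        return (matrix[domain - 1][object] & right) != 0;
    }

    int main(void)
    {
        int current_domain = 1;

        printf("write File A         : %d\n", allowed(current_domain, 0, R_WRITE));
        printf("read  File C         : %d\n", allowed(current_domain, 2, R_READ));
        printf("switch into domain 2 : %d\n", allowed(current_domain, 3, R_ENTER));
        return 0;
    }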

Fig. 5.1.12 : Protection matrix with domains treated as objects; each cell lists the Read / Write / Execute rights a domain holds on File A, File B and File C, and an Enter right in a domain column permits switching into that domain

Disadvantages of access matrix
- In fact, storing the matrix is not preferred practically due to its large size, and it is also sparse; storing a very large, mostly empty matrix is a waste of disk space.
- The two practical methods to store the matrix are, first, storing the matrix by rows or by columns, and then storing only the nonempty elements.

In the first method, an ordered list of all the domains that may access an object, and in what way, is kept. This list is associated with each object and is called an Access Control List (ACL). In Fig. 5.1.13 three users (domains) X, Y and Z and three files F1, F2 and F3 are shown; each domain corresponds to exactly one user, and each file has an ACL coupled with it.

Fig. 5.1.13 : Three files with their access control lists (users X, Y and Z)

- For F1, the first entry indicates that any process begun by user X may read and write the file, and the second entry indicates that any process belonging to user Y may read the file. Other than these, accesses by these users are prohibited, and all accesses by other users are prohibited as well. Note also that the rights are approved by user, not by process : any number of processes owned by the user receive the same rights.
- For F2, X, Y and Z can all read the file, and Y also has a right to write it; apart from these accesses, other accesses are not permitted.
- File F3 seems to be an executable program, since Y and Z can both read and execute it; Y can also write it.
- The above example demonstrates the fundamental form of protection with ACLs. More sophisticated systems are frequently used in practice. Other than read, write and execute, many other rights are given in practice. Apart from having ACL entries for a single user, many systems support groups of users; groups have names and can be included in ACLs.

Syllabus Topic : File System Implementation

5.2 File System Implementation

- Secondary storage such as a disk is designed to hold a large number of files permanently. The file system resides on secondary storage. Implementation of the file system deals with the following.
- The way files and directories are stored.
- The management of the disk space.
- How to ensure that everything works unfailingly and efficiently.

- Disks provide the mass of secondary storage on which a file system is maintained. The transfer of data between the disk and main memory is performed in units of blocks. Each block has one or more sectors; sectors vary from 32 bytes to 4,096 bytes depending on the disk, and usually a sector is 512 bytes.
- A block can be read from the disk, and after modification it can be written back into the same place.
- Any given block of information on the disk can be accessed directly, which makes it simple to access any file either sequentially or directly; switching from one file to another requires only moving the read-write heads and waiting for the disk to rotate.

5.2.1 File System Structure

Q. Explain file system structure.

- The file system contains several layers; each level in the design uses the features of lower levels to create new features for use by higher levels.
- The lowest (bottom) layer consists of the device drivers and interrupt handlers that exchange information between main memory and the disk.
- The file-organization module keeps information about files, their logical blocks and physical blocks. It can translate logical block addresses to physical block addresses for the basic file system to transfer; this is achieved by identifying the type of file allocation used and the location of the file. Logical block numbers for each file's blocks are numbered from 0 or 1 to N, while the data is stored in physical blocks whose numbers do not match the logical numbers, hence a translation is required. The free-space manager is also a part of this module; it tracks unallocated blocks and provides them when needed.
- The logical file system manages metadata information. Metadata comprises all of the file-system structure, excluding the actual data (the contents of the files). The logical file system manages the directory structure in order to give the file-organization module the information the latter needs, given a symbolic file name. It maintains file structure via file-control blocks (FCBs); an FCB holds information about the file, such as ownership, permissions and the location of the file contents (in UNIX this structure is the i-node). The logical file system is also responsible for protection and security.
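To make the file-control-block idea concrete, a hypothetical FCB might be declared along the following lines; the field names are illustrative only and are not those of any real system (in UNIX this role is played by the i-node, described later in this chapter) :

    #include <time.h>
    #include <sys/types.h>

    /* A hypothetical file control block (FCB) : the per-file record
       kept by the logical file system. */
    struct fcb {
        uid_t   owner;           /* file ownership                 */
        gid_t   group;
        mode_t  permissions;     /* access rights                  */
        off_t   size;            /* file size in bytes             */
        time_t  created;
        time_t  last_modified;
        long    first_block;     /* location of the file contents  */
        long    block_count;     /* number of data blocks in use   */
    };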
Fig. 5.2.1 : Layered file system (application programs, logical file system, file-organization module, basic file system, I/O control, devices)

Advantage of layered structure
- The advantage of the layered structure for file-system implementation is that it minimizes duplication of code. Multiple file systems can use the I/O control and sometimes the basic file-system code; each file system can then have its own logical file-system and file-organization modules.

Limitations
- The limitation of layering is that it can bring in more operating system overhead, which may lead to decreased performance.

Syllabus Topic : Implementing File System

5.2.2 Implementing File System

Q. Explain the implementation of file system in detail.
5.2.2(A) File System Layout

- The disk is divided into one or more partitions. Sector 0 of the disk is called the Master Boot Record (MBR) and is used to boot the computer. The end of the MBR holds the partition table, which gives the beginning and ending addresses of each partition; one of the partitions in the table is marked as active.
- When the system is booted, the BIOS reads in and executes the MBR. The MBR program locates the active partition, reads in its first block, called the boot block, and executes it.

Fig. 5.2.2 : File system layout (the entire disk space holds the MBR with the partition table and the disk partitions; a partition holds the boot block, superblock, free space management, i-nodes, root directory, and files and directories)

Boot block
- For homogeneity, every partition begins with a boot block, even if it does not contain a bootable operating system; in future an operating system might be loaded in that block.

Superblock
- When the system boots, all key parameters regarding the file system are read into memory from this superblock. Usual information in the superblock includes a magic number to identify the file-system type, the number of blocks in the file system, and other key administrative information.

Free space management
- To keep track of free disk space, the system maintains a free-space list which records all free blocks.

I-node
- The information regarding each file in the file system is kept in a data structure called an i-node. For each file there is one i-node.

Virtual file system
- Most operating systems must simultaneously support several types of file systems. One method of implementing this is to write separate directory and file routines for each type; instead, modern operating systems organize the implementation around a virtual file system.
- The file-system implementation then consists of three key layers. The first layer is the file-system interface, based on the open(), read(), write() and close() calls and on file descriptors. The second layer is called the virtual file system (VFS) layer, having the following functions.
- It separates file-system-generic operations from their implementation by defining a clean VFS interface.
- The VFS offers a means for uniquely representing a file throughout a network. The VFS is based on the vnode, a file-representation structure which includes a numerical designator for a network-wide unique file; for the support of network file systems there is one vnode structure for each active node (file or directory).
- Thus, the VFS differentiates local files from remote ones, and the local files are further distinguished as per their file-system types. The activation of file-system-specific operations is carried out by the VFS to handle local requests according to their file-system types, and it even calls the NFS protocol procedures for remote requests. File handles are built from the relevant vnodes and are passed as arguments to these procedures. The third layer of the architecture is the layer implementing the file-system type or the remote-file-system protocol.
- Object-oriented principles are used to design the VFS. It has two modules : a set of definitions that states what file-system objects are permitted to be, and a software layer for manipulation of these objects.
- Following four major object types are defined by the VFS :
- An individual file is represented by an i-node object.
- An open file is represented by a file object.
gating Sent 4 . r y .
e01ire
At, file sy stem '
sUP erblock object. rapt U
iadiVidual direct
A" entry , * the Storage Management
b- eDt ry object. " nted b

, t of operations are <fef med f(Jr °r Writi"g daU‘ Theicfore thesc


object of one of « £ type Qf
hle
Thee object
ypes Perblock
tionmble. Points te ,
Sysi KCt represenls
«m. The files of entire file
ord of addresses of the actual f Un .
l n8 sysle
Perbloclt ohj ' ' tn kernel keeps a single
function table. These fu nctions is kept dcvice
system and f mounted as a
mem
j’ fiped set of operations for that object. P ssnt connect ** *** nttw
orked file system at
VPS software layer need not re
W hat kind of object it i s deali »>« eatlitr
access to i-nodL object is to offer
c
aa operation on one of the fii e “ »" carry
The VFs
bj
poking the right function f rom lhe objL
JWt 5 syste ,7 e !Kh n0<iC by
Unction M-n< T '/ ' “ “ n ’ qUe *”*■
P 1OC1
“alogowto .“' '* “ S i node
-
4
ftte vfs remains unaware about whether . . the Particular i-node number by requesting
LKk
presents a networked file, a disk -node nu . ° bjeCt t0 rCtUrn thc
’ n o d e with
number,
°r 4 flk The
tion table conta rk ry object represents a directoty entry that may
le function to read the file. ° ntai ns
de the name of a directory in the path name of a
The VFS software layer will invoke
Me (such as /U5t) or the actual file.
without worrying about the way of reading tfe ”
Sv11abu8
'— Topic : Directory Implementation
objects. AO 1-n vujcvl i s a data structure that holds
5,2,3 Dir
pointers to the disk blocks. The actual file data is «ctory Implementation
present on disk block. Explain approaches for implementation of
directory.

an open file. In order to access the i-node' s contents, Efficiency, performance, and reliability of the fde
s ste
the process first has to access a file object pointing to | V m depends on which directory -allocation and directory -
the i-node. management algorithms are used. Following are the
algorithms that are used for directory implementation.
The i-node objects do not belong to single process.
Whereas file objects belongs to single process. The i- I
5.2.3(A) Linear List
node object is cached by the VFS to get better
Every file must be opened before it read. User provides
performance in near future access of the file.
path name to access the file. When a file is opened, the
Therefore, although process i s not currently using the
operating system uses the same path name to find the
file, its i-node i s cached by VFS.
directory entry. The directory entry contains the
• All cached file data are linked onto a list in the file s i- information needed to find the disk blocks of the file.
node object. The i-node also keeps standard Every file also has its attributes. Different file
information about each file, for example the owner, attributes we have already discussed. Directory entry
size, and time most recently modified. stores these attributes of a file.

Directory files are treated in a different way from other In the simple design, a directory consists of a list of
files. The programming interface of UNIX define, fixed-size entries, one per file, containing a file name

fcveral operations on directories, for example creati of fixed length, a structure of the file attributes, and

deletion, and renaming of a file in a directory. one or more disk addresses(up to some maximum)
denoting where the disk blocks are. Following
’ system calls for these directory c>P<=rations do «* Bg . 5.2.3 shows this design.
need of the ,.»r nnen the files concerned, u

SysteTT1
: Sem 4 - IT) 5-ao
Flit Fte 1 oniry
Attribute*
■era
attributes Entry lor first file
n
B y
findin
attributes
File 2 entry ineffi'
attributes Fite a Bttdbutu Progt
D *1 more
attributes FH Sentry tengiti
of di
Fig. 5.2-3 file
end
Otter fite antrUm nar
to 1
Fig. 5.2J: In-line handling of i Ol)
Fc
, n de!i In this
In this example two file inventory
entries ’ ° ’ directory
Termination of each fi| P ; u *
— ui a tlle 1 5 stored .n .
lg- 5 2 4
shows this design. - - —- — • _____ __ _.apu uiv'unvFU inql
a variable-sized free space i s ’
directory. It may happen that, ne x t file t0
not be accommodated in this free s piiCe ° C ° !S*-
Solution on this issue is compaction
of
Another problem is that a if S ing le (

spreads in several pages, page f aull


reading of a file. curs

refers to an i-node

Above two approaches are corresponds to MS-DOS


which have short file names of 1-8 characters and
Entry tor Ural file

n Pointer to ftte r, nany


____Fife 1 altributea
Ponter to lite Vs mure ’ "V
_____Fite 2 attributea f

Otter lite entries


71
file names were 1-14 characters, including any I n v s ]
extensions. n t 0
_ y X a*
All modem operating system support longer x -
b
and
variable-length file names. Following are the
two
approaches to implement it. Other file name*

The simplest solution is to keep file length of


Fig. 5.2.6 : Long name handling using heap

can be used in which maximum 255 characters is The second approach to deal with variabte-kntih
names is to allow directory entries of fixed length and

wasted.
In this approach, fixed size of all directory entries is The drawback discussed above is defeated here- When I
an entry is removed, the next file entered will * a' ! '
section, which begins with length of the entry, and then fit in freed space. The management of heap wilted •
followed by data with a fixed format which are burden here and page faults cannot be avoided as 1C
previous method.
AU the designs discussed, suppons searching
t0 et
directory sequentially from beginning
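As a small sketch of the fixed-size linear-list design discussed above (the entry layout and sizes here are illustrative, not those of any particular operating system), a directory can be held as an array of entries and searched from the beginning :

    #include <stdio.h>
    #include <string.h>

    #define NAME_LEN    14      /* illustrative fixed name length        */
    #define DIR_ENTRIES 16      /* entries per directory, for the sketch */

    /* One fixed-size directory entry : name, attributes, first disk block. */
    struct dir_entry {
        char name[NAME_LEN];    /* empty string marks an unused slot */
        unsigned attributes;
        long first_block;
    };

    /* Linear search : scan the entries from the beginning until a match. */
    static struct dir_entry *dir_lookup(struct dir_entry *dir, int n, const char *name)
    {
        for (int i = 0; i < n; i++)
            if (dir[i].name[0] != '\0' && strncmp(dir[i].name, name, NAME_LEN) == 0)
                return &dir[i];
        return NULL;            /* not found */
    }

    int main(void)
    {
        struct dir_entry dir[DIR_ENTRIES] = { { "notes.txt", 0, 42 } };
        struct dir_entry *e = dir_lookup(dir, DIR_ENTRIES, "notes.txt");

        printf("%s\n", e ? "found" : "missing");
        return 0;
    }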

rating System (Mu .
!n» J}
fi
tiding le name
Ual
. deot if directories arefo " Wn
H
jamming the linear list m er k.
U,e Storage Mana merit
' “ ““ - To create hut ta>
to Ca
of out to ’ ui Ching to <he Unkcd
>« o £ ,t
file bus the same name. A f[e , tha ? ' link d
“■'O-Xo » is out “ “ ' Ust is
of the directory fa add(Jd '• » new eotrv "“"J director ter d>
“' s««h
„ «d file « “««hed in di tion of ‘
f0 this file JS released. y
space «>e

teus
The file entry can be marke . * ‘°
o What are th ---------
Assign a special name to fit, , r
-t9r ce FiteT tX 8
"0 ’ 10 " me
®d8
o 1o
iho >r
wn . blank name. ’ '‘ CXan)
Ple an an.
Cf
' OS$
0
A used -unused bit
uu
in each1 ve aCCeSS 10
maintained. cntI flexibility in ‘ which we can
* can
be
6 file, * Op,
»»“'n e * **-*■ of
files. There should
0 Portree directory
► the onofdto »' « space. the
ptIn
may , A Adoption is to copy the last entrv >um and eff « should always allow for the
diiecfo
into the released location and to red D >o the f lles W u r™”" °f S
P“' so that quick
the directory. A linked list can also be ’ ength of USed
majorly in different UpPO,Ud
blowing methods are
tOry
t0 decre -----------— grating systems
ntry the time required to delete a file. ase
Rte
Allocation Methods I
ring . Solution to this problem is use of
m each
directory as hash search is e ffici
JA) Contiguous Allocation
,CIU COrn
sequential search. pare to
<B > Linked List Allocation
5.2.3(B) Hash Table
file
“♦ <c > Linked Ust Allocation
JSin
9 a Table in Memory
~ In this method, a linear l i s t is u s e d t0 stQre
entriesJn addition to this, a hash Jata structure is Z
(D) Indexed Allocation
used. The hash table uses a value computed from the
file name and returns a pointer to the file name in the (E) l-nodes
linear list As a result, i t c an significantly decrease the
Fig. C5.9 : Allocation methods
directory search time. Insertion and deletion are also
quite simple. It i s require to make some provision for 5.2.4(A) Contiguous Allocation

Q. Explain contiguous file allocation method with its


location. advantages and disadvantages.
The main problems with a hash table are its usually In contiguous allocation each file takes up a set of
faed size. The hash function i s dependent on size of contiguous blocks on the disk. Disk addresses define a
hash table. Consider a linear-probing hash table linear ordering on the disk. If each block size on disk is
lnj[
‘ holds 64 entries. Using hash function, file names 2 KB, then 30 KB file would be allocated 15
converted into integers from 0 to 63, For creating consecutive blocks.
3
file, we must increase the size of directory hash Contiguous allocation of a file is defined by the disk
ta e
* for example to 128 entries. Consequently, we address of the first block and length (total blocks
fei u rc occupied). If the file is m blocks long and starts at
) ' a new hash function which will convert file
JajI]es location i, then it takes up blocks i, i + 1, i + 2,
to the integers ranging front 0 to 127. to this
Case
’ I' is necessary to reorganize the present directory
to reflect their new hash-function values. The directory entry for each file specifies the address
of the starting block and the total number of blocks
chajned
solution is to use a ’ 0 '' e, Tr* eac b allocated for this file.

of an individual valwj?£ - ----------


Directory entry in Fig. 5.2.7 shows that file A starts at 5.2.4(B) Linked List Allocation
block 0 and i t is 3 block king occupying block 0, block
J and block 2. Similarly, file B starts at block 6 and it Q. Explain linked list file allocation
advantaged and disadvantages.
is 5 block long and so on.

Advantages
contiguous allocation. In linked allocation, eacj. ? of
It is simple to implement because, only information a linked list of disk blocks. The scattered disk J'1 it
needed t o keep track on files block is starting block disk can be allocated to the file. The directory C[ ?
and length (total number of blocks occupied). a pointer to the first and last blocks of the fife
For sequential access, the file system memorizes the Creation of a new file is simple. For this, it i s
disk address of the last block referenced and. when io create a new entry in the directory. The
required, reads the next block.
To access block k of a file directly which starts at f r
this file is zero. * °
block i, we can immediately access block i + k. Thus,
both sequential and direct access can be supported by The pointers need to be followed from block to ki
in read a file.
contiguous allocation.

0 0 0 0 00 □ El
Performance i s high as whole file is read from disk in Directory

0 0 0 0 0 □ E] 0
single disk operation. To reach to the first block only
one seek i s needed.
|B B E B B B B B

00000000
13
E] 0EIB B 0 0 0
00000000
00000000

Dimdnry

Fite Star, Woe* Length

A 0 3
8 6 S
C 14 B

Disk apace

Fig. 5.2.8: Linked List Allocation of disk space


Disk space

Fig. 5.2.7 : Contiguous Allocation of disk space Fig. 5.2.8 shows shows linked list allocation for file A
The file A of five blocks start at block 13 and continue
Disadvantages
at block 16. then block 21, then block 24, and finally
When allocated file is deleted, continuously allocated block 29. Each block holds a pointer to the next block.
blocks to the file become free. For new file to allocate, These pointers are unavailable to the user.
these blocks might not be sufficient. If new file size is
small than previously allocated size, some holes - User can see the block size excluding the size required
remains which are not enough to allocate any new file. to store the address. Accordingly, if each block size is
As a result, the disk ultimately consists of files and 512 bytes, 4 bytes are required to store address, then
holes causing external fragmentation. the user sees blocks of 508 bytes.
In some cases, it is simple it is simple to find out space Advantages
is needed for a file to be allocated. For output file this
- In linked list allocation, every disk block can be used
size estimation becomes difficult. For CD-ROM,
which is not possible in contiguous allocation. Hen«>
contiguous allocation is reasonable as all file sizes are there is no external fragmentation except for internal
known in advance. fragmentation in the last block.
Reading a file sequentially is simple.
It is not necessary to compact disk space.
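The difference in how the two schemes above reach logical block k of a file can be shown with a small sketch (the structures are hypothetical; the numbers reuse file B of Fig. 5.2.7, which starts at block 6 and is 5 blocks long) :

    #include <stdio.h>

    /* Contiguous allocation : a file is described by its start block and
       length, so logical block k maps directly to physical block start + k. */
    struct cont_file { long start; long length; };

    static long cont_block(const struct cont_file *f, long k)
    {
        return (k >= 0 && k < f->length) ? f->start + k : -1;
    }

    int main(void)
    {
        struct cont_file b = { 6, 5 };              /* file B of Fig. 5.2.7 */

        printf("logical block 3 of B -> physical block %ld\n", cont_block(&b, 3));
        /* Under linked allocation there is no such formula : reaching logical
           block k requires reading the k blocks before it to follow their
           stored pointers, which is why direct access is inefficient there.  */
        return 0;
    }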

ten
fl .-.■•-rating Sy? ? (MU - Sem 4 ,
JU
nta es
ZJ, dva 9
Z
o Z inef6cient to s u Pport a m acce
retrieve to block i t the onemf ss c *
Storage Management

The Chain need to U


jj e. it® actual
achmi
Jllow to gel ofTT*’
—'
pointer is lost or damaged f t Je ’ in memorv 5 1 flle
> but the chain

z essibJe W0uld
jj ', ' Direct any d Sk
’ * f Howed
° without
A tfroog pointer can be chosen due
“ evstem .
' 'system Software
' breaka° bUg
« the e ofX'fik d St in8 address (ime
*cr block
a result of w W ch. J r, of disfc
n though file size XT
r
in unking into the ftee- space | i s t . tun, Dlsa
vantage

Ubie
work. ™USt ** *n “*™»y all the time to make it

linked list allocation, each block 2 )j exed


location
slore
information, therefore entire bkvt *°
n With advante
used to store file content. This linX.J! °‘ ______ 9 es
by keeping pointer inform " J?
n Ub,e
# bich always remains m memory, list alloclf A1IOCatiClT1 Table AT
) in memory, linked
SUPPOrt raild0Tn access
table m °il ’ but the ent
1 m memoi
odcr and file B occupies block 3, 8, 4, 14 »uon. all the pointers > all
are the
kepttime. In location
in one indexed
same order. The File Allocation Table (FAT) is show
io Fig- 5«2.9. - - a j i i u c a djock assigned to each file and this
mdex block holds the disk block addresses of that
pbydcal Neri block particular file. There is a pointer from i’h entry in the
index block to the 1th block of the file, it means n*
0
entry in index block holds the address of the 0 th disk
12
maintained in directory.
2 In order to locate and read the i tb block, pointer in the
i index-block entry is used.
8
File B begins here
4 14 Directory

Fite Index node


File A begins here A
r.......

Index block 15

DisksPace

Indexed Allocation of disk space


Fig. 5.1W:

5-2.9 : Linked List Aitoa** 100
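A sketch of following a chain through an in-memory file allocation table of the kind shown in Fig. 5.2.9 (only file B's chain 3 -> 8 -> 4 -> 14 from the example is filled in; everything else is illustrative) :

    #include <stdio.h>

    #define NBLOCKS  16
    #define FAT_EOF  (-1)       /* marks the last block of a file */
    #define FAT_FREE (-2)       /* marks an unused block          */

    /* fat[b] holds the number of the block that follows block b in its file. */
    static int fat[NBLOCKS];

    /* Return the physical block holding logical block k of a file that starts
       at block 'start', or -1 if the file is shorter than k+1 blocks.        */
    static int fat_block(int start, int k)
    {
        int b = start;
        while (k-- > 0 && b != FAT_EOF)
            b = fat[b];
        return (b == FAT_EOF) ? -1 : b;
    }

    int main(void)
    {
        for (int i = 0; i < NBLOCKS; i++)
            fat[i] = FAT_FREE;
        fat[3] = 8; fat[8] = 4; fat[4] = 14; fat[14] = FAT_EOF;   /* file B */

        printf("logical block 2 of file B is in physical block %d\n",
               fat_block(3, 2));                                   /* prints 4 */
        return 0;
    }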


Initially when the file is created, all pointers in the structure


index block are initialized io nil. When the i“ block is first i
wntten. a block is obtained from the free-space manager. The mode of access and type of the fife.
Identifiers needed for group access and ho i
Advantages file- , 1
Indexed allocation supports random access. The time at which the i-node is just now
There is no external fragmentation, because any free
block on the disk can be allocated as per request for
more space. The file size which is in bytes.
Disadvantages The series of block pointers.
The pointer overhead of index block is more with Total number of actual physical blocks Occtaiea
compare to the pointer overhead of linked allocation. file. It also comprises the blocks holdin
8
pointers and attributes.
Indexed-allocation schemes suffer from some of the Total number of directory entries which refers tllA
same performance problems as does linked allocation.
- The flags which user or kernel Can
UNIX Example demonstrate the uniqueness of the fife.
The structure of i- node An arbitrarily chosen number allocated to •
Q- Explain the structure of i- node in detail. every time that the latter is assigned to a new fit
as generation number of the file. The same numfe*
The information regarding each file in file system is used to identify references to deleted files.
ept in data structure called i-node. For each file there is one
- The data blocks size pointed by the i-node. Usually it
the same as, but sometimes larger than, the fife sy
block size. FreeBSD has a minimum block rf
node. Every file is prohibited by precisely one i-node. 4,0% bytes (4 Kbytes).
Extended attribute information size.
nodes. The exact i-node structure differs from one
Zero or more extended attribute entries.
implementation of UNIX to another. The file attributes as
well as its access rights (permissions) and other control I-node table, or i-node list present on disk, contains ife
information are stored in the i-node. The FreeBSD t-node i-nodes of all the files in the file system. When a fik b
opened, its i-node is carried into main memory and stored in
a memory-resident i-node table.
Fig. 5.2.11 : The FreeBSD i-node and file structure (the i-node holds the mode, owners, timestamps, size, direct block addresses (0) to (12), single indirect, double indirect and triple indirect pointers, block count, reference count, flags, generation number, block size, extended attributes size and extended attribute blocks; the indirect pointers lead through single, double and triple indirect blocks to the data blocks)
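The multi-level pointer scheme in the figure can be made concrete with a small calculation : given a logical block number, decide whether it is reached through a direct pointer or through the single, double or triple indirect block. The counts used below (12 direct pointers, 1,024 addresses per indirect block) are assumptions for the sketch, not FreeBSD's exact parameters.

    #include <stdio.h>

    #define NDIRECT      12        /* direct pointers in the i-node (assumed)      */
    #define PER_INDIRECT 1024L     /* block addresses per indirect block (assumed) */

    /* Print which pointer of the i-node leads to logical block n. */
    static void locate(long n)
    {
        if (n < NDIRECT) {
            printf("logical block %ld : direct pointer %ld\n", n, n);
            return;
        }
        n -= NDIRECT;
        if (n < PER_INDIRECT) {
            printf("single indirect block, entry %ld\n", n);
            return;
        }
        n -= PER_INDIRECT;
        if (n < PER_INDIRECT * PER_INDIRECT) {
            printf("double indirect block, entry %ld, then entry %ld\n",
                   n / PER_INDIRECT, n % PER_INDIRECT);
            return;
        }
        printf("triple indirect block\n");
    }

    int main(void)
    {
        locate(5);          /* direct          */
        locate(500);        /* single indirect */
        locate(200000);     /* double indirect */
        return 0;
    }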

ting system (MU s

/'nodes
As
t
is u3Bd
to alfog
a? Store Mana merit
fitdex-node) is Ute s
hicl locks usm
Abates and dtsk address, of , * ' tototu, *hi ch ' ’77 ® *** liat, The disk
s to ftIe
j_ ;<t associated with * ~ _ l — . files blc 2 P e lis or directory
T UCh bl ks
“ totekof
' ° .««-e a neJ r *" ’ “ -
nd dv
ailabiliiy o , ' fcM s
Pacc lira 1. searched for
gets Spbce lf
*■ r e™_ ■ • ’“node located toZ ™ fooad. free space
Th m rc 41
' ° ttoneficial wit h “■ bt f,le sp
«* Im F±“ W F te
' ” d
“ “ «>* « from the
ttoJted list allocation which requ - " ‘“"Pare ; SPS due 10
to gets i„ c | ud ?r “ “» deletion of file is
e
.ration table m memory. ntine <
file *“ W«h«!. " ,rcCSP
*“ l i S*
Qth
h
> the Q. ~- managemant
Fite Attributes
direct
space managemant.
" Address of disk block 0 to
implement ' ---------------------
e
file. arc used SpaCC liSt folIowi
ng two methods
" ArMreas of disk block 1
which
Address of disk bkxik 2
Me1
bod» of free •pecs
Address of disk block 3

Address of disk block 4


W Bit Map or Bit Vego? |
Ailress of disk block 5
_(B_) Linked list of disk btortT]
ytdtffBSS of disk block 6
r
stem F
Address of disk block 7 fc- C5.10 • Methods of free space
£ of
Address of block of pointers .** 5.2.5(A)Bit Map or Bit Vector
Disk block
containing
bit map method in detail .
additional
djsk addresses
In this method, every block on disk signified by a 1 bit
the Fig. 5.2.12 : Example of i-node IW
block rs represented by bit I and allocated block
: is
- ine size u* me djiucauun raoie i s proportional to the
Suppose block numbers 3, 5, 7, 9, 10, 15, 18, 20, 23,
number of blocks that d i s k contain. If disk has k blocks
25, and 28 are allocated and rest of the blocks are free.
file allocation table will have k entries,
The free space map would be as follows. Allocated
- in contrast, the i-node scheme requires an array in blocks are shown in bold space
memory whose size i s proportional to the maximum
11101010100111101101011010110............
flumber of files that may be open a t once.
With a bitmap, it is also possible to keep just one block

fa last disk address i s reserved for the address of a becomes full or empty. Since the bitmap is a fixed-size

ofak containing more d i s k block addresses. data structure, if the kernel is (partially) paged, the
bitmap can be put in virtual memory and have pages of
it paged in as needed,
, Mock addresses. If file grows beyond this limit, this
liSf Advantages
disk address point to disk block containing
diuonaj disk addre sses. ft is relatively simple and requires less space, since it
uses 1 bit per block.
|a
<ty bu» Topic : Free Space Managements
It is efficient in searching the first free block or k
successive free blocks on the disk.
ee Space Management

* hen flle is deleted, the disk space becomes free-


should be reused to allocate to new files- ys—

Disadvantages memory. This meth • #(|(Jrcsses „ f, ee blocks
of
4
The whole vector is needed to keep J wRtten to disk In mis meth* . * Actually . the first n-1 of thcse-biZ' *
Otherwise it would be inefficient- flrstf,
occasionally for recovery needs. _omiSin g for ■he 7« t block stores the addresses of %
ftee. The ® As compare d wtth standard
- Keeping bit map in main memo Je for larger blockS
smaller disks. It is no t necessarily P
' r.he addresses of a large number of «
disk size. -Si-""
-♦ 5.2.5(B) Linked List of Disk o __ _ _ 5.2.5(D) Counting
q. Describe linked list of disk block rnetlhod--------------J

— In this method of free space management Lf l“aUon°* e


" C nl,gU
° ’ US
disk blocks are linked together .
The pointer to first free block is kept in partl L.
location on disk and cached it in main memory advantageous to keep the address of the first ftee »
first block contains a pointer to the next rec
block, and so on.
and the number (n) of ftee conttgueus block, 2
follow the first block. The entry ln
Zn our bit map example, we would keep a pointer to fi
free block which is block 0. Block 0 would contain the
denotes a disk address and a count.
pointer to block 1, which would point to next ree In this approach, more space is needed for each en|q 1
block, which is block 2 and so on. In Fig- 5.2.1 with compare to a simple disk address. But adva,
shown, block 0, 1, 2, 4, 6, 8, 11, 12, 13, M, lb. 17* 1 »
21, 22, 24, 26, and 27 are free blocks.
generally greater than 1.
This method is not efficient because, wre must read
each block in order to traverse the list, which requires Instead of linked list, a B-tree can be used to
extensive I/O time. But advantage is that, traversing these entries so that, efficient lookup, insertion,
the free list is not regular operation. deletion can be carried out.
Usually, the operating system simply needs a free
block s o that it can allocate that block to a file, so the Syllabus Topic : Efficiency and Performance
first block in the free list is used.
The FAT method includes free -block accounting into 5.2.6 Efficiency and Performance
the allocation data structure. No separate method is
needed. Q. Explain various techniques to improve efficiency
Free space list head and performance of secondary storage.

As disk is slowest memory component, it may become


bottleneck in system performance. There are various, ]
techniques to improve efficiency and performance of I
a 10
secondary storage.

15 5.2.6(A) Efficiency I

16’ 7 19 The disk allocation and directory algorithms


mainly decides the efficient use of disk space. Tte
20 22 |~23
preallocation of UNIX inodes is done on a voW1
This results some space occupied by inodes altho111
24
27
disk is empty. Preallocating the inodes and. scatterin?
em across
the volume improves file syst -
P ormance. The UNIX allocation and
gonthms maintains a file’s data blocks near that

block to lessen the seek time.

rating System (Mu .

fn a file’s or data

a Stars Mana ment


done or not. Some “P of " <o t l0lht 8rou
sys(em file d«k w ;* Porbi,‘°clcs that logically belong
u
date." Which is useful t0 U5et the - las| *be of e
* Perth " '*’>« Cpt i n
mory for the
#as last read. °* *>>»»
VCra)
c
*C|ic, *8orithms exist
As this information is maintain
8w
' t41
' writing th* fin'd in the directory > Astern. j( Sls
c*he?*
Block
file * “ essential l0 *henev£*
if section Changes, and if '
If lbc b| * ,k“’“ U t a disk «" *
back oul to 3110
be disk, as o ** ’
U
place only in block chunks. cX d. n
S On
disks ta£ “d then coXd t “ f™ »«> the
sqcoes Aft
situation, if any time a file is o "« pract >. sive r Xs f T" “ “ “
h,lr,U blOdt c
its directory entry must be read and wri £ " "8 «*n ln
' d f ro >n thc X'e “
cache there
a,S
, s necessity cannot be efficient for r " °' ’KSKs ar y l o ha v? n Fbe lar8e " umber ° f bl «ks. It is
Which
jessed frequently. Therefore, We mu . 1 bl
«k. Hash , hl technique to locate
benefit against its performance cost wh its
blocks >1 cache ' ° r 8“ i “«® is used to organize
same hash
file system. Tn general, all fj] e a
form,n
to be considered for its effect Elision n chain
cham g a linked list so the
can be followed,
00 efflc,e
performance. &cy
1010 1 fUU Cache some block
needed ‘ “
. The size of pointers used by systems is rem VCd iU d tran8fer to dlsk lf U has
been m r- ° ’ to ’
system uses 16 bit then length of fi le "L 'f SltKC being brou
circumst • 8 ht in - ™s
KB. For 3 2 bi! pointers, file length “ Hke 3 118 411(1 a l 1 the
normal page replacement algorithms
P ®' such
* as FIFO.
restricted to 4GB. Some systems use large size penmen, U>
Wopnate to replace block from cache.
which occupies more d i s k space. Since cache references are relatively occasional, it is

5.2.6(B) Performance

structures.
Accessing the memory is faster with compare to
FiwitfLRU)
accessing the d i s k . For accessing single word, the memory Harfitetta 1

access is on the order o f a m i l l i o n times as fast as disk


access. Due to t h i s differentiation i n access time, many file
systems have been designed with various following
optimizations lo improve performance.

Various optimizations to Fig. 5.2.14 : The buffer cache data structures


Improve performance
The Fig. 5.2,14 shows a bidirectional list running
through all the blocks in the order of usage, with the
1 . Caching
least recently used block on the front of this list and the
most recently used block at the end of this list. When a
2. Block Read Ahead

3 Reducing Disk Arm Motton on the bidirectional list and put at the end. In this way,
t L.RU order can be maintained. It seems that LRU
is Arable, treasons are;

performance i-node like critical block is read into the cache


Stl
ified but not rewritten to the disk, a crash will
QtW 8 ral
1 ” u • the most 8 the nlesystem in an inconsistent state. H the i-
Block cache or buffer cache is relation to

irating System (MU ■ Sem 4 - IT) 5-28

node block put at the end of the LRU chain, it may be b


sedor holds 512 y tes ' lhe s
X««n coul? / oti,
blocks occupying 1 sectors but assign disk st _
to the disk.
units of 2 blocks occupying 4 sectors. j,
Some blocks, such ak i-node blocks, are not often
_ | D this case, cache would still use 1-KB b ,
referenced two times within a small interval. ]n order
disk transfers would still be 1 KB but readin „ ' **
to consider the above two factors regarding crashes and sequentially would minimize the number O f J ? r “e
file system consistency, modified L R U scheme is factor of two, considerably improving petfood by
possible. The blocks can be categorized as i-node variation on the s a m e theme i s 10
take Kc J'
blocks, indirect blocks, directory blocks, full data rotational positioning. When allocating bloc, °f
blocks, and partially full data blocks. Blocks that will system tries to place consecutive blocks i n a flle ' "k
most likely not be required again quickly go on the same cylinder.
front, instead of the rear of the LRU list, so their - Those systems which use i-node like data structq,..
that two disk accesses are required to access ev ' ‘n
1
Blocks that might be required again soon, such as a short file : one for i-node and other fo, J"
Generally i-nodes reside at the start of disk
partly full block that is being written, go on the end of
average distance from its blocks about half thc
the list, so they w i l l stay around for a long time. If the
o f cylinders.
block i s necessary to the file system consistency and it
- Because of this reason long seek is
Performance can be improved by placing i-nodes
middle of disk which would minimize the average
2. Block Read Ahead between the i-node and the first block by a

The performance of file system can be improved by two.


Other approach to improve the performance is to

file system create n* block, it checks if n + l * is the disk into cylinder groups, each with its 0Wn /

any i-node can be selected, but a try is made to search a


cache taking into consideration that, it would be
block i n the same cylinder group as the i-node U
required in future.
nothing i s available, then a block in a nearby cylinder
This read ahead policy does not work for random group is used.
access and only works for files that are being read
sequentially. The read block in cache would force to Syllabus Topic : Recovery
replace the useful block and i t might not be required.
The file system can keep track of the access patterns to 5,2.7 Recovery
each open file. I f next time file read is suppose
sequential and not random, the read ahead strategy is Q. Explain file system recovery in detail.
useful otherwise not.
Directories and Files reside on disk or remains in main
3. Reducing Disk Ann Motion memory. System crash may lead i n data loss or in data

Apart from the caching and block read ahead technique inconsistency. F o l l o w i n g are the ways to handle these
to improve performance, other technique is to issues.
minimize the total disk arm motion by keeping blocks
that are likely to be accessed i n succession close to Ways to
each other, i f possible in the same cylinder. In the handle Recovery Issues

following ways performance can be improved


I f bitmap technique i s used to record free blocks, and (A) Consistency Checking
the entire bitmap is in main memory, then it is simple
(B) Backup and Restore §
to select a free block as close as possible to the
previous block
■+[7c) log Structured File Systems
Grouping o f the blocks can be done w i t h free list.
N o w . system w i l l keep track of disk storage in groups f
Ifr C 5 . I 2 : Ways to handle recovery issue*

In order to get the pe rf 06
''"9
information, i t is cach in
,n
updated copy of dir *=Pt in T k di,
Whereas co °" n
’Uon g. .Slcrm Mana
snt
He
by <«* is old copy as cached n
tially written to disk -y info o„ ICe da
A
cssen On
ch - Thii c y 1 arc don P COpy of a11

w
Of there can be loss of co “Pdate is «*« l y
?2 “ X bKI
a ‘"p
also of TO operations Cacl ” •Mneyek? - day 3 „.. ” «"" far files
curn . J. * an d h *
he n, y
cbinc crash. As a res ult ' Wor . anq nc
* ’‘*"*•‘•“ 7. w unai d
‘ y
-'•
y
drones of opened f 1]es " Uion, £ af "' a
in W toC S>
Ieadi the file system i n M - ** lost Th the ’ ia * W b,e * “*
,o « Of7 k 7 Wri "'" ° W tb
'
in '"“ Un g re r ' restore' „ aCk “ P mwli ’ to
'“•
state the actual state of son* s ’ Won ghL-k wil
h the full . . c o m P'«' disk by
$
compare to present in t . "** 's differ _ ,n “lis
t ,rec,
Regularly, during reboot specie °ry „ *hh
e e mi<Wte 8round
sU,e «*‘X2 22*
and correct disk 'neons, s l e n £" tfe
consistency checker p r o g ' * *
8 dn,s
in MS-DOS. "ft/sck in »f
I.
The directory structure data and
a
com ared b the
£ P * » prt>g rams , ? 1(° Cks Of > disk
fl
KCU rs then tries to correct any i n c o r any “■:r—
f .onsj
On . WK match
The disk space allocation and f ree ’
stenci '«( if existh
7 effce ' algorithms can be
algorithms direct the checker proe/’”'*'' '-''’! problem
'becking. The imD | eni , , “* of consistency
to dis
type of problems and how C0' resulted in l KbaMd
SUCC ?™«U«n-or| ( nLd <™T’ °
correcting them. -M The disk file syste joumallng) fi| e systems.
dit SmKturcs for
_ in case of linked allocation, a link i s «'«y
structures f re ,7? . «“>pl K
its next block.
block to its block. ThereforT -*"' 6 My can leads' £ T* f« FCB
reconstruct the file from the data blwfc to failure of system. '"consistent state due to
After this
directory structure is created again ’
10 ab
place whenTise" 2?* ° Ve 5tn,ctures ■»
. In case of an indexed allocation, loss n f , a- incorporated in n~. « 8 "based techniques was not
0PeCab
entry is difficult to handle. This i s because thTdaZ results in " S systems ■ The creation of file
blocks are unknown about the other data blocks X onra Z SX Ur te
“ Wi,hin,heBle

Therefore, For read accesses, UNIX caches directory modificauonof dhretory struck, XoXrTs
entries. On the contrary, for write leading to space flwrefore free
th
these ui counts then
blocks are decreased. If crash occur for all of
these
allocation, or other metadata updating, is carried out
changes can be interrupted and structures can leads to
synchronously, prior to t he corresponding data blocks
inconsistent state. These structures can be fixed after
are written.
recovery. But in such scheme following difficulties
may arise .
*5.27(8) Backup and Restore
The inconsistency may be beyond the repair. The
If failure of magnetic disks occurs then loss of data consistency check made may be incapable to recover
should not be forever. Back up of data from disk to the structures, which can lead in loss of files and yet
° t storage devices can be carried out by system whole directories. In order to get rid of the conflicts, a
human involvement is required. Availability of system
programs. Recovery of any data is now just the
will not remain to the user until the human directs it
ring the data from backup.
about how to proceed. For the consistency checking of
amount of copying can be minimized by making huge size data can require hours of clock time to check.
of information from each file's directory entry. I If log-based recovery techniques are applied to
k** up program i s aware of the time of the last bac. P f le-systetn, metadata updates then the above problems
1
file, then file's last write date in the be solved. NTFS and the Veritas file system use

that the file i s unchanged since that


“ no need to copy the file again-

ratii
Stora
5-30 !
These directories then acC e i i «(C)N
A particular directories along
the log-based recovery techniques All changes done i remow clients. so in fact entire directory 'U
metadata are written sequentially to a log- J
-t *.
Thcsc written changes to I he log considered to be fig-
committed. After this the control of execution o
system call can return to the user process which f 4 _rTT’ 1 jiKe *
it carry on execution. In the meantime, replaying o t
log entries are carried out across the actual file system “ ed. Clients can access exported d i r e « ” n
them. When a client mounts a arto 1
structures. Because the changes are done, a pointer is
updated to show which actions have finished an The '
Xw »' NFS **• “ b“ omes ot

which are still unfinished. hierarchy and client can access thcse
Once transaction finishes the execution and committed directory. ehtn
local
nothing bm a circular buffer. In circular buffer writing
peet
takes place from start to end and when reaches at other
end it then continues at the beginning, overwriting operating systems on different hardware, envi * chef
older values as it goes. is heterogeneous. It becomes necessary thai reci
After the crash of the system, there can be no or any interface between the clients and servers be * u
™li
number of transactions in log file. I f the transactions defined.
present currently in log file were not completed to the 1 a
neiu co
file system, although operating system has committed client implementation and guarantee it
it t they must now be completed. The transactions can correctly with existing servers, and vice versa. Up*
begin execution from the pointer until the work is achieve this objective by defining two client- • at
finished so that the file-system structures stays in protocols. The mounting is handled by first NFS d
consistent state. protocol. n
If transaction does not committed prior to system crash
A client can send a path name to a server and request
and aborted then Any changes chat were performed to
permission to mount that directory somewhere in i
the file system must be undone. So the file system will
directory hierarchy. The location at which it is to bt
remain in original state and consistency will be
mounted is not specified in message from client to
maintained. The logging on disk metadata updates are
server.
to the on-disk data structures. The reason is. server do not care for this oi
mounting. If the path name is legal and the director
Syllabus Topic : NFS specified has been exported, the server returns a fife
handle to the client. The file handle includes fields
5.2,8 NFS
exclusively identifying the file-system type the disk
the i-node number of the directory, and secum
on Linux to join file systems of different computers into one
information. *
logical whole. Version 3 of NFS was introduced in 1994
L v wriie
NSFV4 war introduced in 2000 and offers a number of ° vnes m the mounted
di ectory or any of lls subdirectories use the file handle.
improvements over the previous NFS architecture.
i e or directory replication is not supported by NFS,so
5.2.8(A) NFS Architecture

Plainoperatonof netawkWesystem in detail. As a result, auto

NFS allows a number of clients and servers to share a file systems with system binaries and other files that
hardly ever change.
common file system. These clients and servers can b!
on the same LAN or in wide area network if served
S
'ocated geographically at Jong distance NFS of nfc isC ' 01used.an Clients
d access
purpose,
can send second
messages protocol
to servers to
K
every machine to be both a el
anipu ate dtrectories and read and write files. TM
simultaneously. ‘ent Md a server

Remote clients accesses the NFS taw. f


Z “e atlribUteS
' mode,
servers
directories. Every NFS serv exported aS
UNIX I ' rn<x '*'lcat *on - Protection mechanism of
un
‘a also used in NFS.

ifating System (M(j -

on
its Explain the lay Bfed
a-
s.
Fig- 5 2 1 5 shows NFS | ay 22flg_Management
layer is the system-caij layer ''’ .
'«s
«k» °P en - rcad
' “ d
close. Th ? is
’’Yer fa,
is (V
« "ten ittX d C"?
aft Vi
and inspect' the parameter. 'r Pars|„ '"m
te
The VFS layer maintains a m., * "* Ca ‘l
kee
' open elong with a W ' P one ,
enBy for open file. V- node5 1- , '"N fa,
le
local or remote. In case o f H whettl .. V'"»de
fil
is different and is supp,'!”' «, i n C f 115 «

r them . For local fi les . »'“<• to be Ration


lt£Orded since modem Linu, 7 d i-nJ
Syste e
(Dultiple file systems. ms Ca|
StJ
Pport
V-node plays the important rot I :
yered struct., *
d
■ sider system cafts i o a „ “> as an "“ww* hie system
The mount program "tount. »
administrator for / elc / rc) by the sy

tmy, the local director on the rertlo[e

mounted, and other information. " “ “ to SlQ-ngeStructure


, The mount program parses t h „ „ , d
re ■“Westlevelof “ ‘ en ‘ ar > slora
«' structures are the
directory to be mounted and fi nd o u [ . “* mote
NFS server on which the remote directo "’"‘e ° f the

then contacts that machine, asking f ’* localed - It . . ............ ------


or structure
the remote directory. a file handle for

, ff the directory exists and is available for rem , 5 3 1 6rviewof


' ' °'' Mass-Storage Structure
mounting, the server returns a file h.
PhyS,C!U f sc ndar
“d tertiary storag'devkes ° “ >'

passing the handle to the kernel.


5-3.1(A)Magnetlc Disks
- Ute kernel then constructs a v-node for the remole f
L Ex
directoiy and asks the NFS client code to create an r- -51__ P*ain physkai uaure |
The magnetic disk is used as main storage device in
your computer system, ft is magnetic type of storage
file handle. The v-node points to the mode.
device. Within one magnetic disk unit, many physical
- Each v-node in the VFS layer will ultimately contain disks are present. Each disk is known as a platter.
cither a pointer to an r-node i n (he NFS client code, or a Several metal platters are present within the magnetic
disk and these are coated with a special magnetic
pointer to an i-node in one o f the local file systems
material. The platters rotate many thousands of times
Ttos. from the v-node it is possible to see if a file or within a second for accessing the data. The diameters
tetory is local or remote. of platters are usually between 1.8 to 5.25 inches.
Magnetic read and write head are present which floats
it is local, (he correct file system and i-node can be
just above the surface of the platter.
ted. If h is remote, the remote host and file handle
Material used for making the platters is Aluminum,
te located. glass, or ceramic and two read/write heads are present,
one for upper and lower surface. Platters are arranged
in stack because of which the position of the read/write
heads often is referred to by its cylinder. Cylinder is

5-32
Other using n Paia on floppy disk. Its s,« is 3
pue to read and floppy disk. The di H
System
c
which is ppy is 3W inch. »l
ircd
the location of X i«q° *?
this . uS ed ro cover it. A metal h,
S m
disk cache mechan ' .
write to dWt J K
un i
“ , r,
" ' t
Hardplaa Read /Write window.
magnetic disk. Mag sepa rtby tes
used to c open ed me disk is in a
each partition can Ix n gte abyt * h spce d w** 11
capacity of is at M* u 60 ““"‘"TX d drive. A notch is used to write paC *
_ The spinning ££.£ specd of which floppy ‘ of floppy disk has a size of 51/<
disk is n* use. . cver y second. T js called disk. °t h6r
/Opacity Of 1.2 mb. It is a

to 200 notations for compu deS


data flow between dK m _ a€CeS s time, " . fastic plate coated with magnetic

5 .3.1(B)M«’9' ieticTapeS
- laln physical structure ot maanetic tape,.
disk can have d flies on
hea
ittencies of several milhsecon K possibility that the
e v e r y t h i n cushion of am h £ surface,
71tb b ta
head may make coot “‘ hat head may damage the
Sometimes it is also possible th
- ’
A fter head
‘ rmUt and have large capacity,
magnetic surface call ed rep | a ced because it r me is slow than that of RAM and magnetic
crash, the entire disk must tie rep Magnetic tape is thousand times slower for
cannot be repaired.
access compared to random access to disk. So <llsk j,
A removable disk permits different „
d]
more preferred secondary storage with compare to
mounted if needed. Removable J o avoid
contain one platter, covered in a plastic cas
damage while not in the disk drive. Floppy “’ sl£s for storing the infrequently used information. It is
inexpensive removable magnetic disks that ave a
used to transfer information from one system
plastic case containing a flexible platter. The head of a another. A tape is kept in a spool and is wound
floppy-disk drive generally sits directly on the dis rewound past a read-write head.
surface, so the drive is designed to rotate more slowly
than a hard-disk drive
in minutes. But once located, the write operation is
carried out with speed equal to speed of disk drive.
Spindle
track t Tape capacities are decided by particular type of tape
drive. Usually, tape storage capacity ranges between
Arm assemby
20GB to 200GB. The storage capacity is double for
Sectors
some tapes which are having built-in compression.

Syllabus Topic : Disk Structure

Cylinders Read-write
head 5.3.2 Disk Structure
Modem disk drives are viewed as large ooe-
Platter

Arm
Rotation is the smallest u n i t of transfer. Logical block size is
usually 5 1 2 bytes. The low-level formatted disks can
Fig. 53.1: Moving-head disk mechanism
have a different logical block size, for example
The storage capacity of floppy disk is only 1.44MB. It
is a small plastic plate coated with magnetic material.
The mapping of one-dimensional array of
Data is recorded in floppy disk in terms of in magnetic
blocks is carried out onto the sectors of the disk ' ij
spots. We can transfer the data from one computer to
sequential manner. On the outermost cylinder, in

the very first

. a
’Petti J age Mana m
fW conversion of log ical '""<»■ ,D ah
' preorcdcally mto d i s k k n Uln(wr O r <> e r
* ou o ra d ,hal funcl
' O n oveT
ppiriber, a track nurnbor ’"elude. * ,*0. haVi,, erent
8 a On<- ° ndUClOr c
°PPCr cable. It
tor n “ rober withi
" 'hat t, ac . "* '=yl. nd / yli
"*, 24 hil U
“ fabric
IO carry out thia u-a „ sllKK)n ’ •* a
8 netl U re an<
*orki, l '* th® 1>uis ot
g Most of the disks contain s ' '* ason s a . " *»
e
me mapping hides this °'" Mty ’
scotors from elsewhere on tlw \ . ’ti.uy ”• but A

0 The ntmber of sectors pe r ’KS,t


A W1 reraotel
on some dnves. i s not a n XX“’ re c
’ “
c
° n s t nt itlfcj" i n UNOJ " “" (WO interface
5 CIFS
Syllabus Topiu ; ui,), mX “ «ed by di ’ . ” ™ *i"0»»’
nach TCP or 5 n l ork l
"; w* ~ . the tn' UDP u uu-j'."''”for ' * ““chsd
_ orks Performing RPC over '
3 Disk Attachment
OTa
‘ * 8c « generally
On small systems, disk 5tora . dements the R irra
* wilh 5oftw

nterfaCC
ers via VO ports cailed as host * by that u J/? - NAS is a
storage-access
SRPCo
~ N AS off ers TCP/IP.
Juted file system this access is via In a tA
N to share a *lc fOT a
” the machinea on a
tached stora ote
# o ge. host called access similar! storage with simple naming and
s ,OCal
> =. * “d
having les. , nther hand, it is i e s s efficient and
st Ormante than TOme
°rage option direct-attached
" ISCS! is th
rCCCnl
protocol \ -attacbed storage
. Local VO ports are used to h
uses the ff lwork protoco1 lo
storage. Different technology is used by
tached '«iy the Sarpro . “| “
5 3
Desktop PC makes use of an VQ bus
-3(C) Storage- Area Network (SAN)
referred as IDE or ATA. Maximum of two drives per One limitation of NAS systems is that the storage I/O

1/0 bus is supported by this architecture. In SATA operations require more bandwidth on the data
network. Therefore, latency of network communication
protocol the c a b l i n g is simplified, SCSI and fiber
increases. This problem can be sensitive in case of
channel (FC) is used by high ended servers and large client-server installations where the
workstations. communication between servers and clients needs more
bandwidth along with bandwidth needed for
SCSI bus architecture has its physical medium as a
communication among servers and storage devices.
ribbon cable that contains 50 or 68 conductors. There
Hence, there is competition for bandwidth during
can be at the most 1 6 devices supported by SCSI on the communication which can lead to decrease in
bus. In general, the d e v i c e s contain one controller card performance.
in the host called as S C S I initiator. Devices also Private network is the example of SAN. It uses storage
includes up to 1 5 storage devices called as the SCSI protoco) to connect server and storage unit instead of
networking protocol. SAN is very much flexible.
targets.
Several systems and storage arrays can be attached to
' The common S C S I target is SCSI disk. The protocol SAN and dynamic allocation of storage can also be
done to these machines.
offers the capacity t o address u p to 8 logical
A SA N switch permits or restricts access between the
each SCSI target. The logical unit addressing is directs
r RAID arcay or Ls and the storage. Consider the example, If host
ite commands t o components or a
„ m n tly have low disk space then SAN can be
components of a removable media library 10 offcr ffl rc StOTa8e l h St
° ° ° —

Sterane

A SAN switch typically also has more ports, and less costly ports, than a storage array.

Syllabus Topic : Disk Scheduling

5.3.4 Disk Scheduling (Nov. 16)

- One responsibility of the operating system is to use the disk hardware efficiently. The time needed to read or write a disk block is determined by three factors :
  1. Seek time : the time to move the arm to the proper cylinder.
  2. Rotational delay : the time for the proper sector to rotate under the head.
  3. Actual data transfer time.
- For most disks, the seek time is greater than the rotational delay and the actual data transfer time. Hence, by minimizing the mean seek time, it is possible to enhance system performance significantly.
- If requests are serviced on a First-Come, First-Served (FCFS) basis, then it is not possible to optimize the seek time. Yet another approach is possible in the case of a heavily loaded disk : while the arm is seeking on behalf of one request, other disk requests may be produced by other processes.
- Several disk drivers keep a table, indexed by cylinder number, in which all the pending requests for each cylinder are chained together in a linked list headed by the table entries.

5.3.4(A) FCFS Scheduling Algorithm

- The first-come, first-served (FCFS) algorithm is the simplest scheduling algorithm. This algorithm is basically fair, but in general it does not offer the fastest service.
- Consider, for example, a disk queue with requests for I/O to blocks on cylinders 96, 185, 35, 122, 16, 120, 55, 57, with the head initially at cylinder 51. With FCFS the head moves first to cylinder 96, then to 185, 35, 122, 16, 120, 55, and last to 57, for a total head movement of 648 cylinders. This schedule is diagrammed in Fig. 5.3.2.

Fig. 5.3.2 : FCFS disk Scheduling

- The large swing from 122 to 16 and then back to 120 demonstrates the problem with this schedule. If the requests for cylinders 35 and 16 could be serviced together, before or after the requests at 122 and 120, the total head movement could be decreased considerably, and performance could thereby be enhanced.

5.3.4(B) Shortest-Seek-Time-First (SSTF) Scheduling Algorithm

Q. Compare the following Disk scheduling algorithms using appropriate example - SSTF, FCFS, SCAN, C-SCAN, LOOK. (MU - Dec. 2016, 10 Marks)

- This algorithm services all the requests near the current head position before moving the head far away to service other requests. It selects the request with the minimum seek time from the current head position. Since seek time increases with the number of cylinders passed over by the head, SSTF chooses the pending request closest to the current head position.
- In our example, the closest request to the initial head position (51) is at cylinder 55. Once we are at 55, the next closest request is at cylinder 57. From there, the request at cylinder 35 is nearer than the one at 96, so 35 is served next. Continuing, we service the request at cylinder 16, then 96, 120, 122, and finally 185.


- This scheduling method results in a total head movement of only 216 cylinders, considerably less than is needed with FCFS scheduling of the same request queue.
- SSTF scheduling is essentially a form of shortest-job-first (SJF) scheduling, and like SJF it may cause starvation of some requests. Remember that requests may arrive at any time. Suppose we have two requests in the queue, for cylinders 16 and 185, and that while servicing the request at 16 another request close to 16 arrives. This new request will be serviced next, and while it is being serviced yet another request close to 16 may arrive. In principle, a continual stream of requests near one another could arrive, causing the request for cylinder 185 to wait indefinitely.

Fig. 5.3.3 : SSTF disk Scheduling

5.3.4(C) SCAN Scheduling Algorithm -> (Dec. 16)

Q. Compare the following Disk scheduling algorithms using appropriate example - SSTF, FCFS, SCAN, C-SCAN, LOOK. (MU - Dec. 2016, 10 Marks)

- In this algorithm, the disk arm starts at one end of the disk and moves toward the other end, servicing requests as it reaches each cylinder, until it arrives at the other end of the disk. On reaching the other end, the direction of head movement is reversed, and servicing continues. The head repetitively scans backward and forward across the disk.
- The arm acts just like an elevator in a building : initially it services requests while going up, and when it reverses direction towards down, it services the requests on that way as well. Therefore the SCAN algorithm is also called the elevator algorithm.
- Consider the same example discussed above. Before applying SCAN to schedule the requests on cylinders 96, 185, 35, 122, 16, 120, 55, and 57, we need to know the direction of head movement in addition to the head's present position (51).
- If the movement of the disk arm is towards cylinder 0, the head will first service 35 and then 16. After reaching cylinder 0, the arm will reverse and will go toward the other end of the disk, servicing the requests at 55, 57, 96, 120, 122, and 185.
- If an incoming request arrives in the queue just in front of the head, it will be serviced almost without delay; an incoming request just behind the head has to wait until the arm goes to the end of the disk, reverses direction, and comes back.
- Assuming an even distribution of requests for cylinders, consider the density of requests when the head reaches one end and reverses direction. At this moment, comparatively few requests are immediately in front of the head, since those cylinders have just been serviced.

Queue = 96, 185, 35, 122, 16, 120, 55, 57
Head starts at 51

Fig. 5.3.4 : SCAN disk Scheduling
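These head-movement figures are easy to verify mechanically. The following C sketch is only illustrative (the function names and the choice of sending SCAN toward cylinder 0 first are our own, to match the discussion above); it tallies the total movement of FCFS, SSTF and SCAN for the example queue.

/* Total head movement of FCFS, SSTF and SCAN for the running example:
 * queue 96, 185, 35, 122, 16, 120, 55, 57 with the head starting at 51. */
#include <stdio.h>
#include <stdlib.h>

#define NREQ 8

static int fcfs(int head, const int *q, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) { total += abs(q[i] - head); head = q[i]; }
    return total;
}

static int sstf(int head, const int *q0, int n) {
    int q[NREQ], total = 0;
    for (int i = 0; i < n; i++) q[i] = q0[i];
    for (int served = 0; served < n; served++) {
        int best = -1;                            /* closest pending request */
        for (int i = 0; i < n; i++)
            if (q[i] >= 0 && (best < 0 || abs(q[i] - head) < abs(q[best] - head)))
                best = i;
        total += abs(q[best] - head);
        head = q[best];
        q[best] = -1;                             /* mark as serviced */
    }
    return total;
}

/* SCAN that first sweeps toward cylinder 0, then back up to the last request. */
static int scan_toward_zero(int head, const int *q, int n) {
    int highest = 0;
    for (int i = 0; i < n; i++) if (q[i] > highest) highest = q[i];
    return head + highest;            /* head -> 0, then 0 -> highest request */
}

int main(void) {
    const int queue[NREQ] = {96, 185, 35, 122, 16, 120, 55, 57};
    const int head = 51;
    printf("FCFS : %d cylinders\n", fcfs(head, queue, NREQ));             /* 648 */
    printf("SSTF : %d cylinders\n", sstf(head, queue, NREQ));             /* 216 */
    printf("SCAN : %d cylinders\n", scan_toward_zero(head, queue, NREQ)); /* 236 */
    return 0;
}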

5.3.4(D) C-SCAN Scheduling Algorithm -> (Dec. 16)

Q. Compare the following Disk scheduling algorithms using appropriate example - SSTF, FCFS, SCAN, C-SCAN, LOOK. (MU - Dec. 2016, 10 Marks)

- C-SCAN (Circular SCAN) scheduling is a variation of SCAN that offers a more uniform wait time. Like SCAN, C-SCAN moves the head from one end of the disk to the other, servicing requests along the way. When the head reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip.
- The C-SCAN scheduling algorithm essentially treats the cylinders as a circular list that wraps around from the last cylinder to the first one.

Queue = 96, 185, 35, 122, 16, 120, 55, 57
Head starts at 51

Fig. 5.3.5 : C-SCAN disk Scheduling

5.3.4(E) LOOK Scheduling Algorithm

Q. Compare the following Disk scheduling algorithms using appropriate example - SSTF, FCFS, SCAN, C-SCAN, LOOK.

- As described above, both SCAN and C-SCAN move the disk arm across the whole width of the disk. In practice, the arm usually goes only as far as the final request in each direction and then reverses immediately, without travelling all the way to the end of the disk. These versions of SCAN and C-SCAN are called LOOK and C-LOOK scheduling, because they look for a pending request before continuing to move in a given direction.
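As a companion to the earlier sketch, the following C fragment tallies LOOK and C-LOOK movement for the same queue; it assumes (our choice, since the text does not fix it here) that the head at cylinder 51 is moving toward larger cylinder numbers, and it counts the C-LOOK return jump the same way the worked examples later in this section do.

/* LOOK and C-LOOK head movement for the example queue, head at 51,
 * assumed to be moving toward larger cylinder numbers. */
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int q[] = {96, 185, 35, 122, 16, 120, 55, 57};
    const int n = (int)(sizeof q / sizeof q[0]);
    const int head = 51;

    qsort(q, n, sizeof q[0], cmp);            /* 16 35 55 57 96 120 122 185 */
    int lowest  = q[0];                       /* lowest pending request           */
    int highest = q[n - 1];                   /* highest pending request          */
    int below   = lowest;                     /* highest request below the head   */
    for (int i = 0; i < n && q[i] < head; i++)
        below = q[i];

    /* LOOK: sweep up to the highest request, then back down to the lowest.  */
    int look  = (highest - head) + (highest - lowest);
    /* C-LOOK: sweep up, jump back to the lowest request, then sweep up again
     * as far as the last request that lies below the starting position.     */
    int clook = (highest - head) + (highest - lowest) + (below - lowest);

    printf("LOOK   : %d cylinders\n", look);  /* (185-51)+(185-16) = 303 */
    printf("C-LOOK : %d cylinders\n", clook); /* 303 + (35-16)     = 322 */
    return 0;
}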


5.3.5 Examples on Disk Scheduling Algorithms

Example 5.3.1
Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The drive is currently serving a request at cylinder 143, and the previous request was at cylinder 125. The queue of pending requests, in FIFO order, is 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. Starting from the current head position, what is the total distance (in cylinders) that the disk arm moves to satisfy all the pending requests, for each of the following disk-scheduling algorithms?
1. FCFS 2. SCAN
Solution :
1. FCFS : The FCFS schedule is 143, 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. The total seek distance is 7081 cylinders.
2. SCAN : The SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130, 86. The total seek distance is 9769 cylinders.

Example 5.3.2
Disk requests come in to the disk for cylinders 10, 22, 20, 2, 40, 6, and 38. A seek takes 6 msec per cylinder moved. How much seek time is needed for the First-Come First-Served, Closest cylinder next, and Elevator algorithms? The arm is initially at cylinder 20.
Solution :
- FCFS : The First-Come, First-Served algorithm accepts requests one at a time and carries them out in that order.
  Total time : ((20 - 10) + (22 - 10) + (22 - 20) + (20 - 2) + (40 - 2) + (40 - 6) + (38 - 6)) * 6 msec = 146 * 6 msec = 876 msec.
- Closest cylinder next : This algorithm always handles the pending request closest to the current head position next.
  Total time : ((20 - 20) + (22 - 20) + (22 - 10) + (10 - 6) + (6 - 2) + (38 - 2) + (40 - 38)) * 6 msec = 60 * 6 msec = 360 msec.
- Elevator : The elevator algorithm handles the requests in increasing cylinder order (then decreasing on the return sweep), servicing requests on the way; afterwards the direction is reversed.
  Total time : ((20 - 20) + (22 - 20) + (38 - 22) + (40 - 38) + (40 - 10) + (10 - 6) + (6 - 2)) * 6 msec = 58 * 6 msec = 348 msec.

Example 5.3.3 (MU - Dec. 2014, 10 Marks)
Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The drive is currently serving a request at cylinder 143, and the previous request was at cylinder 125. The queue of pending requests, in FIFO order, is 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. Starting from the current head position, what is the total distance (in cylinders) that the disk arm moves to satisfy all the pending requests, for each of the following disk-scheduling algorithms?
a. FCFS  b. SSTF  c. SCAN  d. LOOK  e. C-SCAN  f. C-LOOK

Solution :

solution ■ |rgt , Served algorithm accepls r


1509, 1022 .1750, 130.
CorneT
The total seek distance is 708 1. First - and c a ries them out in that order;
b. The SSTF schedule is 143, 130, On
“ “u m
Total * Te . ((20
" (40 - _10) + (22 - 10) + (22
+ W- .2A0) +
1470, 15O9f 1750,1774. 2) + (40-6)

The total seek distance i s i !•••>■ .470,


__ ] 4 6 * 6 m s e c = 876 msec.
c. The SCAN schedule is 143, 913, 948,
1509, 1750. 1774,4999, 130. 86.
The total seek distance is 9769.
d. The LOOK schedule is 143, 913, 948, 1022, " l8< ’mX«k ,i,nc
' independently on the request, *>
1509, 1750. 1774,130,86.
Totals
Total ttae : + ( 6 - 20)
2) + + (38
(22- - 2)20) + + (22
(40- -10)38))
+ . 6 ’
The total seek distance is 3319.
e. The C-SCAN schedule is 143, 913, 948, 1022, 14 , = 60 * 6 msec = 360 msec.
1509, 1750, 1774,4999, 86, 130.
Cylinder Seek Algorithms : Elevator algorithm h™,
The total seek distance is 98 13,
nearest requests in increasing (decreasing) ot(3n
f. (Bonus) The C-LOOK schedule is 143, 9 1 3 , 948, 1022,
more requests on the way. Afterwards the direct
1470, 1509,1750, 1 7 7 4 , 8 6 , 130.
reversed.
The total seek distance is 3363.

Example 5.3.4 Total time : ((20 — 20) + ( 2 2 - 20) + ( 3 8 — 22) + (4q _


+ (40 - 10) + ( 1 0 — 6) + ( 6 - 2 ) ) * 6 a
None of the disk-scheduling disciplines, except FCFS, is
truly fair (starvation may occur). = 5 8 * 6 msec = 348 msec.
a. Explain why this assertion is true.
b. Describe a way to modify algorithms such as SCAN to Example 5.3.6
ensure fairness. Suppose that a disk drive has 5000 cylinders, numbered
Explain why fairness is an important goal in a time
4999. The drive is currently serving a request at cylinder 155
sharing system.
d. Give three or more examples of circumstances in which
it is important that the operating system be unfair in pending requests is 86, 1350, 948, 130, 1500, 50. Starting
servig I/O requests. from the current head position, what is the total distance ((in
Solution : cylinders) that the disk arm moves to satisfy all the pending
a. Ne w requests for the track over which the head requests, for each of the following disk scheduling
currently resides can theoreticafJy arrive as quickly as Algorithms.
these requests are being serviced.
1. FCFS 2. $STF 3. SCAN 4 . C- SC AN
b. All requests older than some predetermined age should
5. LOOK 6. C-LOOK
Solution :
new request could be moved ahead of these requests.
For SSTF, the rest of the queue would have to be 1. FCFS : The FCFS schedule is 155,86,l350M
reorganized with respect to the last of these “old” 130,1500,50
requests.
c. To avoid unusually Jong response times It gives (155 - 86) = 69, (1350 - 86)
d. User requests should have lower priority over paging
= 1264, (1350 - 948) = 402,
and swapping. It may be desirable for other kernel
tmnated I/O, such as the writing of fife sX (948- 130) = 818, ( 1 5 0 0 - 1 3 0 ) = 1370,(1500-50)
metadata, to take priority over user I/O If the S = 1450
supports real-time process priorities, the VO reque 7™
M u f Total head movements are :
those processes should be favoured.
69 + 1264 + 4oq j oi o . ,
Example 5.3.5
5373 Cylinders
requests come into the disk for cylinders 10, 22, 20 ,2,
2.
* The SSTF schedule is 155,130,
948,1350,1500 '


:r 507 = 8 9 8 . ( 1 3 5 0 - ,
86)
-«- 3 o)
*> 143, Sto
Management
3
= 150 ~ ' ’0)
ftead movements are :

8STF 3
SCAN : The SCAN schedute is 1 J s ***>>: >-OOK «. SCAN 5. C-SCAN
S5
> M35<M5OO -’3O,86, 50.0,
13
1 ( f jves(15S- °) = 25,(13o_ 86) = 1

,(M
= 36-(5O- 0 ) = 5o '5O) i
- aX 1
L8 1777
' <M8,1O22j75<) ' !30 -
(948 - 0) = 948,(1350. * = 63. (1470 -&0) 1390,
W2 t - ( 1 4 7 0 - 913) = 557,
(1500-1350) = 150 '
(J 777 - 9 1 3 ) - 864. (1777 -948) = 829,
Total head movements are : ( 022-948) = 7 4 , ( 1 7 5 0 - 1022)

= 728,(1750 - 8 0 ) = 1670
Otal
- SCAN : The C-SCAN schedule is i 5s movements are <
c 13Q
1 + 1670
JO,0,4999, 1500, 1 3 5 0 . 9 4 8 . 30,86,
~ 6125 Cylinders
jt gives ( 155 — 1 30> = 25, (130 - 86) = 44, (86 _ J0)
2. SSTT:
The SSTF schedule is
143,130, 80,913,948, 1022, 1470, 1750,1777 .
(50 - 0) = 50, (4999 - 0) = 4999, (4999 - 1500)
11
gives (143 - 130) = 13, ( 1 3 0 - 80) = 50,
= 3499,
= (913 - 80) = 833,
(1500- 1350) = 150, ( 1 3 5 0 - 9 4 8 ) = 402.
(948-913) 35,(1022 - 948)= 74,
Total head movements are : ( 1 4 7 0 - 1022) 4 4 8 , ( 1 7 5 0 - 1470)= 280,
( 1 7 7 7 - 1750) = 27
Cylinders Total head movements are :
- C - L O O K : The C - LOOK schedule is 155,130,86, 13 + 50 + 833 + 35 + 74 + 448 + 280 + 27
50,1500, 1 3 5 0 , 9 4 8 = 1760 Cylinders

Itgives (155 - 130) = 25, ( 1 3 0 - 86) = 44,(86 - 50) 3. LOOK: The LOOK schedule is
= 36,(1500 - 5 0 ) = 1450, 143,913,948, 1022, 1470, 1750, 1777,130, 80.
( 1 5 0 0 - 1350) = 1 5 0 ( 1 3 5 0 - 948)= 402. It gives (913 - 143) = 770, (948 - 913)
= 35, (1022 - 9 4 8 ) = 74,
Total head movements are :
(1470- 1022) = 448, (1750- 1470)
25 + 44 + 3 6 + 1 4 5 0 + 1 5 0 + 402 = 2107 Cylinders
= 280,(1777- 1750) = 27,
(1777-130) ~ 1647,(130 - 8 0 ) = 47
1350,1500
Total head movements are
Itgives (155 - 130) = 25. (130 - 8 6 ) = 44, (86 - 50) 770 + 35 + 74 + 448 + 280 + 27 + 1647 + 50
— 36,
= 3331 Cylinders
(948 - 50) = 898. (1350 - 948) = 4 0 2 .
SCAN: The SCAN schedule is
(1500-1350) = 150.
,, 913 948, 1022, 1470, 1750,1777, 4999, 130. 80.
Total bead movements are :
It gives (M9113 3 - 143) = 770, (948 - 913) = 35.
25 + 44 + 36 + 898 + 402 + 1 5 0 = 1555 Cylinders ------ (1022 - 948) =

11470- I 022 ' ' 4481

E n| 5 3 7
’* * - - . numbered ° t °
Su 0 1
PP°se that a disk drive has 5000 cy *■ a t cyl»nd0 r
The drive is currently serving a ----------


24
160) *=
mentsare:
System (MUjjem d1 n 0>ov<=
tratii O) = 4* 69
’ Total)** ' 1 + 20 + 132 .
1 3 16 + +

(4999-1777) = - + 3
10 + »
50
(130-80) =

Total head movements are : SCAN ! 3g 18 0 , iso, 160, i M

100 5
' /’i00 90) = 10 - (90 - 58) = »•
= 9775 Cylinders iVe9<
'•« ‘; 55) = 3. (55-39) =16.
(58 -
C-SCAN : Tbc C-SCAN scbedi11
t
4999, o t 80 t
(39 _38) = 1 , ( 3 8 - 1 8 ) = 20,
143, 913 , 948. >022, 1470. 1750.1777.
(18 -0) = 18, ( 1 5 0 - 0 ) = 150,
J30.
770. (948 — 9 1 3 ) 3 . ( 1 6 0 - 1 3 0 ) = 1 0 , ( 1 8 4 - 160) = 24
It gives ( 9 1 3 - 1 4 3 )
(1022-948) Total head movements a r e :
(1470 - 1022) 2. <1750 - 1 4 7 0 ) = " 32 + 3 + 1 6 + l + 2 O + 1 8 + 1 5 0 + l o+ :
1777) = 3222,
( 1 7 7 7 - 1750) = 27,(4999-
(4999 - 0) 4999, - 284 Cylinders
50
(80-0) 8 0 , ( 1 3 0 " 8°) = IOOK- The LOOK schedule .s
I00 . 90, 58, 55. 39. 38. 18, 150, 160, 184
Total head movements are :
+ + 80
It gives (100 - 90) = 10.
+ 5 0 - 9985 Cylinders (90 - 58) = 32,
(58-55) 3. (55 - 39) = 16,
Example 5.3.8 . .
(39 - 38) 1 , ( 3 8 - 1 8 ) = 20,
Suppose that a disk drive has 200 cylinders, number
199. The initial head position is at 1 00 th track . The que ( 1 5 0 - 18) 132, ( 1 6 0 - 150) = 10,
pending requests in FIFO is 55, 58. 39. 18 90, 160. 150, , (184-160) = 24
184. Calculate average seek time tor each of the o owi
algorithm, Total head movements are :

1. FCFS 2. SSTF 3. SCAN

4. LOOK 5. C-SCAN 6. C-LOOK = 248 Cylinders


Solution :
C • SCAN : The C - SCAN schedule is
1. FCFS : The FCFS schedule is
100, 150, 160, 184, 1 9 9 , 0 , 18, 38,39,55,58,90
100, 55, 58, 39, 18, 90, 160, 150, 38, 184.
It gives ( 1 5 0 - 100) = 50, (160 - 1 5 0 ) = 10,
It gives ( 1 0 0 - 55) = 45, (58 - 55) = 3,
( 1 8 4 - 160) = 24,
(58-39) = 1 9 , ( 3 9 - 18) = 21,
( 1 9 9 - 184) = 15, ( 1 9 9 - 0 ) = 199,
(90-18) = 72,
( 1 6 0 - 150) = 1 0 , ( 1 5 0 - 3 8 ) = 112,
(18-0) = 18, (38 - 18) = 20,

(184-38) = 146 (39-38)= 1,(55-39) = 16,


Total head movements are : (58 - 5 5 ) = 3, (90 - 5 8 ) = 32
Total head movements are :
428 Cylinders 5 0 + 1 0 + 2 4 + 1 5 + 1 9 9 + 18 + 2 0 + 1 + 16 + 3 + 32
2. SSTF : The SSTF schedule is — 388 Cylinders
100, 90, 58, 55, 39, 38, 18, 150, 160, 184 ®- C-LOOK : The C - SCAN schedule is
It gives (100 - 90) = 10, (90 - 58) = 32, 100, 150, 160, 184, 18, 38, 39, 55, 58, 90
( 5 8 - 5 5 ) = 3, (55 - 3 9 ) = 16,
Itgives ( 1 5 0 - 100) = 50,
( 3 9 - 3 8 ) = 1 , ( 3 8 - 1 8 ) = 20,
( 1 6 0 - 1 5 0 ) = 10,
__________<1 5 0 ~ 1 8 ) = 132 ( 1 6 0 - 1 5 0 ) = 10
( 1 8 4 - 1 6 0 ) = 24,

(184-18)
(38
(39-38) -181
M55. 39
(58-55)
head movements . ' S8
) = 32
' 2 . *4“ * -**-**■ 2®ss«m«™ tent
[11al
hc.| *l> “ J. <91 _ b * . ,
— 7
5.3.9
that the head of a
5
0 1 19 CU,TB
red *° "' 'W eenJ'* **"’ trtu ,
n
, a nd has just finished a req U B S t 9 a re, ta. 10 M a r k ?
’ ests is kept in the FIFO ord ** 12 5 .

,or 20 S ,M
len
cj wi s ' 147.
to,al numher 9tti a™ 'SCAN. C-l,ook r eduUng It carried out
L is <>' Heed
thes e requests for nee
to
SSTp Sfk i
.igtxi ®' eduth
ing
fCFS 2 SSTF 3
‘ E<eva
tor n,.o ***>»*
1

utloh : ----------

102 10)
> ‘75, 13Q
[t gives ( 1 4 3 - 86)
57,(147 86) =6l 262/9
(147-91) C’SCAN , “ = Average 29.11
56, Ule b: l00
‘20, 110 ’ 41, 27, 10, 186, 147, 129,
(177-91) 8 6 , ( 1 7 7 _ 94) = 83 ( 100 - 641+ j . . ..
(150-94)
5 6 , ( 1 5 0 - (02) = 48
( 1 7 5 - 102) =
73,(175 l
" 30) - 4 5
" 3 + 14 + 1 7 + 1 7 +39+ 18+9+10 = 342/9 = Average 38
Total head movements are :
SthedUlB 10 l10 120 l29
27
* ' t 41, 64 °> ’ ’ > 147 ’ 186
’ 10,

= 565 Cylinders. ( H O - 100) + (120-110) + (129-120) + (147-129) +

I SSTF : The SSTF schedule is 143, 147, 150, 130, 102


- 10+10+9+ 18+39+176+17+14+23 =316/9
9 4 , 9 1 , 8 6 , 175, 177
= Average 35.11
It gives ( 1 4 7 - 1 4 3 ) = 4, ( 1 5 0 - 1 4 7 ) = 3,
(150-130) = 20, Example 5-3.11 MU - June 2015. 10 Marks
The requested tracks tn the order received are 54,57, 40,
(130-102) = 2 8 , ( 1 0 2 - 9 4 ) = 8, 20, 80, 120, 150, 45, 180. Apply the following disk
(94-91) =3 scheduling algorithms. Starting track is 90.
(I) FCFS (ii) SSTF (IB) C-SCAN
(91 - 86) = 5, (175 - 86) = 89, (177 - 175) = 2
Solution :
Total head movements are :
0) FCFS : FCFS schedule is 90, 54, 57, 40. 20, 80. 120,
4 + 3 + 20 + 28 + 8 + 3 + 5 + 89 + 2 = !62Cylinders 150. 45, 180’
It gives (90-54) + (57-54) + (57-40) + (40-20) +
A Elevator : The elevator schedule is
(80-20) + (120-80) + (150-120) + (15045) +,(180-45)
143, 147, 150, 175, 177, 199. 130, 102,94,91,86 Total head movements are
36+3+17+20+00+40+30+105+35 - 343
It gives ( 1 4 7 - 143) = 4, ( 1 5 0 - 1 4 7 ) = 3,
(175-150) = 25, (ii) SSTF : SSTF schedule is 90, 80. 57, 54, 45, 40,
20,120, 150, 180
221
( 1 7 7 - 175) = 2 , ( 1 9 9 - 177) =
V. rives (90-80) + (80-57) + (57-54) + (54-45) + (45-
(199-130) = 69. XX + (120-20) + (150-I20) + O80450)
81
(130-102) = 8,(102 - 9 4 ) - _____


1 4
tting System MU - Sg” ~

Total movements t
23
9+5+20+ 100+.1O+.30 *

(ii) C-SCAN , . « 0 . 37. 54. 43. 40. 20. 0 . 1 2 0 .


Trailer
The header and trailer hold the inform
- such as error correcting codes
disk controller
-r
sector ' number. ECC is updated with a
+ (180- 150)
,
from i— all the bytes in the data. This is dOne
Total head movements are _ 27
inuollcr
coi writes a sector of data at the time
M U - Dec- 2 0 l b ~ i o M * ± s I/O, die area-
The ECC is recalculated at the time sector
tn. number or trad". »> lae , r6que3t
compared with the stored value. If lhe
in. requbsls >n th q haad ,g toward calculated numbers matches data is correct 3
corrupted. If mismatch occurs, it indicates that tu **
Z F'FO order contain, requests tor the
area of the sector has become corrupted w
•• »«, 692. 475. 105. 376 .
disk sector may be bad.
Perform the computation for the following scheduling
algorithms.
Wl
j) FIFO S) SSTF ’> SCAN data have been corrupted, it facilitate the control
Solution : recognize which bits have changed and calculate
i) FIFO : The FIFO schedi their correct values should be. It then
and 376.
recoverable soft error. The controller auton *
Total head movement = (345-123) + (874- J 23) +
(874-692) + (692-475) + (475-105) + (376-105) does the ECC processing whenever a sector is
= 2013 written.
if) SSTF : The SSTF schedule is 345,376,475, 692,874, During the manufacturing process, low level fornvm;
123 and 105 of hard disk is carried out. It helps to manufactn
Total head movement = (376-345) + (475-376) initialize the mapping from logical block numbed
defect-free sectors on the disk.
(692-475) + (874-692) + (874-123) + (123-105) =1298.
i 3 V . V 1 U 1 uuiv uunj, uiv yii.nv VUUUVllCr is instm
(iii) SCAN : 345,123, 105, 0, 376, 475, 692 and 874
for how many bytes of data space to leave between ib
Total head movement = (345- 1 23) + ( 1 23- 1 05) header and trailer of all sectors at the time when n
+ (105-0) + (0-376) 4- (376-475) + (475-692)
+(692-874)= 1219. can be 256,512, and 1,024 bytes. If disk is fonnaw
with larger sector size, fewer sectors can fit on each
Syllabus Topic : Disk Management track.
A s a result fewer headers and trailers are written on
5.3.6 Disk Management
The disk management activities of the operating system
include disk initialization, booting from disk, and bad-block
on disk before it use disk to store the files. It performs
recovery.

5.16(A) Disk Formatting Il partitions the disk into one or more groups oi
cylinders. Each partition is treated by OS an
' magnenc recording raateriaI
then
“ is
- a P' a «- of a separate disk.
cannot store data. The surface needs to be divided Logical formatting. That means creation of
m seem so that disk controller is able to perform reaJ system.
rcad
and write operations.
In order to increase the efficiency, file system gr*
ock-s in chunks called as clusters.


fflz operaung systems give spccia| 'Ji


Storage Many mgl
to uw a disk partition a, a lar the
tbe d k k (n
' “»* s
- discover had blockv If format discovers a
<Kk
ThJ5
' s somcl r
' nes called m da
> ’ '* whies a special value into thc related I A
fU
i/o to this i s caJIcd 33 ra
* dijtk
- blodk * r, orni l be allocation routines not to utilise
blocks go bad during normal operation, a > peel a
< e oot Block P gram (such „ chkdsk} must be cun msnuidly to look for
3 ad
S- ' bootstrap program is required f w a t| C , blocks and to lock them away. Data that resided on
tcr thcbad
.tTthe boot g after it is powered u p or J * bloeks generally are lost.
ifl|ha
it«es 4,1 coni P° nenUi Of the system CPlJ 1Vn . Tbc 5051 disks are used in high-end PCs and most
-s w device controller* and the com stations and servers, are more intelligent about
-y and then starts the operating °f ,nain recovery of bad-block. Thc controller preserves a record of
ad
,ben ,ocates
‘be OS kernel blocks on the disk. The list is initialized during the low-
d,sk ® v c l formatting at die factory and is updated over the life of
*’'*£ tW t kerne! into memoiy, and ju mps t o " -
d sk
0 <<> start *he <’P era,in 8- tem execution ’ ‘ Low-level formatting also sets aside spare sectors
not vk
‘ble to the operating system. The controller can be
fold to replace each bad sector logically with one of the

The OS attempt to read Logical block 67.


* use Of read only feature of ROM; it CMnot
The controller computes the ECC and concludes that
' » a computer vtnts. diffk * the sector is bad. The same result is reported to OS.
Lification of thts bootstrap code req Uifes chan
8 8 When the system is rebooted, a special command is run
ROM hardware chips.
most s s,e,ns slore a to inform the SCSI controller to replace the bad sector
Lafore. >' ’Mil bootstrap loader
with a spare.
i n t h e bOOt R O M W h , C h inVOkcs a , l d bri
' "g full
Ltstrap program from disk into main memory. The
Ldified version of full bootstrap program can be block 67, the request is translated into the replacement
sector's address by the controller.
simpJy w r i t t e n onto t h c disk ‘
Tbc fixed storage location of full bootstrap program is '
Syllabus Topic : RAID Structure
in the "boot blocks ", A disk that has a boot partition is ’
called a boot d i s k or system disk
’ Boot R0M code 5.3,7 RAID Structure
gives instruction to controller for reading the boot
5,3.7(A)RAID Levels
Hocks into memory ( n o device drivers are loaded at
Q. Explain various RAID levels.

pje full bootstrap program is more sophisticated than The performance of CPU has been rising immensely
over the past decade,, approximately it is getting
[fie bootstrap loader in the boot ROM; it is able to load
increasing by 100 percent after every 1.5 years. On the
the complete operating system from a non -fixed contrary disk performance is not increasing with this
location on disk and to start the operating system speed. Now days average seek time of the disk is near
execution. Yet, (he full bootstrap code may be small. about 10 msec, which is very less compare to average
seek time of 50 to 100 msec on minicomputers in the
5.3.6(C) Bad B l o c k s decade of 1970
Failure of the disk can be : If in any other industry, performance is enhanced by 5
- Complete, means there is no way other than replacing to 10 percent in two decades, it would be considered as
the disk. Back u p o f content must be taken on new disk. great achievement. However, in the computer industry
it is an discomfiture, As a result, the difference between
- One or more sectors become faulty.
CPU performance and disk performance has grown to
- After manufacturing, (he bad blocks exist. Depending be much larger in due course of time.
on the disk and controller in use, these blocks are
In order to get faster processor performance, parallel
handled in a different ways. processing is being used to a greater extent. Over the
If the disks are with I D E controllers then these are years people thought that parallel I/O might be an
excellent scheme as well. In research paper of 1 9 8 8 ,
wple disks. In these disks bad blocks are handled
Patterson et al. recommended six specific disk
"snuafly. For example, the MS-DOS format comman
logical fonnatling md. as a part of the process


one requests, with the


Opting System (MU. SggjJ comply the first request they

organizations that could be «[j Iized t O gC


‘ 5€COnd
performance, reliability, or both- has "Teller decides to divide the
The eon ro priate commands to the
- This proposal was soon agreed by m c a ||ed a v|d
P ' £e right order and then collect the
made possible a new class of I definition of diskS
R A I D . Patterson et al- had give Disks, but memory £ right manner. Performance
mentation i s simple. i5
RAlb Nk
R A I D as Redundant Array of Inexp* „p , and the onorlv with operating systems that Q
industry soon changed the definition or
“Independent" instead of " Inexpensive . nf a dS O"« sector “ a Ume The
"suits
dem
The fundamental thought behind a RA1 J* o is K but there is no increase in perform £ * > « £
bunch of disks beside the computer, us y “ ofparalielism.
farce server, put RAID controller m f
controller card, copy the data on R A I D s hould
on usual operation. The intention was a K l PS»rinsa
appear a single large expensive disk to t e ope
system but have improved performance and }
reliability.
As SCSI disks are characterized by better periormanc, vanished. A single Urge exp 'Oi
l o w cost, and the capability to include up to 7 hit e
a single controller/ 15 for wide SCSI), it is usu D) with a MTTF of 20.000 hours woum"* **
L E
most R A l D s contain a RAID SCSI controller I imes more reliable. Since tedundaocy i s foj,
addition to a bunch of SCSI disks which operating design, it is not actually a true RAID. thii
system can thinks as a single large disk. In this fas on, The next alternative, RAID level L, demonstrated
no software modifications are necessary to make use o J,g. 5.3.7(b). is an exact RAID. In this,
the RAID, a great selling point for a majority of system each disk is present, so total 8 disks are present 2
administrators. these 8 disks, four are primary disks and f 0Ur b 1 of
Besides coming o u t similar to a single disk to the disks. The strip is written twice for each
operation. A read can be earned out on any of the *
software, every R A I D have the feature o f distributing
assigning and distributing the load over more drives. '
As a result, write performance with compare to sinou
Many varieties of methods for achieving this were
drive is worse; however read performance Can V
defined by Patterson el al., and they are at present improved neatly by double. Fault tolerance ’
recognized as RAID level 0 through RAID level 5. outstanding : if a drive fails, the copy is just used in

the impression of the virtual single disk simulated by


The RAID levels 0 and 1 operate with strips of sectors
Whereas RAID level 2 supports the working on either
the RAID as being split up into strips of k sectors each,
word basis or even on a byte basis. Consider division of
by means of sectors 0 to k - I being strip 0, sectors k to each byte of the single virtual disk into a two 4-brt
2£ -1 as strip 1, and so on. Each strip appears to be a nibbles, then inserting a Hamming code to each one fo
sector for value k = l ; a strip is two sectors if k = 2, etc.
The RAID level 0 collection writes successive strips
over the drives in round-robin manner, as shown in are synchronized with respect to arm position and
Fig. 5.3.7(a) for a RAID having four disk drives. rotational position. Then it would be possible to write
one bit per drive out of the 7-bit Hamming coded word.
When the data is distributed like this on more than one
This idea used by the the thinking Machines’ CM-1
drives, the operation is called as striping. If data block
comprises four consecutive strips beginning at strip parity bits to form a 38-bit Hamming word, in addition
boundary and software gives a command to read this to one additional bit for word parity, and distribute each
data block, the RAID controller will split this command
up into four separate commands, one for each of the The total throughput was enormous, because it was
four disks, and have them run in parallel. Thus parallel possible to write 32 sectors worth of data in one secioi
I/O is achieved and software remains unaware about it. writing time. The crash of one drive did not lead w
pro ems, as loss of a drive cause the loss of 1 M
each 39-bit word read> w h i c h Hamming code a»W
. level 0 works better. If number of drives times the strin manage easily.
size is less than the request, some of the drives win

I hhJ raW a 1
k
°f
ttie
the above
erne is that, in
---------- drives needs to be rotaJ Si

hnmized and ic T*
,
>«i ate 'S o f drives (regardless / F
w
jrivest the overhead is j Q d at 7 Kh a 7***
t Up of .he controller. as ® Per*. »tgt
jrwksum every bit B n>u „ • ft ai "d ,,
Pcrf
hAJD level 3 le deo,™'. »n» > l v . * >
a simpler version of jn p ' p.M.n.0, ,

"x: i-sci- iT a>« , dctCcl


' ° n . net error
/eve, 2. «« -nve. m Ur
dd u " £ use mdmduiU da(a P-ee ' Like 2".' U Hkte clcd
urtt lh
'lo of dri— « o*= is

te
d Ove
rx ’
r
Fig. 5.3.7 : RAID levels 0 through 5 (backup and parity drives are shown shaded)
RAID levels 2 and 3 offer very high data rates, but the number of separate I/O requests per second that they can handle is no better than that of a single drive.
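The parity mechanism that RAID levels 3 to 5 rely on is just a byte-wise EXCLUSIVE OR across the data strips. A minimal C sketch (strip contents and sizes invented for illustration) shows how a lost strip is rebuilt and hints at the small-write shortcut discussed in the following paragraphs.

/* Byte-wise EXCLUSIVE-OR parity as used by RAID levels 3-5: the parity strip
 * is the XOR of the data strips, so any single lost strip can be rebuilt. */
#include <stdio.h>
#include <string.h>

#define STRIP_LEN 8                 /* k bytes per strip (tiny on purpose)        */
#define NDATA     4                 /* data drives; one more drive holds parity   */

static void xor_into(unsigned char *dst, const unsigned char *src) {
    for (int i = 0; i < STRIP_LEN; i++) dst[i] ^= src[i];
}

int main(void) {
    unsigned char strip[NDATA][STRIP_LEN] = {
        "AAAAAAA", "BBBBBBB", "CCCCCCC", "DDDDDDD"
    };
    unsigned char parity[STRIP_LEN] = {0};

    for (int d = 0; d < NDATA; d++)          /* parity = strip0 ^ strip1 ^ ...    */
        xor_into(parity, strip[d]);

    /* Suppose drive 2 crashes: rebuild its strip from parity + the survivors.    */
    unsigned char rebuilt[STRIP_LEN];
    memcpy(rebuilt, parity, STRIP_LEN);
    for (int d = 0; d < NDATA; d++)
        if (d != 2)
            xor_into(rebuilt, strip[d]);
    printf("rebuilt strip 2 : %.7s\n", (const char *)rebuilt);   /* prints CCCCCCC */

    /* Small-write shortcut used by RAID 4/5 (two reads, two writes):
     * new_parity = old_parity ^ old_data ^ new_data                              */
    return 0;
}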


5-46 r ilure. I" ** m L d d l e daU Uat


*f C
Partb* 1 purred: hence, some of the
ratin System MU - Sorr U
failure d the sector that is Wrift
iynchrom dflto
and 55 00
In RAfP levels 4 and do nor 'need < does not
Jt
written w , t f failurtt m ay have been corrupted

driv« and again operas wil J* rA(D level 4 at the time failure occurred prior to lh
W
consider individual words P rA(D | e vel T „«l '“ , V d , so the previous data value, '
Sl
demonstrated in Fig. 5.3.7(e) 15 an e X tm .■-L- write
0. and here a strip-for- P fc k bytes.
lW
disk ren 311
dri „, Forin!lm x. if .he tens-h > on dl tbe ■ of block, failure happens then S ysltRb
wrl
EXCLUSIVE OR o»e • «« „ sBip of If during * and call a recovery procedure
strips together. This operation results should dcLeC ‘ b llock to a consistent state. F Or
length k bytes,
reinstate the physical

In case of failure of drive, the 'oirtby


purpose / b[ock following are the execution steps
recovered by recomputing from the p ty
a
design defends against the failure . output "P" “° o nn ,s wrinen onto the first
penned « no. S-t for small updates. If
sector is altered, it is essential to mad aU
emulation of the parity, which must an !, IOCk step 1 completes successfully, thc
Ration is then written onto the ■
that be rewritten.
user data and the physical block.
the second write completes succes Sfi,lly
old parity data and compute again the new parity fro
men declare the operation is completed.
them. Still with this optimization, a small update n *
to have two reads and two writes. As £ „pai flrt i vof
* - recovery
- procedure from faivUre

As a result of the huge load on the parity drive, tt may


turn into a bottleneck. This bottleneck is removed in •rx.
identical, then no further action is needed. If one block
the drives, in round robin manner, as illustrated in has a detectable error, then its contents are replaced
Fig. 5.3.7(f). Yet, in case of a failure of drive, with the value of the other block. If detectable error

process. then it is necessary to replace the content of the first

Syllabus Topic : Stable-Storage Implementation


This recovery procedure gives the guarantee that, a
write to stable storage either succeeds completely or
5.3.8 Stable-Storage Implementation
results in no change. This procedure can be extended
For the write-ahead log, stable storage is required. 1 he with no trouble to permit the use of an arbitrarily large
information stored in stable storage remains permanent number of copies of each block of stable storage.
and never lost. In order to implement this stable
storage, it is necessary to replicate the information over Syllabus Topic : Tertiary-Storage Structure
multiple disks with independent failure modes.
The update writing should be carried out in a manner 5.3.9 Tertiary-Storage Structure
that promises that a failure during an update will not
5.3.9(A)Tertiary-Storage Devices
damage all the copies. It should also guarantee that
Q. Explain various Tertiary-Storage Devices.
during recovering from a failure, all copies must
remain consistent and with correct value, though The main characteristic of tertiary storage devices is its
another failure takes place during the recovery. low cost. Following are the examples of tertiary storage
One of the following can happen due to disk writes. devices.
o Successful completion. The writing of data
on Removable Disks
disk correctly done.
Floppy disk is the example of removable magnetic
disk. The storage capacity of floppy disk is only

<uB. « ' s a sma11 P!astic
D’ 13 is reco 'ded in n n C a e
° ' d v P B=Se5 41?
✓ spots. We can tra in te ic I
X r t0 other using floppy the or

M n
. ° T" ieni
S*. *ll ot vwk capacUy
C()j SC(l I

V,) umc <laU


*** to?'* *** ’dentT ' related to
A me b lhc
to cover the Read/Wri te L * tal y la dai a
ly opened after **W S1> is m tHlC ' a p c c h a n »crH arc
dhvc movc ,bcm
’ „v disk drive. A notch is used lsk
« m ten * s and si™
Upe
Other type of floppy di slt ° *rite p to °f ? a hbTafy

" The F reduced by robotic


storage rapacity of 1 .2 M of **
1
ter plastic plate coated with m ' “ a 51/, ’ C’1 he
8i«al3 " 0 «don |2 “ disk
- * "«
il £to opt C diSk anoth Wrii1
„,agn ' ' “ er exa T* " '' w w
* K, this disk, data i s P e of
o Vah , "»■"> tooved “ “ 1" future .hen it can
recorded

® o with magnetic material i. J "M plan * Upe us *““'*“«■

"* M " n “ r ' ll “ ’’ Ma8e “ *


, the | QW uonance of on-line magnetic
01
L. This disk is more re s , slant tQ . a ma goetic 5helV&
to a storage rlZ °f O,
'‘ hne Up
“ S,ttra|! 00
drive i s .responsible to create a m J?”* The
TWMWngy
field is produced at room temper atu J «« If
holographic
, „ e and too weak to magnetize a hi. ’ « too USe
h raphic nhett laser light to record
rf* a bib the disk head flashes a To can
be th ght ° n Wial med A
hologram
surface. at thc imcris
Pi*el is black the ' ional array of pixels. If
e p eRe lt,1
Opdctd disk is also a removable disk which does presents bit 1 white pixel
use of magnetism. It uses special "« bit Ooe n ? lS W ’ piXd a

pixels in a hoi °f hRhl UaiIsfcTS aU the

can v icmuvely dark n


4
spots. Example o f optical * is pha . * “ UsLy h ‘“ "

disk. This disk i s coated with a material that can frKac


sto teri?*T mOAanical stems
(MEMS) based
into either a crystalline or an amorphous state. Thl Ogy 1S n aclive arca of research
fabric; on technologies
° °* used to produce The
electronic
change drive uses laser light at three different
P5 ls applied for manufacturing the small data
piners: low power to read data, medium pow er to stf3T
age machines,
erase the disk and h i g h power to write to the disk.
Rumples are re-recordable C D - R W and DVD-RW. 5.3.9(B) Operating-System Support

, All above disks arc c a l l e d read-write disks. In contrast, This section explains how operating system carry our
wriie-once, re ad - m a n y - t i m e s (WORM) disks can be | itsjob when removable storage devices are used,
written only once. C D - R O M and DVD-ROM are the Application Interface
eumples of R e a d - O n l y d i s k s on which prerecorded
Operating systems handles removable disks in the same
data is available. Technology used by these disks is
maimer as they do handle fixed disks. It is necessary to
similar to that o f W O R M d i s k s and they are very format blank cartridge when it is inserted or mounted
durable. into the drive. An empty file system is then generated

Tapes on the disk. This generated file system is used just like
a file system on a hard disk.
Magnetic tape stores large amount of data compared to
The handling of tape is carried out in a different
Wai or magnetic disk cartridge. Although disk drives
manner. Initially. OS treats tape as a raw storage
tope drives have same transfer speed, random
medium An application does not open file on device.
*cccss on tape takes more time compared to disk see
An application opens the whole tape drive as a raw
drive is more expensive than disk drive. But


5-46 standardized that all n l a c h i n


■ting System (MU - Sen? 4 - J~T) nie dta are so w
*

Mana9em
application untii 0*
7
exits or closes the tape device. H„ ric jukebox facilitates for changing
A m
As tape drive is presented as raw device. OS does no 1 cartridge in a tape or disk drive
offer file system services. The application most make a
"““L intervention. A hierarchical storage sy5tem
decision about how to use the array of blocks. * hU
X the storage hierarchy beyond main memory
application decides its own rules for organizing a tape, W1
' dary storage to include tertiary storage. Tcrtlajy
I a tape full of data can usually be used by only the
storage is implemented as a jukebox of tapes w
program that created it. For example, although a
removable disks. This storage hterarchy is
backup tape holds a list of file names and Hie sizes cheaper, and slower.
followed by the file data in that order, it is not easy to
use the tape. The normal method to include tertiary storage is te
widen the file system. Small size and frequently USM
seekf ). Instead of seek( ), tape drive uses locatef ) files resides on disk, whereas large and old files wWcll
operation. The locate ) operation of tape is more having no active use is stored to the jukebox. In
accurate than seek( ) operation as tape gets positioned
to correct logical block instead of entire track. Many ;er
number of tape drives supports read-position ( ) holds space in secondary storage.
operation that returns the logical block number where
the tape head currently exists. Many tape drives also installations with large volumes of data that are used
support a space( ) operation for relative motion.
rarely, at irregular intervals, or sometimes. Existing
File Naming work in HSM comprises extending it to offer full
- On personal computer, file name contains name of the
drive followed by path name. In case of removable data go from disk to tape and back to disk, as required,
disk, it does not mean that if drive containing the but are deleted on a schedule or in accordance with
cartridge at some time in the past is known means how policy.
to find the file is known. If every removable cartridge
had a unique serial number, the name of a file on a 5.3.9(C) Performance
removable device could be prefixed with the serial
Tertiary ’Storage performance is measured in terms of
number, but to make sure that no two serial numbers
speed, reliability, and cost.
are the same would require each one to be about 12
digits in length. It would then difficult to remember 1 2 Speed
digits file number as a file name.
Bandwidth and latency arc two metrics to measure
In case of writing the data on a removable cartridge on
one machine and then use the cartridge in another
machine, the problem will become more complex. If is steady is the average data rate in a huge transfer-
both machines are same and with same kind of
spectfically, the number o f bytes divided by the transfer
removable drive, then only trouble is to find
time. The effective bandwidth considers the average
contents and data layout on the cartridge. If both

many other problems can come up.


a j u k e b °r > an cartridge switching time in
Machines with different architectures have their data
representation in different formats. Usually in modem Ihe suslained ba
"d»idih u cUeuhted as Se
operating systems, this name-space problem k d e
d w“ "’" — °r d.u and ». I( f„l«
vedandaddressedforremovaHemedTX 7
on applications and users to discover how to acceT" d
dm. I. .. . » usually a bandwidth of a
d
„ “,“

It for fastest; it js

e r
per 30 °
y is second melrics ,0
Z’ 8
* 1
With
Storage m. ment
' tf hich tok performance is ft Pa _ inr
'1 ec ,. “* tow CM1
>s in operation, then “ ’ to *» In ot
many removable
58
sidcraWy r ffaremo i late „ c y *Mf «***
s
* has ; dncs
. cost ncr m .
£balg ed. then dnve must stop 5pj I. n £ ,M
’"’ttoitad. ?°' h> more °*
ann !S required to s w i «be has this i m D r n lhlul four
orders of
then the dnve shou!d spin * ** £ - No only th for memory
N ftntt
prim ° 8 Ude.
11* *"“>**! tit **
b> ract
ont-access time within one longer
As
£
n
" °' ouoTtt? mwt co“ ,y °'ra <i‘’k
pitching disks in a jukebox leads . re * ' *"«•• more megabyte baa
a con
W h performance penalty. >ParatiVcl
p
'tSAtfove Jre " •
jlie robotic-arm time is same f or d - , drive i s 2Oin ° * f lape
cartridge without
Befo ,0
etching the tape, old tape requires re /’*' * small- and med ** n “”y sa
™ • A» » result,
S 11 Ub ies ha ; a w
ran * e -iected ' The d n « required fo r J ng
it "W cost than disk™ ™ '" «h “
Opera landKk
sb out 4 minutes. Several seconds are "™ is — - *yste™ With equal capacity.
Uired
loading of new tape in drive to get ? after **** S* **"'* Topk !
Swap-Snnce Munagement
,0
prepare for I/O. In case of perfomtanc e’o
drivcs
a jukebox, the bandwidth and latency I t
rcasonable
Q.
Bul seem to have a terrible bottleneck °°
olft'n «*ap Space in detail
r Reliability
e agemen lo |evel
the opera btl g ™ '" *- job performed by
, Reliability is important metric tn
10 hold all >h/„. ™ S ma
y » •>« sufficient to memOT
n
performance. Removable. magnetic di sks “’fcasure
„ e ,o the
disk Rnarp J*** 5* 8 simultaneously, virtual memory uses
S aCCCSsing the disk ,s
extent is less reliable than are fixed hard dixk, > case
"* access ng the
k mam memory, using swap much slower than
space considerably
uisKs. in
s
Of removable disks, the cartridge is probab | y TOre system performance. The main aim towards the
exposed to damaging environmental conditions; for Jign, and implementation of swap space is to offer the
best throughput for the virtual memory system.
example, dust, changes in temperature and humidity,
and mechanical forces like shock and bending. 5.3.10(A) Swap-Space Use

Optical disks are very much reliable. In this disk, layer The use of swap space is carried out in different ways
storing the bits i s protected by a transparent plastic or by various operating systems. The way operating
system uses swap space merely depends on the type of
glass layer. The reliability of magnetic tape depends on
the memory -management algorithms in use. Such as,
type of drive and it differs largely. Some low-cost Swap space may hold complete process image, with
drives wear out tapes if used for few number of uses . code and data segments.
Whereas, some other type of drives permits millions of Paging system may store the swapped out pages from
reuses. The magnetic-tape drive head is a weak spot main memory. The needed size of swap space differs
based on the amount of physical memory, the amount
compared to magnetic - d i s k head.
of virtual memory it is backing, and the manner in
T
Cost which the virtual memory is used. It can range between
few megabytes of disk space to gigabytes.
It is better to keep sufficient amount of swap space that
consider. Removable media lower the overall storage is required. Otherwise operating system will be forced
cost. Though it is somewhat expensive to ere to abort the processes as system may run out of swap
removable cartridge, the cost per gigabyte of removable space There can be chances of system crash due to its
running out of swap space.
storage may well be lower than the cost per gigabyte o
a hard disk. The reason is, the cost of one d---------


5-50 (ir sents a significant fraction


ratinn System (MU - _ The — m . Computers operate with
the use o f
rOt
Many upereting systems are * devices. As we know that it include *
kinds m sks tapes), transmission devices (tet
multiple swap spaces. . „; ac ed
maintained on separate disks- A s a res ■ device* and h0 man-interface devices
wapp,ne
o„ the VO system by P«S'"8 ' use)
k ri.®’ -
- a KA) I/O Devices
5.3.10(0) Swap-Space Location
o,>/OdeMicro
' i ! - _
A swap space can part of the normal Hie system,
can be in a separate disk partition. If it is J u s l 0 / _ n . ral |y when computer system uses VO devices «
large file in the file system, normal file-system routines "devices can be grouped into three categories -
can be used to create it, name it, and allocate its space. „ «“ d « b k ’ “ * SUi,ab ' e C mrnuni
° «Un g
This scheme is simple to implement but is not efficient. with the computer user.
In this approach, navigation of direc toiy structure and
the disk-allocation data structures consumes more time Examples P™ ters :
ternunals
’ vid
“ display,
and also extra disk accesses are needed. External
keyboard, and mouse.
fragmentation can very much increase swapping times Machine readable : It is suitable for communicating

process image. Examples Disk drives,


: USB keys, se nsors
Instead, swap space can be created in a separate raw controllers, and actuators.
partition. In this approach , a separate swap-space Communication : It is suitable for communicating
storage manager allocates and reclaims the blocks from with remote devices.
the raw partition. The algorithms used by swap-space
Examples : Digital line drivers and modems.
manager are better in terms of speed. For these
Computers system operates with many kinds o f
devices. It includes storage devices (disks, tapes)
(here is frequent access to the swap space. As data
transmission devices (network cards, modems), and
remains for short period in swap space, internal
human -in terface devices (screen, keyboard, mouse)
fragmented is accepted.
A device communicates with a computer system by
This is not the case with file system. Adding extra swap
sending signals over a cable or even through the air
space needs repartitioning the disk, this requires to
The device communicates with the machine via a
move the other file-system, partitions or destroying
connection point termed a port (for example, a serial
them and restoring them from backup or adding another
port). I f one or more devices use a common set of
swap space in another place, Linux supports both I

When device X has a cable that p| u g s into Y

Syllabus Topic r 1/0 systems and device Y has a cable that plugs into device Z, and

5.4 l/Q systems


ment is called a daisy chain
a7Z u DUS. -“ >y w
5.4,1 Overview
5.4d(B)Differences between I/O Devices

dif,eren Parameters
SZ. ' distinguish 1/0
Basically i t should have lo igsu . ' Devices can be disti
«ces, catch i Bfcmipts . J, the mguished with respect to following
parameters.
Provide an jBterface » should 1. Data transfer rate
3. Control r - * c
2- Device application
plexiiy 4. Data transfer unit
5. Data
representatiion
Errors

<«pp ue
’‘‘ o n ! T h e a
PPHeati on * tho -1 o,
dev,CeS VaneS
n-dUh | ...... Management
“ the tom „
> complexity : The c °f Por ln
«C ' “Plex C* Wrtal Pon
* cW
« “ b '’ u >
tbe devices are also varies. qui
■ “ “ ni ‘ hi Thc '"‘“•faring of A °f theT “ the X7 my
' hC, * i " 8 '
of by eS
, ' “ Characters or >n l a r . ° data
Op h. .
1
r«Pr€sentfltion : The
d a ta f
6
nCodil
, varies different devices. >8 me tl _
' rs = Thc
characters of errors v
PWatei re i s
« ten are ,o
«’<> <Wn The
. . «— •
, g e t o another. ** bro V n
to es 4n “toe of t h e . °™ > 1 '»F1F0« bytea.
aU
chitB
« the M ““
Zsj"»fo Topic : Overview MO - further ttan
< I/O Hardware of ir
’’ sem m ChipS qan store 8Cv
W and output a f ° eral bytes
’ /Device Controllers t0
- tOre
a small bi™ / FlF0 chip WOrk like
buffer
ntil thc or
those dau “ host
* Kput/outiw' U "“ S COnsist of a
mechanical com
5 4
' •» ele " c0
"~t. A co
- Wolling
section of electronics
Dradevice-
that can operate
P
° R’
r

a
,s
a
h US ,
i
A serial-port controller is an example of a ■ Prol0C0 fot
Tc' ' totoraction between the host
ice controller. It is a single chip in the comput ’T, handshak- U nlricate
Shaking notion is simpk .' ’ but basic
controls the signals on the wires of a serial port. The controller indicates its slate through the busy bit in
The implementation of SCSI bus controller is done status register. (Recall that to set a bit means to
most of the time as a circuit board which separately *nte a 1 into the bit, and to clear a bit mean to write a
plugged into the computer. 0 into it.)
When controller is busy in operations, it sets a busy bit
It normally includes a CPU, microcode, and some
This busy bit is cleared when it is not working and
private memory to facilitate it to process the SCSI
ready to accept the next command. The host signals its
protocol messages. The S C S I b u s controller is integral
desires by means of the command-ready bit in the
part of the some of t h e devices.
command register.
Total four registers are present in I/O port. These are This command-ready bit is set by host when a
as follows : command is available for the controller to execute. ■
o Status register For this example, the host writes output through a port,

o Control register coordinating with the controller by handshaking as follows.

1. Until bit becomes clear, the host constantly reads the


o Data-in register
busy bit.
o Data-out register
2, In the command register, the host sets the write bit and
The host can read the hits in status register. These bits
then writes a byte into the data-out register.
show states like :
3 Command-ready bit is set by host,
o The current command is completed or not 4 When the controller notices that the command-ready bi
0
A byte exist in d a t a - i n register or not which i is set, it sets the Busy.
be read 5 After this, the command register is read by control!
and sees the write command.
0
If device error present

host writes the control register to '


In order to get the byte, it reads


and perform the I/O to the device.
Iropna.e degree of necess -
The different bits are then cleared as : The controller
clears the command-ready bit, clears the error bit in t e modem computer hardware, CPU and the
status register to specify that the device I/O succeeds controller hardware offers alt above three fea tUn . s *
and clears the busy bit to specify that it is finished.
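The whole exchange above can be written down as a few lines of code. In the following C sketch the register layout and bit names are invented, and a tiny software model stands in for the controller so that the handshake can actually be run and traced; a real driver would poke memory-mapped or port-mapped registers instead.

/* Sketch of the polled (programmed-I/O) output handshake described above. */
#include <stdio.h>
#include <stdint.h>

#define STATUS_BUSY 0x01        /* controller is working on a command      */
#define CMD_READY   0x02        /* host has placed a command               */
#define CMD_WRITE   0x04        /* the command itself: write one byte      */

struct controller {
    uint8_t status, command, data_out;
};

/* Simulated controller: reacts each time the host polls the status register. */
static uint8_t read_status(struct controller *c) {
    if (c->command & CMD_READY) {            /* controller sees command-ready    */
        c->status |= STATUS_BUSY;            /* sets busy and does the device I/O */
        printf("controller: wrote byte 0x%02x to the device\n", c->data_out);
        c->command = 0;                      /* clears command-ready ...          */
        c->status &= (uint8_t)~STATUS_BUSY;  /* ... and busy when it is finished  */
    }
    return c->status;
}

static void polled_write_byte(struct controller *c, uint8_t byte) {
    while (read_status(c) & STATUS_BUSY)     /* 1. poll the busy bit until clear  */
        ;
    c->data_out = byte;                      /* 2. place byte in data-out and     */
    c->command  = CMD_WRITE;                 /*    select the write command       */
    c->command |= CMD_READY;                 /* 3. raise command-ready            */
    while (read_status(c) & STATUS_BUSY)     /* wait for the controller to finish */
        ;
}

int main(void) {
    struct controller uart = {0, 0, 0};
    polled_write_byte(&uart, 'A');
    return 0;
}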
CPUs have two interrupt request lines. One is ttll _
If the status of host is busy-waiting or polling; It reads maskable interrupt, which is reserved for events S1)ch
the status register repeatedly until the busy bit turn out to be unrecoverable memory errors. The second internip , £
js maskable.
the controller and device speed is better, this method seems
to be practical one. If host has to wait for longer period of Itcanw‘-“ — - '
time; it should perhaps go for another task. critical instruction sequences that must not
interrupted. The maskable interrupt is used by deV j
5. 4.2(C) Interrupt Handler controllers to request service. This address is an
Q, What is intemjpt? Explain the tasks carried out by I in a table called the interrupt vector. This Vc€
interrupt haodter. contains the memory addresses of specialized i nterru *
handlers.
The purpose of a vectored interrupt mechanism is
the CPU detects that a controller has asserted a signal reduce the need for a single interrupt handler to se
on the interrupt request line, the CPU saves a small all possible sources of interrupts to determine wk-
WQich
amount of state, such as the current value of the one needs service.
instruction pointer, and jumps to the interrupt-handler
The interrupt mechanism also implements a a
** joi£in of
interrupt priority levels. This mechanism enables th
The interrupt handler finds out the reason of the CPU to defer the handling of low-priority intcrnm.
without masking off all interrupts, and makeT*
this the interrupt handler executes a return from
possible for a high-priority interrupt to preempt
interrupt instruction to return the CPU to the execution
execution of a low-nrioritv
state before the interrupt.
The device controller first affirms a signaJ on a wide
variety of exceptions, such as d
mtemrpt request line. It does so to raise rhe
accessing a protected or nonexistent tnetnoty address

terenupiban.gerforxerviciugihe dev.ee '


ITe imenupt handfer cJears in(cmjpt

="X,TcT„" application to invoke a eme" “


k 1 service The
asynchronous event fn respond to an checks th. - system call
8 appUca,ion
data structure tree' > builds <•
arguments to
and then execute * kernel,
inStlUC,iOn
support for
■n'errupt, o ' X

X xx g «. inten
* h
* B
* j contro|P W ith the ‘0 manage 0,6
°f flow

should have efficient • efficiently, We n d eme ’’ I f ““ disks


are to be used
Previous one completes*1311 V
° “ Soon “
Ue
Wre P ri a te int eW h a n d J e r out to the

50 deWce I( sh
out poij ing ajj (he /, ' °M do
oae deWCeS t0
Consequently, the kem i
raised the int emip( rve which read that com
is implemented bv ! Pletes a disk
y apairof
The i/o statu • interrupt handlers.

,Ce
interrupt


- When the device raises its interrupt at the completion of an I/O operation, the user-level I/O is completed by copying data from the kernel into the application's address space. The kernel then invokes the scheduler, placing the application thread back on the ready queue so that it can be scheduled.
- The fundamental interrupt mechanism thus lets the CPU respond to an asynchronous event, such as a device controller becoming ready for service.
- In a modern operating system, however, more sophisticated interrupt handling is needed. Interrupt service routines must be short, must avoid deadlock, and must use synchronization that limits how long interrupts are disabled.
- In summary, we need the ability to postpone interrupt handling during critical processing, an efficient way to dispatch to the interrupt handler for a device without first polling all the devices to see which one raised the interrupt, and multilevel interrupts, so that the operating system can differentiate between high-priority and low-priority interrupts and can respond with the appropriate degree of urgency.

5.4.2(D) Interrupt Service Routine (ISR)

- Interrupt means an event which invites the attention of the processor on the occurrence of some action at the hardware, or of a software interrupt instruction. In response to an interrupt, the routine or program which is presently running is interrupted and an interrupt service routine is executed.
- The routine is associated with one of the devices, and is called an exception, signal, or trap handler in the case of a software interrupt.
- The processor executes this program, called an interrupt service routine, signal handler, trap handler, exception handler, or device driver, to perform the input or output related to the port or device, or the related device function, on an interrupt; it does not wait and poll for the input to become ready.
- The programmer may register Interrupt Service Routines (ISRs) to handle hardware interrupts. These routines are not independent threads but short routines: when the hardware raises an interrupt, the corresponding ISR is called. The ISR runs on a separate interrupt stack only if the kernel provides one; otherwise, the ISR stack frames are pushed onto the stack of the interrupted thread.
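The exact mechanism for registering an ISR is operating-system specific. As an illustration only, the following is a minimal sketch in the style of the Linux kernel's request_irq() interface; the IRQ number MY_DEV_IRQ and the per-device data are hypothetical placeholders, not values from any real driver.

```c
#include <linux/module.h>
#include <linux/interrupt.h>

#define MY_DEV_IRQ 42              /* hypothetical interrupt line of the device */
static int my_dev;                 /* hypothetical per-device cookie            */

/* ISR: runs in interrupt context, must be short and must not block. */
static irqreturn_t my_isr(int irq, void *dev_id)
{
    /* acknowledge the device, save status; defer longer work elsewhere */
    return IRQ_HANDLED;
}

static int __init my_init(void)
{
    /* register the ISR for the (possibly shared) interrupt line */
    return request_irq(MY_DEV_IRQ, my_isr, IRQF_SHARED, "my_dev", &my_dev);
}

static void __exit my_exit(void)
{
    free_irq(MY_DEV_IRQ, &my_dev); /* unregister the handler on unload */
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");
```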

5.4.2(E) Direct Memory Access (DMA)

- DMA permits the transfer of data directly between an external device and primary memory, without continuous intervention by the CPU. A control unit, the DMA controller, is provided to manage such transfers.
- DMA is used when the operation involves the transfer of a large block of information in a single operation; the transfer of a block between disk and main memory is a typical DMA operation. When used together with an interrupt, the processor is informed only after the whole block of data has been moved.
- Information is transferred in the form of bytes or words, and for this the DMA controller must be supplied with the address of the memory location, the byte count, and control of the bus signals that govern the data transfer. Communication with a device controller is handled through a device driver; device drivers are part of the operating system.
- The operating system provides a simplified view of the device to user applications (e.g., character devices vs. block devices in UNIX). In some operating systems (e.g., Linux), devices are also accessible through the /dev file system.
- In some cases, the operating system buffers data that is transferred between a device and a user-space program (e.g., a network buffer). This usually improves performance, but not always.
- The DMA controller has access to the system bus independent of the CPU and includes a number of registers that can be read or written by the CPU. These registers comprise a memory-address register, a byte-count register, and one or more control registers. The control registers specify the I/O port to use, the direction of the transfer (read from or write to the I/O device), the unit of data transfer (a byte or a word at a time), and the total bytes to transport in one burst.
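To make the register-programming step concrete, the sketch below shows how a driver might set up a memory-mapped DMA controller for one read burst. It is only an illustration: the register addresses (DMA_ADDR, DMA_COUNT, DMA_CTRL) and the command bits are hypothetical, not those of any particular chipset, and a real driver would sleep until the completion interrupt instead of polling.

```c
#include <stdint.h>

/* Hypothetical memory-mapped DMA controller registers. */
#define DMA_ADDR   ((volatile uint32_t *)0xFFFF0000u)  /* destination address */
#define DMA_COUNT  ((volatile uint32_t *)0xFFFF0004u)  /* bytes to transfer   */
#define DMA_CTRL   ((volatile uint32_t *)0xFFFF0008u)  /* command / status    */

#define DMA_DIR_DEV_TO_MEM 0x1u   /* hypothetical "read from device" bit */
#define DMA_START          0x2u   /* hypothetical "go" bit               */
#define DMA_DONE           0x4u   /* hypothetical completion status bit  */

/* Program the controller for one burst (step 1 of the DMA transfer). */
static void dma_read_block(uint32_t phys_dst, uint32_t nbytes)
{
    *DMA_ADDR  = phys_dst;                       /* where to place the data */
    *DMA_COUNT = nbytes;                         /* how much to move        */
    *DMA_CTRL  = DMA_DIR_DEV_TO_MEM | DMA_START; /* direction + start       */

    while ((*DMA_CTRL & DMA_DONE) == 0)          /* illustrative busy-wait; */
        ;                                        /* real drivers block here */
}
```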


Fig. 5.4.1 : DMA transfer operations (1. CPU programs the DMA controller; 2. DMA controller requests the transfer; 3. Data transferred to memory; 4. Interrupt when done)

- Without DMA, a disk read occurs as follows. Initially the controller reads the block from the drive serially, bit by bit, until the complete block is in the controller's internal buffer. Next, it calculates the checksum to confirm that no read errors have occurred. Then the controller causes an interrupt. After this, the operating system can read the disk block from the controller's buffer a byte or a word at a time, in a loop in which one byte or word is read from the controller on each iteration.
- With DMA, the process is different. First the CPU programs the DMA controller by setting its registers, so that it becomes aware of what to transfer where (step 1 in Fig. 5.4.1). It also instructs the disk controller to read data into its internal buffer and confirm the checksum. DMA can start only when valid data is in that buffer.
- The DMA controller then begins the transfer by issuing a read request over the bus to the disk controller (step 2). The disk controller does not know and does not care whether the read request came from the CPU or from a DMA controller.
- The memory address to write to is on the bus' address lines, so when the disk controller fetches the next word from its internal buffer, it knows where to write it. The write to memory is another standard bus cycle (step 3).
- When the write ends, the disk controller sends an acknowledgement to the DMA controller; the transfer is repeated word by word until the whole disk block has been copied to memory, after which the controller interrupts the CPU. Fig. 5.4.1 shows the DMA operation.

Syllabus Topic : Application I/O Interface

5.4.3 Application I/O Interface

- The application I/O interface enables I/O devices to be treated in a standard and uniform way. In the kernel, the device-driver layer hides the differences among device controllers from the I/O subsystem of the kernel, much as the I/O system calls encapsulate the behavior of devices in a few generic classes that hide hardware differences from applications.
- Making the I/O subsystem independent of the hardware simplifies the job of the operating-system developer and also benefits the device-hardware manufacturers.
- Devices vary in many dimensions, as follows.
  o Character-stream or block : A character-stream device transfers bytes one by one, whereas a block device transfers a block of bytes as a unit.
  o Sequential or random access : A sequential device transfers data in a fixed order determined by the device, whereas the user of a random-access device can instruct the device to seek to any of the available data storage locations.
  o Synchronous or asynchronous : A synchronous device performs data transfers with predictable response times. An asynchronous device exhibits irregular or unpredictable response times.
  o Sharable or dedicated : A sharable device can be used concurrently by several processes or threads; a dedicated device cannot.

  o Speed of operation : Device speeds range from a few bytes per second to a few gigabytes per second.
  o Read-write, read only, or write only : Some devices perform both input and output, but others support only one data transfer direction.

5.4.3(A) Block and Character Devices

- The block-device interface holds the essential aspects needed for accessing disk drives and other block-oriented devices. The device must recognize commands such as read() and write(); if it is a random-access device, it must also have a seek() command to specify which block to transfer next. These devices are normally accessed by applications through file-system interfaces.
- The operating system itself, as well as special applications such as database-management systems, may prefer to access a block device as a simple linear array of blocks. In this mode each application carries out its own buffering, and control of the device is passed directly to the application. Such access, which steps the OS out of the way, is called raw I/O; the OS is not asked to perform these operations. UNIX also allows a mode of operation on a file that disables buffering and locking, called direct I/O.
- It is advantageous to have memory-mapped file access layered on top of block-device drivers. A memory-mapped interface offers access to disk storage via an array of bytes in main memory. The system call that maps a file into memory returns the virtual-memory address that contains a copy of the file. The real data transfers are carried out only when required to satisfy access to the memory image. As the transfers are handled by the same mechanism as that used for demand-paged virtual-memory access, memory-mapped I/O is efficient.
- The keyboard is an example of a device that is accessed through a character-stream interface. The basic system calls in this interface enable an application to get() or put() one character. On top of this interface, libraries can be built that offer line-at-a-time access with buffering and editing services.

5.4.3(B) Network Devices

- Network I/O and disk I/O differ significantly in terms of performance and addressing characteristics, so the network I/O interface provided by most operating systems differs from the interface used for disks.

5.4.3(C) Clocks and Timers

- Most computers have hardware clocks and timers that provide three basic functions : provide the current time, provide the elapsed time, and cause operation X at time T.
- The operating system makes heavy use of these functions; time-sensitive applications also make heavy use of them. Unfortunately, the standardization of the system calls that implement these functions is not done across operating systems.
- The hardware to determine elapsed time and to trigger operations is called a programmable interval timer. It can be set to wait a definite amount of time and then cause an interrupt. The setting can be for generating the interrupt once, or for repeating the process to cause periodic interrupts.
- The scheduler uses this mechanism to generate an interrupt that preempts a process at the end of its time slice; the disk I/O subsystem uses it to invoke the periodic flushing of dirty cache buffers to disk; and the network subsystem uses it to cancel operations on failures. The operating system also provides libraries that allow user processes to use timers.
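As a small illustration of such a timer service at the user level, the sketch below uses the POSIX setitimer() call and a SIGALRM handler to receive periodic timer interrupts; the one-second period and the count of five ticks are arbitrary choices for the example.

```c
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks;

static void on_alarm(int sig)
{
    (void)sig;
    ticks++;                     /* count timer expirations */
}

int main(void)
{
    struct sigaction sa = {0};
    struct itimerval tv = {0};

    sa.sa_handler = on_alarm;    /* deliver SIGALRM to this handler */
    sigaction(SIGALRM, &sa, NULL);

    tv.it_value.tv_sec = 1;      /* first expiration after 1 second */
    tv.it_interval.tv_sec = 1;   /* then repeat every 1 second      */
    setitimer(ITIMER_REAL, &tv, NULL);

    while (ticks < 5)            /* wait for five periodic ticks    */
        pause();

    printf("received %d timer interrupts\n", (int)ticks);
    return 0;
}
```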

5.4.3(D) Blocking and Nonblocking I/O

Q. Differentiate between blocking and nonblocking I/O.
Q. Explain issues related to performance of I/O system.

- When an application issues a blocking system call, the execution of the application is suspended: the application is moved from the run queue to the wait queue. After the system call finishes, the application is moved back to the run queue, where it is eligible to resume execution. When it resumes execution, it receives the values returned by the system call.
- The physical actions performed by I/O devices are usually asynchronous; they take a varying or unpredictable amount of time. Nevertheless, most operating systems use blocking system calls for the application interface, because blocking application code is easier to understand than non-blocking application code.
- Apart from blocking I/O, some user-level processes require non-blocking I/O. One example is a user interface that receives keyboard and mouse input while processing and displaying data on the screen.
- Overlapping of execution with I/O can also be achieved by writing a multithreaded application: while some threads carry out blocking system calls, others carry on execution. The Solaris developers used this technique to implement a user-level library for asynchronous I/O, freeing the application writer from that task. Some operating systems provide non-blocking I/O system calls.
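As an illustration of the non-blocking style, the sketch below marks a POSIX file descriptor as non-blocking with fcntl() and handles the EAGAIN result that read() returns when no data is available yet; it is a minimal example, not tuned for any particular device.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[128];

    /* Put standard input into non-blocking mode. */
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);

    for (;;) {
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        if (n > 0) {
            printf("got %zd bytes\n", n);      /* some data has arrived      */
            break;
        } else if (n == 0) {
            printf("end of input\n");          /* end of file                */
            break;
        } else if (errno == EAGAIN) {
            /* No data yet: the call returned promptly instead of blocking,
             * so the program is free to do other work before retrying.    */
            usleep(10000);
        } else {
            perror("read");                    /* a real error occurred      */
            break;
        }
    }
    return 0;
}
```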
- If a call is non-blocking, it does not stop the execution of the program for an extensive time. It returns promptly, with a return value that shows the number of bytes transferred.
- The alternative to a non-blocking system call is an asynchronous system call. An asynchronous call returns immediately, without waiting for the I/O to complete; the application continues to execute its code. The completion of the I/O at some future time is communicated to the application, either through the setting of some variable in the address space of the application, or through the triggering of a signal or software interrupt, or through a call-back routine that is executed outside the linear control flow of the application.
- The difference is that a non-blocking read() returns without delay with whatever data is available, whereas an asynchronous read() requests a transfer that will be performed in its entirety but will complete at some future time.

Syllabus Topic : Kernel I/O Subsystem

5.4.4 Kernel I/O Subsystem

Q. Explain various services provided by kernel related to I/O.

Following are the services related to I/O provided by the kernel :

(A) I/O Scheduling
(B) Buffering
(C) Caching
(D) Spooling and Device reservation
(E) Error handling

Fig. C5.13 : Services provided by kernel

5.4.4(A) I/O Scheduling

- I/O scheduling involves deciding a good order in which I/O requests should be executed. Scheduling can enhance the overall system performance. Processes also share device accesses in a fair manner due to scheduling, and the average waiting time for I/O to complete can be minimized.
- There is a wait queue of requests maintained for each device. The I/O request of an application is placed on the wait queue for that device when the application issues a blocking I/O system call. All these requests are then arranged in a manner such that the overall system efficiency and the average response time experienced by applications are improved.
- The operating system tries to give equal service to all applications; delay-sensitive requests may be serviced on the basis of priority. For asynchronous I/O, the kernel should keep track of many I/O requests simultaneously. Hence, the OS might attach the wait queue to a device-status table.
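The per-device wait queue described above can be kept in an order that reduces seek distances between successive requests. A minimal sketch follows, assuming a simple linked list of pending requests sorted by block number; the types are illustrative, not taken from any particular kernel.

```c
#include <stdlib.h>

/* Illustrative I/O request and per-device queue, sorted by block number. */
struct io_request {
    long block;                 /* disk block the request refers to */
    struct io_request *next;
};

struct device_queue {
    int busy;                   /* device-status-table style flag   */
    struct io_request *head;    /* pending requests, kept sorted    */
};

/* Insert a request so the queue stays sorted by block number,
 * a simple way to shorten head movement between successive I/Os. */
void enqueue_request(struct device_queue *q, long block)
{
    struct io_request *r = malloc(sizeof *r);
    struct io_request **p = &q->head;

    r->block = block;
    while (*p && (*p)->block < block)   /* find the sorted position */
        p = &(*p)->next;
    r->next = *p;
    *p = r;
}
```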

- The device-status table keeps an entry for each I/O device; it records the state of the device (idle or busy), the type of request, and other parameters. Entries are maintained per device and per need, and the table is managed by the kernel.
- Scheduling is one of the ways in which the I/O subsystem improves the performance of the overall system. Another way is by using storage space in main memory or on disk, via techniques called buffering, caching, and spooling.

5.4.4(B) Buffering

- A buffer stores data while it is transferred between two devices or between a device and an application. One reason for buffering is to cope with a speed mismatch between the producer and the consumer of a data stream. A second reason for buffering is to adapt between devices that have different data-transfer sizes; such disparities are particularly common in computer networking, where buffers are used heavily for fragmentation and reassembly of messages. A third use of buffering is to support copy semantics for application I/O.
- Buffering needs to be considered for block and character devices for a diversity of reasons. For example, consider a process that reads data from a modem one character at a time. Each incoming character causes an interrupt. The interrupt service procedure gives the character to the user process and unblocks it. After putting the character somewhere, the process reads a further character and blocks another time.
- The problem with this approach is that the user process needs to be started up for every arriving character. Permitting a process to run again and again for such short runs is inefficient, so this design is not a good one.
- An enhancement on the above approach is to give the user process an n-character buffer in user space and have it do a read of n characters. The interrupt service procedure places arriving characters in this buffer until it fills up. After this, it wakes up the user process. This proposal seems to be more efficient than the previous one. The limitation of this scheme is that the buffer may be paged out when a character arrives, so the buffer has to be locked in memory; and if many processes start locking pages in memory, the pool of available pages will reduce in size and performance will degrade.
- In another approach, a buffer can be created inside the kernel and the interrupt handler places the arriving characters there; when the buffer fills, it is copied to user space in a single operation.
- An improvement over this is to have two kernel buffers. The first buffer is used up to the point when it fills; then, while it is being copied to user space, the second one is used to collect new input. In this way the two buffers are used alternately: while one is being copied to user space, the other is accumulating new input. A buffering scheme like this is called double buffering. Double buffering is also important on output.
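A minimal, runnable sketch of the double-buffering idea is shown below; here stdin stands in for the device and stdout for the consumer, whereas in a kernel the fill and the drain of the two buffers would overlap under interrupt control.

```c
#include <stdio.h>

#define BUF_SIZE 512

/* Two buffers used alternately: one fills while the other is drained. */
static char bufs[2][BUF_SIZE];

int main(void)
{
    int filling = 0;                          /* buffer currently being filled */
    size_t len = fread(bufs[filling], 1, BUF_SIZE, stdin);

    while (len > 0) {
        int draining = filling;               /* buffer that has just filled   */
        filling ^= 1;                         /* switch the roles of the two   */

        /* In a real kernel these two steps overlap: the device fills one
         * buffer under interrupt control while the other is copied out.  */
        fwrite(bufs[draining], 1, len, stdout);
        len = fread(bufs[filling], 1, BUF_SIZE, stdin);
    }
    return 0;
}
```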
5.4.4(C) Caching

- A cache is a region of fast memory that holds copies of data. Access to the cached copy is more efficient than access to the original. For example, the disk stores the instructions of the currently executing process; these instructions are cached in physical memory, and copied again into the CPU's secondary and primary caches.
- A buffer holds the only existing copy of a data item, whereas a cache holds, on faster storage, a copy of a data item that resides in another place. Sometimes a part of the memory area can be used for both buffering and caching. For efficient scheduling of disk I/O, the operating system uses buffers in primary memory to hold disk data. The I/O efficiency for files shared by many applications is improved by using these buffers, maintained in main memory, as a cache; files that are written and then speedily reread also make use of this buffer cache.
- For a file I/O request, the kernel first checks the buffer cache in main memory to avoid a physical disk I/O. In addition, disk writes are collected in the buffer cache for several seconds, which helps to build up large transfers.

5.4.4(D) Spooling and Device Reservation

- Some devices, such as a printer, cannot accept interleaved data streams. A buffer that holds output for such a device is called a spool. A printer can print only a single job at a time, but several applications may want to print their output concurrently without having their output mixed together.


- The operating system solves this problem by intercepting all output to the printer. Each application's output is spooled to a separate disk file and queued. When an application finishes printing, the spooling system queues the corresponding spool file for output, and the queued spool files are copied by the spooler to the printer one by one.
- In some operating systems, spooling is managed by a system daemon process; in others, it is managed by an in-kernel thread. In either case, the operating system provides a control interface that allows users and system administrators to display the queue, to remove jobs prior to those jobs printing, to suspend printing while the printer is serviced, and so on.
- Some devices cannot usefully manage concurrent device access. The operating system therefore provides facilities for coordination, so that concurrent device access is coordinated among the applications that share the device.
5.4.4(E) Error Handling

- If an operating system uses protected memory, it can protect against many types of hardware and application errors. Hence, a complete system failure is avoided if a minor failure occurs. There can be a transient reason for devices and I/O transfers to fail, such as a network becoming congested. A permanent reason for such a failure can be, for example, a disk controller becoming faulty.
- In case of a transient failure, operations can be repeated to recover from the failure. But it is not possible to recover in case of a permanent failure.
- An I/O system call returns a single bit as a status indicating success or failure. UNIX uses an integer variable named errno to return an error code; about a hundred values are used to indicate the nature of the failure.
the failure. Following are the steps involved in typical life cycle of
5.4.5 I/O Protection a blocking read request. It shows I/O operations

to disturp the operation of system. To prevent the users


fiom performing illegal I/O operations, all I/O
instructions are defined as privileged instructions. Kernel code related to system call verifies the
User program has to execute system call in order to parameters for correctness. If required data is in the
buffer cache, then it is returned to the process which
a
o7Z' ne r "" 8 SyStem Cany 0Ut 1/0 On

8 SyStem n,nning m
completes the I/O operation.
ehZ JJeth ' “ ° nitOT
3. If required data is in buffer cache then a physical
not
trie
— I/O requested.
must carri
** ed out. The requesting process is then

4. Eventually, the I/O subsystem sends the request to the device driver, which sends commands to the device controller, and the controller operates the device hardware. The driver may poll for status and data, or it may have initiated a DMA transfer; in that case the controller generates an interrupt after completion of the transfer.
5. The correct interrupt handler receives the interrupt through the interrupt-vector table, stores any necessary data, signals the device driver, and returns from the interrupt.
6. The device driver gets the signal, determines which I/O request has finished, finds the status of the request, and signals the kernel I/O subsystem that the request has been accomplished.
7. The kernel transfers the data or return codes to the address space of the requesting process, and the process is transferred from the wait queue to the ready queue by the kernel.
8. As the process is in the ready queue, it is now unblocked. When the process gets the CPU, it resumes execution at the completion of the system call.
Syllabus Topic : STREAMS

5.4.6 STREAMS

Q. What is STREAMS? Explain.

- STREAMS is a mechanism offered by UNIX System V. This mechanism allows an application to assemble pipelines of driver code dynamically. A stream is a full-duplex link between a device driver and a user-level process. It consists of a stream head, a driver end, and zero or more stream modules. The stream head interfaces with user processes, the device is controlled by the driver end, and zero or more stream modules exist between the stream head and the driver end, as shown in Fig. 5.4.2.

Fig. 5.4.2 : The STREAMS structure

- Each stream module, the stream head, and the driver end contain a pair of queues : a read queue and a write queue. Message passing is used to transfer data between queues, and messages are exchanged only among the queues of adjacent modules. The ioctl() system call is used to push modules onto a stream.
- The write() or putmsg() system call is used by a process for writing data to the device. The write() system call writes raw data to the stream, while putmsg() permits the user process to specify a message. For either system call the stream head copies the data into a message and sends it to the queue of the next module in line. This copying of messages carries on until the message is copied to the driver end and so to the device. In the same way, the read() or getmsg() system call is executed by a process to read data from the stream head.
- STREAMS I/O is asynchronous (or non-blocking) except when the user process communicates with the stream head. The driver end also has a read and a write queue, and it is necessary that the driver end respond to interrupts, for example one triggered when a frame is ready to be read from a network device.

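The STREAMS calls can be used roughly as sketched below. This is a hedged illustration for a System V platform (STREAMS is not available on Linux); the device node /dev/example_stream and the module name "example_mod" are placeholders, not real names.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stropts.h>     /* STREAMS interface: putmsg(), getmsg(), I_PUSH */
#include <unistd.h>

int main(void)
{
    /* Hypothetical STREAMS device node. */
    int fd = open("/dev/example_stream", O_RDWR);
    if (fd == -1) { perror("open"); return 1; }

    /* Push a (hypothetical) processing module between head and driver. */
    if (ioctl(fd, I_PUSH, "example_mod") == -1)
        perror("I_PUSH");

    char out[] = "hello";
    struct strbuf data = { .maxlen = 0, .len = sizeof out, .buf = out };

    /* Send a data message downstream from the stream head. */
    if (putmsg(fd, NULL, &data, 0) == -1)
        perror("putmsg");

    char in[128];
    struct strbuf reply = { .maxlen = sizeof in, .len = 0, .buf = in };
    int flags = 0;

    /* Read the next message that arrives at the stream head. */
    if (getmsg(fd, NULL, &reply, &flags) >= 0)
        printf("got %d bytes\n", reply.len);

    close(fd);
    return 0;
}
```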

Syllabus Topic : Performance

5.4.7 Performance

- The performance of the system is affected by I/O, and hence I/O is an important factor to consider. The CPU has to execute device-driver code and it also has to carry out scheduling of processes in a fair and efficient manner. Also, due to I/O, context switches occur, which burden the CPU and its hardware caches.
- If there are some inefficiencies in the interrupt-handling mechanisms in the kernel, then I/O can expose them. Also due to I/O, the memory bus gets heavily loaded during data copies between controllers and physical memory, and again when copies take place between kernel buffers and application data space.
- Interrupt handling requires carrying out several operations, so it is an expensive task. During completion of I/O, the process gets blocked; as a result of this blocking, context switching is required, which leads to a certain overhead on system performance.
- Several context switches and state switches take place when there is communication between machines in a network. For example, during a remote login, many context and state switches take place.
- Following are the measures to improve I/O efficiency.
1. Lessen the number of context switches.
2. Reduce the frequency of copying the data in memory while it passes between device and application.
3. Use large transfers, smart controllers, and polling to lessen the frequency of interrupts.
4. Use DMA-knowledgeable controllers or channels to relieve the CPU from simple data transfer, in order to improve concurrency.
5. Move processing primitives into hardware, to permit their operation in device controllers in parallel with CPU and bus operation.
6. Try to keep a balance in performance related to memory, CPU, bus and I/O: if one area gets overloaded, it will cause another area to be idle.

5.5 Exam Pack (University and Review Questions)

Syllabus Topic : File System - File Concept
Q. Explain various file attributes in brief. (Refer section 5.1.2)
Q. Explain various file operations in brief. (Refer section 5.1.3)
Q. Write short note on File Types. (Refer section …)
Q. Explain the different techniques to structure the files. (Refer section 5.1.6)

Syllabus Topic : Access Methods
Q. Explain different file access methods. (Refer section 5.1.7) (10 Marks) (June …)
Q. Write short note on File Access (sequential and direct). (Refer section 5.1.7(1))
Q. Write short note on File Access (Random access). (Refer section 5.1.7(2))

Syllabus Topic : Directory Structures
Q. Explain different methods for defining the logical structure of a directory.
Q. Explain different directory operations.

Syllabus Topic : File Sharing
Q. Write note on "access rights". (Refer section 5.1.13)
Q. Explain issues related to file sharing. (Refer section 5.1.13(A))
Q. Explain different access rights to the file. (Refer section 5.1.13(A))
Q. Explain in detail operation of remote file system. (Refer section 5.1.13(B))

Syllabus Topic : Protection
Q. Explain file system protection in detail. (Refer section 5.1.14)

Syllabus Topic : File System Implementation
Q. Explain file system structure. (Refer section 5.2.1)

Syllabus Topic : Implementing File System
Q. Explain the implementation of file system in detail. (Refer section 5.2.2)

Q. Write short notes on virtual file system. (Refer section 5.2.2(B))

Syllabus Topic : Directory Implementation and Allocation Methods
Q. Explain the different file allocation methods with reference to file systems. (Refer section …) (10 Marks)
Q. Explain contiguous file allocation technique with its advantages and disadvantages. (Refer section 5.2.4(A))
Q. Explain linked list file allocation with its advantages and disadvantages. (Refer section 5.2.4(B))
Q. Explain indexed file allocation with its advantages and disadvantages. (Refer section …)
Q. Explain … in detail. (Refer section 5.2.4(D))
Q. How i-node is used to allocate the file? (Refer section 5.2.4(E))

Syllabus Topic : Free Space Management
Q. … (Refer section 5.2.5)
Q. … (Refer section 5.2.5(A))
Q. Describe linked list of disk blocks method. (Refer section 5.2.5(B))

Syllabus Topic : Efficiency and Performance
Q. Explain various techniques to improve efficiency and performance of secondary storage. (Refer section 5.2.6)

Syllabus Topic : Recovery
Q. Explain file system recovery in detail. (Refer section 5.2.7)

Syllabus Topic : NFS
Q. Explain operation of network file system in detail. (Refer section 5.2.8(A))
Q. Explain the layered structure of NFS. (Refer section 5.2.8(C))

Q. Using the following disk scheduling algorithms solve the given example : SSTF, FCFS, SCAN, C-LOOK, … (10 Marks) (Nov. 2015) (Dec. 2016) (Dec. 2014)
Q. Example 5.3.10 (10 Marks) (May 2016) (June 2015)
Q. Example 5.3.12 (10 Marks) (Dec. 2015)

Syllabus Topic : RAID Structure
Q. Explain various RAID levels. (Refer section 5.3.7(A))

Syllabus Topic : Tertiary-Storage Structure
Q. Explain various Tertiary-Storage Devices. (Refer section 5.3.9(A))

Syllabus Topic : Swap-Space Management
Q. Explain management of the swap space in detail. (Refer section 5.3.10)

Syllabus Topic : I/O systems
Q. Explain different types of I/O devices. (Refer section 5.4.1(A))
Q. Explain different parameters to distinguish I/O devices. (Refer section 5.4.1(B))

Syllabus Topic : Overview of I/O Hardware
Q. Write note on … (Refer section 5.4.2(B))
Q. What is interrupt? Explain the tasks carried out by it. (Refer section …)
Q. Explain steps in DMA transfer. (Refer section 5.4.2(E))

Syllabus Topic : Application I/O Interface
Q. … (Refer section 5.4.3(D))
Q. Explain issues related to performance of I/O system. (Refer section 5.4.3(D))

Syllabus Topic : Kernel I/O Subsystem
Q. Explain various services provided by kernel related to I/O. (Refer section 5.4.4)

Syllabus Topic : Transforming I/O Requests to Hardware Operations
Q. Explain how I/O request is transformed to hardware operations. (Refer section 5.4.5)

Syllabus Topic : STREAMS
Q. What is STREAMS? Explain. (Refer section 5.4.6)

Distributed Systems

Syllabus : Distributed Operating System : Network Structure and Topology, Communication Structure and Protocols; Distributed File System : Naming and Transparency, Remote file access, Stateful versus Stateless Service, File Replication; Distributed Synchronization : Mutual Exclusion, Concurrency Control and Deadlock Handling.

6.1 Distributed Systems

Q. Define distributed system.

- The development of powerful microprocessors and the invention of high-speed networks are the two major developments in computer technology. Many machines in the same organization can be connected together through a local area network, and information can be transferred between machines in a very small amount of time.
- As a result of these developments, it became easy and reasonable to organize computing systems comprising a large number of machines connected by high-speed networks. The networks of computers are present everywhere; the Internet is composed of many networks, which may be used separately and in combination.
- Over the period of the last thirty years, the price of microprocessors and communications technology has constantly reduced in real terms. Because of this, distributed computer systems appeared as a practical substitute to uniprocessor and centralized systems. The following sections examine the necessary characteristics and the pertinent topics to focus on under distributed systems.

Definition

- A computer network is defined as a set of devices connected together by communication links. These devices include computers, printers and other devices capable of sending and/or receiving information from other devices on the network. These devices are often called nodes of the network. So a computer network is an interconnected set of autonomous computers.
- A distributed system is defined as a set of autonomous computers that appears to its users as a single coherent system. Users of a distributed system feel that they are working with a single system.

Main characteristics of distributed system

Q. What are its characteristics? Explain.

Following are the main characteristics of a distributed system.
- A distributed system comprises computers with distinct architecture and data representation. These dissimilarities, and the ways all these machines communicate, are hidden from users.
- The manner in which a distributed system is organized internally is also hidden from the users of the distributed system.
- The system presents itself to users and applications in a consistent and identical way, in spite of where and when the interaction occurs.
- A distributed system should allow for scaling it up, and it should support availability, so that the system remains available to the users and applications even when parts of it fail.


6.1.2 Motivation In case of failure, other nodes can take OVer


computations to achieve reliability. This failure should
What are objectives behind building rhe distributed
be detected and recovered by system itself. Once the
system?
recovery is done, final computation must result i Q
Following are the objective behind building distributed consistence state.
system.
6.1.2(D) Communication
Objective behind building
distributed system - Machines in network communicate with each other by
exchanging messages. At low level, this
(A) Resource sharing communication takes place through protocol stack
between machines.
(B) Computation speedup
At highest level, applications exchange these messages
Due to such communication, many users can be part of
(D) Communication a single work to be carried out by sharing the
information.
Fig. C6.1 r Objective behind building
By connecting users and resources, it becomes easier to
distributed system
work together and exchange information. The success
of the Internet is due to its straightforward protocols for
Resource sharing offers saving in cost. One printer can exchanging files, mail, documents, audio, and video.
be shared among many users in office instead of having The worldwide spread people can work together by
one printer to each individual user. There can be more means of groupware. Electronic commerce permits us
saving in cost if expensive resources are shared.
to purchase and sell verity of goods without going to
The increase in connectivity and sharing also increases shop or even leaving home.
the security risk and to deal with it is equally important.
Presently, systems offer fewer defenses against 6.2 Types of Distributed Operating
eavesdropping or intrusion on communication, Systems _________________
- A communication can be tracked to construct a favorite
Q. Explain different types of distributed operating
profile of a particular user. This clearly violates
systems.
privacy, particularly if it is done without informing the
user, A allied problem with increased connectivity can In a network, multiple machines may have different
also cause unnecessary communication, for example operating system installed on it. Operating systems for
electronic junk mail, called as spam. Special distributed computers are categorized as :
information filters can be used to select inward
messages based on their content. Types of Distributed
Operating Systems
6.1.2(B) Computation Speedup

- Many big computations can be divided in sub 1 Loosely-coupled systems

computations which can be assigned to different nodes


2. Tightly coupled systems
in network so that work can be carried out in parallel.
- In this case, speed of computation increases. As another t ig. C6.2 : Types of Distributed Operating Systems
example, process may be migrated from heavily loaded
1. Loosely -coup led systems
machine to lightly loaded machine in network.
In a set of computers, each has its own OS and there js
6.1.2(C) Reliability
coordination between operating systems to make their
System does not affeci with failure of one node in own services and resources available to the others. This
network as other nodes are available in system. loosely coupled OS is called as network operating

Scanned by CamScanner
multicomputer systems,
hetei
I Tightly coupled systems

Jf JteCP5 a sinfi,e
’ global V je w M hl ~~~ -----
of the
68 SUCh tiEhtl c c 10 r,ie
' fljtf*® * * ou p I e d °pits •X *• "
h
a lled distributed operating S V v,
-o Whkh Wa hmc 10 Other man
*te th ‘ >
™ ®°Ao*”! » c nJnt,in Thc
„ js useful for the manage, qf
°nnect 10 & on J m
rCqUCsl COmpulcr Watchcs
rnulu ‘heruj FTP
i' ' joiuogeneous multicomputer. bos N" Pro C t c of 1O
” °f This d “ n<1
fo]|
Wln aCm
underlying hardware. This hardware ““ d
]s of
& predefined °n res
P° n d s the «:t
q
processes and details reraa i ns Sh
b get' « commands.
ted by ’ Foster fife fT
puter, °m remo|c
computer to local
Z yllabus Topic ; Put: transfer fi]
10ca1 COrn
computer, Puter t0 remote
Network Operating 3
‘ k
Wdir: Uslfik
re
°W computer. current directory on the

ffusers have NOS installed on machjn


n
either access resources on other mn ,, cy can llB d,recto
™ *e remote
or transfer the data frnm _____ remote
mac 6.2.2
hine t0 their Distributed Operating Systems (DOS)
cvVn machine.

Remote Login local resourc1 '*" 1


' 11 ° pera,m
* ’kstem. access to remote and
UrceS
...?“ '» be I " the same way. Operating
xss migration from one site
, Network operating system allows users to | 0 to another.
remotely- The Internet offers the telnet facing fo" ( A ) Data Migration
remote login. NOS permit users (0 use services
( B ) Computation Migration
available on a particular machine in the network
_<S rocess Migration
Remote login service provided by NOS allows the user
to Jog in remote machine from his/her terminal. 6.2.2(A) Data Migration
User working on machine A can access data (such as a
- Using command for remote copy, user can copy the file
file) that reside at machine B , One method to data
from one machine to other.
migration is to transfer the complete file to machine A.

621(B) Remote File Transfer After this all access to the file is local. When the user
need is completed, a copy of the file if it has been
- NOS offer a mechanism for remote file transfer from modified is sent back to machine B.
one computer to another. Here, each computer has its Although a small change has been made to a large file,
own local file system. I f a user working on computer A all the data needs to be transferred. This method was
wants to access a file on another computer B, then the used in the Andrew file system. This method of file
file must be copied explicitly from the computer B to transfer is inefficient. In other method of data transfer

computer A . File Transfer Protocol (FTP) is .used for only those portions of the file that are actually needed

such transfer i n internet environment. User invokes the for the immediate task gets transferred.

FTP program as: Jf another part of the file is needed later on, another
transfer will take place. When the user no longer wants
ftp name of computer B
access the file, any part.of it that has been modified
After entering the correct username and passwor , he sent back to machine B. The Sun
should connect to the right subdirectory Xylems Network File System (NFS) protocol
M
L method. The Microsoft SMB protocol
wan
required file. Suppose user ‘ ter B
“ SeS • n top of either TCP/IP or the Microsoft
abc.frt then this file must be copi man d.
3150 a
“ ows fi,e sharing
° wr a
to computer A by executing following co
network.
get abc.txt


6.2.2(B) Computation Migration 4. Software preference


The needed software by process may available at only a
- The computation rather than the data can be transfer particular machine, and either the software cannoi be
across the system. This approach is called computation moved, or it is less expensive to move the process.
migration.
-> 5. Data access
- If applications needs to access various large files that
If the huge amount of data requires in the computation
reside al different machines, to obtain a summary of
those files. In this case, it would be mare beneficial and it may be more efficient to have a process run remotely
efficient to access the files at the machine where they than to transfer all the data.
reside and return needed results to the machine that
Syllabus Topic : Network Structure
initiated the computation.
— If the time required to transfer the data is longer than
6.3 Network Structure
the time to execute the remote command, the remote
command should be executed. Distributed systems are built on top of computer
6.2.2(C) Process Migration networks, which are of two types: LANs (I.ocal Area
Networks) and WANs (Wide Area Networks).
Q. What are the reasons behind process migration?
LAN covers one room, building or campus, WAN
Process migration involves transfer of process to other covers a city, country, or whole world. Ethernet is
machine for execution purpose. The whole process, or example of LAN.
portion of it, may be executed at different machines. The Internet is collection of thousands of separate networks
reason for migration is : and can be considered as one WAN.

Reason for migration


6.3.1 Local-Area Networks (LANs)
Q. Explain LAN type of network.
1. Load balancing
Instead of having large mainframe computer, it seemed
2. Computation speedup
to be more economical to have many several
computers. Hence, LANs emerged in early 1970 as a
3. Hardware preference substitute to for large mainframe systems which many
organizations were using for their compulation.
4. Software preference
As LANs cover small area such as LAN covers one
5. Data access room, building or campus, the communication links
between nodes are having more speed and less error
Fig. C6.3 : Reason for migration rate.
The reliability and high speed can be achieved by using
1. Load balancing
high quality cables to connect the different computers
The processes (or sub processes) may be distributed
in network.
across the network on different machines to balance the
- Twisted pair cables or fiber optics cables are most
workload.
commonly used in local area network.
•fr 2. Computation speedup
- IEEE Standard 802.3 describes classic Ethernet in
The subprocesses of the single process can run
which a coaxial cable is used to connect many
concurrently on different machines then the total
computers. The cable is called the Ethernet. In the very
process turnaround time can be reduced.
first version of Ethernet, a vampire tap was used to
3. Hardware preference attach a computer to the cable. Connecting the machine
The particular process may need some specialized to cable was carried out by drilling a bole halfway
processor for execution. through the cable and screwing in a wire leading to the
computer.

faceted
v<k. i" frared ne WOrkS
‘ and
“* Blu., ’
AX,work have comm U m cation speed of “ 1

net l y Conbcc1cd
*orks and i to backbone. Regional
backbone bv r, > ” connected lo routers in
- -
efnetS rou|er
Network ro *re connected to regional
rS
rnodem hank ° Uters a1
Ps are connected to
d jng common. US6d lS S Customere l n thiR
every host ' way,
n nternct
contains several small and large often ° ' has at least one path, and
qoni
_ to every other host.
devlces which
< peripheral « shamble. and
fP 21’ to connect to other networks.
__

Ullde-Area Networks (WANs) ~ 4


NetWork Topology

n WAN type of network. Q' Explain different topology tor network with its
______ advantages and disadvantages.
■jjs cover a city, country, or the whole world.
* ' met ts collection of thousands of separate networks Machines in distributed system can be connected in
different ways called as topology for that network.
can be considered as one WAN. W A N was
30
i n late I 9 6 0 as a n academic research project to Each topology has its advantages and disadvantages.
Installation cost, communication cost and availability
effc i e n t communication among machines. The
criteria are used to compare different configurations.
objective behind this project was to share
Various topologies are shown i n Figs. 6.4,1, 6.4.2,
are and software i n convenient and economical
6.4.3 and 6.4.4. I n fully connected networks, each node
by a wide community of users. Arpanet is the
is connected to every other node in network. Hence,
number of links increases as square of number of sites.
I waN . machines are physically distributed over a Also installation cost is high. Hence, this topology is
’ ;L geographical area. Therefore communication impractical to consider building the large network.
L jre relatively slower compared to LAN and links A
B )
F „ also not reliable. Internet W A N offer capability for
I communication between different machines whtch are

D
’ computers. Hosts
Hns s are PCs
rc notebooks, handhelds, D

,-«re. mainframes, and other computers owned , (b)Partialiy Connected


(a ) Fully Connected Network Network
individuals or companies that want to conn
Fig. 6.4. i
I Internet. ,

■ Routers arc also computers w i t h m a n y incoming and


,ncol ng
outgoing lines. It accept "' on their
Myincoming lines, process i t a n d . end
than fully connected netw
way along one of many outgoing
311
P- " „ igher cotnmumcation cost,
networks are other o f some links, some o f the nodes
Urge national or worldwide route Because of me '
and ISPs (Intw** may become u neWorks . and sta r
operated by telephone companies t.
ice Providers) for
for their cumm
their customers. ted by _ Tlte . s tructumd ne conn ected
networks — -------- - '
ganized with backbones at the top f(j<]tcrs

operator. Backbone cental

Scanned by CamScanner
Distributed Si

'different while designing 1


networks. Tree-smictured network relatively have low Itto communication network |
installation and communication cost. However, due to
tbc failure of a single link, network may become 1 . Naming and name resolution
partitioned.
- In ring network, i f two links fails then and then 2. Routing strategies
network can become partitioned. So availability of ring
network is higher than tree- structured network. Burring 3. Packet strategies
network incur high communication cost as messages
needed to cross large number of links to reach to 4. Connection strategies
destination.
5. Contention
In star network, i f single link fails, then only single
node will be unreachable. In star network,
Fig. C6.4 : Issues while designing communication
communication cost is low but i f central node fails, network
then all nodes i n the system became disconnected.
-4 6.5.1 Naming and Name Resolution
F Q> what is naming? How names are resotved?-1|
B
Explain

D Processes running on different machines must be able


to specify each other. On each machine, process has
Fig. 6.4 J ; Tree-structured Network process identifier- The messages to be sent to the
processes may be addressed to the process identifier
As machines do not share memory, initially machines
B have no knowledge about processes on remote
machines.
To solve this problem, a pair <host name, identifier* is
D
used to identify the process on remote machine. Host
name is unique in network and is alphanumeric so that
Fig. 6.43: Star Network
it is easy to recognize it. Within any host, identifier is
either process identifier or other unique number used in
that host to identify the process.
Names are used to identify the hosts as they are easy to
remember. But machines (hosts) use an identifier for
simplicity and speedup. Hence. It is necessary to have
some mechanism to resolve host name into host*id.
1 wo solutions exist. First each host may have data file
Fig. 6.44 : Ring Network which contains host name and addresses of all hosts in
network. This was the approach used initially in
jy flhusJTopic : Communication Structure
internet. But this approach requires updating the data
6.5 Communication Structure files on all machines whenever host are added or
Following five different issues are important to address removed from network. Due to growth of internet, this
while designing the communication network. approach is not used for name resolution.
The second approach is domain-name system (DNS)
which specifies the naming structure of the hosts, and
also name-to-address resolution. Names starts with
most specific part, where each part is separated by dot
operator and ends with general part. For example,
aihu.ityale.edu indicates host name athu in information

Scanned by CamScanner
department at y ale
Uln
\eisity
address each component h£
, ss i0 system) which takes a
the rcSPOn
C ““ T" *«>le I?"*
name server for the host is . ° , ,hat6 1 -ante.
host-id. Consider Machine J’' ’
X
X t is , issued by process on which
lhe ' Q5w ta t y hj
'’8'Alway S theshorIt .5t lh
Following steps t
lnv
the name athu.it.yale.edu. °lved to ln
'his type of . •
Kernel on host X issue request to nanie 1 8 P>fll f ra
ntJ™' " ' "> “”■>« machine to
edu domain. It ask to name server about ° ff S s
* ®lon. fixed for the duration of one
. -The edu name server’s address U v °
know USed
i fice initial request was issued ‘ °> the f y different sessions to send ’
OmSOurct
jjK? w ------ -----~ "duress Of - U • destination.
tht Shorl like pcriod
Kernel then request to name server v„i , involved in°r **
1 5 trans tr or 85
* it-pcc rtf it x- j j Period h)ng as a remote-login
3
** ' Dynamic routing
dynamic routing, the path used to send a message

message is sent. As decision is taken dynamically,

465-2 Routing Strategies


— —
in general, a source machine sends a message to
different routing strategies.
destination machine on whatever link is the least used
at that particular time.
Machines in internet communicates by exchanging
packets. Each packet contains address of source and 65.3 Packet Strategies
I destination. R o u t e r c o n ta in s routing tables. It extracts
- There are two types of services : Connection-oriented
address in i n c o m i n g packet a n d look up the table to service and connectionless service. With connection-
oriented network service, user initially establishes a
to which router. connection, uses the connection, and then releases it.
Over this connection, the sender sends bits at one end.

!he destination host. The routing tables arc very


other end.
dynamic and are updated continuously as routers an
links go down and conic b a c k u p and as On the contrary, with connectionless service packets

conditions change.
routing methods r.““-
zz x. z- - —
used for routing purpose.

used

Receiver scM of pacle t to the sender. .


1 Fixed routing
cohfiraaho" of introteC s delay and

2. Virtual routing SC paCte


Xtl 2 * l0SS bU

3. Dynamic routing

Flg.C6 : R o u « W m£lh " d S

Scanned by CamScanner
£EToPgrating System (MU - Sem 4 - IT) 6-8
Distributed S?

slow the communication. File transfer is example of A permanent physical link is established between
reliable connection-oriented service. two processes that want to communicate with each
— Reliable connection -oriented service supports either other.
message sequences or byte streams. Any other process cannot use this established 1
communication session is completed. Hence in cixcujt
- In message sequences message boundaries are
preserved, fn byte streams, the connection is simply a switching, link is allocated to for the duration of
stream of bytes, wilh no message boundaries. communication session.

- For application such as digitized voice traffic, the Message switching


delays introduced by acknowledgements are In message passing, a temporary link is established f Or
unacceptable. Here noise on telephone line or a garbled the duration of one message transfer between iwo
word from time to time user can prefer with compare to communicating processes.
delay due to waiting for acknowledgements.
As per need, physical links are allocated dynamically
Unreliable connectionless service is called as datagram among correspondents. These links are allocated f Or
service and does not offer acknowledgement as a only short duration.
confirmation of delivery of packet to die sender,
Each message contains data with other system
— The acknowledged datagram service can be used to information like source, the destination, and Error
send one short message instead of establishing the Correction Codes (ECC). This information is used by
connection. The request-reply service can be used for
client server communication. Following are the six
destination.
types of network services.
3. Packet switching
*+ 6.5.4 Connection Strategies
- Each message from application is first broken up in
Q. What are the different connection strategies used small chunks called a packet.
in network communication?
- Each packet contains data with source and destination
Once connection is established between source and address as each packet may be sent to its destination
destinations, processes can set up communications separately. Each packet reaches to the destination
sessions to exchange information. There are number of machine through separate path.
ways by which two communicating processes may As packets takes different route to reach the
connect. Following are the three most common destination, all must be reassembled into messages as
schemes used. they arrive.

Three most common schemes 6.5.5 Contention


of connection strategies
It several computers want to transmit over a same
1 . Circuit switching communication link then collision occurs. This situation
occurs in ring and multi-access bus network. To avoid the
2. Message switching
collision as it degrades performance, several techniques
have been developed.
3. Packet switching
CSMA/CD
Fig- C6.6 : Schemes of connection strategies
In this technique, every computer sense the channel
1. Circuit switching
before transmitting to check that other machine
message is currently being transmitted or not. This is
,0 COnneCti
system °n estab,isl
»“nt in telephone
called carrier sense multiple access technique. If
Two communicating parties continue to talk over a collision is detected, then transmission gets terminated.
communication line till connection i s terminated On Ethernet, computer has to first see whether other
Dunng communication period, no one can use this line
computer is transmitting packet or not. Then it can send

Scanned by CamScanner
rating System (My .

Lfijst computer fini shes

t sebdP et.
Vision occurs, both the
PUle b
’’ issions. Here both CJ * te _ -
S bjbutad s sterns
ne,w
°iks. Dirf

j0 sends the packet. ln


a
J
d th
-n .
"Hid
Vision occurs, all colliding c * »«rnp, 'n ,U
(°S1) (ISO)
StJ
£ji into the interval 0 to - Pute
’s nuttC 1
* ' Fem *k 'bas * °F«n system
2T M
each nd
*“«•« colli sion T° - =M
is doubled 6
i ‘1 ’ ’“hieing the
char 311
Visions. This algorithm js . >ce of 1
'“, w

,“ 1
81 kno
rtponend backoff. «n I
f r
* binary 1“’'’ 5 H** u™
M Sisseton Ethernet f or
- also a maximum number ' mn m cable fen
n ,s
these : simple, . , / ** whether
ronnected to it, a multiple E 'omputers to be
<,n
entire campus- * used l0 wire I
Jl|
it .4 ' « a evice
A bridge is used to connect these
sau,i en,ow
which permits traffic » - mets C?—
,

” occurred1
XSr' Ct <>

Elhen “ver oh “ ' "channel
“* “wetton
another when the source and destin»« w l0 "’’“mission. PMtcat during
nat, nare
Side. ° m diff NetWork layer : All,k
ne k ,hh
"»o* layer offer, £ *“ " The
f is res
contains ports, to which computer SW
'“ Ch or routing the pack ““ebons P°"siblc
11
another switch can be attached. A Wnder>s °r ™ PS P L~. “"™ on network,
''“versa TlX«
toffeted in switch and sent out on the po rt wh A “
Tranenew 1 ™ ™“ in * “*0™“™.
destination machine stays. As each ____ "“‘“'“•J'r'-i' provides low-level access to the
each machine is It is responsible for tran
connected to separate port ;and collision is eliminated sfer of messages
between clients, includir ’
but requires bigger ltf'all£kc? to connect
switches __ many ing breaking the messages into
mets. sequencing of ’Z
control, and
computers. generating physical addresses
r ession layer-. The session layer is responsible for
Token Passing
establishing the sessions between users. It also
■ Token is a unique message that is circulated in
implements process-to-process communication
network. Any machine want to transmit holds the protocols. It provides services like dialog control,
token. Once transmission completes, it releases token synchronization, token management.
so that other computer can use i t . Token may loss so 6. Presentation layer: This layer deal with syntax and
system should be able to generate new token. semantics of information exchanged between machines.
As machines may have different data representations,
Syllabus Topic : Communication Protocols presentation layer resolve the differences in formats
among the various machines in the network, including
character conversions and half duplex-full duplex
.-Communication Protocols
modes (character echoing).

< Explain different communication protocols. _ _| AuoUcution tayen This layer inUuacb directly with
' It contains different protocols needed by users
compUt
otocol i s o f rules by which e j pyP TTP, Telnet, DNS, and SMTP, This
ttansfcr remolc
coWunicaies with each other. Many pf° tc tals with Gl' ’ ' logm P rotoco,s -
1 t
" *nt such as router-router protocols, and electronic mail.
p,o|
ocols, and others. Protocol stack lay ers

Scanned by CamScanner
- Most of the distributed systems use the Internet as a base. Hence, these systems use two important Internet protocols: IP (Internet Protocol) and TCP (Transmission Control Protocol). IP is a datagram protocol. In IP, a sender sends datagrams of up to 64 KB over the network, and no guarantees are given for their delivery. A datagram may be fragmented into smaller packets that travel independently, possibly along different routes. At the destination host, when all the packets arrive, they are assembled in the correct order as per their sequence numbers and delivered to the application.
- The IP protocol has two versions, v4 and v6. Version 4 (v4) is currently in use and v6 is up and coming. An IPv4 packet starts with a 40-byte header that contains 32-bit source and destination addresses along with other fields. These are called IP addresses, and routing is carried out using these addresses.
- IP does not offer reliable communication in the Internet. To offer reliable communication, TCP (Transmission Control Protocol) is placed on top of IP. TCP makes use of IP to offer connection-oriented streams. The remote process listens for incoming connections on a port number; a remote process is therefore specified by the IP address of its machine and a port number. The sender first establishes the connection and then sends bytes over that connection, which are guaranteed to come out at the other end undamaged and in the correct order. TCP gives this guarantee by using sequence numbers, checksums, and retransmissions of incorrectly received packets.

Syllabus Topic : Distributed File Systems

6.7 Distributed File Systems (DFS)

Q. Explain working of distributed file system (DFS).

- In a distributed system, files are available on several computers. Computers in a distributed system can share these physically dispersed files by using a distributed file system. A service offers a particular function to clients; it is a software entity running on one or more machines. A server runs the service software on a single machine. A client process invokes the service through a set of defined operations called the client interface.
- A file system offers file services to clients. The set of primitive file operations is: create a file, delete a file, read from a file, and write to a file. The client interface is formed with this set of operations. The file server controls the storage devices, such as disks, on which files are stored. These files are accessed from those devices as per the requests of clients.
- There can be different implementations of a DFS. Servers may run on dedicated machines; in other implementations, both client and server may run on the same machine. A DFS can be part of a distributed operating system, or it can be a software layer managing communication between the file system and the network operating system. A DFS should appear to its clients as a conventional centralized file system: the servers and storage devices which are on different machines in the network should be invisible to clients. The DFS should fulfil the request of a client by arranging the required files or data.
- As data transfer is involved in the operation of a DFS, its performance is measured by the amount of time required to service a client request. The storage space managed by a DFS is composed of different and remotely located smaller storage spaces.

Syllabus Topic : Naming and Transparency

6.8 Naming and Transparency

- Mapping between logical objects and physical objects is called naming. The user always knows the logical name of a file, but the system has to manipulate the data blocks, which are actually stored on disk sectors or tracks. The text name of a file gets mapped to a numerical identifier, which in turn is mapped to disk blocks. Due to such mapping, the user remains unaware of the location of the file on secondary storage.
- In the case of a DFS, the location of a file in the network also remains unknown to the user. This is the new dimension added to the abstraction compared with a conventional file system. In a conventional file system, the file remains on the disk of the same machine, whereas in a DFS the location of a file can be on the disk of any machine in the network. File replication is supported in a DFS, so that several replicas of a file may exist. For a given file name, the set of locations of its replicas is returned by the mapping.

6.8.1 Naming Structures

Name mapping should support location transparency and location independence.

1. Location transparency : The physical storage location of a file cannot be deduced from the file name.

2. Location independence : The name of a file does not need to be changed even when the file's physical storage location changes.

- Most current DFS systems provide a static, location-transparent mapping for user-level names; migration of files is not supported by these systems, and hence the notion of location independence is not supported either. This static approach is used in the network file system (NFS), which is supported by many UNIX systems. NFS provides a networking mechanism to attach remote directories to local directories, so that remote files appear to users as part of the local directory tree. A practical approximation to location independence is the automount feature, in which mounts are done on demand, based on a table of mount points and file-structure names.
- The two notions can be further differentiated. With location independence, files can be viewed as logical data containers that are not attached to a specific storage location. If only static location transparency is supported, a file name still denotes a specific, although hidden, set of physical disk blocks.
- With static location transparency, users can share remote data conveniently, simply by naming the files in a location-transparent manner, as if the files were local. In this case, however, sharing the storage space itself is cumbersome. Location independence promotes sharing of the storage space as well as of the data objects. Hence, it also allows balancing the disk utilization across the system.
- Location independence separates the naming hierarchy from the hierarchy of storage devices and from the inter-computer structure. Once the separation between file name and storage space is complete, clients can access files kept on a remote server. In fact, clients may even be diskless and depend on servers for all files, including the OS kernel. A diskless client cannot use DFS code to fetch the kernel, so a special boot protocol stored in ROM is invoked by the client; it enables networking and fetches either boot code or the kernel from a fixed location. As soon as the kernel is copied over the network and loaded, its DFS makes all the other OS files available.
- In current systems, clients use both local disks and remote file servers. Operating systems and networking software are stored locally, while file systems holding user data, and sometimes applications, are stored on remote file systems.

6.8.2 Naming Schemes

Q. Explain various naming schemes.

There are three approaches available for naming of files in a DFS.
- In the first approach, a combination of the file's host name and its local name is used. This ensures a unique system-wide name. The host:local-name combination, however, is neither location transparent nor location independent.
- In the second approach, remote directories are attached to local directories, giving the appearance of a coherent directory tree; only the previously mounted remote directories can be accessed transparently. This is the approach popularized by network file systems.
- The third approach is total integration of the component file systems. A single global name structure spans all the files in the system, so that the name structure is isomorphic to the structure of a conventional file system. In practice this is the most difficult and complex approach to realize, because some special files (such as device files and local binaries) make a single global name structure impractical. Also, as the system grows the structure becomes more unstructured, and due to non-availability of some computers, the directories stored on those computers also become unavailable.

6.8.3 Implementation Techniques

- As transparency is important, the implementation should include the mapping of a file name to its associated location. To keep this mapping manageable, sets of files are aggregated into component units, and the mapping is maintained per component unit instead of per single file. This aggregation is also helpful from an administration point of view.
- Replication, local caching, or both can be used for improving the availability of the mapping information. If location independence is supported, then keeping this mapping consistently updated is difficult, because locations change. The solution to this problem is to use low-level, location-independent file identifiers.
- File names, which are in text form, are mapped to lower-level file identifiers that specify to which component unit the file belongs. As the identifiers are location independent, their replication and caching can be carried out freely without being invalidated by migration of component units. This results in the need for a second level of naming, which maps component units to locations and needs a simple yet consistent update mechanism.

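A minimal sketch of this two-level mapping, using plain Python dictionaries; the component-unit names, file identifiers and site names are invented for illustration only and are not taken from any particular DFS.

# Level 1: text file name -> (component unit, low-level file identifier)
name_to_id = {
    "/users/report.txt": ("unit-A", 17),
    "/users/notes.txt":  ("unit-A", 42),
    "/src/kernel.c":     ("unit-B", 3),
}

# Level 2: component unit -> current set of sites holding it.
# Only this small table must be updated when a unit migrates or is replicated.
unit_locations = {
    "unit-A": {"site1", "site3"},
    "unit-B": {"site2"},
}

def resolve(path):
    """Map a text name to (file id, candidate sites) without exposing location."""
    unit, fid = name_to_id[path]
    return fid, unit_locations[unit]

# Migrating a whole component unit invalidates nothing in name_to_id.
unit_locations["unit-B"] = {"site4"}
print(resolve("/src/kernel.c"))   # -> (3, {'site4'})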
- Structured names are commonly used for these low-level identifiers. These names are bit strings with two parts: the first part identifies the component unit to which the file belongs, and the second part identifies a file within that unit. Implementations with more than two parts are also possible. This technique is used in the Andrew File System (AFS).

Syllabus Topic : Remote File Access

6.9 Remote File Access

- Suppose a client forwards a request to a server to access a remote file. The naming scheme locates the server, and the actual data transfer between client and server is achieved through the remote-service mechanism. In this mechanism, the request for accesses is forwarded to the server, which then performs the accesses and returns the result to the user or client. This is similar to disk accesses in a conventional file system.
- Caching can be used to improve the performance of the remote-service mechanism. In a conventional file system, caching is used to reduce disk I/O; the main goal behind caching in the remote-service mechanism is to reduce both network traffic and disk I/O. Following are the basic caching schemes used in a DFS.

6.9.1 Basic Caching Scheme

- If the data needed for an operation is not cached by the client, it sends a request to the server for that data. The client then performs its accesses on the cached data sent by the server. All future repeated accesses to this recently cached data can be carried out locally, which reduces additional network traffic.
- A Least Recently Used (LRU) algorithm can be used for replacing cached data. The master copy of a file is available on the server, and parts of it are scattered over many client machines. If the copy of the file at the client side is modified, then its master copy at the server should be updated accordingly; this is called the cache-consistency problem. DFS caching can be viewed as network virtual memory, which works similarly to demand-paged virtual memory, with the remote server acting as the backing store.
- In a DFS, the data cached at the client side can be disk blocks or entire files. Usually more data are cached than actually needed, so that most of the operations can be carried out locally.
- If the granularity of caching is large-size chunks, the hit ratio increases. But if a miss occurs, more data must be retrieved from the server, which increases network traffic; this in turn increases the possibility of consistency problems as well. The network transfer unit and the RPC protocol service unit should be taken into account while selecting the unit of caching. For large caches, large block sizes are beneficial.

6.9.2 Cache Location

- The cache location can be either main memory or disk. If the cache is kept in main memory, then modifications done on cached data will be lost due to a crash. If caches are kept on disk, they are reliable; there is no need to fetch the data during recovery, as the data resides on disk. Following are the advantages of main-memory caches.
o It permits diskless workstations.
o Data access takes less time from main memory compared to access from disk.
o Performance speed-up is achieved with the larger and inexpensive memories that current technology provides.
o To speed up I/O, server caches are kept in main memory. If both server caches and user caches are kept in main memory, then a single caching mechanism can be used for both.
- Many remote-access implementations take a hybrid approach, considering both caching and remote service. In NFS, for example, the implementation is based on remote service but is improved with client- and server-side memory caching for performance.

6.9.3 Cache-Update Policy

- A system's performance and reliability depend on the policy used to write updated data back to the master copy of the file, which is available on the server. The following policies are used.

Write-through policy

- This policy writes data through to the server copy (disk) the moment they are placed in any cache. This policy is more reliable, as little information is lost when a client system crashes. The write performance is poor, however, as each write access has to wait until the information is sent to the server.
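A small sketch contrasting the write-through policy above with the delayed-write (write-back) variant described next. This is only an illustration under invented names; the server object and block keys are placeholders, not an interface of any real DFS.

# Illustrative client-side block cache; 'server' is a stand-in object
# that is assumed to provide a write_block(key, data) method.
class ClientCache:
    def __init__(self, server, write_through=True):
        self.server = server
        self.write_through = write_through
        self.cache = {}          # key -> data
        self.dirty = set()       # keys modified but not yet sent (delayed write)

    def write(self, key, data):
        self.cache[key] = data
        if self.write_through:
            # Write-through: send to the server copy immediately (reliable, slow writes).
            self.server.write_block(key, data)
        else:
            # Delayed write: only mark dirty; this data is lost if the client crashes now.
            self.dirty.add(key)

    def flush(self):
        # Delayed-write variants differ mainly in WHEN flush() is called:
        # on eviction, at regular scan intervals, or on close (write-on-close, as in AFS).
        for key in list(self.dirty):
            self.server.write_block(key, self.cache[key])
            self.dirty.discard(key)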
System ( M U . Setti

retems [ S P iS d e l a
> - writin. th
master copy on server. M "«
ien hit
iata is
advan,a
twork I L 8 c <>f this J dooe
on n. ten

ty of f complete in less ti me is that


runit h? "'"’•hers hnwdtattly.
1
into
, Va nt WDr Q

iches I fatten. The limitation thi ; "Mate •* 2. . ’dity checj ' Spends on

“-™ data are lost ‘Client


hcnce
less
a,e variations of delayed
C. If Whe mus<
is w fluSh a block “*°™>«lencr "'«r n
one « soon as “ Mlc >- One
t in I the chenfs cache. This altern “ “» be
lc
lata ,'jood performance, h u t some blocks c.”"'"’" « - ' hen "xxles cache a Bte
A notif yiflnc
the ication , ™sista>cy.
Eire I dial's cache a long time before they
11 back 0P ed
[ ng server. A negotiation between this eh*""* "' ' « d**.*™ “ «ben m
mode
a ufile
st
is
ified for every open **
,be
K-through policy is to scan the c ach
Whenever server rt
rcEular
Rivals and to flush blocks that have iJ “ s le bas bcen
ry iniultaneously in rnnft opened
mo<1,fi b fl Cllng modes
siare the most recent scan. So far one " ? disabling cach ' - « can take action
c
d 0 delayed write i s to write data back to the " “*g is d iMbled g lbe " m
p
T i<:ular n,e When

m le
operation becomes active “ <*
Is
policy is
6
| ysed in Andrew File System (AFS), -9 5 A Comp
Ing and Remote
L Consistency Service

I Client machine always use cached data for accesses dlflefenCe betwe8n rem
racing? ote service and
fcjj is consistent with master copy at server. If client
Sr. No. Caching
amines that its cached copy i s o u t of date, then it should Remote Service
I. It allows to serve Remote access is
it the up-to-date copy of data. Following two
remote accesses handled across the
oacbes are used.
locally so that they network, so slower
Two approaches of can be faster as compared to caching.
consistency local accesses

2. It reduces the It increases network


1. Client-initiated approach network traffic, traffic, server load
server load and and degrades
2. Server-Initiated approach improves performance.
scalability as server
| Fig. C6.7 : Approaches of consistency is contacted
occasionally.
f Client-Initiated approach
In case of remote
3. In case of caching,
I client starts a validity check in which i t contacts service, transmission
for transmission of
I server. In this check, c l i e n t checks consistency of series of responses
big chunks of data,
to specific requests
1* hed data with the data i n master copy. overall network
involve higher
I resulting consistency semantics is overload required
can network overhead as
is at lower side
I / cy of validity checking. Validity c compared to caching.
e s s {0
compared »
I * carried out prior to every access or on
remotew ------

Sr. No. | Caching | Remote Service
4. | Caching is better in case of infrequent writes; but if writes are frequent, the mechanisms used to overcome the consistency problem incur large overhead in terms of performance, network traffic and server load. | In remote service, there is always communication between client and server to keep the master copy consistent with the client's view of the data.
5. | Caching is the better option for machines with local disks or large main memories. | If machines are diskless and have small main memories, then remote service should be used.
6. | In case of caching, the lower-level inter-machine interface is different from the upper-level user interface. | The remote-service concept is just an extension of the local file-system interface across the network.

Syllabus Topic : Stateful Versus Stateless Service

6.10 Stateful Versus Stateless Service

- A client accesses remote files from a server. If the server keeps track of each file being accessed by each client, then the service is called stateful. If the server simply provides the requested blocks of data to the client and does not keep track of how the client makes use of them, then the service is called stateless.

Stateful File Service

- First, the client must carry out an open( ) operation on a file prior to accessing it. The server fetches the access information about the file from its disk and stores it in main memory.
- The server then gives a unique connection identifier to the client and opens the file for the client. This identifier is used by the client to access the file throughout the session.
- On closing of the file, or through a garbage-collection mechanism, the server should free the main-memory space used for a client when it is no longer active. The information maintained by the server regarding the client is also used in fault tolerance; AFS uses the stateful approach.

Stateless File Service

- In this case, the server does not keep any information in main memory about opened files. Each request identifies the file completely, so there is no need to open and close the file with operations open( ) and close( ).
- Each file operation in this case is not part of a session but stands on its own. Closing the file at the end is also just a remote operation. NFS is an example of the stateless file service approach.
- Stateful service gives better performance. As the connection identifier is used to access information maintained in main memory, disk accesses are reduced. Moreover, as the server knows that the file is open for sequential access, reading ahead the next blocks improves performance. In the stateful case, if the server crashes, then recovery of the volatile state information is complex; a dialog with clients is used by the recovery protocol. The server also needs to know about a crash of a client in order to free the memory space allocated to it, and all operations underway during the crash should be aborted.
- In case of stateless service, the effects of a server crash and recovery are almost invisible. The client simply keeps retransmitting its requests if the server crashes and gives no response.

Syllabus Topic : File Replication

6.11 File Replication

- Replicating a file on many machines improves availability. If the nearest replica is used, the service time also reduces. The replicas of the same file should be kept on failure-independent machines, so that the availability of one replica is not affected by the availability of the remaining replicas. Replication of files should be hidden from users.
- It is the responsibility of the naming scheme to map a replicated file name to a particular replica. Higher levels should remain unaware of the existence of replicas.
- At the lower level, different lower-level names are used for the different replicas. Replication control, such as determining the degree of replication and the placement of replicas, should be provided to the higher levels. As one replica is updated, then from the user's point of view the changes should be reflected in the other copies as well.
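A minimal sketch of the read-any / write-all idea behind keeping replicas consistent, with an invented replica table and site names; in a real DFS this propagation is hidden below the naming layer rather than done by user code.

# Invented replica table: file name -> copies of the file on different sites.
replicas = {
    "report.txt": {"site1": "v1", "site2": "v1", "site3": "v1"},
}

def read(name):
    # Read from any (for example the nearest) replica; here simply the first one.
    site, data = next(iter(replicas[name].items()))
    return data

def write(name, data):
    # From the user's point of view an update must show up in every copy,
    # so the change is propagated to all replicas of the file.
    for site in replicas[name]:
        replicas[name][site] = data

write("report.txt", "v2")
print(read("report.txt"))   # -> 'v2' regardless of which replica is chosen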
Syllabus Topic : Distributed Synchronization

6.12 Distributed Synchronization

- The centralized synchronization schemes can be extended to the distributed environment with suitable modifications.

6.12.1 Mutual Exclusion

Q. Explain different algorithms for mutual exclusion in distributed system.

- There are different algorithms to achieve mutual exclusion in a distributed environment. Assume there are n processes in the system, numbered from 1 to n, and each process runs on its own processor.

6.12.1(A) Centralized Approach

- In this algorithm, one of the processes in the system is chosen as coordinator. Each process that wants to enter its critical section sends a request message to the coordinator. When the process receives a reply message from the coordinator, it proceeds to enter the critical section. Once the process finishes executing in the critical section, it sends a release message to the coordinator and continues its execution.
- After receiving a request, the coordinator first checks whether the critical section is empty or not. If it is empty, the coordinator sends a reply message to the requesting process and the process enters the critical section. If another process is already in the critical section, then the request of the requesting process is queued.
- When the coordinator receives a release message from the process exiting the critical section, one of the processes from the queue is chosen to enter the critical section based on some scheduling algorithm. If a FIFO strategy is used, no starvation can occur. Three messages, request, reply and release, are used in this algorithm.

6.12.1(B) Fully Distributed Approach

Q. Explain distributed algorithm for mutual exclusion in distributed system.

- In this algorithm, all processes are involved in the decision to allow a requesting process to enter the critical section. A process should send its request to all other processes in the group. Process Pi, which wants to enter the critical section, sends request(Pi, TS) to all processes, including itself; in this request message, TS is a timestamp indicating the time at which the request is sent.
- On receiving this request message, a process may send its reply message immediately or defer it. If a process is already in its critical section, then the request is queued; when the process comes out of the critical section, it sends the deferred replies. If the requesting process gets a reply from all processes in the system, it can enter the critical section.
- Whether the reply is sent immediately or is deferred depends on the following factors:
1. If the process receiving the request message is in its critical section, then it defers the reply to the requesting process.
2. If the process receiving the request message does not want to enter its critical section, then it immediately sends a reply to the requesting process.
3. If the process receiving the request message also wants to enter its critical section, then it compares its own timestamp with the timestamp of the requesting process. If its timestamp is greater than the timestamp of the incoming request, it immediately sends the reply message; otherwise the reply is postponed.
- In this algorithm, mutual exclusion is guaranteed and there is no deadlock. There is no starvation either, as the timestamps ensure an FCFS order of service. Exactly 2(n-1) messages are required per critical-section entry, which is less than if the processes acted independently and concurrently.
- Following are the limitations of this algorithm:
1. Processes should know the names of all other processes in the group. There is a problem when a new process joins the group: all processes must receive the name of the new process, and the new process must receive the names of all other processes in the group. If request and reply messages are in transit while a new process joins the group, a problem will definitely arise.
2. If any one process fails, then the requesting process will not receive a reply message from it. Monitoring of the state of all processes is therefore required, and a notification regarding a failed process needs to be sent to all other processes in the group, so that processes do not wait indefinitely for a reply from the failed process.
3. Processes that have not entered their critical section must pause frequently to assure other processes that they still intend to enter the critical section.
- Hence, this algorithm is best suited to small and stable sets of cooperating processes.

6.12.1(C) Token-Passing Approach

- In this algorithm, a token, which is a special type of message, is circulated in the system. A process acquiring the token can enter its critical section, which ensures that a single process is in the critical section at a time. In this algorithm, the processes in the system are assumed to be logically organized in a ring structure. This logical arrangement can be based on some parameter, for example the integer value of the IP address of the machine on which a process is running.
- The token circulates around the ring, and a process which wants to enter its critical section holds the token and enters the critical section. After the process exits its critical section, the token is circulated again in the ring: the process exiting the critical section passes the token to its neighbour. In this algorithm, the unidirectional link guarantees freedom from starvation.
- Only one message per entry is required if all the processes want to enter the critical section; if no process wants to enter the critical section, then an unbounded number of messages may be required. Limitations of this algorithm are :
1. The token may be lost. In this case an election needs to be called to generate a new token.
2. Failure of any process requires establishing a new ring.

Syllabus Topic : Concurrency Control

6.13 Concurrency Control

- This section explains concurrency control schemes for the distributed environment. The management of the transactions or subtransactions accessing the database at the local site is carried out by the transaction manager of the distributed database system.
- A transaction can be local, or it can be part of a transaction running at other sites. Concurrency control is the responsibility of the transaction manager; it also maintains a log for future recovery purposes.

6.13.1 Locking Protocols

- The two-phase locking protocol also works for the distributed environment. In this case, the implementation of the lock manager differs compared with a single-system environment.

6.13.1(A) Nonreplicated Scheme

- This scheme is used when data is not replicated in the system. In this scheme, there is a local lock manager on each site to manage the lock and unlock requests for the data items stored on that site. In order to lock data item X on site Si, a process simply sends a message to the lock manager on that site.
- If X is already locked in an incompatible mode, then the request is delayed until it can be granted. Once it has been decided that the lock request can be approved, the lock manager sends a message back to the initiator indicating that the lock request has been granted.
- The implementation of this scheme is easy. It needs only two message transfers for handling a lock request and one message transfer for handling an unlock request. But deadlock handling is more complex.

6.13.1(B) Single-Coordinator Approach

- This scheme is used when data is replicated. In this scheme, a single lock manager resides on a particular selected site Si. For locking and unlocking of data items, all requests are sent to this site. Any transaction that wants to lock a data item sends the lock request to Si, and the lock manager decides whether to grant the lock without delay or not.
- If the lock request is granted immediately, the lock manager instantly sends a message to the site from where the request was sent.
- If the request is not granted, then it is delayed till it can be granted, and only then is the message sent to the requesting site.
- The transaction can read the data item from any one of the sites at which a replica of the data item resides.
- In case of a write operation, all the sites having a replica should take part in the writing.

Advantages

- Implementation is simple, as only two messages are required to handle a lock request and one message to handle an unlock request. Algorithms used to handle
deadlock on a single system can also be used here, since all the requests are processed at the single site Si.

Disadvantages

- The site Si may become a bottleneck, as the single lock manager resides on it and all lock and unlock requests must be handled there.
- If the site Si fails, the concurrency controller is lost. Either processing must stop, or a recovery scheme must be used so that a backup site can take over lock management from Si.
- A solution to avoid the bottleneck is the multiple-coordinator approach, in which the lock-manager function is distributed over several sites. Each lock manager handles the lock and unlock requests for a subset of the data items. This reduces the load on any single site, but deadlock handling becomes more complex.

6.13.1(C) Majority Protocol

- This approach is a modification of the nonreplicated scheme. In this scheme, a lock manager is present at each site. The locks for all data items, or replicas of data items, stored at a site are managed by the lock manager running on that site.
- If a data item X is replicated on n different sites and a transaction wants to lock this data item, then it must send lock requests to more than one-half of the n sites on which X is stored. Each lock manager decides whether to grant the lock or to delay the response. The transaction does not operate on X until it has successfully obtained locks on a majority of the replicas of the data item.

Advantages

- Avoids the drawback of the central control approach.

Disadvantages

- Implementation is complex. It requires 2(n/2 + 1) messages for handling lock requests and (n/2 + 1) messages for handling unlock requests. As many sites are involved, more complex techniques are required for handling deadlocks.

6.13.1(D) Biased Protocol

- The working of the biased protocol is close to the majority protocol. In the biased protocol, requests for shared locks get more favorable treatment than requests for exclusive locks.
- Each site contains a lock manager that manages the locks for all the data items stored at that site. Shared and exclusive locks are handled in a different manner.
- Shared locks : When a transaction needs a shared lock on data item X, it simply requests a lock on X from the lock manager at one site that contains a replica of X.
- Exclusive locks : When a transaction needs an exclusive lock on data item X, it requests a lock on X from the lock manager at every site that contains a replica of X. If the request cannot be granted immediately, the lock request is delayed, and the response to that request is returned only when the lock is granted.

Advantages

- This scheme imposes less overhead on read operations compared to the majority protocol. This saving is significant when there are more reads than writes.

Disadvantages

- Extra overhead is incurred on write operations in comparison to the majority protocol. As many sites are involved, deadlock handling complexities exist here as well.

6.13.1(E) Primary Copy

- In this approach, one of the replicas is chosen as the primary copy, which is kept on exactly one site. For data item X, that site is called the primary site of X. When a transaction needs to lock data item X, it should request the lock at the primary site of X. If the request is not granted immediately, the response is delayed until it can be granted.
- The concurrency control for replicated and nonreplicated data is handled in the same way, and the implementation is straightforward. However, if the primary site of X fails, X becomes inaccessible, even though other sites containing a replica of X may be accessible.

6.13.2 Timestamping

- A unique timestamp is given to each transaction, based on which the serialization order is decided.

6.13.2(A) Generation of Unique Timestamps

- Centralized and distributed methods are used to generate the unique timestamps. In the centralized approach, a single site distributes the timestamps, which are generated using a logical counter or the site's own clock.
- In the distributed method, every site generates a local timestamp which is unique at that site, using either a logical counter or the local clock. The global unique timestamp is generated by concatenating the local unique timestamp with the site identifier. While concatenating, the site identifier is kept in the least significant position. This is done to make sure that the global timestamps
generated at one site are not always greater than those generated at another site.
- A problem can still occur if one site generates local timestamps at a rate faster than the other sites. To guarantee that timestamps are generated in a fair manner across the system, a logical clock is maintained at each site and is advanced whenever a request or transaction with a larger timestamp visits that site. This ensures fair generation of timestamps.
- In the basic timestamp approach, there can be cascading rollbacks if no mechanism is applied to stop a transaction from reading a data item value that is not yet committed. The basic timestamp scheme also suffers from the unwanted property that conflicts between transactions are resolved through rollbacks rather than through waits. To overcome this, read and write operations can be buffered (delayed) until a time when we are guaranteed that these operations can happen without causing aborts.

Syllabus Topic : Deadlock Handling

6.14 Deadlock Handling

- Deadlock prevention, deadlock avoidance and deadlock detection algorithms for a single system can be extended to a distributed system.

6.14.1 Deadlock Prevention and Avoidance

Q. Explain deadlock prevention and avoidance in distributed system.

- In a distributed system, the resource-ordering deadlock-prevention technique can be used by simply defining a global ordering among the system resources. A unique number is assigned to each resource in the system, and a process may request the resource with unique number i only if it is not holding a resource with a unique number greater than i.
- The banker's algorithm can also be used in a distributed system, provided one of the processes is selected as the process which keeps all the information required to carry out the banker's algorithm. These schemes have the following drawbacks:
1. The global resource-ordering scheme imposes an unnecessary ordering overhead on resource acquisition.
2. With the banker's algorithm, every allocation involves messages to and from the selected banker process. The number of such messages is large; hence, the banker may become a bottleneck. For these reasons, neither scheme appears to be of practical use in a distributed system.
- A deadlock-prevention scheme based on unique priorities together with resource preemption can handle any deadlock situation that may happen in a distributed system. Consider only the case of a single instance of each resource type. A unique priority is given to each process in order to control the preemption: a process Pi is allowed to wait for Pj only if Pi has a higher priority than Pj; otherwise Pi is rolled back. This method prevents deadlocks since, for every edge from Pi to Pj in the wait-for graph, Pi has a higher priority than Pj; thus, a cycle cannot exist. The scheme can lead to starvation, however, as the lower-priority processes will always be the ones rolled back. Following are two deadlock-prevention schemes based on timestamps.

The wait-die scheme

- This scheme is based on a non-preemptive approach. When process Pi requests a resource presently held by Pj, Pi is permitted to wait only if it has a smaller timestamp than Pj (that is, Pi is older than Pj). Otherwise, Pi is rolled back (dies).

The wound-wait scheme

- This scheme is based on a preemptive approach and is a complement to the wait-die scheme. When process Pi requests a resource presently held by Pj, Pi is permitted to wait only if it has a larger timestamp than Pj (that is, Pi is younger than Pj). Otherwise, Pj is rolled back (Pj is wounded by Pi).
- Provided a rolled-back process does not get a new timestamp, both of the above schemes avoid starvation: since timestamps only increase, a rolled-back process will eventually have the smallest timestamp and will no longer be rolled back.
- In the wait-die approach, an older process has to wait for a younger one to free its resource. Therefore, the older the process gets, the more it tends to wait. On the other hand, in the wound-wait approach, an older process never waits for a younger process. In both schemes, needless rollbacks may take place.
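The two timestamp rules above can be summarised in a few lines of code. The sketch below is only an illustration of the decision rules just described; the process timestamps used in the example calls are invented.

# Decision rules when process pi requests a resource held by pj.
# A smaller timestamp means an older process (timestamps never change on rollback).

def wait_die(ts_pi, ts_pj):
    """Non-preemptive: an older requester waits, a younger one dies."""
    return "wait" if ts_pi < ts_pj else "rollback Pi (die)"

def wound_wait(ts_pi, ts_pj):
    """Preemptive: an older requester wounds (preempts) the holder,
    a younger requester waits."""
    return "rollback Pj (wounded)" if ts_pi < ts_pj else "wait"

# Pi older (timestamp 3) requesting from Pj younger (timestamp 8):
print(wait_die(3, 8))     # -> wait                 (older waits for younger)
print(wound_wait(3, 8))   # -> rollback Pj (wounded) (older never waits)

# Pi younger (timestamp 8) requesting from Pj older (timestamp 3):
print(wait_die(8, 3))     # -> rollback Pi (die)
print(wound_wait(8, 3))   # -> wait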
6.14.2 Deadlock Detection

- If deadlock prevention is not used, the resource-allocation state must be examined for deadlocks. A wait-for graph is used for this purpose; a cycle in the wait-for graph denotes a deadlock. In a distributed system, each site maintains its own local wait-for graph. The nodes of this graph are the processes, local or remote, that are currently holding or requesting any of the resources local to that site.

Fig. 6.14.1 : Wait-for graph at site S1

Fig. 6.14.2 : Wait-for graph at site S2

- If process Pi is running on site S1 and it requests a resource held by Pj on site S2, then a request message is sent to site S2 and the edge Pi -> Pj is added to the wait-for graph of site S2.
- If any local wait-for graph contains a cycle, then deadlock has occurred. If there are no cycles in the local wait-for graphs, it does not mean that deadlock has not occurred: there can still be a deadlock. The union of all the local wait-for graphs is taken at some site to obtain the global wait-for graph. If this global wait-for graph contains a cycle, then the system is in a deadlocked state.

Fig. 6.14.3 : Global wait-for graph after union of wait-for graphs at site S1 and S2

6.14.2(B) Centralized Approach

Q. Explain centralized algorithm for deadlock detection in distributed system.

- In this approach, there is a deadlock-detection coordinator that maintains a global wait-for graph, which is the union of the local wait-for graphs from the different sites. Two types of graph can be distinguished. One is the real graph, which illustrates the real but unknown state of the system at any instant; the other is the constructed graph, which is the approximation generated by the coordinator during the execution of its algorithm. The result reported from the constructed graph should be correct: if a deadlock is reported, the system is certainly in a deadlocked state, and if a deadlock exists, it should eventually be reported.
- The updating of the global wait-for graph may be carried out:
1. Whenever a new edge is inserted in or removed from one of the local wait-for graphs.
2. Periodically, after a number of changes have occurred in a local wait-for graph.
3. On invocation of the deadlock-detection algorithm.
- On invocation of the deadlock-detection algorithm, if a cycle exists in the global wait-for graph, then a process responsible is chosen as the victim and rolled back, and the local sites are informed by the coordinator about the victim. The local sites also roll back the victim process. On addition or deletion of an edge in a local wait-for graph, the local site should immediately convey the information to the coordinator, and the coordinator then applies these modifications to the global wait-for graph.
- When the local sites send their messages to the coordinator, false cycles may appear in the constructed global wait-for graph. This happens due to the delays incurred by the messages travelling from the different local sites to the coordinator. Hence, although there is no deadlock, recovery may be initiated. To overcome this problem, the messages should carry ordering information so that the coordinator can build a consistent view before declaring a deadlock.

6.14.2(C) Fully Distributed Approach

- In this approach, all the sites are involved in deadlock detection. Each site builds a wait-for graph indicating a part of the total graph, depending on the dynamic behavior of the system. The thought is that, if a
deadlock exists, a cycle will appear in at least one of the partial graphs.
- In this approach, each site maintains its own local wait-for graph, with one additional node Pex. An edge Pi -> Pex in the graph indicates that Pi is waiting for a data item on another site that is being held by some process there. In the same way, an edge Pex -> Pj in the graph indicates that a process at another site is waiting to acquire a resource presently being held by Pj at this local site.
- These modified wait-for graphs are then used for detecting deadlocks. If a local wait-for graph contains a cycle which does not contain Pex, then a deadlock involving only local processes of that site has occurred. This deadlock is handled locally at that site.
- If the local wait-for graph has a cycle which contains Pex, then there is a possibility of a distributed deadlock in which processes from different sites are involved. In this case, the distributed deadlock-detection algorithm is invoked by the site whose wait-for graph contains Pex.

6.15 Exam Pack (Review Questions)

Syllabus Topic : Distributed Operating System

Q. Define distributed system. (Refer section 6.1)
Q. What are its characteristics ? Explain. (Refer section 6.1.1)
Q. What are objectives behind building the distributed system ? (Refer section 6.1.2)
Q. Explain different types of distributed operating systems. (Refer section 6.2)

Syllabus Topic : Network Based OS

Q. What are the reasons behind process migration ? (Refer section 6.2.2(C))

Syllabus Topic : Network Structure

Q. Explain LAN type of network. (Refer section 6.3.1)
Q. Explain WAN type of network. (Refer section 6.3.2)

Syllabus Topic : Network Topology

Q. Explain different topology for network with its advantages and disadvantages. (Refer section 6.4)

Syllabus Topic : Communication Structure

Q. What is naming ? How names are resolved ?
Q. Explain different routing strategies. (Refer section 6.5.2)
Q. Explain different connection strategies used in a network.

Syllabus Topic : Communication Protocols

Q. Explain different communication protocols. (Refer section 6.6)

Syllabus Topic : Distributed File Systems

Q. Explain working of distributed file system (DFS). (Refer section 6.7)

Syllabus Topic : Naming and Transparency

Q. Explain various naming schemes. (Refer section 6.8.2)

Syllabus Topic : Remote File Access

Q. What is difference between remote service and caching ? (Refer section 6.9.5)

Syllabus Topic : Distributed Synchronization

Q. Explain centralized algorithm for mutual exclusion in distributed system. (Refer section 6.12.1(A))
Q. Explain distributed algorithm for mutual exclusion in distributed system. (Refer section 6.12.1(B))

Syllabus Topic : Concurrency Control

Q. Describe majority protocol in distributed system. (Refer section 6.13.1(C))

Syllabus Topic : Deadlock Handling

Q. Explain deadlock prevention and avoidance in distributed system. (Refer section 6.14.1)
Q. Explain centralized algorithm for deadlock detection in distributed system. (Refer section 6.14.2(B))
Q. Explain distributed algorithm for deadlock detection in distributed system. (Refer section 6.14.2(C))
