OS Class Notes

The document provides an overview of operating systems (OS), defining them as intermediaries between users and computer hardware, and detailing their structure, components, and functions. It discusses the goals of OS design, including user convenience, resource allocation, and error detection, as well as the importance of memory hierarchy and I/O organization. Additionally, it covers the types of system calls and interrupts that facilitate communication between programs and the OS.

Jan-24

Introduction


What is an operating system?


• A program that acts as an intermediary between a
user of a computer and the computer hardware
• An operating system (OS) is a collection of programs
that achieve effective utilization of a computer system.
• An OS has several kinds of users
• The OS meets diverse requirements of different kinds of
users
• Each user has a different view of what an OS is, and what it
does. Each of these views is called an abstract view

A designer’s abstract view of an OS
• The abstract view consists of three components

• The kernel programs
• interact with the computer’s hardware and implement the
intended operation
• The non-kernel programs
• implement creation of programs and use of system
resources by them. These programs use kernel programs to
control operation of the computer
• The user interface
• interprets the commands of a user and activates non-
kernel programs to implement them

Computer System Structure


 Computer system can be divided into four components
 Hardware – provides basic computing resources
 CPU, memory, I/O devices, file storage space
 Operating system – controls and coordinates use of hardware among
various applications and users
 Application programs – define the ways in which the system resources
are used to solve the computing problems of the users
 Word processors, compilers, web browsers, database systems,
video games
 Users
 People, machines (e.g., embedded), other computers

Four Components of a Computer System



Operating System Definition


 OS is a resource allocator
 Manages all resources (OS as a government )
 Decides between conflicting requests for efficient and fair
resource use
 OS is a control program
 Controls execution of programs to prevent errors and
improper use of the computer
For example, it can stop a program stuck in an infinite loop.

 Resource allocation and control is especially important when
several users are connected to the same mainframe or
microcomputer
 Multiple 'users' in one computer is also an example

Operating System Design & Goals

 Operating system goals (Objectives):
 Execute user programs and make solving user problems easier
 Make the computer system convenient to use
 Use the computer hardware in an efficient manner
 Ability to evolve: An OS should be constructed in such a way as to
permit the effective development, testing, and introduction of new
system functions without interfering with service.
 Each OS has different goals and design:

 Mainframe – maximize HW utilization/efficiency
 PC – maximum support to user applications
 Handheld – convenient interface for running applications, performance per amount
of battery life

Goals of an OS
• The two primary goals are efficient use of the computer (performance,
resource utilization) and user convenience (ease of use)
• These two goals sometimes conflict
• Prompt service can be provided through exclusive use of a
computer; however, efficient use requires sharing of a
computer’s resources among many users
• An OS designer decides which of the two goals is more
important under what conditions
• That is why we have so many operating systems!

User convenience
• User convenience has several facets
• Fulfillment of a necessity
• Use of programs and files
• Good service
• Speedy response
• Ease of Use
• User friendliness
• New programming model
• e.g., Concurrent programming
• Web-oriented features
• e.g., Web-enabled servers
• Evolution
• Addition of new features, use of new computers

Computer System Organization

 Computer-system operation
 One or more CPUs and device controllers connect through a common
bus providing access to shared memory
 Concurrent execution of CPUs and devices competing for
memory cycles (through the memory controller)
 Each device controller is in charge of a particular device type
(thus competing for memory cycles)
 Each device controller has a local buffer
 CPU moves data from/to main memory to/from local buffers
 I/O is from the device to the local buffer of the controller
 The device controller informs the CPU that it has finished its
operation by causing an interrupt


Operating System Services


• Operating systems provide an environment for execution of programs
and services to programs and users
• One set of operating-system services provides functions that are helpful
to the user:
• User interface - Almost all operating systems have a user interface
(UI)
• Varies between Command-Line (CLI) and Graphical User Interface
(GUI)
• Program execution - The system must be able to load a program
into memory and to run that program, and end execution, either
normally or abnormally (indicating error)
• I/O operations - A running program may require I/O, which may
involve a file or an I/O device

Operating System Services (Cont.)


• File-system manipulation - The file system is of particular interest.
Programs need to read and write files and directories, create and delete
them, search them, list file information, and manage permissions.
• Communications – Processes may exchange information, on the same
computer or between computers over a network
• Communications may be via shared memory or through message
passing (packets moved by the OS)
• Error detection – OS needs to be constantly aware of possible errors
• May occur in the CPU and memory hardware, in I/O devices, in user
program
• For each type of error, OS should take the appropriate action to
ensure correct and consistent computing
• Debugging facilities can greatly enhance the user’s and programmer’s
abilities to efficiently use the system

Operating System Services (Cont.)


• Another set of OS functions exists for ensuring the efficient operation of the system
itself via resource sharing
• Resource allocation - When multiple users or multiple jobs are running
concurrently, resources must be allocated to each of them
• Many types of resources - CPU cycles, main memory, file storage, I/O
devices.
• Accounting - To keep track of which users use how much and what kinds of
computer resources
• Protection and security - The owners of information stored in a multiuser or
networked computer system may want to control use of that information,
concurrent processes should not interfere with each other
• Protection involves ensuring that all access to system resources is controlled
• Security of the system from outsiders requires user authentication, extends
to defending external I/O devices from invalid access attempts

A View of Operating System Services



OS and the Computer System

• Fundamental features of computer systems that are
important to an OS are:
• Privileged mode of CPU
• Memory hierarchy
• Interrupt structure
• I/O organization
• How does an OS use these features to control operation of
the computer?
• How does a program interact with an OS?

Memory utilization during operation of an OS

• Non-kernel programs are loaded in the transient area when needed


• The kernel is the core of the OS; it is always memory resident.
• Rest of the memory is shared between user programs.

Privileged mode of CPU

• The CPU can operate in two modes
• Privileged mode
• Certain sensitive instructions can be executed only when
the CPU is in this mode
• For example, initiation of an I/O operation, setting protection
information for a program
• These instructions are called privileged instructions
• User mode
• Privileged instructions cannot be executed when the CPU is
in this mode

Dual Mode Operation

• Dual-mode operation is how the OS protects the CPU from misuse
by user programs

Memory hierarchy
•The memory hierarchy is a cost-effective method
of obtaining a large and fast memory
• It is an arrangement of several memories with
different access speeds and sizes
• The CPU accesses only the fastest memory; i.e., the
cache
• If a required byte is not present in the memory being
accessed, it is loaded there from a slower memory

Memory hierarchy

• Cache memory is the fastest and disk the slowest in the hierarchy
• The CPU accesses only the cache memory
• If required data or instruction is not present in the cache, it is loaded there
from memory (if not present in memory, it is first loaded from disk)

Memory hierarchy
• Cache memory
• A cache block or cache line is loaded from memory when some
byte in it is referenced
• A ‘write-through’ arrangement is typically used to update
memory
• The cache hit ratio (h) indicates what percentage of
accessed bytes were already present in cache
• The cache hit ratio has high values because of locality

• Effective memory access time = h x access time of cache memory +
(1 – h) x (time to load a cache block + access time of cache memory)

Memory hierarchy
• Main memory
• Memory protection prevents access to memory by an
unauthorized program
• Memory bound registers indicate bounds of the memory
allocated to a program
• Virtual memory
• The part of memory hierarchy consisting of the main
memory and a disk is called virtual memory
• A program and its data are stored on the disk
• Required portions of the program and its data are loaded in
memory when accessed

Memory protection using bound registers

• The lower bound register (LBR) and upper bound register (UBR) contain
the addresses of the first and last bytes allocated to the program
• LBR and UBR are stored in the memory protection info (MPI) field of the PSW
• The CPU raises an interrupt if an address is outside the LBR–UBR range
Hardware Address Protection
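The LBR/UBR check can be sketched in a few lines of Python; the exception stands in for the hardware memory-protection interrupt, and the addresses are illustrative:

```python
class MemoryProtectionFault(Exception):
    """Stands in for the memory-protection interrupt raised by the CPU."""

def check_access(address, lbr, ubr):
    """Allow the access only if lbr <= address <= ubr.

    lbr, ubr: addresses of the first and last bytes allocated to the
    program, as held in the LBR and UBR fields of the PSW.
    """
    if not (lbr <= address <= ubr):
        raise MemoryProtectionFault(f"address {address} outside {lbr}-{ubr}")
    return address

check_access(5000, lbr=4000, ubr=8191)       # inside the allocated range
try:
    check_access(9000, lbr=4000, ubr=8191)   # outside: fault
except MemoryProtectionFault:
    print("interrupt raised")
```

The hardware performs this comparison on every memory reference; the kernel only gets involved when the interrupt fires.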

Input / Output organization

• An I/O operation slows down a program’s execution
due to mismatch of CPU and I/O speeds
• Involvement of the CPU in I/O operations should be the
minimum possible
• CPU should be free to execute instructions while I/O
operations are in progress
• Different I/O modes
• Programmed I/O - data transfer takes place through the CPU,
which cannot do other work meanwhile
• Interrupt I/O - an I/O instruction starts the operation; an interrupt
is raised whenever a unit of data has been transferred
• Direct memory access (DMA) - blocks of data are transferred
between memory and the I/O device without involving the CPU;
once the I/O completes, the DMA controller sends an interrupt,
so the CPU is free for other instructions

Input / Output modes


• Programmed I/O
• Data transfer between memory and an I/O device takes
place through the CPU
• CPU cannot perform any other operation until I/O completes
• Interrupt I/O
• An I/O instruction starts an I/O operation and frees the
CPU to execute other instructions
• An interrupt is raised every time a unit of data is to be
transferred between memory and the I/O device
• An interrupt processing program in the kernel actually transfers
the data

Two I/O Methods

Synchronous and Asynchronous

Input / Output modes


• Direct memory access (DMA)
• An I/O instruction indicates the operation to be
performed and the number of bytes of data to be
transferred. Its execution starts the I/O operation
• Data transfer is coordinated by the DMA controller; it does
not involve the CPU
• When the I/O operation completes, the DMA controller
raises an I/O interrupt to indicate its completion
• CPU is free to execute other instructions while an I/O
operation is in progress
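To see why DMA minimizes CPU involvement, here is a rough back-of-the-envelope sketch; the per-unit costs are invented, and the point is only the relative ordering of CPU time across the three I/O modes:

```python
def cpu_time_programmed(n_units, t_transfer):
    # Programmed I/O: the CPU itself moves every unit of data and
    # cannot do anything else until the whole transfer completes.
    return n_units * t_transfer

def cpu_time_interrupt(n_units, t_interrupt):
    # Interrupt I/O: the CPU fields one interrupt per unit of data;
    # between interrupts it is free for other work.
    return n_units * t_interrupt

def cpu_time_dma(t_setup, t_interrupt):
    # DMA: the CPU only sets up the transfer and handles a single
    # completion interrupt; the DMA controller moves the data.
    return t_setup + t_interrupt

# Illustrative (made-up) costs for a 4096-unit transfer:
print(cpu_time_programmed(4096, 2))   # 8192
print(cpu_time_interrupt(4096, 1))    # 4096
print(cpu_time_dma(5, 1))             # 6
```

The exact numbers are meaningless; what matters is that DMA's CPU cost is constant per transfer, while the other two modes scale with the amount of data moved.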

How a Modern Computer Works



Interrupts
• An interrupt signals the occurrence of an event to the
CPU
• An event is a situation that requires OS intervention

• At an interrupt, the interrupt action in the hardware diverts
the CPU to execution of an interrupt processing routine (IPR)
• The IPR is a routine in the OS kernel
• It handles the situation that caused the interrupt
• After the IPR, kernel switches the CPU to execution of a user
program
• Different classes of interrupts convey occurrences of
different kinds of events

Classes of interrupts
• Three important classes of interrupts are
• Program interrupt
• Caused by conditions within the CPU during execution of
an instruction; e.g.,
• Occurrence of an arithmetic overflow, addressing exception, or
memory protection exception
• Execution of the software interrupt instruction
• I/O interrupt
• Indicates completion of an I/O operation
• Timer interrupt
• Indicates that a specified time interval has elapsed

System Call
• A computer has a special instruction called a
‘software interrupt’ instruction
• Its sole purpose is to cause a program interrupt
• A program uses the software interrupt instruction to make
a request to the system
• The operand of the instruction indicates what kind of request is
being made
• Association between interrupt code and kind of request is OS-
specific
• This method of making a request is known as a system call

Use of A System Call to Perform I/O



Types of system calls

• System calls are used to make diverse kinds of requests
• Resource related
• Resource request or release, checking resource availability
• Program related
• Execute or terminate a program, set or await timer interrupt
• File related
• Open or close a file, read or write a record
• Information related
• Get time and date, get resource information
• Communication related
• Send or receive message, setup or terminate connection
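Python's os module exposes thin wrappers over several of these call categories, which makes for a convenient demonstration; the file path below is a throwaway temp file:

```python
import os
import tempfile

# Information-related system calls
pid = os.getpid()          # wraps the getpid() system call
cwd = os.getcwd()          # wraps getcwd()

# File-related system calls: open, write, read, close
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open()
os.write(fd, b"hello via system calls\n")                  # write()
os.close(fd)                                               # close()

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                                    # read()
os.close(fd)

os.unlink(path)            # resource release: unlink()
print(data)                # b'hello via system calls\n'
```

Each of these calls traps into the kernel (via the software-interrupt mechanism described earlier) and returns to user mode once the request is serviced.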

Example

 Which of the following instructions should be privileged?

b. Read the clock.
c. Clear memory.
d. Issue a trap instruction.
e. Turn off interrupts.
f. Modify entries in device-status table.
g. Switch from user to kernel mode.
h. Access I/O device.

 Answer: clear memory, turn off interrupts, modify entries in the
device-status table, and access I/O device must be privileged.
Reading the clock, issuing a trap instruction, and switching from
user to kernel mode (which is exactly what a trap does) can be
performed in user mode.

Evolution of Operating Systems

 Major OSs will evolve over time for a
number of reasons:
 hardware upgrades
 new types of hardware
 new services
 fixes

Evolution of Operating Systems

 Stages include:
 Serial Processing
 Simple Batch Systems
 Multiprogrammed Batch Systems
 Time Sharing Systems

Serial Processing

Earliest Computers:
• No operating system
• Programmers interacted directly with the computer hardware
• Computers ran from a console with display lights, toggle switches,
some form of input device, and a printer
• Users had access to the computer in “series”

Problems:
• Scheduling: most installations used a hardcopy sign-up sheet to
reserve computer time; time allocations could run short or long,
resulting in wasted computer time
• Setup time: a considerable amount of time was spent just on
setting up the program to run

Batch processing system

• In early operating systems, the computer operator had to
manually set up the execution of every job
• Aim of batch processing:
• To reduce the operator’s intervention in processing of user jobs
• A batch is a sequence of jobs
• The operator sets up processing of a batch, rather than processing
of individual jobs
• It saves valuable time spent in human actions

Simple Batch Systems

• Early computers were very expensive


• important to maximize processor utilization
• Monitor
• user no longer has direct access to processor
• job is submitted to computer operator who batches
them together and places them on an input device
• program branches back to the monitor when
finished

Monitor Point of View

• Monitor controls the sequence of events
• Resident Monitor is software always in memory
• Monitor reads in a job and gives it control
• Job returns control to monitor

Figure 2.3 Memory Layout for a Resident Monitor: the monitor region
holds interrupt processing, device drivers, job sequencing, and the
control language interpreter; below the boundary is the user program area

Batch processing system

• The operator forms a batch of jobs and inserts ‘start of batch’ and ‘end of batch’ cards
• The operator initiates processing of a batch
• The batch monitor, which is a primitive OS, performs the transition between individual jobs

Turn-around time in a batch processing system

• ‘Turn-around time’ of a job is the time between the submission of a job


and obtaining of its results
• If results are printed after entire batch is processed, turn-around time
depends on execution time of all jobs in the batch

Processor Point of View


• Processor executes instruction from the memory containing the
monitor
• Executes the instructions in the user program until it encounters
an ending or error condition
• “control is passed to a job” means processor is fetching and
executing instructions in a user program
• “control is returned to the monitor” means that the processor is
fetching and executing instructions from the monitor program

Job Control Language (JCL)

• A special type of programming language used to provide
instructions to the monitor
• e.g., which compiler to use, what data to use

Desirable Hardware Features


Memory protection for monitor
• while the user program is executing, it must not alter
the memory area containing the monitor
Timer
• prevents a job from monopolizing the system
Privileged instructions
• can only be executed by the monitor
Interrupts
• gives OS more flexibility in controlling user programs

Modes of Operation

User Mode
• user program executes in user mode
• certain areas of memory are protected from user access
• certain instructions may not be executed

Kernel Mode
• monitor executes in kernel mode
• privileged instructions may be executed
• protected areas of memory may be accessed

Simple Batch System Overhead


• Processor time alternates between execution of user
programs and execution of the monitor

• Sacrifices:
• some main memory is now given over to the monitor
• some processor time is consumed by the monitor
• Despite overhead, the simple batch system improves
utilization of the computer

Multiprogramming Systems
In a batch processing system, the CPU remained
idle while a program performed I/O operations
•Aim of multiprogramming:
• Achieve efficient use of the computer system
through overlapped execution of several programs
• While a program is performing I/O operations, the OS
schedules another program

Multiprogrammed Batch Systems

• Processor is often idle, even with automatic job sequencing
• I/O devices are slow compared to the processor

Uniprogramming

(a) Uniprogramming: Program A alternates Run and Wait periods
over time

• The processor spends a certain amount of time executing, until
it reaches an I/O instruction; it must then wait until that I/O
instruction concludes before proceeding

Multiprogramming

(b) Multiprogramming with two programs: while Program A waits
for I/O, Program B runs; combined, the processor waits only when
both programs are waiting

• There must be enough memory to hold the OS (resident monitor) and one
user program
• When one job needs to wait for I/O, the processor can switch to the other
job, which is likely not waiting for I/O
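A tiny tick-by-tick simulation (a sketch, not from the notes) shows the utilization gain: each program alternates CPU bursts and I/O waits, I/O proceeds in parallel, and the CPU runs the first program with a pending CPU burst:

```python
def simulate(programs):
    """programs: one phase list per program; each phase is a
    ['cpu' | 'io', ticks] pair.  Returns (total_ticks, cpu_busy_ticks).

    Toy model: I/O proceeds in parallel with computation, switching
    cost is ignored, and the CPU always picks the first program whose
    current phase is a CPU burst."""
    state = [[list(ph) for ph in p] for p in programs]
    t = busy = 0
    while any(state):
        running = next((p for p in state if p and p[0][0] == "cpu"), None)
        for p in state:
            if not p:
                continue
            if p is running or p[0][0] == "io":   # CPU burst or parallel I/O
                p[0][1] -= 1
                if p[0][1] == 0:
                    p.pop(0)                      # phase finished
        busy += running is not None
        t += 1
    return t, busy

def job():  # 2 ticks CPU, 2 ticks I/O, 2 ticks CPU
    return [["cpu", 2], ["io", 2], ["cpu", 2]]

t1, busy1 = simulate([job()])
t2, busy2 = simulate([job(), job()])
print(busy1 / t1)   # 0.666...: with one program the CPU idles during I/O
print(busy2 / t2)   # 1.0: with two programs the waits overlap completely
```

With this particular job mix the second program exactly fills the first one's I/O waits; real mixes rarely overlap so perfectly, but utilization always improves.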

Multiprogramming

(c) Multiprogramming with three programs: the processor switches
among A, B, and C, leaving even less combined wait time

• Multiprogramming is also known as multitasking
• Memory is expanded to hold three, four, or more programs,
and the processor switches among all of them

Architectural support for multiprogramming

• The computer’s architecture must contain the following features to
support multiprogramming
• DMA
• To provide parallel operation of the CPU and I/O
• Interrupt hardware
• To implement the interrupt action, which passes control to the OS
• Memory protection
• To prevent corruption or disruption of a program by other programs
• Privileged mode of CPU
• The CPU must be in privileged mode to execute sensitive instructions
• It is in user mode, i.e., non-privileged mode, while executing user
programs

Concepts and techniques of multiprogramming


• Three key concepts are
• Use a suitable program mix of CPU-bound and I/O-bound
programs
• A CPU-bound program performs computations most of the time,
and I/O operations seldom
• An I/O bound program performs I/O operations frequently
• Assign suitable priorities to programs
• Decide which program should be favoured for execution on the
CPU—an I/O-bound program or a CPU-bound program?
• Use a suitably high degree of multiprogramming
• Degree of multiprogramming = No. of programs in memory
• Facilitates parallel operation of CPU and I/O system

Performance and user service in multiprogramming

• Performance is measured as throughput


• Throughput is the number of programs serviced by the
system in a unit of time
• For high throughput, I/O-bound programs should have
higher priorities than CPU-bound programs
• User service is measured as turn-around time of a
job
• Turn-around time of a job is the elapsed time between
submission of a job and its completion
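Both metrics are easy to compute from job timestamps; the submission and completion times below are invented for illustration:

```python
# Jobs as (submission_time, completion_time) pairs, in minutes (made up).
jobs = [(0, 10), (2, 14), (4, 25), (6, 30)]

# Turn-around time of a job = completion time - submission time
turnaround = [c - s for s, c in jobs]
mean_turnaround = sum(turnaround) / len(jobs)

# Throughput = number of programs serviced per unit of time
interval = max(c for _, c in jobs) - min(s for s, _ in jobs)
throughput = len(jobs) / interval

print(turnaround)       # [10, 12, 21, 24]
print(mean_turnaround)  # 16.75 minutes
print(throughput)       # about 0.13 jobs per minute
```

Note the tension: favouring I/O-bound jobs raises throughput, but a long CPU-bound job may then see its turn-around time grow.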

Time-Sharing Systems
• Can be used to handle multiple interactive jobs
• Processor time is shared among multiple users
• Multiple users simultaneously access the system
through terminals, with the OS interleaving the
execution of each user program in a short burst or
quantum of computation

A schematic of round-robin scheduling with time slicing

• The OS maintains a list of programs that wish to execute on the CPU


• The scheduler selects the first program in the list
• If the time slice elapses before the scheduled program completes its
operation, it is preempted and put back into the scheduling list
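The scheduling loop described above can be sketched directly; the burst lengths and the quantum are arbitrary illustrative numbers:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin scheduling with time slicing.

    bursts: {program_name: remaining CPU ticks}.  Returns the order in
    which programs hold the CPU, one entry per time slice used."""
    ready = deque(bursts)           # the scheduling list, FIFO order
    remaining = dict(bursts)
    order = []
    while ready:
        prog = ready.popleft()      # scheduler selects the first program
        order.append(prog)
        run = min(quantum, remaining[prog])
        remaining[prog] -= run
        if remaining[prog] > 0:     # slice elapsed before completion:
            ready.append(prog)      # preempt, put back at end of the list
    return order

print(round_robin({"A": 3, "B": 5, "C": 2}, quantum=2))
# ['A', 'B', 'C', 'A', 'B', 'B']
```

Each program gets the CPU for at most one quantum before being put back at the tail of the list, which is what bounds the response time seen by interactive users.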

Batch Multiprogramming vs. Time Sharing

                            Batch Multiprogramming          Time Sharing
Principal objective         Maximize processor use          Minimize response time
Source of directives to     Job control language commands   Commands entered at
operating system            provided with the job           the terminal

Real time Operating system


•A real time OS is used to service time critical
applications
• A time critical application is one that malfunctions if
it does not receive a ‘timely response’
• A real time OS focuses on providing a timely response
to an application

Real time Operating system


• Two kinds of real time operating systems
• Hard real time system: Meets the response requirement
of an application under all conditions (including error
recovery actions, if any)
• Used in command and control applications
• A computer system may have to be dedicated to an
application
• Soft real time system: Meets the response requirement
of an application in a probabilistic manner
• Used in applications such as multimedia and reservation
systems

Distributed operating system


•Distributed computer system
• A distributed computer system consists of a number
of computer systems, each having its own memory
and performing some of the control functions of
the OS, interacting through the network
• Each computer system is called a host or node

A distributed system

• A WAN connects geographically distant nodes


• A LAN connects nodes within an office, laboratory or building

Distributed systems
• Benefits of a distributed system
• Resource sharing
• An application can use resources located in other computers
• Reliability
• Provides availability of resources despite faults
• Computation speed-up
• Parts of a computation can be executed simultaneously in different computers
• Communication
• Users in different computer systems can communicate
• Incremental growth
• Cost of enhancing capabilities of a system is proportional to the desired
enhancement

Features of Distributed Operating Systems


• Parts of a distributed operating system execute in
different nodes of a distributed system. Its salient
features are:
• The OS provides support for distributed computations
• Remote procedure call (RPC)
• Distributed file systems
• The OS uses distributed control techniques because
• Several computers participate in a decision
• Control data may be distributed across nodes in the system

Parallel Systems
 Most systems use a single general-purpose processor
 Most systems have special-purpose processors as well
 Multiprocessors systems (two or more processors in close communication,
sharing bus and sometimes clock and memory) growing in use and importance
 Also known as parallel systems, tightly-coupled systems
 Advantages include
1. Increased throughput
2. Economy of scale
3. Increased reliability – graceful degradation or fault tolerance

Multiprocessors systems
Key role – the scheduler
 Two types of Multiprocessing:
1. Asymmetric Multiprocessing -
assigns certain tasks only to certain
processors. In particular, only one
processor may be responsible for
handling all of the interrupts in the
system or perhaps even performing
all of the I/O in the system
2. Symmetric Multiprocessing -
treats all of the processing elements
in the system identically

Process
• Fundamental to the structure of operating systems

A process can be defined as:

a program in execution
an instance of a running program

the entity that can be assigned to, and executed on, a processor
a unit of activity characterized by a single sequential thread of execution, a
current state, and an associated set of system resources
• High CPU utilization is preferred while running processes
• Throughput = work done per unit of time

Components of a Process

• A process contains three essential components:
• an executable program
• the associated data needed by the program (variables,
work space, buffers, etc.)
• the execution context (or “process state”) of the program
• The execution context is essential:
• it is the internal data by which the OS is able to supervise and
control the process
• includes the contents of the various processor registers
• includes information such as the priority of the process and
whether the process is waiting for the completion of a
particular I/O event


Process
• Process – a program in execution; process execution must
progress in sequential fashion
• A process includes:
• program counter
• stack
• data section
• As a process executes, it changes state
• new: The process is being created
• running: Instructions are being executed
• waiting: The process is waiting for some event to occur
• ready: The process is waiting to be assigned to a processor
• terminated: The process has finished execution

Diagram of Process State

(State diagram: admitted takes new to ready; scheduler dispatch takes
ready to running; an interrupt takes running back to ready; an I/O or
event wait takes running to waiting; I/O or event completion takes
waiting to ready; exit takes running to terminated)
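The five-state model and its transitions can be encoded as a small table; the event names are illustrative labels for the arcs of the diagram:

```python
# Legal transitions in the five-state process model
TRANSITIONS = {
    ("new", "admitted"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "interrupt"): "ready",       # e.g., time slice expired
    ("running", "io_wait"): "waiting",       # waits for I/O or an event
    ("waiting", "io_complete"): "ready",
    ("running", "exit"): "terminated",
}

def next_state(state, event):
    """Apply one event; anything not in the table is an illegal move."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} in state {state}")

# A typical life cycle of a process:
s = "new"
for event in ["admitted", "dispatch", "io_wait", "io_complete",
              "dispatch", "exit"]:
    s = next_state(s, event)
print(s)  # terminated
```

Encoding the diagram as a table makes the key invariant explicit: a process can never jump, say, from ready straight to terminated; every path runs through running.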
Process Control Block (PCB)

Information associated with each process:
• Process state
• Program counter
• CPU registers
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information
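A PCB can be sketched as a plain record; the field names below mirror the list above but are illustrative, not taken from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A sketch of a Process Control Block (field names are hypothetical)."""
    pid: int
    state: str = "new"                              # process state
    program_counter: int = 0                        # program counter
    registers: dict = field(default_factory=dict)   # CPU registers
    priority: int = 0                               # CPU scheduling info
    base: int = 0                                   # memory-management info
    limit: int = 0
    cpu_time_used: int = 0                          # accounting info
    open_files: list = field(default_factory=list)  # I/O status info

pcb = PCB(pid=42, priority=5, base=4000, limit=4096)
pcb.state = "ready"
print(pcb.pid, pcb.state)
```

The kernel keeps one such record per process; saving and restoring the register and program-counter fields is exactly what a context switch does.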

Process Control Block (PCB)

 The entire state of the process at any instant is contained in its context
 New features can be designed and incorporated into the OS by
expanding the context to include any new information needed to
support the feature

(Figure: Process Management – main memory holds a process list and,
for each process, its context, data, and program code; the processor
registers hold the current process index, program counter, and
base/limit registers)

CPU Switch From Process to Process

(Figure: when the CPU switches to another process, the state of the
old process is saved and the saved state of the new process is loaded
– a fundamental step in scheduling)

Context Switch

• When CPU switches to another process, the system must save the
state of the old process and load the saved state for the new
process.
• Context-switch time is overhead; the system does no useful work
while switching.
• Time dependent on hardware support.

Threads

• Context switch between processes is an expensive operation.
It leads to high overhead
• A thread is an alternative model for execution of a program
that incurs smaller overhead while switching between
threads of the same application

Q: How is this achieved?

Process switching overhead

• A process context switch involves:


1. saving the context of the process in operation
2. saving its CPU state
3. loading the state of the new process
4. loading its CPU state

Threads

• A thread is the smallest unit of computation; many threads can be
created within the same process
• A thread is a program execution within the context
of a process (i.e., it uses the resources of a process)
• Switching between threads of the same process
involves much less switching overhead

Process & Threads


One or More Threads in a Process

Each thread has:
• an execution state (Running, Ready, etc.)
• saved thread context when not running
• an execution stack
• some per-thread static storage for local variables
• access to the memory and resources of its
process (all threads of a process share this)
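Because all threads of a process share its address space, two Python threads can update the same variable directly, provided access is synchronized:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # synchronize access to the shared variable
            counter += 1

# All threads see the same 'counter', because threads share their
# process's address space; no message passing or shared-memory
# setup is needed, unlike between separate processes.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Dropping the lock would make the result unpredictable, which previews the synchronization problem discussed at the end of this section.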

(Figure 4.2 Single-Threaded and Multithreaded Process Models: in the
single-threaded model a process has one process control block, one
user address space, and one user/kernel stack pair; in the
multithreaded model the process control block and user address space
are shared, while each thread has its own thread control block, user
stack, and kernel stack)

The key states for a thread are:
• Running
• Ready
• Blocked

Thread operations associated with a change in thread state are:
 Spawn
 Block
 Unblock
 Finish


Threads

• Where are threads useful?
• If two processes share the same address space and the same
resources, they have identical context
• switching between these processes involves saving and
reloading of their contexts. This overhead is redundant.
• In such situations, it is better to use threads.


Benefits of Threads

• Takes less time to create a new thread than a process
• Takes less time to terminate a thread than a process
• Switching between two threads takes less time than switching between processes
• Threads enhance efficiency in communication between programs

Thread Use in a Single-User System
• Foreground and background work
• Asynchronous processing
• Speed of execution
• Modular program structure

 In an OS that supports threads, scheduling and


dispatching is done on a thread basis

Most of the state information dealing with


execution is maintained in thread-level data
structures
suspending a process involves suspending all
threads of the process
termination of a process terminates all
threads within the process

Scheduling of user-level threads



Scheduling of kernel-level threads

Thread Synchronization

• It is necessary to synchronize the activities of the


various threads

• all threads of a process share the same
address space and other resources
• any alteration of a resource by one thread
affects the other threads in the same process

Different kinds of threads

• Kernel-level threads: Threads are created through system


calls. The kernel is aware of their existence, and schedules
them
• User-level threads: Threads are created and maintained by a
thread library, whose routines exist as parts of a process.
Kernel is oblivious of their existence.
• Hybrid threads: A combination of the above.

February 24

CPU Scheduling

Scheduling

•Scheduling is the act of determining the order in


which requests should be taken up for servicing
• A request is a unit of computational work
• It could be a job, a process, or a subrequest made to a
process

Basic Concepts
• Maximum CPU utilization
obtained with
multiprogramming
• CPU–I/O Burst Cycle – Process
execution consists of a cycle
of CPU execution and I/O
wait.
• CPU burst distribution

Process Scheduling Queues


• Job queue – set of all processes in the system.
• Ready queue – set of all processes residing in main
memory, ready and waiting to execute.
• Device queues – set of processes waiting for an I/O
device.
• Process migration between the various queues.

Ready Queue And Various I/O Device Queues

Long, medium, and short-term scheduling


• A single scheduler cannot provide the desired combination of
performance and user service, so an OS uses three schedulers
• Long-term scheduler
• Decides when to admit an arrived process
• Uses nature of a process, availability of resources to decide
• Medium-term scheduler
• Performs swapping
• Maintains a sufficient number of processes in memory
• Short-term scheduler
• Decides which ready process should operate on the CPU

Long-Term Scheduler
• Determines which programs are admitted to the system for processing
• Controls the degree of multiprogramming
  • the more processes that are created, the smaller the percentage of time that each process can be executed
  • may limit the degree of multiprogramming to provide satisfactory service to the current set of processes
• Creates processes from the queue when it can, but must decide:
  • when the operating system can take on one or more additional processes
  • which jobs to accept and turn into processes (first come first served, or by priority, expected execution time, I/O requirements)

Medium Term Scheduling


• Part of the swapping function
• Swapping-in decisions are based on the need to manage the degree of
multiprogramming
• considers the memory requirements of the swapped-out processes

Short-Term Scheduling
• Known as the dispatcher
• Executes most frequently
• Makes the fine-grained decision of which process to execute next
• Invoked when an event occurs that may lead to the blocking of the
current process or that may provide an opportunity to preempt a
currently running process in favor of another

Examples:

• Clock interrupts
• I/O interrupts
• Operating system calls
• Signals (e.g., semaphores)

Schedulers
• Short-term scheduler is invoked very frequently (milliseconds)  (must
be fast).
• Long-term scheduler is invoked very infrequently (seconds, minutes) 
(may be slow).
• The long-term scheduler controls the degree of multiprogramming.
• Processes can be described as either:
• I/O-bound process – spends more time doing I/O than computations, many short
CPU bursts.
• CPU-bound process – spends more time doing computations; few very long CPU
bursts.

Dispatcher
• Dispatcher module gives control of the CPU to the process
selected by the short-term scheduler; this involves:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that
program
• Dispatch latency – time it takes for the dispatcher to stop one
process and start another running.

Short Term Scheduling Criteria


• Main objective is to allocate processor time to optimize certain aspects of system behavior
• A set of criteria is needed to evaluate the scheduling policy
• User-oriented criteria relate to the behavior of the system as perceived by the individual user or process (such as response time in an interactive system); important on virtually all systems
• System-oriented criteria focus on effective and efficient utilization of the processor (the rate at which processes are completed); generally of minor importance on single-user systems

Short-Term Scheduling Criteria: Performance

Criteria can be classified into:
• Performance-related: quantitative, easily measured (examples: response time, throughput)
• Non-performance-related: qualitative, hard to measure (example: predictability)

Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per time
unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the
ready queue
• Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for
time-sharing environment)

Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time

Fundamental techniques of scheduling


• Three fundamental techniques are used in scheduling
• Priority-based scheduling
• As seen in the context of multiprogramming
• Reordering of requests: it may be used to
• Enhance system throughput, e.g., as in multiprogramming
• Enhance user service, e.g., as in time sharing
• Variation of time slice
• Small time slice yields better response times
• Large time slice may reduce scheduling overhead

More on priority
• Features of priority-based scheduling
• Priorities may be static or dynamic
• A static priority is assigned to a request before it is admitted
• A dynamic priority is one that is varied during servicing of a
request
• How to handle processes having same priority?
• Round-robin scheduling is performed within a priority level
• Starvation of a low priority request may occur
Q: How to avoid starvation?

Kinds of Scheduling
• Scheduling may be performed in two ways
• Non-preemptive scheduling
• A process runs to completion when scheduled
• Preemptive scheduling
• Kernel may preempt a process and schedule another one
• A set of processes are serviced in an overlapped manner

Q: What are the benefits of preemptive scheduling?



Kinds of Scheduling
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready.
4. Terminates.
• Scheduling under 1 and 4 is non-preemptive.
• All other scheduling is preemptive.

First- Come, First-Served (FCFS) Scheduling


Process Burst Time
P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
P1 P2 P3
0 24 27 30

• Waiting time for P1 = 0; P2 = 24; P3 = 27


• Average waiting time: (0 + 24 + 27)/3 = 17
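The arithmetic above can be checked with a short sketch (not from the notes; the helper `fcfs_waiting_times` is hypothetical), assuming all processes arrive at time 0 and are served in the given order:

```python
# FCFS: each process waits until every process ahead of it has finished.
def fcfs_waiting_times(bursts):
    """Return per-process waiting times for FCFS with simultaneous arrival."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # waits for everything scheduled before it
        clock += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])   # arrival order P1, P2, P3
print(waits, sum(waits) / len(waits))    # [0, 24, 27] 17.0
```

Running it with the reordered arrivals, `fcfs_waiting_times([3, 3, 24])` gives `[0, 3, 6]` (average 3.0), matching the next slide.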

FCFS Scheduling (Cont.)


Suppose that the processes arrive in the order:
P2 , P3 , P1
• The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30

• Waiting time for P1 = 6; P2 = 0; P3 = 3


• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case
• Convoy effect - short process behind long process
• Consider one CPU-bound and many I/O-bound processes

Shortest-Job-First (SJF/SPF) Scheduling


• Associate with each process the length of its next CPU
burst
• Use these lengths to schedule the process with the
shortest time
• SJF is optimal – gives minimum average waiting time
for a given set of processes
• The difficulty is knowing the length of the next CPU
request
• Could ask the user

Example of SJF
Process  Arrival Time  Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3
• SJF scheduling chart

P4 P1 P3 P2
0 3 9 16 24

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
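A sketch (not from the notes; `sjf_average_wait` is a made-up helper) of the same calculation, assuming — as the Gantt chart above effectively does — that all four processes are available at time 0 and the next burst lengths are known exactly:

```python
# Non-preemptive SJF: always service the shortest available job next.
def sjf_average_wait(bursts):
    waits, clock = {}, 0
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = clock          # waits for all shorter jobs before it
        clock += burst
    return waits, sum(waits.values()) / len(waits)

waits, avg = sjf_average_wait({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits, avg)   # P4 waits 0, P1 waits 3, P3 waits 9, P2 waits 16; avg 7.0
```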

Example of Shortest-remaining-time-first
• Now we add the concepts of varying arrival times and preemption to the analysis
Process  Arrival Time  Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
• Preemptive SJF Gantt Chart

P1 P2 P4 P1 P3
0 1 5 10 17 26

• Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5 msec
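The preemptive schedule above can be replayed with a one-time-unit simulator (a sketch, not from the notes; `srtf` is a hypothetical helper):

```python
# Shortest-remaining-time-first: at every time unit, run the ready
# process with the least remaining service time (preempting if needed).
def srtf(procs):
    """procs: {name: (arrival, burst)}; returns {name: waiting_time}."""
    remaining = {n: b for n, (a, b) in procs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= clock]
        if not ready:
            clock += 1
            continue
        cur = min(ready, key=lambda n: remaining[n])  # least remaining time
        remaining[cur] -= 1
        clock += 1
        if remaining[cur] == 0:
            del remaining[cur]
            finish[cur] = clock
    # waiting time = turnaround time - burst time
    return {n: finish[n] - procs[n][0] - procs[n][1] for n in procs}

waits = srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(waits, sum(waits.values()) / 4)  # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2} 6.5
```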



Example of Preemptive SJF


Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (preemptive)

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

• Average waiting time = (9 + 1 + 0 +2)/4 = 3

Determining Length of Next CPU Burst


• Can only estimate the length.
• Can be done by using the length of previous CPU bursts,
using exponential averaging.
1. t_n = actual length of the n-th CPU burst
2. τ_(n+1) = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_(n+1) = α·t_n + (1 − α)·τ_n

Prediction of the Length of the Next CPU Burst

Examples of Exponential Averaging


• α = 0
  • τ_(n+1) = τ_n
  • Recent history does not count.
• α = 1
  • τ_(n+1) = t_n
  • Only the actual last CPU burst counts.
• If we expand the formula, we get:
  τ_(n+1) = α·t_n + (1 − α)·α·t_(n−1) + … + (1 − α)^j·α·t_(n−j) + … + (1 − α)^(n+1)·τ_0
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.

Priority Scheduling
• A priority number (integer) is associated with each process

• The CPU is allocated to the process with the highest priority (smallest integer
 highest priority)
• Preemptive
• Nonpreemptive

• SJF is priority scheduling where priority is the inverse of predicted next CPU
burst time

• Problem  Starvation – low priority processes may never execute

• Solution  Aging – as time progresses increase the priority of the process

Example of Priority Scheduling


Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
• Priority scheduling Gantt Chart

• Average waiting time = 8.2 msec



Example of Scheduling
Process Arrival Time Burst Time Priority
P1 0 14 2
P2 2 10 1
P3 3 8 2
P4 5 5 0
P5 5 5 1

Round-robin scheduling
• Each process gets a small unit of CPU time (time quantum q), usually 10-
100 milliseconds. After this time has elapsed, the process is preempted
and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q,
then each process gets 1/n of the CPU time in chunks of at most q time
units at once. No process waits more than (n-1)q time units.
• Timer interrupts every quantum to schedule next process
• Performance
• q large  FIFO
• q small  q must be large with respect to context switch, otherwise
overhead is too high

Example of RR with Time Quantum = 4


Process Burst Time
P1 24
P2 3
P3 3
• The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

• Typically, higher average turnaround than SJF, but better response


• q should be large compared to context switch time
• q usually 10ms to 100ms, context switch < 10 usec
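A sketch (not from the notes; `round_robin` is a made-up helper) that reproduces the Gantt chart above, assuming all three processes arrive at time 0:

```python
# Round-robin: run the head of the ready queue for at most one quantum,
# then move it (if unfinished) to the back of the queue.
from collections import deque

def round_robin(bursts, q):
    """bursts: {name: burst}; returns ({name: completion_time}, gantt)."""
    queue = deque(bursts)                # ready queue in arrival order
    remaining = dict(bursts)
    clock, finish, gantt = 0, {}, []
    while queue:
        name = queue.popleft()
        run = min(q, remaining[name])    # one quantum at most
        gantt.append((name, clock, clock + run))
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)           # preempted: back of the queue
        else:
            finish[name] = clock
    return finish, gantt

finish, gantt = round_robin({"P1": 24, "P2": 3, "P3": 3}, q=4)
print(finish)    # P2 finishes at 7, P3 at 10, P1 at 30 — matching the chart
```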

Time Quantum and Context Switch Time



Turnaround Time Varies With The Time Quantum

80% of CPU bursts should be shorter than q

Highest Response Ratio Next (HRRN)


• Chooses the next process with the greatest response ratio R = (w + s) / s, where w is the time spent waiting for the processor and s is the expected service time
• Attractive because it accounts for the age of the process
• While shorter jobs are favored, aging without service increases the ratio, so a longer process will eventually get past competing shorter jobs
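A sketch (not from the notes; `hrrn_next` is a hypothetical helper) of the selection rule, taking the response ratio as (waiting time + service time) / service time:

```python
# HRRN: pick the ready process with the greatest response ratio.
def hrrn_next(ready):
    """ready maps name -> (waiting_time, expected_service_time)."""
    return max(ready, key=lambda n: (ready[n][0] + ready[n][1]) / ready[n][1])

# With little accumulated waiting, the shorter job wins (R = 1.2 vs 1.5):
print(hrrn_next({"long": (2, 10), "short": (1, 2)}))    # short
# After the long job has aged in the queue, its ratio overtakes (2.8 vs 1.5):
print(hrrn_next({"long": (18, 10), "short": (1, 2)}))   # long
```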

Figure: a comparison of scheduling policies — First-Come-First-Served (FCFS), Round-Robin (q = 1), Round-Robin (q = 4), Shortest Process Next (SPN), Shortest Remaining Time (SRT), and Highest Response Ratio Next (HRRN) — executing the same five processes A–E over the interval 0–20.

Multilevel scheduling
• Salient features
• Scheduler uses many lists of ready processes
• Each list has a pair (time slice, priority) associated with it
• The time slice is inversely proportional to the priority
• Simple priority-based scheduling between priority levels
• Round-robin scheduling within each priority level

Q: How to organize the lists for minimum scheduling overhead?



Multilevel Queue
• Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
• Each queue has its own scheduling algorithm,
foreground – RR
background – FCFS
• Scheduling must be done between the queues.
• Fixed priority scheduling; (i.e., serve all from foreground then from
background). Possibility of starvation.
• Time slice – each queue gets a certain amount of CPU time which it can
schedule amongst its processes; i.e., 80% to foreground in RR
• 20% to background in FCFS

Ready queues in a multilevel scheduler

• Each queue header has two pointers—to first queue in list


and to the next queue header
• Queue headers are linked in order of reducing priority

Multilevel Queue Scheduling

Multilevel adaptive scheduling

• The scheduling policy adapts to its workload to provide a good


combination of service and performance
• Adapt treatment of each process to its behaviour
• The priority of a process is varied depending on its recent behaviour
• If the process uses up its time slice, it must be (more) CPU bound than
assumed, so provide a larger time slice at a lower priority
• If a process is starved of CPU attention for some time, increase its
priority
• Improves response time and turn-around time


Multilevel adaptive scheduling


• A process can move between the various queues; aging can be
implemented this way.
• Multilevel-feedback-queue (Multilevel adaptive scheduling) scheduler
defined by the following parameters:
• number of queues
• scheduling algorithms for each queue
• method used to determine when to upgrade a process
• method used to determine when to demote a process
• method used to determine which queue a process will enter when that process
needs service

Example of Multilevel Feedback Queue

• Three queues:
• Q0 – time quantum 8 milliseconds
• Q1 – time quantum 16 milliseconds
• Q2 – FCFS
• Scheduling
• A new job enters queue Q0 which is served FCFS.
When it gains CPU, job receives 8 milliseconds. If it
does not finish in 8 milliseconds, job is moved to
queue Q1.
• At Q1 job is again served FCFS and receives 16
additional milliseconds. If it still does not complete,
it is preempted and moved to queue Q2.
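Under this demotion rule, the queue in which a job finishes depends only on its total CPU burst (a sketch, not from the notes; `finishing_queue` and the quanta tuple are assumptions matching the three-queue example):

```python
# Which queue does a CPU burst complete in, given Q0 (q=8), Q1 (q=16),
# and Q2 (FCFS)?  A job that exhausts a queue's quantum is demoted.
def finishing_queue(burst, quanta=(8, 16)):
    used = 0
    for level, q in enumerate(quanta):
        used += q
        if burst <= used:
            return level          # completed within this queue's quantum
    return len(quanta)            # demoted all the way to the FCFS queue

print([finishing_queue(b) for b in (5, 20, 30)])   # [0, 1, 2]
```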

Fair-Share Scheduling (Proportional)

• Scheduling decisions are based on the process sets
• Each user is assigned a share of the processor
• Objective is to monitor usage so as to give fewer resources to users who have had more than their fair share, and more to those who have had less than their fair share


• Example: with plain time slicing, two processes P1 and P2 would each be given 50% of the processor; under fair-share scheduling the allocation follows the assigned shares — e.g., 66% to P1 and 33% to P2.
Performance analysis
• Performance is sensitive to the workload of requests directed at a server, so it must be analyzed in the environment where the server is to be used
• Three methods of performance analysis:
  • Deterministic modeling – takes a particular predetermined workload and defines the performance of each algorithm for that workload
  • Simulation – simulate the functioning of the scheduler (run the processes under a program that models the system) and determine completion times, throughput, etc.; it is costly and slows the performance of the system
  • Mathematical modeling – use a model of the server and a model of the workload to obtain expressions for service times, etc.
• Alternatively, implement a scheduler and study its performance for real requests

Mathematical modeling
• A mathematical model is a set of expressions for performance
characteristics such as arrival times and service times of requests
• Queuing theory is employed
• To provide arrival and service patterns
• Exponential distributions are used because of their memoryless property
• Arrival times: F(t) = 1 − e^(−αt), where α is the mean arrival rate
• Service times: S(t) = 1 − e^(−ωt), where ω is the mean execution rate
• Mean queue length is given by Little’s formula
• L = λ × W
• L – the average number of items in a queuing system (mean queue length)
• λ – the average number of items arriving at the system per unit of time
• W – the average waiting time an item spends in a queuing system
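Little's formula in code (a trivial sketch, not from the notes; the function name and rates are illustrative):

```python
# Little's formula: L = lambda * W
def mean_queue_length(arrival_rate, mean_wait):
    return arrival_rate * mean_wait

# e.g. 10 requests/second arriving, each spending 0.5 s in the system:
print(mean_queue_length(10, 0.5))   # 5.0 items in the system on average
```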


Evaluation of CPU Schedulers by Simulation



• Loosely coupled or distributed multiprocessor, or cluster
  • consists of a collection of relatively autonomous systems, each processor having its own main memory and I/O channels; connectivity can be a bottleneck
• Functionally specialized processors
  • there is a master, general-purpose processor; specialized processors are controlled by the master processor and provide services to it
• Tightly coupled multiprocessor
  • consists of a set of processors that share a common main memory and are under the integrated control of an operating system; communication is very fast and the processors are identical (homogeneous)
• We can also have heterogeneous systems (which creates dependency); if dependency exists between tasks, we call them co-operative tasks

Table 10.1 Synchronization Granularity and Processes

Grain Size    Description                                                      Synchronization Interval (Instructions)
Fine          Parallelism inherent in a single instruction stream              < 20
Medium        Parallel processing or multitasking within a single application  20–200
Coarse        Multiprocessing of concurrent processes in a multiprogramming    200–2000
              environment
Very Coarse   Distributed processing across network nodes to form a single     2000–1M
              computing environment
Independent   Multiple unrelated processes                                     not applicable

Note: as synchronization becomes finer, more tightly coupled communication is required.

• No explicit synchronization among processes; each user is performing a particular application
• Each process represents a separate, independent application or job
• Typical use is in a time-sharing system
• The multiprocessor provides the same service as a multiprogrammed uniprocessor; because more than one processor is available, average response time to the users will be less

• The objective is balancing the load
• Gang (group) scheduling: a group of related processes is given the CPU at the same time — if a process has, say, 5 threads, all of its threads must be scheduled together

• Synchronization among processes, but at a very gross level


• Good for concurrent processes running on a multiprogrammed
uniprocessor
• can be supported on a multiprocessor with little or no change to user
software

Dynamic Scheduling

• The number of threads in a process is altered dynamically by the application

• Single application can be effectively implemented as a


collection of threads within a single process
• there needs to be a high degree of coordination and interaction
among the threads of an application, leading to a medium-grain
level of synchronization
• Because the various threads of an application interact so
frequently, scheduling decisions concerning one thread may
affect the performance of the entire application

• Represents a much more complex use of parallelism than is


found in the use of threads
• Is a specialized and fragmented area with many different
approaches

Scheduling on a multiprocessor involves three interrelated issues:
• assignment of processes to processors
• use of multiprogramming on individual processors
• actual dispatching of a process

The approach taken will depend on the degree of granularity of the applications and the number of processors available. Assuming all processors are equal, it is simplest to treat processors as a pooled resource and assign processes to processors on demand; whether assignment is static or dynamic needs to be determined.

If a process is permanently assigned to one processor from activation until its completion, then a dedicated short-term queue is maintained for each processor; the advantage is that there may be less overhead in the scheduling function, and it allows group or gang scheduling.

• A disadvantage of static assignment is that one processor can be idle, with an empty queue,
while another processor has a backlog
• to prevent this situation, a common queue can be used
• another option is dynamic load balancing

• Both dynamic and static methods require some


way of assigning a process to a processor
• Approaches:
• Master/Slave
• Peer

• Key kernel functions always run on a particular processor


• Master is responsible for scheduling
• Slave sends service request to the master
• Is simple and requires little enhancement to a uniprocessor
multiprogramming operating system
• Conflict resolution is simplified because one processor has
control of all memory and I/O resources

Disadvantages:
• failure of master brings down whole system
• master can become a performance bottleneck

• Kernel can execute on any processor


• Each processor does self-scheduling from the pool of
available processes

Complicates the operating system


• operating system must ensure that two processors do not
choose the same process and that the processes are not
somehow lost from the queue

• Usually processes are not dedicated to processors


• A single queue is used for all processors
• if some sort of priority scheme is used, there are multiple
queues based on priority
• System is viewed as being a multi-server queuing architecture

Concurrency, Mutual Exclusion &


Process Synchronization

Contents

• Principles of Concurrency
• Mutual Exclusion: Hardware Support
• Semaphores
• Monitors
• Message Passing
• Readers/Writers Problem

Multiple Processes

 Central to the design of modern Operating Systems is managing multiple


processes
 Multiprogramming
 Multiprocessing
 Distributed Processing
 Big Issue is Concurrency
 Managing the interaction of all of these processes

Concurrency

Concurrency arises in three different contexts:


 Multiple applications
 Sharing time
 Structured applications
 Extension of modular design (concurrent processes)
 Operating system structure
 OS themselves implemented as a set of processes or threads

Key Terms

Interleaving and Overlapping Processes

Figure: on a uniprocessor, the instructions of concurrent processes are INTERLEAVED in time.

Interleaving and Overlapping Processes

 And not only interleaved but overlapped on multi-processors

Figure: on a multiprocessor, the instructions of concurrent processes are OVERLAPPED as well as interleaved.

As long as each process writes only to its own private data, there is no concurrency issue; but as soon as several processes write to a shared resource, many problems can arise.

Difficulties of Concurrency

 Sharing of global resources


 Writing a shared variable: the order of writes is important
 Incomplete writes a major problem
 Optimally managing the allocation of resources
 Difficult to locate programming errors, as results are not deterministic and reproducible — e.g., when threads are not synchronized in Java, we may get a different result every run

Principles of Concurrency: A Simple Example

char chin, chout;   /* chin and chout are shared (global) variables */

void echo()
{
    chin = getchar();
    chout = chin;
    putchar(chout);
}

A Simple Example: On a Multiprocessor

Process P1 Process P2
. .
chin = getchar(); .
. chin = getchar();
chout = chin; chout = chin;
putchar(chout); .
. putchar(chout);
. .

Enforce Single Access

 If we enforce a rule that only one process may enter the function at a time, then:
 P1 & P2 run on separate processors
 P1 enters echo first
 P2 tries to enter but is blocked – P2 suspends
 P1 completes execution
 P2 resumes and executes echo
 This avoids the nondeterministic result: it essentially ensures mutual exclusion

Race Condition

 A race condition occurs when:
 Multiple processes or threads read and write data items
 They do so in a way where the final result depends on the order of execution of the processes
 The output depends on who finishes the race last

Race Conditions

Two processes want to access shared memory at the same time.


The final result depends on who runs precisely when.

Producer/Consumer Problem

• One or more producers are generating data and


placing these in a buffer
• A single consumer is taking items out of the buffer one
at time
• Only one producer or consumer may access the buffer
at any one time
• Producer can’t add data into full buffer and consumer
can’t remove data from empty buffer

Producer
Suppose that we wanted to provide a solution to the consumer-producer problem that
fills all the buffers. We can do so by having an integer count that keeps track of the
number of full buffers. Initially, count is set to 0. It is incremented by the producer
after it produces a new buffer and is decremented by the consumer after it consumes
a buffer.
while (true) {
/* produce an item and put in nextProduced */
while (count == BUFFER_SIZE)
; // do nothing
buffer [in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
count++;
}

Consumer

while (true) {
while (count == 0)
; // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
count--;
/* consume the item in nextConsumed */
}

Race Condition
 count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1

 count-- could be implemented as


register2 = count
register2 = register2 - 1
count = register2

 Consider this execution interleaving with “count = 5” initially:


S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
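The lost update can be reproduced deterministically by replaying the interleaving S0–S5 as straight-line code (a sketch, not from the notes):

```python
# Replay of the interleaving above: one increment plus one decrement
# should leave count at 5, but the unprotected read-modify-write loses
# the producer's update.
count = 5
register1 = count          # S0: producer reads count           -> 5
register1 = register1 + 1  # S1: producer increments privately  -> 6
register2 = count          # S2: consumer reads count           -> 5
register2 = register2 - 1  # S3: consumer decrements privately  -> 4
count = register1          # S4: producer writes back           -> count = 6
count = register2          # S5: consumer writes back           -> count = 4
print(count)               # 4: the producer's update is lost
```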

Potential Problems
Need for Mutual Exclusion: the Critical Section

• Data incoherency
• Deadlock: processes are “frozen” because of
mutual dependency on each other
• Starvation: some of the processes are unable to
make progress (i.e., to execute useful code)

The Critical-Section Problem

 n processes all competing to use some shared data


 When a process executes code that manipulates shared
data (or resource), we say that the process is in it’s critical
section (CS) (for that shared data)
 Problem – ensure that when one process is executing in its
critical section, no other process is allowed to execute in its
critical section.
 The execution of critical sections must be mutually exclusive: at any time, only one
process is allowed to execute in its critical section (even with multiple CPUs)
 Then each process must request the permission to enter it’s critical section (CS)

① Request  ② Use  ③ Release

The critical section problem

 The section of code implementing this request is called the entry


section
 The critical section (CS) might be followed by an exit section
 The remaining code is the remainder section
 The critical section problem is to design a protocol that the processes
can use shared resource so that their action will not depend on the
order in which their execution is interleaved (possibly on many
processors)
repeat
  entry section       (Request)
  critical section    (Use)
  exit section        (Release)
  remainder section
forever

The critical section is a kind of bottleneck: it can only be executed serially, which reduces the achievable speedup.

Critical Regions

Mutual exclusion using critical regions

Solution to Critical-Section Problem


Requirements:
1. Mutual Exclusion – If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
2. Progress – If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the process that will enter the critical section next cannot be postponed indefinitely (a process cannot be denied entry if no process is in the critical section).
3. Bounded Waiting – A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the N processes

Dekker's Algorithm (strict alternation via a turn variable)

 The shared variable turn is initialized (to 0 or 1) before executing any Pi
 Pi's critical section is executed iff turn = i
 Pi is busy waiting if Pj is in CS: mutual exclusion is satisfied
 Progress requirement is not satisfied, since it requires strict alternation of CSs

Process Pi:
repeat
  while (turn != i) {};
  CS
  turn := j;
  RS
forever

 Ex: P0 has a large RS and P1 has a small RS. If turn = 0, P0 enters its CS and then its long RS (turn = 1). P1 enters its CS and then its RS (turn = 0) and tries again to enter its CS: the request is refused! It has to wait until P0 leaves its RS.


Strict Alternation

Proposed solution to critical region problem


(a) Process 0. (b) Process 1.

Peterson's Solution

 Initialization: flag[0] := flag[1] := false; turn := 0 or 1
 Willingness to enter the CS is specified by flag[i] := true
 If both processes attempt to enter their CS simultaneously, only one turn value will last
 Exit section: specifies that Pi is unwilling to enter the CS

Process Pi:
repeat
  flag[i] := true;
  turn := j;
  do {} while (flag[j] and turn = j);
  Critical Section
  flag[i] := false;
  Remainder Section
forever


Solution to Critical-section Problem Using Locks

do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);

Drawbacks of these solutions

 Processes that are requesting to enter their critical section are busy
waiting (consuming processor time needlessly): basically wasting CPU time.
If the CS takes too long, it is better to suspend the process; if not, let it wait.
 If CSs are long, it would be more efficient to block processes that are
waiting. But suspending a process requires saving its context and loading it
back when the process runs again, and it also makes memory management harder.

Mutual Exclusion: Hardware Support

 Many systems provide hardware support for critical section code


 Uniprocessors – could disable interrupts
Currently running code would execute without preemption

 Generally too inefficient on multiprocessor systems
Operating systems using this not broadly scalable
 Modern machines provide special atomic hardware instructions
Atomic = non-interruptible

 Either test memory word and set value


 Or swap contents of two memory words

Hardware solutions: interrupt disabling

Process Pi:
repeat
    disable interrupts      -> Request
    critical section        -> Use
    enable interrupts       -> Release
    remainder section
forever

 On a uniprocessor: mutual exclusion is preserved, but efficiency of
execution is degraded: while in the CS, we cannot interleave execution with
other processes that are in their RS
 On a multiprocessor: mutual exclusion is not preserved, because another
process can just run on another CPU
 Generally not an acceptable solution: NOT SCALABLE

Hardware solutions: special machine instructions

 Normally, access to a memory location excludes other


access to that same location
 Extension: designers have proposed machines instructions
that perform 2 actions atomically (indivisible) on the same
memory location (ex: reading and writing)
 The execution of such an instruction is mutually exclusive
(even with multiple CPUs)

33

Test & Set Instruction

• It is an instruction that returns the old value of a memory


location and sets the memory location value to 1 as a single
atomic operation.
• If one process is currently executing a test-and-set, no other
process is allowed to begin another test-and-set until the first
process test-and-set is finished.
• Initially, lock value is set to 0.
• Lock value = 0 means the critical
section is currently vacant and no
process is present inside it.
• Lock value = 1 means the critical
section is currently occupied and a
process is present inside it.

The test & set instruction (cont.)

 Mutual exclusion is preserved: if Pi enters its CS, the other Pj are
busy waiting
 Problem: still using busy waiting
 When Pi exits its CS, the selection of the Pj who will enter the CS is
arbitrary: no bounded waiting. Hence starvation is possible
 Processors (ex: Pentium) often provide an atomic xchg(a,b)
instruction that swaps the content of a and b.
 But xchg(a,b) suffers from the same drawbacks as test-and-
set

35

Using xchg for mutual exclusion


 Shared variable b is initialized to 0
 Each Pi has a local variable k
 The only Pi that can enter the CS is the one who finds b = 0
 This Pi excludes all the other Pj by setting b to 1

Process Pi:
repeat
    k := 1
    repeat xchg(k,b) until k = 0;
    CS
    b := 0;
    RS
forever

Hardware Mutual Exclusion

Advantages
 Applicable to any number of processes, on either a single processor or
multiple processors sharing main memory
 It is simple and therefore easy to verify
 It can be used to support multiple critical sections, each with its own
lock variable
Disadvantages
 Busy waiting consumes processor time
 Starvation is possible when a process leaves a critical section and more
than one process is waiting: there is NO queue, so the selection is
arbitrary (a queue is very important to handle priority), and some process
could be denied access indefinitely
 Deadlock is possible

Semaphore

 Synchronization tool (provided by the OS) that does not require busy
waiting (only partially true: the simple definition below still spins)
 A semaphore S is an integer variable that, apart from initialization, can
only be accessed through 2 atomic and mutually exclusive operations:
 wait(S)
 signal(S)
 Less complicated than the earlier solutions

wait (S) {
    while (S <= 0)
        ;   // no-op
    S--;
}

signal (S) {
    S++;
}

 To avoid busy waiting: when a process has to wait, it will be put in a
blocked queue of processes waiting for the same event

Semaphore as General Synchronization Tool

 Counting semaphore – integer value can range over an unrestricted domain
 Binary semaphore – integer value can range only between 0 and 1; can be
simpler to implement
 Also known as mutex (mutual exclusion) locks
 Can implement a counting semaphore S as a binary semaphore
 Provides mutual exclusion:

Semaphore mutex;    // initialized to 1
do {
    wait (mutex);
    // Critical Section
    signal (mutex);
    // remainder section
} while (TRUE);

Semaphore Implementation

 Must guarantee that no two processes can execute wait () and signal
() on the same semaphore at the same time
 Thus, implementation becomes the critical section problem where the
wait and signal code are placed in the critical section.
 Could now have busy waiting in critical section implementation
But implementation code is short
Little busy waiting if critical section rarely occupied
 Note that applications may spend lots of time in critical sections and
therefore this is not a good solution.

Semaphore Implementation with no Busy waiting


 With each semaphore there is an associated waiting
queue. Each entry in a waiting queue has two data items:
 value (of type integer)
 pointer to next record in the list
 Two operations:
 block – place the process invoking the operation on the
appropriate waiting queue.
 wakeup – remove one of processes in the waiting
queue and place it in the ready queue.

Semaphore Implementation with no Busy waiting (Cont.)

typedef struct {
    int value;
    struct process *list;
} semaphore;

 Implementation of wait:

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

 Implementation of signal:

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}

Semaphores
 Hence, in fact, a semaphore is a record (structure):

type semaphore = record


count: integer;
queue: list of process
end;
var S: semaphore;

 When a process must wait for a semaphore S, it is blocked and put on


the semaphore’s queue
 The signal operation removes (acc. to a fair policy like FIFO) one
process from the queue and puts it in the list of ready processes

43

Semaphore’s operations (atomic)


wait(S):
S.count--;
if (S.count<0) {
block this process
place this process in S.queue
}
signal(S):
S.count++;
if (S.count<=0) {
remove a process P from S.queue
place this process P on ready list
}

S.count must be initialized to a nonnegative value


44 (depending on application)


Semaphores: observations
 When S.count >=0: the number of processes that can
execute wait(S) without being blocked = S.count
 When S.count<0: the number of processes waiting on S
is = |S.count|
 Atomicity and mutual exclusion: no 2 process can be in
wait(S) and signal(S) (on the same S) at the same time
(even with multiple CPUs)
 Hence the blocks of code defining wait(S) and signal(S)
are, in fact, critical sections

45

Deadlock and Starvation


 Deadlock – two or more processes are waiting indefinitely for an event
that can be caused by only one of the waiting processes
 Let S and Q be two semaphores initialized to 1

P0                  P1
wait (S);           wait (Q);
wait (Q);           wait (S);
  .                   .
  .                   .
signal (S);         signal (Q);
signal (Q);         signal (S);

Trace: P0 does wait(S), so S = 0; P1 does wait(Q), so Q = 0. Then P0's
wait(Q) blocks and P1's wait(S) blocks. Both are now blocked on each
other: deadlocked.

 Starvation – indefinite blocking. A process may never be removed from the
semaphore queue in which it is suspended (possible e.g. when the queue is a
priority queue rather than FIFO)
 Priority Inversion – scheduling problem when a lower-priority process
holds a lock needed by a higher-priority process

The producer/consumer problem
(Reading and writing the shared buffer is the main issue here.)

 A producer process produces information that is consumed by a consumer process
 Ex1: a print program produces characters that are consumed by a printer
 Ex2: an assembler produces object modules that are consumed by a loader
 We need a buffer to hold items that are produced and eventually consumed
 A common paradigm for cooperating processes

P/C: unbounded buffer

 We assume first an unbounded buffer consisting of a linear array of
elements (i.e. assuming infinite space)
 in points to the next item to be produced
 out points to the next item to be consumed
· Note: two producers cannot write to the buffer at the same time, so
mutual exclusion on the buffer is needed

P/C: unbounded buffer

 We need a semaphore S to perform mutual exclusion on the buffer: only


1 process at a time can access the buffer
 We need another semaphore N to synchronize producer and consumer
on the number N (= in - out) of items in the buffer
 an item can be consumed only after it has been created
 The producer is free to add an item into the buffer at any time: it
performs wait(S) before appending and signal(S) afterwards to prevent
consumer access
 It also performs signal(N) after each append to increment N
 The consumer must first do wait(N) to see if there is an item to consume
and use wait(S)/signal(S) to access the buffer

49

Solution of P/C: unbounded buffer

Initialization:
    S.count := 1;
    N.count := 0;
    in := out := 0;

append(v):                  take():
    b[in] := v;                 w := b[out];
    in++;                       out++;
                                return w;

Producer:                   Consumer:
repeat                      repeat
    produce v;                  wait(N);
    wait(S);                    wait(S);
    append(v);                  w := take();
    signal(S);                  signal(S);
    signal(N);                  consume(w);
forever                     forever

(wait(S) ... signal(S) delimit the critical sections)

P/C: unbounded buffer

 Remarks:
 Putting signal(N) inside the CS of the producer (instead
of outside) has no effect since the consumer must
always wait for both semaphores before proceeding
 The consumer must perform wait(N) before wait(S),
otherwise deadlock occurs if the consumer enters its CS while
the buffer is empty
 Using semaphores is a difficult art...

51

Bounded Buffer

P/C: finite circular buffer of size k

 Can consume only when the number N of (consumable) items is at least 1
(note: N is no longer simply in - out)
 Can produce only when the number E of empty spaces is at least 1

P/C: finite circular buffer of size k

 As before:
 we need a semaphore S to have mutual exclusion on buffer
access
 we need a semaphore N to synchronize producer and
consumer on the number of consumable items
 In addition:
 we need a semaphore E to synchronize producer and
consumer on the number of empty spaces

54

Solution of P/C: finite circular buffer of size k

Initialization:
    S.count := 1;  N.count := 0;  E.count := k;
    in := 0;  out := 0;

append(v):                      take():
    b[in] := v;                     w := b[out];
    in := (in+1) mod k;             out := (out+1) mod k;
                                    return w;

Producer:                   Consumer:
repeat                      repeat
    produce v;                  wait(N);
    wait(E);                    wait(S);
    wait(S);                    w := take();
    append(v);                  signal(S);
    signal(S);                  signal(E);
    signal(N);                  consume(w);
forever                     forever

(wait(S) ... signal(S) delimit the critical sections)
Monitors: a modification to semaphores with data encapsulation (cf. the
synchronized keyword in Java).

Message Passing
· Applications communicate, not machines: a message is sent to an address
and received at that same address
· After the Ack, the sender can send another message
· Both sides can proceed without extra synchronization when there is no
data sharing
· Sender -> Receiver

Addressing (for message passing)
· Direct addressing can be used when there is only 1 sender
· Use indirect addressing (a mailbox) when there are multiple senders
· A mailbox can also have multiple receivers; when using a common
many-to-many mailbox, we need to specify a message type
· A many-to-one mailbox is called a port. Analogy from class: many people
put their letters into a physical port, and the one receiver comes to open
and check it whenever they want
· Ports are used for client/server applications
· Only the receiver can make ports or mailboxes
· A mailbox is destroyed whenever its user wants, or when the owning user
terminates (is gone, basically)


Readers/Writers Problem (on a shared file)
· Many processes can read simultaneously, but only 1 can write at a time
· Can NOT read if someone is writing
· A shared ReadCount tracks how many readers are active; updating
ReadCount itself also needs a mutex, otherwise concurrent updates could
leave it at 1 instead of 3
· So while e.g. 3 readers are reading (ReadCount > 0), no writer can write
DEADLOCKS

Four necessary conditions; deadlock requires all of them simultaneously:
1. Mutual exclusion: only one process can use a resource at once (e.g.
only one process can print at a time on a printer)
2. Hold and wait: a process holding a resource can want another resource
which is already held by someone else
3. No preemption: once a resource is given to you, it can't be taken back
forcefully; it needs to be given up / released voluntarily
4. Circular wait: a cycle (P0, P1, P2, ..., Pn, P0) in which P0 wants a
resource held by P1, P1 wants one held by P2, and so on back around to P0

When all these conditions are satisfied simultaneously: deadlock.
(Brute-force recovery: reboot, lol.)

To remove a deadlock, we need to eliminate at least 1 of the conditions:
mutual exclusion, hold and wait, no preemption, circular wait.
· Mutual exclusion can't be removed for sure: some resources just aren't
shareable (only one page can be printed at once)
· Hold & wait: either request all resources together up front, or take the
next resource only when you have released the previous one. Drawback: a
process can't always name all its resources in advance, and holding
everything up front gives low resource utilization
· Circular wait: break the cycle (P0, P1, P2, ...) by imposing a total
ordering on resource types, with a function that only allows a process to
request resources in increasing order of number

Resource Allocation Graph:
· No cycle in the graph: safe state
· Cycle in the graph: there may or may not be a deadlock: unsafe state

Basic facts:
· Safe state -> NO DEADLOCK
· Unsafe state -> POSSIBLE DEADLOCK

Avoidance: make sure the system always remains in a safe state.

(classes missed)
PAGING
· Physical memory is divided into frames; the logical address space into
pages (e.g. 1 kB each)
· The page table maps each logical page number to a physical frame number,
turning a logical address into a physical address
· If we want the benefits of contiguous allocation while placing the
process non-contiguously, we need to make an index: this index table is
the page table
· Access time is now double that of the contiguous case, because first we
access the page table and then we access the actual address in physical
memory; we don't want to lose speed just because we need an index
· Protection: suppose the PTBR (page-table base register) is 0 and the
PTLR (page-table length register) is 20, and a process refers to page 25.
The OS knows this is invalid, because the table starts at 0 and its size
is only 20 pages; so 25 is an invalid address, and the OS traps it to
protect memory.

TLB and effective access time
· Suppose a page-table reference takes 100 ns, a physical-memory reference
takes 100 ns, and a TLB lookup takes 1 ns
· TLB hit: access time = 1 + 100 = 101 ns (almost as good as contiguous)
· TLB miss: access time = 1 + 100 + 100 = 201 ns
· In general, with hit ratio a, TLB time e and memory access time m:
    EAT = a(m + e) + (1 - a)(2m + e)
· e.g. with m = 120 and e = 3: EAT = 0.8(120 + 3) + 0.2(240 + 3)
· Follow-up exercise: given the times, what hit ratio is needed to reach a
target EAT?

Modified (Dirty) Bit
· The modified bit notifies the OS whether a page has been written to
since it was loaded
· What if there is NO free frame? Then first get/create a free frame:
swap a page out, and bring the required page in (page replacement)
· Which page to replace? A non-modified page needs no write-back, because
a copy of it already exists on disk; a modified page must be written back
first
· Sometimes referred to as the dirty bit: dirty = modified, non-dirty =
non-modified
Page Table Structure
1. Hierarchical paging
2. Hashed page tables
3. Inverted page tables

Hierarchical: break up the logical address space into multiple levels,
i.e. page the page table itself, so no single huge contiguous table is
needed.

(missed 2 diagrams here)

Inverted page tables: normally there is a page table for every process, so
the number of page tables = the number of processes, and each table's size
grows with the size of its process. An inverted table instead keeps one
entry per physical frame, and the number of frames remains constant.
SEGMENTATION
· A memory-management scheme that supports the user's view of memory (a
program as routines, stacks, etc.)
· In paging the page size was fixed; segment sizes are variable
· A program can be subdivided into segments; one segment must be
contiguous, but overall the segments may be allocated anywhere in free
space: a kind of hybrid of contiguous and non-contiguous allocation
· Each segment-table entry holds the segment's start (base) and its limit,
and the offset is checked against the limit
· We can use a TLB here too
VIRTUAL MEMORY

Demand Paging
· Bring a page into memory only when it is needed, i.e. when it is
referenced
· Less I/O needed, less memory needed, faster response, more users
· Prepaging instead guesses which pages will be referenced and brings them
in early, which reduces page faults when the guess (a bet on probability)
is right

Page Replacement: needed on a page fault when no frame is free
Page Fault Service Time
· EAT = (1 - p) x memory access time + p x page-fault service time, where
p is the page-fault rate
· Sketched in class with a TLB in front (TLB time 0.001 us, hit ratio
90%): the TLB term contributes 0.9 x 0.001 = 0.0009 us to the total
Worked example: what page-fault rate p keeps EAT at 200 ns?
· Page-fault service time: 8 ms if the victim page is not modified, 20 ms
if it is modified; 70% of replaced pages are modified
· Memory access time (MAT) = 100 ns

200 = (1 - p) x 100 + p x (0.3 x 8,000,000 + 0.7 x 20,000,000)
200 = 100 - 100p + p x (2,400,000 + 14,000,000)
100 = 16,399,900 p
p ~ 6.1 x 10^-6

i.e. only about 1 access in 164,000 may be allowed to fault.

FIFO Page Replacement
· Replace the page that has been resident in memory the longest; pages are
replaced in the order they were brought in
· Belady's anomaly: with FIFO, the number of page faults can increase when
the number of frames is increased. That is FIFO's drawback: it is not
consistent.
* FCFS & FIFO are different things (CPU scheduling vs. page replacement)


LRU (Least Recently Used)
· The mirror image of the Optimal algorithm: Optimal looks forward in the
reference string, LRU looks backward at past requests

Thrashing
· What if the workload is too large and the system spends its time paging
instead of executing?
· How to address it? Use the locality model (working sets)
I/O MANAGEMENT
· I/O management is a bit complicated, because every device is different:
some are hardware, some are software, some send/receive signals via a
network
· Basic techniques: programmed (direct) I/O, interrupt-driven I/O, and DMA
· Nowadays processors are interrupt driven
· Mostly we want something uniform that works for everything: the design
goal is generality (device independence), which we haven't fully achieved
up to the top of the stack yet
DISKS
· A disk is a large array of sectors; both surfaces of each platter are
used; more RPM means a better (faster) disk
· Access time has 3 components:
  1. Seek time: move the head to the position we want to read;
disk-scheduling algorithms try to minimize seek time (this is the main
thing we can control)
  2. Rotational latency: once the head is in place, the disk must rotate
until the required sector comes under the head (angular motion)
  3. Transfer time: can't be changed using scheduling algorithms
· Example trace of head movements: 1st = 16 cylinders, 2nd = 12, ...;
total seek = 16 + 12 + 12 + 12 + 12. Serving requests in arrival order is
a dumb way of using the disk
· Better algorithms: SSTF (shortest seek time first), SCAN and C-SCAN (the
elevator algorithms), and their LOOK and C-LOOK variants

RAID
· Redundancy improves reliability: you don't want to lose so much data
(but the extra disks are expensive)
· Idea: use many smaller disks, but treat them as 1 unit
· Striping lowers access time, since consecutive blocks can be read in
parallel
· The levels trade striping against redundancy: from pure striping,
through Hamming-code parity (with its extra bit requirement), to complete
mirroring (complete redundancy)
· The big issue is writing: a write must also go to the mirror and/or the
parity disk, so read/write performance is compromised
· Disks come with ECC these days anyway
· With a single dedicated parity disk, only 1 write operation can be done
at once, because every write also needs to update the parity disk: it is
still effectively a single write at a time
· Access granularity: bit-by-bit (sequential) striping in the lower
levels, block-by-block in levels 4 and 5
· In level 5 the parity is distributed, so different writes may need
different disk combinations; each write needs 2 disks, the data disk and
the parity disk of its stripe. E.g. a write to disk 3 may also require
disk 1 (its stripe's parity), and writes to disks 4 and 5 may also require
disk 1; those operations then can't run together, because they all need
disk 1, but writes whose parity lives on other disks can proceed in
parallel