OS Class Notes
Introduction
What is an OS?
• An OS is a program that manages the computer’s hardware
• Each viewer has a different view of an OS: users, programmers, and designers all see it differently
A designer’s abstract view of an OS
• The abstract view consists of three components
  • The kernel programs
    • interact with the computer’s hardware and implement the intended operation
  • The non-kernel programs
    • implement creation of programs and their use of system resources; these programs use kernel programs to control operation of the computer
  • The user interface
    • interprets the commands of a user and activates non-kernel programs to implement them
Jan-24
Components of a computer system
• Hardware
  • CPU, memory, I/O devices, file storage space
• Operating system
  • Controls and coordinates use of hardware among various applications and users
• Application programs – define the ways in which the system resources are used to solve the computing problems of the users
  • Word processors, compilers, web browsers, database systems, video games
• Users
  • People, machines (e.g., embedded), other computers
Operating system goals (Objectives):
Execute user programs and make solving user problems easier
Make the computer system convenient to use
Use the computer hardware in an efficient manner
Ability to evolve: An OS should be constructed in such a way as to
permit the effective development, testing, and introduction of new
system functions without interfering with service.
Each OS has different goals and design:
• Mainframe – maximize HW utilization/efficiency
• PC – maximum support to user applications
• Handheld – convenient interface for running applications, performance per amount of battery life
Goals of an OS
• Two broad goals: efficient use (performance, resource utilization) and user convenience (ease of use)
• These two goals sometimes conflict
  • Prompt service can be provided through exclusive use of a computer; however, efficient use requires sharing of a computer’s resources among many users
• An OS designer decides which of the two goals is more important under what conditions
• That is why we have so many operating systems!
User convenience
• User convenience has several facets
• Fulfillment of a necessity
• Use of programs and files
• Good service
• Speedy response
• Ease of Use
• User friendliness
• New programming model
• e.g., Concurrent programming
• Web-oriented features
• e.g., Web-enabled servers
• Evolution
• Addition of new features, use of new computers
Computer-system operation
• One or more CPUs and device controllers connect through a common bus providing access to shared memory
• Concurrent execution of CPUs and devices competing for memory cycles (through the memory controller)
• Each device controller is in charge of a particular device type (e.g., a keyboard controller)
• I/O organization
• How does an OS use these features to control operation of the computer?
• How does a program interact with an OS?
Modes of the CPU
• The CPU can operate in two modes
  • Privileged mode
    • Certain sensitive instructions can be executed only when the CPU is in this mode
    • For example, initiation of an I/O operation, setting protection information for a program
    • These instructions are called privileged instructions
  • User mode
    • Privileged instructions cannot be executed when the CPU is in this mode
Memory hierarchy
•The memory hierarchy is a cost-effective method
of obtaining a large and fast memory
• It is an arrangement of several memories with
different access speeds and sizes
• The CPU accesses only the fastest memory; i.e., the
cache
• If a required byte is not present in the memory being
accessed, it is loaded there from a slower memory
Memory hierarchy
• Cache memory is the fastest and disk the slowest in the hierarchy
• A required byte is accessed in the cache; if not present there, it is loaded from memory (if not present in memory, it is first loaded from disk)
Memory hierarchy
• Cache memory
• A cache block or cache line is loaded from memory when some
byte in it is referenced
• A ‘write-through’ arrangement is typically used to update
memory
• The cache hit ratio (h) indicates what percentage of
accessed bytes were already present in cache
• The cache hit ratio has high values because of locality
• Effective memory access time = h x access time of cache memory +
(1 – h) x (time to load a cache block + access time of cache memory)
Memory hierarchy
• Main memory
• Memory protection prevents access to memory by an
unauthorized program
• Memory bound registers indicate bounds of the memory
allocated to a program
• Virtual memory
• The part of memory hierarchy consisting of the main
memory and a disk is called virtual memory
• A program and its data are stored on the disk
• Required portions of the program and its data are loaded in
memory when accessed
Memory protection
• The lower bound register (LBR) and upper bound register (UBR) contain the addresses of the first and last bytes allocated to the program
• LBR and UBR are stored in the memory protection info (MPI) field of the PSW
• The CPU raises an interrupt if an address is outside the LBR–UBR range
Interrupts
• An interrupt signals the occurrence of an event to the
CPU
Classes of interrupts
• Three important classes of interrupts are
• Program interrupt
• Caused by conditions within the CPU during execution of
an instruction; e.g.,
• Occurrence of an arithmetic overflow, addressing exception, or
memory protection exception
• Execution of the software interrupt instruction
• I/O interrupt
• Indicates completion of an I/O operation
• Timer interrupt
• Indicates that a specified time interval has elapsed
System calls
• A computer has a special instruction called a ‘software interrupt’ instruction
  • Its sole purpose is to cause a program interrupt
• A program uses the software interrupt instruction to make a request to the system
  • The operand of the instruction indicates what kind of request is being made
  • The association between interrupt code and kind of request is OS-specific
• This method of making a request is known as a system call
• System calls are used to make diverse kinds of requests
  • Resource related
    • Resource request or release, checking resource availability
  • Program related
    • Execute or terminate a program, set or await timer interrupt
  • File related
    • Open or close a file, read or write a record
  • Information related
    • Get time and date, get resource information
  • Communication related
    • Send or receive message, setup or terminate connection
Why an OS evolves
• Hardware upgrades and new types of hardware
• New services
• Fixes
Stages include:
• Serial processing
• Simple batch systems
• Multiprogrammed batch systems
• Time-sharing systems
Serial processing and simple batch systems
• A resident monitor is kept in memory, with a user program area below it
  • The monitor reads in a job and gives it control
  • The job returns control to the monitor when done
• The operator forms a batch of jobs and inserts ‘start of batch’ and ‘end of batch’ cards
• The operator initiates processing of a batch
• The batch monitor, which is a primitive OS, performs the transition between individual jobs
Job Control Language (JCL)
• A special type of programming language used to provide instructions to the monitor
  • e.g., which compiler to use for a job
Modes of operation
• Memory protection: the memory region holding the monitor must not be altered by a user program
• Privileged instructions: certain instructions may be executed only in kernel mode
• Sacrifices:
• some main memory is now given over to the monitor
• some processor time is consumed by the monitor
• Despite overhead, the simple batch system improves
utilization of the computer
Multiprogramming Systems
In a batch processing system, the CPU remained
idle while a program performed I/O operations
•Aim of multiprogramming:
• Achieve efficient use of the computer system
through overlapped execution of several programs
• While a program is performing I/O operations, the OS
schedules another program
Uniprogramming
• Memory holds the OS (resident monitor) and one user program
• The processor is idle whenever that program waits for I/O
Multiprogramming
• Memory holds the OS and several user programs
• When one job needs to wait for I/O, the processor can switch to another job, which is likely not waiting for I/O
Multiprogramming
• Also known as multitasking
• The computer’s architecture must contain the following features to support multiprogramming
  • DMA
    • To provide parallel operation of the CPU and I/O
  • Interrupt hardware
    • To implement the interrupt action, which passes control to the OS
  • Memory protection
    • To prevent corruption or disruption of a program by other programs
  • Privileged mode of CPU
    • The CPU must be in privileged mode to execute sensitive instructions
    • It is in user mode, i.e., non-privileged mode, while executing user programs
Time-Sharing Systems
• Can be used to handle multiple interactive jobs
• Processor time is shared among multiple users
• Multiple users simultaneously access the system
through terminals, with the OS interleaving the
execution of each user program in a short burst or
quantum of computation
Batch Multiprogramming vs. Time Sharing

                              Batch multiprogramming             Time sharing
Principal objective           Maximize processor use             Minimize response time
Source of directives to OS    JCL commands provided with job     Commands entered at the terminal
A distributed system
Distributed systems
• Benefits of a distributed system
• Resource sharing
• An application can use resources located in other computers
• Reliability
• Provides availability of resources despite faults
• Computation speed-up
• Parts of a computation can be executed simultaneously in different computers
• Communication
• Users in different computer systems can communicate
• Incremental growth
• Cost of enhancing capabilities of a system is proportional to the desired
enhancement
Parallel Systems
Most systems use a single general-purpose processor
Most systems have special-purpose processors as well
Multiprocessors systems (two or more processors in close communication,
sharing bus and sometimes clock and memory) growing in use and importance
Also known as parallel systems, tightly-coupled systems
Advantages include
1. Increased throughput
2. Economy of scale
3. Increased reliability – graceful degradation or fault tolerance
Multiprocessors systems
Key role – the scheduler
Two types of Multiprocessing:
1. Asymmetric Multiprocessing -
assigns certain tasks only to certain
processors. In particular, only one
processor may be responsible for
handling all of the interrupts in the
system or perhaps even performing
all of the I/O in the system
2. Symmetric Multiprocessing -
treats all of the processing elements
in the system identically
Process
• Fundamental to the structure of operating systems
a program in execution
an instance of a running program
the entity that can be assigned to, and executed on, a processor
a unit of activity characterized by a single sequential thread of execution, a
current state, and an associated set of system resources
• CPU utilization should stay high while processes are running
• Throughput: work done per unit time
Components of a Process
• A process contains three components:
  • an executable program
  • the associated data needed by the program (variables, work space, buffers, etc.)
  • the execution context (or “process state”) of the program
• The execution context is essential:
  • it is the internal data by which the OS is able to supervise and control the process
  • includes the contents of the various processor registers
  • includes information such as the priority of the process and whether the process is waiting for the completion of a particular I/O event
• Convoy effect: short processes get stuck waiting behind a long process, depending on which scheduling scheme is in use
Process
• Process – a program in execution; process execution must progress in sequential fashion.
• A process includes:
  • program counter
  • stack
  • data section
• As a process executes, it changes state
  • new: The process is being created.
  • running: Instructions are being executed.
  • waiting: The process is waiting for some event to occur.
  • ready: The process is waiting to be assigned to a processor.
  • terminated: The process has finished execution.
State transitions:
• new → ready: admitted
• ready → running: scheduler dispatch
• running → ready: interrupt
• running → waiting: I/O or event wait
• waiting → ready: I/O or event completion
• running → terminated: exit
Process Control Block (PCB)
Information associated with each process:
• Process state
• Program counter
• CPU registers
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information
Process Management
[Figure: typical process implementation — the OS keeps a process list in main memory; a process-index register and the PC point at the currently running process, and base/limit registers delimit its program (code) and data. The process feature is supported by expanding the OS with code to maintain this state.]
Context Switch
• When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process.
• Context-switch time is overhead; the system does no useful work while switching.
• The time it takes depends on hardware support.
Threads
• Context switch between processes is an expensive operation; it leads to high overhead
• A thread is an alternative model for execution of a program that incurs smaller overhead while switching between threads of the same application
• A thread is the smallest unit of computation
• A thread is a program execution within the context of a process (i.e., it uses the resources of a process); many threads can be created within the same process
• Switching between threads of the same process is cheap because they share that context
Single-Threaded vs. Multithreaded Process Model
[Figure: in the single-threaded model a process has one process control block, one user stack, and one kernel stack; in the multithreaded model the process keeps one PCB and address space, plus a thread control block, user stack, and kernel stack per thread]
• Thread states (besides Running) are:
  • Ready
  • Blocked
• Thread operations: spawn, block, unblock, finish
Threads
• Where are threads useful?
  • If two processes share the same address space and the same resources, they have identical context
  • switching between these processes involves saving and reloading of their contexts; this overhead is redundant
  • In such situations, it is better to use threads
Benefits of Threads
• Takes less time to create a new thread than a process
• Takes less time to terminate a thread than a process
• Switching between two threads takes less time than switching between processes
• Threads enhance efficiency in communication between programs
Thread Use in a Single-User System
• Foreground and background work
• Asynchronous processing
• Speed of execution
• Modular program structure
Thread Synchronization
• All threads of a process share the same address space and other resources
• Any alteration of a resource by one thread affects the other threads in the same process
• User-level threads: the kernel does not know of them, so if one thread blocks, the whole process blocks
• Kernel-level threads: the kernel knows of each thread and can block and schedule them individually
February 24
CPU Scheduling
Scheduling
Basic Concepts
• Maximum CPU utilization
obtained with
multiprogramming
• CPU–I/O Burst Cycle – Process
execution consists of a cycle
of CPU execution and I/O
wait.
• CPU burst distribution
Long-Term Scheduler
• Determines which programs are admitted to the system for processing
• Controls the degree of multiprogramming
  • the more processes that are created, the smaller the percentage of time that each process can be executed
  • may limit the degree of multiprogramming to provide satisfactory service to the current set of processes
• Creates processes from the queue when it can, but must decide:
  • when the operating system can take on one or more additional processes
  • which jobs to accept and turn into processes
    • first come, first served; or by priority, expected execution time, I/O requirements
Short-Term Scheduling
• Known as the dispatcher
• Executes most frequently
• Makes the fine-grained decision of which process to execute next
• Invoked when an event occurs that may lead to the blocking of the
current process or that may provide an opportunity to preempt a
currently running process in favor of another
Examples:
• Clock interrupts
• I/O interrupts
• Operating system calls
• Signals (e.g., semaphores)
Schedulers
• Short-term scheduler is invoked very frequently (milliseconds) (must
be fast).
• Long-term scheduler is invoked very infrequently (seconds, minutes)
(may be slow).
• The long-term scheduler controls the degree of multiprogramming.
• Processes can be described as either:
• I/O-bound process – spends more time doing I/O than computations, many short
CPU bursts.
• CPU-bound process – spends more time doing computations; few very long CPU
bursts.
Dispatcher
• Dispatcher module gives control of the CPU to the process
selected by the short-term scheduler; this involves:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that
program
• Dispatch latency – time it takes for the dispatcher to stop one
process and start another running.
Scheduling criteria can be classified into:
• Performance-related: quantitative, easily measured
  • examples: response time, throughput
• Non-performance-related: qualitative, hard to measure
  • example: predictability
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per time
unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the
ready queue
• Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for
time-sharing environment)
Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
More on priority
• Features of priority-based scheduling
• Priorities may be static or dynamic
• A static priority is assigned to a request before it is admitted
• A dynamic priority is one that is varied during servicing of a
request
• How to handle processes having same priority?
• Round-robin scheduling is performed within a priority level
• Starvation of a low priority request may occur
Q: How to avoid starvation?
Kinds of Scheduling
• Scheduling may be performed in two ways
• Non-preemptive scheduling
• A process runs to completion when scheduled
• Preemptive scheduling
• Kernel may preempt a process and schedule another one
• A set of processes are serviced in an overlapped manner
Kinds of Scheduling
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready.
4. Terminates.
• Scheduling under 1 and 4 is non-preemptive.
• All other scheduling is preemptive.
FCFS example (burst times P1=24, P2=3, P3=3), jobs arriving in the order P2, P3, P1:
P2 | P3 | P1
0    3    6          30
Example of SJF
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3
• SJF scheduling chart
P4 | P1 | P3 | P2
0    3    9    16    24
0 3 9 16 24
Example of Shortest-remaining-time-first
• Now we add the concepts of varying arrival times and preemption to the analysis
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
• Preemptive SJF Gantt chart
P1 | P2 | P4 | P1 | P3
0    1    5    10   17   26
P1 P2 P3 P2 P4 P1
0 2 4 5 7 11 16
Priority Scheduling
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority (smallest integer
highest priority)
• Preemptive
• Nonpreemptive
• SJF is priority scheduling where priority is the inverse of predicted next CPU
burst time
Example of Priority Scheduling
Process   Arrival Time   Burst Time   Priority
P1        0              14           2
P2        2              10           1
P3        3              8            2
P4        5              5            0
P5        5              5            1
Round-robin scheduling
• Each process gets a small unit of CPU time (time quantum q), usually 10-
100 milliseconds. After this time has elapsed, the process is preempted
and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q,
then each process gets 1/n of the CPU time in chunks of at most q time
units at once. No process waits more than (n-1)q time units.
• Timer interrupts every quantum to schedule next process
• Performance
  • q large ⇒ RR degenerates to FIFO
  • q small ⇒ q must still be large with respect to context-switch time, otherwise overhead is too high
Example (q = 4; burst times P1=24, P2=3, P3=3):
P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1
0    4    7    10   14   18   22   26   30
[Figure: comparison of scheduling policies on the same five-process workload (processes A–E, time axis 0–20): First-Come-First-Served (FCFS), Round-Robin (RR) with q = 1, Round-Robin (RR) with q = 4, Shortest Process Next (SPN), Shortest Remaining Time (SRT), and Highest Response Ratio Next (HRRN)]
Multilevel scheduling
• Salient features
• Scheduler uses many lists of ready processes
• Each list has a pair (time slice, priority) associated with it
• The time slice is inversely proportional to the priority
• Simple priority-based scheduling between priority levels
• Round-robin scheduling within each priority level
Multilevel Queue
• Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
• Each queue has its own scheduling algorithm,
foreground – RR
background – FCFS
• Scheduling must be done between the queues.
• Fixed priority scheduling; (i.e., serve all from foreground then from
background). Possibility of starvation.
• Time slice – each queue gets a certain amount of CPU time which it can
schedule amongst its processes; i.e., 80% to foreground in RR
• 20% to background in FCFS
• Three queues:
• Q0 – time quantum 8 milliseconds
• Q1 – time quantum 16 milliseconds
• Q2 – FCFS
• Scheduling
• A new job enters queue Q0 which is served FCFS.
When it gains CPU, job receives 8 milliseconds. If it
does not finish in 8 milliseconds, job is moved to
queue Q1.
• At Q1 job is again served FCFS and receives 16
additional milliseconds. If it still does not complete,
it is preempted and moved to queue Q2.
Fair-Share Scheduling (proportional share)
• Scheduling decisions are based on process sets (groups) rather than individual processes
• Each user (or group) is assigned a share of the processor
• The objective is to monitor usage so as to give fewer resources to users who have had more than their fair share and more to those who have had less
• Example: ordinarily we would give 50% of CPU time to each of P1 and P2; under fair-share scheduling we might instead give 66% to P1 and 33% to P2, in proportion to their assigned shares
• If the load changes, the resulting allocation changes as well
Mathematical modeling
• A mathematical model is a set of expressions for performance
characteristics such as arrival times and service times of requests
• Queuing theory is employed
• To provide arrival and service patterns
• Exponential distributions are used because of their memoryless property
• Arrival times: F(t) =1 – e –αt, where α is the mean arrival rate
• Service times: S(t) = 1 – e –ωt, where ω is the mean execution rate
• Mean queue length is given by Little’s formula
  • L = λ × W
  • L – the average number of items in a queuing system (mean queue length)
  • λ – the average number of items arriving at the system per unit of time
  • W – the average time an item spends in the queuing system
Multiprocessor scheduling
• Loosely coupled or distributed multiprocessor, or cluster
  • consists of a collection of relatively autonomous systems; connectivity can become a bottleneck
• Synchronization granularity
  • Fine grain: parallelism inherent in a single instruction stream; synchronization interval < 20 instructions
• Group (gang) scheduling: a group of related processes or threads is given the CPU at the same time across processors
Dynamic Scheduling
• Dynamic scheduling: the number of threads in a process can be altered during execution
• Static assignment: if a process is permanently assigned to one processor from activation until its completion, then a dedicated short-term queue is maintained for each processor
  • advantages: less overhead in the scheduling function; allows group or gang scheduling
• A disadvantage of static assignment is that one processor can be idle, with an empty queue, while another processor has a backlog
  • to prevent this situation, a common queue can be used
  • another option is dynamic load balancing
Master/slave architecture – disadvantages:
• failure of the master brings down the whole system
• the master can become a performance bottleneck
Contents
• Principles of Concurrency
• Mutual Exclusion: Hardware Support
• Semaphores
• Monitors
• Message Passing
• Readers/Writers Problem
Multiple Processes
Concurrency
Key Terms
• Interleaving: on a single processor, the execution of several processes is interleaved in time
• Overlapping: on multiple processors, process executions can genuinely overlap in time
• Race condition: can happen whenever processes share a resource; mutual exclusion helps protect against it
• As long as each process writes only its own private data, there are no concurrency issues; but as soon as several processes write to a shared resource, many problems arise
Difficulties of Concurrency
• Sharing of global resources
• Results become non-deterministic: e.g., a racy Java program may produce a different result every time it runs
void echo()
{
chin = getchar();
chout = chin;
putchar(chout);
}
Process P1 Process P2
. .
chin = getchar(); .
. chin = getchar();
chout = chin; chout = chin;
putchar(chout); .
. putchar(chout);
. .
If we enforce a rule that only one process may enter the echo function at a time, then even with P1 and P2 on separate processors:
• P1 enters echo first
• P2 must wait; P1 completes execution
• P2 resumes and executes echo
This won’t give a non-deterministic result.
Race Condition
Race Conditions
Producer/Consumer Problem
Producer
Suppose that we wanted to provide a solution to the consumer-producer problem that
fills all the buffers. We can do so by having an integer count that keeps track of the
number of full buffers. Initially, count is set to 0. It is incremented by the producer
after it produces a new buffer and is decremented by the consumer after it consumes
a buffer.
while (true) {
/* produce an item and put in nextProduced */
while (count == BUFFER_SIZE)
; // do nothing
buffer [in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
count++;
}
Consumer
while (true) {
    while (count == 0)
        ; // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}
Race Condition
count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
Potential Problems
• Need for mutual exclusion: critical sections
• Data incoherency
• Deadlock: processes are “frozen” because of mutual dependency on each other
• Starvation: some of the processes are unable to make progress (i.e., to execute useful code)
Structure of a typical process:
repeat
    entry section      (request permission to enter)
    critical section
    exit section       (release)
    remainder section
forever
• The critical section is a kind of bottleneck: it can only be executed serially, which reduces the achievable speed-up

Critical-section requirements
1. Mutual Exclusion – only one process may execute in its critical section at a time.
2. Progress – the selection of the processes that will enter the critical section next cannot be postponed indefinitely.
3. Bounded Waiting – A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
• Assume that each process executes at a nonzero speed
• No assumption concerning relative speed of the N processes
Dekker’s Algorithm (first attempt: strict alternation with a shared turn variable)
• The shared variable turn is initialized (to 0 or 1) before executing any Pi
• Pi’s critical section is executed iff turn = i
• Pi busy-waits if Pj is in its CS: mutual exclusion is satisfied
• The progress requirement is not satisfied, since it requires strict alternation of CSs

Process Pi:
repeat
    while (turn != i) { };   /* busy wait */
    CS
    turn := j;
    RS
forever

Ex: P0 has a large RS and P1 has a small RS. If turn=0, P0 enters its CS and then its long RS (turn=1). P1 enters its CS and then its RS (turn=0) and tries again to enter its CS: request refused! It has to wait until P0 leaves its RS.
• The problem is strict alternation: one process’s waiting time depends on the other’s remainder section.
Peterson’s Solution
• Initialization: flag[0] := flag[1] := false; turn := 0 or 1
• Willingness to enter the CS is specified by flag[i] := true
• If both processes attempt to enter their CS simultaneously, only one turn value will last
• Exit section: flag[i] := false specifies that Pi is unwilling to enter the CS

Process Pi:
repeat
    flag[i] := true;
    turn := j;
    do {} while (flag[j] and turn = j);
    Critical Section
    flag[i] := false;
    Remainder Section
forever
Solution using locks:
do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);
Drawbacks of these solutions
• They rely on busy waiting
• If the wait is taking too long, it is better to suspend the process; but if the wait is short, letting it busy-wait is cheaper than a context switch
Disabling interrupts

Process Pi:
repeat
    disable interrupts     (request)
    critical section       (use)
    enable interrupts      (release)
    remainder section
forever

• On a uniprocessor: mutual exclusion is preserved, but efficiency of execution is degraded: while in the CS, we cannot interleave execution with other processes that are in their RS
• On a multiprocessor: mutual exclusion is not preserved
• Generally not an acceptable solution, because it is not scalable
Advantages of special machine instructions
• Applicable to any number of processes on either a single processor or multiple processors sharing main memory
• Simple and therefore easy to verify

Semaphore
• The claim that semaphores avoid busy waiting is only partially true (see the implementation notes below)
Synchronization tool (provided by the OS) that do not require busy waiting
A semaphore S is an integer variable that, apart from initialization, can only be accessed
through 2 atomic and mutually exclusive operations:
wait(S)
signal(S)
To avoid busy waiting: when a process has to wait, it will be put in a blocked
queue of processes waiting for the same event
do {
    wait (mutex);
        // Critical Section
    signal (mutex);
        // remainder section
} while (TRUE);
Semaphore Implementation
Must guarantee that no two processes can execute wait () and signal
() on the same semaphore at the same time
Thus, implementation becomes the critical section problem where the
wait and signal code are placed in the critical section.
Could now have busy waiting in critical section implementation
But implementation code is short
Little busy waiting if critical section rarely occupied
Note that applications may spend lots of time in critical sections and
therefore this is not a good solution.
Semaphore data structure:
typedef struct {
    int value;
    struct process *list;
} semaphore;

Implementation of wait:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
Implementation of signal:
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
Semaphores
• Hence, in fact, a semaphore is a record (structure): a counter plus a list (queue) of blocked processes
Semaphores: observations
When S.count >=0: the number of processes that can
execute wait(S) without being blocked = S.count
When S.count<0: the number of processes waiting on S
is = |S.count|
Atomicity and mutual exclusion: no 2 process can be in
wait(S) and signal(S) (on the same S) at the same time
(even with multiple CPUs)
Hence the blocks of code defining wait(S) and signal(S)
are, in fact, critical sections
Deadlock and starvation
• Deadlock: let S and Q be two semaphores initialized to 1, used by P0 and P1 in opposite order:
      P0              P1
    wait(S);        wait(Q);
    wait(Q);        wait(S);
      ...             ...
    signal(S);      signal(Q);
    signal(Q);      signal(S);
  P0 is blocked on Q and P1 on S: now they are deadlocked
• Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended (possible, e.g., when that queue is a priority queue)
The producer/consumer problem
• A producer process produces information that is consumed by a consumer process
• Ex1: a print program produces characters that are consumed by a printer
• Ex2: an assembler produces object modules that are consumed by a loader
• We need a buffer to hold items that are produced and eventually consumed
• A common paradigm for cooperating processes
Remarks:
Putting signal(N) inside the CS of the producer (instead of outside) has no effect, since the consumer must always wait for both semaphores before proceeding.
The consumer must perform wait(N) before wait(S); otherwise deadlock occurs if the consumer enters the CS while the buffer is empty.
Using semaphores is a difficult art...
Bounded Buffer
As before:
we need a semaphore S to have mutual exclusion on buffer
access
we need a semaphore N to synchronize producer and
consumer on the number of consumable items
In addition:
we need a semaphore E to synchronize producer and
consumer on the number of empty spaces
Producer:                  Consumer:
repeat                     repeat
  produce v;                 wait(N);
  wait(E);                   wait(S);
  wait(S);                   w := take();
  append(v);                 signal(S);
  signal(S);                 signal(E);
  signal(N);                 consume(w);
forever                    forever

append(v):                 take():
  b[in] := v;                w := b[out];
  in := (in+1) mod k;        out := (out+1) mod k;
                             return w;

append and take are the critical sections.
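The three-semaphore bounded buffer above can be run directly with POSIX unnamed semaphores. A minimal single-threaded sketch (k = 4; the helper names pc_init/produce/consume are our own):

```c
#include <semaphore.h>

#define K 4                     /* buffer capacity */

static int b[K];
static int in = 0, out = 0;
static sem_t S, N, E;           /* mutex, item count, empty-slot count */

void pc_init(void) {
    sem_init(&S, 0, 1);         /* buffer initially free */
    sem_init(&N, 0, 0);         /* no items yet */
    sem_init(&E, 0, K);         /* K empty slots */
}

static void append(int v) { b[in] = v; in = (in + 1) % K; }
static int  take(void)    { int w = b[out]; out = (out + 1) % K; return w; }

void produce(int v) {
    sem_wait(&E);               /* wait for an empty slot */
    sem_wait(&S);
    append(v);                  /* critical section */
    sem_post(&S);
    sem_post(&N);               /* one more item available */
}

int consume(void) {
    sem_wait(&N);               /* wait for an item, BEFORE taking S */
    sem_wait(&S);
    int w = take();             /* critical section */
    sem_post(&S);
    sem_post(&E);               /* one more empty slot */
    return w;
}
```

Note the wait order in consume mirrors the slide's warning: wait(N) comes before wait(S).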
Monitors: data encapsulation plus a `synchronized` keyword (as in Java) — a higher-level modification of semaphores.

Message Passing
Addressing: it is applications (processes) that communicate, not machines, so messages are sent to an address.
A message sent to an address is received at that same address.
After an ACK is received, the sender can send another message.
Both send and receive can proceed without shared data; when there is no data sharing, explicit synchronization is not required.
(Diagram: sender and receiver, one-to-one and one-to-many/broadcast arrangements.)
• Can use direct addressing when there is only one sender and one receiver.
• Use indirect addressing (a mailbox) when there are multiple senders.
A mailbox holds messages of a given type; it will be destroyed whenever the user wants.
Readers/Writers
• Many processes can read simultaneously, but only one can write at a time.
• A process can NOT read if someone is writing.
• readcount tracks the number of active readers. Updating readcount itself also needs a mutex: if three readers increment it at the same time without one, it can end up 1 instead of 3.
• So while 3 readers are reading, no writer can write (0 writers).
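The readcount logic above can be sketched with POSIX semaphores: readcount is protected by its own mutex, the first reader locks writers out, and the last reader lets them back in (the function names are our own, not from the notes).

```c
#include <semaphore.h>

static sem_t rw;         /* held by the writer, or by the reader group */
static sem_t rc_lock;    /* protects read_count itself */
static int read_count = 0;

void rw_init(void) {
    sem_init(&rw, 0, 1);
    sem_init(&rc_lock, 0, 1);
}

void start_read(void) {
    sem_wait(&rc_lock);
    if (++read_count == 1)   /* first reader blocks writers */
        sem_wait(&rw);
    sem_post(&rc_lock);
}

void end_read(void) {
    sem_wait(&rc_lock);
    if (--read_count == 0)   /* last reader re-admits writers */
        sem_post(&rw);
    sem_post(&rc_lock);
}

void start_write(void) { sem_wait(&rw); }
void end_write(void)   { sem_post(&rw); }
```

This is the reader-preference variant: a steady stream of readers can starve a writer, which is exactly the starvation risk noted earlier.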
DEADLOCKS
Four conditions must hold simultaneously for deadlock:
• Mutual exclusion — only one process can use a resource at a time (e.g., a printer: only 1 process can print at once).
• Hold and wait — a process holding a resource wants another resource which is already held.
• No preemption — a resource can only be given up/released voluntarily by the process holding it.
• Circular wait — a cycle (P0, P1, P2, ..., Pn, P0) in which each process waits for a resource held by the next.
Deadlock recovery, the crude way: reboot (LOL).
To remove deadlock, we need to eliminate at least 1 of:
• Mutual exclusion
• Hold and wait
• No preemption
• Circular wait
• Can't remove mutual exclusion for sure: some resources simply aren't shareable — e.g., only one page can be printed at a time.
• Attacking hold and wait: a process must request all its resources up front, or give up everything it holds before requesting more.
  DRAWBACK: low utilisation — resources (e.g., memory) sit allocated but unused.
• Attacking circular wait: without a rule, the wait-for relation can become a cycle (P0, P1, P2, ..., P0). So impose a total order on resources and make a function that only allows requests in increasing order.
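The "function that only allows ordered requests" can be sketched like this: each resource gets a fixed index, and a process may only request an index larger than anything it already holds, which makes a circular wait impossible. Names and sizes here are illustrative, and the actual locking is elided.

```c
#define MAX_PROCS 64

/* highest resource index each process currently holds (-1 = none) */
static int held_max[MAX_PROCS];

void order_init(void) {
    for (int p = 0; p < MAX_PROCS; p++)
        held_max[p] = -1;
}

/* Grant the request only if it respects the global resource order.
   Returns 1 on grant, 0 on refusal. */
int acquire_ordered(int pid, int res) {
    if (res <= held_max[pid])
        return 0;            /* out of order: would risk a circular wait */
    held_max[pid] = res;
    return 1;
}

void release_all(int pid) { held_max[pid] = -1; }
```

Because every process climbs the same ordering, no cycle P0 → P1 → ... → P0 of waits can form: along any wait chain the requested indices strictly increase.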
Basic Facts
Safe state → NO deadlock.
Unsafe state → POSSIBLE deadlock.
Avoidance → make sure the system always remains in a safe state.
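Avoidance means running a safety test before granting any request. A compact sketch of the Banker's-style safety check (fixed at 3 processes × 2 resource types purely for illustration): repeatedly find a process whose remaining need fits within the available vector, pretend it runs to completion, and reclaim its allocation; the state is safe iff everyone can finish.

```c
#define NP 3   /* processes */
#define NR 2   /* resource types */

/* Returns 1 if the state is safe, 0 if deadlock is possible. */
int is_safe(int avail[NR], int alloc[NP][NR], int need[NP][NR]) {
    int work[NR], done[NP] = {0};
    for (int r = 0; r < NR; r++) work[r] = avail[r];

    for (int pass = 0; pass < NP; pass++) {   /* at most NP rounds needed */
        for (int p = 0; p < NP; p++) {
            if (done[p]) continue;
            int fits = 1;
            for (int r = 0; r < NR; r++)
                if (need[p][r] > work[r]) fits = 0;
            if (fits) {                       /* p can finish: reclaim */
                for (int r = 0; r < NR; r++) work[r] += alloc[p][r];
                done[p] = 1;
            }
        }
    }
    for (int p = 0; p < NP; p++)
        if (!done[p]) return 0;
    return 1;
}
```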
CLASSES MISSED
PAGING
Physical memory is divided into frames and logical memory into pages of the same size (e.g., a 16 KB logical space with 1 KB pages).
A logical address = page number + offset; the page table maps each page number to a frame number, which combined with the offset gives the physical address.
(Figure: logical address → page table → physical memory.)
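The translation is pure bit manipulation. Assuming the 1 KB page size from the notes (10 offset bits), the high bits index the page table and the frame number is glued back onto the offset:

```c
#define PAGE_BITS 10                 /* 1 KB pages: low 10 bits = offset */
#define PAGE_SIZE (1u << PAGE_BITS)

#define FAULT 0xFFFFFFFFu            /* sentinel for an invalid page */

/* frame_of[] is the page table: page number -> frame number. */
unsigned translate(unsigned logical, const unsigned frame_of[], unsigned n_pages) {
    unsigned page   = logical >> PAGE_BITS;
    unsigned offset = logical & (PAGE_SIZE - 1);
    if (page >= n_pages)
        return FAULT;                /* beyond the table: trap */
    return (frame_of[page] << PAGE_BITS) | offset;
}
```

The length check plays the role of the PTLR comparison discussed below.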
Paging gives the benefits of contiguous allocation in a non-contiguous scheme: the page table acts as an index table, so we don't spend time searching for where each piece of the process is.
Suppose the PTBR (page-table base register) points to the page table in memory (say at address 0) and the PTLR (page-table length register) is 20; every logical reference is checked against the PTLR and translated via the table in memory.
Suppose a page-table reference takes 100 ns and a physical memory reference takes 100 ns.
TLB hit (TLB lookup ≈ 1 ns): T = 101 ns — almost as good as contiguous allocation.
TLB miss: T = 201 ns.
Effective access time, with TLB hit ratio α, memory access time m, and TLB lookup time ε:
EAT = α(m + ε) + (1 − α)(2m + ε)
Example: m = 120 ns, ε = 3 ns, α = 0.8: EAT = 0.8(120 + 3) + 0.2(240 + 3) = 147 ns.
Reverse problem: what hit ratio gives EAT = 180 ns if a hit costs 160 ns and a miss 330 ns? Solve 180 = 160α + 330(1 − α), giving α ≈ 0.88.
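The EAT formula is easy to check numerically; a small helper, using m = 120 ns and ε = 3 ns as assumed from the worked example in the notes:

```c
/* Effective access time: a TLB hit costs one memory access plus the
   TLB lookup; a miss adds a second memory access for the page table. */
double eat(double alpha, double m, double eps) {
    return alpha * (m + eps) + (1.0 - alpha) * (2.0 * m + eps);
}
```

Usage: eat(0.8, 120, 3) gives 0.8 × 123 + 0.2 × 243 = 147 ns, and eat(1.0, 100, 1) recovers the 101 ns all-hits case.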
Modified Bit
• Notifies the OS that a page has been written to since it was brought into memory.
What if there is NO free frame?
• Then first get/create a free frame: swap a page out (page replacement) and bring the required page into the freed frame.
Which page to replace?
• A non-modified page need not be written back, because the copy on backing store is already the same.
• The modified bit is sometimes referred to as the Dirty Bit (Dirty = modified, Non-dirty = not modified).
Page Table Structure
① Hierarchical Paging
② Hashed Page Tables
③ Inverted Page Tables
Motivation: a flat page table gets huge — e.g., roughly 1 million pages for a 32-bit space with 4 KB pages means a multi-MB page table per process.
SEGMENTATION
A segment is a logical unit as the user/programmer sees it (e.g., a routine).
In paging, page size was fixed; segment sizes are variable, matching the program's own structure.
One segment must be contiguous, but the segments of a program may be allocated anywhere in free space — kind of a hybrid between contiguous and non-contiguous allocation.
Translation: the offset is checked against the segment limit; if it passes, base + offset gives the physical address, otherwise trap.
VIRTUAL MEMORY
Demand Paging: bring a page into memory only when it is needed, i.e., when the program refers to it.
• Less I/O needed
• Less memory needed per process
• Faster response
• The OS can also guess which pages will be needed soon and bring them in early, which reduces page faults.
Page Replacement: choose which page to evict; a good algorithm picks, based on the probability of future use, a page unlikely to be referenced again soon.
Page fault service time example (fragmentary):
Page-fault service time = 8 ms, TLB time = 0.001 ms, TLB hit ratio = 90%, page-fault rate p = 0.05 (another variant uses p = 0.07).
EAT = (0.9 × 0.001) + 0.1 × … = 0.0009 + … µs
Example: page-fault service time is 8 ms if the replaced page is not modified and 20 ms if it is; 70% of replaced pages are modified. Memory access time (MAT) = 100 ns. What page-fault rate p keeps EAT at 200 ns?
200 = (1 − p) × 100 + p × [0.3 × 8,000,000 + 0.7 × 20,000,000]
200 = 100 − 100p + (2,400,000 + 14,000,000)p
100 ≈ 16,400,000p
p ≈ 0.0000061
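Rearranging EAT = (1 − p)·mat + p·service gives p = (EAT − mat)/(service − mat); a one-liner reproduces p ≈ 6.1 × 10⁻⁶ for the numbers above:

```c
/* Solve EAT = (1 - p) * mat + p * service for the page-fault rate p.
   All times must be in the same unit (ns here). */
double fault_rate(double eat, double mat, double service) {
    return (eat - mat) / (service - mat);
}
```

Usage: fault_rate(200, 100, 0.3 * 8e6 + 0.7 * 20e6) — roughly one fault per 164,000 accesses.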
(Trace: FIFO replacement — each reference either hits a page already present or replaces the page that has been resident longest.)
Drawback of FIFO: the number of page faults can increase when the number of frames is increased (Belady's anomaly). It is not consistent.
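Belady's anomaly is easy to reproduce. A small FIFO simulator run over the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 gives 9 faults with 3 frames but 10 with 4:

```c
/* Count page faults under FIFO replacement over a reference string.
   frames is assumed to be at most 16. */
int fifo_faults(const int *ref, int n, int frames) {
    int mem[16];
    int used = 0, oldest = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (mem[j] == ref[i]) hit = 1;
        if (hit) continue;
        faults++;
        if (used < frames)
            mem[used++] = ref[i];          /* free frame available */
        else {
            mem[oldest] = ref[i];          /* evict the oldest page */
            oldest = (oldest + 1) % frames;
        }
    }
    return faults;
}
```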
What if the workload changes and a process needs a different set of pages? How to address (solve) it? Use the locality model: give each process enough frames for its current locality of reference.
I/O MANAGEMENT
I/O management is a bit complicated, because every device is different: some are just hardware, some are software, some are both.
Three techniques: direct (programmed) I/O, interrupt-driven I/O, and DMA.
Nowadays processors are fast, so we mostly use interrupt-driven I/O, with DMA for bulk transfers.
Mostly we want something uniform that works for everything: a layered hierarchy of drivers hiding device differences.
Right now, generally, we haven't standardized it all the way up to the top.
Disk = a large array of sectors. Both surfaces of each platter are used; more RPM means a better disk (lower rotational latency).
Access time = seek time + rotational latency + transfer time.
We can't change transfer time using algorithms, so scheduling algos attack seek time.
Example (the dumb way: serve requests in arrival order): 1st transfer = 16, 2nd = 12, and so on; total S = 16 + 12 + 12 + 12 + 12.
(Trace residue: request cylinders such as 39, 184, 140, 55, 3.)
Elevator-style variants: AKA 'LOOK', AKA 'C-LOOK'.
S
do Backup Redudey Improves
[N
to los so much data (Relb
KeyExpensive
,
treat it ar I unit
So use
smalle disks but we
,
we
sinbu
Lowers Access Time
Striping
Clet the Poo)
#
·
Beraud adatea
⑧
W
Complete
Complete
Striking Redundancy
Hamming
Code Bit Requirement
Eatum +
don
Disk ... Lear/Discover
with FCC these days
come
Can 1 Wite Operation at Ance
only do
Because need to write on Party Bit
Understand Dibb
Block by Block
But in Level 5
To writeI 4
We have different combination
may
We need 2 Combination when we are
writing
S
always be Dish write
To in So Disk
One will ·g 5
will require Diskl & Disk
I
writing in Disk) So
=
write in S
To
& Dish Y
& 5 I will require Disks
Then we'll need Disk1 do these write
So I can
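The parity mechanics behind Levels 4 and 5 are a plain XOR: the parity block is the XOR of the data blocks, so any single lost block equals the XOR of the survivors and the parity. A sketch with 4 data blocks of 8 bytes (sizes are illustrative only):

```c
#define NDISKS 4   /* data disks */
#define BSIZE  8   /* bytes per block */

/* parity[i] = XOR of byte i across all data blocks. */
void make_parity(unsigned char data[NDISKS][BSIZE], unsigned char parity[BSIZE]) {
    for (int i = 0; i < BSIZE; i++) {
        parity[i] = 0;
        for (int d = 0; d < NDISKS; d++)
            parity[i] ^= data[d][i];
    }
}

/* Rebuild one lost block in place from the survivors plus parity. */
void rebuild(unsigned char data[NDISKS][BSIZE], const unsigned char parity[BSIZE], int lost) {
    for (int i = 0; i < BSIZE; i++) {
        unsigned char x = parity[i];
        for (int d = 0; d < NDISKS; d++)
            if (d != lost)
                x ^= data[d][i];
        data[lost][i] = x;
    }
}
```

The same XOR also explains the small-write cost: updating one data block means re-deriving and rewriting its parity block, which is why the parity disk is the bottleneck in Level 4.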