
CS2106 Cheatsheet

by Chun Khai
01. INTRODUCTION TO OS

Illustration of OS

Motivation for OS
1. Manage resources and provide coordination (synchronization, resource sharing)
2. Simplify programming
3. Enforce usage policies
4. Security and protection
5. User program portability
6. Efficiency

Structures for OS
1. Monolithic OS
- One big special program
- Most Unix variants, Windows
- [+] Good performance
- [-] Highly coupled components
- [-] Complicated internal structure
2. Microkernel OS
- Kernel is small, provides only essential facilities, uses Inter-Process Communication (IPC) to communicate with high-level services
- [+] More modular, robust
- [+] Better isolation and protection between kernel and high-level services
- [-] Lower performance

Types of OS
1. No OS (first computers)
- [+] Minimal overhead
- [-] Not portable
- [-] Inefficient use of resources
2. Harvard Architecture
- Separate storage and pathways for code and data
3. Von Neumann Architecture
- Common storage and pathway for code and data

Virtual Machines (VM)
- A software emulation of hardware to aid with virtualization of the underlying hardware
- Primitive OS runs on top of the VM
- Managed by VM Monitors

Types of VM Monitors
1. Type 1 Hypervisor
- Provides an individual VM to each guest OS
- Communicates directly with the hardware
2. Type 2 Hypervisor
- Runs in a host OS
- Guest OS runs in a VM

02. PROCESS ABSTRACTION
Process is a dynamic abstraction for an execution of a running program, identified by PID.

Components
- Memory: storage for instructions and data, managed by OS
- Cache: duplicate of memory for faster access, split into instruction and data cache
- Fetch Unit: loads instructions from memory; location indicated by PC
- Functional Units: instruction execution
- Registers: storage for faster access
  - General Purpose Registers (GPR): visible to compiler
  - Special Registers: PC, SP, FP, PSW

Instruction Execution
1. Instruction X fetched from the memory location indicated in PC
2. Instruction X dispatched to the correct Functional Unit (read operands, compute in ALU, write value)
3. Instruction X completed; PC++

Information of a Process
1. Memory Context
2. Hardware Context
3. OS Context

[Memory, Hardware Context]
Stack Memory
Stack Frame (SF): a memory management strategy which describes the information* needed for a function invocation, like:
- Return address of caller (return PC)
- Arguments/parameters for the function
- Storage for local variables
- Saved registers, SP, FP
* The sequence of the information above is platform dependent.

Stack Pointer (SP)
- Indicates the top of the stack region (first free location) and is stored in the specialized register SP
- Stack frame pushed on top when a function is invoked; stack frame popped from the top when the function returns
- Direction of growth (which end is the "top") is OS dependent

Frame Pointer (FP)
- Points to a fixed location (usually the top) in a stack frame, used to facilitate access to various stack frame items (e.g. the return PC is always a specific displacement away from FP). Its usage and existence are platform dependent.

Saved Registers (1 word = 4 bytes each)
- GPRs are very limited, so callers/callees save the state of GPRs on the stack to temporarily free them, and restore the state of GPRs when needed (register spilling)
- (Re)storing a saved register to (from) the stack can be done via "addi $SP, $SP, ±4" then "sw"/"lw" of the register at $SP

SF Overview — [R] for caller, [E] for callee
[-- Setup --]
1. [R] Pass parameters with registers and/or stack
2. [R] Save return PC on stack
3. Transfer control from [R] to [E]
4. [E] Save registers used by callee, old FP, old SP
5. [E] Allocate space for local variables of callee, using stack as temporary storage
6. [E] Adjust SP to point to the new stack top
[-- Teardown --]
1. [E] Places return result, RR, on stack
2. [E] Restores old SP
3. Transfer control from [E] to [R]
4. [R] Utilizes RR (if applicable)
5. [R] Continues execution in caller

Heap Memory
Dynamically allocated memory
- Memory space only allocated at runtime; size unknown at compile time (cannot place in data region)
- No definite deallocation timing (cannot place in stack)

[OS Context]
Process Identification (PID)
- Unique among (currently running) processes
- Some OS dependent issues:
  - Recycling of PIDs (for Linux, the PID wraps around after a limit is reached; otherwise no reuse)
  - Existence of reserved PIDs
  - Limit on the number of processes

Process States
Process Table
Each process has a Process Control Block (PCB) that stores the entire execution context for a process. These blocks are stored in a Process Table maintained by the kernel (one table representing all processes). Information stored includes:
- Pointer (to be restored after a context switch)
- Process state
- PID
- PC (stores the address of the next instruction for THAT process)
- Registers (GPR, accumulator, base and other CPU registers)
- Memory limits
- Open files list
- Miscellaneous accounting and status data

UNIX context
A UNIX process abstraction has information on PID, state, parent PID, cumulative CPU time etc.

init
- Root of all processes in UNIX, PID = 1
- Created in the kernel at boot up time

fork( )
- Creates a child process with a copy of the parent's executable image (code, address space etc)
- Data is not shared with the parent process
- Sequence of execution doesn't matter, as parent and child act independently
- Returns 0 to the child, and the PID of the created child to the parent
- Implementation issue: copying the memory region from the parent is very expensive. Instead, use a shared version, and only make two independent copies when a write is involved (copy on write).

*exec( )
- Replaces the currently executing process image with a new one
- Only replaces the code; PID and other information remain

execl(char *path, char *arg0, arg1, ..., argN, NULL)
- path: location of executable
- argX: command line arguments
- NULL: end-of-argument-list flag

exit(int status)
- Terminates the process and returns status to the parent process
- Most programs have an implicit exit( )
- Upon termination, some resources are not released: PID, status, CPU time (generally everything on the PCB)

wait(int *status)
- Parent waits for a child process to terminate before continuing
- Returns the PID of the terminated child and stores the exit status of the terminated child
- Cleans up the remainder of the child's system resources (those not removed on exit( ))
- Child-parent interaction:
  - Zombie: child exits before parent; remains a zombie until the parent calls wait( ) to obtain the child's exit status
  - Orphan: parent exits before child; child is re-parented to the init process

main(int argc, char *argv[])
- argc: number of command line arguments (inclusive of the program name itself)
- argv: char string array

getpid( )
- Gets process information (the caller's PID)

Exceptions/Interrupts
Exceptions
- Synchronous (occur due to an erroneous instruction, right after that instruction is run)
- All preceding instructions must have completed, and future instructions must not be executed
- Cause an exception handler to execute
- Possible exceptions at each stage of execution:
  - IF: memory fault
  - RF: illegal instruction
  - ALU: arithmetic exception
  - MEM: memory fault
  - WB: —
Interrupts
- Asynchronous (occur independent of program execution)
- Suspend the program and execute an interrupt handler

Handler Routine
1. Save register / CPU state
2. Perform handler routine
3. Restore register / CPU state
4. Return from interrupt (program execution resumes)

HW/SW Handling Steps* — [H] for hardware, [S] for software
1. During instruction execution, an interrupt/exception #k occurs; take note and complete the instruction
2. If no pending intr/exc, continue with PC++; else do interrupt/exception handling:
3. [H] Push PC and Status Register onto the stack
4. [H] Disable interrupts in the Status Register
5. [H] Read entry #k in the Interrupt Vector Table (IVT)
6. [H] Switch to kernel mode
7. [H] PC points to handler_k( )
8. [S] Execute handler_k( )
* The checking of interrupts only occurs after an instruction is completely executed; the OS cannot modify how instructions are executed (sequence, stopping halfway etc).

System Calls (Syscall)
Act as an API to the OS, providing ways of calling services/facilities in the kernel.

Unix Syscalls in C/C++
A C/C++ program calls the library version of a system call. A function wrapper has the same name and parameters as the syscall; a function adapter is more user-friendly.
- Function wrapper: write( ); function adapter: printf( ); direct invocation: syscall( )

Syscall Mechanism
1. Program invokes library call
2. Library call (in assembly) places the syscall number in a register
3. Library call executes a TRAP to switch to kernel mode
4. Syscall handler is determined from the syscall number by the dispatcher
5. Syscall handler is executed and completed
6. OS switches back to user mode
7. Library call returns to the user program using a normal function return

Syscall Dispatcher Methods
1. if-else statement over all possible syscall numbers
2. switch-case statement over all possible syscall numbers
3. Look up a table of all syscalls with starting address A, then query A + syscall number (like the IVT)

03. INTER-PROCESS COMMUNICATION
Processes need to share information, but have independent memory spaces.

Shared Memory
- Communication through reads/writes to shared variables
- P1 creates shared memory region M; P2 attaches M to its own memory space (the OS is only involved up to here). They can both now write to the region, and data in the space can be seen by both P1 and P2.
Advantages
- Efficient: only creating + attaching requires the OS
- Easy to use: M behaves like regular memory; can easily read/write data of any size/type
Disadvantages
- Synchronization: need to synchronize access to shared resources
- Single machine only: software can extend it to distributed systems, but very inefficiently
- Implementation: tough to implement

POSIX Shared Memory (UNIX)
shmget(key_t key, size_t size, int flag)
- Creates a shared memory region and returns its ID; generally done by the master program
- Returns the segment ID of the shared memory segment created if successful, else -1
- key: key value identifying the shared memory segment. IPC_PRIVATE makes the memory accessible only via the ID returned by shmget( ) and no other key
- size: required size of the segment. If the segment already exists, must be ≤ the current size
- flag:
  - IPC_CREAT: create the segment if one with key does not exist (ignored if IPC_PRIVATE is set)
  - IPC_EXCL: a segment with key MUST NOT already exist (ignored if IPC_CREAT is not specified)

shmat(int id, const void *addr, int flag)
- Attaches the shared memory segment to the address space of the calling process, so that the memory contents can be accessed
- Returns the address of the attached segment if successful, else -1
- id: segment ID of the memory segment to attach (≠ key in shmget)
- addr: address at which the memory segment is to be attached. If NULL, the system takes the first available address
- flag: interpretation of addr
  - SHM_RDONLY: segment is attached read-only
  - SHM_RND: addr is rounded down to a multiple of the page size

shmdt(const void *addr)
- Detaches the shared memory segment from a process
- The segment is not destroyed even if no process is attached
- Returns 0 if successful, else -1
- addr: address at which the memory segment is attached

shmctl(int id, int cmd, struct shmid_ds *buf)
- Performs one of several control operations on the shared memory segment
- Returns 0 if successful, else -1
- id: segment ID of the memory segment (≠ key in shmget)
- buf: address of a shmid_ds structure which contains various information about the memory segment
- cmd:
  - IPC_RMID: removes the shared memory segment and its ID from the system after all users have detached
  - IPC_SET: changes the ownership/access rules of the shared memory segment
  - IPC_STAT: returns information about the shared memory segment by storing it in *buf
Message Passing
- Explicit communication through the exchange of messages
- P1 prepares message M and sends it to P2; M is stored in the kernel memory space; P2 receives M. Both send and receive are syscalls and have to go through the OS.
Advantages
- Applicable beyond a single machine
- Portable: easily implemented on/across many platforms and processing environments
- Easy to sync: implicit synchronization, defined by the blocking semantics of send/receive
Disadvantages
- Inefficient: every send/receive requires the OS
- Hard to use: messages are limited in size and type

Naming Scheme
- Direct communication: the sender/receiver of M explicitly names the other party
  - Requires one link per pair of communicating processes
  - Processes have to know each other's identity
  - send(P2, msg), receive(P1, msg)
- Indirect communication: messages are sent to / received from a message storage (mailbox/port)
  - One mailbox can be shared among multiple processes
  - send(MB, msg), receive(MB, msg)

Synchronization Behaviours
* For CS2106, consider receive always blocking
- Blocking primitives (synchronous)
  - Synchronous here means the primitives cannot run in parallel while waiting
  - send(): sender is blocked until the message sent is received
  - receive(): receiver is blocked until a message arrives
- Non-blocking primitives (asynchronous)
  - Asynchronous here means the primitives can run in parallel while waiting
  - send(): sender resumes operation immediately
  - receive(): receiver either continues to wait for a message, or proceeds empty-handed and doesn't block

Synchronization Models
Blocking Send (Synchronous), "Rendezvous"
- A sender invoking send() is blocked until the receiver performs a matching receive()
- No intermediate buffering required
Non-blocking Send (Asynchronous)
- Sender continuously invokes send(); messages are buffered by the system up to a certain capacity in a message buffer¹
- A receiver running receive() completes immediately
- Drawbacks:
  - Too much freedom for the programmer
  - A bounded buffer is not truly asynchronous due to its finite size (once the limit is reached, the sender waits or an error is thrown)
¹ Message buffer:
- Under the control of the OS; no synchronization from the user required
- User needs to declare the capacity of the mailbox in advance
- User needs to declare the behaviour when the mailbox is full: either the sender waits or an error is thrown

Pipes in UNIX
UNIX has 3 default communication channels: stdin, stdout and stderr. The | in shell directs one process' output to another's input.
A pipe is a FIFO, circular, bounded byte buffer with implicit synchronization.
- The write pointer (WP) always moves ahead until it reaches an unread element; the read pointer (RP) may catch up with WP, but never overtake it
Blocking Semantics
- Writers wait when the buffer is full (cannot be configured)
- Readers wait when the buffer is empty
Variants
- Multiple readers/writers
- Half-duplex (unidirectional) vs full-duplex (bidirectional)

Functions
pipe(int fd[2])
- Takes in an integer array of size 2 and updates it such that fd[0] = reading end, fd[1] = writing end
- Returns 0 if successful, else !0

open(char *path, int flag)
- Opens the file specified by path; if it doesn't exist it can optionally be created (depending on flag)
- Returns a file descriptor (small, non-negative integer) which acts as an index into the process' table of open file descriptors, else -1 & errno set

close(int fd)
- Closes a file descriptor so it no longer refers to any file and can be reused
- Returns 0 if successful, else -1 & errno set

dup(int fd_to_dupe)
- Allocates a new file descriptor that refers to the same file as fd_to_dupe. The new descriptor is guaranteed to be the smallest unused file descriptor
- Returns the new file descriptor if successful, else -1 & errno set

dup2(int fd_to_dupe, int new_fd)
- Same as dup(), but the user provides new_fd, and the new file description is set to it

dup3(int fd_to_dupe, int new_fd, int flags)
- Same as dup2(), but the caller can force the close-on-exec flag

Signals in UNIX
An asynchronous notification regarding an event, sent to a process or thread. The recipient must handle the signal using a default set of handlers or user-supplied handlers.

signal(int signum, sighandler_t handler)
* Behaviour may vary across UNIX versions; use sigaction() instead
- Anticipates signal signum, replacing its default handler with handler
- Returns the previous value of the signal handler if successful, else returns SIG_ERR & errno set
- signum: anticipated signal
- handler: handler that takes in an argument of type int and returns void

04. PROCESS SCHEDULING
Concurrent processes are multiple processes that progress in execution at the same time. This can be virtual parallelism (pseudo-parallelism) or physical parallelism (multiple CPUs, multi-core).

Process Behaviour/Environment
- Phases a process goes through:
  - CPU-activity
  - IO-activity
- Processes can be of different natures:
  - Batch processing: no user interaction, no need to be responsive
  - Interactive/multiprogramming: active user-system interaction, high responsiveness, low response time
  - Real-time processing: midway between the above two (not covered in CS2106)

Scheduling Algorithms (SA)
Criteria
- Fairness: everyone should get a "fair" share of CPU time, depending on need (no starvation)
- Utilization: all parts of the CPU should be fully utilized

Types of scheduling
- Pre-emptive: forceful termination; processes are given a fixed time quota to run, then either block or give up early
- Non-pre-emptive: cooperative; processes stay running until they block or give up the CPU voluntarily

SA for Batch Processing (3)
Criteria
- Turnaround time: finish time − arrival time // waiting time + CPU processing time
- Throughput: rate of task completion, tasks finished per unit time
- Makespan: time taken to complete all jobs
- CPU utilization: % of CPU used

First Come First Served (FCFS)
- FIFO queue based on arrival time. A blocked task is removed from the queue, then added to the back of the queue.
- No starvation: the number of tasks in front of a random task X is always decreasing
- Convoy effect: one long task at the front of the queue greatly increases turnaround time for the short tasks behind it

Shortest Job First (SJF)
- Select the task with the smallest total CPU time
- Minimises average waiting time
- Possible starvation: long tasks may never get the chance to run
- Required to know CPU time: if unknown, the time needed must be predicted by exponential averaging:
  Predicted_{n+1} = α·Actual_n + (1 − α)·Predicted_n

Shortest Remaining Time (SRT)
- Similar to SJF, but uses the shortest remaining time instead of the total time
- Pre-emptive: new short tasks can pre-empt long tasks

SA for Interactive Systems
Criteria
- Response time: time between request and response
- Predictability: variation in response time

Useful Terminologies
- Periodic scheduler: a scheduler that periodically takes over the CPU using the timer interrupt
- Timer interrupt: goes off periodically and cannot be intercepted by other programs. The timer_interrupt_handler invokes the periodic scheduler.
- Interval of Timer Interrupt (ITI): length of time between interrupts; invokes the OS scheduler on interrupt (commonly 1-10ms)
- Time quantum: a constant/variable execution duration given to a process. Must be a multiple of ITI (commonly 5-10ms)

Round Robin (RR)
- Enqueue and dequeue from a queue; each task is given a fixed time quantum (or the task gives up early). Blocked tasks are dequeued and placed in another queue; each (unblocked) task is then enqueued again.
- Pre-emptive FCFS
- Guaranteed response time: time to get the CPU is upper bounded by (n−1)·q, for n tasks and quantum q
- Choice of time quantum is vital:
  - Big quantum: better CPU utilization, longer wait time
  - Small quantum: worse CPU utilization, shorter wait time

Priority Scheduling
- Assign each task a priority; select the task with the highest priority value. The pre-emptive version allows a high-priority task to pre-empt running tasks; in the non-pre-emptive version, tasks wait until the next round of scheduling (decision).
- Possible starvation: low-priority processes can starve. Can be solved by:
  - Decreasing the priority of the running process every TQ
  - Stopping the running process after its TQ and excluding it from the next round of scheduling
- Priority inversion: a low-priority process PL joins and locks a resource; a middle-priority process PM joins and pre-empts PL; a high-priority process PH then joins, but the resource it requires is locked by PL. PM continues to run ahead of PH despite its lower priority.

Multi-level Feedback Queue (MLFQ)
- Allows scheduling without perfect prior knowledge
- Follows these rules:
  1. If Pri(A) > Pri(B): A runs before B does
  2. If Pri(A) = Pri(B): A and B run in RR
  3. A new job gets the highest priority
  4. If a job fully utilizes its time quantum: reduce its priority (to prevent hogging)
  5. If a job gives up/blocks before finishing its time quantum: it retains its priority (such a process is unlikely to hog)
- Adaptive: adjusts based on actual behaviour, no prior knowledge required
- Effective: minimizes response time for IO-bound processes and turnaround time for CPU-bound processes
- Possible exploitation: a process can be designed to give up the CPU right before its time quantum expires, retaining its priority

Lottery Scheduling
- Randomized scheduling: gives out "lottery tickets" to processes and chooses one at random during scheduling
- Responsive: a newly joined process can participate in the next lottery immediately
- Good level of control:
  - A process can distribute tickets to its child processes
  - Important processes can be given more tickets
  - Different resource types (I/O, CPU) can have different proportions of tickets, depending on usage
- Simple implementation

05. SYNCHRONIZATION PRIMITIVES
Synchronization problems arise when concurrent processes execute in an interleaving fashion while sharing certain resources; execution output becomes non-deterministic.
- Race condition: situations where execution outcomes depend on the order in which shared resources are accessed/modified. Can be solved by designating a segment as the critical section.
- Critical Section (CS): a segment where, at any point in time, only one process can be executing in it. A correct CS implementation should have the following properties:
  - Mutual exclusion: if P is executing in the CS, no other process should be able to enter the CS
  - Progress: if no process is in the CS, processes should be able to enter (if needed)
  - Bounded wait: after a process P asks to enter the CS, there exists an upper bound on the number of times other processes can enter
  - Independence: a process not executing in the CS should not block other processes from entering

Symptoms of Incorrect Synchronization
- Incorrect output/behaviour: usually due to lack of mutual exclusion
- Deadlock: all processes blocked, no progress. *Extra conditions that help ensure an actual deadlock:
  - Mutual exclusion: each resource is either assigned to one process or none
  - Hold-and-wait: a process holding a resource can ask for more
  - No pre-emption: resources cannot be forcibly taken from others
  - Circular wait: there is a circular list of ≥ 2 processes, each waiting for a resource held by the next
- Livelock: in an attempt to avoid deadlock, processes keep changing state but make no further progress
- Starvation: some processes are starved forever

Implementation of Critical Sections
Application of EnterCS() and ExitCS()
Implementations at Different Levels
- High Level Language implementation (HLL)
- Assembly-level implementation (ALL)
- High Level Abstraction (HLA)

Peterson's Algorithm (HLL)
- Keeps track of whose turn it is to run, and whether each process intends to use the CS. A process busy-waits only if the other process P WANTS the CS and it's P's turn to run; else, it enters the CS.
- Previous attempts: (removed to save space)
- Busy waiting: resources wasted on busy-waiting
- Low level: error prone, hard to implement
- Hard to generalise: limited to 2 processes

Test and Set (ALL)
- Keep track of a lock variable. Access the lock using an atomic TestAndSet instruction; only enter the CS when the lock is UNLOCKED. Prior to exiting the CS, set the lock back to UNLOCKED.
- TestAndSet Register, MemoryLocation
  - Atomic operation: a single instruction with no interleaving
  - Loads the value from MemoryLocation into Register, changes the value at MemoryLocation to 1, and returns the old value (if initially LOCKED, no change; if initially UNLOCKED, returns 0)
- Busy waiting: resources wasted on busy-waiting
- Not guaranteed bounded wait: depends on process scheduling; one process may repeatedly succeed at EnterCS()
- Other variants: Compare and Exchange, Atomic Swap, Load Link/Store Conditional

Semaphores (HLA) // Mutex
A generalized synchronisation mechanism that specifies behaviour, not implementation. Provides a means of blocking a number of processes, converting them to sleeping processes.
- Think of a semaphore S as an atomically-accessed integer
- General (counting) semaphores have values S ≥ 0
- Binary semaphores (mutex) have values S = 0 or 1

wait(S) // P() // Down()
- Atomic operation, may block
- Takes in a semaphore S
- If S ≤ 0, blocks the current process; otherwise decrements S and lets the process continue

signal(S) // V() // Up()
- Atomic operation, never blocks
- Takes in a semaphore S
- Wakes up one sleeping process (if any), increments S

Properties of Semaphores
- S_initial ≥ 0
- S_current = S_initial + #signal(S) − #wait(S), where #signal(S) = number of signal() operations executed and #wait(S) = number of wait() operations completed (excluding those still blocked)
- For a binary semaphore with S_initial = 1: S_current + #CS = 1, hence #CS ≤ 1
- No deadlock: deadlock would mean all processes are stuck at wait(S), with no process running and the CS empty, so S_current + #CS = 0. But we have S_current + #CS = 1, a contradiction.
- No starvation: suppose P2 is in the CS and P1 is blocked at wait(S). When P2 exits, if P1 is the only blocked process it is woken up; else repeat, and eventually P1 will be woken up (assuming fair scheduling).
- Multiple processes should share one semaphore: otherwise, deadlock is possible due to circular wait(S) on one another's semaphores

Common Utilisations for Semaphores
Semaphores as a general synchronization tool
- e.g. statement B in P2 requires a resource from statement A in P1
- Use a k-ary semaphore (k is the producer-to-consumer ratio): do wait(S) before B, and signal(S) after A
One-at-a-time access
- Binary semaphore
Safe-distancing problem
- N-ary semaphore

Dining Philosophers
Five philosophers sit around a table with five chopsticks, one placed between each pair of neighbours. When any philosopher wants to eat, he has to pick up the left chopstick, then the right, then eat. However, there may be a deadlock.
- Implementation 1: keep track of each philosopher's state (neutral, hungry, eating), accessed atomically. Keep an individual semaphore per philosopher, triggered upon checking isHungry and that neither the left nor right neighbour isEating (can be abstracted into a safeToEat method). Upon finishing eating, signal to the left and right neighbours that they can attempt to eat (not eat outright: they still have to check their own left and right, and only IF they want to eat).
- Implementation 2: force one philosopher to be right-handed. There will still be waiting, but no forever-looping deadlock.

Readers Writers
Processes share a data structure D, where multiple readers can access and read information from D, while writers must have exclusive access to D to write information.
- Implementation: a writer waits(roomEmpty) before writing and signals(roomEmpty) upon completion. Readers keep track of numReader (atomically updated via a mutex); if numReader = 1 upon entering (the first reader in), do wait(roomEmpty) to keep writers out.
- Writer starvation: writers can starve if there is a constant stream of readers blocking writers at wait(roomEmpty). Can be solved by adding a "revolving door" that a writer blocks behind before waiting for the empty room; newcomer readers will be blocked behind the "revolving door". (This still doesn't really guarantee writer access, it just gives writers a chance.)
