Operating System Notes
Concurrency
What is the Critical Section Problem?
The critical section refers to the segment of code where processes/threads access shared resources, such as
common variables and files, and perform write operations on them. Since processes/threads execute concurrently,
any process can be interrupted mid-execution.
A race condition occurs when two or more threads sharing the same resource/data try to change that data at the same time. Since we do not know the order in which the threads will run, the result depends on the thread scheduling algorithm.
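As a quick illustration, here is a minimal Python sketch of such a race: two threads increment a shared counter without any lock, so their read-modify-write steps can interleave and increments can be lost (whether lost updates actually show up depends on the interpreter version and scheduling).

from threading import Thread

count = 0

def task():
    global count
    for _ in range(1000000):
        count += 1   # read-modify-write: not atomic, so increments can be lost

t1, t2 = Thread(target=task), Thread(target=task)
t1.start(); t2.start()
t1.join(); t2.join()
print(count)   # may be less than the expected 2000000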
The critical section problem can be solved using, among other mechanisms:
1. Locks (mutexes)
2. Semaphores
Mutex/Locks
The lock variable can take two values, 0 or 1. A lock value of 0 means that the critical section is vacant, while a lock value of 1 means that it is occupied. A process that wants to get into the critical section first checks the value of the lock variable: if it is 0, the process sets the lock to 1 and enters the critical section; otherwise it waits.
Entry Section →
    while (lock != 0);   // busy-wait until the lock becomes free
    lock = 1;            // acquire the lock
    // Critical Section
Exit Section →
    lock = 0;            // release the lock
from threading import Lock, Thread

lock = Lock()
count = 0

def task():
    global count
    lock.acquire()
    for i in range(1000000):
        count += 1
    lock.release()

t1, t2 = Thread(target=task), Thread(target=task)
t1.start(); t2.start()
t1.join(); t2.join()
print(count)   # Output : 2000000
Disadvantages of Locks
Deadlocks: if locks are acquired in an inconsistent order, threads can end up waiting on each other forever.
Debugging: bugs involving locks (races, deadlocks) are timing-dependent and hard to reproduce and debug.
Semaphores
1. wait() : The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the process keeps waiting (busy-waiting in this implementation) until S becomes positive.
wait(S)
{
    while (S <= 0);   // busy-wait until S becomes positive
    S--;
}
2. signal() : The signal operation increments the value of its argument S, indicating that a resource has been released.
signal(S)
{
    S++;
}
1. Counting : These are integer-valued semaphores with an unrestricted value domain. When resources are added, the semaphore count is incremented; when resources are removed, the count is decremented.
2. Binary : Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore is 1, and the signal operation succeeds only when the semaphore is 0 (a Python sketch of both kinds follows this list).
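A minimal sketch of counting and binary semaphores using Python's threading.Semaphore; the pool size, sleep time, and names such as pool_slots are illustrative.

from threading import Semaphore, Thread
import time

pool_slots = Semaphore(3)   # counting semaphore: at most 3 threads use the pool at once
binary = Semaphore(1)       # binary semaphore: value restricted to 0/1, behaves like a mutex
uses = 0                    # shared counter protected by the binary semaphore

def use_slot(i):
    pool_slots.acquire()    # wait(): blocks while all 3 slots are taken
    global uses
    binary.acquire()        # wait() on the binary semaphore
    uses += 1
    binary.release()        # signal() on the binary semaphore
    time.sleep(0.1)         # simulate work with the shared resource
    pool_slots.release()    # signal(): free the slot

threads = [Thread(target=use_slot, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(uses)                 # 5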
Advantages:
There is no resource wastage due to busy waiting: with a blocking implementation, processor time is not wasted repeatedly checking whether a process may enter the critical section.
Disadvantages:
Semaphores are complicated, so the wait and signal operations must be implemented in the correct order to prevent deadlocks.
Semaphores may lead to priority inversion, where a low-priority process gets into the critical section first and a high-priority process has to wait for it.
Synchronisation Problems
Producer-Consumer (Bounded Buffer) Problem
The task of the producer is to produce an item, put it into the memory buffer, and start producing again, while the task of the consumer is to consume items from the memory buffer.
The producer may produce data only when the buffer is not full; if the buffer is full, the producer is not allowed to store any more data in it.
The consumer may consume data only if the buffer is not empty; if the buffer is empty, the consumer is not allowed to take any data from it.
// Initially: mutex = 1, empty = n (buffer size), full = 0
Producer:
do {
    // produce an item
    wait(empty);     // wait until at least one slot is empty
    wait(mutex);     // enter the critical section
    // place the item in the buffer
    signal(mutex);   // leave the critical section
    signal(full);    // one more slot is full
} while(true);
Consumer:
do {
    wait(full);      // wait until at least one slot is full
    wait(mutex);     // enter the critical section
    // remove an item from the buffer
    signal(mutex);   // leave the critical section
    signal(empty);   // one more slot is empty
    // consume the item
} while(true);
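The same idea, sketched as runnable Python using threading semaphores; the buffer size and item counts below are illustrative.

from collections import deque
from threading import Semaphore, Lock, Thread

BUFFER_SIZE = 5
buffer = deque()
mutex = Lock()
empty = Semaphore(BUFFER_SIZE)   # counts empty slots
full = Semaphore(0)              # counts filled slots

def producer():
    for item in range(10):
        empty.acquire()          # wait(empty)
        with mutex:              # wait(mutex) ... signal(mutex)
            buffer.append(item)
        full.release()           # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()           # wait(full)
        with mutex:
            item = buffer.popleft()
        empty.release()          # signal(empty)
        print("consumed", item)

p, c = Thread(target=producer), Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()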
Readers-Writers Problem
Some processes are readers, which only read the data set and never update it; others are writers, which can both read and write the data set.
The readers-writers problem is about synchronising the reader and writer processes so that no inconsistency is introduced into the shared data. The combinations that must not happen concurrently are:
Operation by one process   Operation by another process   Allowed concurrently?
Reading                    Writing                        No
Writing                    Reading                        No
Writing                    Writing                        No
The solution is given using semaphores. We need two semaphores, write and mutex, plus an integer readcount; the reader's entry and exit sections are:
// Reader entry section
wait(mutex);
readcount++;                  // one more reader is reading
if (readcount == 1)
    wait(write);              // the first reader locks out writers
signal(mutex);
// ... reading is performed ...
// Reader exit section
wait(mutex);
readcount--;                  // on every exit of a reader, decrement readcount
if (readcount == 0)
    signal(write);            // if there are no readers left, a writer may access the data
signal(mutex);
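A minimal Python sketch of this readers-writers solution; the shared data value and the number of reader/writer threads are illustrative.

from threading import Semaphore, Thread

write = Semaphore(1)   # held by writers, and by the first/last reader
mutex = Semaphore(1)   # protects readcount
readcount = 0
data = 0

def reader(i):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        write.acquire()       # first reader blocks writers
    mutex.release()
    print(f"reader {i} sees {data}")
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        write.release()       # last reader lets writers in
    mutex.release()

def writer(i):
    global data
    write.acquire()
    data += 1                 # exclusive access to the data set
    write.release()

threads = [Thread(target=reader, args=(i,)) for i in range(3)] + \
          [Thread(target=writer, args=(i,)) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()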
Dining Philosophers Problem
The dining philosophers problem states that five philosophers share a circular table and alternately eat and think. There is a bowl of rice for each philosopher and five chopsticks. A philosopher needs both the left and the right chopstick to eat, so a hungry philosopher may eat only if both chopsticks are available; otherwise the philosopher puts down any chopstick already picked up and goes back to thinking.
wait() on chopstick[i] means the chopstick is being picked up; if it has already been picked up, the requesting philosopher has to wait.
signal() on chopstick[i] means that the chopstick which was picked up is now free.
do {
    wait( chopstick[i] );            // pick up the left chopstick
    wait( chopstick[(i+1) % 5] );    // pick up the right chopstick
    // EATING THE RICE
    signal( chopstick[i] );          // put down the left chopstick
    signal( chopstick[(i+1) % 5] );  // put down the right chopstick
    // THINKING
} while(1);
There may be a situation where deadlock arises: if all the philosophers pick up their left chopstick simultaneously, none of them can get the right one and none can eat. It can be avoided using either of these rules:
• An even-numbered philosopher picks up the right chopstick first and then the left, while an odd-numbered philosopher picks up the left chopstick first and then the right (a sketch of this rule follows).
• A philosopher is only allowed to pick up chopsticks if both are available at the same time.
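A runnable sketch of the first rule (odd/even ordering) using Python locks as chopsticks; the number of meals per philosopher is illustrative.

from threading import Lock, Thread

N = 5
chopsticks = [Lock() for _ in range(N)]

def philosopher(i, meals=3):
    left, right = chopsticks[i], chopsticks[(i + 1) % N]
    # even philosophers pick up the right chopstick first,
    # odd philosophers the left: this breaks the circular wait
    first, second = (right, left) if i % 2 == 0 else (left, right)
    for _ in range(meals):
        with first:
            with second:
                print(f"philosopher {i} is eating")
        # thinking happens outside the locks

threads = [Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()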
What is deadlock ?
A deadlock is a situation where each process waits for a resource that has been assigned to some other process. None of the processes can make progress, because the resource each one needs is held by another process that is itself waiting for yet another resource to be released.
A deadlock can arise only if the following four conditions hold simultaneously:
1. Mutual Exclusion : At least one resource is non-shareable, i.e. only one process can use it at a time.
2. Hold and Wait : A process holds at least one resource while waiting for additional resources at the same time.
3. No preemption : A resource cannot be forcibly taken away from a process; the process releases it only when it is finished with it.
4. Circular Wait : The processes wait for resources in a cyclic manner, so that the last process is waiting for a resource held by the first.
Deadlocks can be handled with strategies such as:
1. Deadlock prevention
2. Deadlock avoidance
3. Deadlock ignorance
Deadlock Prevention
1. Mutual Exclusion : If a resource could be used by more than one process at the same time, then no process would ever have to wait for any resource.
2. Hold and Wait : A process requesting a resource should not be holding any other resource. In practice this can be implemented by requiring a process to declare all the resources it needs up front.
3. No preemption : Deadlock arises because a resource cannot be taken away from a process once it has started using it. However, if we take a resource away from the process that is causing the deadlock, we can prevent the deadlock: if a process requests a resource that cannot be allocated, it must drop all the resources it is currently holding.
4. Circular Wait : To break circular wait, assign an ordering (priority) number to each resource and require that a process never requests a resource with a lower number than one it already holds. Then no cycle of waiting processes can form. Of the four conditions, circular wait is the only one that can be broken practically (see the lock-ordering sketch below).
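A small sketch of that resource-ordering idea in Python: every thread acquires locks in the same global order (by index), so a cycle of waiters cannot form. The resource count and the index sets are illustrative.

from threading import Lock, Thread

# each resource gets a fixed ordering number (its index here)
resources = [Lock() for _ in range(3)]

def use_resources(indices):
    ordered = sorted(indices)            # always acquire in increasing order
    for i in ordered:
        resources[i].acquire()
    try:
        pass                             # work with the resources
    finally:
        for i in reversed(ordered):      # release in reverse order
            resources[i].release()

t1 = Thread(target=use_resources, args=([0, 2],))
t2 = Thread(target=use_resources, args=([2, 1],))
t1.start(); t2.start()
t1.join(); t2.join()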
Deadlock Avoidance
Schedule processes and allocate resources in such a way that the system always remains in a safe state, i.e. a state from which deadlock cannot arise.
In deadlock avoidance, a resource request is granted only if the resulting state of the system is still safe. The system state is continuously checked and classified as safe or unsafe.
Banker’s Algorithm:
For single-instance resources, detecting a cycle in the resource-allocation graph is enough to prove the presence of a deadlock. For resources with multiple instances, a cycle is a necessary but not a sufficient condition for the occurrence of deadlock.
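A compact sketch of the Banker's algorithm safety check in Python; the Allocation/Max matrices and the Available vector are illustrative example data.

def is_safe(available, allocation, maximum):
    """Return (safe?, safe sequence) for the Banker's safety check."""
    n = len(allocation)                 # number of processes
    m = len(available)                  # number of resource types
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # pretend process i runs to completion and releases its resources
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return False, []            # no process can proceed: unsafe state
    return True, sequence

# illustrative data: 5 processes, 3 resource types
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
available  = [3,3,2]
print(is_safe(available, allocation, maximum))   # (True, [1, 3, 4, 0, 2])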
Memory Management
Address Space in OS
Logical Address ( Virtual Address ) : A logical address, also known as a virtual address, is an address generated by
the CPU during program execution. It is the address seen by the process and is relative to the program’s address
space. The process accesses memory using logical addresses, which are translated by the operating system into
physical addresses.
Physical Address : A physical address is the actual address in main memory where data is stored.
The mapping of logical addresses to physical addresses is done by a hardware device known as the Memory Management Unit (MMU).
While writing a program, a programmer works in terms of logical addresses.
Address Translation:
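As a tiny illustration, here is a sketch of the relocation-register (base + limit) scheme that an MMU can use with contiguous allocation; the base and limit values are illustrative.

# base + limit (relocation register) translation, as performed by the MMU
BASE = 14000      # where the process is loaded in physical memory (illustrative)
LIMIT = 3000      # size of the process's logical address space (illustrative)

def translate(logical_address):
    if logical_address >= LIMIT:
        raise MemoryError("trap: logical address outside the process's address space")
    return BASE + logical_address     # physical address = base + logical offset

print(translate(1234))   # physical address 15234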
Main memory can be allocated to processes in two ways:
1. Contiguous Allocation
2. Non-Contiguous Allocation
In contiguous allocation, each process is contained in a single contiguous block of memory: whenever a process requests main memory, we allot it one continuous segment from the free area, based on its size.
Contiguous allocation can be done with two partitioning schemes: fixed partitioning and dynamic partitioning.
Fixed Partitioning
In fixed partitioning, main memory is divided in advance into partitions of a fixed size. For example, the first process, which is 3 MB in size, is given a 5 MB block; the second process, which is 1 MB in size, is also given a 5 MB block; and the third process, which is 4 MB in size, is again given a 5 MB block. It does not matter how big the process is: the same fixed-size memory block is assigned to each.
The number of blocks created in RAM determines the system's degree of multiprogramming.
When the memory allotted to a process is larger than the memory it requested, the difference between the allotted and requested memory is called internal fragmentation; for example, the 3 MB process above wastes 2 MB of its 5 MB partition.
Dynamic Partitioning
In this technique, the partition size is not declared in advance; it is decided at process-loading time. The first partition is reserved for the operating system, and the remaining space is divided into parts. The size of each partition is equal to the size of the process it holds, so the partition size varies with the needs of each process and internal fragmentation is avoided.
Advantages of dynamic partitioning:
No internal fragmentation.
A process of larger size can be brought into memory; in fixed partitioning it was not possible to load a process larger than the partition size.
External Fragmentation: as processes are loaded and removed, the free memory becomes broken into small non-contiguous holes, so a new process may not fit even though the total free memory is sufficient. A free hole for an incoming request can be chosen with one of the following strategies:
First Fit : Allocate the process to the first hole that is big enough.
Next Fit : Like first fit, except that if the previous search stopped somewhere in the list of holes, the new search continues from that location instead of starting from the beginning.
Best Fit : Allocate the smallest hole that is big enough, i.e. the hole for which the difference between the free space and the request size n is minimum.
Worst Fit : Allocate the largest hole that is big enough, i.e. the hole for which the difference between the free space and the request size n is maximum.
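A small Python sketch of the first-fit, best-fit, and worst-fit strategies; the hole sizes and the 212 KB request are illustrative.

def first_fit(holes, n):
    """Return the index of the first hole that can hold a request of size n."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Return the index of the smallest hole that is still big enough."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    """Return the index of the largest hole that is big enough."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]   # illustrative free-hole sizes in KB
print(first_fit(holes, 212))   # 1 -> the 500 KB hole
print(best_fit(holes, 212))    # 3 -> the 300 KB hole
print(worst_fit(holes, 212))   # 4 -> the 600 KB hole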