
Operating System Notes



🤔 Concurrency

What is Critical Section Problem?
The critical section is the segment of code in which processes/threads access shared resources, such as common variables and files, and perform write operations on them. Since processes/threads execute concurrently, any of them can be interrupted mid-execution. The critical section problem is to design a protocol that guarantees mutual exclusion (at most one process/thread executes in its critical section at a time), together with progress and bounded waiting.

What is Race Condition?

A race condition occurs when two or more threads sharing the same resource/data try to change that data at the same time. Because the order in which the threads run is not known, the final result depends on the thread-scheduling order.

How can you handle a race condition?

Atomic Operations : make the conflicting operation execute as a single, indivisible step that cannot be interrupted

Using Locks

Using Semaphores

Mutex/Locks

This is the simplest synchronisation mechanism.

A lock variable can take two values, 0 or 1. Lock value 0 means the critical section is vacant, while lock value 1 means it is occupied. A process that wants to enter the critical section first checks the value of the lock variable: if it is 0, the process sets the lock to 1 and enters the critical section; otherwise it waits. Note that the check and the set must themselves be performed atomically (e.g. with a test-and-set instruction), otherwise two processes could both observe lock = 0 and enter together.

Entry Section →
while (lock != 0);   // busy-wait until the lock becomes free
lock = 1;            // acquire the lock

// Critical Section

Exit Section →
lock = 0;            // release the lock

from threading import Thread, Lock

lock = Lock()
count = 0

def task():
    global count
    lock.acquire()                 # only one thread increments at a time
    for i in range(1000000):
        count += 1
    lock.release()

if __name__ == '__main__':
    t1 = Thread(target=task)
    t2 = Thread(target=task)
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    print(count)

Output : 2000000

Disadvantages of Locks?

Deadlocks, if locks are acquired in an inconsistent order

Debugging lock-based code is difficult

Starvation of high-priority threads waiting for a lock held by a lower-priority thread

Semaphores



Semaphores are integer variables used to solve the critical section problem. They support two atomic operations, wait and signal, which are used for process synchronisation:

1. wait() : The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the calling process waits (in the implementation below, it busy-waits) until S becomes positive and then decrements it.

wait(S)
{
while (S<=0);

S--;
}

2. signal() : The signal operation increments the value of its argument S.

signal(S)
{
S++;
}

There are two types of Semaphores:

1. Counting : These semaphores can take any non-negative integer value (an unrestricted domain). The count is incremented when a unit of the resource is released (signal) and decremented when a unit is acquired (wait).

2. Binary : Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore is 1, and the signal operation sets it back to 1 when it is 0 (see the sketch below).
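
Below is a minimal Python sketch of the two kinds, using the standard threading.Semaphore; the names pool_slots, one_at_a_time and use_resource are illustrative choices, not part of the notes above.

from threading import Thread, Semaphore
import time

pool_slots = Semaphore(3)       # counting semaphore: up to 3 threads hold it at once
one_at_a_time = Semaphore(1)    # binary semaphore: behaves like a mutex

def use_resource(worker_id):
    pool_slots.acquire()             # wait(): blocks if all 3 slots are taken
    try:
        one_at_a_time.acquire()      # wait() on the binary semaphore
        print(f"worker {worker_id} in critical section")
        one_at_a_time.release()      # signal()
        time.sleep(0.1)              # simulate using the shared resource
    finally:
        pool_slots.release()         # signal(): free one slot

if __name__ == '__main__':
    threads = [Thread(target=use_resource, args=(i,)) for i in range(6)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()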

Advantages and Disadvantages of Semaphore

Advantages:

With a blocking implementation, there is no waste of processor time on busy waiting: a process that cannot enter the critical section is suspended instead of repeatedly checking whether the condition to enter is fulfilled.

They are machine independent

Disadvantages:

Semaphores are tricky to use: the wait and signal operations must be performed in the correct order, otherwise deadlocks can occur.

Semaphores may lead to priority inversion, where a low-priority process enters the critical section first and a high-priority process has to wait for it.

Synchronisation Problems

Producer Consumer Problem

Producer thread produces a product

Consumer thread consumes a product

The task of the Producer is to produce an item, put it into the memory buffer, and start producing again, while the task of the Consumer is to consume items from the memory buffer.

The producer may produce data only when the buffer is not full; if the buffer is full, the producer is not allowed to put any more data into it.

The consumer may consume data only when the buffer is not empty; if the buffer is empty, the consumer is not allowed to take any data from it.



The producer and the consumer must not access the memory buffer at the same time.

Solution for Producer

Here mutex is a binary semaphore initialised to 1, empty counts the free buffer slots (initialised to the buffer size N), and full counts the filled slots (initialised to 0).

do{
    // produce an item

    wait(empty);
    wait(mutex);

    // place the item in the buffer

    signal(mutex);
    signal(full);
}while(true);

Solution for Consumer

do{
    wait(full);
    wait(mutex);

    // consume an item from the buffer

    signal(mutex);
    signal(empty);
}while(true);
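
Below is a minimal Python sketch of the same producer-consumer scheme using threading.Semaphore; BUFFER_SIZE, the deque buffer and the item counts are illustrative choices, not part of the notes.

from collections import deque
from threading import Thread, Semaphore

BUFFER_SIZE = 5
buffer = deque()

mutex = Semaphore(1)            # mutual exclusion on the buffer
empty = Semaphore(BUFFER_SIZE)  # counts free slots
full = Semaphore(0)             # counts filled slots

def producer():
    for item in range(10):
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # place the item in the buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        item = buffer.popleft()  # consume an item from the buffer
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)
        print("consumed", item)

if __name__ == '__main__':
    p = Thread(target=producer)
    c = Thread(target=consumer)
    p.start(); c.start()
    p.join(); c.join()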

Reader Writer Problem

Some processes are Readers, which only read the data set and never update it; others are Writers, which can both read and write the data set.

The readers-writers problem is about managing synchronisation among multiple reader and writer processes so that no inconsistency is introduced in the data set.

Process 1    Process 2    Possible
Reading      Reading      Yes
Reading      Writing      No
Writing      Reading      No
Writing      Writing      No

The solution is given using semaphores. We need two semaphores, write and mutex, together with a shared counter readcount.

Solution for Reader

static int readcount = 0;

wait(mutex);
readcount++;                // on each entry of a reader, increment readcount
if (readcount == 1)
{
    wait(write);            // the first reader locks out all writers
}
signal(mutex);

// READ THE FILE

wait(mutex);
readcount--;                // on every exit of a reader, decrement readcount
if (readcount == 0)
{
    signal(write);          // if there are no readers left, a writer may access the data
}
signal(mutex);

Solution for writer



wait(write);
// WRITE INTO THE FILE
signal(write);
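
Below is a minimal Python sketch of this readers-preference solution; shared_data and the per-thread arguments are illustrative, not part of the notes.

from threading import Thread, Semaphore

mutex = Semaphore(1)    # protects readcount
write = Semaphore(1)    # gives writers exclusive access
readcount = 0
shared_data = 0

def reader(rid):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:          # the first reader locks out writers
        write.acquire()
    mutex.release()

    print(f"reader {rid} read {shared_data}")   # READ THE FILE

    mutex.acquire()
    readcount -= 1
    if readcount == 0:          # the last reader lets writers back in
        write.release()
    mutex.release()

def writer(value):
    global shared_data
    write.acquire()
    shared_data = value         # WRITE INTO THE FILE
    write.release()

if __name__ == '__main__':
    threads = [Thread(target=reader, args=(i,)) for i in range(3)]
    threads.append(Thread(target=writer, args=(42,)))
    for t in threads:
        t.start()
    for t in threads:
        t.join()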

Dining Philosophers Problem

The dining philosophers problem states that there are 5 philosophers sharing a circular table, and they alternately eat and think. There is a bowl of rice for each philosopher and 5 chopsticks. A philosopher needs both the left and the right chopstick to eat, so a hungry philosopher may eat only if both chopsticks are available; otherwise the philosopher puts down any chopstick already picked up and begins thinking again.

Each chopstick is represented by a binary semaphore.

wait() on chopstick[i] means the chopstick has been picked up; a philosopher requesting a chopstick that is already taken must wait.

signal() on chopstick[i] means the chopstick that was picked up is now free again.

Solution for Dining Philosophers Problem:

do {
    wait( chopstick[i] );            // philosopher i picks up the left chopstick
    wait( chopstick[ (i+1) % 5] );   // ... and then the right chopstick

    // EATING THE RICE

    signal( chopstick[i] );          // put down both chopsticks
    signal( chopstick[ (i+1) % 5] );

    // THINKING
} while(1);

A deadlock can arise with this solution: if all the philosophers pick up their left chopstick simultaneously, none of them can ever get the right one, so none of them can eat. It can be avoided using any of these conditions (a sketch of the second condition follows the list):

Allow at most four philosophers at the table at the same time

An even-numbered philosopher picks up the right chopstick first and then the left one, while an odd-numbered philosopher picks up the left chopstick first and then the right one

A philosopher is allowed to pick up chopsticks only if both are available at the same time
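
Below is a minimal Python sketch of the second fix (odd/even ordering of chopstick pick-up); N, the number of rounds and the print statements are illustrative choices.

from threading import Thread, Semaphore

N = 5
chopstick = [Semaphore(1) for _ in range(N)]

def philosopher(i, rounds=3):
    left = chopstick[i]
    right = chopstick[(i + 1) % N]
    for _ in range(rounds):
        # even-numbered philosophers pick up the right chopstick first,
        # odd-numbered ones the left chopstick first: this breaks the
        # circular wait, so no deadlock can occur
        first, second = (right, left) if i % 2 == 0 else (left, right)
        first.acquire()
        second.acquire()
        print(f"philosopher {i} is eating")
        second.release()
        first.release()
        print(f"philosopher {i} is thinking")

if __name__ == '__main__':
    threads = [Thread(target=philosopher, args=(i,)) for i in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()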

What is a deadlock?

A deadlock is a situation where each process in a set waits for a resource that is assigned to another process in the set. None of these processes can make progress, because the resource each one needs is held by another process that is itself waiting for a resource to be released.

How does a process/thread use a resource? Request → Use → Release

Necessary conditions for deadlock

1. Mutual Exclusion : A resource can be used by only one process at a time.

2. Hold and Wait : A process waits for some resources while holding other resources at the same time.

3. No preemption : A resource cannot be taken away from a process; it is released only when the process has finished with it or releases it voluntarily.

4. Circular Wait : The processes wait for resources in a cyclic manner, so that the last process is waiting for a resource held by the first process.

Methods to handle Deadlock

1. Prevent or avoid deadlock

2. Allow system to go in deadlock and then detect it and recover

3. Deadlock ignorance

Deadlock Prevention



Deadlock can be prevented if any one of the four necessary conditions for deadlock can be prevented.

1. Mutual Exclusion : If a resource could be shared by more than one process at the same time, no process would ever have to wait for it. In practice many resources are inherently non-sharable, so this condition is hard to remove.

2. Hold and Wait : A process requesting a resource should not hold any other resource. This can be implemented by requiring a process to declare and acquire all the resources it needs up front.

3. No preemption : Deadlock arises partly because a resource cannot be taken back once granted. If the system can take resources away from a process, deadlock can be prevented: when a process requests a resource that cannot be allocated, it must release all the resources it is currently holding.

4. Circular Wait : To violate circular wait, assign a priority number to each resource and require every process to request resources only in increasing order of priority number; then no cycle of waiting processes can form. Of the four conditions, this is the one that can be enforced practically (a sketch of resource ordering follows this list).
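
Below is a minimal Python sketch of preventing circular wait by resource ordering: every thread acquires locks in increasing order of a fixed priority number, so a cycle of waits can never form. The resource list and worker names are illustrative.

from threading import Thread, Lock

# resources with fixed priority numbers (their index in the list)
resources = [Lock() for _ in range(3)]

def acquire_in_order(needed):
    # always lock lower-numbered resources first
    ordered = sorted(needed)
    for idx in ordered:
        resources[idx].acquire()
    return ordered

def release(held):
    for idx in reversed(held):
        resources[idx].release()

def worker(name, needed):
    held = acquire_in_order(needed)
    print(name, "is using resources", held)
    release(held)

if __name__ == '__main__':
    # both threads want resources 0 and 2, declared in different orders;
    # ordering the acquisitions prevents a deadlock
    t1 = Thread(target=worker, args=("t1", [2, 0]))
    t2 = Thread(target=worker, args=("t2", [0, 2]))
    t1.start(); t2.start()
    t1.join(); t2.join()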

Deadlock Avoidance

Schedule processes and allocate resources such that the system always remains in a safe state, i.e. a state from which deadlock cannot arise.

In deadlock avoidance, a request for a resource is granted only if the resulting state of the system is still safe. The state of the system is continuously checked for safe and unsafe states.

Banker’s Algorithm: the OS records, for each process, its maximum claim and current allocation of every resource type, plus the currently available resources. A request is granted only if the resulting state is safe, i.e. there exists an order in which every process can still obtain its maximum need and finish. A sketch of the safety check follows.
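
Below is a minimal Python sketch of the Banker's-algorithm safety check; the allocation, maximum and available values are a textbook-style example chosen here for illustration.

def is_safe(available, allocation, maximum):
    n = len(allocation)                       # number of processes
    m = len(available)                        # number of resource types
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    safe_sequence = []
    while len(safe_sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # process i can run to completion with the currently
                # available resources, then returns everything it holds
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                safe_sequence.append(i)
                progressed = True
        if not progressed:
            return False, []                  # no process can finish: unsafe state
    return True, safe_sequence

if __name__ == '__main__':
    allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
    maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
    available  = [3, 3, 2]
    print(is_safe(available, allocation, maximum))   # (True, [1, 3, 4, 0, 2])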

Deadlock Detection and Recovery



The OS periodically checks the system for deadlock. If it finds one, it recovers the system using a recovery technique such as terminating a process or preempting resources.

For single-instance resources, detecting a cycle in the wait-for graph is enough to prove the presence of a deadlock. For resources with multiple instances, a cycle is necessary but not sufficient for deadlock, so a detection algorithm similar to the Banker's safety check is used instead. A sketch of cycle detection for the single-instance case follows.
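
Below is a minimal Python sketch of deadlock detection for single-instance resources: build a wait-for graph (process → process it is waiting on) and look for a cycle with depth-first search. The example graphs are illustrative.

def has_cycle(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:        # back edge -> cycle -> deadlock
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

if __name__ == '__main__':
    # P1 waits for P2, P2 waits for P3, P3 waits for P1 -> deadlock
    print(has_cycle({'P1': ['P2'], 'P2': ['P3'], 'P3': ['P1']}))   # True
    print(has_cycle({'P1': ['P2'], 'P2': ['P3'], 'P3': []}))       # False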



Memory Management

Address Space in OS

Logical Address ( Virtual Address ) : A logical address, also known as a virtual address, is an address generated by
the CPU during program execution. It is the address seen by the process and is relative to the program’s address
space. The process accesses memory using logical addresses, which are translated by the operating system into
physical addresses.

Physical Address : A physical address is the actual address in main memory where data is stored.

Mapping of a logical address to a physical address is done by a hardware device known as the Memory Management Unit (MMU).

While writing a program, the programmer works in terms of logical addresses.

Address Translation: in the simplest scheme, the MMU checks the logical address against a limit register and then adds a relocation (base) register to it to obtain the physical address. A sketch follows.
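
Below is a minimal Python sketch of relocation-register (base/limit) address translation; the register values are illustrative.

def translate(logical_address, base, limit):
    # the MMU first checks the logical address against the limit register
    if logical_address >= limit:
        raise MemoryError("trap: logical address beyond limit")
    # then adds the relocation (base) register to form the physical address
    return base + logical_address

if __name__ == '__main__':
    print(translate(100, base=14000, limit=3000))    # -> 14100
    # translate(3500, base=14000, limit=3000) would raise a trap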

Allocation methods in physical memory

1. Contiguous Allocation

2. Non-Contiguous Allocation

Contiguous Memory Allocation

Each process is contained in a single contiguous block of memory. Whenever a process requests to be brought into main memory, it is allotted one continuous segment of free memory based on its size.

There are two ways to allocate this:

Fixed Partitioning

Dynamic Partitioning

Fixed Partitioning (Static Partitioning)



The entire memory is partitioned into contiguous blocks of a fixed size, and each time a process enters the system it is given one of the available blocks.

For example, a first process of size 3MB is given a 5MB block; a second process of size 1MB is also given a 5MB block; a third process of size 4MB is also given a 5MB block. So it does not matter how big the process is: the same fixed-size memory block is assigned to each.

The number of blocks formed in the RAM determines the system's level of multiprogramming.

When the memory allotted to a process is somewhat larger than the memory it requested, the difference between the allotted and the requested memory is called internal fragmentation. For example, placing the 3MB process above in a 5MB block wastes 2MB inside the block.

Flexible Partitioning (Dynamic Partitioning)

In this technique, the partition size is not declared initially; it is decided at the time of process loading. The first partition is reserved for the operating system, and the remaining space is divided into parts. The size of each partition is made equal to the size of the process it holds, so the partition size varies according to the needs of the processes and internal fragmentation is avoided.

Advantages over fixed partitioning:

No internal fragmentation

Processes larger than any fixed partition can be brought into memory; with fixed partitioning it was not possible to load a process larger than the partition size

Better degree of multiprogramming

External Fragmentation: as processes are loaded and removed, the free memory is broken into small non-contiguous holes. A new process may be refused even though the total free memory is sufficient, because no single hole is large enough. It can be reduced by compaction.



To place a process of size n using a list of free holes (a sketch of all four strategies follows the list):

First Fit : Allocate the first hole that is big enough

Next Fit : Like first fit, but the search resumes from the location where the previous search ended instead of starting again from the beginning of the list

Best Fit : Allocate the smallest hole that is big enough, i.e. the difference between the hole size and n is minimum

Worst Fit : Allocate the largest hole (that is big enough), i.e. the difference between the hole size and n is maximum
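
Below is a minimal Python sketch of the four placement strategies; each function returns the index of the chosen hole (or None). The hole sizes are an illustrative example in KB.

def first_fit(holes, n):
    return next((i for i, h in enumerate(holes) if h >= n), None)

def next_fit(holes, n, start):
    k = len(holes)
    for off in range(k):                     # resume from where the last search ended
        i = (start + off) % k
        if holes[i] >= n:
            return i
    return None

def best_fit(holes, n):
    fits = [(h - n, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None    # smallest leftover space

def worst_fit(holes, n):
    fits = [(h - n, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None    # largest leftover space

if __name__ == '__main__':
    holes = [100, 500, 200, 300, 600]
    print(first_fit(holes, 212))          # index 1 (500 KB hole)
    print(best_fit(holes, 212))           # index 3 (300 KB hole)
    print(worst_fit(holes, 212))          # index 4 (600 KB hole)
    print(next_fit(holes, 212, start=2))  # index 3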
