Operating Systems Notes - Codeforces
If you are looking for the right track and a 10-minute read for Operating Systems from beginning to end, you're at the right place. Even I
am not sure where to start, but we will figure it out, and if you are reading this, I have successfully completed these notes or at least am
on the right track.
The Track
I found many resources to learn, let's just list them all: [Galvin Concise PPTs](https://www.os-book.com/OS9/slide-dir/index.html): These were
great, but I felt they are a little bit too much, so here are the chapters we will cover:
1. Processes
2. Threads
3. Process Synchronization
4. CPU Scheduling Algorithms
5. Deadlocks
6. Main memory
7. Virtual memory
8. Virtual Machines
I am skipping the introduction to OS for now as it is not that important; this is going to be a fast article that works like a last-night dose.
A simple introduction (may skip if you are already familiar)
An Operating System is like an interface between users and hardware. It is a sort of manager which manages multiple applications.
Whether you are headshotting someone in Valorant or making your own programming language, you are actually interacting with the
OS. But why do we need an OS? The main reasons are Resource Allocation, Protection, and of course, abstraction. We do not need an OS
for a lift, because it has just one functionality, but we do need an OS for a PC due to the complex tasks we expect a PC to perform.
These are the types of Operating Systems based on the functionality the OS provides:
1. Single Tasking: MS-DOS (only one task at a time). These are very inefficient. The main reason is that the CPU is very fast compared to
IO devices, and a process that needs an IO device may hold up the queue for other processes and keep the CPU idle.
2. Multiprogramming and Multitasking: Yeah, here is the solution to the above problem. Here we do not wait for the process to
execute completely; we just unassign the process when it is doing some IO and in the meanwhile assign some other
process. Multitasking is different from multiprogramming: in multitasking, we assign a small fixed time slice in which each process
executes before the turn changes. It is more responsive than multiprogramming because we do not wait for a process indefinitely. It's
just the same concept as Round Robin Scheduling.
3. Multithreading: It's a kind of advancement over Multiprogramming. Here we have multiple threads running in an interleaved fashion
inside a process. The foremost advantage is the increased responsiveness of the CPU. Consider MS Word, the tool you use to
make those assignments. Here one thread is formatting the content and another is taking input from you. These two threads are
involved in just typing some shit in MS Word. A thread is the smallest unit assignable to the CPU. Also, we saw that we switch
between processes in multitasking, and switching involves a cost. But this cost is less for thread switching compared to process
switching. We will see this in context switching.
4. Multiprocessing: While buying a laptop during these Diwali sales (if you are in India), you might have noticed processor counts, and
the price increases with more processors. We have Quad-Core at 30-45K INR and Octa-Core processors at 50K+. Our laptop
uses more than one processor and distributes processes among the different processors. This obviously boosts the laptop's speed.
5. Multi-User: Earlier versions of Windows didn't have support for multiple users, but Linux always had this support.
Processes
A process is a program in execution. It has its own memory, which is divided into four parts: Text, Data, Heap, and Stack.
Brief explanation about these
A process, as the name suggests, has different states (just like making a joint). Here is a diagram which says it all:
Diagram
Process Scheduling
This handles the order of execution of the processes. We have different queues for different states. When the state of a process
changes, it is simply detached from its current queue and added to its new state's queue.
This says it all
Please note that we are not actually studying scheduling algorithms yet; we are just assuming that we have some given order in which the
processes are executed.
Schedulers
There are three types of schedulers: long-term, short-term, and mid-term schedulers.
Brief explanation
1. Long-term Schedulers: These are responsible for moving a process from the new state to the ready queue. It has to maintain the
degree of multiprogramming, which is simply this: average rate of incoming processes = average rate of outgoing processes.
2. Short-term Schedulers / CPU Schedulers: From the ready queue to the running state. (I think it's clear.)
3. Mid-term Schedulers: These are special types of schedulers with a unique functionality: when a process is paused (maybe it needs
some input), the scheduler swaps it out and allocates its place to some other process, and when its input arrives, swaps it back
in and resumes the previous process.
An interrupt is like a signal to stop the current process and allow the higher-priority process (the process causing the interrupt) to
execute. But what happens to the previous process which the CPU was executing? It gets saved! When an interrupt occurs, the system
needs to save the current context of the process running on the CPU so that it can restore that context later. It's like a bookmark
we use while reading novels. For example, if you are reading a book on data structures and algorithms and suddenly you realize that
watching that Avengers 10-min scene would be far more interesting, you place a bookmark and return after 2 hours to resume the
same.
Also, please note that taking up a new process requires saving the context of the current process and restoring the context of the
incoming process. This is known as a context switch.
Inter-Process Communication
Inter-Process Communication or IPC for short is used for transferring information from one process to another.
Why do we need IPC?
It's quite obvious actually. We programmers multitask a lot, from Sublime to Codeforces to YouTube to CounterStrike to Chrome
Incognito. We do a lot. But these are independent processes (maybe connected, but at a high level they are independent). However,
processes that are interconnected and have to use some other process's information to execute need IPC.
We have studied PCBs; we know process information is stored in PCBs. And we need some sort of temporary medium to transfer
information (like the temporary variable we use to swap two numbers). We have two types of mechanisms based on this principle.
IPC different mechanisms
1) Shared Memory: Kind of a buffer between two processes. One important thing to note is that under normal circumstances,
processes are restricted to their own memory, i.e., you cannot access the memory of another process. That's why we have shared
memory, where we don't have such restrictions.
Producer Consumer Problem
One process is a producer and another one is a consumer. The information the producer produces should be concurrently
consumed by the consumer. This clearly indicates there can be two cases: underflow of information (when the producer is not
producing fast enough) and overflow (when the consumer is not consuming fast enough). Shared memory can handle this situation: we
have a buffer containing the information, and that buffer is filled only while there is room to fill it. (See the sketch after this section.)
Types of Buffers
This says it all
2) Message Passing: Kind of a queue for sharing information (see the below diagram). The main difference between the two is that
here we don't actually share the same address space, which is why it can be used between different computers over a network.
Think of it like this:
You and your friend are chatting about the last contest's submissions over a WhatsApp chat and are discussing tourist's and benq's
performance. There are two processes going on apart from wasting time: you are sending messages and receiving messages as
well. But you and your friend don't have a shared memory space here. Here comes message passing.
This says it all
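Back to the producer-consumer problem from the shared memory part: here is a minimal sketch. Real shared memory between processes would use something like POSIX shm_open/mmap; to keep it short and runnable, this sketch uses two threads sharing a bounded buffer, which is the same synchronization problem. The buffer size, item count, and names are my own choices for illustration.

#include <bits/stdc++.h>
using namespace std;

// Bounded buffer shared by the producer and the consumer.
// (Threads stand in for processes here; sizes/names are assumptions.)
const int BUF_SIZE = 4;
queue<int> buffer;
mutex m;
condition_variable not_full, not_empty;

void producer() {
    for (int item = 1; item <= 10; item++) {
        unique_lock<mutex> lk(m);
        // wait while the buffer is full (the overflow case)
        not_full.wait(lk, [] { return (int)buffer.size() < BUF_SIZE; });
        buffer.push(item);
        not_empty.notify_one();
    }
}

void consumer() {
    for (int i = 0; i < 10; i++) {
        unique_lock<mutex> lk(m);
        // wait while the buffer is empty (the underflow case)
        not_empty.wait(lk, [] { return !buffer.empty(); });
        cout << "consumed " << buffer.front() << "\n";
        buffer.pop();
        not_full.notify_one();
    }
}

int main() {
    thread p(producer), c(consumer);
    p.join(); c.join();
}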
Scheduling Algorithms
Terminology
Turn-Around Time: Time difference between completion time and arrival time. (How is it different from burst time? Burst time is just the
CPU time a process needs; a process with a small burst time can still have a large turn-around time if it sits waiting.)
Waiting Time: Turn-around time minus burst time, i.e., the time spent sitting in the ready queue.
Non-preemptive Algorithms: Once a process gets the CPU, we cannot pause its execution until it is completely executed.
First Come First Serve (FCFS)
Widely used in college admissions, this algorithm is just what it says: whichever process arrives first gets executed first, and then we
jump to the next process. (Non-preemptive algorithm)
Here is a simple C++ code
You seriously opened this? Please try it yourself first. It is very simple, just a sort will do the work. (One possible solution appears after
the example below.)
Process | Burst Time
P1 | 7
P2 | 3
P3 | 10
P4 | 5
Gantt Diagram:
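If you did give it a try, here is one way it could look. This is just a sketch under my own assumptions about the input format (arrival time and burst time per process); it sorts by arrival time and simulates the timeline.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; cin >> n;
    // each process: {arrival time, burst time, index}
    vector<array<long long,3>> p(n);
    for (int i = 0; i < n; i++) {
        cin >> p[i][0] >> p[i][1];
        p[i][2] = i;
    }
    sort(p.begin(), p.end()); // FCFS: order by arrival time
    long long clock = 0, totalTAT = 0, totalWait = 0;
    for (auto &[arrival, burst, id] : p) {
        clock = max(clock, arrival); // CPU may sit idle until the process arrives
        clock += burst;              // run to completion (non-preemptive)
        long long tat = clock - arrival;     // turn-around time
        totalTAT += tat;
        totalWait += tat - burst;            // waiting time
        cout << "P" << id + 1 << ": finishes at " << clock << "\n";
    }
    cout << "Avg TAT = " << (double)totalTAT / n
         << ", Avg wait = " << (double)totalWait / n << "\n";
}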
It's kind of obvious: if a process that takes a lot longer than the others arrives early, it will hold everything behind it and slow the whole
system down (the convoy effect).
Shortest Job First (SJF)
Just sort the jobs according to their burst times, and schedule the jobs in that order only. It is a non-preemptive
algorithm, but there does exist a preemptive version of SJF as well.
C++ Code for non-preemptive SJF
#include<bits/stdc++.h>
using namespace std;
#define int long long int
int32_t main()
{
    int n; cin >> n;
    // {burst time, process index}; all processes assumed to arrive at time 0
    vector<pair<int,int>> duration(n);
    for (int i = 0; i < n; i++)
    {
        cin >> duration[i].first;
        duration[i].second = i;
    }
    sort(duration.begin(), duration.end()); // shortest burst first
    int clock = 0, totalWait = 0;
    for (auto &[burst, id] : duration)
    {
        cout << "P" << id + 1 << " waits " << clock << "\n";
        totalWait += clock; // everything that ran before this job is its waiting time
        clock += burst;
    }
    cout << "Average waiting time: " << (double)totalWait / n << "\n";
    return 0;
}
Process | Burst Time
P1 | 7
P2 | 3
P3 | 10
P4 | 5
Gantt Diagram:
Here we have certain times when our CPU is idle. Being idle is the worst thing we can have! (In general as well; that's why I am
writing this.)
It has a preemptive version also (Shortest Remaining Time First). The trick is to fill those gaps we were creating: whenever a new
process arrives, we switch to whichever available process has the least remaining time.
C++ code for preemptive SJF
To be updated
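Meanwhile, here is a rough sketch of how preemptive SJF (SRTF) could be simulated; treat the input format (arrival time and burst time per process) and the tick-by-tick simulation as my own assumptions, not the final version.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; cin >> n;
    vector<long long> arrival(n), burst(n), remaining(n);
    for (int i = 0; i < n; i++) {
        cin >> arrival[i] >> burst[i];
        remaining[i] = burst[i];
    }
    long long clock = 0; int done = 0;
    while (done < n) {
        // pick the arrived, unfinished process with the least remaining time
        int best = -1;
        for (int i = 0; i < n; i++)
            if (arrival[i] <= clock && remaining[i] > 0)
                if (best == -1 || remaining[i] < remaining[best]) best = i;
        if (best == -1) { clock++; continue; } // CPU idle: nobody has arrived yet
        remaining[best]--;                     // run it for one time unit
        clock++;
        if (remaining[best] == 0) {
            done++;
            cout << "P" << best + 1 << " completes at " << clock
                 << " (TAT = " << clock - arrival[best] << ")\n";
        }
    }
}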
This Gantt diagram will help you understand the preemptive version of SJF:
Gantt diagram
Process | Burst Time
P1 | 7
P2 | 3
P3 | 10
P4 | 5
Gantt Diagram:
Round Robin (RR)
In this algorithm, we have a fixed time slice (quantum), and we execute the processes sequentially in the order given, switching between
processes after every quantum. This Gantt diagram will help you understand (a code sketch follows below):
Gantt diagram
Process | Burst Time
P1 | 7
P2 | 3
P3 | 10
P4 | 5
Gantt Diagram:
TBU
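Here is a minimal sketch of Round Robin, assuming all processes arrive at time 0 and the quantum is read from input; both assumptions are mine.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; long long quantum;
    cin >> n >> quantum;
    vector<long long> remaining(n);
    for (auto &r : remaining) cin >> r; // burst time of each process

    queue<int> ready;                   // ready queue; everyone arrives at time 0
    for (int i = 0; i < n; i++) ready.push(i);

    long long clock = 0;
    while (!ready.empty()) {
        int p = ready.front(); ready.pop();
        long long run = min(quantum, remaining[p]); // run for at most one quantum
        clock += run;
        remaining[p] -= run;
        if (remaining[p] > 0)
            ready.push(p);              // not finished: go to the back of the queue
        else
            cout << "P" << p + 1 << " completes at " << clock << "\n";
    }
}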
Process synchronization
Processes categorized on basis of synchronization
1. Independent Process: Execution of one process does not affect the execution of other processes.
2. Cooperative Process: The execution of one process affects the execution of other processes.
The process synchronization problem arises in the case of cooperative processes, because resources are shared among cooperative
processes.
Race Condition
When more than one process is executing the same code or accessing the same memory or any shared variable, there is a possibility
that the output or the value of the shared variable comes out wrong; all the processes are racing, each claiming its output is the correct
one. This condition is known as a race condition. When several processes access and manipulate the same data concurrently, the
outcome depends on the particular order in which the accesses take place.
Example
Suppose we have two operations, cnt++ and cnt--, from two different processes acting on a global variable cnt.
++ Operation:
int reg=cnt;
reg=reg+1;
cnt=reg;
-- Operation:
int reg2=cnt;
reg2=reg2-1;
cnt=reg2;
If one process finishes before the other starts, everything is fine and cnt ends up unchanged:
int reg=cnt;
reg=reg+1;
cnt=reg;
int reg2=cnt;
reg2=reg2-1;
cnt=reg2;
But as the resource is shared, the steps can interleave in any order, maybe this one as well (here the increment gets lost and cnt ends
up one less than it started):
int reg=cnt;
reg=reg+1;
int reg2=cnt;
cnt=reg;
reg2=reg2-1;
cnt=reg2;
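You can actually watch this happen with two threads hammering a plain (non-atomic) counter; this little demo is my own, not from any textbook:

#include <bits/stdc++.h>
using namespace std;

long long cnt = 0; // shared and deliberately unprotected

int main() {
    auto inc = [] { for (int i = 0; i < 1000000; i++) cnt++; };
    auto dec = [] { for (int i = 0; i < 1000000; i++) cnt--; };
    thread t1(inc), t2(dec);
    t1.join(); t2.join();
    // Should print 0, but almost never does: the increments and
    // decrements interleave and overwrite each other.
    cout << "cnt = " << cnt << "\n";
}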
Critical Section
A critical section is a code segment that can be accessed by only one process at a time. This code segment is common to many
processes, and if many processes run it simultaneously, we would have a hard time finding which process caused an error, if one happens.
Any solution to the critical section must satisfy these rules.
1. Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to execute in the critical section.
2. Progress: If no process is executing in the critical section and other processes are waiting outside it, then only
those processes that are not executing in their remainder section can participate in deciding which will enter the critical section
next, and this selection cannot be postponed indefinitely. Woah, that's a bummer. Okay, it simply means that a process which does
not want to enter the critical section should not be allowed to block the system for others.
3. Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter their critical sections after
a process has made a request to enter its critical section and before that request is granted.
Here is what critical section looks like
1) Disabling Interrupts
We know we can have a race condition if we have some interrupt under a preemptive scheduling algorithm (in a single-processor system; in
a multiprocessor system, we can have a race condition if two processes on different processors execute the critical section). A process
can simply announce that it is entering a critical section and the processor should not interrupt it. Well, it works fine, doesn't it? NO, of
course not. This can lead to a lot of problems. Firstly, it is not applicable to multiprocessor systems (disabling interrupts on one
processor does not stop the others). Secondly, we cannot give a process the freedom to block the interrupts: it can go on indefinitely
inside the critical section, disabling interrupts forever.
2) Locks
Here we have a lock. We acquire the lock, run the process's critical section, and then release the lock. How is it different from
disabling interrupts? Here only the process which wants to execute inside the critical section waits; all other processes can still
interrupt. (A minimal sketch follows.)
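Here is a minimal sketch of such a lock, built on an atomic test-and-set flag (a spinlock); the names are mine, and a production system would use a proper mutex instead:

#include <bits/stdc++.h>
using namespace std;

atomic_flag lock_flag = ATOMIC_FLAG_INIT; // the lock
long long cnt = 0;                        // shared data it protects

void acquire() {
    // spin until we are the one who flips the flag from clear to set
    while (lock_flag.test_and_set(memory_order_acquire))
        ; // busy-wait
}
void release() {
    lock_flag.clear(memory_order_release);
}

int main() {
    auto worker = [] {
        for (int i = 0; i < 1000000; i++) {
            acquire();
            cnt++;        // critical section
            release();
        }
    };
    thread t1(worker), t2(worker);
    t1.join(); t2.join();
    cout << "cnt = " << cnt << "\n"; // always 2000000 now
}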
Peterson's Solution
As we saw earlier, we need a solution for the critical section of code, as it can lead to anomalies. This solution should satisfy the rules
we mentioned before.
Simple pseudocode
// code for process i; j = 1 - i is the other process
int turn;
bool flag[2];
do{
    flag[i]=1;               // i wants to enter
    turn=j;                  // but offers the turn to j first
    while(flag[j]&&turn==j); // wait while j wants in and it is j's turn
    /////////////////////
    // Critical section//
    /////////////////////
    flag[i]=0;               // i is done
    ///////////////////////
    // Remainder section///
    ///////////////////////
}while(1);
How it works
Before actually seeing how it works, please have another look at the things we want it to satisfy. These 3 rules:
1. Mutual exclusion
2. Progress
3. Bounded Waiting
Peterson's solution simply assigns a shared turn variable and a flag to each process. Initially, the flag is 0 for both processes, and turn is
either 0 or 1. flag[i] being set means that process i wants to enter its critical section now, and it will enter only once the other process is
no longer inside (or the turn comes back to i). We have the while loop for this in the code. Please have a look at the code now, and if you
still find any difficulty, please post in the comments.
3) Semaphores
Semaphore is nothing but an integer variable, and it can be changed only through these two operations:
1. Wait: It is like a decrement operation.
wait(S){
    while(S<=0);
    S--;
}
2. Signal: It is like an increment operation.
signal(S){
    S++;
}
Binary Semaphores are generally called mutex locks as they provide mutual exclusion.
Counting Semaphores are used to control access to a given resource for multiple processes.
Use of semaphores in handling critical section of N processes
Whenever a process arrives, we wait for the semaphore to become 1. Initially the semaphore is already 1, so process P1 does not wait
and executes the critical section. But if a second process comes now, the wait function runs a while loop which kind of halts that
process. When the first process finishes, it signals, the second process continues, and so on.
What we do to achieve bounded waiting is simple. Suppose n processes are currently held up because one process is running: instead
of letting them spin, we block the waiting processes and keep them in a list (a FIFO queue) attached to the semaphore. This queue is
what makes the wait bounded: wakeups happen in the order processes joined, so a process can be overtaken only by those already
ahead of it and gets in after at most n-1 others. With plain busy-waiting there is no such guarantee, because whichever process happens
to re-check the semaphore first gets in.
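Here is a small sketch of the N-process critical section guarded by a binary semaphore, using C++20's std::counting_semaphore (threads stand in for processes, and N is my own choice):

#include <bits/stdc++.h>
#include <semaphore>
using namespace std;

counting_semaphore<1> sem(1); // binary semaphore, initially 1 (mutex-like)
long long shared_val = 0;

void worker(int id) {
    for (int i = 0; i < 100000; i++) {
        sem.acquire(); // wait(S): blocks if another worker is inside
        shared_val++;  // critical section
        sem.release(); // signal(S)
    }
}

int main() {
    const int N = 4;   // N competing "processes" (threads here)
    vector<thread> ts;
    for (int i = 0; i < N; i++) ts.emplace_back(worker, i);
    for (auto &t : ts) t.join();
    cout << shared_val << "\n"; // 400000: no updates lost
}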
4) Monitors
Deadlocks
Perfect example for deadlock
A deadlock can occur only if four conditions hold together, so to prevent deadlocks, we attack these conditions one by one:
1. Mutual Exclusion: Suppose the resource is a mouse. Now we know that a mouse can't be used by more than one process at a time, so
we can do nothing about it. The same is the case for printers, keyboards, etc. So, we can see that it is very difficult to overcome
this condition.
2. Hold and Wait: This can be avoided. The problem here is that a process holds some resources while waiting for others, so what we can
do is start a process only after we have assigned it every resource it needs, upfront. But this is not practically possible.
First of all, you can't really have a list of all resources required at the start of the process; it is only available at runtime.
Secondly, even if you have such a list, suppose the process runs for 10 hours, and a printer is needed only at the end of the
process. Here starvation of resources occurs: you are occupying the printer for 10 hours straight, but you don't need it for 9 of them!
3. No Preemption: Here what we can do is that if a process needs a resource, we just snatch that particular resource from
whatever process was using it and assign it to the current process. Obviously, this is also not practical: suppose you are
using a printer and half your page was printed, but then suddenly another process prints something else. There would be a lot of
inconsistency of data.
Example of inconsistency
4. Circular Wait: This can actually be avoided. You can give priorities to resources, and one process can only request those
resources whose priorities are greater than the ones it already holds. For example, we have four resources R1, R2, R3, R4 having
priorities 1, 2, 3, 4, and four processes P1, P2, P3, and P4, allocated like this:
R1->P1
R2->P2
R3->P3
R4->P4
Now P1 can only request R2, R3, and R4; P2 can request only R3 and R4; and so on. As you would have realized, there can be no
cycle here. (A small sketch of this idea follows.)
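The same trick shows up in multithreaded code as a lock-ordering rule; this sketch (my own example) always acquires the lower-numbered mutex first, so two threads can never wait on each other in a cycle:

#include <bits/stdc++.h>
using namespace std;

mutex r1, r2; // two "resources" with priorities 1 and 2

// Both threads respect the ordering rule: take r1 before r2.
void thread_a() {
    lock_guard<mutex> a(r1); // lower priority first
    lock_guard<mutex> b(r2);
    // ... use both resources ...
}
void thread_b() {
    lock_guard<mutex> a(r1); // same order, so no circular wait
    lock_guard<mutex> b(r2);
    // ... use both resources ...
}

int main() {
    thread t1(thread_a), t2(thread_b);
    t1.join(); t2.join();
    cout << "no deadlock\n";
}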
Banker's Algorithm
See the below table: here initially, we had total resources of type A = 10, B = 5, C = 7 (see them as A printers, B keyboards, C CPUs).
There can be two things here: detecting a deadlock, and finding a safe sequence of processes. We have to output a sequence of
processes in which no deadlock happens, or say it's impossible.
Now, it's very simple from here. We have the remaining need of every process (max need minus what is already allocated). We just
iterate and see which process we can run with the currently available resources. If one is executable, we execute it to completion and
get back the resources allocated to that particular process. If we reach a situation where we can't execute any process, we welcome
the deadlock.
I will be making a Codeforces problem on it as well. You can visualize this as a completely greedy algorithm (see the sketch below). But
as far as the practical application of this algorithm goes, as we saw previously, we usually cannot know the max need of processes: we
don't know which process wants what resource and for how long, and thus it's not that practical.
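Here is a sketch of that greedy safety check; the input format (n processes, m resource types, the allocation and max matrices, and the available vector) is my own choice:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m; cin >> n >> m; // n processes, m resource types
    vector<vector<long long>> alloc(n, vector<long long>(m)), mx = alloc, need = alloc;
    vector<long long> avail(m);
    for (auto &row : alloc) for (auto &x : row) cin >> x;
    for (auto &row : mx)    for (auto &x : row) cin >> x;
    for (auto &x : avail) cin >> x;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            need[i][j] = mx[i][j] - alloc[i][j]; // remaining need

    vector<bool> done(n, false);
    vector<int> order;
    for (bool progress = true; progress; ) {
        progress = false;
        for (int i = 0; i < n; i++) {
            if (done[i]) continue;
            bool can = true; // can we satisfy process i's remaining need now?
            for (int j = 0; j < m; j++)
                if (need[i][j] > avail[j]) { can = false; break; }
            if (can) {
                // run i to completion and reclaim everything it held
                for (int j = 0; j < m; j++) avail[j] += alloc[i][j];
                done[i] = true;
                order.push_back(i);
                progress = true;
            }
        }
    }
    if ((int)order.size() < n) cout << "unsafe: deadlock possible\n";
    else {
        cout << "safe sequence:";
        for (int i : order) cout << " P" << i + 1;
        cout << "\n";
    }
}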
Threads
We saw in types of OS(in the introduction at the start of the blog) that we have multithreading as well. We have multiple threads running
in an interleaved fashion inside a process. The foremost advantage is the increased responsiveness of the CPU. A thread is the
smallest unit assignable to the CPU.
Interesting thing about threads
Consider a music player: one thread is playing the music and another is shuffling the songs for you. This is what a process looks like in
memory:
Before going to the comment section to complain about a wrong image: these stacks are actually different. A process is nothing
but a program in execution, so why do we have multiple stacks in one process? Each thread has its own stack for its functioning, and
that's why threads are lightweight and easy to create and destroy. They share the same heap, data, and code. (See the tiny example
below.)
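A tiny illustration, assuming C++ std::thread: both threads see the same global (data segment), while each gets its own stack for locals.

#include <bits/stdc++.h>
using namespace std;

int shared_counter = 0; // data segment: one copy, shared by all threads
mutex m;

void play(int id) {
    int local = 0;          // lives on THIS thread's own stack
    for (int i = 0; i < 5; i++) {
        local++;
        lock_guard<mutex> lk(m);
        shared_counter++;   // shared state needs synchronization
    }
    lock_guard<mutex> lk(m);
    cout << "thread " << id << ": local=" << local
         << " shared=" << shared_counter << "\n";
}

int main() {
    thread t1(play, 1), t2(play, 2);
    t1.join(); t2.join();
}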
Advantages and Disadvantages of threads
Advantages
1. Faster to create/ terminate
2. Multiple threads share the same address space, making it easier for them to communicate. And due to this, context switching is
also cheaper.
3. Lightweight
Disadvantages
1. Difficult to test and debug
2. Can lead to Deadlocks and Race Conditions.
Types of Threads
1. User-Level Threads: A process creates multiple threads, and the kernel is not at all aware of these threads being created. Here we
don't need to call the OS and there are no system calls involved, making context switching much faster. But we have both
advantages and certain disadvantages. We get fast context switching, and very fast creation and termination, because
of no involvement of the kernel. But if even a single thread makes a call that blocks, then all of the other threads in the process have to
wait for that call to complete. This is because the call would be made to the kernel, and the kernel sees the thread as the
whole process, so it will not entertain another call from a thread of the same process.
2. Kernel-Level Threads: The kernel knows about and manages the threads; the OS kernel provides system calls to create and manage
threads. Kernel-level threads require the involvement of the kernel, which means you have to do a mode switch: you need to
go from user space to kernel space, and then you need to do the switch. And then we have to schedule some other thread as
well.
Aspect | User-Level Threads | Kernel-Level Threads
Blocking | One thread might block all other threads | A thread blocks itself only
Multicore or Multiprocessor | Cannot take advantage of a multicore system; only concurrent execution on a single processor | Takes full advantage of the multicore system
Multithreading Models
1. One-to-One: The most common model, used in common operating systems like Windows. Every user thread is mapped to one kernel
thread, so it is very similar to a pure kernel thread. It is slower than many-to-one, but we can use multicore systems efficiently here.
2. Many-to-One: Many user threads are mapped to one kernel thread. These can be seen as pure user threads. They do provide fast
context switching along with fast creation and termination, but we cannot utilize multicore systems, and all the threads can be blocked
by one thread making a system call.
3. Many-to-Many: Very rarely used. Many user threads are mapped to many kernel threads.
Memory Management