Operating System Notes
1. Batch OS – A set of similar jobs is stored in the main memory for execution. A job gets
assigned to the CPU only when the execution of the previous job completes. e.g., bank systems, payroll systems
2. Multiprogramming OS – The main memory consists of jobs waiting for CPU time. The
OS selects one of the processes and assigns it to the CPU. Whenever the executing
process needs to wait for any other operation (like I/O), the OS selects another process
from the job queue and assigns it to the CPU. This way, the CPU is never kept idle
and the user gets the impression that multiple tasks are being done at once, e.g., running Chrome and Firefox simultaneously.
3. Multitasking OS – Multitasking OS combines the benefits of Multiprogramming OS
and CPU scheduling to perform quick switches between jobs. The switch is so quick
that the user can interact with each program as it runs. e.g., Windows
4. Time Sharing OS – Time-sharing systems require interaction with the user to instruct
the OS to perform various tasks. The OS responds with an output. The instructions are
usually given through an input device like the keyboard. e.g., Linux, Unix
5. Real Time OS – Real-time OSs are usually built for dedicated systems to accomplish a
specific set of tasks within strict deadlines. e.g., air traffic control, autonomous driving systems
● Process : A process is a program under execution. The value of the program counter
(PC) indicates the address of the next instruction of the process being executed.
Each process is represented by a Process Control Block (PCB).
Apni Kaksha 1
● Process Scheduling:
1. Arrival Time – Time at which the process arrives in the ready queue.
2. Completion Time – Time at which process completes its execution.
3. Burst Time – Time required by a process for CPU execution.
4. Turnaround Time – The difference between completion time and arrival time.
Turnaround Time = Completion Time - Arrival Time
5. Waiting Time (WT) – The difference between turnaround time and burst time.
Waiting Time = Turnaround Time - Burst Time
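The two formulas can be checked with a small FCFS example (the arrival and burst times below are hypothetical):

```python
def fcfs_metrics(processes):
    """Completion, turnaround, and waiting time for processes run in FCFS order.

    processes: list of (arrival_time, burst_time), sorted by arrival time.
    """
    time, out = 0, []
    for arrival, burst in processes:
        start = max(time, arrival)            # CPU may sit idle until arrival
        completion = start + burst
        turnaround = completion - arrival     # Turnaround = Completion - Arrival
        waiting = turnaround - burst          # Waiting = Turnaround - Burst
        out.append((completion, turnaround, waiting))
        time = completion
    return out

print(fcfs_metrics([(0, 4), (1, 3), (2, 1)]))
# → [(4, 4, 0), (7, 6, 3), (8, 6, 5)]
```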
● Thread (Important) : A thread is a lightweight process and forms the basic unit of
CPU utilization. A process can perform more than one task at the same time by including
multiple threads.
● A thread has its own program counter, register set, and stack
● A thread shares resources with other threads of the same process: the code section,
the data section, files and signals.
Note : The fork() system call creates a new child process of a given process (not a
thread; threads are created with thread-library calls such as pthread_create). A program
with n fork() system calls generates 2^n – 1 child processes.
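The 2^n – 1 count follows because each round of fork() calls doubles the number of processes; a quick check of the formula:

```python
def child_processes(n_forks):
    """Children created when a program executes fork() n times and every
    resulting process also runs the remaining fork() calls."""
    total = 1                  # start with the original process
    for _ in range(n_forks):
        total *= 2             # each fork() doubles the process count
    return total - 1           # everything except the original is a child

print([child_processes(n) for n in (1, 2, 3)])  # → [1, 3, 7]
```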
There are two types of threads:
● User threads – implemented and managed in user space by a thread library, without kernel support.
● Kernel threads – implemented and managed directly by the operating system kernel.
Multithreading is a feature that allows a program to perform several tasks at the same time.
Think of it as multiple hands working together to complete different parts of a job faster:
each “hand” is a thread, and together they make the program run more efficiently.
Multithreading uses the computer's resources more effectively, giving quicker and smoother
performance in applications like web browsers, games, and many other everyday programs.
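As a sketch, splitting one job across threads in Python (the job, chunk sizes, and names below are illustrative):

```python
import threading

results = []
lock = threading.Lock()

def worker(chunk):
    # Each thread is one "hand" working on its part of the job.
    partial = sum(chunk)
    with lock:               # the results list is shared by all threads
        results.append(partial)

# Hypothetical job: summing a list split across three threads.
threads = [threading.Thread(target=worker, args=(c,))
           for c in ([1, 2], [3, 4], [5, 6])]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results))  # → 21
```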
● Scheduling Algorithms :
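The comparison table for this section is not reproduced in these notes. Common CPU scheduling algorithms include FCFS, SJF, Priority, and Round Robin; as a rough sketch (hypothetical process names and time quantum), Round Robin can be simulated like this:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Order in which processes finish under Round Robin scheduling.

    bursts: {pid: burst_time}; all processes assumed to arrive at time 0.
    """
    queue = deque(bursts.items())
    finished = []
    while queue:
        pid, remaining = queue.popleft()
        if remaining > quantum:
            queue.append((pid, remaining - quantum))  # preempted, re-queued
        else:
            finished.append(pid)                      # done within this slice
    return finished

print(round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=2))
# → ['P2', 'P1', 'P3']
```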
● The Critical Section Problem:
1. Critical Section – The portion of the code in the program where shared variables
are accessed and/or updated.
2. Remainder Section – The remaining portion of the program excluding the Critical
Section.
3. Race Condition – When the final output of the code depends on the order in
which the shared variables are accessed, this is termed a race condition.
A solution for the critical section problem must satisfy the following three conditions:
1. Mutual Exclusion – only one process may execute in its critical section at a time.
2. Progress – if no process is in its critical section, the selection of the next process to enter cannot be postponed indefinitely.
3. Bounded Waiting – there is a bound on how many times other processes may enter their critical sections after a process has requested entry.
● Synchronization Tools:
1. Semaphore : Semaphore is a protected variable or abstract data type that is
used to lock the resource being used. The value of the semaphore indicates the
status of a common resource.
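As a sketch, Python's threading.Semaphore can bound how many threads use a shared resource at once (the resource, thread count, and initial value below are hypothetical):

```python
import threading
import time

# Counting semaphore with value 2: at most two threads may hold the
# (hypothetical) resource at the same time.
sem = threading.Semaphore(2)
guard = threading.Lock()
inside, max_inside = 0, 0

def use_resource():
    global inside, max_inside
    with sem:                         # wait (P): blocks while the value is 0
        with guard:
            inside += 1
            max_inside = max(max_inside, inside)
        time.sleep(0.01)              # simulate work on the resource
        with guard:
            inside -= 1
                                      # signal (V) happens on leaving the block

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_inside)  # never exceeds the semaphore's initial value of 2
```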
2. Mutex : A mutex (mutual exclusion lock) is a variable that is set before accessing a
shared resource and released after using it. While the mutex is set, the shared resource
cannot be accessed by any other process or thread. Operating systems use mutex locks
for process synchronization, to control the entry and exit of processes into critical
sections.
In the producer-consumer problem, either the producer or the consumer can hold the
key (the mutex) and proceed with its work: while the buffer is being filled by the
producer, the consumer must wait, and vice versa. At any point in time, only one thread
can work with the entire buffer; the concept can be generalized using a semaphore.
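A minimal sketch of a mutex-guarded shared buffer (the busy-waiting consumer is kept deliberately simple; semaphores would avoid the polling):

```python
import threading
from collections import deque

buffer = deque()
mutex = threading.Lock()   # only one thread may touch the buffer at a time

def producer(items):
    for item in items:
        with mutex:                # set the mutex before accessing the buffer
            buffer.append(item)    # released automatically afterwards

def consumer(n, out):
    taken = 0
    while taken < n:               # busy-wait until n items have been consumed
        with mutex:
            if buffer:
                out.append(buffer.popleft())
                taken += 1

out = []
p = threading.Thread(target=producer, args=(range(5),))
c = threading.Thread(target=consumer, args=(5, out))
p.start(); c.start()
p.join(); c.join()
print(out)  # → [0, 1, 2, 3, 4]
```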
● Deadlocks (Important):
A situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process. Deadlock
can arise if following four conditions hold simultaneously (Necessary Conditions):
1. Mutual Exclusion – One or more than one resource is non-sharable (Only one
process can use at a time).
2. Hold and Wait – A process is holding at least one resource and waiting for
resources.
3. No Preemption – A resource cannot be taken from a process unless the process
releases the resource.
4. Circular Wait – A set of processes are waiting for each other in circular form.
● Methods for handling deadlock: There are three ways to handle deadlock
1. Deadlock prevention or avoidance : The idea is to not let the system into a
deadlock state.
2. Deadlock detection and recovery : Let deadlock occur, then do preemption to
handle it once occurred.
3. Ignore the problem altogether : If deadlock is very rare, then let it happen and
reboot the system. This is the approach that both Windows and UNIX take.
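One common prevention technique (not spelled out in these notes) breaks the circular-wait condition by making every thread acquire locks in a single global order; a sketch:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
done = []

def task(first, second):
    # Resource ordering breaks circular wait: whatever order a task names its
    # locks in, they are always acquired in one global (id-based) order.
    low, high = sorted((first, second), key=id)
    with low:
        with high:
            done.append(threading.current_thread().name)

t1 = threading.Thread(target=task, args=(lock_a, lock_b))
t2 = threading.Thread(target=task, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(done))  # → 2  (with opposite acquisition orders, this could deadlock)
```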
● Banker's algorithm is used to avoid deadlock. It is one of the deadlock-avoidance
methods. It is named as Banker's algorithm on the banking system where a bank
never allocates available cash in such a manner that it can no longer satisfy the
requirements of all of its customers.
https://www.geeksforgeeks.org/bankers-algorithm-in-operating-system-2/
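A minimal sketch of the safety check at the heart of Banker's algorithm (the matrices below are hypothetical, textbook-style numbers):

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: is the system in a safe state?

    available:  free units per resource type
    allocation: allocation[i][j] = units of resource j held by process i
    maximum:    maximum[i][j]    = maximum demand of process i for resource j
    """
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work, finished = list(available), [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend process i runs to completion and returns its resources.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)   # safe iff every process could finish

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe(available, allocation, maximum))  # → True
```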
1. First Fit – The arriving process is allotted the first hole of memory in which it fits
completely.
2. Best Fit – The arriving process is allotted the hole of memory in which it fits the best
by leaving the minimum memory empty.
3. Worst Fit – The arriving process is allotted the hole of memory in which it leaves the
maximum gap.
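The three strategies differ only in which fitting hole they pick; a sketch with hypothetical free-block sizes:

```python
def pick_hole(holes, size, strategy):
    """Index of the hole chosen for a request, or None if no hole fits.

    holes: list of free-block sizes; strategy: 'first', 'best', or 'worst'.
    """
    fitting = [(h, i) for i, h in enumerate(holes) if h >= size]
    if not fitting:
        return None
    if strategy == "first":
        return fitting[0][1]           # first hole large enough
    if strategy == "best":
        return min(fitting)[1]         # smallest hole that still fits
    return max(fitting)[1]             # 'worst': largest hole, biggest gap left

holes = [100, 500, 200, 300, 600]      # hypothetical free-block sizes
print(pick_hole(holes, 212, "first"))  # → 1 (the 500 block)
print(pick_hole(holes, 212, "best"))   # → 3 (the 300 block)
print(pick_hole(holes, 212, "worst"))  # → 4 (the 600 block)
```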
Internal Fragmentation :- Whenever a process requests memory, a fixed-sized block is allotted to it.
When the memory allotted to the process is somewhat larger than the memory requested,
the difference between the allotted and requested memory is called internal fragmentation.
Note:
● Best fit does not necessarily give the best results for memory allocation.
● External fragmentation is caused by the condition in fixed partitioning and variable
partitioning that an entire process must be allocated to a contiguous memory location.
Therefore paging is used.
● Paging is a technique used for non-contiguous memory allocation. It is a fixed-size
partitioning scheme: both main memory and secondary memory are divided into equal
fixed-size partitions. The partitions of secondary memory are known as pages, and the
partitions of main memory are known as frames.
● Page Fault:
A page fault is a type of interrupt, raised by the hardware when a running program accesses a
memory page that is mapped into the virtual address space, but not loaded in physical memory.
1. First In First Out (FIFO) Page replacement – The page that was loaded into memory
earliest is replaced first. Belady’s anomaly shows that it is possible to have more page
faults when increasing the number of page frames while using the FIFO page
replacement algorithm. For example, for the reference string ( 3 2 1 0
3 2 4 3 2 1 0 4 ) with 3 frames we get 9 total page faults, but if we
increase the frames to 4, we get 10 page faults.
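The anomaly in the example above can be reproduced with a small FIFO simulation:

```python
from collections import deque

def fifo_faults(reference, frames):
    """Count page faults for FIFO replacement with the given frame count."""
    memory, order, faults = set(), deque(), 0
    for page in reference:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(order.popleft())  # evict the oldest-loaded page
            memory.add(page)
            order.append(page)
    return faults

ref = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(ref, 3))  # → 9
print(fifo_faults(ref, 4))  # → 10 (Belady's anomaly: more frames, more faults)
```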
2. Optimal Page replacement –
In this algorithm, pages are replaced which are not used for the longest duration of
time in the future.
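A sketch of Optimal replacement, counting faults for the same reference string used in the FIFO example:

```python
def optimal_faults(reference, frames):
    """Page faults under Optimal replacement: evict the resident page whose
    next use lies farthest in the future (or that is never used again)."""
    memory, faults = [], 0
    for i, page in enumerate(reference):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            future = reference[i + 1:]
            # Distance to each resident page's next use; never-used pages win.
            victim = max(memory, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            memory.remove(victim)
        memory.append(page)
    return faults

ref = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(optimal_faults(ref, 3))  # → 7 (fewer than FIFO's 9 with the same frames)
```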
● Disk Scheduling: Disk scheduling is done by operating systems to schedule I/O
requests arriving for disk. Disk scheduling is also known as I/O scheduling.
1. Seek Time: Seek time is the time taken to locate the disk arm to a specified track
where the data is to be read or written.
2. Rotational Latency: Rotational latency is the time taken by the desired sector of the
disk to rotate into a position where the read/write heads can access it.
3. Transfer Time: Transfer time is the time to transfer the data. It depends on the
rotating speed of the disk and number of bytes to be transferred.
4. Disk Access Time: Seek Time + Rotational Latency + Transfer Time
5. Disk Response Time: Response time is the time a request spends waiting to perform
its I/O operation; the average response time is the mean response time over all
requests.
1. FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the
requests are addressed in the order they arrive in the disk queue.
2. SSTF: In SSTF (Shortest Seek Time First), requests having the shortest seek time
are executed first. So, the seek time of every request is calculated in advance in a
queue and then they are scheduled according to their calculated seek time. As a
result, the request near the disk arm will get executed first.
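The difference between FCFS and SSTF can be seen by totalling head movement (the request queue and starting track below are hypothetical):

```python
def total_seek(requests, head, sstf=False):
    """Total head movement to service all track requests starting from `head`.

    FCFS order by default; with sstf=True, always move to the nearest request.
    """
    pending, moved = list(requests), 0
    while pending:
        nxt = min(pending, key=lambda t: abs(t - head)) if sstf else pending[0]
        moved += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return moved

queue, start = [176, 79, 34, 60, 92, 11, 41, 114], 50
print(total_seek(queue, start))             # → 510 (FCFS)
print(total_seek(queue, start, sstf=True))  # → 204 (SSTF: far less movement)
```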
3. SCAN: In SCAN algorithm the disk arm moves into a particular direction and
services the requests coming in its path and after reaching the end of the disk, it
reverses its direction and again services the request arriving in its path. So, this
algorithm works like an elevator and hence is also known as elevator algorithm.
4. CSCAN: In the SCAN algorithm, the disk arm rescans the path it has already
scanned after reversing its direction. So, it may be possible that too many requests
are waiting at the other end while zero or few requests are pending in the area just
scanned. In CSCAN (Circular SCAN), after reaching one end the arm jumps to the
other end without servicing requests on the way and resumes scanning in the same
direction, giving waiting requests a more uniform service time.
5. LOOK: It is similar to the SCAN disk scheduling algorithm except that the disk arm,
instead of going all the way to the end of the disk, goes only as far as the last request
to be serviced in front of the head and then reverses its direction from there.
Thus it prevents the extra delay caused by unnecessary traversal to the end of the
disk.
6. CLOOK: As LOOK is similar to the SCAN algorithm, CLOOK is similar to the CSCAN
disk scheduling algorithm. In CLOOK, the disk arm, instead of going to the end, goes
only as far as the last request to be serviced in front of the head, and from there
jumps to the last request at the other end. Thus, it also prevents the extra delay
caused by unnecessary traversal to the end of the disk.
Key Terms
● Real-time system is used in the case when rigid-time requirements have been
placed on the operation of a processor. It contains well defined and fixed time
constraints.
● A monolithic kernel is a kernel which includes all operating system code in a
single executable image.
● Micro kernel: A microkernel runs only the minimal set of services needed by the
operating system (such as address-space management, scheduling, and inter-process
communication) in kernel mode; all other operating system services run as processes
in user space.
● Fragmentation is a phenomenon of memory wastage. It reduces the capacity
and performance because space is used inefficiently.