Operating System
Process vs Thread:
Definition : A process is a program under execution, i.e. an active program. A thread is a lightweight process that can be managed independently by a scheduler.
Context switching time : Processes require more time for context switching as they are heavier. Threads require less time for context switching as they are lighter than processes.
Blocked : If a process gets blocked, the remaining processes can continue execution. If a user-level thread gets blocked, all of its peer threads also get blocked.
Data and code sharing : Processes have independent data and code segments. A thread shares the data segment, code segment, files etc. with its peer threads.
Multitasking vs Multithreading:
In multitasking, processes are given separate memory, so isolation and memory protection exist. In multithreading, the threads of a process share the same memory, so isolation and memory protection do not exist between them.
User-space: The user space, also known as userland, is the memory space where all user applications (application software) execute. Everything other than the OS kernel and its extensions runs here. One of the roles of the kernel is to manage all user processes within user space and to prevent them from interfering with each other.
Kernel-space: The memory space where the core of the operating system
(kernel) executes and provides its services is known as kernel space. It's
reserved for running device drivers, OS kernel, and all other kernel extensions.
Types of Kernels:
1. Monolithic Kernel : All the functions and services reside in the kernel space, hence it becomes bulky, and if one function crashes, the entire kernel stops. It also requires more memory.
2. Micro Kernel : File management and I/O management are handled in user space, whereas process and memory management are handled in kernel space. It is more stable and reliable, but performance is slower because a lot of switching is required between user and kernel space.
3. Hybrid Kernel : A combination of both kernels.
How does communication take place between user and kernel space?
Ans : There are instances where two or more processes that are working independently in their individual memory spaces need to communicate with each other. This is done either by shared memory or by message passing, in which a channel is established between the two and messages are passed over it.
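As a sketch of the message-passing approach, two processes can communicate over a channel; this example uses Python's standard multiprocessing module (the worker function and the message are made up for illustration):

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # Child process: receive a message over the channel, reply, close.
    msg = conn.recv()
    conn.send(msg.upper())
    conn.close()

if __name__ == "__main__":
    # Pipe() creates the channel; each end can send and receive.
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send("hello")
    print(parent_conn.recv())  # the child's reply: "HELLO"
    p.join()
```

Shared memory would instead use something like multiprocessing.Value or shared_memory, where both processes read and write the same region directly.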
32-bit OS vs 64-bit OS:
Compatibility : A 32-bit OS can run 32-bit and 16-bit applications. A 64-bit OS can run 32-bit and 64-bit applications.
Address Space : A 32-bit OS uses a 32-bit address space. A 64-bit OS uses a 64-bit address space.
Job Queue – Whenever a process enters the system, it is placed in the job queue.
Ready Queue – Processes that are ready and waiting for CPU time are in the ready queue.
Device Queue – Processes waiting for some I/O operation are in the device queue.
Types of Scheduler:
Long-Term Scheduler : Also known as the job scheduler. It selects the processes that are to be placed in the ready queue. The long-term scheduler basically decides the priority in which processes must be placed in main memory.
Short-Term Scheduler : Also known as the CPU scheduler. It decides the priority in which processes in the ready queue are allocated CPU time for their execution.
Medium-Term Scheduler : It places blocked and suspended processes in the secondary memory of a computer system. The task of moving a process from main memory to secondary memory is called swapping out. The task of moving a swapped-out process back from secondary memory to main memory is known as swapping in.
1. Alternate name : The long-term scheduler is also called a job scheduler. The short-term scheduler is also called a CPU scheduler. The medium-term scheduler is also called a process-swapping scheduler.
2. Degree of multiprogramming : The long-term scheduler controls the degree of multiprogramming. The short-term scheduler provides lesser control over the degree of multiprogramming. The medium-term scheduler reduces the degree of multiprogramming.
Burst Time : The total time for which the process has control of the CPU.
Waiting Time : The total time for which the process has to wait in the ready queue for control of the CPU.
Turnaround Time : The total time from when the process enters the job queue until its execution completes.
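These quantities are related by Turnaround Time = Completion Time - Arrival Time and Waiting Time = Turnaround Time - Burst Time. A minimal sketch with made-up process records:

```python
# Hypothetical records: name -> (arrival, burst, completion), e.g. one FCFS run.
procs = {"P1": (0, 4, 4), "P2": (1, 3, 7), "P3": (2, 1, 8)}

for name, (arrival, burst, completion) in procs.items():
    turnaround = completion - arrival   # from entering the job queue to finishing
    waiting = turnaround - burst        # time spent waiting, not running
    print(name, "turnaround:", turnaround, "waiting:", waiting)
```

For P2 this gives a turnaround of 7 - 1 = 6 and a waiting time of 6 - 3 = 3.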
FCFS ALGO – In this algorithm, processes are executed on a first come, first served basis.
Disadvantages - FCFS may suffer from the convoy effect if the burst time of the first job is the highest among all. As in real life, if a convoy is passing along a road, other people may be blocked until it passes completely.
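The convoy effect can be seen with a small simulation; a sketch assuming all times are in one unit and processes are served strictly in arrival order:

```python
def fcfs(arrivals_bursts):
    """Simulate FCFS; return average waiting time from (arrival, burst) pairs."""
    time, total_wait = 0, 0
    for arrival, burst in sorted(arrivals_bursts):  # serve in arrival order
        time = max(time, arrival)       # CPU may sit idle until the process arrives
        total_wait += time - arrival    # time this process spent in the ready queue
        time += burst
    return total_wait / len(arrivals_bursts)

# Convoy effect: a long first job makes everyone queue behind it.
print(fcfs([(0, 24), (1, 3), (2, 3)]))  # 16.0
# Same jobs with the long one arriving last: far less waiting.
print(fcfs([(0, 3), (1, 3), (2, 24)]))  # 2.0
```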
Shortest Job First (Non-Preemptive) : In this algorithm, processes are executed in ascending order of their burst time. In SJF scheduling, a process with a high burst time may suffer starvation. Starvation means a process with a higher burst time is kept waiting indefinitely and is never allocated the CPU.
Shortest Job First (Preemptive) : Also called Shortest Remaining Time First. The process with the shortest remaining burst time runs first; a newly arrived process with a shorter remaining time preempts the running one.
Round Robin : In the round robin scheduling algorithm, each process is given a fixed time slice, called a quantum, for execution. After the quantum passes, the currently running process is preempted and the next process executes for the next quantum. There is an overhead of context switching.
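As a sketch, the quantum-based rotation can be simulated with a queue (assuming, for simplicity, that all processes arrive at time 0; burst times are made up):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the execution timeline as (pid, run_time) slices."""
    queue = deque(enumerate(bursts))        # ready queue of (pid, remaining burst)
    timeline = []
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)       # run for one quantum at most
        timeline.append((pid, run))
        if remaining > run:                 # not finished: preempt and requeue
            queue.append((pid, remaining - run))
    return timeline

print(round_robin([5, 3, 1], quantum=2))
# [(0, 2), (1, 2), (2, 1), (0, 2), (1, 1), (0, 1)]
```

Each entry after the first implies a context switch, which is the overhead the notes mention.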
Process Synchronization :
When multiple processes or threads are running, there may be situations where changes made by one process get overridden by another process. Hence synchronization is important.
Eg : Consider your bank account has 5000$. You try to withdraw 4000$ using net banking and simultaneously try to withdraw via ATM too. For net banking, at time t = 0 ms the bank checks that you have 5000$ as balance; you are trying to withdraw 4000$, which is less than your available balance, so it lets you proceed, and at t = 1 ms it connects you to the server to transfer the amount. Meanwhile, for the ATM, at t = 0.5 ms the bank checks your available balance, which is still 5000$, and thus lets you enter your ATM PIN and withdraw. At t = 1.5 ms the ATM dispenses 4000$ in cash, and at t = 2 ms the net banking transfer of 4000$ completes. You have withdrawn 8000$ from an account that held only 5000$: a race condition caused by unsynchronized access to the shared balance.
Solution : A correct solution to the critical section (C.S.) problem must satisfy:
1. Mutual Exclusion : Only one process is allowed to enter the critical section at a time.
2. Progress : If no process is in the C.S., a new process from the ready queue must be allowed to enter.
3. Bounded Waiting : There should be a limited waiting time for a process to enter the C.S.; it should not wait endlessly.
MUTEX: It is a binary variable used for mutual exclusion, based on a lock-and-release mechanism. When a process is in the C.S. it locks the section, and when its execution is over it releases the lock. Eg – a single toilet with this mechanism.
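A minimal sketch of the lock-and-release idea using Python's threading.Lock (the shared counter and iteration counts here are made up):

```python
import threading

lock = threading.Lock()   # the binary "mutex" variable
balance = 0               # shared state, like the bank balance above

def deposit(n):
    global balance
    for _ in range(n):
        with lock:        # acquire (lock) before entering the critical section
            balance += 1  # critical section: read-modify-write on shared state
        # lock is released automatically on leaving the with-block

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 400000: no updates were lost
```

Without the lock, two threads can read the same old balance and both write back, losing an update, exactly like the double withdrawal in the bank example.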
Semaphore: It is based on a signalling mechanism for accessing the C.S. It has two operations, wait and signal. There are two types: a binary semaphore has values 0/1 (true/false), while a counting semaphore holds a non-negative number. Eg – a bathroom with 4 identical toilets.
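The 4-toilet analogy maps directly onto a counting semaphore; a sketch with Python's threading.Semaphore (the visitor count and sleep duration are arbitrary):

```python
import threading, time

stalls = threading.Semaphore(4)   # counting semaphore: 4 identical toilets
in_use, peak = 0, 0
guard = threading.Lock()          # protects the counters themselves

def visitor():
    global in_use, peak
    with stalls:                  # wait(): blocks if all 4 stalls are taken
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.01)          # "use" the stall
        with guard:
            in_use -= 1
    # signal() happens automatically on exiting the with-block

threads = [threading.Thread(target=visitor) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # never exceeds 4, however many visitors arrive
```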
Reader-Writer Problem: When a reader is in the C.S., other readers can enter the C.S. but a writer cannot. When a writer is in the C.S., no reader and no other writer is allowed to enter.
Dining Philosophers Problem: It can be solved by an even-odd mechanism, i.e. odd-numbered philosophers pick up their left chopstick first while even-numbered philosophers pick up their right chopstick first, which breaks the circular wait.
If the resources have single instances, then we can use a resource allocation graph: if there is a cycle, a deadlock exists.
In case of multiple resource instances we use the Banker's algorithm. This algorithm has three parameters – Allocation, Need (Request), and Available. If Available is greater than or equal to what a process needs, that process can finish, so we add its Allocation back to Available and move on. If we are able to fulfil the needs of all processes this way, the system is in a safe state.
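The safety check just described can be sketched as follows (the matrices are a common textbook instance, not taken from these notes):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: True if some order lets every process finish."""
    work = available[:]                 # resources currently free
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, req) in enumerate(zip(allocation, need)):
            if not finished[i] and all(r <= w for r, w in zip(req, work)):
                # Process i can get what it needs, run to completion,
                # and release everything it holds back to the pool.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return all(finished)

# 5 processes, 3 resource types.
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
need       = [[7,4,3],[1,2,2],[6,0,0],[0,1,1],[4,3,1]]
print(is_safe([3,3,2], allocation, need))  # True: a safe sequence exists
```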
Memory Management
The size of memory decides the degree of multiprogramming. Since the main aim of the OS is to keep CPU utilization high, it is important to have efficient memory management.
Partition Algorithms :
1. First Fit : In this algorithm, the first block big enough to accommodate the process is selected. It is fast but can leave unusable fragments behind.
2. Best Fit : In this algorithm, the smallest block that can still fit the process is selected, so that the leftover space is minimized.
3. Worst Fit : It selects the biggest block to accommodate the process, then the second biggest hole, and so on.
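The three strategies can be sketched as hole-selection functions (the hole sizes and request size below are made-up examples):

```python
def first_fit(holes, size):
    """Index of the first hole big enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that still fits, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Index of the largest hole, if it fits, or None."""
    h, i = max((h, i) for i, h in enumerate(holes))
    return i if h >= size else None

holes = [100, 500, 200, 300, 600]
# For a 212-unit request: first fit picks 500, best fit 300, worst fit 600.
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))  # 1 3 4
```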
Need for Paging : Since compaction makes the system inefficient, we instead divide the process into a number of pages so that they can be allocated to different holes.
Paging : It is the mechanism of fetching pages of a process from secondary memory into main memory, which is divided into a number of frames. The frame size is equal to the page size.
The main memory contains a page table that maps each page index to the base address (frame number) of that page in main memory.
Logical Address : It is the virtual address generated by the CPU. The MMU maps and translates the logical address to a physical address using the page table: the page number is looked up to get the frame number, which is combined with the offset. It provides abstraction so that a process can access memory without knowing the actual memory location.
Virtual Memory : It is an illusion of having a large main memory; in reality a chunk of secondary memory is treated as main memory, and this chunk is called swap space.
How it works : Instead of loading one big process entirely into main memory, different pages of different processes are loaded into main memory so that the degree of multiprogramming increases. If we require some other page of a process and there is no space, we can swap the least recently used page from main memory out into the swap space.
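The least-recently-used replacement just described can be sketched with an ordered map (the reference string below is a made-up example):

```python
from collections import OrderedDict

def lru_page_faults(reference_string, frames):
    """Count page faults with LRU replacement for a given number of frames."""
    memory = OrderedDict()   # keys = resident pages, ordered by recency
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                     # page fault: fetch from swap space
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used page
            memory[page] = True
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], frames=3))  # 9
```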
SEGMENTATION :
In paging we divide the process into fixed-size pages, so a function in the process may get split across multiple pages that are not all in main memory at the same time, making the system inefficient. Segmentation instead divides the process into variable-sized segments corresponding to logical units such as functions and data.
What is Thrashing ?
Ans : When the system is busy servicing page faults rather than executing processes, it is called thrashing. For example, main memory may hold pages of many different processes, but because of this a large number of page faults occur while executing a single process; this is a disadvantage of the paging technique.
To solve this we can set an upper and a lower bound on the page-fault rate. If the page-fault rate exceeds the upper bound, allocate more frames to the process. If the page-fault rate falls below the lower bound, remove frames from the process.
What is a Buffer?
A buffer is a memory area that stores data being transferred between two devices or between a device and an application.
Spooling refers to putting jobs in a buffer, a special area in memory or on a disk, where a device can access them when it is ready.
An interrupt is a signal emitted by hardware or software when a process or an event needs immediate attention.
User threads are implemented by users; kernel threads are implemented by the OS. If one user-level thread performs a blocking operation, the entire process is blocked. If one kernel thread performs a blocking operation, another thread can continue execution.
Difference between vertical and horizontal scaling
Scaling alters the size of a system. In the scaling process, we either compress or expand the system to meet the expected needs. Scaling can be achieved by adding resources to the current system, by adding new systems alongside the existing one, or both.
Vertical scaling keeps your existing infrastructure but adds computing power. Your existing pool
of code does not need to change — you simply need to run the same code on machines with
better specs. By scaling up, you increase the capacity of a single machine and increase its
throughput. Vertical scaling allows data to live on a single node, and scaling spreads the load
through CPU and RAM resources for your machines.
Horizontal scaling simply adds more instances of machines without first implementing
improvements to existing specifications. By scaling out, you share the processing power and load
balancing across multiple machines.