The document discusses operating systems, their purpose, components, and types. It defines an operating system as a program that acts as an interface between the user and computer hardware. The main components of an OS include process management, memory management, I/O management, and file management. The types of OS discussed are single-user, batch, multiprogramming, multitasking, multiprocessing, distributed, and real-time operating systems.


OPERATING SYSTEM

What is an Operating System and why is it needed?


Ans : It is a system program that manages the operations and functionality of a
computer and acts as an interface between the end user and the computer
hardware.
Need : Without an OS, applications grow bulky and complex, and there is no
separate memory allocation for apps, so one app can modify another's memory.
Eg : PUBG running and acquiring all the resources so that a notepad app never
gets a chance to run.
Goals of OS:
1. To maximize CPU utilization
2. To keep the degree of multiprogramming high
3. To reduce the chances of process starvation
Services:
1. File Management
2. Memory Management
3. Device Management
4. Process Management
5. I/O Management

Types of Operating System :


1. Single Processing – Can execute only one process from the ready queue at a time.
2. Batch Processing – Users submit jobs to the job queue; based on the similarity
of the jobs they are grouped into batches and executed sequentially. It may lead
to starvation because of the execution of bigger jobs, and the CPU may become
idle in case of I/O operations. Eg : file conversion, image processing.
3. Multiprogramming – Increases CPU utilization by keeping multiple jobs (code
and data) in memory so that the CPU always has one to execute in case some job
gets busy with I/O.
4. Multitasking – Executes multiple processes at one time with the help of
context switching. Eg : modern desktop computers.
5. Multiprocessing – More than one CPU executes the processes.
6. Distributed Operating System – A model where applications run on multiple
computers linked by a communication network. Eg – Solaris, with applications in
cloud computing and networking.
7. Real Time – A real-time operating system (RTOS) is a special-purpose operating
system with strict time constraints for every job to be performed. Eg : air
traffic control.
Program : Contains a set of instructions in order to perform some task.
Process : A program under execution.
Difference between Program and Process :
1. Definition (a passive set of instructions vs. an active execution)
2. A program is stored in secondary memory; a process executes in RAM.
3. A program has a longer life; a process has a short life.
4. A program does not have a PCB; a process has its own PCB.
Threads – A thread is a single sequence stream within a process; it is a
lightweight process, as it possesses some of the properties of a process. Each
thread has its own TCB. Eg : in a browser, multiple tabs can be different threads.
Difference between thread context switching and process context switching :
1. Context switching between threads is faster than between processes.
2. For threads, switching of the memory address space is not required, as it is
for processes.
3. In a thread context switch the OS saves only the thread state; in a process
context switch it saves the whole process state.
Multithreading : Multithreading is a technique used in operating systems to improve
the performance and responsiveness of computer systems. It allows multiple
threads (i.e., lightweight processes) to share the resources of a single
process, such as the CPU, memory, and I/O devices.
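A minimal sketch of that sharing, using Python's threading module (the variable
and thread counts here are arbitrary choices, not from the notes): several
threads of one process update the same variable, which is only safe because a
lock serializes the critical updates.

```python
import threading

counter = 0                 # shared data: lives in the process, seen by all threads
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # serialize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: all four threads updated the same memory
```

Without the lock, the interleaved read-modify-write steps could lose updates,
which is exactly the synchronization problem discussed later in these notes.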
Process vs. Thread :
- Definition: A process is a program under execution, i.e. an active program. A
thread is a lightweight process that can be managed independently by a scheduler.
- Context switching time: Processes require more time for context switching as
they are heavier; threads require less time as they are lighter than processes.
- Communication: Communication between processes requires more time than
communication between threads.
- Blocking: If a process gets blocked, the remaining processes can continue
execution. If a user-level thread gets blocked, all of its peer threads also get
blocked.
- Data and code sharing: Processes have independent data and code segments. A
thread shares the data segment, code segment, files, etc. with its peer threads.

Multitasking vs. Multithreading :
- CPU switching: Multitasking involves the CPU switching frequently between
tasks; in multithreading the CPU switches frequently between threads.
- Memory: In multitasking, processes have separate memory; in multithreading,
the threads of a process are allocated the same memory.
- Speed: Multitasking is slow compared to multithreading.
- Isolation: Isolation and memory protection exist in multitasking; they do not
exist in multithreading.

What is a Kernel, User-space, Kernel-space ?


Ans : The kernel is the part of the OS that directly interacts with the hardware
of the computer and performs the most crucial tasks. It is the heart of the OS.

User-space: User space, also known as userland, is the memory space where all
user applications (application software) execute. Everything other than the OS
core and kernel runs here. One of the roles of the kernel is to manage all user
processes within user space and to prevent them from interfering with each other.

Kernel-space: The memory space where the core of the operating system (the
kernel) executes and provides its services. It is reserved for running the OS
kernel, device drivers, and all other kernel extensions.

Types of Kernels:

1. Monolithic Kernel : All the functions and services reside in kernel space,
so the kernel becomes bulky, and if one function crashes the entire kernel
stops. It also requires more memory.
2. Microkernel : File management and I/O management are handled in user space,
whereas process and memory management are handled in kernel space. It is more
stable and reliable, but performance is slower because a lot of switching is
required between user space and kernel space.
3. Hybrid Kernel : A combination of both kernel designs.

How does communication take place between two processes ?

Ans : There are instances where two or more processes, each working
independently in its own memory space, need to communicate with each other.
This is done either through shared memory or through message passing –
establishing a channel between the two processes over which messages are passed.

How do applications interact with the kernel – using system calls.

System calls – The mechanism through which an application program interacts
with the kernel to request a service that it cannot perform by itself, such as
accessing I/O devices. This is the only way a program can switch from user mode
to kernel mode.

Eg – Process: load, execute, end, abort. File: create, delete.
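As a hedged illustration of the file examples above: Python's os module exposes
thin wrappers over the kernel's file system calls, so a user program creates and
deletes a file without ever touching the disk hardware directly (the filename
demo.txt is just a placeholder for this sketch).

```python
import os

# Each call below traps into the kernel via a system call.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # create/open()
os.write(fd, b"hello")                                           # write()
os.close(fd)                                                     # close()

print(os.path.getsize("demo.txt"))  # 5: the five bytes written above
os.remove("demo.txt")               # delete -> unlink() system call
```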

BIOS – Basic Input/Output System UEFI – Unified Extensible Firmware Interface


How does a computer boot up ?
Ans : First we switch on the power; then the CPU jumps to the BIOS chip to run
its initialization code. The BIOS runs a power-on self-test to initialize and
check the hardware components, and then loads the bootloader, which in turn
loads the operating system into main memory.
Difference between 32-bit OS and 64-bit OS :
A 32-bit system can access 2^32 different memory addresses; a 64-bit system can
access 2^64 different memory addresses.

32-bit OS vs. 64-bit OS, feature by feature:
- Memory: maximum of 4 GB RAM vs. a maximum of several terabytes of RAM.
- Performance: limited by the maximum amount of RAM it can access vs. able to
take advantage of more memory, enabling faster performance.
- Compatibility: can run 32-bit and 16-bit applications vs. can run 32-bit and
64-bit applications.
- Address space: uses a 32-bit address space vs. a 64-bit address space.
- Hardware support: may not support newer hardware vs. supports newer hardware
with 64-bit drivers.
- Price: less expensive than a 64-bit OS vs. more expensive than a 32-bit OS.
- Multitasking: can handle multiple tasks but with limited efficiency vs. can
handle multiple tasks more efficiently.
- Gaming: can run graphically intensive games, but not as efficiently as a
64-bit OS vs. can run graphically intensive games and handle complex software
more efficiently.

Types of Process Scheduling :


Preemptive Scheduling – Scheduling in which a running process can be interrupted
if a higher-priority process enters the queue and is allocated the CPU.
Non-Preemptive Scheduling – Scheduling in which a running process cannot be
interrupted by any other process.
Scheduling Queues :

 Job Queue – Whenever any process enters the system, it is placed in the job queue.
 Ready Queue – Processes waiting in main memory for CPU time are in the ready queue.
 Device Queue – Processes waiting for some I/O operation are in the device queue.

Types of Scheduler:

Long-Term Scheduler : Also known as the Job Scheduler. It selects the processes
that are to be placed in the ready queue, i.e. it decides the order in which
processes are brought into main memory.

Short-Term Scheduler : Also known as the CPU Scheduler. It decides the order in
which processes in the ready queue are allocated central processing unit (CPU)
time for their execution.

Medium-Term Scheduler : It places blocked and suspended processes in the
secondary memory of a computer system. Moving a process from main memory to
secondary memory is called swapping out. Moving a swapped-out process back from
secondary memory to main memory is known as swapping in.

Short-Term vs. Medium-Term vs. Long-Term Scheduler :
- Alternate name: CPU scheduler / process swapping scheduler / job scheduler.
- Degree of multiprogramming: the short-term scheduler provides lesser control
over it; the medium-term scheduler reduces it; the long-term scheduler controls
it.
- Speed: the short-term scheduler is very fast; the medium-term scheduler is
between the short-term and long-term schedulers; the long-term scheduler is the
slowest.
Context Switching – The process of storing the state of a running process so
that it can be restored later, and loading the state of another process. It is
caused by multitasking, interrupt handling, or switching between user and
kernel mode.

Burst Time : The total time for which the process needs control of the CPU.

Waiting Time : The total time for which the process has to wait in the ready
queue for control of the CPU.

Turnaround Time : The total time from the process entering the job queue until
the completion of its execution.

FCFS ALGO – In this algorithm, processes are executed on a first-come,
first-served basis.

Disadvantages – FCFS may suffer from the convoy effect if the burst time of the
first job is the highest among all. As in real life, if a convoy is passing
along a road, other people may be blocked until it has passed completely.
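The convoy effect can be sketched in a few lines (burst times are made-up
numbers; all processes are assumed to arrive at time 0): each process waits for
the sum of all earlier bursts, so one long first job inflates every wait.

```python
# FCFS waiting times for processes that all arrive at t = 0.
def fcfs_waiting(bursts):
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)  # each process waits for all earlier bursts
        clock += b
    return waits

print(fcfs_waiting([24, 3, 3]))  # [0, 24, 27] -> convoy effect
print(fcfs_waiting([3, 3, 24]))  # [0, 3, 6]  -> same jobs, far less waiting
```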

Shortest Job First (Non-Preemptive) : In this algorithm, processes are executed
in ascending order of their burst time. In SJF scheduling, a process with a
high burst time may suffer starvation: it is kept waiting indefinitely while
shorter jobs keep being allocated the CPU.
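Running jobs in ascending burst order is just a sort, which makes the benefit
over FCFS easy to check (again with made-up bursts and arrival at t = 0):

```python
# Average waiting time for a given execution order of bursts.
def avg_wait(bursts):
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # wait = sum of all earlier bursts
        clock += b
    return sum(waits) / len(waits)

fcfs = avg_wait([24, 3, 3])          # 17.0 (convoy order)
sjf  = avg_wait(sorted([24, 3, 3]))  # (0 + 3 + 6) / 3 = 3.0 (SJF order)
print(fcfs, sjf)
```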

Shortest Job First (Preemptive), also called Shortest Remaining Time First : the
process with the shortest remaining burst time runs first, preempting the
running process if a shorter job arrives.

Round Robin : In the round-robin scheduling algorithm, each process is given a
fixed time slice called the quantum. After the quantum passes, the currently
running process is preempted and the next process executes for the next
quantum. There is an overhead of context switching.
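Round robin is essentially a FIFO queue of remaining burst times; a short
simulation makes the preempt-and-requeue behaviour concrete (bursts and quantum
are illustrative values, all arrivals at t = 0):

```python
from collections import deque

# Round robin with quantum q; returns each process's completion time.
def round_robin(bursts, q):
    queue = deque(enumerate(bursts))
    done, clock = {}, 0
    while queue:
        pid, rem = queue.popleft()
        run = min(q, rem)
        clock += run
        if rem - run > 0:
            queue.append((pid, rem - run))  # preempted, back of the queue
        else:
            done[pid] = clock               # finished within this slice
    return [done[i] for i in range(len(bursts))]

print(round_robin([5, 3, 1], q=2))  # [9, 8, 5]
```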

Critical Section : A section of code in which shared resources are accessed by
multiple processes or threads.

Entry Section : The section where a request is made to enter the C.S.

Exit Section : Marks the exit of the process from the C.S.

Remainder Section : The remaining code, executed outside the C.S.

Process Synchronization :

When multiple processes or threads are running, there might be a situation
where changes made by one process get overridden by another process. Hence
synchronization is important.

Eg : Consider your bank account has $5000. You try to withdraw $4000 using net
banking and simultaneously try to withdraw via ATM too. For net banking, at
time t = 0 ms the bank checks that you have $5000 as balance and you are trying
to withdraw $4000, which is less than your available balance, so it lets you
proceed, and at t = 1 ms it connects to the server to transfer the amount.
Meanwhile, for the ATM, at t = 0.5 ms the bank checks your available balance,
which still shows $5000, and thus lets you enter your ATM PIN and withdraw. At
t = 1.5 ms the ATM dispenses $4000 in cash, and at t = 2 ms the net banking
transfer of $4000 completes – $8000 withdrawn from a $5000 balance.

Requirements for a solution :

1. Mutual Exclusion : Only one process is allowed to enter the critical section
at a time.
2. Progress : If no process is in the C.S., a new process from the ready queue
must be allowed in without indefinite delay.
3. Bounded Waiting : There should be a limited waiting time for a process to
enter the C.S.; it should not wait endlessly.

MUTEX: A binary variable used for mutual exclusion, based on a lock-and-release
mechanism. When a process is in the C.S. it locks the section, and when its
execution is over it releases the lock. Eg – a single toilet with a lock on the
door.

 Mutex is for threads.
 Mutex is binary in nature.
 Mutex works in user space.
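The lock-and-release mechanism maps directly onto the bank example above. A
hedged sketch with threading.Lock as the mutex (amounts are taken from the
example; the timing of the two threads is not controlled, but the lock makes
the check-then-withdraw step atomic either way):

```python
import threading

mutex = threading.Lock()
balance = 5000

def withdraw(amount):
    global balance
    with mutex:                # lock: enter the critical section
        if balance >= amount:  # balance check and update happen atomically
            balance -= amount
    # the lock is released automatically on leaving the with-block

t1 = threading.Thread(target=withdraw, args=(4000,))
t2 = threading.Thread(target=withdraw, args=(4000,))
t1.start(); t2.start(); t1.join(); t2.join()

print(balance)  # 1000: whichever thread runs second sees 1000 < 4000 and fails
```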

Semaphore: Based on a signalling mechanism for accessing the C.S. It has two
operations, wait and signal. There are two types: a binary semaphore takes the
values 0/1 (true/false), while a counting semaphore is a non-negative integer.
Eg – a bathroom with 4 identical toilets.

 Semaphore is for processes.
 Semaphores are of two types: binary and counting.
 Semaphores work in kernel space.
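The four-toilets example can be sketched with a counting semaphore initialized
to 4 (thread count and sleep duration are arbitrary choices for illustration):
at most four threads are ever inside the guarded region at once.

```python
import threading, time

sem = threading.Semaphore(4)   # counting semaphore: 4 identical toilets
inside, peak = 0, 0
guard = threading.Lock()       # protects the two counters themselves

def use_toilet():
    global inside, peak
    with sem:                  # wait(): decrement, block if the count is 0
        with guard:
            inside += 1
            peak = max(peak, inside)
        time.sleep(0.01)       # occupy the resource briefly
        with guard:
            inside -= 1
    # signal(): the count is incremented on exit from the with-block

threads = [threading.Thread(target=use_toilet) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()

print(peak <= 4)  # True: never more than 4 threads were inside
```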

Atomic Operation : An operation that executes as a single indivisible step;
while it runs, no other process may access or modify the resources it is using.

Reader/Writer Problem : A synchronization problem that arises when a resource
is shared by two or more processes of two kinds: readers, which only read the
data, and writers, which modify it.

 When a reader is in the C.S., other readers may enter the C.S. but a writer
may not; when a writer is in the C.S., no reader and no other writer is allowed
to enter.
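One standard way to get this behaviour is the first/last-reader pattern, a
hedged sketch of which follows (the lock names are invented for this example):
the first reader to arrive locks the resource against writers, and the last
reader to leave releases it.

```python
import threading

read_count = 0
read_count_lock = threading.Lock()  # protects read_count itself
resource = threading.Lock()         # held by a writer, or by the reader group

def start_read():
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()      # first reader locks out writers

def end_read():
    global read_count
    with read_count_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()      # last reader readmits writers

start_read(); start_read()          # two readers inside together: allowed
print(resource.locked())            # True: a writer would block here
end_read(); end_read()
print(resource.locked())            # False: the resource is free for a writer
```

A writer simply acquires `resource` directly, which blocks while any readers
are inside. Note this simple version can starve writers if readers keep
arriving.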

Dining Philosophers : Another synchronization problem. A philosopher can eat
only if both the chopsticks next to them are free; they alternate between
thinking (wait) and eating (signal). Imagine 5 philosophers and 5 chopsticks:
if each one picks up one chopstick and waits for the other to become free, a
deadlock results.

One classic fix is to break the symmetry: for example, odd-numbered
philosophers pick up their left chopstick first while even-numbered
philosophers pick up their right chopstick first, so a circular wait can never
form.
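A closely related deadlock-free variant (resource ordering rather than the
odd/even rule, but the same idea of breaking the circular wait) can be
sketched with one lock per chopstick: every philosopher always grabs the
lower-numbered chopstick first.

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # global lock order
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1          # eat while holding both chopsticks

threads = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()

print(meals)  # [10, 10, 10, 10, 10]: everyone ate; no deadlock occurred
```

Because all threads acquire locks in one global order, no cycle of waits can
form, which removes the circular-wait condition listed below.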

DEADLOCK – A condition where two or more processes wait indefinitely for each
other to release resources.
Deadlock conditions :

1. Mutual exclusion : Resources are in non-shareable mode, meaning only one
process can use a particular resource at a time.
2. Hold and wait : A process is holding one resource while waiting for other
resources that are being held by other processes.
3. No preemption : A resource cannot be taken away from a process once it has
been allocated.
4. Circular wait : Two or more processes wait for each other, forming a circle.

Deadlock Detection and Recovery :

If each resource has a single instance, we can use a resource allocation graph:
if the graph contains a cycle, a deadlock exists.

In the case of multiple instances per resource we use the Banker's algorithm.
It works with three parameters – Allocation, Need (request), and Available. If
Available is greater than or equal to what a process still needs, that process
can finish; we then add its Allocation back to Available and move on. If we can
satisfy the needs of all processes in some order, the system is in a safe state.
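The safety check described above can be sketched directly (the allocation and
need matrices below are hypothetical numbers for two resource types and three
processes, chosen just to illustrate a safe state):

```python
# Banker's safety check: a state is safe if some order lets every process
# finish, each returning its allocation to the available pool as it does.
def is_safe(available, allocation, need):
    work = available[:]
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion and release its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

allocation = [[0, 1], [2, 0], [1, 1]]
need       = [[3, 2], [1, 1], [0, 0]]
print(is_safe([1, 1], allocation, need))  # True: order P1 -> P2 -> P0 works
```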

Recovery : Kill one or more processes, preempt resources, or reallocate
resources systematically.

Deadlock Prevention or Avoidance :

1. Prevention: ensure that at least one of the four deadlock conditions cannot
hold, e.g. by imposing an ordering/priority on resource allocation.
2. Avoidance: continuously apply a safety check (such as the Banker's
algorithm) or a detection algorithm before granting requests.

Memory Management
The size of memory decides the degree of multiprogramming. Since the main aim
of the OS is to keep CPU utilization high, efficient memory management is
important.

1. Fixed Partitioning : The memory is divided into fixed-sized partitions. This
technique can suffer from internal and external fragmentation, and it also
limits the degree of multiprogramming.
2. Dynamic Partitioning : The memory is divided into variable-sized partitions
depending on the process size. But when several processes finish execution, the
space they free is not contiguous, so a new process may not fit in memory even
though enough total memory is free.
 Dynamic partitions do not suffer from internal fragmentation, but they do
suffer from external fragmentation. The degree of multiprogramming is higher.
3. Compaction : All the free holes of dynamically partitioned memory are merged
together to form one contiguous block. This tackles external fragmentation and
is therefore also called defragmentation. However, a huge amount of time is
required to merge all the holes, which makes the system inefficient.

Partition Allocation Algorithms :

1. First Fit : The first block big enough to accommodate the process is
selected. It can suffer from internal fragmentation.
2. Best Fit : The smallest block that still fits the process is selected, so
that internal fragmentation is minimized.
3. Worst Fit : The biggest available block is selected to accommodate the
process, then the second biggest, and so on.
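The three strategies differ only in which qualifying hole they pick, which a
short sketch makes clear (the hole sizes and request size are made-up numbers):

```python
# Each function returns the index of the chosen hole, or None if nothing fits.
def first_fit(holes, size):
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None   # tightest fit, least leftover

def worst_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None   # biggest hole

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 -> the 500 block (first that fits)
print(best_fit(holes, 212))   # 3 -> the 300 block (smallest that fits)
print(worst_fit(holes, 212))  # 4 -> the 600 block (largest available)
```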

Need for Paging : Since compaction makes the system inefficient, we instead
divide the process into a number of pages so that they can be allocated to
different holes in memory.

Paging : The mechanism of fetching pages of a process from secondary memory
into main memory, which is divided into a number of frames. The frame size is
equal to the page size.

For each process there is a page table that maps each page number to the frame
number where that page is loaded.

Physical Address : The actual address in main memory.

Logical Address : The virtual address generated by the CPU. The MMU maps and
translates logical addresses to physical addresses: the logical address is
split into a page number and an offset, and the page table supplies the frame
number for that page. This provides abstraction, so a process can access memory
without knowing the actual physical location.
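The translation is simple arithmetic, sketched here with an assumed 1 KB page
size and a hypothetical page table (both are invented for illustration):

```python
PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (assumed)

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)  # split the logical address
    frame = page_table[page]                   # page-table lookup (the MMU's job)
    return frame * PAGE_SIZE + offset          # physical address

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2*1024 + 6 = 2054
```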

Virtual Memory : The illusion of having a large main memory, which in reality
is a chunk of secondary memory treated as main memory, called the swap space.

How it works : Instead of loading a whole big process into main memory, pages
of different processes are loaded into main memory so that the degree of
multiprogramming increases. If we need some other page of a process and there
is no space, we can move the least recently used page from main memory out to
the swap space.

Advantages : increased degree of multiprogramming. Disadvantage : thrashing
may occur.

Demand Paging : The process of bringing a required page into main memory on
demand, by swapping pages in and out between main memory and the swap space.

SEGMENTATION :

In paging we divide the process into multiple pages, but a function in the
process may get split across multiple pages that are not all in main memory at
the same time, which makes the system inefficient.

Segmentation is a technique in which memory is divided into variable-sized
segments that are allocated to processes. The segment table maps a segment
number and offset to a physical address. Using segmentation, the process is
split along logical boundaries, e.g. the main function in one segment and
library functions in another.

Advantages : no internal fragmentation. Disadvantages : suffers from external
fragmentation.

What is Thrashing ?

Ans : When the system is busy servicing page faults rather than executing
processes, it is said to be thrashing. For example, main memory holds pages of
many different processes, so a large number of page faults occur while
executing any single process; this is a disadvantage of paging.

To solve this we can set an upper and a lower bound on the page-fault rate. If
the rate exceeds the upper bound, allocate more frames to the process; if it
falls below the lower bound, remove frames from the process.

Page Replacement Algorithms :


1. FIFO : The oldest page is replaced by the new page, i.e. the page that came
into memory first is replaced first.
2. LRU : The page that has not been used for the longest time in the past is
replaced by the new page.
3. Optimal : The page that will not be used for the longest time in the future
is replaced by the new page. It gives the fewest page faults, but it is
hypothetical and very difficult to implement.

Belady's Anomaly : With LRU and Optimal, increasing the number of frames
decreases the page-fault rate; but with FIFO, the number of page faults can
increase in spite of increasing the number of frames.
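The anomaly can be demonstrated with a small FIFO simulation on the classic
reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, where four frames fault
more often than three:

```python
from collections import deque

# Count FIFO page faults for a reference string and a given frame count.
def fifo_faults(refs, frames):
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())  # evict the oldest page
            memory.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults with 3 frames
print(fifo_faults(refs, 4))  # 10 faults with 4 frames: Belady's anomaly
```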

What are starvation and aging in OS?


Starvation: A resource management problem where a process does not get the
resources it needs for a long time because the resources keep being allocated
to other processes.
Aging: A technique to avoid starvation in a scheduling system. It works by
adding an aging factor to the priority of each request; the aging factor
increases the priority of a request as time passes, ensuring that every request
eventually becomes the highest-priority request.

Locality of Reference refers to the tendency of a computer program to access
instructions and data whose addresses are near one another.

What is rotational latency?


Rotational Latency: The time taken by the desired sector of the disk to rotate
into a position where it can be accessed by the read/write heads. A disk
scheduling algorithm that gives minimum rotational latency is better.

What is seek time?
Seek Time: The time taken to move the disk arm to the specified track where the
data is to be read or written. A disk scheduling algorithm that gives a minimum
average seek time is better.

What is cache memory ?


Ans : A small, fast memory used to store frequently used data from main memory
so that the CPU can get it instantly instead of going to main memory. It is
costlier than main and secondary memory.

What is a Buffer?
A buffer is a memory area that stores data being transferred between two
devices or between a device and an application.

Spooling refers to putting jobs in a buffer – a special area in memory or on
disk – where a device can access them when it is ready.

Interrupts are signals emitted by hardware or software when a process or an
event needs immediate attention.

User-level threads vs. kernel-level threads :
- User threads are implemented by user-level libraries; kernel threads are
implemented by the OS.
- The OS does not recognize user-level threads; kernel threads are recognized
by the OS.
- Implementation of user threads is easy; implementation of kernel threads is
complicated.
- Context switch time is less for user threads and more for kernel threads.
- A user-thread context switch requires no hardware support; kernel threads
need hardware support.
- If one user-level thread performs a blocking operation, the entire process is
blocked. If one kernel thread performs a blocking operation, another thread can
continue execution.
Difference between vertical and horizontal scaling

Scaling alters the size of a system: we either shrink or grow the system to
meet the expected needs. Scaling can be achieved by adding resources to the
existing system, by adding new systems alongside it, or both.
Vertical scaling keeps your existing infrastructure but adds computing power. Your existing pool
of code does not need to change — you simply need to run the same code on machines with
better specs. By scaling up, you increase the capacity of a single machine and increase its
throughput. Vertical scaling allows data to live on a single node, and scaling spreads the load
through CPU and RAM resources for your machines.

Horizontal scaling simply adds more instances of machines without first implementing
improvements to existing specifications. By scaling out, you share the processing power and load
balancing across multiple machines.
