
Operating System

Note: This PPT content is for reference only and may contain mistakes, so it is advised to go through the reference book.
Unit 1
What is OS?
• OS stands for Operating System.
Definition
• It acts as an interface between user applications and hardware.
• It is a system program that plays the role of resource manager.
• It is a set of utilities that simplify application development and execution.
• It acts like a government for the computer's resources.
• Examples of operating systems are Windows, Linux, macOS, etc.
• Hardware – CPU, memory, I/O devices.
Goals of OS
• User convenience – how easily the user can interact with the hardware.
• Efficiency – how well the OS can utilize resources.
• Efficiency is generally evaluated in terms of throughput.
• Throughput refers to the number of tasks executed per second.
• In Windows, user convenience is the primary goal and efficiency the secondary goal.
• In Linux, efficiency is the primary goal and user convenience the secondary goal.
• Other goals:
• reliability
• portability
• robustness
• scalability
Parts of OS
• Generally two parts:
• Shell
• Kernel
• The OS consists of different modules (system programs).
• The kernel consists of the functions that define what the different modules of the OS do.
• For example, right-clicking on an empty place in a file folder shows different menus (different modules defined).
• Double click and right click have different functions defined, and these are stored in the kernel.
• To reach/interact with the kernel we use the shell.
• Generally, there are two kinds of shell – GUI or CLI (command line).
• Each OS has a different kernel.
• A kernel can have multiple shells.
• For example, Windows supports both GUI and CLI.
• Linux generally emphasizes the CLI.
Program Modes of Operation
• User Mode
• Kernel Mode
• In user mode, a program under execution does not have full access to system resources.
• A mode bit defines the current mode: 1 for user mode, 0 for kernel mode.
• If every access were allowed, creating viruses would be easy.
• In kernel mode, a program under execution can access all resources.
• Kernel mode is also known as privileged mode.
System Call
• A system call is the way by which a user program interacts with the OS.
• Double click – shell -> system call -> kernel.
• Shell scripting – different commands and what they do.
• File – open(), read(), write(), close(), create file.
• Process control – load, execute, abort, fork, wait, signal, etc.
• Information – get pid, attributes, system time and date.
• Device related – read, write, ioctl (a sketch follows below).
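The sketch below is only an illustration (not from the slides) of how a user program reaches these calls through library wrappers; Python's os module is used here, the file name demo.txt is an arbitrary assumption, and fork()/wait() work only on Unix-like systems.

```python
# Information, process-control and file-related system calls via Python's os wrappers.
import os

print("pid:", os.getpid())          # information: get process id

pid = os.fork()                     # process control: create a child process (Unix only)
if pid == 0:                        # child process
    fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY)  # file: create/open
    os.write(fd, b"hello from the child\n")             # file: write
    os.close(fd)                                        # file: close
    os._exit(0)                     # process control: terminate the child
else:                               # parent process
    os.wait()                       # process control: wait for the child to finish
```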
Functions of OS
• Resource management – mapping hardware resources to the multiple applications running simultaneously.
• Process management – how to run multiple processes on the system hardware (CPU scheduling).
• Storage management – how processes and files are stored in secondary memory (file systems such as NTFS, FAT32).
• Memory management – how to store processes and decide which process to bring into RAM, since RAM is limited (multiprogramming/multitasking).
• Security and privacy – authentication/passwords, user and admin modes, ensuring one program cannot interfere with another program's resources.
• Accounting – e.g. Task Manager.
Types of OS
• Batch Processing OS
• Multiprogramming
• Multitasking
• Multiprocessing
• Real Time OS
• Embedded
• Distributed OS
• Generally 3 components- CPU , Memory, I/O.
• OS resides in Main memory (RAM).
Batch Processing
• This type of operating system does not interact with the computer directly.
• An operator takes similar jobs having the same requirements and groups them into batches.
• It is the responsibility of the operator to sort jobs with similar needs.
• Only one program resides in main memory, leading to low CPU utilization.
Advantages of Batch Operating System
• Job completion time is known.
• Multiple users can share the batch system.
Disadvantages of Batch Operating System
• The computer operators should be familiar with batch systems.
• Batch systems are hard to debug.
• The other jobs will have to wait for an unknown time.
Multiprogramming OS
• To increase CPU utilization, we allow more than one program to reside in RAM.
• The degree of multiprogramming (DOM) represents how many processes reside in main memory.
• As we increase the DOM, CPU utilization also increases.
• When any process leaves for I/O, another process is scheduled on the CPU.
• At a time, only one process is executed by the CPU.
Advantages of Multiprogramming Operating System
• Multiprogramming increases the throughput of the system.
• It helps in reducing the response time.
Disadvantages of Multiprogramming Operating System
• There is no facility for user interaction with the system while jobs run.
Multitasking OS
• It is also known as the pre-emptive version of multiprogramming.
• It is simply a multiprogramming operating system with a Round-Robin scheduling algorithm.
• It can run multiple programs simultaneously.
• But users are unaware of it.
Advantages of Multitasking Operating System
• Better user responsiveness.
Disadvantages of Multitasking Operating System
• The user interacts with only one program at a time.
Multiprocessing OS
• More than one CPU resides in the system.
• Enhanced throughput and robustness are seen.
• Generally of two types:
• Tightly coupled (shared) – all CPUs share a common memory.
• Loosely coupled – each CPU has its own memory.
Real Time OS
• OSs serve real-time systems.
• The time interval required to process and
respond to inputs is very small.
• This time interval is called response
time.
• Real-time systems are used when there
are time requirements that are very strict
like missile systems, air traffic control
systems, robots, etc.
• Generally of two types:
• Hard RTOS – strict response time, e.g. missile systems, airbag systems.
• Soft RTOS – the deadline can be slightly delayed, e.g. banking, streaming.
Distributed Operating System
• Various autonomous interconnected
computers communicate with each other
using a shared communication network.
• Independent systems possess their own
memory unit and CPU.
• These are referred to as loosely coupled
systems or distributed systems.
• High availability and robustness
Advantages of Distributed Operating
System
• Failure of one will not affect the other
network communication, as all systems are
independent of each other.
• Scalable, and load balanced
Disadvantage
• Synchronization
Embedded OS
• Designed for a specific purpose.
• Basically embedded in machines to make them smart.
• E.g. microwave, AC.
• User interaction with the OS is minimal.
• Rigid in nature (once developed, it cannot be easily reprogrammed).
Unit 2 – Process Management
What is a Process?
• A program under execution is known as a process.
• Initially the program is stored in secondary memory.
• For execution it has to be brought into main memory.
• Process management involves identifying the steps involved in completing a task, assessing the resources required for each step, and determining the best way to execute the task.
• Stack – local variables, function calls.
• Heap – dynamic memory allocation.
• Data – global and static data.
• Text – the program code currently being executed.
Process Attributes
• A process has the following attributes.
• Process Id: A unique identifier assigned by the operating system.
• Process State: Can be ready, running, etc.
• Program Counter
• List of General-purpose Registers
• Priority Info
• List of files
• List of devices.
• All of the above attributes of a process are also known as the context of the
process.
• Every process has its own process control block(PCB), i.e. each process will have a
unique PCB.
• This PCB info is also stored in RAM.
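As a rough illustration, the PCB can be pictured as a record holding exactly these attributes. The sketch below is only an assumption-level model (the field names are made up for illustration, not a real kernel structure).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PCB:                       # one Process Control Block per process
    pid: int                     # unique process id
    state: str = "new"           # new, ready, running, waiting, terminated
    program_counter: int = 0     # address of the next instruction
    registers: dict = field(default_factory=dict)         # saved general-purpose registers
    priority: int = 0            # priority info
    open_files: List[str] = field(default_factory=list)   # list of files
    devices: List[str] = field(default_factory=list)      # list of allocated devices

# The "context" of a process is exactly this saved information.
p1 = PCB(pid=1, priority=5)
print(p1)
```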
Process State
• A process is in one of the following states:
• New: Newly Created Process (or) being-created process.(App
downloaded) (secondary memory)
• Ready: After the creation process moves to the Ready state, i.e. the
process is ready for execution. (Main memory )
• Run: Currently running process in CPU (only one process at a time can be
under execution in a single processor) (Main memory). (dispatcher)
• Wait (or Block): When a process requests I/O access.(Main memory)
• Complete (or Terminated): The process completed its execution.(out of
Main memory)
• Suspended Ready: When the ready queue becomes full, some processes
are moved to a suspended ready state
• Suspended Block: When the waiting queue becomes full.
• When any process changes its state, context switching happens.
Context Switching of Process
• The process of saving the context of one process and loading the context of another
process is known as Context Switching. In simple terms, it is like loading and unloading
the process from the running state to the ready state and vice versa.
• It is performed by the dispatcher, a small program that performs the context switch.
• When Does Context Switching Happen?
• 1. When a high-priority process comes to a ready state (i.e. with higher priority than the
running process).
2. An Interrupt occurs.
3. User and kernel-mode switch (It is not necessary though)
4. Preemptive CPU scheduling is used.
CPU-Bound vs I/O-Bound Processes
• A CPU-bound process requires more CPU time or spends more time in the running
state. An I/O-bound process requires more I/O time and less CPU time. An I/O-bound
process spends more time in the waiting state.
• Process scheduling is an integral part of process management in the operating system. It refers to the mechanism used by the operating system to determine which process to run next. The goal of process scheduling is to improve overall system performance by maximizing CPU utilization, minimizing execution time, and improving system response time.
Scheduling Queue
• Job queue- to store processes of
new state.
• Ready Queue- to store processes
of ready state.
• Device Queue- to store processes
of wait state.
Types of scheduler:
• Long-term scheduler (job scheduler): brings a job from the new state to the ready state.
• Short-term scheduler (CPU scheduler): brings a job from the ready state to the running state.
• Medium-term scheduler: moves a job from the ready state to the suspended state.
Objectives of a Process Scheduling Algorithm:
• Utilization of the CPU at the maximum level: keep the CPU as busy as possible.
• Allocation of the CPU should be fair.
• Throughput should be maximum, i.e. minimum turnaround time: the time taken by a process to finish execution should be the least.
• There should be minimum waiting time, and a process should not starve in the ready queue.
• Minimum response time: the time taken for a process to produce its first response should be as small as possible.
What are the different terminologies to take care of in any CPU scheduling algorithm?
• Arrival Time: time at which the process arrives in the ready queue.
• Completion Time: time at which the process completes its execution.
• Burst Time: time required by a process for CPU execution.
• Turn Around Time: the difference between completion time and arrival time.
  Turn Around Time = Completion Time – Arrival Time
• Waiting Time (W.T.): the difference between turnaround time and burst time.
  Waiting Time = Turn Around Time – Burst Time
Types of Scheduling
• Preemptive Scheduling: used when a process switches from the running state to the ready state or from the waiting state to the ready state.
• Non-Preemptive Scheduling: used when a process terminates, or when a process switches from the running state to the waiting state.
FCFS (First Come First Serve)
Criteria:
• Scheduled based on arrival time.
• If there is a tie, prefer the process with the lower process id.
Type:
• Non-pre-emptive
• To solve questions we draw a Gantt chart, which keeps track of the order of process execution.
• The chart always starts at time zero.
• Ready queue: P1, P2, P3
TAT = CT – AT
WT = TAT – BT
Response time: the first time the process gets the CPU.
P1 = 0
P2 = 27
P3 = 33
(A calculation sketch follows below.)
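A small sketch of the FCFS bookkeeping. The burst times below are assumed sample values (the slide's Gantt chart is not reproduced here), chosen so that the start times line up with the response times 0, 27, 33 quoted above.

```python
# FCFS: serve in arrival order; ties broken by lower process id.
procs = [("P1", 0, 27), ("P2", 0, 6), ("P3", 0, 4)]   # (pid, arrival, burst); sample values

time = 0
for pid, at, bt in sorted(procs, key=lambda p: (p[1], p[0])):
    start = max(time, at)          # CPU may be idle until the process arrives
    ct = start + bt                # completion time
    tat = ct - at                  # turnaround time = CT - AT
    wt = tat - bt                  # waiting time = TAT - BT
    print(pid, "start", start, "CT", ct, "TAT", tat, "WT", wt)
    time = ct
```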
• Convoy Effect:
• If the first process that arrives in the system has a long burst time, then the waiting time of all the remaining processes increases.
• Short processes may also have to wait for a long time.
• FCFS faces this problem.
Shortest Job First( SJF)
Criteria :
• Burst time or Execution Time
• Tie Breaker- arrival time and process id.
Type:
• Non-preemptive
Advantages of Shortest Job First:
• As SJF reduces the average waiting time, it is better than the first come first serve scheduling algorithm.
Disadvantages of SJF:
• One of the demerits of SJF is starvation.
• Not practically implementable, as the burst time of a process is not known in advance.
Shortest Remaining Time First(SRTF)
Criteria :
• Burst time or Execution Time
• Tie Breaker- arrival time and process id.
Type:
• Preemptive (so short process arrives it
pre-empt already running process)
Advantages of SRTF:
• In SRTF, short processes are handled very fast.
Disadvantages of SRTF:
• One of the demerits of SRTF is starvation.
• Not practically implementable, as the burst time of a process is not known in advance.
Note: In case of pre-emption, waiting and response time are different, while in non-pre-emption they are the same. (A simulation sketch follows below.)
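A hedged sketch of SRTF: at each time unit the arrived process with the smallest remaining time runs, so a newly arrived short job pre-empts the currently running one. The arrival/burst values are made-up sample data.

```python
# SRTF: preemptive SJF; pick the smallest remaining time among arrived processes.
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 2)}     # pid -> (arrival, burst); sample data
remaining = {pid: bt for pid, (at, bt) in procs.items()}
completion, t = {}, 0

while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    if not ready:                       # CPU idle until the next arrival
        t += 1
        continue
    p = min(ready, key=lambda x: (remaining[x], procs[x][0], x))  # ties: arrival, then pid
    remaining[p] -= 1                   # run the chosen process for one time unit
    t += 1
    if remaining[p] == 0:
        completion[p] = t
        del remaining[p]

for p, (at, bt) in procs.items():
    tat = completion[p] - at
    print(p, "CT", completion[p], "TAT", tat, "WT", tat - bt)
```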
Priority Based Scheduling
Criteria:
• Processes are scheduled based on priority.
• Mostly the priority is unique for each process; if a tie breaker is needed, the question always gives a hint.
Type:
• Pre-emptive and non-pre-emptive.
• Whether a higher or a lower number means higher priority is stated in the question.
• Suffers from the starvation problem.
• Unless stated otherwise, solve using the pre-emptive form.
Starvation and its Counter Measure
• Starvation is the problem that occurs when high priority processes keep executing and low priority processes get blocked for an indefinite time.
• In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
• In starvation, resources are continuously utilized by high priority processes.
• The problem of starvation can be resolved using aging.
• In aging, the priority of long-waiting processes is gradually increased.
Round Robin
Criteria:
• Arrival time + time quantum.
• The time quantum is the maximum time for which a process is assigned the CPU; after this time the CPU is pre-empted from the process.
• In case of a tie, the process with the lower process id is preferred.
• E.g. fielding practice, where everyone gets a turn in rotation.
Type:
• Preemptive.
Note: Choosing the time quantum is very critical. Too small – low CPU efficiency (too many context switches); too large – low user interaction (degenerates to FCFS). A simulation sketch follows below.
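A minimal round-robin sketch with an assumed time quantum of 2 and made-up arrival/burst values; each process runs for at most one quantum and then rejoins the back of the ready queue.

```python
from collections import deque

quantum = 2                                           # assumed time quantum
procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 4)]    # (pid, arrival, burst); sample data
remaining = {pid: bt for pid, at, bt in procs}
arrivals = deque(sorted(procs, key=lambda p: p[1]))
ready, t, completion = deque(), 0, {}

while ready or arrivals:
    while arrivals and arrivals[0][1] <= t:            # admit newly arrived processes
        ready.append(arrivals.popleft()[0])
    if not ready:                                      # CPU idle: jump to the next arrival
        t = arrivals[0][1]
        continue
    pid = ready.popleft()
    run = min(quantum, remaining[pid])                 # run for at most one quantum
    t += run
    remaining[pid] -= run
    while arrivals and arrivals[0][1] <= t:            # arrivals during this quantum
        ready.append(arrivals.popleft()[0])
    if remaining[pid] > 0:
        ready.append(pid)                              # pre-empted: back of the queue
    else:
        completion[pid] = t

print(completion)                                      # completion time of each process
```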
Multilevel Queue Scheduling
• Generally only one scheduling policy is allowed in the system.
• But if the designer wants to opt for more than one scheduling policy, we opt for multilevel queue scheduling.
• For example, a common division is foreground (interactive) processes and background (batch) processes.
• Generally we do this in two ways:
• Fixed priority preemptive scheduling – run all processes of high priority first and then serve the low priority ones.
  • Starvation problem
  • Inflexibility
• Time slicing – divide the CPU time into fractions and serve each queue accordingly.
Multilevel Feedback Queue Scheduling
• Multilevel Feedback Queue (MLFQ) CPU scheduling is like Multilevel Queue (MLQ) scheduling, but here a process can move between the queues. It is thus much more efficient than multilevel queue scheduling.
• Priorities are adjusted dynamically.
• Multiple queues are used.
Advantages of Multilevel Feedback Queue Scheduling:
• It is more flexible.
• It allows different processes to move between different queues.
• It prevents starvation by moving a process that waits too long in a lower priority queue to a higher priority queue.
• Example setup: queues 1 and 2 follow round robin with time quantum 4 and 8 respectively, and queue 3 follows FCFS.
Q. Consider a system that has a CPU-bound process which requires a burst time of 40 seconds. The multilevel feedback queue scheduling algorithm is used; the time quantum of the first queue is 2 seconds and at each level it is incremented by 5 seconds. How many times will the process be interrupted, and in which queue will it terminate?
• Process P needs 40 Seconds for total execution.
• At Queue 1 it is executed for 2 seconds and then interrupted and shifted to queue 2.
• At Queue 2 it is executed for 7 seconds and then interrupted and shifted to queue 3.
• At Queue 3 it is executed for 12 seconds and then interrupted and shifted to queue 4.
• At Queue 4 it is executed for 17 seconds and then interrupted and shifted to queue 5.
• At Queue 5 it executes for 2 seconds and then it completes.
• Hence the process is interrupted 4 times and completed on queue 5.
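The arithmetic above can be checked with a short sketch: queue i gets quantum 2 + 5·(i − 1), and the process is counted as interrupted whenever its quantum expires before the remaining burst does.

```python
# Quantum of level i (1-based): 2, 7, 12, 17, 22, ... (first quantum 2, +5 per level)
burst, level, interruptions = 40, 1, 0

while burst > 0:
    q = 2 + 5 * (level - 1)          # time quantum of the current queue
    run = min(q, burst)
    burst -= run
    if burst > 0:                    # quantum expired before completion
        interruptions += 1
        level += 1                   # demoted to the next lower queue
print("interrupted", interruptions, "times; finished in queue", level)
# -> interrupted 4 times; finished in queue 5
```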
Process Synchronization
• Processes need to communicate with each other to achieve an objective.
• For example, when you create a file in MS Word and give a print command, Word communicates with the printer driver program. The OS acts as the interface for this communication.
• Generally, there are two types of processes:
• Independent
• Cooperating
• Independent processes are those which do not communicate with any other process. The execution of one process does not affect the execution of other processes.
• Cooperating processes are those which communicate with other processes to attain a common goal. A cooperating process can affect or be affected by other processes executing in the system.
• Process synchronization is the coordination of execution of multiple processes in a multi-process system to ensure that they access shared resources in a controlled and predictable manner.
• The main objective of process synchronization is to ensure that multiple processes access shared resources without interfering with each other and to prevent the possibility of inconsistent data due to concurrent access.
Need of Synchronization
• Required between cooperating processes.
• To obtain the expected result of execution.
Problems that can occur without synchronization:
• Inconsistency
• Data loss
• Deadlock
Where is synchronization required?
• When two processes want to communicate, i.e. when they want to access shared common resources.
• The code that accesses the shared resource is kept in the critical section.
• The critical section is the segment of code where shared variables are accessed.
Race Condition:
• When more than one process executes the same code or accesses the same memory or shared variable, there is a possibility that the output or the value of the shared variable is wrong; all the processes "race" to produce their result first. This condition is known as a race condition.
• Several processes access and manipulate the same data concurrently, and the outcome depends on the particular order in which the accesses take place.
• A race condition is a situation that may occur inside a critical section. It happens when the result of multiple thread executions in the critical section differs according to the order in which the threads execute.
Requirements for the Critical Section Problem
• Mutual Exclusion
• Progress
• Bounded Waiting
• Mutual exclusion: if a process is accessing the critical section, no other process is allowed to access the critical section.
• Progress: if there is no process in the critical section and some process is interested in accessing the critical section, then that process should be allowed to enter the critical section.
• Bounded waiting: if a process is in the critical section and another process is waiting to get into the critical section, then after completing, the first process is not allowed to re-enter the critical section immediately, so that the waiting of the other process is bounded (alternate entering).
Solution using Lock
Solution using Turn
• Progress: not satisfied, as P1 cannot proceed when it is not its turn, even if the critical section is free.
• Bounded Waiting: processes can enter the CS only in strict alternation.
Peterson's Solution
• Flag – stores the intention of a process regarding entering the critical section.
• Turn – a variable that gives the other process preference to enter the CS.
Note: Works perfectly for only two processes. (A sketch follows below.)
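Below is a sketch of Peterson's algorithm for two threads using the flag and turn variables described above; Python threads are used purely to illustrate the structure, and the iteration count is an arbitrary choice.

```python
import threading

flag = [False, False]     # flag[i]: process i intends to enter its critical section
turn = 0                  # whose turn it is to defer to
counter = 0               # shared variable protected by the algorithm

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(10000):
        flag[i] = True            # announce intention to enter
        turn = other              # give the other process preference
        while flag[other] and turn == other:
            pass                  # busy-wait while the other is interested and has the turn
        counter += 1              # critical section
        flag[i] = False           # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)                    # expected 20000 if mutual exclusion holds
```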
Semaphore
• A semaphore is just a normal (integer) variable that helps in achieving process synchronization.
• Semaphores are used to enforce mutual exclusion, avoid race conditions, and implement synchronization between processes.
• To access a semaphore's value, we use the functions:
• Wait() or P() or Down()
• Signal() or V() or Up()
• Semaphores are of two types:
1. Binary Semaphore – also known as a mutex lock. It can have only two values, 0 and 1, and its value is initialized to 1. It is used to implement the solution of critical section problems with multiple processes.
2. Counting Semaphore – its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.
Classical Problem of synchronization
• Bounded Buffer Problem/Producer Consumer Problem
• Reader Writer Problem
• Dining Philosopher Problem
• We have a buffer of fixed size. A producer can produce an item and can place in the
buffer. A consumer can pick items and can consume them.
• We need to ensure that when a producer is placing an item in the buffer, then at the same
time consumer should not consume any item.
• In this problem, buffer is the critical section.
Producer Consumer Problem
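Since the slide's code is not reproduced here, the following is a hedged sketch of the classical bounded-buffer solution using a mutex plus counting semaphores empty and full; the buffer size and item count are arbitrary choices.

```python
import threading
from collections import deque

N = 5                                   # buffer capacity (assumed)
buffer = deque()
mutex = threading.Semaphore(1)          # binary semaphore guarding the buffer
empty = threading.Semaphore(N)          # counts empty slots
full = threading.Semaphore(0)           # counts filled slots

def producer():
    for item in range(10):
        empty.acquire()                 # wait(empty): block if the buffer is full
        mutex.acquire()                 # wait(mutex)
        buffer.append(item)             # place the item in the buffer
        mutex.release()                 # signal(mutex)
        full.release()                  # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()                  # wait(full): block if the buffer is empty
        mutex.acquire()
        item = buffer.popleft()         # remove an item from the buffer
        mutex.release()
        empty.release()
        print("consumed", item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
```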
Reader Writer Problem
Solution
• Writer process
• Case 1: 1 writer, 2 readers
• Case 2:
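The solution slides are not reproduced here; the following is a hedged sketch of the first readers–writers solution (readers have preference), using a read_count protected by a mutex and a wrt semaphore shared with writers. It corresponds to Case 1 above (1 writer, 2 readers).

```python
import threading

wrt = threading.Semaphore(1)        # writers (and the first/last reader) lock this
mutex = threading.Semaphore(1)      # protects read_count
read_count = 0
shared_data = 0

def writer(value):
    global shared_data
    wrt.acquire()                   # exclusive access for the writer
    shared_data = value             # critical section: write
    wrt.release()

def reader(rid):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:             # first reader locks writers out
        wrt.acquire()
    mutex.release()
    print("reader", rid, "sees", shared_data)   # critical section: read
    mutex.acquire()
    read_count -= 1
    if read_count == 0:             # last reader lets writers in again
        wrt.release()
    mutex.release()

threads = [threading.Thread(target=writer, args=(42,))] + \
          [threading.Thread(target=reader, args=(i,)) for i in (1, 2)]
for t in threads: t.start()
for t in threads: t.join()
```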
Dining Philosophers Problem
Solutions to DPP
Unit 3 : Deadlock
Operations on Resources
• Request: to get control of / access to a resource.
• Use: after the request is granted, the process can access the resource.
• Release: after accessing the resource, the process has to release it.
Deadlock:
• If two or more processes are waiting on an event that will never occur, then these processes are said to be deadlocked.
• E.g. recruitment that requires experience, while experience requires a job.
• E.g. a bat-and-ball game between siblings, each holding one item and waiting for the other.
Necessary Conditions for Deadlock
• Deadlock can occur only when all four conditions are satisfied simultaneously:
1. Mutual exclusion
2. Hold and wait
3. No preemption
4. Circular wait
• Mutual exclusion: if one process is accessing or using a resource, no other process is allowed to access that resource.
• Hold and wait: a process holds at least one resource and waits for at least one more resource.
• No pre-emption: no resource can be pre-empted (forcibly taken) from any process.
• Circular wait: all deadlocked processes wait for each other in a circular manner.
Resource Allocation Graph (Directed)
• Generally two types of vertices:
• Process
• Resource
• Two types of edges:
• Request edge – from a process to a resource.
• Allocation edge – from a resource to a process.
Deadlock Tackling technique
• Deadlock prevention
• Deadlock avoidance
• Deadlock detection
Deadlock Prevention
Deadlock Detection
• Case 1: resources have single instances.
• Case 2: resources have multiple instances.
• For single instances, draw a wait-for graph (WFG), a variant of the resource allocation graph: remove the resources and directly connect the processes, so we can see how processes are waiting on each other.
• If resources have more than one instance, a cycle in the graph is not sufficient for deadlock (it is a possibility, but not a guarantee).
• Here, a cycle does not guarantee a deadlock.
Process   Allocation (R1 R2)   Request (R1 R2)   Available (R1 R2)
P1        0  1                 1  0              0  0
P2        1  0                 0  0
P3        1  0                 0  1
P4        0  1                 0  0

Note: If all the processes can complete, then there is no deadlock.
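The table above can be checked with the standard detection algorithm for multiple instances: repeatedly pick a process whose request fits within Available, assume it finishes, and reclaim its allocation. The sketch below encodes the table's values.

```python
# Data from the table above: two resource types R1, R2.
allocation = {"P1": [0, 1], "P2": [1, 0], "P3": [1, 0], "P4": [0, 1]}
request    = {"P1": [1, 0], "P2": [0, 0], "P3": [0, 1], "P4": [0, 0]}
available  = [0, 0]

finished = set()
progress = True
while progress:
    progress = False
    for p in allocation:
        if p in finished:
            continue
        if all(request[p][i] <= available[i] for i in range(len(available))):
            # p can run to completion; reclaim its allocation
            available = [available[i] + allocation[p][i] for i in range(len(available))]
            finished.add(p)
            progress = True

deadlocked = set(allocation) - finished
print("deadlocked processes:", deadlocked or "none")   # -> none for this table
```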
Deadlock Recovery
Process termination
• Terminate all deadlocked processes, or terminate them one by one until the deadlock is broken.
• Kill the low priority process.
• Kill based on how close the process is to completion.
• Kill based on priority.
Resource pre-emption
• Pre-empt resources from the process that is holding more resources.
• Pre-empt resources from the process that is demanding more resources.
Two-Phase Locking Protocol
• This is more related to transactions in databases.
• One method to ensure the isolation property of transactions is to require that data items be accessed in a mutually exclusive manner.
• A transaction is allowed to access a data item only if it is currently holding a lock on that item.
• Generally two types of locks are used:
• Shared lock – also called a read lock, used for reading data items only (Lock-S).
• Exclusive lock – with an exclusive lock, a data item can be read as well as written. Also called a write lock (Lock-X).
S.No.  Shared Lock                                            Exclusive Lock
1.     Lock mode is read-only operation.                      Lock mode is read as well as write operation.
2.     Can be placed on objects that do not have an           Can only be placed on objects that do not have
       exclusive lock already placed on them.                 any other kind of lock.
3.     Prevents others from updating the data.                Prevents others from reading or updating the data.
4.     Issued when a transaction wants to read an item        Issued when a transaction wants to update an
       that does not have an exclusive lock.                  unlocked item.
5.     Any number of transactions can hold a shared lock      An exclusive lock can be held by only one
       on an item.                                            transaction.
6.     An S-lock is requested using the lock-S instruction.   An X-lock is requested using the lock-X instruction.
7.     Example: multiple transactions reading the same data.  Example: a transaction updating a table row.
• Serializability in DBMS guarantees that the execution of multiple transactions in parallel does not produce any unexpected or incorrect results.
• Implementing this lock system without any restrictions gives us the simple lock-based protocol (or binary locking), but it has its own disadvantages: it does not guarantee serializability.
• Two Phase Locking
• A transaction is said to follow the Two-Phase Locking protocol if Locking and Unlocking
can be done in two phases.
• Growing Phase: New locks on data items may be acquired but none can be released.
• Shrinking Phase: Existing locks may be released but no new locks can be acquired.
Note: If lock conversion is allowed, then upgrading of lock( from S(a) to X(a) ) is allowed
in the Growing Phase, and downgrading of lock (from X(a) to S(a)) must be done in the
shrinking phase.
• This is just a skeleton transaction that shows how locking and unlocking work with 2-PL. Note for:
• Transaction T1
• The growing phase is from steps 1–3.
• The shrinking phase is from steps 5–7.
• Lock point at step 3.
• Transaction T2
• The growing phase is from steps 2–6.
• The shrinking phase is from steps 8–9.
• Lock point at step 6.
Note: 2PL suffers from deadlock.
Domain of Protection
• Reference: Protection in OS: Domain of Protection, Association, Authentication – GeeksforGeeks
• Reference: Access Matrix in Operating System – GeeksforGeeks
Access Matrix in OS
• The access matrix is a security model of the protection state in a computer system. It is represented as a matrix.
• The access matrix is used to define the rights of each process executing in a domain with respect to each object.
• The rows of the matrix represent domains and the columns represent objects.
• Each cell of the matrix represents a set of access rights granted to the processes of a domain, i.e. each entry (i, j) defines the set of operations that a process executing in domain Di can invoke on object Oj.
Different types of rights:
There are different types of rights the files can have. The most common ones are:
1. Read – a right given to a process in a domain which allows it to read the file.
2. Write – a process in the domain can write into the file.
3. Execute – a process in the domain can execute the file.
4. Print – a process in the domain only has access to the printer.
Sometimes, domains can have more than one right, i.e. a combination of the rights mentioned above.

          F1           F2      F3           Printer
D1        read                 read
D2                                          print
D3                     read    execute
D4        read write           read write

Observations of the above matrix:
• There are four domains and four objects – three files (F1, F2, F3) and one printer.
• A process executing in D1 can read files F1 and F3.
• A process executing in domain D4 has the same rights as D1, but it can also write to those files.
• The printer can be accessed only by a process executing in domain D2.
• A process executing in domain D3 has the right to read file F2 and execute file F3.
Mechanism of the access matrix:
• The mechanism of the access matrix consists of many policies and semantic properties. Specifically, we must ensure that a process executing in domain Di can access only those objects that are specified in row i. Policies of the access matrix concerning protection involve which rights should be included in the (i, j)th entry. We must also decide the domain in which each process executes; this policy is usually decided by the operating system. The users decide the contents of the access-matrix entries. The association between domains and processes can be either static or dynamic. The access matrix provides a mechanism for defining the control of this association between domains and processes.
Switch operation:
• When we switch a process from one domain to another, we execute a switch operation on an object (the domain). We can control domain switching by including the domains among the objects of the access matrix. A process should be able to switch from one domain (Di) to another domain (Dj) if and only if a switch right is given in access(i, j). This is explained using the example below:

          F1           F2      F3           Printer   D1       D2       D3       D4
D1        read                 read                            switch
D2                                          print                       switch   switch
D3                     read    execute
D4        read write           read write             switch

• According to the above matrix, a process executing in domain D2 can switch to domains D3 and D4, a process executing in domain D4 can switch to domain D1, and a process executing in domain D1 can switch to domain D2.
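As a small illustrative sketch (not part of the slides), the matrix can be held as a dictionary mapping each domain to its per-object right sets, and access(i, j) checked by a simple lookup.

```python
# Access matrix from the example above: rows are domains, columns are objects.
access = {
    "D1": {"F1": {"read"}, "F3": {"read"}, "D2": {"switch"}},
    "D2": {"Printer": {"print"}, "D3": {"switch"}, "D4": {"switch"}},
    "D3": {"F2": {"read"}, "F3": {"execute"}},
    "D4": {"F1": {"read", "write"}, "F3": {"read", "write"}, "D1": {"switch"}},
}

def allowed(domain, obj, right):
    """True if a process in `domain` holds `right` on `obj` (entry access(i, j))."""
    return right in access.get(domain, {}).get(obj, set())

print(allowed("D1", "F1", "read"))      # True
print(allowed("D1", "F1", "write"))     # False
print(allowed("D2", "D3", "switch"))    # True: D2 may switch to D3
```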
UNIT 4 – Memory Management
Memory Management
• It is the module of the OS that is responsible for allocation and deallocation of memory (RAM) to processes.
• The long-term scheduler works closely with memory management.
Memory Management Techniques
Fixed Partition (Contiguous Allocation)
• Partitions of main memory are made in advance.
• E.g. a restaurant with a fixed seating arrangement.
• If extra space is allocated to a process, the extra space is wasted; this is known as internal fragmentation.
Partition Allocation Policy
Variable Sized Contiguous Allocation
• A new partition is created at run time whenever any process arrives in the system.
Compaction
• Solution for external fragmentation.
• Compaction is not always feasible, as we have to halt every process and then deallocate and re-allocate memory for every process.
Paging
• Non contiguous memory
management technique.
• Process is divided into equal sized
page.
• Main memory is divided into equal
size frames.
• Frame size = Page size.
• Processor will have view of process
and its pages.
• Pages are scattered in frames.
• The page table keeps a record of which frame each page of the process is stored in (it maps pages to physical frames).
• The CPU generates only logical addresses; it does not generate main memory (physical) addresses, as it only sees the pages of the process and does not know where those pages are actually stored in main memory.
• Paging is required because we cannot always place all the components of a process contiguously in main memory.
• Paging is useful in virtual memory.
• The page table is also stored in main memory.
• If the page table is very small, it can be stored in CPU registers; otherwise it is kept in main memory.
• Let us assume page size = 2 B. A logical address is split into a page number (p) and an offset (d); the page table maps p to a frame number (f), giving the physical address (f, d).
• Logical address space = 2^(number of logical address bits) = 2^(p+d)
• Physical address space = 2^(number of physical address bits) = 2^(f+d)
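A minimal sketch of the translation itself, using the 2 B page size above and an assumed page table: the logical address is split into page number and offset, and the physical address is frame × page size + offset.

```python
PAGE_SIZE = 2                       # bytes per page, matching the 2 B example above
page_table = {0: 5, 1: 2, 2: 7}     # page number -> frame number (assumed mapping)

def translate(logical_address):
    page = logical_address // PAGE_SIZE      # high-order bits: page number p
    offset = logical_address % PAGE_SIZE     # low-order bits: offset d
    frame = page_table[page]                 # page-table lookup (one memory access)
    return frame * PAGE_SIZE + offset        # physical address = f * page size + d

print(translate(3))   # page 1, offset 1 -> frame 2 -> physical address 5
```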
Time Required to Access the Contents of a Page
• Two memory accesses are required: one for the page table entry and one for the actual data.
TLB (Translation Lookaside Buffer)
Multilevel Paging
• Since the page table is also stored in main memory, it occupies a frame or several frames.
• When the page table becomes very large, it is difficult to keep track of its frames.
• So we store the page table itself using another page table, which contains the pages of the page table.
• The first access is made to the outer page table (the page table of the page table), and from there we go to the inner page table and then directly to the respective process page.
Segmentation
• Divide process in logically
related partitions(segments)
• Segments are scattered in
physical memory.
• Each segment is of variable size.
Virtual Memory
• A feature of the OS.
• Enables running processes larger than the available main memory.
• Based on demand paging.
• Only the pages that are required are brought into main memory; the rest remain in secondary memory.
Page Replacement Policies
FIFO
• Replace the page that came into main memory first.
• Suffers from Belady's anomaly: as the number of frames increases, the number of page faults may also increase.
Optimal Policy
• Replace the page that will not be referred to for the longest time in the future.
• Provides the minimum number of page faults.
• Practically not possible (future references are unknown).
Least Recently Used (LRU)
• Replace the page that has not been referred to for the longest period of time in the past.
• Provides close to the minimum page faults in practical scenarios.
Most Recently Used (MRU)
• Replace the page that was most recently referred to.
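A small sketch that counts page faults for FIFO and LRU on an assumed reference string and frame count (the optimal policy would additionally need to look ahead in the reference string, which is exactly why it is not practical).

```python
def page_faults(refs, frames, policy):
    """Count page faults; policy is 'FIFO' or 'LRU'."""
    memory, faults, time, loaded, used = [], 0, 0, {}, {}
    for page in refs:
        time += 1
        if page in memory:
            used[page] = time                  # LRU: record the most recent use
            continue
        faults += 1
        if len(memory) == frames:              # must evict a victim page
            if policy == "FIFO":
                victim = min(memory, key=lambda p: loaded[p])   # oldest load time
            else:                              # LRU
                victim = min(memory, key=lambda p: used[p])     # least recently used
            memory.remove(victim)
        memory.append(page)
        loaded[page] = used[page] = time
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]   # assumed reference string
print("FIFO:", page_faults(refs, 3, "FIFO"))
print("LRU :", page_faults(refs, 3, "LRU"))
```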
Thrashing
• Frame allocation – equal allocation and proportional allocation.
• Thrashing is a condition or a
situation when the system is
spending a major portion of its time
servicing the page faults, but the
actual processing done is very
negligible.
• Causes of thrashing:
1.High degree of multiprogramming.
2.Lack of frames.
3.Page replacement policy.
Unit 5 : File and Disk Management
File and File System
• A file is a collection of related information stored in secondary storage.
• A collection of files is stored in a file directory (folder).
• The file system is a module of the OS that manages, controls, and organizes files and related structures.
File System Types and Types of File Directory Structures
• The first level contains only the directory, and it can then have files, directories, and folders.
Disk Formatting
• Physical formatting
• Logical formatting
• In physical formatting, the manufacturer divides the disk into tracks and sectors.
• Logical formatting is done by the user: first install a file system on the disk, then create partitions (C, D, or E drives) and install the OS.
• After partitioning, sectors are numbered into disk blocks. Generally, one sector corresponds to one disk block.
File Allocation Methods
• Contiguous Allocation
• Linked Allocation
• Indexed Allocation
• Indexed allocation covers the benefits of both earlier allocation methods.
Disk and disk Scheduling
Seek Time: Seek time is the time taken to locate
the disk arm to a specified track where the data is to
be read or written. So, the disk scheduling algorithm
that gives a minimum average seek time is better.
Rotational Latency: Rotational Latency is the time
taken by the desired sector of the disk to rotate into
a position so that it can access the read/write heads.
So, the disk scheduling algorithm that gives
minimum rotational latency is better.
Transfer Time: Transfer time is the time to transfer
the data. It depends on the rotating speed of the disk
and the number of bytes to be transferred.
Disk Access Time:
Disk Access Time = Seek Time + Rotational Latency
+ Transfer Time
Total Seek Time = Total Head Movement × Seek Time per track
FCFS Scheduling
• FCFS is the simplest of all Disk
Scheduling Algorithms. In FCFS,
the requests are addressed in the
order they arrive in the disk
queue.
• Example: Suppose the order of
request is-
(82,170,43,140,24,16,190) And
current position of Read/Write
head is 50. The number of cylinders is 200.
So, the total overhead movement (total distance covered by the disk arm)
= (82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16) = 642
SSTF (Shortest Seek Time First)
• In SSTF, the request having the shortest seek time from the current head position is executed first.
• As a result, the request nearest to the disk arm gets executed first.
• Total overhead movement (total distance covered by the disk arm)
= (50-43)+(43-24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170) = 208
Advantages
• The average response time decreases.
• Throughput increases.
Disadvantages
• Overhead of calculating seek times in advance.
• Can cause starvation for a request if it has a higher seek time compared to incoming requests.
• High variance of response time, as SSTF favors only some requests.
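Both totals quoted above (642 for FCFS, 208 for SSTF) can be reproduced by summing the absolute head movements; the sketch below uses the request queue and head position from the example.

```python
requests = [82, 170, 43, 140, 24, 16, 190]   # request queue from the example
head = 50

def total_movement(order, start):
    pos, total = start, 0
    for track in order:
        total += abs(track - pos)            # head moves |track - pos| cylinders
        pos = track
    return total

# FCFS: service the requests in arrival order
print("FCFS:", total_movement(requests, head))           # 642

# SSTF: repeatedly pick the pending request nearest to the current head position
pending, pos, order = requests[:], head, []
while pending:
    nearest = min(pending, key=lambda t: abs(t - pos))
    order.append(nearest)
    pending.remove(nearest)
    pos = nearest
print("SSTF:", total_movement(order, head))              # 208
```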
SCAN (Elevator)
• In the SCAN algorithm the disk arm moves in a particular direction and services the requests coming in its path; after reaching the end of the disk, it reverses its direction and again services the requests arriving in its path.
• This algorithm works like an elevator and is hence also known as the elevator algorithm.
• As a result, the requests in the mid-range are serviced more, and those arriving just behind the disk arm have to wait.
• Total distance covered by the disk arm = (199-50) + (199-16) = 332
Advantages of SCAN Algorithm
• High throughput
• Low variance of response time
• Good average response time
Disadvantages of SCAN Algorithm
• Long waiting time for requests for locations just visited by the disk arm.
Circular SCAN (C-SCAN)
• These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of reversing its direction, goes to the other end of the disk and starts servicing the requests from there.
• Total overhead movement (total distance covered by the disk arm) = (199-50) + (199-0) + (43-0) = 391
Advantages of C-SCAN Algorithm
• Provides a more uniform wait time compared to SCAN.
LOOK
• The LOOK algorithm is similar to the SCAN disk scheduling algorithm, except that the disk arm, instead of going to the end of the disk, goes only as far as the last request to be serviced in front of the head, and then reverses its direction from there.
• Total overhead movement (total distance covered by the disk arm) = (190-50) + (190-16) = 314
C-LOOK (Circular LOOK)
• As LOOK is similar to the SCAN algorithm, in the same way C-LOOK is similar to the C-SCAN disk scheduling algorithm.
• In C-LOOK, the disk arm, instead of going to the end of the disk, goes only as far as the last request to be serviced in front of the head, and then from there jumps to the other end's last request.
• Thus, it also prevents the extra delay caused by unnecessary traversal to the end of the disk.
• Total overhead movement (total distance covered by the disk arm) = (190-50) + (190-16) + (43-16) = 341
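The SCAN, C-SCAN, LOOK and C-LOOK totals above can be reproduced by building each service order (head at 50, moving toward higher-numbered cylinders, cylinders 0–199) and summing the head movements.

```python
requests = [82, 170, 43, 140, 24, 16, 190]
head, max_cyl = 50, 199                          # head position and last cylinder (0-199)

def movement(order, start):
    pos, total = start, 0
    for track in order:
        total += abs(track - pos)
        pos = track
    return total

up = sorted(r for r in requests if r >= head)            # requests in the travel direction
down = sorted((r for r in requests if r < head), reverse=True)
low_asc = sorted(r for r in requests if r < head)

scan  = up + [max_cyl] + down             # go to the end of the disk, then reverse
cscan = up + [max_cyl, 0] + low_asc       # go to the end, wrap around to cylinder 0
look  = up + down                         # only as far as the last request, then reverse
clook = up + low_asc                      # only as far as the last request, then jump back

print("SCAN  :", movement(scan, head))    # 332
print("C-SCAN:", movement(cscan, head))   # 391
print("LOOK  :", movement(look, head))    # 314
print("C-LOOK:", movement(clook, head))   # 341
```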