OS Full Notes
System software operates and controls the computer system and provides a platform to
run application software.
An operating system is a piece of software that manages all the resources of a computer
system, both hardware and software, and provides an environment in which the user can
execute programs in a convenient and efficient manner, by hiding the underlying
complexity of the hardware and acting as a resource manager.
Why OS?
1. What if there is no OS?
a. Bulky and complex app. (Hardware interaction code must be in app’s code
base)
b. Resource exploitation by 1 App.
c. No memory protection.
2. What is an OS made up of?
a. Collection of system software.
Layers of a computer system (top to bottom):
User
Application programs
Operating system
Computer hardware
The operating system provides the means for proper use of the resources in the operation
of the computer system.
Batch-processing OS:
1. Firstly, the user prepares his job using punch cards.
2. Then, he submits the job to the computer operator.
3. The operator collects the jobs from different users and sorts the jobs into batches with
similar needs.
4. Then, the operator submits the batches to the processor one by one.
Multiprogramming increases CPU utilization by keeping multiple jobs (code and data)
in memory so that the CPU always has one to execute in case some job gets busy with I/O.
- Single CPU.
- Context switching for processes.
- Switch happens when the current process goes to a wait state.
- CPU idle time is reduced.
Program: A program is an executable file which contains a certain set of instructions written
to complete a specific job or operation on your computer.
• It's compiled code, ready to be executed.
• Stored on disk.
Thread:
• Single sequence stream within a process.
Scheduling:
Threads are scheduled for execution based on their priority. Even though threads are
executing within the runtime, all threads are assigned processor time slices by the operating
system.
1. Kernel: A kernel is that part of the operating system which interacts directly with the
hardware and performs the most crucial tasks.
2. Shell: A shell, also known as a command interpreter, is that part of the operating system that
receives commands from the user and gets them executed through the kernel.
Functions of Kernel:
1. Process management:
   a. Scheduling processes and threads on the CPUs.
2. Memory management:
   a. Keeping track of which parts of memory are currently being used and by which process.
3. File management:
   a. Creating and deleting files.
4. I/O management:
   a. Buffering: holding data temporarily while it is transferred within one job (e.g., YouTube video buffering).
   b. Caching: keeping frequently used data in faster storage.
Types of Kernels:
1. Monolithic kernel: all kernel functions run together in kernel space.
2. Microkernel: only the most essential functions run in kernel space; the rest run in user space.
3. Hybrid kernel: advantages of both worlds (file mgmt. in user space and the rest in kernel space); a combined approach.
Nano/Exo kernels…
Q. What is a system call?
A system call is a mechanism using which a user program can request a service from the kernel
that it does not have the permission to perform itself.
User programs typically do not have permission to perform operations like accessing I/O devices
and communicating with other programs.
System Calls are the only way through which a process can go into kernel mode from user mode.
Types of System Calls:
1) Process Control
a. end, abort
b. load, execute
c. create process, terminate process
d. get process attributes, set process attributes
e. wait for time
f. wait event, signal event
g. allocate and free memory
2) File Management
a. create file, delete file
b. open, close
c. read, write, reposition
d. get file attributes, set file attributes
3) Device Management
a. request device, release device
b. read, write, reposition
c. get device attributes, set device attributes
d. logically attach or detach devices
4) Information maintenance
a. get time or date, set time or date
b. get system data, set system data
c. get process, file, or device attributes
d. set process, file, or device attributes
5) Communication Management
a. create, delete communication connection
b. send, receive messages
c. transfer status information
d. attach or detach remote devices
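To make this concrete, here is a minimal sketch (assuming a POSIX-like system) of reaching process-control and file-management system calls through Python's os module wrappers; the file path is made up for illustration:

```python
import os
import tempfile

# Process control: get process attributes (wraps the getpid system call).
print("pid:", os.getpid())

# File management: create, write, read, close, delete.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
fd = os.open(path, os.O_CREAT | os.O_WRONLY)   # open/create file
os.write(fd, b"hello")                          # write
os.close(fd)                                    # close

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 5)                           # read
os.close(fd)
os.unlink(path)                                 # delete file
print(data)  # b'hello'
```

Each os call here is a thin wrapper that switches the process into kernel mode, performs the privileged operation, and returns.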
i. PC On
ii. CPU initializes itself and looks for a firmware program (BIOS) stored in
BIOS Chip (Basic input-output system chip is a ROM chip found on
mother board that allows to access & setup computer system at most
basic level.)
1. In modern PCs, CPU loads UEFI (Unified extensible firmware
interface)
iii. CPU runs the BIOS which tests and initializes system hardware. Bios
loads configuration settings. If something is not appropriate (like
missing RAM) error is thrown and boot process is stopped.
This is called POST (Power on self-test) process.
(UEFI can do a lot more than just initialize hardware; it’s really a tiny
operating system. For example, Intel CPUs have the Intel Management
Engine. This provides a variety of features, including powering Intel’s
Active Management Technology, which allows for remote management
of business PCs.)
iv. BIOS then finds and hands control to the bootloader.
Linux systems use GRUB, and Macs use something called boot.efi.
Lec-7: 32-Bit vs 64-Bit OS
1. A 32-bit OS has 32-bit registers, and it can access 2^32 unique memory
addresses. i.e., 4GB of physical memory.
2. A 64-bit OS has 64-bit registers, and it can access 2^64 unique memory addresses. i.e.,
17,179,869,184 GB of physical memory.
3. 32-bit CPU architecture can process 32 bits of data & information.
4. 64-bit CPU architecture can process 64 bits of data & information.
5. Advantages of 64-bit over the 32-bit operating system:
a. Addressable Memory: 32-bit CPU -> 2^32 memory addresses, 64-bit CPU -> 2^64
memory addresses.
b. Resource usage: Installing more RAM on a system with a 32-bit OS doesn't impact
performance. However, upgrade that system with excess RAM to the 64-bit version of
Windows, and you'll notice a difference.
c. Performance: All calculations take place in the registers. When you’re performing math in
your code, operands are loaded from memory into registers. So, having larger registers
allows you to perform larger calculations at the same time.
A 32-bit processor can execute 4 bytes of data in 1 instruction cycle, while 64-bit means the
processor can execute 8 bytes of data in 1 instruction cycle.
(In 1 sec, there could be thousands to billions of instruction cycles depending upon the
processor design.)
d. Compatibility: A 64-bit CPU can run both a 32-bit and a 64-bit OS, while a 32-bit CPU can only
run a 32-bit OS.
e. Better graphics performance: 8-byte graphics calculations make graphics-intensive apps
run faster.
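A quick sketch of checking the word size of the environment you are running in; it reports the Python build's pointer width, which on typical installs matches the OS bitness:

```python
import struct
import platform

bits = struct.calcsize("P") * 8   # size of a pointer ("P") in bits
print(f"{bits}-bit Python on {platform.machine()}")
print("addressable bytes:", 2 ** bits)   # 2^32 or 2^64 as in the notes
```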
Lec-8: Storage Devices Basics
1. Register: Smallest unit of storage. It is a part of the CPU itself.
A register may hold an instruction, a storage address, or any data (such as a bit sequence or individual
characters).
Registers are a type of computer memory used to quickly accept, store, and transfer data and
instructions that are being used immediately by the CPU.
2. Cache: Additional memory system that temporarily stores frequently used instructions and data for
quicker processing by the CPU.
3. Main Memory: RAM.
4. Secondary Memory: Storage media on which the computer can store data & programs.
Comparison
1. Cost:
a. Primary storage is costly.
b. Registers are the most expensive, due to expensive semiconductors & labour.
c. Secondary storage is cheaper than primary.
2. Access Speed:
a. Primary has a higher access speed than secondary memory.
b. Registers have the highest access speed, then comes cache, then main memory.
3. Storage size:
a. Secondary has more space.
4. Volatility:
a. Primary memory is volatile.
b. Secondary is non-volatile.
Lec-9: Introduction to Process
4. Architecture of process:
5. Attributes of process:
a. Features that allow identifying a process uniquely.
b. Process table
i. All processes are tracked by the OS using a table-like data structure.
ii. Each entry in that table is a process control block (PCB).
c. PCB: Stores info/attributes of a process.
i. Data structure used for each process, that stores information of a process such as
process id, program counter, process state, priority etc.
6. PCB structure:
Registers in the PCB: it is a data structure. When a process is running and its time slice expires,
the current values of the process-specific registers are stored in the PCB and the process is
swapped out. When the process is scheduled to run again, the register values are read from the PCB and
written to the CPU registers. This is the main purpose of the registers in the PCB.
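The PCB can be pictured as a plain record; this is an illustrative sketch only (field names are hypothetical, not any real OS's layout):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                       # process id
    state: str = "new"             # new / ready / running / waiting / terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    priority: int = 0

# Time slice expires: save the CPU's register values into the PCB,
# mark the process ready, so it can later be restored and resumed.
cpu_registers = {"ax": 42, "sp": 0xFF00}
pcb = PCB(pid=1)
pcb.registers = dict(cpu_registers)
pcb.program_counter = 1000
pcb.state = "ready"
print(pcb)
```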
Lec-10: Process States | Process Queues
1. Swapping
a. Time-sharing systems may have a medium-term scheduler (MTS).
b. It removes processes from memory to reduce the degree of multi-programming.
c. These removed processes can be reintroduced into memory, and their execution can be continued
where it left off. This is called Swapping.
d. Swap-out and swap-in is done by MTS.
e. Swapping is necessary to improve process mix or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up.
f. Swapping is a mechanism in which a process can be swapped temporarily out of main memory (or
move) to secondary storage (disk) and make that memory available to other processes. At some
later time, the system swaps back the process from the secondary storage to main memory.
2. Context-Switching
a. Switching the CPU to another process requires performing a state save of the current process and a
state restore of a different process.
b. When this occurs, the kernel saves the context of the old process in its PCB and loads the saved
context of the new process scheduled to run.
c. It is pure overhead, because the system does no useful work while switching.
d. Speed varies from machine to machine, depending on the memory speed, the number of registers
that must be copied etc.
3. Orphan process
a. A process whose parent process has been terminated while it is still running.
b. Orphan processes are adopted by the init process.
c. Init is the first process of the OS.
4. Zombie process / Defunct process
a. A zombie process is a process whose execution is completed but which still has an entry in the process
table.
b. Zombie processes usually occur for child processes, as the parent process still needs to read its
child’s exit status. Once this is done using the wait system call, the zombie process is eliminated
from the process table. This is known as reaping the zombie process.
c. It happens because the parent process may call wait() on the child process after a long time,
while the child process terminated much earlier.
d. As the entry in the process table can only be removed after the parent process reads the exit status of
the child process, the child process remains a zombie till it is removed from the process table.
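The create/terminate/reap cycle can be observed directly; a small sketch assuming a POSIX system where os.fork is available:

```python
import os
import time

pid = os.fork()
if pid == 0:
    # Child: terminate immediately with exit code 7. Until the parent
    # calls wait, this process stays a zombie in the process table.
    os._exit(7)
else:
    time.sleep(0.1)                        # child has exited; it is now a zombie
    reaped, status = os.waitpid(pid, 0)    # reaping removes the table entry
    print("reaped", reaped, "exit code", os.WEXITSTATUS(status))
```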
LEC-12: Intro to Process Scheduling | FCFS | Convoy Effect
1. Process Scheduling
a. Basis of Multi-programming OS.
b. By switching the CPU among processes, the OS can make the computer more productive.
c. Many processes are kept in memory at a time. When a process must wait or its time quantum expires,
the OS takes the CPU away from that process & gives the CPU to another process, and this pattern
continues.
2. CPU Scheduler
a. Whenever the CPU becomes idle, the OS must select one process from the ready queue to be executed.
b. Done by STS.
3. Non-Preemptive scheduling
a. Once CPU has been allocated to a process, the process keeps the CPU until it releases CPU either by
terminating or by switching to wait-state.
b. Starvation, as a process with a long burst time may starve a process with a shorter burst time.
c. Low CPU utilization.
4. Preemptive scheduling
a. The CPU is taken away from a process after the time quantum expires, as well as on terminating or
switching to the wait-state.
b. Less Starvation
c. High CPU utilization.
5. Goals of CPU scheduling
a. Maximum CPU utilization
b. Minimum Turnaround time (TAT).
c. Min. Wait-time
d. Min. response time.
e. Max. throughput of system.
6. Throughput: No. of processes completed per unit time.
7. Arrival time (AT): Time when a process arrives at the ready queue.
8. Burst time (BT): The time required by the process for its execution.
9. Turnaround time (TAT): Time taken from when the process first enters the ready state till it terminates. (TAT = CT - AT)
10. Wait time (WT): Time the process spends waiting for the CPU. (WT = TAT - BT)
11. Response time: Time duration between the process getting into the ready queue and the process getting the
CPU for the first time.
12. Completion Time (CT): Time taken till the process gets terminated.
13. FCFS (First come-first serve):
a. Whichever process comes first in the ready queue will be given the CPU first.
b. In this, if one process has a longer BT, it will have a major effect on the average WT of different processes,
called the convoy effect.
c. Convoy effect is a situation where many processes, which need to use a resource for a short time, are
blocked by one process holding that resource for a long time.
i. This causes poor resource management.
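The FCFS rules and the convoy effect can be sketched as a small simulation using the TAT and WT formulas from the notes (process names and times are made up):

```python
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time).
    Returns {name: (CT, TAT, WT)} with TAT = CT - AT and WT = TAT - BT."""
    time = 0
    result = {}
    for name, at, bt in sorted(processes, key=lambda p: p[1]):
        time = max(time, at) + bt        # CPU may idle until the process arrives
        tat = time - at
        result[name] = (time, tat, tat - bt)
    return result

# Convoy effect: P1's long burst inflates everyone else's waiting time.
print(fcfs([("P1", 0, 20), ("P2", 1, 2), ("P3", 2, 2)]))
# P2 waits 19 units and P3 waits 20, only because P1 arrived first.
```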
LEC-13: CPU Scheduling | SJF | Priority | RR
1. Shortest Job First (SJF) [Non-preemptive]
a. Process with least BT will be dispatched to CPU first.
b. Must estimate the BT for each process in the ready queue beforehand;
correct estimation of BT is (ideally) an impossible task.
c. Whenever the CPU becomes free, choose the job having the lowest BT at that instant and run it to completion.
d. This can still suffer from the convoy effect if the very first process that enters the
ready state has a large BT.
e. Process starvation might happen.
f. Criteria for SJF algos: AT + BT.
2. SJF [Preemptive]
a. Less starvation.
b. No convoy effect.
c. Gives a lower average WT for a given set of processes, as scheduling a short job before a long one
decreases the WT of the short job more than it increases the WT of the long process.
3. Priority Scheduling [Non-preemptive]
a. Priority is assigned to a process when it is created.
b. SJF is a special case of general priority scheduling with priority inversely proportional to BT.
4. Priority Scheduling [Preemptive]
a. The current RUN-state job will be preempted if the next job has higher priority.
b. May cause indefinite waiting (starvation) for lower-priority jobs. (Possibly they won’t get
executed ever.) (True for both the preemptive and non-preemptive versions.)
i. Solution: Ageing.
ii. Gradually increase the priority of processes that wait long. E.g., increase priority by 1 every 15
minutes.
5. Round robin scheduling (RR)
a. Most popular.
b. Like FCFS but preemptive.
c. Designed for time-sharing systems.
d. Criteria: AT + time quantum (TQ); doesn’t depend on BT.
e. No process is going to wait forever, hence very low starvation. [No convoy effect]
f. Easy to implement.
g. If TQ is small, there will be more context switches (more overhead).
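The RR rules above can be sketched as a simulation; for simplicity this assumes all processes arrive at t = 0 (names and bursts are made up):

```python
from collections import deque

def round_robin(processes, tq):
    """processes: list of (name, burst_time), all arriving at t=0.
    tq: time quantum. Returns {name: completion_time}."""
    queue = deque(processes)
    t, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(tq, remaining)
        t += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back of the queue
        else:
            completion[name] = t                   # finished within this slice
    return completion

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], tq=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

A smaller tq makes the trace longer (more preemptions), which is exactly the context-switch overhead noted in point g.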
LEC-14: MLQ | MLFQ
d. System process: Created by OS (Highest priority)
3. Comparison:

                FCFS    SJF      PSJF     Priority  P-Priority  RR      MLQ      MLFQ
Design          Simple  Complex  Complex  Complex   Complex     Simple  Complex  Complex
Preemption      No      No       Yes      No        Yes         Yes     Yes      Yes
Convoy effect   Yes     Yes      No       Yes       Yes         No      Yes      Yes
Overhead        No      No       Yes      No        Yes         Yes     Yes      Yes
LEC-15: Introduction to Concurrency
1. Concurrency is the execution of multiple instruction sequences at the same time. It
happens in the operating system when there are several process threads running in parallel.
2. Thread:
• Single sequence stream within a process.
• An independent path of execution in a process.
• Light-weight process.
• Used to achieve parallelism by dividing a process’s tasks which are
independent path of execution.
• E.g., multiple tabs in a browser, text editor (when you are typing in an editor, spell
checking, formatting of text and saving the text are done concurrently by multiple threads).
3. Thread Scheduling: Threads are scheduled for execution based on their
priority. Even though threads are executing within the runtime, all threads
are assigned processor time slices by the operating system.
4. Thread context switching
• OS saves the current state of the thread & switches to another thread of the same process.
• Doesn’t include switching of the memory address space. (But program
counter, registers & stack are included.)
• Fast switching as compared to process switching.
• CPU’s cache state is preserved.
5. How does each thread get access to the CPU?
• Each thread has its own program counter.
• Depending upon the thread scheduling algorithm, the OS schedules these threads.
• The OS will fetch instructions corresponding to the PC of that thread and execute them.
6. I/O or TQ based context switching is done here as well.
• We have a TCB (Thread control block), like the PCB, for state storage
management while performing context switching.
7. Would a single-CPU system gain by the multi-threading technique?
• Never.
• As two threads have to context switch on that single CPU.
• This won’t give any gain.
8. Benefits of multi-threading:
• Responsiveness.
• Resource sharing: Efficient resource sharing.
• Economy: It is more economical to create and context switch threads.
1. Also, allocating memory and resources for process creation is
costly, so it is better to divide tasks into threads of the same process.
• Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
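A small sketch of the resource-sharing point: four threads of one process update the same global variable, so the shared data needs a lock to keep the increments atomic:

```python
import threading

total = 0
lock = threading.Lock()

def worker(n):
    global total
    for _ in range(n):
        with lock:            # threads share the process's address space,
            total += 1        # so the shared counter must be protected

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 4000
```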
LEC-16: Critical Section Problem and How to address it
1. Conditional variable
a. The condition variable is a synchronization primitive that lets the thread wait
until a certain condition occurs.
b. Works with a lock
c. Thread can enter a wait state only when it has acquired a lock. When a
thread enters the wait state, it will release the lock and wait until another
thread notifies that the event has occurred. Once the waiting thread enters
the running state, it again acquires the lock immediately and starts executing.
d. Why use a condition variable? To avoid busy waiting while a thread waits for a certain condition.
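The wait/notify protocol described above can be sketched with Python's threading.Condition (one thread waits for an event, another signals it):

```python
import threading

cond = threading.Condition()      # condition variable with its associated lock
ready = False
results = []

def waiter():
    with cond:                    # must hold the lock before waiting
        while not ready:          # re-check the condition after each wakeup
            cond.wait()           # releases the lock while blocked
        results.append("event seen")

def notifier():
    global ready
    with cond:
        ready = True
        cond.notify()             # wake the waiting thread

t1 = threading.Thread(target=waiter)
t2 = threading.Thread(target=notifier)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # ['event seen']
```

Note how the waiter re-acquires the lock automatically when wait() returns, exactly as described in point c.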
1. We have 5 philosophers.
2. They spend their life just being in two states:
a. Thinking
b. Eating
3. They sit at a circular table surrounded by 5 chairs (1 each); in the center of the table is a bowl of
noodles, and the table is laid with 5 single forks.
4. Thinking state: When a ph. thinks, he doesn’t interact with others.
5. Eating state: When a ph. gets hungry, he tries to pick up the 2 forks adjacent to him (left and
right). He can pick up one fork at a time.
6. One can’t pick up a fork if it is already taken.
7. When a ph. has both forks at the same time, he eats without releasing the forks.
8. A solution can be given using semaphores:
a. Each fork is a binary semaphore.
b. A ph. calls the wait() operation to acquire a fork.
c. Release a fork by calling signal().
d. Semaphore fork[5]{1};
9. Although the semaphore solution makes sure that no two neighbours are
eating simultaneously, it could still create deadlock.
10. Suppose that all 5 ph. become hungry at the same time and each picks up
his left fork; then all fork semaphores would be 0.
11. When each ph. tries to grab his right fork, he will be waiting forever (deadlock).
12. We must use some methods to avoid deadlock and make the solution work:
a. Allow at most 4 ph. to be sitting simultaneously.
b. Allow a ph. to pick up his forks only if both forks are available, and to
do this, he must pick them up in a critical section (atomically).
c. Odd-even rule:
an odd ph. picks up first his left fork and then his right fork,
whereas an even ph. picks up his right fork and then his left fork.
13. Hence, semaphores alone are not enough to solve this problem.
We must add some enhancement rules to make a deadlock-free solution.
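The semaphore solution plus the odd-even rule (12c) can be sketched with threads; fork numbering (left = i, right = (i+1) % 5) is an assumed convention:

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]  # each fork: binary semaphore
meals = [0] * N

def philosopher(i, rounds=10):
    left, right = i, (i + 1) % N
    # Odd-even rule: odd philosophers pick up the left fork first,
    # even philosophers the right fork first -> no circular wait.
    first, second = (left, right) if i % 2 == 1 else (right, left)
    for _ in range(rounds):
        forks[first].acquire()    # wait() on the first fork
        forks[second].acquire()   # wait() on the second fork
        meals[i] += 1             # eating
        forks[second].release()   # signal()
        forks[first].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher finished all rounds: [10, 10, 10, 10, 10]
```

With the naive all-left-first order, this loop could hang exactly as item 11 describes; the asymmetric pickup order is what guarantees termination here.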
LEC-21: Deadlock Part-1
from starting.
6. Example of resources: Memory space, CPU cycles, files, locks, sockets, IO devices etc.
7. Single resource can have multiple instances of that. E.g., CPU is a resource, and a system can have 2
CPUs.
8. How a process/thread utilize a resource?
a. Request: Request the R; if R is free, lock it, else wait till it is available.
b. Use
c. Release: Release the resource instance and make it available for other processes.
1. Deadlock Avoidance: The idea is that the kernel is given in advance information concerning which
resources a process will use in its lifetime.
By this, system can decide for each request whether the process should wait.
To decide whether the current request can be satisfied or delayed, the system must consider the
resources currently available, resources currently allocated to each process in the system and the
future requests and releases of each process.
a. Schedule process and its resources allocation in such a way that the DL never occur.
b. Safe state: A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock.
4. How OS manages the isolation and protect? (Memory Mapping and Protection)
a. OS provides this Virtual Address Space (VAS) concept.
b. To separate memory space, we need the ability to determine the range of legal addresses that the
process may access and to ensure that the process can access only these legal addresses.
c. The relocation register contains value of smallest physical address (Base address [R]); the limit
register contains the range of logical addresses (e.g., relocation = 100040 & limit = 74600).
d. Each logical address must be less than the limit register.
e. MMU maps the logical address dynamically by adding the value in the relocation register.
f. When CPU scheduler selects a process for execution, the dispatcher loads the relocation and
limit registers with the correct values as part of the context switch. Since every address generated by the CPU
(Logical address) is checked against these registers, we can protect both OS and other users’ programs and
data from being modified by running process.
g. Any attempt by a program executing in user mode to access the OS memory or other users’
memory results in a trap to the OS, which treats the attempt as a fatal error.
h. Address Translation
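The relocation/limit check can be sketched using the example values from the notes (relocation = 100040, limit = 74600):

```python
def translate(logical, relocation=100040, limit=74600):
    """Sketch of the MMU's dynamic relocation: every logical address
    must be less than the limit register, otherwise the MMU traps."""
    if logical >= limit:
        raise MemoryError("trap: fatal addressing error")
    return relocation + logical      # add the relocation (base) register

print(translate(0))       # 100040
print(translate(74599))   # 174639 (highest legal physical address)
```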
5. Allocation Method on Physical Memory
a. Contiguous Allocation
b. Non-contiguous Allocation
6. Contiguous Memory Allocation
a. In this scheme, each process is contained in a single contiguous block of memory.
b. Fixed Partitioning
i. The main memory is divided into partitions of equal or different sizes.
ii. Limitations:
1. Internal Fragmentation: if the size of the process is less than the total size of
the partition, then some part of the partition gets wasted and remains unused.
This wastage of memory is called internal fragmentation.
2. External Fragmentation: The total unused space of various partitions cannot be
used to load a process, even though the total free space is sufficient, because the
space is not contiguous.
1. Defragmentation/Compaction
a. Dynamic partitioning suffers from external fragmentation.
b. Compaction to minimize the probability of external fragmentation.
c. All the free partitions are made contiguous, and all the loaded partitions are brought together.
d. By applying this technique, we can store the bigger processes in the memory. The free partitions
are merged which can now be allocated according to the needs of new processes. This technique is
also called defragmentation.
e. The efficiency of the system is decreased in the case of compaction since all the free spaces will be
transferred from several places to a single place.
2. How free space is stored/represented in OS?
a. Free holes in the memory are represented by a free list (Linked-List data structure).
3. How to satisfy a request of size n from a list of free holes?
a. Various algorithms which are implemented by the Operating System in order to find out the holes
in the linked list and allocate them to the processes.
b. First Fit
i. Allocate the first hole that is big enough.
ii. Simple and easy to implement.
iii. Fast/Less time complexity
c. Next Fit
i. Enhancement on First Fit, but always starts the search from the last allocated hole.
ii. Same advantages as First Fit.
d. Best Fit
i. Allocate the smallest hole that is big enough.
ii. Lesser internal fragmentation.
iii. May create many small holes and cause major external fragmentation.
iv. Slow, as it is required to iterate the whole free-holes list.
e. Worst Fit
i. Allocate the largest hole that is big enough.
ii. Slow, as it is required to iterate the whole free-holes list.
iii. Leaves larger holes that may accommodate other processes.
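The three hole-selection strategies can be sketched side by side (the hole sizes are made up; a real allocator would track addresses, not just sizes):

```python
def allocate(free_holes, size, strategy="first"):
    """Return the index of the hole chosen for a request of `size`,
    or None if no hole is big enough."""
    candidates = [(h, i) for i, h in enumerate(free_holes) if h >= size]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][1]      # first hole that is big enough
    if strategy == "best":
        return min(candidates)[1]    # smallest hole that is big enough
    if strategy == "worst":
        return max(candidates)[1]    # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]
print(allocate(holes, 212, "first"))  # 1 (the 500 hole comes first)
print(allocate(holes, 212, "best"))   # 3 (300 is the tightest fit)
print(allocate(holes, 212, "worst"))  # 4 (600 is the largest)
```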
LEC-26: Paging | Non-Contiguous Memory Allocation
a. Paging is a memory-management scheme that permits the physical address space of a
process to be non-contiguous.
b. It avoids external fragmentation and the need of compaction.
c. The idea is to divide the physical memory into fixed-sized blocks called Frames, and to divide
logical memory into blocks of the same size called Pages. (# Page size = Frame size)
d. Page size is usually determined by the processor architecture. Traditionally, pages in a system had a
uniform size, such as 4,096 bytes. However, processor designs often allow two or more, sometimes
simultaneous, page sizes due to their benefits.
e. Page Table
i. A data structure that stores which page is mapped to which frame.
ii. The page table contains the base address of each page in the physical memory.
f. Every address generated by the CPU (logical address) is divided into two parts: a page number (p) and
a page offset (d). The p is used as an index into the page table to get the base address of the corresponding
frame in physical memory.
g. The page table is stored in main memory at the time of process creation and its base address is stored
in the process control block (PCB).
h. A page table base register (PTBR) is present in the system that points to the current page table.
Changing page tables requires changing only this one register, at the time of context switching.
4. How Paging avoids external fragmentation?
a. Non-contiguous allocation of the pages of the process is allowed in the random free frames of the
physical memory.
5. Why paging is slow and how do we make it fast?
a. There are too many memory references to access the desired location in physical memory.
6. Translation Look-aside buffer (TLB)
a. A Hardware support to speed-up paging process.
b. It’s a hardware cache, high-speed memory.
c. The TLB has keys and values.
d. The page table is stored in main memory & because of this, when a memory
reference is made, the translation is slow.
e. When we are retrieving a physical address using the page table, after getting the frame
address corresponding to the page number, we put an entry of it into the TLB, so that next
time we can get the value from the TLB directly without referencing the actual page table. Hence, this
makes the paging process faster.
f. TLB hit: the TLB contains the mapping for the requested logical address.
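The p/d split and the TLB fast path can be sketched as follows (4 KB pages; the page-table contents are made up for illustration):

```python
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 9}   # page number -> frame number (illustrative)
tlb = {}                          # small cache of recent translations

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)   # page number p, page offset d
    if p in tlb:                        # TLB hit: skip the page-table walk
        frame = tlb[p]
    else:                               # TLB miss: walk the page table, cache it
        frame = page_table[p]
        tlb[p] = frame
    return frame * PAGE_SIZE + d

print(translate(5000))   # page 1, offset 904 -> 3*4096 + 904 = 13192
print(translate(5004))   # 13196, served from the TLB this time
```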
9.
10. Advantages:
a. No internal fragmentation.
b. One segment has a contiguous allocation, hence efficient working within segment.
c. The size of segment table is generally less than the size of page table.
d. It results in a more efficient system because the compiler keeps the same
type of functions in one segment.
11. Disadvantages:
a. External fragmentation.
b. The varying sizes of segments are not good at the time of swapping.
12. Modern System architecture provides both segmentation and paging implemented in
some hybrid approach.
LEC-28: What is Virtual Memory? || Demand Paging || Page Faults
1. Virtual memory is a technique that allows the execution of processes that are not completely in the
memory. It provides user an illusion of having a very big main memory. This is done by treating a part of
secondary memory as the main memory. (Swap-space)
2. Advantage of this is, programs can be larger than physical memory.
3. It is required that instructions must be in physical memory to be executed. But it limits the size of a
program to the size of physical memory. In fact, in many cases, the entire program is not needed at the
same time. So, we want an ability to execute a program that is only partially in memory would give
many benefits:
a. A program would no longer be constrained by the amount of physical memory that is
available.
b. Because each user program could take less physical memory, more programs could be run at
the same time, with a corresponding increase in CPU utilization and throughput.
c. Running a program that is not entirely in memory would benefit both the system and the
user.
4. The programmer is provided a very large virtual memory when only a smaller physical memory is available.
5. Demand Paging is a popular method of virtual memory management.
6. In demand paging, the pages of a process which are least used get stored in the secondary memory.
7. A page is copied to the main memory when its demand is made, or a page fault occurs. There are various
page-replacement algorithms which are used to determine the pages which will be replaced.
8. Rather than swapping the entire process into memory, we use a Lazy Swapper. A lazy swapper never
swaps a page into memory unless that page will be needed.
9. Since we are viewing a process as a sequence of pages, rather than one large contiguous address space,
using the term Swapper is technically incorrect. A swapper manipulates entire processes, whereas a Pager is
concerned with individual pages of a process.
10. How Demand Paging works?
a. When a process is to be swapped in, the pager guesses which pages will be used.
b. Instead of swapping in a whole process, the pager brings only those pages into memory. Thus,
it avoids reading into memory pages that will not be used anyway.
c. This way, the OS decreases the swap time and the amount of physical memory needed.
d. The valid-invalid bit scheme in the page table is used to distinguish between pages that are
in memory and pages that are on the disk.
i. Valid-invalid bit 1 means the associated page is both legal and in memory.
ii. Valid-invalid bit 0 means the page either is not valid (not in the LAS of the process)
or is valid but is currently on the disk.
e. If the process tries to access a page marked invalid, a page fault occurs, which the OS services as below.
f. If a process never attempts to access some invalid-bit page, the process will be
executed successfully without even needing the pages present in the swap space.
g. Servicing a page fault:
i. Check an internal table (in the PCB of the process) to determine whether the reference
was a valid or an invalid memory access.
ii. If the ref. was invalid, the process throws an exception.
If the ref. is valid, the pager will swap in the page.
iii. We find a free frame (from the free-frame list).
iv. Schedule a disk operation to read the desired page into the newly allocated frame.
v. When the disk read is complete, we modify the page table so that the page is now in
memory.
vi. Restart the instruction that was interrupted by the trap. The process can now access
the page as though it had always been in memory.
j. Pure Demand Paging
i. In the extreme case, we can start executing a process with no pages in
memory. When the OS sets the instruction pointer to the first instruction
of the process, which is not in memory, the process immediately
faults for the page and the page is brought into memory.
ii. Never bring a page into memory until it is required.
k. We use locality of reference to get reasonable performance from demand paging.
11. Advantages of Virtual memory
a. The degree of multi-programming will be increased.
b. Users can run large apps with less real physical memory.
12. Disadvantages of Virtual Memory
a. The system can become slower as swapping takes time.
b. Thrashing may occur.
LEC-29: Page Replacement Algorithms
1. Whenever Page Fault occurs, that is, a process tries to access a page which is not currently present in a
frame and OS must bring the page from swap-space to a frame.
2. OS must do page replacement to accommodate new page into a free frame, but there might be a possibility
the system is working in high utilization and all the frames are busy, in that case OS must replace one of the
pages allocated into some frame with the new page.
3. The page replacement algorithm decides which memory page is to be replaced. Some allocated page is
swapped out from the frame and new page is swapped into the freed frame.
4. Types of Page Replacement Algorithm: (AIM is to have minimum page faults)
a. FIFO
i. Allocate frame to the page as it comes into the memory by replacing the oldest page.
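FIFO replacement can be sketched by counting page faults over a reference string (the string and frame count are made up):

```python
from collections import deque

def fifo_page_faults(reference_string, n_frames):
    """On a fault with all frames full, evict the page that has
    been in memory the longest (FIFO order)."""
    frames = deque()
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()        # replace the oldest page
            frames.append(page)
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6 faults
```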
1. Thrashing
a. If the process doesn’t have the number of frames it needs to support pages in active use, it will
quickly page-fault. At this point, it must replace some page. However, since all its pages are in active use, it must
replace a page that will be needed again right away. Consequently, it quickly faults again, and again, and again,
replacing pages that it must bring back in immediately.
b. This high paging activity is called Thrashing.
c. A system is Thrashing when it spends more time servicing the page faults
than executing processes.
d. Technique to Handle Thrashing
i. Working set model
1. This model is based on the concept of the Locality Model.
2. The basic principle states that if we allocate enough frames to a process to
accommodate its current locality, it will only fault whenever it moves to some
new locality. But if the allocated frames are fewer than the size of the current
locality, the process is bound to thrash.
ii. Page Fault frequency
1. Thrashing has a high page-fault rate.
2. We want to control the page-fault rate.
3. When it is too high, the process needs more frames. Conversely, if the page-fault
rate is too low, then the process may have too many frames.
4. We establish upper and lower bounds on the desired page-fault rate.
5. If the pf-rate exceeds the upper limit, allocate the process another frame; if the pf-rate
falls below the lower limit, remove a frame from the process.
6. By controlling pf-rate, thrashing can be prevented.