OS in 6 Hours

Syllabus
Unit – I Introduction: Operating system and functions, Classification of Operating systems- Batch, Interactive, Time sharing, Real Time System, Multiprocessor Systems, Multiuser Systems, Multiprocess Systems, Multithreaded Systems, Operating System Structure- Layered structure, System Components, Operating System services, Reentrant Kernels, Monolithic and Microkernel Systems.
Unit – II CPU Scheduling: Scheduling Concepts, Performance Criteria, Process States, Process Transition Diagram, Schedulers, Process Control Block (PCB), Process address space, Process identification information, Threads and their management, Scheduling Algorithms, Multiprocessor Scheduling. Deadlock: System model, Deadlock characterization, Prevention, Avoidance and detection, Recovery from deadlock.
Unit – III Concurrent Processes: Process Concept, Principle of Concurrency, Producer/Consumer Problem, Mutual Exclusion, Critical Section Problem, Dekker's solution, Peterson's solution, Semaphores, Test and Set operation; Classical Problems in Concurrency- Dining Philosopher Problem, Sleeping Barber Problem; Inter Process Communication models and Schemes, Process generation.
Unit – IV Memory Management: Basic bare machine, Resident monitor, Multiprogramming with fixed partitions, Multiprogramming with variable partitions, Protection schemes, Paging, Segmentation, Paged segmentation, Virtual memory concepts, Demand paging, Performance of demand paging, Page replacement algorithms, Thrashing, Cache memory organization, Locality of reference.
Unit – V I/O Management and Disk Scheduling: I/O devices and I/O subsystems, I/O buffering, Disk storage and disk scheduling, RAID. File System: File concept, File organization and access mechanism, File directories and File sharing, File system implementation issues, File system protection and security.

Chapter Plan
(Chapter-1: Introduction)- Operating system, Goal & functions, System Components, Operating System services, Classification of Operating systems- Batch, Interactive, Multiprogramming, Multiuser Systems, Time sharing, Multiprocessor Systems, Real Time System.
(Chapter-2: Operating System Structure)- Layered structure, Monolithic and Microkernel Systems, Interface, System Call.
(Chapter-3: Process Basics)- Process Control Block (PCB), Process identification information, Process States, Process Transition Diagram, Schedulers, CPU Bound and I/O Bound, Context Switch.
(Chapter-4: CPU Scheduling)- Scheduling Performance Criteria, Scheduling Algorithms.
(Chapter-5: Process Synchronization)- Race Condition, Critical Section Problem, Mutual Exclusion, Dekker's solution, Peterson's solution, Process Concept, Principle of Concurrency.
(Chapter-6: Semaphores)- Classical Problems in Concurrency- Producer/Consumer Problem, Reader-Writer Problem, Dining Philosopher Problem, Sleeping Barber Problem, Test and Set operation.
(Chapter-7: Deadlock)- System model, Deadlock characterization, Prevention, Avoidance and detection, Recovery from deadlock.
(Chapter-8)- Fork Command, Multithreaded Systems, Threads and their management.
(Chapter-9: Memory Management)- Memory Hierarchy, Locality of reference, Multiprogramming with fixed partitions, Multiprogramming with variable partitions, Protection schemes, Paging, Segmentation, Paged segmentation.
(Chapter-10: Virtual Memory)- Demand paging, Performance of demand paging, Page replacement algorithms, Thrashing.
(Chapter-11: Disk Management)- Disk Basics, Disk storage and disk scheduling, Total Transfer Time.
(Chapter-12: File System)- File allocation Methods, Free-space Management, File organization and access mechanism, File directories and File sharing, File system implementation issues, File system protection and security.
OS functions:
2. Memory Management: Manages allocation and deallocation of physical and virtual memory spaces to various programs.
3. I/O Device Management: Handles I/O operations of peripheral devices like disks, keyboards, etc., including buffering and caching.
4. File Management: Manages files on storage devices, including their information, naming, permissions, and hierarchy.
5. Network Management: Manages network protocols and functions, enabling the OS to establish network connections and transfer data.
6. Security & Protection: Ensures system protection against unauthorized access and other security threats through authentication, authorization, and encryption.

System components:
2. Process Management
• Process Scheduler: Determines the execution of processes.
• Process Control Block (PCB): Contains process details such as process ID, priority, status, etc.
• Concurrency Control: Manages simultaneous execution.
3. Memory Management
• Physical Memory Management: Manages RAM allocation.
• Virtual Memory Management: Simulates additional memory using disk space.
• Memory Allocation: Assigns memory to different processes.
4. File System Management
• File Handling: Manages the creation, deletion, and access of files and directories.
• File Control Block: Stores file attributes and control information.
• Disk Scheduling: Organizes the order of reading or writing to disk.
8. Networking
• Network Protocols: Rules for communication between devices on a network.
• Network Interface: Manages connection between the computer and the network.
3. The most common implementation of spooling can be found in typical input/output devices such as the keyboard, mouse and printer. For example, in printer spooling, the documents/files that are sent to the printer are first stored in memory. Once the printer is ready, it fetches the data and prints it.
4. Ever had your mouse or keyboard freeze briefly? We often click around to test if it's working. When it unfreezes, all those stored clicks execute rapidly due to the device's spool.
• Advantages:
• High CPU Utilization: Enhances processing efficiency.
• Less Waiting Time: Minimizes idle time.
• Multi-Task Handling: Manages concurrent tasks effectively.
• Shared CPU Time: Increases system efficiency.
• Disadvantages:
• Complex Scheduling: Difficult to program.
• Complex Memory Management: Intricate handling of memory is required.

Multitasking Operating System / Time Sharing / Multiprogramming with Round Robin / Fair Share
1. Time sharing (or multitasking) is a logical extension of multiprogramming: it allows many users to share the computer simultaneously. The CPU executes multiple jobs (which may belong to different users) by switching among them, but the switches occur so frequently that each user gets the impression that the entire computer system is dedicated to his/her use, even though it is being shared among many users.
2. In modern operating systems we can play MP3 music, edit documents in Microsoft Word and surf in Google Chrome, all running at the same time (by context switching, the illusion of parallelism is achieved).
3. For multitasking to take place, firstly there should be multiprogramming, i.e. the presence of multiple programs ready for execution, and secondly the concept of time sharing.
Symmetric vs Asymmetric Multiprocessing
• Definition: In symmetric multiprocessing all processors are treated equally and can run any task; in asymmetric multiprocessing each processor is assigned a specific task or role.
• Task Allocation: Symmetric - any processor can perform any task; Asymmetric - tasks are divided according to processor roles.
• Performance: Symmetric - load is evenly distributed, enhancing performance; Asymmetric - performance may vary based on the specialization of tasks.

Multiprogramming vs Multiprocessing
• Definition: Multiprogramming allows multiple programs to share a single CPU; multiprocessing utilizes multiple CPUs to run multiple processes concurrently.
• Concurrency: Multiprogramming simulates concurrent execution by rapidly switching between tasks; multiprocessing achieves true parallel execution of processes.
• Complexity and Coordination: Multiprogramming is less complex, primarily managing task switching on one CPU; multiprocessing is more complex, requiring coordination among multiple CPUs.
• Soft real-time operating system - The soft real-time operating system has certain deadlines that may be missed, and the system will then take the action at a time t = 0+; the critical time of this operating system is delayed to some extent. Examples of this operating system are digital cameras, mobile phones, online data etc.

Hard Real-Time vs Soft Real-Time Operating System
• Deadline Constraints: A hard real-time system must meet strict deadlines without fail; a soft real-time system can miss deadlines occasionally without failure.
• Layered Approach - With proper hardware support, operating systems can be broken into pieces. The operating system can then retain much greater control over the computer and over the applications that make use of that computer.
1. Implementers have more freedom in changing the inner workings of the system and in creating modular operating systems.
2. Under a top-down approach, the overall functionality and features are determined and are separated into components.
3. Information hiding is also important, because it leaves programmers free to implement the low-level routines as they see fit.
4. A system can be made modular in many ways. One method is the layered approach, in which the operating system is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.

• Micro-Kernel Approach - In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach.
• This method structures the operating system by removing all nonessential components from the kernel and implementing them as system and user-level programs. The result is a smaller kernel.
• Command Interpreters - Some operating systems include the command interpreter in the kernel. Others, such as Windows and UNIX, treat the command interpreter as a special program that is running when a job is initiated or when a user first logs on (on interactive systems).
• On systems with multiple command interpreters to choose from, the interpreters are known as shells. For example, on UNIX and Linux systems, a user may choose among several different shells, including the Bourne shell, C shell, Bourne-Again shell, Korn shell, and others.

• Graphical User Interfaces - A second strategy for interfacing with the operating system is through a user-friendly graphical user interface, or GUI. Here, users employ a mouse-based window-and-menu system characterized by a desktop.
• The user moves the mouse to position its pointer on images, or icons, on the screen (the desktop) that represent programs, files, directories, and system functions. Depending on the mouse pointer's location, clicking a button on the mouse can invoke a program, select a file or directory (known as a folder), or pull down a menu that contains commands.
System call
• System calls provide the means for a user program to ask the operating system to perform tasks reserved for the operating system on the user program's behalf.
• Device management calls: request device, release device; get device attributes, set device attributes.
• Information maintenance calls: get time or date, set time or date; get process, file, or device attributes.
(Figure: a system call crosses from User Mode into Kernel Mode and back.)
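As a small illustration (not from the original slides), the C program below issues two POSIX system calls: write(), which asks the kernel to copy bytes to standard output, and getpid(), an information-maintenance call.

    #include <stdio.h>
    #include <unistd.h>   /* POSIX: write(), getpid() */

    int main(void)
    {
        /* write() is a direct system call: the process asks the kernel
           to copy bytes to file descriptor 1 (standard output). */
        const char msg[] = "hello from user mode\n";
        write(1, msg, sizeof msg - 1);

        /* getpid() is an information-maintenance system call. */
        printf("my process id: %d\n", (int)getpid());
        return 0;
    }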
Process
• In general, a process is a program in execution.
• A Program is not a Process by default. A program is a passive entity, i.e. a file containing a list of instructions stored on disk (secondary memory), often called an executable file.
• A program becomes a Process when the executable file is loaded into main memory and its PCB is created.
• A process, on the other hand, is an active entity, which requires resources like main memory, CPU time, registers, system buses etc.
• Resources: a program does not require system resources when not running; a process requires CPU time, memory, and other resources during execution.
• Even if two processes are associated with the same program, they are considered two separate execution sequences and are totally different processes. For instance, if a user has invoked many copies of a web browser program, each copy will be treated as a separate process; even though the text section is the same, the data, heap and stack sections can vary.
• Heap: memory that is dynamically allocated during process runtime.
Process States
• A process changes state as it executes. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:
• New: The process is being created.
• Dispatcher - The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.
• This function involves the following: switching context, switching to user mode, jumping to the proper location in the user program to restart that program.
• The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

CPU Bound and I/O Bound Processes
• A process execution consists of a cycle of CPU execution or wait and I/O execution or wait. Normally a process alternates between the two states.
• Process execution begins with a CPU burst that may be followed by an I/O burst, then another CPU and I/O burst, and so on. Eventually the last burst will be a CPU burst, so a process keeps switching between CPU and I/O during execution.
• I/O Bound Processes: An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations.
• CPU Bound Processes: A CPU-bound process generates I/O requests infrequently, using more of its time doing computations.
• It is important that the long-term scheduler select a good process mix of I/O-bound and CPU-bound processes. If all processes are I/O bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. Similarly, if all processes are CPU bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced.
Non-pre-emptive vs Pre-emptive Scheduling (continued)
• Resource Utilization: Non-pre-emptive scheduling may lead to inefficient CPU utilization; pre-emptive scheduling is typically more efficient, as it can quickly switch tasks.
• Suitable Applications: Non-pre-emptive suits batch systems and applications that require predictable timing; pre-emptive suits interactive and real-time systems requiring responsive behavior.
• Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput.
• Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue.
• Solution: smaller processes have to be executed before longer processes, to achieve a smaller average waiting time. Example (AT, BT): P0 (1, 100), P1 (0, 2).
• SJF supports both versions, non-pre-emptive and pre-emptive (a purely greedy approach).
• In Shortest Job First (SJF) (non-pre-emptive), once a decision is made among the available processes, the process with the smallest CPU burst is scheduled on the CPU; it cannot be pre-empted, even if a new process with a smaller CPU burst requirement than the remaining CPU burst of the running process enters the system.
• In Shortest Remaining Time First (SRTF) (pre-emptive), whenever a process enters the ready state we again make a scheduling decision: if the new process has a smaller CPU burst requirement than the remaining CPU burst of the running process, the running process is pre-empted and the new process is scheduled on the CPU.
• This version (SRTF) is also called optimal, as it guarantees minimal average waiting time.
• Disadvantages:
• A process with a longer CPU burst requirement can go into starvation, and there is no guarantee on response time.
• This algorithm cannot be implemented as there is no way to know the length of the next CPU burst. As SJF is not implementable, we can use a technique where we try to predict the CPU burst of the next coming process.
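A minimal sketch of non-pre-emptive SJF (my own illustration, not the slide's example; it assumes every job arrives at time 0, so scheduling reduces to sorting by burst time):

    #include <stdio.h>
    #include <stdlib.h>

    /* Non-pre-emptive SJF for jobs that all arrive at time 0:
       sort by burst time; the waiting time of job i is then the
       sum of the bursts scheduled before it. */
    static int cmp(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    int main(void)
    {
        int burst[] = {100, 2, 5, 8};           /* example burst times */
        int n = sizeof burst / sizeof burst[0];
        qsort(burst, n, sizeof burst[0], cmp);

        int elapsed = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += elapsed;              /* time spent in ready queue */
            elapsed += burst[i];
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);
        return 0;
    }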
Priority Scheduling
• Ties are broken using FCFS order; no importance is given to seniority or burst time. It supports both non-pre-emptive and pre-emptive versions.
• In Priority (non-pre-emptive), once a decision is made among the available processes, the process with the highest priority is scheduled on the CPU; it cannot be pre-empted, even if a new process with a priority higher than that of the running process enters the system.
• In Priority (pre-emptive), once a decision is made among the available processes, the process with the highest priority is scheduled on the CPU; if a new process with a priority higher than that of the running process enters the system, then we do a context switch and the processor is given to the new process with the higher priority.
• There is no general agreement on whether 0 is the highest or the lowest priority; it can vary from system to system.
• Example (AT, BT, Priority; H = highest): P2 (2, 3, 7), P3 (3, 5, 8 H), P4 (3, 1, 5), P5 (4, 2, 6).
• Disadvantages:
• A process with a smaller priority may starve for the CPU.
• No idea of response time or waiting time.
Round Robin
• This algorithm is designed for time sharing systems, where the idea is not to complete one process and then start another, but to be responsive and divide the time of the CPU among the processes in the ready state (circularly).
• The CPU scheduler goes around the ready queue, allocating the CPU to each process for a maximum of one time quantum, say q, up to which a process can hold the CPU in one go. Within it, either the process terminates (if its CPU burst is less than the given time quantum), or a context switch is executed: the process must release the CPU voluntarily, enter the ready queue and wait for its next chance.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units. Each process must wait no longer than (n − 1) × q time units until its next time quantum.

Example (CT, TAT and WT are found by running the schedule; TAT = CT − AT, WT = TAT − BT):
P. No | Arrival Time (AT) | Burst Time (BT)
P0 | 0 | 4
P1 | 1 | 5
P2 | 2 | 2
P3 | 3 | 1
P4 | 4 | 6
P5 | 6 | 3
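A compact round-robin simulation sketch (my own illustration; for simplicity it ignores arrival times and assumes all six processes are already in the ready queue at time 0, unlike the table above, and uses q = 2):

    #include <stdio.h>

    /* Round-robin waiting-time sketch: all processes arrive at time 0,
       quantum q; we repeatedly give each unfinished process up to q units. */
    int main(void)
    {
        int burst[] = {4, 5, 2, 1, 6, 3};    /* example CPU bursts (P0..P5) */
        int n = 6, q = 2;
        int remain[6], completion[6];
        for (int i = 0; i < n; i++) remain[i] = burst[i];

        int time = 0, done = 0;
        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (remain[i] == 0) continue;
                int slice = remain[i] < q ? remain[i] : q;
                time += slice;
                remain[i] -= slice;
                if (remain[i] == 0) { completion[i] = time; done++; }
            }
        }
        for (int i = 0; i < n; i++)
            printf("P%d: TAT=%d WT=%d\n", i, completion[i], completion[i] - burst[i]);
        return 0;
    }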
Multi-level Queue Scheduling
• Each queue has its own scheduling algorithm. For example:
• System processes might need a priority algorithm.
• Interactive processes might be scheduled by an RR algorithm.
• Batch processes might be scheduled by an FCFS algorithm.
• In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling or round robin with a different time quantum per queue.

Multi-level Feedback Queue Scheduling
• The problem with multi-level queue scheduling is how to decide the number of ready queues and the scheduling algorithm inside each queue and between the queues; also, once a process enters a specific queue, we cannot change its queue after that.
• The multilevel feedback queue scheduling algorithm allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it will be moved to a lower-priority queue. In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
• A process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and put into queue 2. Processes in queue 2 are run on an FCFS basis, but only when queues 0 and 1 are empty.
In general, a multilevel feedback queue scheduler is defined by the following
parameters:
• The number of queues
• The scheduling algorithm for each queue
• The method used to determine when to upgrade a process to a higher-
priority queue
• The method used to determine when to demote a process to a lower-
priority queue.
• The definition of a multilevel feedback queue scheduler makes it the most
general CPU-scheduling algorithm. It can be configured to match a specific
system under design. Unfortunately, it is also the most complex algorithm,
since defining the best scheduler requires some means by which to select
values for all the parameters.
• Bounded Waiting: There exists a bound, or limit, on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section; no process should wait indefinitely to enter the CS.
• Here we will use a Boolean array flag with two cells, where each cell is initialized to F.

    P0:                       P1:
    while (1)                 while (1)
    {                         {
        flag[0] = T;              flag[1] = T;
        while (flag[1]);          while (flag[0]);
        Critical Section          Critical Section
        flag[0] = F;              flag[1] = F;
        Remainder Section         Remainder Section
    }                         }

• This solution satisfies the Mutual Exclusion criterion.
• But in trying to achieve progress, the system can end up in a deadlock state: if both processes set their flags to T at the same time, each waits on the other forever.
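Peterson's solution (named in the syllabus above) repairs this by adding a turn variable. A minimal C11 sketch, my own illustration rather than the slide's code; the atomics keep the compiler and CPU from reordering the flag/turn accesses:

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Peterson's solution for two processes (i = 0 or 1). */
    atomic_bool flag[2];
    atomic_int  turn;

    void enter_cs(int i)
    {
        int j = 1 - i;
        atomic_store(&flag[i], true);   /* I want to enter */
        atomic_store(&turn, j);         /* but let the other go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                           /* busy wait */
    }

    void exit_cs(int i)
    {
        atomic_store(&flag[i], false);
    }

Process i brackets its critical section with enter_cs(i) and exit_cs(i); because only one process can lose the "turn" tiebreak, the deadlock of the flag-only version cannot occur.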
Operating System Solution (Semaphores)
1. Semaphores are synchronization tools using which we will attempt an n-process solution: Peterson's Solution was confined to just two processes, and since a general system can have n processes, semaphores provide an n-process solution.
2. A semaphore S is a simple integer variable that, apart from initialization, can be accessed only through two standard atomic operations: wait(S) and signal(S).
3. The wait(S) operation was originally termed P(S), and signal(S) was originally called V(S).
4. While solving the Critical Section Problem only, we initialize the semaphore S = 1.
5. Semaphores ensure Mutual Exclusion and Progress but do not ensure bounded waiting.

    Wait(S)              Signal(S)
    {                    {
        while (s <= 0);      s++;
        s--;             }
    }

    Pi()
    {
        while (T)
        {
            Initial Section
            wait(S)
            Critical Section
            signal(S)
            Remainder Section
        }
    }
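The busy-wait inside wait(S) above is only for teaching; a real kernel blocks the caller instead of spinning. A sketch of that idea with POSIX threads (my own illustration; the type and function names are hypothetical):

    #include <pthread.h>

    /* A counting semaphore without busy waiting: wait sleeps on a
       condition variable instead of spinning on s <= 0. */
    typedef struct {
        int             s;
        pthread_mutex_t lock;
        pthread_cond_t  nonzero;
    } semaphore;

    void sem_wait_op(semaphore *sem)        /* P(S) */
    {
        pthread_mutex_lock(&sem->lock);
        while (sem->s <= 0)
            pthread_cond_wait(&sem->nonzero, &sem->lock);
        sem->s--;
        pthread_mutex_unlock(&sem->lock);
    }

    void sem_signal_op(semaphore *sem)      /* V(S) */
    {
        pthread_mutex_lock(&sem->lock);
        sem->s++;
        pthread_cond_signal(&sem->nonzero);
        pthread_mutex_unlock(&sem->lock);
    }

For the critical-section use above, it would be initialized to 1, e.g. semaphore S = {1, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER};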
• Here in this section we will discuss a number of classical problems:
• Producer-Consumer problem / Bounded Buffer problem
• Reader-Writer problem
• Dining Philosopher problem

Producer-Consumer Problem
• A producer needs to check whether the buffer has overflowed after producing an item, before accessing the buffer.
• Similarly, a consumer needs to check for an underflow before accessing the buffer and then consuming an item.
• Also, the producer and consumer must be synchronized, so that once a producer or a consumer is accessing the buffer, the other must wait.
• The standard solution uses three semaphores: S = 1 (mutual exclusion on the buffer), E = n (count of empty slots) and F = 0 (count of full slots):

    Producer()                      Consumer()
    {                               {
        while (T)                       while (T)
        {                               {
            // Produce an item              wait(F);
            wait(E);                        wait(S);
            wait(S);                        // pick item from buffer
            // add item to buffer           signal(S);
            signal(S);                      signal(E);
            signal(F);                      // Consume item
        }                               }
    }                               }

Reader-Writer Problem
• Writers get exclusive access through a semaphore wrt (initially 1); readers share access and count themselves with readcount (initially 0), protected by a semaphore mutex:

    Writer()                        Reader()
    {                               {
        wait(wrt);                      wait(mutex);
        // Write CS                     readcount++;
        signal(wrt);                    if (readcount == 1) wait(wrt);
    }                                   signal(mutex);
                                        // Read CS
                                        wait(mutex);
                                        readcount--;
                                        if (readcount == 0) signal(wrt);
                                        signal(mutex);
                                    }
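A runnable version of the producer-consumer pseudocode using POSIX semaphores (my own illustration; the buffer size, item count and names are arbitrary):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    /* Bounded buffer: S guards the buffer, E counts empty slots,
       F counts full slots (same roles as in the pseudocode above). */
    #define N 8
    static int buffer[N], in = 0, out = 0;
    static sem_t S, E, F;

    static void *producer(void *arg)
    {
        for (int item = 0; item < 20; item++) {
            sem_wait(&E);                  /* wait for an empty slot */
            sem_wait(&S);
            buffer[in] = item; in = (in + 1) % N;
            sem_post(&S);
            sem_post(&F);                  /* one more full slot */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int k = 0; k < 20; k++) {
            sem_wait(&F);                  /* wait for a full slot */
            sem_wait(&S);
            int item = buffer[out]; out = (out + 1) % N;
            sem_post(&S);
            sem_post(&E);                  /* one more empty slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&S, 0, 1); sem_init(&E, 0, N); sem_init(&F, 0, 0);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL); pthread_join(c, NULL);
        return 0;
    }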
Test and Set
• Many modern computer systems provide special hardware instructions that allow us to test and modify the content of a word atomically, that is, as one uninterruptible unit. We can use these special instructions to solve the critical-section problem in a relatively simple manner.
• The important characteristic of this instruction is that it is executed atomically. Thus, if two test_and_set() instructions are executed simultaneously (each on a different CPU), they will be executed sequentially in some arbitrary order.

    boolean test_and_set(boolean *target)
    {
        boolean rv = *target;
        *target = true;
        return rv;
    }

    while (1)
    {
        while (test_and_set(&lock));   /* busy wait */
        /* critical section */
        lock = false;
        /* remainder section */
    }
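In portable C11 the same idea is available as atomic_flag; a minimal spinlock sketch (my own illustration):

    #include <stdatomic.h>

    /* atomic_flag_test_and_set() is the hardware test-and-set. */
    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void) { while (atomic_flag_test_and_set(&lock)) ; }
    void release(void) { atomic_flag_clear(&lock); }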
• Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes (e.g. plate and spoon).
• No pre-emption: Resources cannot be pre-empted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.
• Circular wait
3. Detection: - We can allow the system to enter a deadlocked state, then detect it, and recover.
4. Ignorance: - We can ignore the problem altogether and pretend that deadlocks never occur
in the system.
Banker's Algorithm Data Structures
• Allocation: An n×m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i][j] equals k, then process Pi is currently allocated k instances of resource type Rj.
• Need/Demand/Requirement: An n×m matrix indicates the remaining resource need of each process. If Need[i][j] equals k, then process Pi may need k more instances of resource type Rj to complete its task. Note that Need[i][j] = Max[i][j] − Allocation[i][j].
• These data structures vary over time in both size and value.

Example (resource types E, F, G):

    Allocation          Current Need
         E F G               E F G
    P0   1 0 1          P0   3 3 0
    P1   1 1 2          P1   1 0 2
    P2   1 0 3          P2   0 3 0
    P3   2 0 0          P3   3 4 1

Safety Algorithm
We can now present the algorithm for finding out whether or not a system is in a safe state. This algorithm can be described as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n − 1.
2. Find an index i such that both Finish[i] == false and Needi ≤ Work. If no such i exists, go to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.

This algorithm may require an order of m × n² operations to determine whether a state is safe.
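A C sketch of the safety algorithm (my own illustration). The Allocation and Need matrices are the example above; the Available vector is an assumed value, since the extracted slide does not show it clearly:

    #include <stdio.h>
    #include <stdbool.h>

    #define NP 4   /* processes */
    #define NR 3   /* resource types E, F, G */

    int alloc[NP][NR] = {{1,0,1},{1,1,2},{1,0,3},{2,0,0}};
    int need [NP][NR] = {{3,3,0},{1,0,2},{0,3,0},{3,4,1}};
    int work [NR]     = {3,3,0};            /* assumed Available */

    int main(void)
    {
        bool finish[NP] = {false};
        int order[NP], k = 0;

        for (bool progress = true; progress; ) {
            progress = false;
            for (int i = 0; i < NP; i++) {
                if (finish[i]) continue;
                bool ok = true;
                for (int j = 0; j < NR; j++)
                    if (need[i][j] > work[j]) { ok = false; break; }
                if (!ok) continue;
                for (int j = 0; j < NR; j++)   /* Work = Work + Allocation_i */
                    work[j] += alloc[i][j];
                finish[i] = true;
                order[k++] = i;
                progress = true;
            }
        }
        if (k == NP) {
            printf("safe sequence:");
            for (int i = 0; i < NP; i++) printf(" P%d", order[i]);
            printf("\n");
        } else
            printf("unsafe state\n");
        return 0;
    }

With the assumed Available of (3, 3, 0), this prints the safe sequence P0 P2 P1 P3.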
Resource Allocation Graph
• Deadlock can also be described in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E.
• The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
• A directed edge from process Pi to resource type Rj, denoted by Pi → Rj, is called a request edge; it signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
• A directed edge from resource type Rj to process Pi, denoted by Rj → Pi, is called an assignment edge; it signifies that an instance of resource type Rj has been allocated to process Pi.
• If every resource type has only one instance in the resource allocation graph, then the presence of a cycle is a necessary and sufficient condition for deadlock.
• Resource pre-emption:
• Selecting a victim
• Partial or complete rollback

Ignorance:
3. Despite this, many operating systems opt for this approach to save on the cost of implementing deadlock detection.
4. Deadlocks are often rare, so the trade-off may seem justified. Manual restarts may be required when a deadlock occurs.
• In a number of applications, especially those where the work is of a repetitive nature, like a web server (i.e. for every client we have to run a similar type of code), we have to create a separate process every time a new request is served.
• So a better solution is that, instead of creating a new process from scratch every time, we have a short command with which we can do this: the fork command.
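A minimal fork() example in C (my own illustration of the idea, not the slide's code):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* fork() duplicates the calling process: parent and child continue
       from the same point, distinguished by the return value. */
    int main(void)
    {
        pid_t pid = fork();

        if (pid < 0) {
            perror("fork failed");
            return 1;
        } else if (pid == 0) {
            printf("child:  pid=%d, serving one request\n", (int)getpid());
        } else {
            wait(NULL);                    /* parent waits for the child */
            printf("parent: pid=%d, child was %d\n", (int)getpid(), (int)pid);
        }
        return 0;
    }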
Memory Hierarchy
• Let us first understand what we need from a memory:
• Large capacity
• Less per unit cost
• Less access time(fast access)
• The memory hierarchy system consists of all storage devices employed in a computer system.
Duty of Operating System
• The operating system is responsible for the following activities in connection with memory management:
1. Address Translation: Convert logical addresses to physical addresses for data retrieval.
2. Memory Allocation and Deallocation: Decide which processes or data segments to load or remove from memory as needed.
3. Memory Tracking: Monitor which parts of memory are in use and by which processes.
4. Memory Protection: Implement safeguards to restrict unauthorized access to memory, ensuring both process isolation and data integrity.

• There can be two approaches for storing a process in main memory:
1. Contiguous allocation policy
2. Non-contiguous allocation policy
• In contiguous allocation, a process must be stored in main memory in a contiguous fashion.
• So, if the value of the logical address is less than the limit, it is a valid request and we can continue with the translation; otherwise, it is an illegal request, which is immediately trapped by the OS.
Space Allocation Methods in Contiguous Allocation
• Fixed size partitioning: Here we divide memory into fixed size partitions, which may be of different sizes; if a process requests some space, a partition is allocated entirely if possible, and the remaining space will be wasted internally.
• Variable size partitioning: In this policy, at the start we treat the memory as a whole, a single chunk, and whenever a process requests some space, exactly that much space is allocated if possible, and the remaining space can be reused again.

Partition selection policies:
• First fit: allocate the first hole that is big enough.
• Advantage: simple, easy to use, easy to understand.
• Disadvantage: poor performance, both in terms of time and space.
• Best fit: searches the entire memory and allocates the smallest partition that is big enough.
• Advantage: performs best in a fixed size partitioning scheme.
• Disadvantage: difficult to implement; performs worst in variable size partitioning, as the remaining spaces are of very small size.
• Worst fit: also searches the entire memory, and allocates the largest partition possible.
• Advantage: performs best in variable size partitioning.
• Disadvantage: performs worst in fixed size partitioning, resulting in large internal fragmentation.

Q: Consider five memory partitions of size 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB, where KB refers to kilobyte. These partitions need to be allotted to four processes of sizes 212 KB, 417 KB, 112 KB and 426 KB, in that order.
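A sketch that runs all three policies on the question above (my own illustration; it reads the question as fixed-size partitioning, so each process takes one whole partition):

    #include <stdio.h>

    /* mode: 0 = first fit, 1 = best fit, 2 = worst fit;
       returns the chosen partition index, or -1 if none fits. */
    int pick(int part[], int used[], int n, int size, int mode)
    {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (used[i] || part[i] < size) continue;
            if (mode == 0) return i;
            if (best == -1 ||
                (mode == 1 && part[i] < part[best]) ||
                (mode == 2 && part[i] > part[best]))
                best = i;
        }
        return best;
    }

    int main(void)
    {
        int part[] = {100, 500, 200, 300, 600};
        int proc[] = {212, 417, 112, 426};
        const char *name[] = {"first", "best", "worst"};

        for (int mode = 0; mode < 3; mode++) {
            int used[5] = {0};
            printf("%s fit:", name[mode]);
            for (int p = 0; p < 4; p++) {
                int i = pick(part, used, 5, proc[p], mode);
                if (i >= 0) { used[i] = 1; printf(" %d->%dKB", proc[p], part[i]); }
                else printf(" %d->none", proc[p]);
            }
            printf("\n");
        }
        return 0;
    }

Under this fixed-partition reading, best fit places all four processes (212→300, 417→500, 112→200, 426→600), while first fit and worst fit both leave the 426 KB process unplaced.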
• Internal fragmentation: Internal fragmentation occurs with fixed size partitions; when a partition allocated to a process is the same size as, or larger than, the request, the space inside the partition left unused by the process is called internal fragmentation.
• How can we solve external fragmentation?
• We can swap processes in main memory after fixed intervals of time: they can be swapped into one part of the memory so that the other part becomes empty (compaction, defragmentation). This solution is very costly in terms of time, as it takes a lot of time to swap processes while the system is running.
Address translation in paging:
1. The page number (p) is used as an index into the page table.
2. The page table base register (PTBR) provides the base of the page table, and the corresponding page entry is then accessed using p.
3. There we find the corresponding frame number (the base address of the frame in main memory in which the page is stored).
4. Combining the frame number with the instruction offset gives the physical address, which is used to access main memory.

Facts about the page table:
1. Every process has a separate page table.
2. The number of entries a process has in its page table is the number of pages the process has in secondary memory.
3. The size of each entry in the page table is the same: it is the corresponding frame number.
4. The page table is a data structure which is itself stored in main memory.
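The translation steps above reduce to a shift and a mask. A small sketch (my own illustration; the page size, table contents and logical address are made-up values):

    #include <stdio.h>
    #include <stdint.h>

    /* 12-bit offset means 4 KB pages; the page table is just an
       array indexed by page number, holding frame numbers. */
    #define OFFSET_BITS 12
    #define PAGE_SIZE   (1u << OFFSET_BITS)

    int main(void)
    {
        uint32_t page_table[] = {7, 3, 0, 9};     /* page -> frame */
        uint32_t logical = 0x2ABC;                /* example logical address */

        uint32_t p = logical >> OFFSET_BITS;      /* page number */
        uint32_t d = logical & (PAGE_SIZE - 1);   /* offset within page */
        uint32_t physical = (page_table[p] << OFFSET_BITS) | d;

        printf("p=%u d=0x%X -> physical=0x%X\n", p, d, physical);
        return 0;
    }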
• The TLB is used with page tables in the following way. The TLB contains only a few of the page-table entries. When a logical address is generated by the CPU, its page number is presented to the TLB. If the page number is found, its frame number is immediately available and is used to access memory.
• If the page number is not in the TLB (known as a TLB miss), then a memory reference to the page table must be made.
• Also, we add the page number and frame number to the TLB, so that they will be found quickly on the next reference.
• If the TLB is already full of entries, the operating system must select one for replacement, i.e. apply a replacement policy.
• The percentage of times that a particular page number is found in the TLB is called the hit ratio. For example (assumed values): with a 20 ns TLB lookup, a 100 ns memory access and an 80% hit ratio, the effective access time is 0.80 × 120 ns + 0.20 × 220 ns = 140 ns.
• Solution:
• Use multiple TLBs, but this will be costly.
• Some TLBs allow certain entries to be wired down, meaning that they cannot be removed from the TLB. Typically, TLB entries for kernel code are wired down.
• So we have to find what the size of the page should be, where both costs are minimal.
• Segment Table: Each entry in the segment table has a segment base and a segment limit. The segment base contains the starting physical address where the segment resides in memory, and the segment limit specifies the length of the segment.
• The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit. If it is not, we trap to the operating system.

Segmentation with Paging
• Since segmentation also suffers from external fragmentation, it is better to divide the segments into pages.
• In segmentation with paging, a process is divided into segments, and the segments are further divided into pages.
• One can argue that segmentation with paging is quite similar to multilevel paging, but it is actually better, because here, when the page table is divided, the sizes of the partitions can be different (just as the sizes of different chapters of a book can be different). All other properties of segmentation with paging are the same as in multilevel paging.
4. Now we modify the internal table kept with the process (the PCB) and the page table to indicate that the page is now in memory. We restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.

Performance of Demand Paging
• Effective Access Time for demand paging: EAT = (1 − p) × ma + p × page-fault service time, where p is the page-fault rate and ma is the memory access time.
• For example (assumed values): with ma = 200 ns, a page-fault service time of 8 ms and p = 0.001, EAT = 0.999 × 200 ns + 0.001 × 8,000,000 ns ≈ 8.2 μs.
• The FIFO page-replacement algorithm is easy to understand and program. However, its performance is not always good.
• Belady's Anomaly: for some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases.

Optimal Page Replacement Algorithm
• Replace the page that will not be used for the longest period of time.
• It has the lowest page-fault rate of all algorithms: it guarantees the lowest possible page-fault rate for a fixed number of frames and will never suffer from Belady's anomaly.
• Unfortunately, the optimal page-replacement algorithm is difficult to implement, because it requires future knowledge of the reference string. It is mainly used for comparison studies.
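A small FIFO fault counter (my own illustration; the reference string is the classic one that exhibits Belady's anomaly: with 3 frames it gives 9 faults, with 4 frames it gives 10):

    #include <stdio.h>

    /* FIFO page replacement: count page faults for a reference string. */
    int main(void)
    {
        int ref[] = {1,2,3,4,1,2,5,1,2,3,4,5};
        int n = sizeof ref / sizeof ref[0];
        int frames = 3;                       /* try 3 vs 4 */

        int mem[16], next = 0, loaded = 0, faults = 0;
        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < loaded; j++)
                if (mem[j] == ref[i]) { hit = 1; break; }
            if (hit) continue;
            faults++;
            if (loaded < frames) mem[loaded++] = ref[i];
            else { mem[next] = ref[i]; next = (next + 1) % frames; }
        }
        printf("page faults with %d frames: %d\n", frames, faults);
        return 0;
    }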
• This model uses a parameter Δ to define the working set window. The set of pages in the most recent Δ page references is the working set. The accuracy of the working set depends on the selection of Δ: if Δ is too small, it will not encompass the entire locality; if Δ is too large, it may overlap several localities.

Disk Basics
• Magnetic disks serve as the main secondary storage in computers. Each disk is divided into tracks and sectors for logical data storage.
• Disks spin at speeds ranging from 60 to 250 rotations per second, commonly noted in RPM like 5,400 or 15,000.
4. When one request is completed, the operating system chooses which pending request to service next. How does the
operating system make this choice? Any one of several disk-scheduling algorithms can be used.
Advantages:
• Easy to understand, easy to use
• Every request gets a fair chance
• No starvation (may suffer from convoy effect)
Disadvantages:
• Does not try to optimize seek time, or waiting time.
• Advantages:
• Seek movements decreases
• Throughput increases
• Disadvantages:
• Overhead to calculate the closest request.
• Can cause Starvation for a request which is far from the current location of the header
• High variance of response time and waiting time as SSTF favors only closest requests
• At the other end, the direction of head movement is reversed, and servicing continues. The head
continuously scans back and forth across the disk.
Advantages:
• Simple, easy to understand and use
• No starvation, but some requests may wait longer
• Low variance and Average response time
Disadvantages:
• Long waiting time for requests for locations just visited by disk arm.
• Unnecessary move to the end of the disk, even if there is no request.
Advantages:
• Provides more uniform wait time compared to SCAN
• Better response time compared to scan
Disadvantage:
• More seek movement is needed in order to return to the starting position
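A sketch comparing total head movement under FCFS and SSTF (my own illustration; the request queue and initial head position are assumed textbook-style values):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int req[] = {98, 183, 37, 122, 14, 124, 65, 67};
        int n = 8, head = 53;

        /* FCFS: service in arrival order. */
        int pos = head, fcfs = 0;
        for (int i = 0; i < n; i++) { fcfs += abs(req[i] - pos); pos = req[i]; }

        /* SSTF: always pick the pending request closest to the head. */
        int done[8] = {0}, sstf = 0;
        pos = head;
        for (int k = 0; k < n; k++) {
            int best = -1;
            for (int i = 0; i < n; i++)
                if (!done[i] && (best == -1 || abs(req[i]-pos) < abs(req[best]-pos)))
                    best = i;
            sstf += abs(req[best] - pos);
            pos = req[best];
            done[best] = 1;
        }
        printf("FCFS moves %d cylinders, SSTF moves %d\n", fcfs, sstf);
        return 0;
    }

On these values FCFS moves 640 cylinders while SSTF moves 236, which is why SSTF's throughput is higher.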
(Figure: moving-head disk mechanism, showing platter, arm, read-write head, cylinder c and sector s.)
• Total Transfer Time = Seek Time + Rotational Latency + Transfer Time
• Seek Time: the time taken by the read/write head to reach the correct track (always given in the question).
• Rotational Latency: the time the read/write head spends waiting for the correct sector. In general it is a random value, so for average analysis we consider the time taken by the disk to complete half a rotation.
• Transfer Time: the time taken by the read/write head either to read or to write on the disk. In general, we assume that in one complete rotation the head can read/write an entire track, so total transfer time = (File Size / Track Size) × time taken to complete one revolution.

Q: Consider a disk with 512 tracks, where each track is capable of holding 128 sectors and each sector holds 256 bytes. Find the capacity of a track and of the disk, and the number of bits required to reach the correct track, sector, and byte on the disk.
Working (standard arithmetic): track capacity = 128 × 256 B = 32 KB; disk capacity = 512 × 32 KB = 16 MB; bits for the track number = log2 512 = 9, bits for the sector within a track = log2 128 = 7, bits for a byte within a sector = log2 256 = 8, so a full byte address on the disk needs 9 + 7 + 8 = 24 bits.
Each method has advantages and disadvantages. Although some systems support all three, it is
more common for a system to use one method for all files.
• Disadvantage: -
• Only sequential access is possible, To find the ith block of a file, we must start at the beginning
and follow the pointers until we get to the ith block.
• Another disadvantage is the space required for the pointers, so each file requires slightly more
space than it would otherwise.
• Multilevel index: A variant of linked representation uses a first-level index block to point to a set of second-level index blocks, which in turn point to the file blocks. To access a block, the operating system uses the first-level index to find a second-level index block and then uses that block to find the desired data block. This approach could be continued to a third or fourth level, depending on the desired maximum file size.
• Disadvantage: Indexed allocation does suffer from wasted space. The pointer overhead of the index block is generally greater than the pointer overhead of linked allocation.
• Combined scheme: In UNIX-based systems, the file's inode stores the first 15 pointers of the index block. The first 12 point directly to data blocks, eliminating the need for a separate index block for small files.
• The next three pointers are for indirect blocks: the first for a single indirect block, the second for a double indirect block, and the last for a triple indirect block, each adding one more level of indirection to the actual data blocks. (For example, with 4 KB blocks and 4-byte pointers, the 12 direct pointers cover 48 KB, the single indirect block adds 1024 × 4 KB = 4 MB, the double indirect adds 4 GB, and the triple indirect adds 4 TB.)
File organization
• File organization refers to the way data is stored in a file. File organization is very important
because it determines the methods of access, efficiency, flexibility and storage devices to use.
• Four methods of organizing files:
• 1. Sequential file organization:
• a. Records are stored and accessed in a particular sorted order using a key field.
• b. Retrieval requires searching sequentially through the entire file record by record to
the end.
• 2. Random or direct file organization:
• a. Records are stored randomly but accessed directly.
• b. To access a file which is stored randomly, a record key is used to determine where a
record is stored on the storage media.
• c. Magnetic and optical disks allow data to be stored and accessed randomly.
• Here, 'r' indicates read permission, 'w' indicates write permission, and '-' indicates no permission.
• Access Control Lists (ACLs): Each object's column in the matrix can be converted to an Access Control List, which lists all subjects and their corresponding permissions for that object. For example:
• File A: User 1: r-w; User 2: r
• File B: User 1: r; User 2: w; User 3: r
• Capability Lists: Each subject's row in the matrix can be converted into a Capability List, which lists all objects and the operations the subject can perform on them. For example:
• User 1: File A: r-w; File B: r
• User 2: File A: r; File B: w; File C: r-w