OS in 6 Hours
Unit – I Introduction : Operating system and functions, Classification of Operating systems- Batch, Interactive, Time sharing, Real
Time System, Multiprocessor Systems, Multiuser Systems, Multiprocess Systems, Multithreaded Systems, Operating System
Structure- Layered structure, System Components, Operating System services, Reentrant Kernels, Monolithic and Microkernel
Systems.
Unit – II CPU Scheduling: Scheduling Concepts, Performance Criteria, Process States, Process Transition Diagram, Schedulers,
Process Control Block (PCB), Process address space, Process identification information, Threads and their management,
Scheduling Algorithms, Multiprocessor Scheduling. Deadlock: System model, Deadlock characterization, Prevention, Avoidance
and detection, Recovery from deadlock.
Unit – III Concurrent Processes: Process Concept, Principle of Concurrency, Producer / Consumer Problem, Mutual Exclusion,
Critical Section Problem, Dekker’s solution, Peterson’s solution, Semaphores, Test and Set operation; Classical Problem in
Concurrency- Dining Philosopher Problem, Sleeping Barber Problem; Inter Process Communication models and Schemes, Process
generation.
Unit – IV Memory Management: Basic bare machine, Resident monitor, Multiprogramming with fixed partitions,
Multiprogramming with variable partitions, Protection schemes, Paging, Segmentation, Paged segmentation, Virtual memory
concepts, Demand paging, Performance of demand paging, Page replacement algorithms, Thrashing, Cache memory
organization, Locality of reference.
Unit – V I/O Management and Disk Scheduling: I/O devices, and I/O subsystems, I/O buffering, Disk storage and disk scheduling,
RAID. File System: File concept, File organization and access mechanism, File directories, and File sharing, File system
Knowledge Gate Website
implementation issues, File system protection and security.
Chapters of This Video
(Chapter-1: Introduction)- Operating system, Goal & functions, System Components, Operating System services,
Classification of Operating systems- Batch, Interactive, Multiprogramming, Multiuser Systems, Time sharing,
Multiprocessor Systems, Real Time System.
(Chapter-2: Operating System Structure)- Layered structure, Monolithic and Microkernel Systems, Interface, System Call.
(Chapter-3: Process Basics)- Process Control Block (PCB), Process identification information, Process States, Process
Transition Diagram, Schedulers, CPU Bound and I/O Bound, Context Switch.
(Chapter-4: CPU Scheduling)- Scheduling Performance Criteria, Scheduling Algorithms.
(Chapter-5: Process Synchronization)- Race Condition, Critical Section Problem, Mutual Exclusion, Dekker’s solution,
Peterson’s solution, Process Concept, Principle of Concurrency.
(Chapter-6: Semaphores)- Classical Problem in Concurrency- Producer/Consumer Problem, Reader-Writer Problem,
Dining Philosopher Problem, Sleeping Barber Problem, Test and Set operation.
(Chapter-7: Deadlock)- System model, Deadlock characterization, Prevention, Avoidance and detection, Recovery from
deadlock.
(Chapter-8)- Fork Command, Multithreaded Systems, Threads and their management
(Chapter-9: Memory Management)- Memory Hierarchy, Locality of reference, Multiprogramming with fixed partitions,
Multiprogramming with variable partitions, Protection schemes, Paging, Segmentation, Paged segmentation.
(Chapter-10: Virtual memory)- Demand paging, Performance of demand paging, Page replacement algorithms,
Thrashing.
(Chapter-11: Disk Management)- Disk Basics, Disk storage and disk scheduling, Total Transfer time.
(Chapter-12: File System)- File allocation Methods, Free-space Management, File organization and access mechanism,
File directories, and File sharing, File system implementation issues, File system protection and security.
What is Operating System
1. Intermediary – Acts as an intermediary between user & hardware.
2. Memory Management: Manages allocation and deallocation of physical and virtual memory
spaces to various programs.
3. I/O Device Management: Handles I/O operations of peripheral devices like disks, keyboards,
etc., including buffering and caching.
4. File Management: Manages files on storage devices, including their information, naming,
permissions, and hierarchy.
5. Security & Protection: Ensures system protection against unauthorized access and other
security threats through authentication, authorization, and encryption.
Major Components of operating system
1. Kernel
• Central Component: Manages the system's resources and communication between hardware and software.
2. Process Management
• Process Scheduler: Determines the execution of processes.
• Process Control Block (PCB): Contains process details such as process ID, priority, status, etc.
• Concurrency Control: Manages simultaneous execution.
3. Memory Management
• Physical Memory Management: Manages RAM allocation.
• Virtual Memory Management: Simulates additional memory using disk space.
• Memory Allocation: Assigns memory to different processes.
7. User Interface
• Command Line Interface (CLI): Text-based user interaction.
• Graphical User Interface (GUI): Visual, user-friendly interaction with the OS.
8. Networking
• Network Protocols: Rules for communication between devices on a network.
• Network Interface: Manages connection between the computer and the network.
The processor will not wait for anyone.
• Non-Multiprogrammed: CPU sits idle
while waiting for a job to complete.
• Disadvantages:
• Complex Scheduling: Difficult to program.
• Complex Memory Management: Intricate handling of memory is required.
Feature | Multiprogramming (single processor) | Multiprocessing (multiple processors)
Task Allocation | Any processor can perform any task. | Tasks are divided according to processor roles.
Hardware Requirements | Requires only one CPU and manages multiple tasks on it. | Requires multiple CPUs, enabling parallel processing.
Complexity and Coordination | Less complex, primarily managing task switching on one CPU. | More complex, requiring coordination among multiple CPUs.
[Diagram: system-call interface — user programs request operating-system services (e.g., load/execute a program, open/close a file) through system calls that execute in kernel mode.]
Process
In general, a process is a program in execution.
A program is not a process by default. A program is a passive entity, i.e. a
file containing a list of instructions stored on disk (secondary memory),
often called an executable file.
A program becomes a process when the executable file is loaded into main
memory and its PCB is created.
A process, on the other hand, is an active entity, which requires resources
like main memory, CPU time, registers, system buses, etc.
Feature | Program | Process
State | Static; exists as code on disk or in storage. | Dynamic; exists in memory and has a state (e.g., running, waiting).
Resources | Does not require system resources when not running. | Requires CPU time, memory, and other resources during execution.
• This function involves the following: Switching context, switching to user mode, jumping to
the proper location in the user program to restart that program.
• The dispatcher should be as fast as possible, since it is invoked during every process switch.
The time it takes for the dispatcher to stop one process and start another running is known as
the dispatch latency.
Feature | Non-pre-emptive | Pre-emptive
Resource Utilization | May lead to inefficient CPU utilization. | Typically more efficient, as it can quickly switch tasks.
Suitable Applications | Batch systems and applications that require predictable timing. | Interactive and real-time systems requiring responsive behavior.
• Burst Time (BT): Amount of CPU time required by the process to finish its execution.
• Turn Around Time (TAT): TAT = Completion Time (CT) − Arrival Time (AT) = Waiting Time (WT) + Burst Time (BT)
Case 1 (long job arrives first):
Process | AT | BT | CT | TAT | WT
P0 | 0 | 100 | 100 | 100 | 0
P1 | 1 | 2 | 102 | 101 | 99
Average WT = 49.5

Case 2 (short job arrives first):
Process | AT | BT | CT | TAT | WT
P0 | 1 | 100 | 102 | 101 | 1
P1 | 0 | 2 | 2 | 2 | 0
Average WT = 0.5
• Solution: smaller processes should be executed before longer processes to achieve a lower
average waiting time.
• The FCFS algorithm is thus particularly troublesome for time-sharing systems (due to its
non-pre-emptive nature), where it is important that each user get a share of the CPU at
regular intervals.
• This version (SRTF) is also called optimal, as it guarantees minimal average waiting
time.
Process | AT | BT | CT | TAT | WT
P1 | 2 | 5 | 10 | 8 | 3
P2 | 3 | 1 | 4 | 1 | 0
P3 | 4 | 2 | 6 | 2 | 0
P4 | 5 | 8 | 18 | 13 | 5
Average WT = 2, Average TAT = 6
• Disadvantage
• Here a process with a longer CPU burst requirement may go into starvation and can have a
very high response time.
• This algorithm cannot be implemented exactly, as there is no way to know the length of the
next CPU burst. Since SJF is not directly implementable, we can use a technique that tries
to predict the CPU burst of the next process.
• Tie is broken using FCFS order; neither seniority nor burst time is given importance. It supports
both non-pre-emptive and pre-emptive versions.
• In Priority (non-pre-emptive), once the process with the highest priority among the available
processes is scheduled on the CPU, it cannot be pre-empted, even if a new process with a
priority higher than that of the running process enters the system.
• In Priority (pre-emptive), once the process with the highest priority among the available
processes is scheduled on the CPU, if a new process with a priority higher than that of the
running process enters the system, then we do a context switch and the processor is given to
the new process with the higher priority.
• There is no general agreement on whether 0 is the highest or lowest priority; it can vary
from system to system.
Process | AT | BT | Priority
P0 | 1 | 4 | 4
P1 | 2 | 2 | 5
P2 | 2 | 3 | 7
P3 | 3 | 5 | 8 (highest)
P4 | 3 | 1 | 5
P5 | 4 | 2 | 6
Average
• Disadvantage
• Here a process with lower priority may starve for the CPU.
• It gives no guarantee on response time or waiting time.
• Disadvantage
• Longer processes may starve.
• Performance depends heavily on the time quantum: if the time quantum is very small, average
response time improves (good), but the number of context switches grows, so CPU utilization
falls; if the time quantum is very large, average response time worsens, but the number of
context switches drops, so CPU utilization improves.
• No notion of priority.
• A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues. Processes are permanently assigned to one queue, generally based on the properties
and requirements of the process.
• In addition, there must be scheduling among the queues, which is commonly implemented as
fixed-priority preemptive scheduling or round robin with different time quantum.
• A race condition is a situation in which the output of a process depends on the execution
sequence of processes, i.e. if we change the order of execution of different processes with
respect to each other, the output may change.
• Progress: If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not executing in their
remainder sections can participate in deciding which will enter its critical section
next (meaning only processes that actually wish to enter take part), and there should
be no deadlock.
• Bounded Waiting: There exists a bound or a limit on the number of times a process is
allowed to enter its critical section and no process should wait indefinitely to enter the
CS.
3. Hardware Solution
1. Test and Set Lock
2. Disable interrupt
• There are 3 different software ideas to achieve a solution, of which some are invalid
while some are valid.
• 3- Peterson’s Solution
P0:
while (1)
{
    while (turn != 0);    // busy wait until turn == 0
    Critical Section
    turn = 1;
    Remainder section
}

P1:
while (1)
{
    while (turn != 1);    // busy wait until turn == 1
    Critical Section
    turn = 0;
    Remainder Section
}
• The solution does not follow the Progress, as it is suffering from the
strict alternation. Because we never asked the process whether it
wants to enter the CS or not?
P0:
while (1)
{
    flag[0] = T;
    while (flag[1]);
    Critical Section
    flag[0] = F;
    Remainder section
}

P1:
while (1)
{
    flag[1] = T;
    while (flag[0]);
    Critical Section
    flag[1] = F;
    Remainder Section
}
P0:
while (1)
{
    flag[0] = T;
    turn = 1;
    while (turn == 1 && flag[1] == T);
    Critical Section
    flag[0] = F;
    Remainder section
}

P1:
while (1)
{
    flag[1] = T;
    turn = 0;
    while (turn == 0 && flag[0] == T);
    Critical Section
    flag[1] = F;
    Remainder Section
}
Wait(S)
{
    while (S <= 0);   // busy wait
    S--;
}

Signal(S)
{
    S++;
}
• Reader-Writer problem
Semaphore S = 1     // mutual exclusion on the buffer
Semaphore E = n     // number of empty slots
Semaphore F = 0     // number of filled slots

Producer():
while (1)
{
    // Produce item
    wait(E)
    wait(S)
    // Add item to buffer
    signal(S)
    signal(F)
}

Consumer():
while (1)
{
    wait(F)
    wait(S)
    // Remove item from buffer
    signal(S)
    signal(E)
    // Consume item
}
Semaphore mutex = 1     // protects readcount
Semaphore wrt = 1       // exclusive access for writers
int readcount = 0

Writer():
wait(wrt)
CS  // Write
signal(wrt)

Reader():
wait(mutex)
readcount++
if (readcount == 1)
    wait(wrt)        // first reader locks out writers
signal(mutex)
CS  // Read
wait(mutex)
readcount--
if (readcount == 0)
    signal(wrt)      // last reader lets writers back in
signal(mutex)
Dining Philosopher Problem
• Consider five philosophers who spend their lives
thinking and eating. The philosophers share a
circular table surrounded by five chairs, each
belonging to one philosopher.
• In the center of the table is a bowl of rice, and
the table is laid with five single chopsticks.
• When a philosopher thinks, she does not interact
with her colleagues.
• An odd-numbered philosopher picks up first her left chopstick and then her
right chopstick, whereas an even-numbered philosopher picks up her right
chopstick first and then her left.
The Sleeping Barber problem
• Barbershop: A barbershop consists of a waiting room
with n chairs and a barber room with one barber chair.
• Customers: Customers arrive at random intervals. If there
is an available chair in the waiting room, they sit and wait.
If all chairs are taken, they leave.
• Barber: The barber sleeps if there are no customers. If a
customer arrives and the barber is asleep, they wake the
barber up.
• Synchronization: The challenge is to coordinate the
interaction between the barber and the customers using
concurrent programming mechanisms.
Barber:
while (true)
{
    wait(customer);          // sleep until a customer arrives
    wait(mutex);
    waiting = waiting - 1;
    signal(barber);
    signal(mutex);
    // Cut hair
}

Customer:
wait(mutex);
if (waiting < n)
{
    waiting = waiting + 1;
    signal(customer);        // wake the barber if asleep
    signal(mutex);
    wait(barber);
    // Get hair cut
}
else
{
    signal(mutex);           // no free chair - leave
}
Hardware-Type Solution: Test and Set
• Software-based solutions such as Peterson’s are not guaranteed to work on modern computer
architectures. In the following discussions, we explore several more solutions to the critical-
section problem using techniques ranging from hardware to software, all these solutions are
based on the premise of locking —that is, protecting critical regions through the use of locks.
[Diagram: two processes P1 and P2, each holding one resource (R1, R2) and requesting the other — a circular wait.]
• No pre-emption
• Circular wait
2. Avoidance: - Try to avoid deadlock at run time by ensuring that the system will never enter a
deadlocked state.
3. Detection: - We can allow the system to enter a deadlocked state, then detect it, and recover.
4. Ignorance: - We can ignore the problem altogether and pretend that deadlocks never occur
in the system.
2. Alternative protocol: A process may request some resources and use them.
Before it can request any additional resources, it must release all the resources
that it is currently allocated.
3. Wait time out: We place a maximum time-out up to which a process can wait, after
which the process must release all the resources it holds and exit.
No pre-emption
• If a process requests some resources:
• We first check whether they are available. If they are, we allocate them.
• Otherwise, we check whether they are allocated to some other process that is itself waiting
for additional resources. If so, we pre-empt the desired resources from the waiting process
and allocate them to the requesting process (considering priority).
• If the resources are neither available nor held by a waiting process, the requesting
process must wait, or may be allowed to pre-empt resources of a running process,
considering priority.
• With this additional knowledge, the operating system can decide for each request
whether process should wait or not.
Current Need
Process | E | F | G
P0 | 3 | 3 | 0
P1 | 1 | 0 | 2
P2 | 0 | 3 | 0
P3 | 3 | 4 | 1
• The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set
consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set consisting of
all resource types in the system.
• If every resource type has only a single instance in the resource-allocation graph, then
the existence of a cycle is a necessary and sufficient condition for deadlock detection.
• Once a dead-lock is detected there are two options for recovery from a deadlock
• Process Termination
• Abort all deadlocked processes
• Abort one process at a time until the deadlock is removed
• Resource pre-emption
• Selecting a victim
• Partial or Complete Rollback
3. Despite this, many operating systems opt for this approach to save on the cost
of implementing deadlock detection.
4. Deadlocks are often rare, so the trade-off may seem justified. Manual restarts
may be required when a deadlock occurs.
• In a number of applications, especially those where the work is repetitive in nature, like a
web server (i.e. for every client we have to run a similar type of code), we have to create a
separate process every time to serve a new request.
• So a better solution is that, instead of creating a new process from scratch every time,
we have a short command with which we can do this: fork.
• Disadvantage
• To create a new process with the fork command we have to make a system call, as fork is a
system function, which is slow and time-consuming.
• It increases the burden on the operating system.
• Different images of the same type of task have the same code part, which means we keep
multiple copies of the same code in main memory, wasting space.
• User threads are supported above the kernel, without kernel support. These are the threads
that application programmers would put into their programs.
• Kernel threads are supported within the kernel of the OS itself. All modern OS support kernel
level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service
multiple kernel system calls simultaneously
• The memory hierarchy system consists of all storage devices employed in a computer system.
• We know that when a process is to be executed it must be loaded into main memory;
this policy has two implications.
• It must be loaded into main memory completely for execution.
• Variable-size partitioning: - In this policy, at the start, we treat the memory as a
whole, a single chunk, and whenever a process requests some space, exactly that
much space is allocated if possible, and the remaining space can be reused again.
• Disadvantage: - suffers from external fragmentation, as the freed holes of varying
sizes may be too small to be reused.
• We can also swap processes in main memory after fixed intervals of time so
that they are packed into one part of memory and the other part becomes
empty (compaction, defragmentation). This solution is very costly in terms
of time, as it takes a lot of time to swap processes while the system is
running.
3. The page-table base register (PTBR) provides the base of the page table, and then the corresponding
entry is accessed using the page number p.
4. Here we find the corresponding frame number (the base address in main memory of the frame in which
the page is stored).
5. Combine the corresponding frame number with the instruction offset to get the physical address, which
is used to access main memory.
3. The number of entries a process has in the page table is the number of pages the process
has in secondary memory.
4. The size of each entry in the page table is the same; each entry holds the corresponding frame number.
• Disadvantage
• Translation process is slow as Main Memory is accessed two times(one for page table and
other for actual access).
• System suffers from internal fragmentation(as paging is an example of fixed size partition).
• The search is fast; the hardware, however, is expensive, TLB Contains the frequently referred
page numbers and corresponding frame number.
• Solution:
• Use multiple TLB’s but it will be costly.
• Some TLBs allow certain entries to be
wired down, meaning that they cannot be
removed from the TLB. Typically, TLB
entries for kernel code are wired down.
• If we increase the page size, then internal fragmentation increases but the size of the page
table decreases.
• If we decrease the page size, then internal fragmentation decreases but the size of the page
table increases.
• So we have to find the page size at which both costs are minimal.
• Disadvantages
• Virtual memory is not easy to implement.
• It may substantially decrease performance if it is used carelessly (Thrashing)
2. We find a free frame; if one is available we can bring in the desired page, but if not we have
to select a page as a victim, swap it out from main memory to secondary memory, and then
swap in the desired page (this situation effectively doubles the page-fault service time).
3.1. The modify bit for a page is set whenever the page has been modified. In this case, we
must write the page to the disk.
3.2. If the modify bit is not set: It means the page has not been modified since it was read
into the main memory. We need not write the memory page to the disk: it is already there.
• Belady’s Anomaly: for some page-replacement algorithms, the page-fault rate may increase
as the number of allocated frames increases.
• If a page is in active use, it will be in the working set. If it is no longer being used, it will drop
from the working set.
• The working set is an approximation of the program's locality. The accuracy of the working set
depends on the selection of Δ . If Δ is too small, it will not encompass the entire locality; if Δ
is too large, it may overlap several localities.
2. Whenever a process needs I/O to or from the disk, it issues a system call to the operating system. The request specifies
several pieces of information: whether this operation is input or output, the disk address, the memory address, and the
number of sectors to be transferred.
3. If the desired disk drive and controller are available, the request can be serviced immediately. If the drive or controller is busy,
any new requests for service will be placed in the queue of pending requests for that drive.
4. When one request is completed, the operating system chooses which pending request to service next. How does the
operating system make this choice? Any one of several disk-scheduling algorithms can be used.
Advantages:
• Easy to understand easy to use
• Every request gets a fair chance
• No starvation (may suffer from convoy effect)
Disadvantages:
• Does not try to optimize seek time, or waiting time.
• Advantages:
• Seek movements decreases
• Throughput increases
• Disadvantages:
• Overhead to calculate the closest request.
• Can cause Starvation for a request which is far from the current location of the header
• High variance of response time and waiting time as SSTF favors only closest requests
• At the other end, the direction of head movement is reversed, and servicing continues. The head
continuously scans back and forth across the disk.
Advantages:
• Simple easy to understand and use
• No starvation but more wait for some random process
• Low variance and Average response time
Disadvantages:
• Long waiting time for requests for locations just visited by disk arm.
• Unnecessary move to the end of the disk, even if there is no request.
Advantages:
• Provides more uniform wait time compared to SCAN
• Better response time compared to scan
Disadvantage:
• More seeks movements in order to reach starting position
Advantage: -
• Better performance compared to SCAN
• Should be used in case to less load
Disadvantage: -
• Overhead to find the last request
• Should not be used in case of more load.
Advantage: -
• Provides more uniform wait time compared to LOOK
• Better response time compared to LOOK
Disadvantage: -
• Overhead to find the last request and go to initial position is more
• Should not be used in case of more load.
[Diagram: magnetic disk structure — platters on a spindle, an arm with a read-write head, cylinder c, and sector s.]
• Rotational Latency: - The time the read/write head waits for the correct sector to rotate
under it. In general it is a random value, so for average analysis we consider the time taken
by the disk to complete half a rotation.
• Transfer Time: - The time taken by the read/write head to actually read or write the data. In
general, we assume that in one complete rotation the head can read/write an entire track, so
• total time will be = (File Size / Track Size) × time taken to complete one revolution.
• Contiguous
• Linked
• Indexed
Each method has advantages and disadvantages. Although some systems support all three, it is
more common for a system to use one method for all files.
• Disadvantage
• Suffer from huge amount of external fragmentation.
• Another problem with contiguous allocation is file modification
• Disadvantage: -
• Only sequential access is possible; to find the ith block of a file, we must start at the beginning
and follow the pointers until we reach the ith block.
• Another disadvantage is the space required for the pointers, so each file requires slightly more
space than it would otherwise.
• Disadvantage
• Indexed allocation does suffer from wasted space. The pointer overhead of the index block
is generally greater than the pointer overhead of linked allocation.
• Direct Access:
• An alternative method for accessing a file, based on the disk model of a file, since a disk allows random access to any
block or record of a file.
• In this method, a file is viewed as a numbered sequence of blocks or records which are read/written in an arbitrary
manner; that is, there is no restriction on the order of reading or writing.
• It is well suited for database management systems.
• Indexed access:
• In this method an additional index is created which contains the key field and a pointer to the various blocks.
• To find an entry in the file for a key value, we first search the index and then use the pointer to directly access the file
and find the desired entry.
Feature | Single-Level Directory | Two-Level Directory
User Isolation | No user-specific directories; all users share the same directory space. | Each user has their own private directory.
Complexity | Simpler to implement but can become cluttered and difficult to manage with many files. | Slightly more complex due to the need for user management, but offers better organization.
Why It's Necessary
• Organization: It helps in sorting and locating files more efficiently.
• User-Friendliness: Directories make it easier for users to categorize their files by
project, file type, or other attributes.
• Access Control: Using directories, different levels of access permission can be
applied, providing an extra layer of security.
Features of Directories
• Metadata: Directories also store metadata about the files and subdirectories
they contain, such as permissions, ownership, and timestamps.
• Dynamic Nature: As files are added or removed, the directory dynamically
updates its list of contents.
• Links and Shortcuts: Some systems support the creation of pointers or links
within directories to other files or directories.
Feature | Sequential File | Indexed File
Access Method | Records accessed one after another in order. | Records can be accessed directly using an index.
Speed of Access | Slower, especially for large files. | Faster for random access, thanks to the index.
Storage Efficiency | Generally more efficient as no space is used for an index. | Less efficient due to storage needed for the index.
• In this matrix, each row represents a subject and each column represents an object. The entry
at the intersection of a row and column defines the type of access that the subject has to the
object.
• Here, 'r' indicates read permission, 'w' indicates write permission, and
'-' indicates no permission.
• File A:
• User 1: r-w
• User 2: r
• File B:
• User 1: r
• User 2: w
• User 3: r
• User 1:
• File A: r-w
• File B: r
• User 2:
• File A: r
• File B: w
• File C: r-w