OS in 6 Hours
OS lecture notes by Knowledge Gate

Syllabus (Semester Exam)

Unit - I Introduction: Operating system and functions, Classification of Operating systems- Batch, Interactive, Time sharing, Real Time System, Multiprocessor Systems, Multiuser Systems, Multiprocess Systems, Multithreaded Systems, Operating System Structure- Layered structure, System Components, Operating System services, Reentrant Kernels, Monolithic and Microkernel Systems.

Unit - II CPU Scheduling: Scheduling Concepts, Performance Criteria, Process States, Process Transition Diagram, Schedulers, Process Control Block (PCB), Process address space, Process identification information, Threads and their management, Scheduling Algorithms, Multiprocessor Scheduling. Deadlock: System model, Deadlock characterization, Prevention, Avoidance and detection, Recovery from deadlock.

Unit - III Concurrent Processes: Process Concept, Principle of Concurrency, Producer/Consumer Problem, Mutual Exclusion, Critical Section Problem, Dekker's solution, Peterson's solution, Semaphores, Test and Set operation; Classical Problems in Concurrency- Dining Philosopher Problem, Sleeping Barber Problem; Inter Process Communication models and Schemes, Process generation.

Unit - IV Memory Management: Basic bare machine, Resident monitor, Multiprogramming with fixed partitions, Multiprogramming with variable partitions, Protection schemes, Paging, Segmentation, Paged segmentation, Virtual memory concepts, Demand paging, Performance of demand paging, Page replacement algorithms, Thrashing, Cache memory organization, Locality of reference.

Unit - V I/O Management and Disk Scheduling: I/O devices, and I/O subsystems, I/O buffering, Disk storage and disk scheduling, RAID. File System: File concept, File organization and access mechanism, File directories, and File sharing, File system implementation issues, File system protection and security.

Chapters of This Video

(Chapter-1: Introduction)- Operating system, Goal & functions, System Components, Operating System services, Classification of Operating systems- Batch, Interactive, Multiprogramming, Multiuser Systems, Time sharing, Multiprocessor Systems, Real Time System.
(Chapter-2: Operating System Structure)- Layered structure, Monolithic and Microkernel Systems, Interface, System Call.
(Chapter-3: Process Basics)- Process Control Block (PCB), Process identification information, Process States, Process Transition Diagram, Schedulers, CPU Bound and I/O Bound, Context Switch.
(Chapter-4: CPU Scheduling)- Scheduling Performance Criteria, Scheduling Algorithms.
(Chapter-5: Process Synchronization)- Race Condition, Critical Section Problem, Mutual Exclusion, Dekker's solution, Peterson's solution, Process Concept, Principle of Concurrency.
(Chapter-6: Semaphores)- Classical Problems in Concurrency- Producer/Consumer Problem, Reader-Writer Problem, Dining Philosopher Problem, Sleeping Barber Problem, Test and Set operation.
(Chapter-7: Deadlock)- System model, Deadlock characterization, Prevention, Avoidance and detection, Recovery from deadlock.
(Chapter-8)- Fork Command, Multithreaded Systems, Threads and their management.
(Chapter-9: Memory Management)- Memory Hierarchy, Locality of reference, Multiprogramming with fixed partitions, Multiprogramming with variable partitions, Protection schemes, Paging, Segmentation, Paged segmentation.
(Chapter-10: Virtual memory)- Demand paging, Performance of demand paging, Page replacement algorithms, Thrashing.
(Chapter-11: Disk Management)- Disk Basics, Disk storage and disk scheduling, Total Transfer time.
(Chapter-12: File System)- File allocation Methods, Free-space Management, File organization and access mechanism, File directories, and File sharing, File system implementation issues, File system protection and security.

What is an Operating System

1. Intermediary - Acts as an intermediary between the user & the hardware.
2. Resource Manager/Allocator - The operating system controls and coordinates the use of system resources among various application programs in an unbiased fashion.
3. Platform - The OS provides the platform on which other application programs can be installed, and provides the environment within which programs are executed.

Goals and Functions of an Operating System

• Goals are the ultimate destination, but we follow functions to implement goals.
1. Primary goal: Convenience (user friendliness).
2. Secondary goals: Efficiency (using resources in an efficient manner), Reliability, Maintainability.
(Slogan on the slide: "Sabka Saath Sabka Vikas" - together with all, development for all.)


Functions of an Operating System

1. Process Management: Involves handling the creation, scheduling, and termination of processes, which are executing programs.
2. Memory Management: Manages allocation and deallocation of physical and virtual memory spaces to various programs.
3. I/O Device Management: Handles I/O operations of peripheral devices like disks, keyboards, etc., including buffering and caching.
4. File Management: Manages files on storage devices, including their information, naming, permissions, and hierarchy.
5. Network Management: Manages network protocols and functions, enabling the OS to establish network connections and transfer data.
6. Security & Protection: Ensures system protection against unauthorized access and other security threats through authentication, authorization, and encryption.

Major Components of an Operating System

1. Kernel
• Central Component: Manages the system's resources and communication between hardware and software.
2. Process Management
• Process Scheduler: Determines the execution of processes.
• Process Control Block (PCB): Contains process details such as process ID, priority, status, etc.
• Concurrency Control: Manages simultaneous execution.
3. Memory Management
• Physical Memory Management: Manages RAM allocation.
• Virtual Memory Management: Simulates additional memory using disk space.
• Memory Allocation: Assigns memory to different processes.
4. File System Management
• File Handling: Manages the creation, deletion, and access of files and directories.
• File Control Block: Stores file attributes and control information.
• Disk Scheduling: Organizes the order of reading or writing to disk.
5. Device Management
• Device Drivers: Interface between the hardware and the operating system.
• I/O Controllers: Manage data transfer to and from peripheral devices.
6. Security and Access Control
• Authentication: Verifies user credentials.
• Authorization: Controls access permissions to files and directories.
• Encryption: Ensures data confidentiality and integrity.
7. User Interface
• Command Line Interface (CLI): Text-based user interaction.
• Graphical User Interface (GUI): Visual, user-friendly interaction with the OS.
8. Networking
• Network Protocols: Rules for communication between devices on a network.
• Network Interface: Manages the connection between the computer and the network.

Batch Operating System

1. Early computers were not interactive devices; the user used to prepare a job which consisted of three parts:
   1. Program
   2. Control information
   3. Input data
2. Only one job was given as input at a time, as there was no memory; the computer would take the input, process it, and then generate the output.
3. Common input/output devices were punch cards or tape drives. These devices were very slow, so the processor remained idle most of the time.


4. To speed up processing, jobs of similar types (e.g. FORTRAN jobs, COBOL jobs, etc.) were batched together and were run through the processor as a group (batch).
5. In some systems grouping is done by the operator, while in others it is performed by the 'Batch Monitor' (resident in the low end of main memory).
6. Jobs (as decks of punched cards) are then bundled into batches with similar requirements.

Spooling (Simultaneous Peripheral Operations On-Line)

1. In a computer system, input-output devices such as printers are very slow relative to the performance of the rest of the system.
2. Spooling is a process in which data is temporarily held in memory or other volatile storage to be used by a device or a program.
3. The most common implementation of spooling can be found in typical input/output devices such as the keyboard, mouse and printer. For example, in printer spooling, the documents/files that are sent to the printer are first stored in memory. Once the printer is ready, it fetches the data and prints it.
4. Ever had your mouse or keyboard freeze briefly? We often click around to test if it's working. When it unfreezes, all those stored clicks execute rapidly due to the device's spool.


Multiprogramming Operating System

• Multiple Jobs: Keeps several jobs in main memory simultaneously, allowing more efficient utilization of the CPU.
• Job Execution: The OS picks and begins to execute one of the jobs in memory.
• Waiting Jobs: Eventually, a job may need to wait for a task, such as an I/O operation, to complete.
• Non-Multiprogrammed: The CPU sits idle while waiting for a job to complete.
• Multiprogrammed: The OS switches to and executes another job if the current job needs to wait, utilizing the CPU effectively.
• Efficient Utilization: Ensures that the CPU is never idle as long as at least one job needs to execute, leading to better utilization of resources.
• Conclusion: the show must go on; the processor will not wait for anyone.

• Advantages:
• High CPU Utilization: Enhances processing efficiency.
• Less Waiting Time: Minimizes idle time.
• Multi-Task Handling: Manages concurrent tasks effectively.
• Shared CPU Time: Increases system efficiency.

• Disadvantages:
• Complex Scheduling: Difficult to program.
• Complex Memory Management: Intricate handling of memory is required.

Multitasking Operating System / Time Sharing / Multiprogramming with Round Robin / Fair Share

1. Time sharing (or multitasking) is a logical extension of multiprogramming; it allows many users to share the computer simultaneously. The CPU executes multiple jobs (which may belong to different users) by switching among them, but the switches occur so frequently that each user gets the impression that the entire computer system is dedicated to his/her use, even though it is being shared among many users.
2. In modern operating systems we are able to play MP3 music, edit documents in Microsoft Word and surf in Google Chrome all at the same time (by context switching, the illusion of parallelism is achieved).
3. For multitasking to take place, firstly there should be multiprogramming, i.e. the presence of multiple programs ready for execution; and secondly, the concept of time sharing.


Multiprocessing Operating System / Tightly Coupled System

1. A multiprocessor operating system refers to the use of two or more central processing units (CPUs) within a single computer system. These multiple CPUs share the system bus, memory and other peripheral devices.
2. Multiple concurrent processes can each run on a separate CPU, so we achieve true parallel execution of processes.
3. It becomes most important in computer systems where the complexity of the job is high and the CPU divides and conquers the jobs. It is generally used in fields like artificial intelligence and expert systems, image processing, weather forecasting, etc.

Symmetric vs Asymmetric Processing

• Task Handling: Symmetric - All processors are treated equally and can run any task. Asymmetric - Each processor is assigned a specific task or role.
• Task Allocation: Symmetric - Any processor can perform any task. Asymmetric - Tasks are divided according to processor roles.
• Complexity: Symmetric - Generally simpler as all processors are treated the same. Asymmetric - More complex due to the dedicated role of each processor.
• Scalability: Symmetric - Easily scalable by adding more processors. Asymmetric - May require reconfiguration as processors are added.
• Performance: Symmetric - Load is evenly distributed, enhancing performance. Asymmetric - Performance may vary based on the specialization of tasks.

Multi-Programming vs Multi-Processing

• Definition: Multi-Programming - Allows multiple programs to share a single CPU. Multi-Processing - Utilizes multiple CPUs to run multiple processes concurrently.
• Concurrency: Multi-Programming - Simulates concurrent execution by rapidly switching between tasks. Multi-Processing - Achieves true parallel execution of processes.
• Resource Utilization: Multi-Programming - Maximizes CPU utilization by keeping it busy with different tasks. Multi-Processing - Enhances performance by allowing tasks to be processed simultaneously.
• Hardware Requirements: Multi-Programming - Requires only one CPU and manages multiple tasks on it. Multi-Processing - Requires multiple CPUs, enabling parallel processing.
• Complexity and Coordination: Multi-Programming - Less complex, primarily managing task switching on one CPU. Multi-Processing - More complex, requiring coordination among multiple CPUs.


Real Time Operating System

1. A real time operating system is a special purpose operating system which has well defined, fixed time constraints. Processing must be done within the defined time limit or the system will fail.
2. It is valued more for how quickly or how predictably it can respond, without buffer delays, than for the amount of work it can perform in a given period of time.
3. Examples: a petroleum refinery, an airline reservation system, an air traffic control system, systems that provide up-to-the-minute information on stock prices, and defence applications such as RADAR.

• Hard real-time operating system - Deadlines are fixed and must always be met; the system must react by the predicted deadline (at time t = 0). An example is the air bag control in cars.

• Soft real-time operating system - A soft real-time operating system also has deadlines, but they may occasionally be missed; the action is taken at a time t = 0 + delta, i.e. the critical time of this operating system is delayed to some extent. Examples are digital cameras, mobile phones, online data systems, etc.

Hard vs Soft Real-Time Operating System

• Deadline Constraints: Hard - Must meet strict deadlines without fail. Soft - Can miss deadlines occasionally without failure.
• Response Time: Hard - Fixed and guaranteed. Soft - Predictable, but not guaranteed.
• Applications: Hard - Used in life-critical systems like medical devices, nuclear reactors. Soft - Used in multimedia, user interfaces, etc.
• Complexity and Cost: Hard - Typically more complex and costlier. Soft - Less complex and usually less expensive.
• Reliability: Hard - Must be highly reliable and fault-tolerant. Soft - High reliability desired, but some failures are tolerable.


Distributed OS

1. A distributed operating system is software that runs over a collection of independent, networked, communicating, loosely coupled and physically separate computational nodes.
2. The nodes communicate with one another through various networks, such as high-speed buses and the Internet. They handle jobs which are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system.
3. There are four major reasons for building distributed systems: resource sharing, computation speedup, reliability, and communication.

Structure of Operating System

• A common approach is to partition the task into small components, or modules, rather than have one monolithic system. Each of these modules should be a well-defined portion of the system, with carefully defined inputs, outputs, and functions.
• Simple Structure - Many operating systems do not have well-defined structures. Frequently, such systems started as small, simple, and limited systems and then grew beyond their original scope. MS-DOS is an example of such a system: it is not divided into modules, and its interfaces, levels and functionality are not well separated.

• Layered Approach - With proper hardware support, operating systems can be broken into pieces. The operating system can then retain much greater control over the computer and over the applications that make use of that computer.
1. Implementers have more freedom in changing the inner workings of the system and in creating modular operating systems.
2. Under a top-down approach, the overall functionality and features are determined and are separated into components.
3. Information hiding is also important, because it leaves programmers free to implement the low-level routines as they see fit.
4. A system can be made modular in many ways. One method is the layered approach, in which the operating system is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.

Micro-Kernel Approach

• In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach.
• This method structures the operating system by removing all nonessential components from the kernel and implementing them as system and user-level programs. The result is a smaller kernel.


• One benefit of the microkernel approach is that it makes extending the operating system easier. All new services are added to user space and consequently do not require modification of the kernel.
• When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel.
• The MINIX 3 microkernel, for example, has only approximately 12,000 lines of code (developer: Andrew S. Tanenbaum).

User and Operating-System Interface

• There are several ways for users to interface with the operating system. Here, we discuss two fundamental approaches:
• Command-line interface, or command interpreter.
• Graphical User Interfaces.

• Command Interpreters - Some operating systems include the command interpreter in the kernel. Others, such as Windows and UNIX, treat the command interpreter as a special program that is running when a job is initiated or when a user first logs on (on interactive systems).
• On systems with multiple command interpreters to choose from, the interpreters are known as shells. For example, on UNIX and Linux systems, a user may choose among several different shells, including the Bourne shell, C shell, Bourne-Again shell, Korn shell, and others.

• Graphical User Interfaces - A second strategy for interfacing with the operating system is through a user-friendly graphical user interface, or GUI. Here, users employ a mouse-based window-and-menu system characterized by a desktop.
• The user moves the mouse to position its pointer on images, or icons, on the screen (the desktop) that represent programs, files, directories, and system functions. Depending on the mouse pointer's location, clicking a button on the mouse can invoke a program, select a file or directory (known as a folder), or pull down a menu that contains commands.

• Because a mouse is impractical for most mobile systems, smartphones and handheld tablet computers typically use a touchscreen interface. Here, users interact by making gestures on the touchscreen, for example, pressing and swiping fingers across the screen.

• The choice of whether to use a command-line or GUI interface is mostly one of personal preference.
• System administrators who manage computers and power users who have deep knowledge of a system frequently use the command-line interface. For them, it is more efficient, giving them faster access to the activities they need to perform.
• Indeed, on some systems, only a subset of system functions is available via the GUI, leaving the less common tasks to those who are command-line knowledgeable.

System Call

• System calls provide the means for a user program to ask the operating system to perform tasks reserved for the operating system on the user program's behalf.
• System calls provide an interface to the services made available by an operating system. These calls are generally available as routines written in C and C++.
• The API specifies a set of functions that are available to an application programmer, including the parameters that are passed to each function and the return values the programmer can expect.
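As a concrete illustration, here is a minimal sketch assuming a POSIX system (the file name is chosen only for the example): the open(), write() and close() library wrappers each trap into the kernel to perform the privileged work on the program's behalf.

#include <fcntl.h>    /* open()           */
#include <unistd.h>   /* write(), close() */

int main(void)
{
    /* open() issues the open system call and returns a file descriptor */
    int fd = open("demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    /* write() transitions to kernel mode to perform the actual I/O */
    write(fd, "hello via system call\n", 22);

    close(fd);   /* close system call releases the descriptor */
    return 0;
}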


• Types of System Calls - System calls can be grouped roughly into six major categories: process control, file manipulation, device manipulation, information maintenance, communications, and protection.

• Process control
1. end, abort
2. load, execute
3. create process, terminate process
4. get process attributes, set process attributes
5. wait for time
6. wait event, signal event
7. allocate and free memory

• File management
1. create file, delete file
2. open, close
3. read, write, reposition
4. get file attributes, set file attributes

• Device management
1. request device, release device
2. read, write, reposition
3. get device attributes, set device attributes
4. logically attach or detach devices

• Information maintenance
1. get time or date, set time or date
2. get system data, set system data
3. get process, file, or device attributes
4. set process, file, or device attributes


• Communications
1. create, delete communication connection
2. send, receive messages
3. transfer status information

Mode

• We need two separate modes of operation: User mode and Kernel mode (also called supervisor mode, system mode, or privileged mode). A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1).
• When the computer system is executing on behalf of a user application, the system is in user mode. However, when a user application requests a service from the operating system (via a system call), the system must transition from user to kernel mode to fulfill the request.
Process

• In general, a process is a program in execution.
• A program is not a process by default. A program is a passive entity, i.e. a file containing a list of instructions stored on disk (secondary memory), often called an executable file.
• A program becomes a process when the executable file is loaded into main memory and its PCB is created.
• A process, on the other hand, is an active entity, which requires resources like main memory, CPU time, registers, system buses etc.
• Even if two processes are associated with the same program, they are considered two separate execution sequences and are totally different processes. For instance, if a user has invoked many copies of a web browser program, each copy will be treated as a separate process; even though the text section is the same, the data, heap and stack sections can vary.

• A process consists of the following sections:
• Text section: Also known as Program Code.
• Stack: Contains temporary data (function parameters, return addresses and local variables).
• Data Section: Contains global variables.
• Heap: Memory dynamically allocated during process runtime.

Program vs Process

• Definition: Program - A set of instructions written to perform a specific task. Process - An instance of a program being executed.
• State: Program - Static; exists as code on disk or in storage. Process - Dynamic; exists in memory and has a state (e.g., running, waiting).
• Resources: Program - Does not require system resources when not running. Process - Requires CPU time, memory, and other resources during execution.
• Independence: Program - Exists independently and is not executing. Process - Can operate concurrently with other processes.
• Interaction: Program - Does not interact with other programs or the system. Process - Can interact with other processes and the operating system through system calls and inter-process communication.


Process Control Block (PCB)

• Each process is represented in the operating system by a process control block (PCB), also called a task control block.
• The PCB simply serves as the repository for any information that may vary from process to process. It contains many pieces of information associated with a specific process, including these:
1. Process state: The state may be new, ready, running, waiting, halted, and so on.
2. Program counter: The counter indicates the address of the next instruction to be executed for this process.
3. CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
4. CPU-scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
5. Memory-management information: This information may include such items as the value of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.
6. Accounting information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
7. I/O status information: This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
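As a rough sketch only, these seven fields might be grouped in C as below; the field names and types are illustrative, not taken from any real kernel (Linux's task_struct, for example, is far larger).

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process identification      */
    enum proc_state state;           /* 1. process state            */
    unsigned long   program_counter; /* 2. next instruction address */
    unsigned long   registers[16];   /* 3. saved CPU registers      */
    int             priority;        /* 4. CPU-scheduling info      */
    unsigned long   base, limit;     /* 5. memory-management info   */
    unsigned long   cpu_time_used;   /* 6. accounting info          */
    int             open_files[16];  /* 7. I/O status info          */
};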

Process States

• A process changes states as it executes. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:
• New: The process is being created.
• Running: Instructions are being executed.
• Waiting (Blocked): The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.


Schedulers

• A process migrates among the various scheduling queues throughout its lifetime. The operating system must select, for scheduling purposes, processes from these queues in some fashion. The selection process is carried out by the appropriate scheduler.

• Types of Schedulers
• Long Term Scheduler (LTS)/Job Scheduler: Long-term schedulers determine which processes enter the ready queue from the job pool. Operating less frequently than short-term schedulers, they focus on long-term system goals such as maximizing throughput.
• Medium-Term Scheduler: The medium-term scheduler swaps processes in and out of memory to optimize CPU usage and manage memory allocation. By doing so, it adjusts the degree of multiprogramming and frees up memory as needed. Swapping allows the system to pause and later resume a process, improving overall system efficiency.
• Short Term Scheduler (STS): The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them.

Long-Term vs Short-Term vs Medium-Term Scheduler

• Function: Long-Term - Controls the admission of new processes into the system. Short-Term - Selects which ready process will execute next. Medium-Term - Adjusts the degree of multiprogramming, moving processes between ready and waiting queues.
• Frequency: Long-Term - Executes infrequently as it deals with the admission of new processes. Short-Term - Executes frequently to rapidly switch between processes. Medium-Term - Executes at an intermediate frequency, balancing long-term and short-term needs.
• Responsibility: Long-Term - Determines which programs are admitted to the system from the job pool. Short-Term - Manages CPU scheduling and switching of processes. Medium-Term - Controls the mix of CPU-bound and I/O-bound processes to optimize throughput.
• Impact on System Performance: Long-Term - Influences overall system performance and degree of multiprogramming. Short-Term - Directly impacts CPU utilization and response time. Medium-Term - Balances system load to prevent resource bottlenecks or idle resources.
• Decision Making: Long-Term - Makes decisions based on long-term goals like system throughput. Short-Term - Makes decisions based on short-term goals like minimizing response time. Medium-Term - Makes decisions considering both short-term and long-term goals, optimizing resource allocation.

• Dispatcher - The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.
• This function involves the following: switching context, switching to user mode, and jumping to the proper location in the user program to restart that program.
• The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

CPU Bound and I/O Bound Processes

• A process execution consists of a cycle of CPU execution or wait and I/O execution or wait. Normally a process alternates between the two states.
• Process execution begins with a CPU burst that may be followed by an I/O burst, then another CPU and I/O burst, and so on; eventually it ends with a final CPU burst. So a process keeps switching between the CPU and I/O during execution.
• I/O Bound Processes: An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations.
• CPU Bound Processes: A CPU-bound process generates I/O requests infrequently, using more of its time doing computations.
• It is important that the long-term scheduler select a good process mix of I/O-bound and CPU-bound processes. If all processes are I/O bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. Similarly, if all processes are CPU bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced.


Context Switch

• Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch.
• When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context-switch time is pure overhead, because the system does no useful work while switching.



CPU Scheduling

1. CPU scheduling is the process of determining which process in the ready queue is allocated to the CPU.
2. Various scheduling algorithms can be used to make this decision, such as First-Come-First-Served (FCFS), Shortest Job Next (SJN), Priority and Round Robin (RR).
3. Different algorithms support different classes of processes and favour different scheduling criteria.

Types of Scheduling

• Non-Pre-emptive: Under non-pre-emptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU willingly.
• A process will leave the CPU only:
1. When the process completes its execution (termination state)
2. When the process wants to perform some I/O operation (blocked state)

Pre-emptive

• Under pre-emptive scheduling, once the CPU has been allocated to a process, the process may leave the CPU willingly or it can be forced out. So it will leave the CPU:
1. When the process completes its execution
2. When the process leaves the CPU voluntarily to perform some I/O operation
3. If a new, higher-priority process enters the ready state
4. When the process switches from running to ready state because its time quantum expired

Non-Pre-emptive vs Pre-emptive Scheduling

• CPU Allocation: Non-Pre-emptive - Once a process starts, it runs to completion or waits for some event. Pre-emptive - A process can be interrupted and moved to the ready queue.
• Response Time: Non-Pre-emptive - Can be longer, especially for short tasks. Pre-emptive - Generally shorter, as higher-priority tasks can pre-empt others.
• Complexity: Non-Pre-emptive - Simpler to implement. Pre-emptive - More complex, requiring careful handling of shared resources.
• Resource Utilization: Non-Pre-emptive - May lead to inefficient CPU utilization. Pre-emptive - Typically more efficient, as it can quickly switch tasks.
• Suitable Applications: Non-Pre-emptive - Batch systems and applications that require predictable timing. Pre-emptive - Interactive and real-time systems requiring responsive behavior.


• Scheduling Criteria - Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm may favour one class of processes over another. So, in order to select a scheduling algorithm efficiently, the following criteria should be taken into consideration:
• CPU utilization: Keeping the CPU as busy as possible.
• Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput.
• Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue.
• Response Time: The time it takes to start responding, not the time it takes to output the response.
• Note: The CPU-scheduling algorithm does not affect the amount of time during which a process executes or performs I/O; it affects only the amount of time that a process spends waiting in the ready queue.
• It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time.

Terminology

• Arrival Time (AT): Time at which a process enters the ready state.
• Burst Time (BT): Amount of CPU time required by the process to finish its execution.
• Completion Time (CT): Time at which a process finishes its execution.
• Turn Around Time (TAT): Completion Time (CT) - Arrival Time (AT) = Waiting Time (WT) + Burst Time (BT)
• Waiting Time (WT): Turn Around Time (TAT) - Burst Time (BT)

FCFS (First Come First Serve)

• FCFS is the simplest scheduling algorithm; as the name suggests, the process that requests the CPU first is allocated the CPU first.
• Implementation is managed with a FIFO queue.
• It is always non-pre-emptive in nature.


Example (FCFS executes in arrival order: P2, P1, P0, P4, P3):

P. No | AT | BT | CT | TAT = CT - AT | WT = TAT - BT
P0    |  2 |  4 |  9 |  7            |  3
P1    |  1 |  2 |  5 |  4            |  2
P2    |  0 |  3 |  3 |  3            |  0
P3    |  4 |  2 | 12 |  8            |  6
P4    |  3 |  1 | 10 |  7            |  6
Average TAT = 29/5 = 5.8, Average WT = 17/5 = 3.4

Advantages

• Easy to understand, and can easily be implemented using a queue data structure.
• Can be used for background processes where execution is not urgent.
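The FCFS table above can be reproduced mechanically; a small sketch (process data hard-coded from the table, listed in arrival order) that accumulates completion times:

#include <stdio.h>

int main(void)
{
    int at[] = {0, 1, 2, 3, 4};   /* P2, P1, P0, P4, P3 in arrival order */
    int bt[] = {3, 2, 4, 1, 2};
    const char *name[] = {"P2", "P1", "P0", "P4", "P3"};
    int time = 0;
    double tat_sum = 0, wt_sum = 0;

    for (int i = 0; i < 5; i++) {
        if (time < at[i]) time = at[i]; /* CPU idles until arrival */
        time += bt[i];                  /* completion time         */
        int tat = time - at[i];         /* TAT = CT - AT           */
        int wt  = tat - bt[i];          /* WT  = TAT - BT          */
        printf("%s: CT=%d TAT=%d WT=%d\n", name[i], time, tat, wt);
        tat_sum += tat; wt_sum += wt;
    }
    printf("Average TAT=%.1f, Average WT=%.1f\n", tat_sum / 5, wt_sum / 5);
    return 0;
}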

Convoy Effect

• If smaller processes have to wait longer for the CPU because of a larger process, this effect is called the convoy effect; it results in a higher average waiting time.
• Solution: smaller processes should be executed before longer processes to achieve a lower average waiting time.

Example 1 (the large job arrives first):
P. No | AT | BT  | CT  | TAT = CT - AT | WT = TAT - BT
P0    |  0 | 100 | 100 | 100           |  0
P1    |  1 |   2 | 102 | 101           | 99
Average TAT = 100.5, Average WT = 49.5

Example 2 (the small job arrives first):
P. No | AT | BT  | CT  | TAT = CT - AT | WT = TAT - BT
P0    |  1 | 100 | 102 | 101           |  1
P1    |  0 |   2 |   2 |   2           |  0
Average TAT = 51.5, Average WT = 0.5


Disadvantages

• FCFS suffers from the convoy effect, which means smaller processes have to wait for a larger process, resulting in a large average waiting time.
• The FCFS algorithm is thus particularly troublesome for time-sharing systems (due to its non-pre-emptive nature), where it is important that each user get a share of the CPU at regular intervals.
• Higher average waiting time and TAT compared to other algorithms.

Shortest Job First (SJF) (non-pre-emptive) / Shortest Remaining Time First (SRTF) (pre-emptive) (Shortest Next CPU Burst)

• Whenever we make a decision about selecting the next process for CPU execution, out of all available processes the CPU is assigned to the process having the smallest burst time requirement. If there is a tie, FCFS is used to break the tie.
• It supports both versions, non-pre-emptive and pre-emptive (a purely greedy approach).
• In Shortest Job First (SJF) (non-pre-emptive), once a decision is made and, among the available processes, the process with the smallest CPU burst is scheduled on the CPU, it cannot be pre-empted, even if a new process with a CPU burst requirement smaller than the remaining CPU burst of the running process enters the system.
• In Shortest Remaining Time First (SRTF) (pre-emptive), whenever a process enters the ready state we again make a scheduling decision: whether this new process has a CPU burst requirement smaller than the remaining CPU burst of the running process. If it does, the running process is pre-empted and the new process is scheduled on the CPU.
• This version (SRTF) is also called optimal, as it guarantees minimal average waiting time.


Example (CT/TAT/WT are to be worked out for the chosen variant, SJF or SRTF):

P. No | AT | BT | CT | TAT = CT - AT | WT = TAT - BT
P0    |  1 |  7 |    |               |
P1    |  2 |  5 |    |               |
P2    |  3 |  1 |    |               |
P3    |  4 |  2 |    |               |
P4    |  5 |  8 |    |               |

• Advantages
• The pre-emptive version guarantees minimal average waiting time, so it is sometimes also referred to as an optimal algorithm. It provides a standard for other algorithms in terms of average waiting time.
• Provides better average response time compared to FCFS.

• Disadvantages
• Here, a process with a longer CPU burst requirement may go into starvation and can have poor response time.
• This algorithm cannot be implemented directly, as there is no way to know the length of the next CPU burst. As exact SJF is not implementable, we can use a technique where we try to predict the CPU burst of the next coming process.
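One common prediction technique is exponential averaging (the alpha value below is just an example): the next burst is estimated as a weighted average of the last measured burst and the previous estimate, tau_next = alpha * t_last + (1 - alpha) * tau_last. A minimal sketch:

/* Exponential averaging of CPU bursts; alpha = 0.5 is a typical choice. */
double predict_next_burst(double last_burst, double last_estimate)
{
    const double alpha = 0.5;
    return alpha * last_burst + (1.0 - alpha) * last_estimate;
}

With alpha = 0.5 and an initial estimate of 10, a measured burst of 6 gives a next estimate of 0.5*6 + 0.5*10 = 8.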

Priority Scheduling

• Here a priority is associated with each process. At any instant of time, out of all available processes, the CPU is allocated to the process which possesses the highest priority (which may be represented by a higher or a lower number).
• A tie is broken using FCFS order. No importance is given to seniority or burst time. It supports both non-pre-emptive and pre-emptive versions.
• In Priority (non-pre-emptive), once a decision is made and, among the available processes, the process with the highest priority is scheduled on the CPU, it cannot be pre-empted, even if a new process with a priority higher than that of the running process enters the system.
• In Priority (pre-emptive), once a decision is made and, among the available processes, the process with the highest priority is scheduled on the CPU; if a new process with a priority higher than that of the running process enters the system, then we do a context switch and the processor is given to the new higher-priority process.
• There is no general agreement on whether 0 is the highest or lowest priority; it can vary from system to system.

Example ((H) marks the highest priority; CT/TAT/WT are to be worked out):

P. No | AT | BT | Priority | CT | TAT = CT - AT | WT = TAT - BT
P0    |  1 |  4 | 4        |    |               |
P1    |  2 |  2 | 5        |    |               |
P2    |  2 |  3 | 7        |    |               |
P3    |  3 |  5 | 8 (H)    |    |               |
P4    |  3 |  1 | 5        |    |               |
P5    |  4 |  2 | 6        |    |               |

• Advantages
• Gives a facility especially to system processes.
• Allows us to run an important process even if it is a user process.

• Disadvantages
• Here a process with a smaller priority may starve for the CPU.
• No idea of response time or waiting time.

• Note: Priority scheduling is specially used to support system processes or important user processes.
• Ageing: a technique of gradually increasing the priority of processes that wait in the system for a long time, e.g. priority increases after every 10 minutes (see the sketch below).
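A minimal sketch of ageing; the increment rule and the tick interval are arbitrary assumptions for illustration, not a standard policy.

#define MAX_PRIORITY 10

struct proc { int priority; int waiting_ticks; };

/* Called once per scheduling tick; higher number = higher priority here. */
void age_processes(struct proc p[], int n)
{
    for (int i = 0; i < n; i++) {
        p[i].waiting_ticks++;
        if (p[i].waiting_ticks % 10 == 0 && p[i].priority < MAX_PRIORITY)
            p[i].priority++;   /* gradually raise long-waiting processes */
    }
}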

Round Robin

• This algorithm is designed for time-sharing systems, where the idea is not to complete one process and then start another, but to be responsive and divide the CPU time among the processes in the ready state (circularly).
• The CPU scheduler goes around the ready queue, allocating the CPU to each process for a maximum of one time quantum, say q, up to which a process can hold the CPU in one go. Within this quantum, either the process terminates (if its remaining CPU burst is less than the quantum) or a context switch is executed: the process must release the CPU, enter the ready queue, and wait for its next chance.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units. Each process must wait no longer than (n - 1) x q time units until its next time quantum.

Example (CT/TAT/WT depend on the chosen time quantum):

P. No | AT | BT | CT | TAT = CT - AT | WT = TAT - BT
P0    |  0 |  4 |    |               |
P1    |  1 |  5 |    |               |
P2    |  2 |  2 |    |               |
P3    |  3 |  1 |    |               |
P4    |  4 |  6 |    |               |
P5    |  6 |  3 |    |               |
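A sketch of the mechanism only: the burst values and quantum q = 2 are arbitrary (the slide leaves the quantum unspecified), and all processes are assumed to have arrived at time 0 to keep the queue handling short.

#include <stdio.h>

int main(void)
{
    int rem[] = {4, 5, 2};          /* remaining burst per process */
    int n = 3, q = 2, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {   /* walk the ready queue circularly */
            if (rem[i] == 0) continue;
            int slice = rem[i] < q ? rem[i] : q;
            time += slice;              /* run for at most one quantum */
            rem[i] -= slice;
            if (rem[i] == 0) {
                printf("P%d completes at time %d\n", i, time);
                done++;
            }
        }
    }
    return 0;
}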


• Advantages
• Performs best in terms of average response time.
• Works well in case of time-sharing systems, client-server architectures and interactive systems.
• A kind of SJF implementation (short jobs finish within a few quanta).

• Disadvantages
• Longer processes may starve (they need many rounds to complete).
• Performance depends heavily on the time quantum: if the value of the time quantum is very small, average response time improves, but the total number of context switches grows, so CPU utilization falls; if the time quantum is very large, average response time worsens, but the number of context switches is smaller, so CPU utilization is better.
• No notion of priority.

Multi-Level Queue Scheduling

• After studying all the important approaches to CPU scheduling, we must understand that any one of them alone is not good for every process in the system, as different processes have different scheduling needs. So we must have a kind of hybrid scheduling idea, supporting all classes of processes.
• Here processes are easily classified into different groups:
• System processes
• Foreground (interactive) processes
• Background (batch) processes
• A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on properties and requirements of the process.

• Each queue has its own scheduling algorithm. For example:
• System processes might need a priority algorithm.
• Interactive processes might be scheduled by an RR algorithm.
• Batch processes might be scheduled by an FCFS algorithm.
• In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling or round robin with different time quanta.

Multi-Level Feedback Queue Scheduling

• The problem with multi-level queue scheduling is how to decide the number of ready queues and the scheduling algorithm inside each queue and between the queues; moreover, once a process enters a specific queue, we cannot change its queue after that.
• The multilevel feedback queue scheduling algorithm allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it will be moved to a lower-priority queue. In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
• For example, a process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and put into queue 2. Processes in queue 2 are run on an FCFS basis, but only when queues 0 and 1 are empty.
In general, a multilevel feedback queue scheduler is defined by the following parameters:
• The number of queues.
• The scheduling algorithm for each queue.
• The method used to determine when to upgrade a process to a higher-priority queue.
• The method used to determine when to demote a process to a lower-priority queue.
• The definition of a multilevel feedback queue scheduler makes it the most general CPU-scheduling algorithm. It can be configured to match a specific system under design. Unfortunately, it is also the most complex algorithm, since defining the best scheduler requires some means by which to select values for all the parameters.

Process Synchronization & Race Condition

• As we understand, in a multiprogramming environment a good number of processes compete for a limited number of resources. Concurrent access to shared data may at some time result in data inconsistency, e.g.:

P ()
{
    read(i);
    i = i + 1;
    write(i);
}

• A race condition is a situation in which the output of a process depends on the execution sequence of processes, i.e. if we change the order of execution of different processes with respect to each other, the output may change.

General Structure of a process P()

• Initial Section: Where the process is accessing private resources.
• Entry Section: The part of code where each process requests permission to enter its critical section.
• Critical Section: Where the process accesses shared resources.
• Exit Section: The section where a process exits from its critical section.
• Remainder Section: Remaining code.

P()
{
    While(T)
    {
        Initial Section
        Entry Section
        Critical Section
        Exit Section
        Remainder Section
    }
}
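The i = i + 1 example above can be reproduced with threads. A sketch assuming POSIX threads (compile without optimization, e.g. cc -pthread race.c): two threads increment a shared counter with no synchronization, so the final value is often less than 200000.

#include <pthread.h>
#include <stdio.h>

long counter = 0;   /* shared data, deliberately unprotected */

void *worker(void *arg)
{
    (void)arg;
    for (int k = 0; k < 100000; k++)
        counter = counter + 1;   /* read-modify-write: the race window */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 200000, but lost updates make the result vary run to run. */
    printf("counter = %ld\n", counter);
    return 0;
}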


Criteria to Solve the Critical Section Problem

• Mutual Exclusion: No two processes should be present inside the critical section at the same time, i.e. only one process is allowed in the critical section at an instant of time.
• Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next (meaning only processes that actually wish to enter take part). There should be no deadlock.
• Bounded Waiting: There exists a bound, or limit, on the number of times a process is allowed to enter its critical section, and no process should wait indefinitely to enter the CS.

Some Points to Remember

• Mutual Exclusion and Progress are mandatory requirements that need to be followed in order to write a valid solution for the critical section problem.
• Bounded waiting is an optional criterion; if not satisfied, it may lead to starvation.

Solutions to the Critical Section Problem

We generally have the following solutions to a critical section problem:
1. Two Process Solutions
   1. Using a Boolean variable turn
   2. Using a Boolean array flag
   3. Peterson's Solution
2. Operating System Solutions
   1. Counting Semaphore
   2. Binary Semaphore
3. Hardware Solutions
   1. Test and Set Lock
   2. Disable interrupt

Two Process Solution

• In general it is difficult to write a valid solution on the first attempt to solve the critical section problem among multiple processes, so it is better to first attempt a two-process solution and then generalize it to an N-process solution.
• There are three different ideas, of which some are invalid while some are valid:
• 1- Using a Boolean variable turn
• 2- Using a Boolean array flag
• 3- Peterson's Solution


• Here we will use a Boolean variable turn, which is initialized randomly (0/1).

P0:
while (1)
{
    while (turn != 0);
    Critical Section
    turn = 1;
    Remainder Section
}

P1:
while (1)
{
    while (turn != 1);
    Critical Section
    turn = 0;
    Remainder Section
}

• The solution follows Mutual Exclusion, as the two processes cannot enter the CS at the same time.
• The solution does not follow Progress, as it suffers from strict alternation: we never asked the process whether it wants to enter the CS or not.

• Here we will use a Boolean array flag with two cells, where each cell is initialized to F.

P0:
while (1)
{
    flag[0] = T;
    while (flag[1]);
    Critical Section
    flag[0] = F;
    Remainder Section
}

P1:
while (1)
{
    flag[1] = T;
    while (flag[0]);
    Critical Section
    flag[1] = F;
    Remainder Section
}

• This solution follows the Mutual Exclusion criterion.
• But in order to achieve Progress, the system may end up in a deadlock state (when both processes set their flags at the same time).


Dekker's Algorithm

Pi (the code for Pj is symmetric, with i and j interchanged):
do
{
    flag[i] = true;
    while (flag[j])
    {
        if (turn == j)
        {
            flag[i] = false;
            while (turn == j);
            flag[i] = true;
        }
    }
    /* critical section */
    turn = j;
    flag[i] = false;
    /* remainder section */
} while (true);

Peterson's Solution

• Peterson's solution is a classic software-based solution to the critical-section problem for two processes. We will be using both: the variable turn and the Boolean array flag.

P0:
while (1)
{
    flag[0] = T;
    turn = 1;
    while (turn == 1 && flag[1] == T);
    Critical Section
    flag[0] = F;
    Remainder Section
}

P1:
while (1)
{
    flag[1] = T;
    turn = 0;
    while (turn == 0 && flag[0] == T);
    Critical Section
    flag[1] = F;
    Remainder Section
}

• This solution ensures Mutual Exclusion, Progress and Bounded Waiting.
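Peterson's algorithm behaves correctly in real C only if the loads and stores are not reordered (a point the hardware-solution section below returns to). A sketch using C11 atomics, which are sequentially consistent by default and so preserve the ordering the algorithm relies on:

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];   /* flag[i]: process i wants to enter */
atomic_int  turn;      /* whose turn it is to yield         */

void enter_cs(int i)   /* i is 0 or 1 */
{
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);             /* politely let the other go first */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                               /* busy wait */
}

void exit_cs(int i)
{
    atomic_store(&flag[i], false);
}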

Operating System Solution (Semaphores)

1. Semaphores are synchronization tools using which we will attempt an n-process solution.
2. A semaphore S is a simple integer variable that, apart from initialization, can be accessed only through two standard atomic operations: wait(S) and signal(S).
3. The wait(S) operation was originally termed P(S) and signal(S) was originally called V(S).

Wait(S)
{
    while (S <= 0);
    S--;
}

Signal(S)
{
    S++;
}

• Peterson's solution was confined to just two processes; since a general system can have n processes, semaphores provide an n-process solution.
• While solving the critical section problem, we initialize the semaphore S = 1.
• Semaphores ensure Mutual Exclusion and Progress but do not ensure bounded waiting.

Pi()
{
    While(T)
    {
        Initial Section
        wait(S)
        Critical Section
        signal(S)
        Remainder Section
    }
}
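The wait/signal pair maps directly onto POSIX counting semaphores (unnamed semaphores assumed here; these block instead of busy-waiting). A sketch protecting a critical section with S initialized to 1:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t S;                    /* plays the role of semaphore S = 1  */
int shared = 0;

void *proc(void *arg)
{
    (void)arg;
    sem_wait(&S);           /* wait(S): decrement or block        */
    shared++;               /* critical section                   */
    sem_post(&S);           /* signal(S): increment, wake a waiter */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&S, 0, 1);     /* initial value 1: binary use        */
    pthread_create(&t1, NULL, proc, NULL);
    pthread_create(&t2, NULL, proc, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&S);
    return 0;
}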


Classical Problems on Synchronization

• There are a number of actual industrial problems we try to solve in order to improve our understanding of semaphores and their power of solving problems.
• In this section we will discuss a number of problems:
• Producer-Consumer Problem / Bounded Buffer Problem
• Reader-Writer Problem
• Dining Philosopher Problem
• The Sleeping Barber Problem

Producer-Consumer Problem

• Problem Definition - There are two processes, Producer and Consumer. The producer produces information and puts it into a buffer which has n cells; it is consumed by the consumer. Both producer and consumer can produce and consume only one article at a time.
• A producer needs to check whether the buffer is full (overflow) before adding an item.
• Similarly, a consumer needs to check for underflow before accessing the buffer, and then consume an item.
• Also, the producer and consumer must be synchronized, so that once the producer or the consumer is accessing the buffer, the other must wait.

Solution Using Semaphores

• To solve the problem we will be using three semaphores:
• Semaphore S = 1 // controls access to the buffer (CS)
• Semaphore E = n // counts empty cells (guards against overflow)
• Semaphore F = 0 // counts filled cells (guards against underflow)

Producer()
{
    while(T)
    {
        // Produce an item
        wait(E)      // block if no empty cell (overflow check)
        wait(S)
        // Add item to buffer
        signal(S)
        signal(F)    // one more filled cell
    }
}

Consumer()
{
    while(T)
    {
        wait(F)      // block if no filled cell (underflow check)
        wait(S)
        // Pick item from buffer
        signal(S)
        signal(E)    // one more empty cell
        // Consume item
    }
}
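A runnable sketch of the same structure, assuming POSIX semaphores and threads; the 5-cell ring buffer and the item count of 10 are arbitrary choices for the example.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5
int buffer[N], in = 0, out = 0;
sem_t S, E, F;                 /* S = 1, E = N empty, F = 0 filled */

void *producer(void *arg)
{
    (void)arg;
    for (int item = 1; item <= 10; item++) {
        sem_wait(&E);          /* block if no empty cell (overflow)   */
        sem_wait(&S);
        buffer[in] = item;     /* add item to buffer                  */
        in = (in + 1) % N;
        sem_post(&S);
        sem_post(&F);          /* one more filled cell                */
    }
    return NULL;
}

void *consumer(void *arg)
{
    (void)arg;
    for (int k = 0; k < 10; k++) {
        sem_wait(&F);          /* block if nothing to pick (underflow) */
        sem_wait(&S);
        int item = buffer[out];/* pick item from buffer                */
        out = (out + 1) % N;
        sem_post(&S);
        sem_post(&E);          /* one more empty cell                  */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&S, 0, 1); sem_init(&E, 0, N); sem_init(&F, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL); pthread_join(c, NULL);
    return 0;
}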


Reader-Writer Problem

• Suppose that a database is to be shared among several concurrent processes. Some of these processes may want only to read the database (readers), whereas others may want to update (that is, to read and write) the database (writers).
• If two readers access the shared data simultaneously, no adverse effects will result. But if a writer and some other process (either a reader or a writer) access the database simultaneously, chaos may ensue.
• To ensure that these difficulties do not arise, we require that the writers have exclusive access to the shared database while writing to the database.

• Points that need to be taken care of when generating a solution:
• The solution may allow more than one reader at a time, but should not allow any writer alongside them.
• The solution should strictly not allow any reader or writer while a writer is performing a write operation.

• Solution using Semaphores
• The processes share the following data structures:
• semaphore mutex = 1, wrt = 1; // two semaphores
• int readcount = 0; // variable
• Semaphore wrt is used for synchronization between writer-writer, writer-reader and reader-writer pairs.
• Semaphore mutex is used to synchronize readers with each other (it protects readcount).
• readcount is a simple int variable which keeps count of the number of readers currently reading.

Writer()
{
    wait(wrt)
    CS // Write
    signal(wrt)
}

Reader()
{
    wait(mutex)
    readcount++
    if (readcount == 1)
        wait(wrt)      // the first reader locks writers out
    signal(mutex)
    CS // Read
    wait(mutex)
    readcount--
    if (readcount == 0)
        signal(wrt)    // the last reader lets writers in
    signal(mutex)
}
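The same logic in compilable form, assuming POSIX semaphores; the read/write bodies are placeholders and both semaphores are assumed to be initialized to 1 elsewhere.

#include <semaphore.h>

sem_t mutex, wrt;        /* both initialized to 1 elsewhere */
int readcount = 0;

void writer(void)
{
    sem_wait(&wrt);
    /* ... write to the shared database ... */
    sem_post(&wrt);
}

void reader(void)
{
    sem_wait(&mutex);
    if (++readcount == 1)
        sem_wait(&wrt);  /* first reader locks writers out */
    sem_post(&mutex);

    /* ... read the shared database ... */

    sem_wait(&mutex);
    if (--readcount == 0)
        sem_post(&wrt);  /* last reader lets writers in    */
    sem_post(&mutex);
}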
Dining Philosopher Problem

• Consider five philosophers who spend their lives thinking and eating. The philosophers share a circular table surrounded by five chairs, each belonging to one philosopher.
• In the center of the table is a bowl of rice, and the table is laid with five single chopsticks.
• When a philosopher thinks, she does not interact with her colleagues.
• From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the chopsticks that are between her and her left and right neighbors).
• A philosopher may pick up only one chopstick at a time. Obviously, she can't pick up a chopstick that is already in the hand of a neighbor. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing her chopsticks. When she is finished eating, she puts down both of her chopsticks and starts thinking again.


Solution for Dining Philosophers
• Here we have used an array of semaphores called chopstick[], each initialized to 1.

void Philosopher(void)
{
    while(T)
    {
        Thinking();
        wait(chopstick[i]);            // pick up left chopstick
        wait(chopstick[(i + 1) % 5]);  // pick up right chopstick
        Eat();
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
    }
}

• This solution is not valid because there is a possibility of deadlock: if every philosopher picks up her left chopstick at the same time, each waits forever for her right one.

• The proposed solutions for the deadlock problem are:
  • Allow at most four philosophers to be sitting simultaneously at the table.
  • Allow six chopsticks to be used simultaneously at the table.
  • Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she must pick them up in a critical section).
  • One philosopher picks up her right chopstick first and then her left chopstick, i.e. reverse the sequence for any one philosopher.
  • An odd philosopher picks up first her left chopstick and then her right chopstick, whereas an even philosopher picks up her right chopstick and then her left chopstick.

The Sleeping Barber Problem
• Barbershop: A barbershop consists of a waiting room with n chairs and a barber room with one barber chair.
• Customers: Customers arrive at random intervals. If there is an available chair in the waiting room, they sit and wait. If all chairs are taken, they leave.
• Barber: The barber sleeps if there are no customers. If a customer arrives and the barber is asleep, they wake the barber up.
• Synchronization: The challenge is to coordinate the interaction between the barber and the customers using concurrent programming mechanisms.
semaphore barber = 0;    // indicates if the barber is available
semaphore customer = 0;  // counts the waiting customers
semaphore mutex = 1;     // mutex for the critical section
int waiting = 0;         // number of waiting customers

Barber()
{
    while(true)
    {
        wait(customer);          // sleep until a customer arrives
        wait(mutex);
        waiting = waiting - 1;
        signal(barber);
        signal(mutex);
        // Cut hair
    }
}

Customer()
{
    wait(mutex);
    if(waiting < n)
    {
        waiting = waiting + 1;
        signal(customer);        // wake the barber if he is asleep
        signal(mutex);
        wait(barber);
        // Get hair cut
    }
    else
    {
        signal(mutex);           // no free chair, so the customer leaves
    }
}

Hardware Type Solution: Test and Set
• Software-based solutions such as Peterson's are not guaranteed to work on modern computer architectures. In the following discussions, we explore several more solutions to the critical-section problem using techniques ranging from hardware to software; all these solutions are based on the premise of locking, that is, protecting critical regions through the use of locks.
• The critical-section problem could be solved simply in a single-processor environment if we could prevent interrupts from occurring while a shared variable was being modified.

boolean test_and_set(boolean *target)
{
    boolean rv = *target;    // read the old value
    *target = true;          // and set the lock, atomically
    return rv;
}

while(1)
{
    while(test_and_set(&lock));  // busy-wait until the lock is acquired
    /* critical section */
    lock = false;
    /* remainder section */
}

• Many modern computer systems therefore provide special hardware instructions that allow us to test and modify the content of a word atomically, that is, as one uninterruptible unit. We can use these special instructions to solve the critical-section problem in a relatively simple manner.
• The important characteristic of this instruction is that it is executed atomically. Thus, if two test_and_set() instructions are executed simultaneously (each on a different CPU), they will be executed sequentially in some arbitrary order.


Basics of Dead-Lock
• In a multiprogramming environment, several processes may compete for a finite number of
resources.
• A process requests resources; if the resources are not available at that time, the process
enters a waiting state. Sometimes, a waiting process is never again able to change state,
because the resources it has requested are held by other waiting processes. This situation is
called a deadlock.
• A set of processes is in a deadlocked state when every process in the set is waiting for an
event that can be caused only by another process in the set.

(Figure: processes P1 and P2 with resources R1 and R2; each process holds one resource and requests the other, so both wait forever.)



Necessary conditions for deadlock
A deadlock can occur if all of these 4 conditions hold in the system simultaneously:
• Mutual exclusion
• Hold and wait
• No pre-emption
• Circular wait

• Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released. And the resource must be desired by more than one process.
• Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes. E.g. a plate and a spoon.
• No pre-emption: Resources cannot be pre-empted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.


• Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

Deadlock Handling Methods
1. Prevention: Design protocols such that there is no possibility of deadlock.
2. Avoidance: Try to avoid deadlock at run time, ensuring that the system never enters a deadlocked state.
3. Detection: We can allow the system to enter a deadlocked state, then detect it, and recover.
4. Ignorance: We can ignore the problem altogether and pretend that deadlocks never occur in the system.

Prevention
• It means designing systems where there is no possibility of the existence of deadlock. For that we have to remove one of the four necessary conditions of deadlock.
• Mutual exclusion: In the prevention approach there is no solution for mutual exclusion, as a resource cannot be made sharable (it is a hardware property), and a process also cannot be convinced to do some other task.
• In general, however, we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable.


Hold & Wait
1. Conservative approach: A process is allowed to run if and only if it has acquired all the resources it needs.
2. Alternative protocol: A process may request some resources and use them. Before it can request any additional resources, it must release all the resources that it is currently allocated.
3. Wait time-out: We place a maximum time-out up to which a process can wait, after which the process must release all the resources it holds and exit.

No Pre-emption
• If a process requests some resources:
  • We first check whether they are available. If they are, we allocate them.
  • If they are not, we check whether they are allocated to some other process that is itself waiting for additional resources. If so, we pre-empt the desired resources from the waiting process and allocate them to the requesting process (considering priority).
  • If the resources are neither available nor held by a waiting process, the requesting process must wait, or may be allowed to pre-empt a resource of a running process, again considering priority.

Circular Wait
• We can eliminate the circular wait problem by giving a natural-number mapping to every resource; then any process can request resources only in increasing order, and if a process wants a lower-numbered resource, it must first release all resources with numbers larger than that and then make a fresh request.

Problem with Prevention
• Different deadlock prevention approaches put different kinds of restrictions or conditions on the processes and resources, because of which the system becomes slow, resource utilization falls, and system throughput is reduced.


Avoidance
• In order to avoid deadlock at run time, the system tries to maintain some books like a banker: whenever someone asks for a loan (a resource), it is granted only when the books allow.

• To avoid deadlocks we require additional information about how resources are to be requested, i.e. which resources a process will request during its lifetime.
• With this additional knowledge, the operating system can decide for each request whether the process should wait or not.

Example (resource types E, F, G; System Max = 8 4 6):

            Max Need     Allocation   Current Need
            E  F  G      E  F  G      E  F  G
    P0      4  3  1      1  0  1      3  3  0
    P1      2  1  4      1  1  2      1  0  2
    P2      1  3  3      1  0  3      0  3  0
    P3      5  4  1      2  0  0      3  4  1

Here Current Need = Max Need − Allocation, and Available = System Max − total Allocation = (8, 4, 6) − (5, 1, 6) = (3, 3, 0).


• Safe sequence: a sequence in which we can satisfy the demand of every process without going into deadlock.
• Safe state: there exists at least one possible safe sequence.
• Unsafe state: there exists no possible safe sequence.

Banker's Algorithm
Several data structures must be maintained to implement the banker's algorithm. These data structures encode the state of the resource-allocation system. We need the following, where n is the number of processes in the system and m is the number of resource types:

• Available: A vector of length m indicates the number of available resources of each type. If Available[j] equals k, then k instances of resource type Rj are available. (Here: E = 3, F = 3, G = 0.)
• Max: An n*m matrix defines the maximum demand of each process. If Max[i][j] equals k, then process Pi may request at most k instances of resource type Rj. (The Max Need table above.)

• Allocation: An n*m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i][j] equals k, then process Pi is currently allocated k instances of resource type Rj.
• Need/Demand/Requirement: An n*m matrix indicates the remaining resource need of each process. If Need[i][j] equals k, then process Pi may need k more instances of resource type Rj to complete its task. Note that Need[i][j] = Max[i][j] − Allocation[i][j]. These data structures vary over time in both size and value.

Safety Algorithm
We can now present the algorithm for finding out whether or not a system is in a safe state:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n − 1. (Here Work starts as E = 3, F = 3, G = 0 and Finish = [F, F, F, F].)
2. Find an index i such that both Finish[i] == false and Needi ≤ Work. If no such i exists, go to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.

This algorithm may require on the order of m × n^2 operations to determine whether a state is safe.
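The safety algorithm above is mechanical enough to code directly. Below is a minimal C sketch (not from the slides; all names are illustrative) that runs it on the example matrices from this chapter:

    #include <stdio.h>
    #include <stdbool.h>

    #define N 4  /* processes */
    #define M 3  /* resource types E, F, G */

    /* Data from the example above. */
    int Available[M]     = {3, 3, 0};
    int Allocation[N][M] = {{1,0,1}, {1,1,2}, {1,0,3}, {2,0,0}};
    int Need[N][M]       = {{3,3,0}, {1,0,2}, {0,3,0}, {3,4,1}};

    /* Returns true and prints a safe sequence if one exists. */
    bool is_safe(void) {
        int work[M];
        bool finish[N] = {false};
        int sequence[N], count = 0;

        for (int j = 0; j < M; j++) work[j] = Available[j];

        while (count < N) {
            bool progressed = false;
            for (int i = 0; i < N; i++) {
                if (finish[i]) continue;
                bool ok = true;
                for (int j = 0; j < M; j++)
                    if (Need[i][j] > work[j]) { ok = false; break; }
                if (ok) {                        /* Needi <= Work: Pi can finish */
                    for (int j = 0; j < M; j++)  /* Work = Work + Allocation_i   */
                        work[j] += Allocation[i][j];
                    finish[i] = true;
                    sequence[count++] = i;
                    progressed = true;
                }
            }
            if (!progressed) return false;       /* no runnable process: unsafe */
        }
        printf("Safe sequence:");
        for (int i = 0; i < N; i++) printf(" P%d", sequence[i]);
        printf("\n");
        return true;
    }

    int main(void) { puts(is_safe() ? "State is SAFE" : "State is UNSAFE"); }

On this data the sketch finds the safe sequence P0, P2, P1, P3.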
Resource Allocation Graph
• Deadlock can also be described in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E.
• The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
• A directed edge from process Pi to resource type Rj, denoted Pi → Rj, is called a request edge; it signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
• A directed edge from resource type Rj to process Pi, denoted Rj → Pi, is called an assignment edge; it signifies that an instance of resource type Rj has been allocated to process Pi.


• A cycle in the resource allocation graph is a necessary but not a sufficient condition for the detection of deadlock.
• If every resource type has only one instance in the resource allocation graph, then the detection of a cycle is a necessary and sufficient condition for deadlock detection.

Deadlock Detection and Recovery
• Once a deadlock is detected, there are two options for recovery:
  • Process termination
    • Abort all deadlocked processes
    • Abort one process at a time until the deadlock is removed
  • Resource pre-emption
    • Selecting a victim
    • Partial or complete rollback

Ignorance (Ostrich Algorithm)
1. The operating system behaves as if there were no concept of deadlock.
2. Ignoring deadlocks can lead to system performance issues, as resources get locked by idle processes.
3. Despite this, many operating systems opt for this approach to save the cost of implementing deadlock detection.
4. Deadlocks are often rare, so the trade-off may seem justified. Manual restarts may be required when a deadlock occurs.


Fork
• Requirement of the fork command
  • In a number of applications, especially those where the work is of a repetitive nature, such as a web server (with every client we have to run a similar type of code), we have to create a separate process every time to serve a new request.
  • So a better solution is that, instead of creating a new process from scratch every time, we have a short command with which we can do this.

• Idea of the fork command
  • fork is a system call with which the entire image of a process can be copied to create a new process; this idea helps us complete the creation of the new process quickly.
  • After creating a process, we must have a mechanism to identify, in the newly created pair, which one is the child and which is the parent.
• Implementation of the fork command
  • In general, fork returns 0 in the child and the child's PID (a positive value) in the parent; then, using programmer-level code, we can change the code of the child process so that it behaves as a new process.
• Advantages of using fork
  • It is relatively easy to create and manage similar types of processes of a repetitive nature with the help of the fork command.
• Disadvantages
  • To create a new process with fork we have to make a system call, as fork is a system function, which is slow, takes time, and increases the burden on the operating system.
  • Different images of the same type of task share the same code part, which means we keep multiple copies of the same data in main memory.
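A minimal runnable sketch of this pattern, using the standard POSIX fork call (the printed messages are illustrative):

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();          // duplicate the calling process

        if (pid < 0) {
            perror("fork failed");   // no child was created
            return 1;
        } else if (pid == 0) {
            // fork returns 0 in the child
            printf("child: pid=%d\n", getpid());
        } else {
            // fork returns the child's PID in the parent
            printf("parent: created child pid=%d\n", pid);
        }
        return 0;
    }

A web server built on this idea calls fork() once per incoming connection and lets the child branch handle the request.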
• A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, a set of registers, and a thread ID.
• Traditional (heavyweight) processes have a single thread of control: there is one program counter, and one sequence of instructions that can be carried out at any given time.
• Multi-threaded applications have multiple threads within a single process, each having its own program counter, stack and set of registers, but sharing common code, data, and certain structures such as open files.

Multithreading Models
• There are two types of threads to be managed in a modern system: user threads and kernel threads.
• User threads are supported above the kernel, without kernel support. These are the threads that application programmers would put into their programs.
• Kernel threads are supported within the kernel of the OS itself. All modern operating systems support kernel-level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service multiple kernel system calls simultaneously.

Many-To-One Model
• In the many-to-one model, many user-level threads are all mapped onto a single kernel thread.
• However, if a blocking system call is made, the entire process blocks, even if the other user threads would otherwise be able to continue.
• Because a single kernel thread can operate only on a single CPU, the many-to-one model does not allow individual processes to be split across multiple CPUs.
• Green threads for Solaris implemented the many-to-one model in the past, but few systems continue to do so today.

One-To-One Model
• The one-to-one model creates a separate kernel thread to handle each user thread. It overcomes the problems listed above involving blocking system calls and the splitting of processes across multiple CPUs.
• However, the overhead of managing the one-to-one model is more significant, slowing down the system. Most implementations of this model place a limit on how many threads can be created.
• Linux and Windows from 95 to XP implement the one-to-one model for threads.


Many-To-Many Model
• The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads, combining the best features of the one-to-one and many-to-one models.
• Users have no restrictions on the number of threads created. Blocking kernel system calls do not block the entire process.
• Processes can be split across multiple processors. Individual processes may be allocated variable numbers of kernel threads, depending on the number of CPUs present and other factors.

Memory Hierarchy
• Let us first understand what we need from a memory:
  • Large capacity
  • Low per-unit cost
  • Low access time (fast access)
• The memory hierarchy system consists of all the storage devices employed in a computer system.


Locality of Reference
• The references to memory at any given interval of time tend to be confined within a few localized areas in memory. This phenomenon is known as the property of locality of reference. There are two types of locality of reference:
  • Spatial locality: use of data elements in nearby locations.
  • Temporal locality: the reuse of specific data or resources within a relatively small time duration, i.e. the most recently used items.

Duty of the Operating System
• The operating system is responsible for the following activities in connection with memory management:
  1. Address translation: convert logical addresses to physical addresses for data retrieval.
  2. Memory allocation and deallocation: decide which processes or data segments to load into or remove from memory as needed.
  3. Memory tracking: monitor which parts of memory are in use and by which processes.
  4. Memory protection: implement safeguards to restrict unauthorized access to memory, ensuring both process isolation and data integrity.
• There can be two approaches for storing a process in main memory:
  1. Contiguous allocation policy
  2. Non-contiguous allocation policy


Contiguous Allocation Policy
• When a process is required to be executed it must be loaded into main memory. This policy has two implications:
  • It must be loaded into main memory completely for execution.
  • It must be stored in main memory in a contiguous fashion.

Address Translation in Contiguous Allocation
1. Here we use a Memory Management Unit containing a relocation register, which holds the base address of the process in main memory and is added to the logical address every time.
2. To check whether the address generated by the CPU is valid (within range) or invalid, we compare it with the value of the limit register, which contains the maximum number of instructions in the process.
3. So, if the value of the logical address is less than the limit, it is a valid request and we continue with the translation; otherwise, it is an illegal request which is immediately trapped by the OS.

Space Allocation Methods in Contiguous Allocation
• Variable-size partitioning: In this policy, at the start we treat the memory as a whole, a single chunk, and whenever a process requests some space, exactly that much space is allocated if possible, and the remaining space can be reused again.
• Fixed-size partitioning: Here we divide memory into fixed-size partitions, which may be of different sizes; if a process requests some space, a whole partition is allocated if possible, and the space remaining inside the partition is wasted internally.


• First-fit policy: Search the memory from the base and allocate the first partition which is big enough.
  • Advantage: simple, easy to use, easy to understand.
  • Disadvantage: poor performance, both in terms of time and space.
• Best-fit policy: Search the entire memory and allocate the smallest partition which is big enough.
  • Advantage: performs best in the fixed-size partitioning scheme.
  • Disadvantage: difficult to implement; performs worst in variable-size partitioning, as the remaining spaces are of very small size.

• Worst-fit policy: It also searches the entire memory and allocates the largest partition possible.
  • Advantage: performs best in variable-size partitioning.
  • Disadvantage: performs worst in fixed-size partitioning, resulting in large internal fragmentation.

Q Consider five memory partitions of size 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB, where KB refers to kilobyte. These partitions need to be allotted to four processes of sizes 212 KB, 417 KB, 112 KB and 426 KB, in that order. (A worked solution follows below.)
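A worked solution, assuming fixed partitions where each partition holds at most one process:
• First fit: 212 KB → 500 KB, 417 KB → 600 KB, 112 KB → 200 KB; 426 KB cannot be allocated (only 100 KB and 300 KB remain).
• Best fit: 212 KB → 300 KB, 417 KB → 500 KB, 112 KB → 200 KB, 426 KB → 600 KB; all four processes are allocated.
• Worst fit: 212 KB → 600 KB, 417 KB → 500 KB, 112 KB → 300 KB; 426 KB cannot be allocated.
So best fit is the only policy that satisfies all four requests here.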


• Next-fit policy: Next fit is a modification of first fit: after satisfying a request, we start searching for the next request from the current position instead of from the base.

• External fragmentation: External fragmentation is a consequence of the contiguous allocation policy. The total space requested by the process is available in memory but, as it is not contiguous, it cannot be allocated; this wastage is called external fragmentation.

• Internal fragmentation: Internal fragmentation is a consequence of fixed-size partitioning: when a partition that is the same size as or larger than the request is allocated to a process, the space inside the partition left unused by the process is called internal fragmentation.

• How can we solve external fragmentation?
  • We can swap processes in main memory after fixed intervals of time, packing them into one part of memory so that the other part becomes empty (compaction, defragmentation). This solution is very costly in terms of time, as it takes a lot of time to swap processes while the system is running.
  • Or we can go for non-contiguous allocation, which means a process can be divided into parts and different parts can be allocated in different areas.


Non-Contiguous Memory Allocation (Paging)
• Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous.
• Paging avoids external fragmentation.
• Secondary memory is divided into fixed-size partitions (because management is easy), all of the same size, called pages (easy swapping and no external fragmentation).
• Main memory is divided into fixed-size partitions (because management is easy), each of the same size, called frames (easy swapping and no external fragmentation).
• Size of frame = size of page.
• In general the number of pages is much larger than the number of frames (approx. 128 times).

Translation Process
1. The CPU generates a logical address, which is divided into two parts, p and d, where p stands for the page number and d stands for the instruction offset.
2. The page number p is used as an index into a page table.
3. The page table base register (PTBR) provides the base of the page table, and the corresponding entry is accessed using p.
4. There we find the corresponding frame number (the base address of the frame in main memory in which the page is stored).
5. Combining the frame number with the instruction offset gives the physical address, which is used to access main memory.

Page Table
1. The page table is a data structure, not hardware.
2. Every process has a separate page table.
3. The number of entries a process has in the page table is the number of pages the process has in secondary memory.
4. The size of each entry in the page table is the same: it is the corresponding frame number.
5. The page table is a data structure which is itself stored in main memory.
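A minimal C sketch of steps 1-5 above. All sizes and table contents are assumed purely for illustration: a 16-bit logical address with 4 KB pages, so the low 12 bits are the offset d and the high bits are the page number p:

    #include <stdio.h>
    #include <stdint.h>

    #define OFFSET_BITS 12
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

    int main(void)
    {
        // Hypothetical page table: page_table[p] = frame number f
        uint32_t page_table[16] = {5, 9, 1, 7};    // only entries 0..3 filled for the demo

        uint32_t logical = 0x2ABC;                 // example logical address
        uint32_t p = logical >> OFFSET_BITS;       // page number (here: 2)
        uint32_t d = logical & OFFSET_MASK;        // offset within the page (here: 0xABC)
        uint32_t f = page_table[p];                // frame number from the page table (here: 1)
        uint32_t physical = (f << OFFSET_BITS) | d;

        printf("p=%u d=0x%X -> frame=%u physical=0x%X\n", p, d, f, physical);
        return 0;
    }

Running it prints physical address 0x1ABC: the frame number simply replaces the page number while the offset is carried over unchanged.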


• Advantage
  • Removal of external fragmentation.
• Disadvantages
  • The translation process is slow, as main memory is accessed two times (once for the page table and once for the actual access).
  • A considerable amount of space is wasted in storing the page table (metadata).
  • The system suffers from internal fragmentation (as paging is an example of fixed-size partitioning).
  • The translation process is difficult and complex to understand and implement.

    Decimal count            SI (powers of 10)      Binary (powers of 2)
    10^3  = 1 Thousand       10^3  = 1 kilo         2^10 = 1 kilo
    10^6  = 1 Million        10^6  = 1 Mega         2^20 = 1 Mega
    10^9  = 1 Billion        10^9  = 1 Giga         2^30 = 1 Giga
    10^12 = 1 Trillion       10^12 = 1 Tera         2^40 = 1 Tera
                             10^15 = 1 Peta         2^50 = 1 Peta
                             10^18 = 1 Exa          2^60 = 1 Exa
                             10^21 = 1 Zetta        2^70 = 1 Zetta
                             10^24 = 1 Yotta        2^80 = 1 Yotta

If an address is n bits long, it can index 2^n locations:

    Address length in bits    n
    Number of locations       2^n

    Memory size = Number of locations × Size of each location

Conversely, if a memory has n locations, the required address length is the upper bound ⌈log2 n⌉ bits:

    Number of locations       n
    Address length in bits    ⌈log2 n⌉

    Number of locations = Memory size / Size of each location
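For example, a byte-addressable memory of 64 KB has 2^16 locations, so 16-bit addresses are needed; conversely, a 10-bit address can index 2^10 = 1024 locations, i.e. 1 KB of byte-addressable memory.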

• Page table size = Number of entries in the page table × Size of each entry (f)
• Process size = Number of pages × Size of each page

Practice problems (fill in the missing columns):

    S.No   SM       LA    MM       PA    p    f    d    Addressable unit   Page size
    1      32 GB    -     128 MB   -     -    -    -    1 B                1 KB
    2      -        42    -        33    -    -    11   1 B                -
    3      512 GB   -     -        31    -    -    -    1 B                512 B
    4      128 GB   -     32 GB    30    -    -    -    1 B                -
    5      -        28    -        14    -    -    -    -                  4096 B


• A serious problem with paging is that the translation process is slow, as memory is accessed two times (once for the page table and once for the actual access).
• To solve this problem we take the help of the TLB. The TLB is an associative, high-speed memory.
• Each entry in the TLB consists of two parts: a key (the page number) and a value (the frame number). When the associative memory is searched for a page number, the page number is compared with all stored page numbers simultaneously. If the item is found, the corresponding frame-number field is returned.
• The search is fast; the hardware, however, is expensive. The TLB contains the frequently referenced page numbers and their corresponding frame numbers.

• The TLB is used with page tables in the following way. The TLB contains only a few of the page-table entries. When a logical address is generated by the CPU, its page number is presented to the TLB. If the page number is found, its frame number is immediately available and is used to access memory.
• If the page number is not in the TLB (known as a TLB miss), then a memory reference to the page table must be made.
• We also add the page number and frame number to the TLB, so that they will be found quickly on the next reference.
• If the TLB is already full of entries, the operating system must select one for replacement, i.e. apply a replacement policy.
• The percentage of times that a particular page number is found in the TLB is called the hit ratio.


• Effective memory access time (EMAT):
    EMAT = Hit × (TLB + Main Memory) + (1 − Hit) × (TLB + 2 × Main Memory)
• The TLB removes the problem of slow access.
• Disadvantage of the TLB:
  • The TLB can hold the data of only one process at a time, and in case of multiple context switches the TLB will need to be flushed frequently.
• Solutions:
  • Use multiple TLBs, but that is costly.
  • Some TLBs allow certain entries to be wired down, meaning that they cannot be removed from the TLB. Typically, TLB entries for kernel code are wired down.
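For instance, with assumed values of 20 ns for a TLB lookup, 100 ns for a memory access and a hit ratio of 80%:
    EMAT = 0.8 × (20 + 100) + 0.2 × (20 + 2 × 100) = 96 + 44 = 140 ns,
well below the 220 ns that every access would cost without the TLB.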

Size of Page
• If we increase the size of a page, then internal fragmentation increases but the size of the page table decreases.
• If we decrease the size of a page, then internal fragmentation decreases but the size of the page table increases.
• So we have to find the page size at which both costs are minimal.

Multilevel Paging / Hierarchical Paging
• Modern systems support a large logical address space (2^32 to 2^64). In such cases the page table itself becomes excessively large, can contain millions of entries, and can take a lot of space in memory, so it cannot be accommodated in a single frame.
• A simple solution to this is to divide the page table into smaller pieces. One way is to use a two-level paging algorithm, in which the page table itself is also paged.
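A standard way to quantify the page-size trade-off (a textbook result, stated here as an aside rather than taken from these slides): for a process of size s and page-table entries of size e, the per-process overhead is roughly s·e/p for the page table plus p/2 for average internal fragmentation, which is minimized at p = sqrt(2se). For example, s = 1 MB and e = 8 B give p = sqrt(2 × 2^20 × 8) = 4 KB.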


Segmentation
• Paging is unable to separate the user's view of memory from the actual physical memory. Segmentation is a memory-management scheme that supports this user view of memory.
• A logical address space is a collection of segments. Each segment has a name and a length. The addresses specify both the segment name and the offset within the segment. The user therefore specifies each address by two quantities: a segment name and an offset.
• Segments can be of variable lengths, unlike pages, and are stored in main memory.

• Segment table: Each entry in the segment table has a segment base and a segment limit. The segment base contains the starting physical address where the segment resides in memory, and the segment limit specifies the length of the segment.
• The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit. If it is not, we trap to the operating system.
• When an offset is legal, it is added to the segment base to produce the address in physical memory of the desired byte. Segmentation suffers from external fragmentation.

Segmentation with Paging
• Since segmentation also suffers from external fragmentation, it is better to divide the segments into pages.
• In segmentation with paging, a process is divided into segments, and the segments are further divided into pages.
• One can argue that segmentation with paging is quite similar to multilevel paging, but actually it is better, because here when the page table is divided, the sizes of the partitions can differ (just as the sizes of different chapters of a book can differ). All other properties of segmentation with paging are the same as multilevel paging.


Inverted Page Table
• The drawback of paging is that each page table may consist of millions of entries. These tables may consume large amounts of physical memory just to keep track of how other physical memory is being used. To solve this problem, we can use an inverted page table.
• An inverted page table has one entry for each real page (or frame) of memory. Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns the page. Thus, only one page table is in the system, and it has only one entry for each page of physical memory.
• Thus the number of entries in the page table is equal to the number of frames in physical memory.

Virtual Memory
1. To enable multiprogramming and optimize memory, modern computing often uses pure demand paging to keep multiple processes in memory.
2. Pure demand paging: a memory-management scheme where a process starts with no pages in memory. Pages are only loaded when explicitly required during execution.
   1. The process starts with zero pages in memory, so an immediate page fault occurs.
   2. The needed page is loaded into memory; execution resumes after the required page is loaded.
   3. Additional page faults occur as the process continues and requires new pages.
   4. Execution proceeds without further faults once all necessary pages are in memory. The key principle is to load pages only when absolutely necessary.

• Advantages
  • A program is no longer constrained by the amount of physical memory that is available. Virtual memory allows the execution of processes that are not completely in main memory, i.e. a process can be larger than main memory.
  • More programs can be run at the same time, as each uses less main memory.
• Disadvantages
  • Virtual memory is not easy to implement.
  • It may substantially decrease performance if used carelessly (thrashing).

Implementation of Virtual Memory
• We add a new column in the page table holding a binary value: 0 or invalid means the page is not currently in main memory; 1 or valid means the page is currently in main memory.
• Page fault: when a process tries to access a page that is not in main memory, a page fault occurs.


Steps to Handle a Page Fault
1. If the reference was invalid, there is a page fault and the page is not currently in main memory; now we have to load the required page into main memory.
2. We find a free frame; if one is available, we can bring in the desired page, but if not, we have to select a page as a victim, swap it out from main memory to secondary memory, and then swap in the desired page (this situation effectively doubles the page-fault service time).
3. We can reduce this overhead by using a modify bit or dirty bit as a new column in the page table.
   3.1. The modify bit for a page is set whenever the page has been modified. In this case, we must write the page back to the disk.
   3.2. If the modify bit is not set, the page has not been modified since it was read into main memory, so we need not write it back to the disk: it is already there.

4. Now we modify the internal table kept with the process (PCB) and the page table to indicate that the page is now in memory. We restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.

Performance of Demand Paging
• Effective access time (EAT) for demand paging:
    EAT = (1 − p) × ma + p × page fault service time
• Here, p is the page fault rate (the probability of a page fault) and ma is the memory access time.
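For instance (assumed values), with ma = 200 ns, a page-fault service time of 8 ms and p = 0.001:
    EAT = 0.999 × 200 ns + 0.001 × 8,000,000 ns ≈ 8,200 ns,
so even a 1-in-1000 fault rate slows effective memory access down by a factor of about 40.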


Page Replacement Algorithms
• To implement demand paging we must solve a major problem: the page replacement algorithm, which decides which page to replace next.
• First In First Out (FIFO) page replacement: A FIFO replacement algorithm associates with each page the time when that page was brought into memory. When a page must be replaced, the oldest page is chosen, i.e. the first page that came into memory is replaced first.
• In the slide's example (worked below) the number of page faults is 15.
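The example figure did not survive extraction, but a 15-fault FIFO run is exactly what the classic reference string
    7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
produces with 3 frames: every reference faults except the five hits (on 0, 3, 2, 0 and 1), giving 20 − 5 = 15 page faults.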

• The FIFO page-replacement algorithm is easy to understand and program. However, its performance is not always good.
• Belady's anomaly: for some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases.

Optimal Page Replacement Algorithm
• Replace the page that will not be used for the longest period of time.
• It has the lowest page-fault rate of all algorithms: it guarantees the lowest possible page-fault rate for a fixed number of frames and will never suffer from Belady's anomaly.
• Unfortunately, the optimal page-replacement algorithm is difficult to implement, because it requires future knowledge of the reference string. It is mainly used for comparison studies.


Least Recently Used (LRU) Page Replacement Algorithm
• We can think of this strategy as the optimal page-replacement algorithm looking backward in time, rather than forward: replace the page that has not been used for the longest period of time.
• LRU is much better than FIFO replacement in terms of page faults. The LRU policy is often used in industry. LRU also does not suffer from Belady's anomaly.

Thrashing
1. When a process spends more time swapping pages than executing, it's called thrashing. Low CPU utilization prompts adding more processes to increase multiprogramming.
2. If a process needs more frames, it starts taking them from others, causing those processes to also fault and swap pages. This empties the ready queue and lowers CPU utilization.
3. Responding to decreased CPU activity, the scheduler adds more processes, worsening the issue. This leads to thrashing: a cycle of increasing page faults, plummeting system throughput, and rising memory-access times, ultimately resulting in no productive work.

Solution: The Working Set Strategy
• This model uses a parameter Δ to define the working-set window. The set of pages in the most recent Δ page references is the working set.
• If a page is in active use, it will be in the working set. If it is no longer being used, it will drop from the working set.
• The working set is an approximation of the program's locality. The accuracy of the working set depends on the selection of Δ. If Δ is too small, it will not encompass the entire locality; if Δ is too large, it may overlap several localities.

Disk Structure
• Magnetic disks serve as the main secondary storage in computers. Each disk has flat, circular platters with magnetic surfaces for data storage.
• A read-write head hovers over these surfaces, moving in unison on a disk arm. Platters have tracks divided into sectors for logical data storage.
• Disks spin at speeds ranging from 60 to 250 rotations per second, commonly noted in RPM like 5,400 or 15,000.
(Figure: disk anatomy showing spindle, track t, sector s, cylinder c, read-write head, platter and arm.)


• Total transfer time = Seek time + Rotational latency + Transfer time
• Seek time: the time taken by the read/write head to reach the correct track (always given in a question).
• Rotational latency: the time the read/write head waits for the correct sector. In general it is a random value, so for average analysis we consider the time taken by the disk to complete half a rotation.
• Transfer time: the time taken by the read/write head to read or write on the disk. In general, we assume that in one complete rotation the head can read/write an entire track, so
    transfer time = (File size / Track size) × time taken to complete one revolution.

Disk Scheduling
1. One of the responsibilities of the operating system is to use the hardware efficiently. For disk drives, efficiency means less seek time, less waiting time and a high data transfer rate. We can improve all of these by managing the order in which disk I/O requests are serviced.
2. Whenever a process needs I/O to or from the disk, it issues a system call to the operating system. The request may specify several pieces of information: whether this operation is input or output, the disk address, the memory address, and the number of sectors to be transferred.
3. If the desired disk drive and controller are available, the request can be serviced immediately. If the drive or controller is busy, any new requests for service will be placed in the queue of pending requests for that drive.
4. When one request is completed, the operating system chooses which pending request to service next. How does the operating system make this choice? Any one of several disk-scheduling algorithms can be used.

FCFS (First Come First Serve)
The simplest form of disk scheduling is, of course, the first-come, first-served (FCFS) algorithm. In FCFS, the requests are addressed in the order they arrive in the disk queue. This algorithm is intrinsically fair, but it generally does not provide the fastest service.

Advantages:
• Easy to understand, easy to use
• Every request gets a fair chance
• No starvation (though it may suffer from the convoy effect)

Disadvantages:
• Does not try to optimize seek time or waiting time.


SSTF (Shortest Seek Time First) Scheduling
• The major component of total transfer time is seek time; to reduce seek time we service the requests closest to the current head position first. This idea is the basis for the SSTF algorithm.
• In SSTF, the request nearest to the disk arm is executed first, i.e. requests having the shortest seek time are executed first. Although the SSTF algorithm is a substantial improvement over the FCFS algorithm, it is not optimal.

Advantages:
• Seek movement decreases
• Throughput increases

Disadvantages:
• Overhead to calculate the closest request
• Can cause starvation for a request which is far from the current location of the head
• High variance of response time and waiting time, as SSTF favors only the closest requests
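To make the two policies concrete, here is a small self-contained C sketch that computes the total head movement under FCFS and SSTF. The request queue and head position are the classic textbook example, not taken from these slides:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdbool.h>

    // FCFS: service requests in arrival order.
    int fcfs(const int *req, int n, int head) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += abs(req[i] - head);
            head = req[i];
        }
        return total;
    }

    // SSTF: always pick the pending request closest to the current head.
    int sstf(const int *req, int n, int head) {
        bool done[64] = {false};    // assumes n <= 64
        int total = 0;
        for (int served = 0; served < n; served++) {
            int best = -1;
            for (int i = 0; i < n; i++)
                if (!done[i] && (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                    best = i;
            total += abs(req[best] - head);
            head = req[best];
            done[best] = true;
        }
        return total;
    }

    int main(void) {
        int req[] = {98, 183, 37, 122, 14, 124, 65, 67};
        int n = sizeof req / sizeof req[0];
        printf("FCFS: %d cylinders\n", fcfs(req, n, 53));  // prints 640
        printf("SSTF: %d cylinders\n", sstf(req, n, 53));  // prints 236
        return 0;
    }

On this queue SSTF cuts the total head movement from 640 to 236 cylinders, illustrating why reducing seek distance dominates disk-scheduling design.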

SCAN / Elevator Algorithm
• The disk arm starts at one end of the disk and moves towards the other end, servicing requests as it reaches each track, until it gets to the other end of the disk.
• At the other end, the direction of head movement is reversed, and servicing continues. The head continuously scans back and forth across the disk.

Advantages:
• Simple, easy to understand and use
• No starvation, though some requests may wait longer
• Low variance in response time and a good average response time

Disadvantages:
• Long waiting time for requests for locations just visited by the disk arm.
• Unnecessary movement to the end of the disk, even if there are no requests there.


C-SCAN Scheduling
• When the disk head reaches one end and changes direction, fewer requests are nearby, since those cylinders were just serviced. Most pending requests are at the opposite end, having waited the longest.
• Circular SCAN is a variant of SCAN designed to provide a more uniform wait time. Like SCAN, C-SCAN moves the head from one end of the disk to the other, servicing requests along the way. When the head reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip.

Advantages:
• Provides more uniform wait time compared to SCAN
• Better response time compared to SCAN

Disadvantage:
• More seek movement in order to reach the starting position

LOOK Scheduling
• It is similar to the SCAN disk scheduling algorithm, except that the disk arm, instead of going to the end of the disk, goes only to the last request to be serviced in front of the head and then reverses its direction from there. Thus, it prevents the extra delay caused by unnecessary traversal to the end of the disk.

Advantages:
• Better performance compared to SCAN
• Should be used in case of lighter load

Disadvantages:
• Overhead to find the last request
• Should not be used in case of heavier load

C-LOOK
• As LOOK is similar to the SCAN algorithm, in the same way C-LOOK is similar to the C-SCAN disk scheduling algorithm. In C-LOOK, the disk arm, instead of going to the end, goes only to the last request to be serviced in front of the head and then from there jumps to the last request at the other end. Thus, it also prevents the extra delay caused by unnecessary traversal to the end of the disk.

Advantages:
• Provides more uniform wait time compared to LOOK
• Better response time compared to LOOK

Disadvantages:
• The overhead of finding the last request and moving to the initial position is more
• Should not be used in case of heavier load



Q Consider a disk with 512 tracks, where each track is capable of holding 128 sectors and each sector holds 256 bytes. Find the capacity of a track and of the disk, and the number of bits required to address the correct track, the correct sector, and a byte within a sector. (A worked solution follows below.)
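A worked solution: track capacity = 128 × 256 B = 32 KB; disk capacity = 512 × 32 KB = 16 MB. Track number: log2 512 = 9 bits; sector within a track: log2 128 = 7 bits; byte within a sector: log2 256 = 8 bits. So 9 + 7 + 8 = 24 bits suffice to address any byte on the disk.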


Q Consider a disk where each sector contains 512 bytes, there are 400 sectors per track and 1000 tracks on the disk. If the disk is rotating at a speed of 1500 RPM, find the total time required to transfer a file of size 1 MB. Suppose the seek time is 4 ms.

Q Consider a system with 8 sectors per track and 512 bytes per sector. Assume that the disk rotates at 3000 RPM and the average seek time is 15 ms. Find the total time required to transfer a file which requires 8 sectors to be stored:
a) assuming contiguous allocation
b) assuming non-contiguous allocation

(Worked solutions follow below.)
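Worked solutions, using the average-latency conventions above:
• First question: one rotation takes 60/1500 s = 40 ms, so the average rotational latency is 20 ms. Track size = 400 × 512 B = 200 KB. Transfer time = (1 MB / 200 KB) × 40 ms = 5.12 × 40 ms = 204.8 ms. Total = 4 + 20 + 204.8 = 228.8 ms.
• Second question: one rotation takes 60/3000 s = 20 ms, so the average latency is 10 ms and one sector takes 20/8 = 2.5 ms to transfer.
  a) Contiguous: the 8 sectors form a full track, so total = 15 (seek) + 10 (latency) + 20 (one full rotation) = 45 ms.
  b) Non-contiguous (each sector placed independently, so each needs its own latency): after the single 15 ms seek, each sector costs 10 + 2.5 = 12.5 ms, so total = 15 + 8 × 12.5 = 115 ms.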

File Allocation Methods
The main aim of the file allocation problem is to utilize disk space effectively and to allow files to be accessed quickly. Three major methods of allocating disk space are in wide use:
• Contiguous
• Linked
• Indexed
Each method has advantages and disadvantages. Although some systems support all three, it is more common for a system to use one method for all files.

Contiguous Allocation
• Contiguous allocation requires that each file occupy a set of contiguous blocks on the disk.
• In the directory we usually store three columns: the file name, the starting disk block address, and the length of the file in number of blocks.


• Advantage
  • Accessing a file that has been allocated contiguously is easy. Thus, both sequential and direct access can be supported by contiguous allocation.
• Disadvantages
  • It suffers from a huge amount of external fragmentation.
  • Another problem with contiguous allocation is file modification: a file cannot easily grow in place.

Linked Allocation
• With linked allocation, each file is a linked list of disk blocks; the disk blocks may be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file.

• Advantages
  • Creating, reading and writing a file is simple. The size of a file need not be declared when the file is created; a file can continue to grow as long as free blocks are available.
  • There is no external fragmentation with linked allocation, and any free block on the free-space list can be used to satisfy a request.
• Disadvantages
  • Only sequential access is possible. To find the ith block of a file, we must start at the beginning and follow the pointers until we get to the ith block.
  • Another disadvantage is the space required for the pointers, so each file requires slightly more space than it would otherwise.

Indexed Allocation
• Indexed allocation solves the problems of contiguous and linked allocation by bringing all the pointers together into one location: the index block.


• Each file has an index block containing an array of disk-block addresses. The directory entry points to this index block.
• When a file is created, all index-block pointers are null. Writing to the ith block updates the corresponding index-block entry with a block address from the free-space manager.
• The size of the index block is a trade-off: it should be small to save space but large enough to accommodate the pointers for big files.
• Linked scheme: To allow for large files, we can link together several index blocks. For example, an index block might contain a small header giving the name of the file and a set of the first 100 disk-block addresses. The next address (the last word in the index block) is null (for a small file) or is a pointer to another index block (for a large file).

• Multilevel index: A variant of linked representation uses a first-level index block to point to a set of second-level index blocks, which in turn point to the file blocks. To access a block, the operating system uses the first-level index to find a second-level index block and then uses that block to find the desired data block. This approach could be continued to a third or fourth level, depending on the desired maximum file size.
• Combined scheme: In UNIX-based systems, the file's inode stores the first 15 pointers of the index block. The first 12 point directly to data blocks, eliminating the need for a separate index block for small files. The next three pointers are for indirect blocks: the first for a single indirect block, the second for a double indirect block, and the last for a triple indirect block, each adding a level of indirection to the actual data blocks.
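To see the scale this gives (illustrative numbers, not from the slides): with 4 KB blocks and 4-byte block addresses, one indirect block holds 1024 pointers. The 12 direct pointers cover 48 KB, the single indirect block adds 1024 × 4 KB = 4 MB, the double indirect adds 1024^2 × 4 KB = 4 GB, and the triple indirect adds 1024^3 × 4 KB = 4 TB, so the maximum file size is roughly 4 TB.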


• Advantage
  • Indexed allocation supports direct access without suffering from external fragmentation, because any free block on the disk can satisfy a request for more space.
• Disadvantage
  • Indexed allocation does suffer from wasted space. The pointer overhead of the index block is generally greater than the pointer overhead of linked allocation.

Free-Space Management
• Since disk space is limited, we need to reuse the space from deleted files for new files, if possible. To keep track of free disk space, the system maintains a free-space list. The free-space list records all free disk blocks, those not allocated to some file or directory.
• To create a file, we search the free-space list for the required amount of space and allocate that space to the new file. This space is then removed from the free-space list. When a file is deleted, its disk space is added to the free-space list.

Linked List
• One approach to free-space management is to link together all the free disk blocks, keeping a pointer to the first free block in a special location on the disk and caching it in memory.
• This first block contains a pointer to the next free disk block, and so on.
• This scheme is not efficient; to traverse the list, we must read each block, which requires substantial I/O time. However, the operating system usually needs just one free block to allocate to a file, so the first block in the free list is used.


Bit Vector
• Frequently, the free-space list is implemented as a bit map or bit vector. Each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0.
• For example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and 27 are free and the rest of the blocks are allocated. The free-space bit map would be 001111001111110001100000011100000...
• The main advantage of this approach is its relative simplicity and its efficiency in finding the first free block or n consecutive free blocks on the disk.
• Unfortunately, bit vectors are inefficient unless the entire vector is kept in main memory. Keeping it in main memory is possible for smaller disks but not necessarily for larger ones.
• A 1.3-GB disk with 512-byte blocks would need a bit map of over 332 KB to track its free blocks.
• A 1-TB disk with 4-KB blocks has 2^28 blocks and so needs 2^28 bits = 32 MB to store its bit map. Given that disk sizes constantly increase, the problem with bit vectors will continue to escalate as well.
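A minimal C sketch of this scheme on a 32-block toy disk, seeded with the slide's example (bit 1 = free, as above):

    #include <stdio.h>
    #include <stdint.h>

    #define NUM_BLOCKS 32
    /* Bit i is 1 if block i is free, 0 if allocated. */
    static uint32_t bitmap = 0;

    void mark_free(int block)      { bitmap |=  (1u << block); }
    void mark_allocated(int block) { bitmap &= ~(1u << block); }
    int  is_free(int block)        { return (bitmap >> block) & 1u; }

    /* Find the first free block, or -1 if the disk is full. */
    int first_free(void) {
        for (int i = 0; i < NUM_BLOCKS; i++)
            if (is_free(i)) return i;
        return -1;
    }

    int main(void) {
        int free_blocks[] = {2,3,4,5,8,9,10,11,12,13,17,18,25,26,27};
        for (unsigned i = 0; i < sizeof free_blocks / sizeof *free_blocks; i++)
            mark_free(free_blocks[i]);

        int b = first_free();                          /* 2, per the example */
        printf("first free block: %d\n", b);
        mark_allocated(b);                             /* give it to a file */
        printf("next free block: %d\n", first_free()); /* now 3 */
        return 0;
    }

A real file system keeps one such word per 32 (or 64) blocks and scans for the first non-zero word, which is exactly the "find first free block" efficiency the slide mentions.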

File Organization
• File organization refers to the way data is stored in a file. File organization is very important because it determines the methods of access, efficiency, flexibility and the storage devices that can be used.
• Four methods of organizing files:
  1. Sequential file organization:
     a. Records are stored and accessed in a particular sorted order using a key field.
     b. Retrieval requires searching sequentially through the file, record by record, to the end.
  2. Random or direct file organization:
     a. Records are stored randomly but accessed directly.
     b. To access a file which is stored randomly, a record key is used to determine where the record is stored on the storage media.
     c. Magnetic and optical disks allow data to be stored and accessed randomly.


  3. Serial file organization:
     a. Records in a file are stored and accessed one after another.
     b. This type of organization is mainly used on magnetic tapes.
  4. Indexed-sequential file organization:
     • Almost similar to the sequential method, except that an index is used to enable the computer to locate individual records on the storage media.
     • For example, on a magnetic drum, records are stored sequentially on the tracks. However, each record is assigned an index that can be used to access it directly.

File Access Mechanisms
• Sequential access:
  • The simplest access mechanism, in which the information stored in a file is processed in order, one record after the other.
  • For example, editors and compilers usually access files in this manner.
• Direct access:
  • An alternative method for accessing a file, based on the disk model of a file, since disks allow random access to any block or record of a file.
  • For this method, a file is viewed as a numbered sequence of blocks or records which are read/written in an arbitrary manner, i.e. there is no restriction on the order of reading or writing.
  • It is well suited for database management systems.
• Indexed access:
  • In this method an index is created which contains a key field and a pointer to the various blocks.
  • To find an entry in the file for a key value, we first search the index and then use the pointer to directly access the file and find the desired entry.

Directory
• A directory is similar to a "folder" in everyday terminology, and it exists within a file system.
• It's a virtual container where multiple files and other directories (often called subdirectories) can reside.
• It organizes the file system in a hierarchical manner, meaning directories can contain subdirectories, which may contain further subdirectories, and so on.

Operations That Can Be Performed on a Directory
1. Create directory: make a new directory to store files and subdirectories.
2. Delete directory: remove an existing directory, usually only if it's empty.
3. Rename directory: change the name of a directory.
4. List contents: view the files and subdirectories within a directory.
5. Move directory: relocate a directory to a different path in the file system.
6. Copy directory: make a duplicate of a directory, including its files and subdirectories.
7. Change directory: switch the working directory to a different one.
8. Search directory: find specific files or subdirectories based on certain criteria like name or file type.
9. Sort files: arrange the files in a directory by name, date, size, or other attributes.
10. Set permissions: change the access controls for a directory (read, write, execute).
Why It's Necessary
• Organization: It helps in sorting and locating files more efficiently.
• User-Friendliness: Directories make it easier for users to categorize their files by project, file type, or other attributes.
• Access Control: Using directories, different levels of access permission can be applied, providing an extra layer of security.

Features of Directories
• Metadata: Directories also store metadata about the files and subdirectories they contain, such as permissions, ownership, and timestamps.
• Dynamic Nature: As files are added or removed, the directory dynamically updates its list of contents.
• Links and Shortcuts: Some systems support the creation of pointers or links within directories to other files or directories.

Single-Level Directory vs. Two-Level Directory
• User Isolation:
   • Single-Level: No user-specific directories; all users share the same directory space.
   • Two-Level: Each user has their own private directory.
• Organization:
   • Single-Level: All files are stored in one directory, making it less organized.
   • Two-Level: Files can be organized under user-specific directories, allowing for better file management.
• Search Efficiency:
   • Single-Level: Can be less efficient, as all files are in a single directory, requiring more time to find a specific file.
   • Two-Level: More efficient due to fewer files in each user-specific directory.
• Access Control:
   • Single-Level: Hard to implement user-specific access controls because all files reside in the same directory.
   • Two-Level: Easier to implement user-specific access controls, enhancing security.
• Complexity:
   • Single-Level: Simpler to implement, but can become cluttered and difficult to manage with many files.
   • Two-Level: Slightly more complex due to the need for user management, but offers better organization. (A toy sketch contrasting the two schemes follows below.)
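A toy sketch of the key difference, modeling each scheme with plain Python dicts (the user and file names are made up): a single-level directory is one shared namespace where names collide, while a two-level directory gives each user a private namespace.

```python
# Single-level: one shared directory -- file names collide across users.
single_level = {}
single_level["notes.txt"] = "Alice's data"
single_level["notes.txt"] = "Bob's data"      # silently replaces Alice's entry

# Two-level: a master directory maps each user to a private user directory.
two_level = {"alice": {}, "bob": {}}
two_level["alice"]["notes.txt"] = "Alice's data"
two_level["bob"]["notes.txt"] = "Bob's data"  # no collision with Alice

def lookup(master, user, filename):
    """Two-level lookup: index by user first (user isolation), then by name."""
    return master[user][filename]

print(lookup(two_level, "alice", "notes.txt"))  # -> Alice's data
```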
Sequential File vs. Indexed File
• Access Method:
   • Sequential: Records accessed one after another in order.
   • Indexed: Records can be accessed directly using an index.
• Speed of Access:
   • Sequential: Slower, especially for large files.
   • Indexed: Faster for random access, thanks to the index.
• Storage Efficiency:
   • Sequential: Generally more efficient, as no space is used for an index.
   • Indexed: Less efficient due to the storage needed for the index.
• Update Complexity:
   • Sequential: Simpler; usually involves appending.
   • Indexed: More complex; the index must be updated for every record change.
• Use Case:
   • Sequential: Suitable for batch processing, backups.
   • Indexed: Suitable for databases, directories with quick lookup.

File Protection System
• Reliability:
   • Reliability in a file protection system ensures that files are accessible and retrievable whenever needed, without loss of data. Techniques such as backup, mirroring, and RAID configurations contribute to high reliability.
• Security:
   • Security mechanisms protect files from unauthorized access, modification, or deletion. Encryption, firewall settings, and secure file transfer protocols like SFTP can be employed to safeguard files.
• Controlled Access:
   • Controlled access specifies who can do what with a file. Users are given permissions like read, write, and execute (r-w-x), often categorized into roles for easy management. Controlled access is crucial for maintaining the integrity and confidentiality of files.
• Access Control:
   • Access control mechanisms like Access Control Lists (ACLs) or Role-Based Access Control (RBAC) define rules specifying which users or system processes are granted access to files and directories. They can also specify the type of operations (read, write, execute) permitted.
• In summary, a robust file protection system is multi-layered, incorporating reliability measures, strong security protocols, and detailed access control mechanisms to ensure the safe and efficient management of files. (A small r-w-x sketch follows below.)
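To make the r-w-x idea concrete, here is a small sketch using the POSIX-style permission bits from Python's standard stat module; the check_access helper and the sample mode are illustrative.

```python
import stat

# Sample mode: owner read+write, group read, others nothing -> rw-r-----
mode = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP

# Map (class, operation) pairs to the corresponding permission bit.
BITS = {
    ("usr", "r"): stat.S_IRUSR, ("usr", "w"): stat.S_IWUSR, ("usr", "x"): stat.S_IXUSR,
    ("grp", "r"): stat.S_IRGRP, ("grp", "w"): stat.S_IWGRP, ("grp", "x"): stat.S_IXGRP,
    ("oth", "r"): stat.S_IROTH, ("oth", "w"): stat.S_IWOTH, ("oth", "x"): stat.S_IXOTH,
}

def check_access(mode, who, op):
    """Return True if class `who` ('usr'/'grp'/'oth') may perform `op` ('r'/'w'/'x')."""
    return bool(mode & BITS[(who, op)])

print(check_access(mode, "usr", "w"))       # True: owner may write
print(check_access(mode, "oth", "r"))       # False: others may not read
print(stat.filemode(mode | stat.S_IFREG))   # '-rw-r-----'
```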
Access Matrix
• The Access Matrix is a conceptual framework used in computer security to describe the permissions that different subjects (such as users or processes) have when accessing different objects (such as files, directories, or resources).
• In this matrix, each row represents a subject and each column represents an object. The entry at the intersection of a row and column defines the type of access that the subject has to the object.
• Matrix Form: In its most straightforward representation, the Access Matrix is a table where the cell at the intersection of row i and column j contains the set of operations that subject i can perform on object j.

           File A   File B   File C
   User 1  r-w      r        -
   User 2  r        w        r-w
   User 3  -        r        w

• Here, 'r' indicates read permission, 'w' indicates write permission, and '-' indicates no permission. (A dict-based sketch of this matrix follows below.)
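A minimal sketch of the example matrix above as a Python dict of dicts; storing only the non-empty cells also previews the sparse representation discussed later.

```python
# The example matrix, with '-' cells simply omitted (sparse storage).
access_matrix = {
    "User 1": {"File A": {"r", "w"}, "File B": {"r"}},
    "User 2": {"File A": {"r"}, "File B": {"w"}, "File C": {"r", "w"}},
    "User 3": {"File B": {"r"}, "File C": {"w"}},
}

def is_allowed(subject, obj, op):
    """Look up the cell at (subject, obj) and test for operation `op`."""
    return op in access_matrix.get(subject, {}).get(obj, set())

print(is_allowed("User 1", "File A", "w"))  # True
print(is_allowed("User 3", "File A", "r"))  # False: '-' means no permission
```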
• Access Control Lists (ACLs): Each object's column in the matrix can be converted to an Access Control List, which lists all subjects and their corresponding permissions for that object.
   • File A:
      • User 1: r-w
      • User 2: r
   • File B:
      • User 1: r
      • User 2: w
      • User 3: r

• Capability Lists: Each subject's row in the matrix can be converted into a Capability List, which lists all objects and the operations the subject can perform on them.
   • User 1:
      • File A: r-w
      • File B: r
   • User 2:
      • File A: r
      • File B: w
      • File C: r-w

(A sketch deriving both views from the matrix follows below.)
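Continuing the sketch, the two views are just the columns and rows of the same matrix; this assumes the access_matrix dict defined earlier.

```python
def to_acls(matrix):
    """Object-centric view: for each object (column), who may do what."""
    acls = {}
    for subject, row in matrix.items():
        for obj, ops in row.items():
            acls.setdefault(obj, {})[subject] = ops
    return acls

def to_capabilities(matrix):
    """Subject-centric view: each row already is that subject's capability list."""
    return {subject: dict(row) for subject, row in matrix.items()}

print(to_acls(access_matrix)["File B"])
# {'User 1': {'r'}, 'User 2': {'w'}, 'User 3': {'r'}}
print(to_capabilities(access_matrix)["User 1"])
# {'File A': {'r', 'w'}, 'File B': {'r'}}
```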
• Sparse Matrix: In large systems, the Access Matrix is usually sparse. Special data structures can be used to represent only the non-empty cells to save space.

Implementation of Access Matrix
1. Global Table: A global table is essentially the raw access matrix itself, where each cell denotes the permissions a subject has on an object. While straightforward, this method is not practical for large systems due to the sparsity of the matrix and the associated storage overhead.
2. Access Lists for Objects: Here, the focus is on objects like files or directories. Each object maintains an Access Control List (ACL) that records what operations are permissible by which subjects. ACLs are object-centric and make it easy to determine all access rights to a particular object. However, this approach makes it cumbersome to list all capabilities of a particular subject across multiple objects.
3. Capability Lists for Domains: In this subject-centric approach, each subject or domain maintains a list of objects along with the operations it can perform on them, known as a Capability List. This makes it straightforward to manage and review the permissions granted to each subject. On the downside, revoking or changing permissions across all subjects for a specific object can be more challenging.
4. Lock-Key Mechanism: In a lock-key mechanism, each object is assigned a unique "lock," and subjects are granted "keys" to unlock these locks. When a subject attempts to access an object, the system matches the key with the lock to determine if the operation is permissible. This approach can be seen as an abstraction over the access matrix and can be used to dynamically change permissions with minimal overhead. (A toy lock-key sketch follows below.)
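A toy sketch of the lock-key idea, with made-up lock identifiers; note how re-locking an object revokes access for every key holder at once, which is what makes dynamic permission changes cheap.

```python
objects = {"File A": "lock-17", "File B": "lock-42"}   # one unique lock per object

# Each subject's keys, tagged with the operations that key unlocks.
subjects = {
    "User 1": {"lock-17": {"r", "w"}},                 # full key to File A
    "User 2": {"lock-42": {"r"}},                      # read-only key to File B
}

def may_access(subject, obj, op):
    """Permit the operation only if the subject holds a key matching the lock."""
    lock = objects[obj]
    return op in subjects.get(subject, {}).get(lock, set())

print(may_access("User 1", "File A", "w"))  # True
print(may_access("User 2", "File B", "w"))  # False: key grants read only

objects["File A"] = "lock-99"               # re-lock: dynamic revocation
print(may_access("User 1", "File A", "w"))  # False: old key no longer fits
```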