Operating System

An operating system (OS) is a type of system software that manages computer hardware resources and provides an interface between users and the hardware. It performs essential functions such as process management, memory management, file management, and device management, ensuring efficient resource allocation and multitasking. Examples of operating systems include Windows, Linux, and Mac OS, with various types such as single-user, multi-user, and real-time operating systems.

Operating System

Question No. 1: Define operating system.


Answer – An operating system is a type of system software.
It manages all the resources of the computer and acts as
an interface between the software and the different parts
of the computer hardware. It is designed to manage the
overall resources and operations of the computer.
An Operating System is a fully integrated set of specialized
programs that handles all the operations of the computer.
Its purpose is to provide an environment in which a user
can execute programs in an efficient manner.
Examples of Operating Systems are Windows, Linux,
Mac OS, etc.
An Operating System (OS) is a collection of software
that manages computer hardware resources and
provides common services for computer programs.
It controls and monitors the execution of all other
programs that reside in the computer, including
application programs and other system software.
We need a system that can act as an intermediary
and manage all the processes and resources present in
the system.

An Operating System can therefore be defined as an interface
between the user and the hardware.
It is responsible for the execution of all processes,
resource allocation, CPU management, file
management, and many other tasks.
o The Central Processing Unit (CPU) contains an
arithmetic and logic unit for manipulating data, a
number of registers for storing data, and a control
circuit for fetching and executing instructions.
o The memory unit of a digital computer contains
storage for instructions and data.
o Random Access Memory (RAM) holds data for
real-time processing.
o The input-output devices accept input from the
user and display the final results to the user.
o The input-output devices connected to the
computer include the keyboard, mouse, terminals,
magnetic disk drives, and other communication
devices.

Types of Operating Systems


1. Single-user operating system
2. Multi-user operating system
3. Multitasking operating system
4. Multiprocessing operating system
5. Multiprogramming operating system
6. Real-time operating system
7. Network operating system
8. Time-sharing operating system

1. Single-user operating system – An operating system
that supports a single user at a time is known as a
single-user operating system.
2. Multi-user operating system – An operating system
that supports multiple users at a time is known as a
multi-user operating system.
3. Multitasking operating system – An operating system
that supports the execution of multiple tasks at a
time is known as a multitasking operating system.
4. Multiprogramming operating system – In a
multiprogramming operating system, more than one
program is present in main memory and any one of
them can be kept in execution.
This is basically used for better utilization of
resources.
5. Time-sharing operating system – Each task is given
some time to execute so that all the tasks work
smoothly. Each user gets a share of CPU time on a
single system.
These systems are also known as multitasking
systems. The tasks can come from a single user or
from different users.
The time that each task gets to execute is called a
quantum. After this time interval is over, the OS
switches to the next task.
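The quantum-based switching described above can be sketched as a simple round-robin loop. This is an illustrative simulation only, not real scheduler code; the task names and quantum value are hypothetical.

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate time sharing: each task runs for at most `quantum`
    time units, then the OS switches to the next task."""
    queue = deque(tasks)          # (name, remaining_time) pairs
    timeline = []                 # order in which tasks get the CPU
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)
        timeline.append((name, ran))
        if remaining - ran > 0:   # not finished: go to the back of the queue
            queue.append((name, remaining - ran))
    return timeline

# Two tasks sharing one CPU with a quantum of 2 time units.
print(round_robin([("T1", 3), ("T2", 4)], quantum=2))
# → [('T1', 2), ('T2', 2), ('T1', 1), ('T2', 2)]
```

Notice that neither task monopolizes the CPU: each gets at most one quantum before the switch, which is exactly what makes all tasks "work smoothly" for interactive users.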

6. Real-time operating system – These OSs serve
real-time systems, where the time interval required
to process and respond to inputs is very small.
This time interval is called the response time.
Real-time systems are used when the timing
requirements are very strict, as in missile systems,
air traffic control systems, robots, etc.

Characteristics of the Operating System
1. Resource Management
 Manages hardware resources like CPU, memory, disk storage, and I/O devices.
 Allocates resources efficiently for multiple processes and users.
2. Process Management
 Handles process creation, execution, and termination.
 Manages multitasking and ensures fair process scheduling.
3. Memory Management
 Tracks memory usage and allocates/deallocates memory for processes.
 Implements techniques like virtual memory to maximize efficiency.
4. File System Management
 Manages data storage and retrieval in files and directories.
 Provides file access permissions, security, and naming conventions.
5. Device Management
 Controls and communicates with connected hardware devices.
 Uses device drivers to ensure compatibility.
6. User Interface
 Provides an interface for user interaction, either:
o Graphical User Interface (GUI): Windows, macOS.
o Command-Line Interface (CLI): Linux terminal, PowerShell.
7. Security and Protection
 Protects data and resources from unauthorized access.
 Implements authentication (passwords, biometrics), encryption, and firewalls.
8. Networking Capabilities
 Enables communication and resource sharing across networks.
 Supports protocols like TCP/IP for internet functionality.
9. Multitasking and Multithreading
 Allows multiple programs (or threads) to run simultaneously.
 Ensures efficient CPU utilization through context switching.
10. Error Detection and Handling
 Monitors the system for hardware or software errors.
 Provides recovery mechanisms to maintain stability.

Services of Operating System


 File Management
 Memory Management
 Process Management
 Resource Management
 Time Management
 Program Execution
 Input/Output Operations
 Communication between Processes
 Security and Privacy
 User Interface
 Networking
 Error Handling

Program Execution
 Loads programs into memory, executes them, and ensures they run smoothly.

Error Detection and Handling
 Hardware Monitoring: Identifies issues like disk failures or power disruptions.
 Software Monitoring: Detects bugs or exceptions in program execution.
 Recovery Services: Provides mechanisms to recover from errors.

User Interface Services
 Graphical User Interface (GUI): Provides visual interaction through windows, icons, and menus.
 Command-Line Interface (CLI): Allows text-based commands for system interaction.

Networking Services
 Communication Protocols: Supports internet and network connectivity (e.g., TCP/IP).
 File Sharing: Facilitates sharing of files and resources across networks.
 Remote Access: Allows control of the system over a network.


Functions of Operating System
An Operating System acts as a
communication interface between the user
and computer hardware.
Its purpose is to provide a platform on
which a user can execute programs
conveniently and efficiently.
An operating system is software that
manages the allocation of all computer
hardware resources.
The main goal of the Operating System is
to make the computer environment more
convenient to use and the Secondary goal
is to use the resources most efficiently.
Why Are Operating Systems Used?
 It controls all the computer resources.
 It provides valuable services to user
programs.
 It coordinates the execution of user
programs.
 It provides resources for user
programs.
 It provides an interface (virtual
machine) to the user.
 It hides the complexity of software.
 It supports multiple execution modes.
 It monitors the execution of user
programs to prevent errors.
Functions of an Operating System
Memory Management
The operating system manages the Primary Memory or Main
Memory. Main memory is made up of a large array of bytes or
words where each byte or word is assigned a certain address.
Main memory is fast storage and it can be accessed directly by
the CPU. For a program to be executed, it must first be loaded
into main memory.

An Operating System performs the following activities for
memory management:
 It keeps track of primary memory, i.e.,
which bytes of memory are used by
which user program, which memory
addresses have already been allocated,
and which have not yet been used.
 In multiprogramming, the OS decides
the order in which processes are
granted memory access, and for how
long.
 It allocates memory to a process
when the process requests it and
deallocates the memory when the
process has terminated or is performing
an I/O operation.

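The bookkeeping described in the memory management activities above can be sketched as a toy allocator that records which process owns which bytes of primary memory. This is a simplification for illustration; the memory size and process names are hypothetical.

```python
class ToyMemoryManager:
    """Track which process owns which bytes of primary memory."""
    def __init__(self, size):
        self.owner = [None] * size      # owner[i] = process using byte i

    def allocate(self, pid, n):
        """Find n free contiguous bytes and assign them to pid."""
        free_run = 0
        for i, o in enumerate(self.owner):
            free_run = free_run + 1 if o is None else 0
            if free_run == n:
                start = i - n + 1
                for j in range(start, i + 1):
                    self.owner[j] = pid
                return start            # base address of the allocation
        return None                     # not enough contiguous memory

    def deallocate(self, pid):
        """Release all memory held by a terminated process."""
        self.owner = [None if o == pid else o for o in self.owner]

mm = ToyMemoryManager(16)
a = mm.allocate("P1", 8)   # P1 gets bytes 0..7
b = mm.allocate("P2", 8)   # P2 gets bytes 8..15
mm.deallocate("P1")        # P1 terminates; its bytes become free again
c = mm.allocate("P3", 4)   # P3 reuses part of P1's old space
print(a, b, c)             # → 0 8 0
```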

Processor Management
In a multi-programming environment,
the OS decides the order in which
processes have access to the processor,
and how much processing time each
process has. This function of OS is
called Process Scheduling. An
Operating System performs the
following activities for Processor
Management.
It allocates the CPU (processor) to a
process and de-allocates the processor
when the process no longer requires it.
Process management
The Two-State Model
The simplest way to think about a
process’s lifecycle is with just two
states:
1. Running: This means the process is
actively using the CPU to do its work.
2. Not Running: This means the
process is not currently using the CPU.
It could be waiting for something, like
user input or data, or it might just be
paused.
Two State Process Model
When a new process is created, it starts
in the Not Running state and waits in a
queue; a program called the dispatcher
later selects it and gives it the CPU.
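The two-state lifecycle above can be sketched in a few lines. This is only a conceptual model, not real kernel code; the state names follow the text.

```python
class Process:
    """A process in the two-state model: Running or Not Running."""
    def __init__(self, pid):
        self.pid = pid
        self.state = "Not Running"   # new processes start here

def dispatch(process):
    """The dispatcher gives the CPU to a waiting process."""
    process.state = "Running"

def pause(process):
    """The process loses the CPU (e.g., waiting for input)."""
    process.state = "Not Running"

p = Process(1)
print(p.state)   # → Not Running (a new process starts here)
dispatch(p)
print(p.state)   # → Running
pause(p)
print(p.state)   # → Not Running
```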

Device Management
An OS manages device communication
via its respective drivers. It performs
the following activities for device
management.
 Keeps track of all devices connected to
the system. Designates a program
responsible for every device known as
the Input/Output controller.
 Decides which process gets access to a
certain device and for how long.
 Allocates devices effectively and
efficiently. Deallocates devices when
they are no longer required.
 There are various input and output
devices. An OS controls the working of
these input-output devices.
 It receives the requests from these
devices, performs a specific task, and
communicates back to the requesting
process.
File Management
A file system is organized into directories for efficient or easy
navigation and usage. These directories may contain other
directories and other files. An Operating System carries out the
following file management activities. It keeps track of where
information is stored, user access settings, the status of every
file, and more. These facilities are collectively known as the file
system. An OS keeps track of information regarding the creation,
deletion, transfer, copy, and storage of files in an organized way.
It also maintains the integrity of the data stored in these files,
including the file directory structure, by protecting against
unauthorized access.

History of Operating System


An operating system is a type of software that acts as an
interface between the user and the hardware. It is responsible for
handling various critical functions of the computer and utilizing
resources very efficiently so the operating system is also known
as a resource manager. The operating system also acts like a
government because just as the government has authority over
everything, similarly the operating system has authority over all
resources. Various tasks that are handled by OS are file
management, task management, garbage management,
memory management, process management, disk management,
I/O management, peripherals management, etc.
Generations of Operating Systems
 1940s-1950s: Early Beginnings
o Computers operated without operating systems (OS).
o Programs were manually loaded and run, one at a
time.
o The first operating system was introduced in 1956. It
was a batch processing system GM-NAA I/O (1956)
that automated job handling.
 1960s: Multiprogramming and Timesharing
o Introduction of multiprogramming to utilize CPU
efficiently.
o Timesharing systems, like CTSS (1961) and Multics
(1969), allowed multiple users to interact with a single
system.
 1970s: Unix and Personal Computers
o Unix (1971) revolutionized OS design with simplicity,
portability, and multitasking.
o Personal computers emerged, leading to simpler OSs
like CP/M (1974) and PC-DOS (1981).
 1980s: GUI and Networking
o Graphical User Interfaces (GUIs) gained popularity with
systems like Apple Macintosh (1984) and Microsoft
Windows (1985).
o Networking features, like TCP/IP in Unix, became
essential.
 1990s: Linux and Advanced GUIs
o Linux (1991) introduced open-source development.
o Windows and Mac OS refined GUIs and gained
widespread adoption.
 2000s-Present: Mobility and Cloud
o Mobile OSs like iOS (2007) and Android (2008)
dominate.
o Cloud-based and virtualization technologies reshape
computing, with OSs like Windows Server and Linux
driving innovation.
I/O Management
I/O management is an important function of the operating
system. It refers to how the OS handles input and output
operations between the computer and external devices, such as
keyboards, mice, printers, hard drives, and monitors.
Memory Management in Operating System
The term memory can be defined as a collection of data in a
specific format. It is used to store instructions and process data.
The memory comprises a large array or group of words or bytes,
each with its own location. The primary purpose of a computer
system is to execute programs. These programs, along with the
information they access, should be in the main memory during
execution. The CPU fetches instructions from memory according
to the value of the program counter.
To achieve a degree of multiprogramming and proper utilization
of memory, memory management is important. Many memory
management methods exist, reflecting various approaches, and
the effectiveness of each algorithm depends on the situation.

What is Main Memory?


The main memory is central to the operation of a Modern
Computer. Main Memory is a large array of words or bytes,
ranging in size from hundreds of thousands to billions. Main
memory is a repository of rapidly available information shared by
the CPU and I/O devices. Main memory is the place where
programs and information are kept when the processor is
effectively utilizing them. Main memory is associated with the
processor, so moving instructions and information into and out of
the processor is extremely fast. Main memory is also known
as RAM (Random Access Memory). This memory is volatile: RAM
loses its data when a power interruption occurs.
(Figure: Main Memory)
What is Memory Management?
In a multiprogramming computer, the Operating System resides
in a part of memory, and the rest is used by multiple processes.
The task of subdividing the memory among different processes
is called Memory Management.
Memory management is a method in the operating system to
manage operations between main memory and disk during
process execution.
The main aim of memory management is to achieve efficient
utilization of memory.
Why is Memory Management Required?
 To allocate and de-allocate memory before and after process
execution.
 To keep track of the memory space used by processes.
 To minimize fragmentation issues.
 To ensure proper utilization of main memory.
 To maintain data integrity during process execution.

What is a System Call?


A system call is a method for a computer program to request a
service from the kernel of the operating system on which it is
running.

A system call is a method of interacting with the operating
system via programs; it is a request from computer software to
the operating system's kernel.
The kernel can only be accessed using system calls.
System calls are required for any program that uses resources.

Why do you need system calls in an Operating System?

1. They are required when a file system wants to create or
delete a file.
2. Network connections require system calls for sending
and receiving data packets.
3. If you want to read or write a file, you need system calls.
4. If you want to access hardware devices, such as a printer
or scanner, you need a system call.
5. System calls are used to create and manage new
processes.
6. They allow system programs to exercise control over
hardware devices - for example, to set parameters or to read
status.
7. They enforce access controls and permissions to protect
system resources.
8. They are used for inter-process communication and
coordination.
9. They provide mechanisms for accessing status
information and configuration data about the system.
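The read/write case above can be made concrete: in Python, the functions in the `os` module are thin wrappers over the kernel's system calls, so a short file round-trip shows the request/service pattern. The file name used here is hypothetical.

```python
import os
import tempfile

# os.open / os.write / os.read / os.close map almost directly onto the
# open, write, read, and close system calls provided by the kernel.
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # request: create/open a file
os.write(fd, b"hello kernel")                              # request: write bytes to it
os.close(fd)                                               # request: release the descriptor

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                                    # request: read up to 100 bytes
os.close(fd)
os.remove(path)                                            # request: delete the file
print(data)                                                # → b'hello kernel'
```

Each call crosses from user mode into the kernel, which checks permissions and performs the operation on the program's behalf: exactly the intermediary role described above.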

Types of System Calls


There are commonly five types of system calls. These are
as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
1. Process Control
Process control system calls are used to direct processes. Examples include
creating, loading, executing, aborting, ending, and terminating a process.
Communication system calls are used for inter-process communication.
Examples include creating and deleting communication connections and
sending and receiving messages. Key activities involved in
communication are as follows:
o Setting Up Communication Links: OS connects two processes or
devices through a communication channel. The connection may be
performed using sockets, pipes, or shared memory; thus, it facilitates
easy communication between these two processes or devices.
o Closing Communication Links: OS dismantles or deletes a link if its
communication process is complete; resources are released, and the
system is not affected.
o Sending Messages: The OS ensures that a message or data sent by one
process or system reaches the target recipient, either locally or on a network.
o Receiving Messages: The receiving process gets the data sent by
another process. The OS manages the incoming messages and makes
sure that it is delivered to the correct target.
File Allocation Methods
The allocation methods define how the files are stored in
the disk blocks. There are three main disk space or file
allocation methods.
 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
The main idea behind these methods is to provide:
 Efficient disk space utilization.
 Fast access to the file blocks.

1. Contiguous Allocation
In contiguous allocation, files are assigned to a contiguous
area of secondary storage. A user specifies in advance the
size of the area needed to hold the file to be created.
If the desired amount of contiguous space is not available,
the file cannot be created.
The directory entry for each file records:
 Address of the starting block
 Length of the allocated portion
The file 'mail' in the following figure starts from block
19 with length = 6 blocks. Therefore, it occupies blocks 19,
20, 21, 22, 23, and 24.
Advantages:
 Both the Sequential and Direct Accesses are supported by
this. For direct access, the address of the kth block of the
file which starts at block b can easily be obtained as (b+k).
 This is extremely fast, since the number of seeks is
minimal because of the contiguous allocation of file blocks.
Disadvantages:
 This method suffers from both internal and external
fragmentation. This makes it inefficient in terms of memory
utilization.
 Increasing file size is difficult because it depends on the
availability of contiguous memory at a particular instance.
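The direct-access formula mentioned in the advantages (the kth block of a file starting at block b is at b+k) can be sketched as follows, using the 'mail' file from the figure; k is 0-indexed here.

```python
def contiguous_block(start, length, k):
    """Address of the kth block (0-indexed) of a file that begins at
    block `start` and occupies `length` contiguous blocks."""
    if not 0 <= k < length:
        raise IndexError("block index outside the file")
    return start + k   # direct access: just b + k, no pointer chasing

# The file 'mail' from the figure: starts at block 19, length 6 blocks.
print([contiguous_block(19, 6, k) for k in range(6)])
# → [19, 20, 21, 22, 23, 24]
```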
2. Linked List Allocation
In linked allocation, each file is a linked list of disk
blocks. These disk blocks may be scattered anywhere on
the disk, and a few bytes of each disk block contain the
address of the next block.
 The file allocation table holds a single entry per file:
the starting block and the length of the file.
 There is no external fragmentation.
 It works best for sequential files.

Advantages:
 This is very flexible in terms of file size. File size can be
increased easily since the system does not have to look for
a contiguous chunk of memory.
 This method does not suffer from external fragmentation.
This makes it relatively better in terms of memory
utilization.
Disadvantages:
 Because the file blocks are distributed randomly on the
disk, a large number of seeks are needed to access every
block individually. This makes linked allocation slower.
 It does not support random or direct access. We can not
directly access the blocks of a file.
 Pointers required in the linked allocation incur some extra
overhead.
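The pointer chasing behind linked allocation's slowness can be sketched with a toy disk, modeled here as a dictionary mapping block numbers to (data, next-block) pairs; the block numbers and contents are hypothetical.

```python
# Each disk block stores (data, next_block); -1 marks the end of the file.
disk = {9: ("mail-1", 16), 16: ("mail-2", 1), 1: ("mail-3", -1)}

def read_linked_file(disk, start):
    """Follow the next-block pointers from the starting block.
    Blocks may be scattered anywhere, so every step is a new seek,
    and block k can only be reached by visiting blocks 0..k-1 first."""
    blocks, data = [], []
    block = start
    while block != -1:
        blocks.append(block)
        content, block = disk[block]
        data.append(content)
    return blocks, data

blocks, data = read_linked_file(disk, start=9)
print(blocks)   # → [9, 16, 1]
```

Note there is no way to jump straight to the third block: the chain must be walked from the start, which is why linked allocation supports sequential but not direct access.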
3. Indexed Allocation
Each file is provided with its own index block, which
is an array of disk block pointers: the kth entry in the
index block points to the kth disk block of the file.
The file allocation table contains the block number of
the index block.
Indexed allocation solves the pointer-scattering problem
of linked allocation by bringing all the pointers together
into one location known as the index block.

Advantages:
 This supports direct access to the blocks occupied by the
file and therefore provides fast access to the file blocks.
 It overcomes the problem of external fragmentation.
Disadvantages:
 The pointer overhead for indexed allocation is greater than
linked allocation.
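Indexed allocation's direct access can be sketched in a few lines: one lookup in the index block, then one disk read. The block numbers and contents below are hypothetical.

```python
# The index block is an array of disk block pointers; the kth entry
# points to the kth data block of the file.
index_block = [4, 10, 25, 7]            # the file occupies these disk blocks
disk = {4: "b0", 10: "b1", 25: "b2", 7: "b3"}

def read_block(disk, index_block, k):
    """Direct access: one index lookup, one disk read, no chain walking."""
    return disk[index_block[k]]

print(read_block(disk, index_block, 2))  # → b2
```

Compare with linked allocation: reaching block 2 needs no traversal of blocks 0 and 1, but the index block itself is pure pointer overhead, which is the disadvantage noted above.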

 What is Virtual Memory


 Virtual Memory is a storage scheme that provides user an
illusion of having a very big main memory. This is done by
treating a part of secondary memory as the main memory.
 The History of Virtual Memory
 Before virtual memory, computers used RAM and secondary
memory for data storage. Early computers used magnetic
core memory in place of main memory and magnetic drums
in place of secondary memory. In the 1940s and 1950s,
computer memory was very expensive and limited in size.
 Users can load processes bigger than the available
main memory, under the illusion that enough memory is
available to load the process.
 Instead of loading one big process in the main memory, the
Operating System loads the different parts of more than
one process in the main memory.
There are two main types of virtual memory:
 Paging
 Segmentation

 Paging
 Paging divides memory into small fixed-size blocks called
pages. When the computer runs out of RAM, pages that
aren’t currently in use are moved to the hard drive, into an
area called a swap file. The swap file acts as an extension
of RAM.
Segmentation
Segmentation divides virtual memory into segments of different
sizes. Segments that aren’t currently needed can be moved to
the hard drive. The system uses a segment table to keep track of
each segment’s status, including whether it’s in memory, if it’s
been modified, and its physical address.
Benefits of Virtual Memory:
1. Increased Memory Space:
o Programs can run as if there’s more RAM than
physically available.
o Allows large applications to execute even on systems
with limited physical RAM.
2. Multi-tasking:
o Enables the system to run multiple programs
simultaneously by allocating memory dynamically and
efficiently.
3. Isolation and Protection:
o Prevents applications from interfering with each other
by isolating their memory spaces.
o If one program crashes, it won’t affect others directly.
4. Efficient Memory Use:
o Only frequently accessed data stays in RAM; less-used
data resides in the disk.
5. Cost Efficiency:
o Instead of upgrading RAM, virtual memory allows you
to use existing disk space for temporary memory
needs.
6. Support for Complex Programs:
o Virtual memory enables large programs (e.g., image
editing software, databases) to operate on systems
with limited hardware.

Banker's Algorithm in Operating

System (OS)
The Banker's Algorithm is used to avoid deadlocks
while dealing with the safe allocation of resources
to processes in a computer system.
It grants resources to a process only when doing so
is safe, and thereby avoids resource allocation
conflicts.
Overview of Banker's Algorithm
o The safe-state ('S-state') check examines all possible
tests or activities before determining whether a resource
should be allocated to any process.
o It enables the operating system to share resources among
all the processes without creating deadlock or any critical
situation.
o The algorithm is so named based on banking operations
because it mimics the process through which a bank
determines whether or not it can safely make loan
approvals.
Real Life Example
Imagine having a bank with T amount of money and n
account holders. At some point, whenever one of the
account holders requests a loan:
o The bank withdraws the requested amount of cash from the
total amount available for any further withdrawals.
o The bank checks if the cash that is available for withdrawal
will be enough to cater to all future requests/withdraws.
o If there is enough money available (that is, the remaining
cash is sufficient to satisfy all future maximum
requests), the bank grants the loan.
o This ensures that the bank will not suffer operational
problems when it receives subsequent applications.
Banker's Algorithm in Operating Systems
Likewise, in an operating system:
o When a new process is created, it needs to provide all the
vital information, such as which processes are scheduled to
run shortly, resource requests, and potential delays.
o This knowledge helps the OS decide which sequence of
process executions needs to proceed to avoid any deadlock.
o Since the order of executions that should occur in order to
prevent deadlocks is defined, the Banker's Algorithm is
usually considered a deadlock avoidance or a deadlock
detection algorithm in OS
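The safety check at the heart of the Banker's Algorithm can be sketched as follows: the system is safe if some order exists in which every process can obtain its maximum need, finish, and return its resources. The example matrices below are hypothetical.

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check: True if every process can eventually
    obtain its maximum need and run to completion."""
    n = len(max_need)
    work = list(available)            # resources currently free
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for p in range(n):
            need = [m - a for m, a in zip(max_need[p], allocation[p])]
            if not finished[p] and all(nd <= w for nd, w in zip(need, work)):
                # p can finish and return everything it holds
                work = [w + a for w, a in zip(work, allocation[p])]
                finished[p] = True
                progress = True
    return all(finished)

# Hypothetical system: one resource type, 10 units total, 3 free.
available = [3]
max_need = [[7], [5], [4]]      # maximum claim of P0, P1, P2
allocation = [[2], [3], [2]]    # units currently held
print(is_safe(available, max_need, allocation))   # → True (order P1, P2, P0)
```

A request is granted only if the state after the (pretend) grant still passes this check; otherwise the process waits, which is exactly the bank's lending rule described above.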

Advantages
1. It contains various resources that meet the requirements of
each process.
2. Each process should provide information to the operating
system for upcoming resource requests, the number of
resources, and how long the resources will be held.
3. It helps the operating system manage and control process
requests for each type of resource in the computer system.
4. The algorithm has a Max attribute that indicates the
maximum number of resources each process can hold
in the system.
5. It means that in the Banker's Algorithm, the resources are
granted only if there is no possibility of a deadlock when
those resources are to be assigned. Thus, it ensures that
the system runs at optimal performance.
6. The algorithm allows one to avoid the pointless holding of
resources by any process since the algorithm actually
checks whether the granting of resources is feasible or not.

Disadvantages
1. It requires a fixed number of processes, and no additional
processes can be started in the system while executing the
process.
2. The algorithm does not allow processes to
change their maximum needs while processing their tasks.
3. Each process has to know and state their maximum
resource requirement in advance for the system.
4. Resource requests are guaranteed to be granted in a finite
time, but there is no fixed bound on how long a process
may have to wait.
5. It could be pretty intricate to manage the algorithm, which
is especially known in the case of systems with a vast
quantity of processes and resources. This, consequently,
translates to increased overhead.

What is Deadlock in Operating


System (OS)?
A deadlock is a situation where each of a set of
processes waits for a resource that is assigned to
some other process.
In this situation, none of the processes gets executed,
since the resource each one needs is held by another
process that is itself waiting for some other resource
to be released.
Every process needs some resources to complete its
execution, and resources are granted in a sequential
order:
1. The process requests a resource.
2. The OS grants the resource if it is available; otherwise
the process waits.
3. The process uses the resource and releases it on completion.
Key concepts include mutual exclusion, hold and wait,
no preemption, and circular wait.
How Does Deadlock occur in the Operating System?
A process in an operating system uses resources in the
following way.
 Requests a resource
 Use the resource
 Releases the resource
A situation occurs in operating systems when there are two
or more processes that hold some resources and wait for
resources held by other(s). For example, in the below
diagram, Process 1 is holding Resource 1 and waiting for
resource 2 which is acquired by process 2, and process 2 is
waiting for resource 1.
Examples of Deadlock
There are several examples of deadlock. Some of them are
mentioned below.
1. The system has 2 tape drives. P0 and P1 each hold one
tape drive and each needs another one.
2. Semaphores A and B, initialized to 1; P0 and P1 enter
deadlock as follows:
 P0 executes wait(A) and is then preempted.
 P1 executes wait(B).
 Now P0 executes wait(B) and P1 executes wait(A);
P0 and P1 are in deadlock.

P0          P1
wait(A);    wait(B);
wait(B);    wait(A);
3. Assume 200 KB of space is available for allocation,
and the following sequence of events occurs:

P0              P1
Request 80 KB;  Request 70 KB;
Request 60 KB;  Request 80 KB;
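The circular wait in the examples above can be detected mechanically by looking for a cycle in the wait-for graph (which process waits for which). A minimal sketch, with the graph represented as a simple dictionary:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as
    {process: process_it_waits_for}."""
    for start in wait_for:
        seen = set()
        p = start
        while p in wait_for:      # follow the chain of waits
            if p in seen:
                return True       # came back around: circular wait
            seen.add(p)
            p = wait_for[p]
    return False

# P0 holds A and waits for P1's resource; P1 holds B and waits for P0's.
print(has_deadlock({"P0": "P1", "P1": "P0"}))   # → True
# A simple chain of waits with no cycle is not a deadlock.
print(has_deadlock({"P0": "P1"}))               # → False
```

This assumes each process waits for at most one other process; real detectors generalize this to a full resource-allocation graph.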

Handling Deadlocks
Deadlock is a situation where a process or a set of
processes is blocked, waiting for some other resource that
is held by some other waiting process. It is an undesirable
state of the system.
In other words, Deadlock is a critical situation in computing
where a process, or a group of processes, becomes unable
to proceed because each is waiting for a resource that is
held by another process in the same group.
Strategies for handling Deadlock
1. Deadlock Ignorance
Deadlock Ignorance is the most widely used approach
among all the mechanisms. It is used by many
operating systems, mainly for end-user machines. In this
approach, the operating system assumes that deadlock
never occurs; it simply ignores deadlock. This approach is
best suited for a single end-user system where the user
uses the system only for browsing and other normal tasks.
2. Deadlock prevention
Deadlock happens only when mutual exclusion, hold and
wait, no preemption, and circular wait hold simultaneously.
If it is possible to violate one of these four conditions at
any time, then deadlock can never occur in the system.
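One standard way to violate the circular-wait condition is to impose a global ordering on resources and always acquire them in that order. A sketch of the idea, assuming hypothetical resources "A" and "B" with fixed ranks:

```python
import threading

# Global ordering: every thread must lock resources in ascending rank.
RANK = {"A": 1, "B": 2}
lock_a, lock_b = threading.Lock(), threading.Lock()

def acquire_in_order(*named_locks):
    """Sort the requested locks by their global rank before acquiring,
    so no circular wait can ever form between threads."""
    ordered = sorted(named_locks, key=lambda nl: RANK[nl[0]])
    for _, lock in ordered:
        lock.acquire()
    return [name for name, _ in ordered]

# Even if a thread asks for (B, A), it actually locks A first, then B.
order = acquire_in_order(("B", lock_b), ("A", lock_a))
print(order)   # → ['A', 'B']
lock_a.release(); lock_b.release()
```

Because every thread climbs the ranks in the same direction, no two threads can each hold a lock the other wants, so the circular-wait condition from the list above can never hold.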
3. Deadlock avoidance
In deadlock avoidance, the operating system checks
whether the system is in a safe state or an unsafe state
at every step it performs.
Processing continues as long as the system stays in a
safe state; if a step would move the system to an unsafe
state, the OS has to backtrack that step.
In simple words, the OS reviews each allocation so that the
allocation does not cause a deadlock in the system.
We will discuss Deadlock avoidance later in detail.
4. Deadlock detection and recovery
This approach lets processes fall into deadlock and then periodically
checks whether a deadlock has occurred in the system.
If it has, the OS applies one of the recovery methods to the system to
get rid of the deadlock.
Paging in Memory Management
Paging is a memory management scheme that eliminates
the need for a contiguous allocation of physical memory.
The process of retrieving processes in the form of pages from
secondary storage into main memory is known as paging. The basic
purpose of paging is to divide each process into pages; the main
memory is likewise split into frames. This scheme permits the physical
address space of a process to be non-contiguous.
In paging, the physical memory is divided into fixed-size
blocks called page frames, which are the same size as the
pages used by the process. The process’s logical address
space is also divided into fixed-size blocks called pages,
which are the same size as the page frames. When a
process requests memory, the operating system allocates
one or more page frames to the process and maps the
process’s logical pages to the physical page frames.
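The page-to-frame mapping amounts to a simple address translation; a sketch (the page size and table entries are made-up example values):

```python
PAGE_SIZE = 4096                     # bytes per page / frame (example value)
page_table = {0: 5, 1: 2, 2: 9}      # page number -> frame number (example mapping)

def translate(logical_addr):
    """Split a logical address into (page, offset) and map it to a
    physical address using the page table."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault: page {page} is not in memory")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4097))   # page 1, offset 1 -> frame 2 -> 8193
```

Only the page number is translated; the offset within the page is carried over unchanged.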
Principles of Protection
 The principle of least privilege dictates that programs,
users, and systems be given just enough privileges to
perform their tasks.
 This ensures that failures cause the least possible harm.
 Typically each user is given their own account, and has only
enough privilege to modify their own files.
 Protection is especially important in a multiuser environment, when
multiple users share computer resources such as CPU and memory.
It is the operating system's responsibility to offer a
mechanism that protects each process from other
processes.
In a multiuser environment, all assets that require
protection are classified as objects, and those that wish to
access these objects are referred to as subjects.
The operating system grants different 'access rights' to
different subjects.
Need of Protection in Operating System
Various needs of protection in the operating system are as
follows:
1. There may be security risks like unauthorized reading,
writing, modification, or preventing the system from
working effectively for authorized users.
2. It helps to ensure data security, process security, and
program security against unauthorized user access or
program access.
3. It is important to ensure there are no breaches of access rights,
no viruses, and no unauthorized access to existing data.
4. Its purpose is to ensure that access to programs, resources, and
data is governed only by the system's policies.
Access matrix
The Access Matrix is a conceptual framework used in
operating systems to model and manage the permissions
and rights of different users (subjects) to access various
system resources (objects).
It provides a way to enforce security policies and control
access to resources in a system. Let's break it down step by
step:

1. Definition:
The Access Matrix is a two-dimensional table where:
 Rows represent subjects (users, processes, or programs).
 Columns represent objects (files, directories, devices, etc.).
 Each cell in the matrix specifies the set of operations that a
subject can perform on an object.

2. Structure of the Access Matrix:
The Access Matrix is structured as follows:

Objects →             File 1          File 2       Printer     ...
Subject 1 (User A)    Read, Write     Read         Print
Subject 2 (User B)    Execute         No Access    Print
Subject 3 (Process X) Read, Execute   Write        No Access
 Each cell contains a set of access rights, such as:
o Read: The subject can read the object.
o Write: The subject can modify the object.
o Execute: The subject can execute the object if it’s a
program.
o Print: The subject can use a printer object.
o No Access: No permissions are granted.

3. Key Components:
 Subjects: Entities that request access to resources (e.g.,
users, processes, programs).
 Objects: Resources in the system (e.g., files, devices,
memory segments).
 Access Rights: Permissions defining what actions subjects
can perform on objects.

4. Advantages of the Access Matrix:


1. Simplicity: The matrix provides a clear, visual
representation of access rights.
2. Flexibility: It can define complex access policies for
multiple users and resources.
3. Security: It ensures that only authorized subjects can
access specific objects.
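The matrix above can be modelled directly as a nested mapping, and an access check becomes a set lookup. A sketch (subject/object names mirror the example table):

```python
# Subjects map to objects, which map to the set of allowed operations.
access_matrix = {
    "UserA":    {"File1": {"read", "write"},   "File2": {"read"},  "Printer": {"print"}},
    "UserB":    {"File1": {"execute"},         "File2": set(),     "Printer": {"print"}},
    "ProcessX": {"File1": {"read", "execute"}, "File2": {"write"}, "Printer": set()},
}

def allowed(subject, obj, op):
    """True if the matrix cell for (subject, obj) contains the operation."""
    return op in access_matrix.get(subject, {}).get(obj, set())

print(allowed("UserA", "File1", "write"))   # True
print(allowed("UserB", "File2", "read"))    # False
```

Real systems usually store this matrix by column (access-control lists per object) or by row (capability lists per subject) rather than as one dense table.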

What is Demand Paging?


Demand paging is a technique used in virtual memory
systems where pages enter main memory only when
requested or needed by the CPU.
In demand paging, the operating system loads only the
necessary pages of a program into memory at runtime,
instead of loading the entire program into memory at the
start.

The operating system then loads the required pages


from the disk into memory and updates the page tables
accordingly.
This process is transparent to the running program and it
continues to run as if the page had always been in memory.
A demand-paging system is similar to a paging system with swapping, in
which processes reside mainly in secondary memory (usually on a hard
disk) and are brought into main memory only when needed.
As a result, demand paging addresses the problem above simply by
loading pages on demand.

What is Page Fault?


The term “page miss” or “page fault” refers to a situation where
a referenced page is not found in the main memory.
A page fault occurs when a program needs to
access a page that is not currently in memory.
In modern operating systems, page faults are a common
component of virtual memory management.
When a program tries to access a page, or fixed-size
block of memory, that isn’t currently loaded in physical
memory (RAM), an exception known as a page fault
happens.
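Page faults are easy to count with a small simulation. The sketch below uses FIFO replacement (just one possible policy) on a classic reference string:

```python
from collections import deque

def count_page_faults(reference_string, n_frames):
    """Count page faults under FIFO page replacement."""
    frames = deque()             # oldest resident page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:   # page fault: the page is not in memory
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()         # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_page_faults(refs, 3))   # 9
print(count_page_faults(refs, 4))   # 10 -- Belady's anomaly: more frames, more faults
```

This reference string also demonstrates Belady's anomaly: under FIFO, adding a frame can increase the number of faults.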

What is Synchronous Transmission?


Synchronous data transmission is a type of data transfer that
carries a frequent stream of data in the form of signals along
with timing signals generated by an electric clock that ensures
the synchronization of the sender and receiver. Synchronous
transmission allows data to be transmitted in fixed intervals in
the form of frames or blocks.
What is Asynchronous Transmission?
Asynchronous transmission is a type of data transmission which
works on start and stop bits. In Asynchronous transmission, each
character contains its start and stop bit and irregular interval of
time between them.
Difference Between Synchronous and Asynchronous
Transmission
1. In synchronous transmission, a common clock is shared by the
transmitter and receiver to achieve synchronisation during data
transmission. In asynchronous transmission, each character contains
its own start and stop bits.
2. In synchronous transmission, data is sent in frames or blocks. In
asynchronous transmission, data is sent in the form of bytes or
characters.
3. Synchronous transmission is faster, as a common clock is shared by
the sender and receiver. Asynchronous transmission is slower, as each
character has its own start and stop bit.
4. Synchronous transmission is costlier. Asynchronous transmission is
cheaper.
5. Synchronous transmission is easy to design. Asynchronous
transmission is complex.
6. In synchronous transmission there is no gap between the data, as
sender and receiver share a common clock. In asynchronous transmission
there is a gap between the data due to the start and stop bits.
How are I/O requests transferred to a hardware device?

1. I/O Request Initiation (User Level):
 Applications issue I/O requests via system calls (e.g.,
read(), write(), etc.).
 These requests are passed to the OS's kernel to handle the
underlying hardware details.
2. OS Kernel Processing:
 The kernel checks permissions and translates the high-level
I/O request into hardware-specific commands.
 It determines which device the request is meant for and
selects the appropriate device driver.
3. Device Driver:
 A device driver is a software module that acts as an
intermediary between the OS and hardware devices.
 The driver translates the generic I/O request into device-
specific commands (e.g., setting registers or sending
data).
4. I/O Scheduler:
 The OS may use an I/O scheduler to optimize the order of
requests, improving efficiency (e.g., by reducing seek time
for disk drives).
 This step is critical in multitasking environments where
multiple I/O requests may compete for the same device.
5. Communication with the Hardware:
 The device driver communicates with the hardware
through:
o Memory-Mapped I/O (MMIO): The device is mapped
to specific memory addresses, and the OS interacts
with it by reading/writing to these addresses.
o Port-Mapped I/O (PIO): Special CPU instructions
(e.g., IN/OUT on x86) are used to communicate with
the device through I/O ports.
6. Interrupts and Polling:
 Once the device receives the command, it processes the
request and notifies the OS:
o Interrupts: The device sends an interrupt signal to
the CPU, which temporarily pauses current operations
to handle the I/O.
o Polling: The OS continuously checks the device status
to determine if the operation is complete (less efficient
than interrupts).
7. Direct Memory Access (DMA):
 For large data transfers, the OS may use DMA. The DMA
controller directly transfers data between the device and
memory, bypassing the CPU and freeing it for other tasks.
8. Completion and Return:
 Once the device finishes the operation, the driver updates
the kernel, and the kernel notifies the application.
 The I/O request is marked as complete, and the result (e.g.,
data read or status) is returned to the application.
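Steps 1 and 8 are what user code actually sees: the application issues the system calls at the top and receives the completed result at the bottom; everything in between is handled by the kernel and driver. A self-contained sketch using a temporary file:

```python
import os
import tempfile

# Create a throwaway file so the example is self-contained.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello from the device")   # write(): the request enters the kernel
os.close(fd)

fd = os.open(path, os.O_RDONLY)   # open(): the kernel resolves the file/device
data = os.read(fd, 4096)          # read(): kernel -> driver -> hardware -> back
os.close(fd)
os.unlink(path)
print(data)   # b'hello from the device'
```

The scheduling, driver dispatch, DMA, and interrupt handling in steps 2-7 are invisible to the application.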

Monitors
Monitors are a programming-language construct
that aids in regulating access to shared data.
The Monitor is a package that contains shared data
structures, operations, and synchronization between
concurrent procedure calls.
Therefore, a monitor is also known as a
synchronization tool. Java, C#, Visual Basic, Ada, and
concurrent Euclid are among some of the languages
that allow the use of monitors.
Processes operating outside the monitor can't
access the monitor's internal variables, but they can
call the monitor's procedures.
Characteristics of Monitors in OS
A monitor in OS has the following characteristics:
 We can only run one program at a time inside the
monitor.
 Monitors in an operating system are defined as a
group of methods and fields that are combined with
a special type of package in the OS.
 A program cannot access the monitor's internal
variable if it is running outside the monitor. However,
a program can call the monitor's functions.
 Monitors were created to make synchronization
problems less complicated.
 Monitors provide a high level of synchronization
between processes.
Components of Monitor in an Operating System
The monitor is made up of four primary parts:
1. Initialization: The code for initialization is included in
the package, and we need it only once, when creating
the monitor.
2. Private Data: It is a feature of the monitor in an
operating system to make the data private. It holds
all of the monitor's private data, which includes
private functions that may only be used within the
monitor. As a result, private fields and functions are
not visible outside of the monitor.
3. Monitor Procedure: Procedures or functions that can
be invoked from outside of the monitor are known
as monitor procedures.
4. Monitor Entry Queue: The entry queue holds all of the
threads that are waiting to enter the monitor, since
only one can be active inside it at a time.
Monitors in Process Synchronization
Monitors are a higher-level synchronization construct that
simplifies process synchronization by providing a high-level
abstraction for data access and synchronization.
Monitors are implemented as programming language
constructs, typically in object-oriented languages, and
provide mutual exclusion, condition variables, and data
encapsulation in a single construct.
1. A monitor is essentially a module that encapsulates a
shared resource and provides access to that resource
through a set of procedures.
2. The procedures provided by a monitor ensure that only one
process can access the shared resource at any given time,
and that processes waiting for the resource are suspended
until it becomes available.
3. Monitors are used to simplify the implementation of
concurrent programs by providing a higher-level abstraction
that hides the details of synchronization.
4. Monitors provide a structured way of sharing data and
synchronization information, and eliminate the need for
complex synchronization primitives such as semaphores
and locks.
5. The key advantage of using monitors for process
synchronization is that they provide a simple, high-level
abstraction that can be used to implement complex
concurrent systems.
6. Monitors also ensure that synchronization is encapsulated
within the module, making it easier to reason about the
correctness of the system.
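A monitor can be approximated in Python with a single lock plus condition variables, as in this bounded-buffer sketch (Python has no monitor keyword, so the encapsulation is by convention rather than enforced by the language):

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style bounded buffer: one lock guards the shared data,
    and condition variables let callers wait for 'not full' / 'not empty'."""
    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity
        self.lock = threading.Lock()                  # monitor mutual exclusion
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:                               # enter the monitor
            while len(self.buf) == self.capacity:
                self.not_full.wait()                  # release lock and sleep
            self.buf.append(item)
            self.not_empty.notify()                   # wake a waiting consumer

    def get(self):
        with self.lock:
            while not self.buf:
                self.not_empty.wait()
            item = self.buf.popleft()
            self.not_full.notify()                    # wake a waiting producer
            return item

bb = BoundedBuffer(2)
bb.put("a")
bb.put("b")
print(bb.get(), bb.get())   # a b
```

Callers never touch `buf` directly; all access goes through the monitor's procedures, exactly as the points above describe.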
Dining-Philosophers Solution Using Monitors
Prerequisite: Monitor, Process Synchronization
Dining-Philosophers Problem – N philosophers are seated
around a circular table.
 There is one chopstick between each pair of philosophers.
 A philosopher must pick up their two nearest chopsticks
in order to eat.
 A philosopher must pick up one chopstick first and then
the second one, not both at once.
Solution Using Monitor
The monitor manages:
1. The states of philosophers (thinking, hungry, or
eating).
2. Condition variables to allow philosophers to wait
until the required forks are available.
Steps in the Solution
1. Initialization:
o Each philosopher starts in the THINKING state.
o Each philosopher has an associated condition
variable (self[i]) to wait for forks.
2. Pickup Forks:
o The philosopher sets their state to HUNGRY.
o The test method checks if both adjacent
philosophers are not eating. If forks are
available, the philosopher starts eating.
o If forks are not available, the philosopher waits
on their condition variable.
3. Put Down Forks:
o The philosopher sets their state back to
THINKING.
o The test method is called for the left and right
neighbors to see if they can now eat.
4. Test Method:
o Ensures that a philosopher can eat only if
neither of their neighbors is eating.
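The steps above can be sketched as a monitor-style class: one lock provides mutual exclusion, and each philosopher has a condition variable (the self[i] of the description, here `self_cv[i]`):

```python
import threading

THINKING, HUNGRY, EATING = 0, 1, 2

class DiningMonitor:
    def __init__(self, n):
        self.n = n
        self.state = [THINKING] * n
        self.lock = threading.Lock()                  # monitor mutual exclusion
        self.self_cv = [threading.Condition(self.lock) for _ in range(n)]

    def _test(self, i):
        # Philosopher i may eat only if neither neighbour is eating.
        left, right = (i - 1) % self.n, (i + 1) % self.n
        if (self.state[i] == HUNGRY and
                self.state[left] != EATING and self.state[right] != EATING):
            self.state[i] = EATING
            self.self_cv[i].notify()

    def pickup(self, i):
        with self.lock:
            self.state[i] = HUNGRY
            self._test(i)
            while self.state[i] != EATING:    # wait until both forks are free
                self.self_cv[i].wait()

    def putdown(self, i):
        with self.lock:
            self.state[i] = THINKING
            self._test((i - 1) % self.n)      # a neighbour may now be able to eat
            self._test((i + 1) % self.n)
```

Non-neighbouring philosophers can eat concurrently; a hungry neighbour of an eating philosopher waits on its condition variable until `putdown` re-tests it.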

User-Level Thread
The User-level Threads are implemented by the
user-level software.
These threads are created and managed by the
thread library, which the operating system
provides as an API for creating, managing, and
synchronizing threads.
They are faster than kernel-level threads and are
basically represented by a program counter,
stack, registers, and a PCB.
User-level threads are typically employed in
scenarios where fine control over threading is
necessary, but the overhead of kernel threads is
not desired.
They are also useful in systems that lack native
multithreading support, allowing developers to
implement threading in a portable way.
Examples – user-thread libraries include POSIX
Pthreads and Mach C-threads.
Advantages of User-Level Threads
 Quick and easy to create: User-level threads can be
created and managed more rapidly.
 Highly portable: They can be implemented across
various operating systems.
 No kernel mode privileges required: Context
switching can be performed without transitioning to
kernel mode.
Disadvantages of User-Level Threads
 Limited use of multiprocessing: Multithreaded
applications may not fully exploit multiple
processors.
 Blocking issues: A blocking operation in one thread
can halt the entire process.
Kernel-Level Thread
Threads are the units of execution within an
operating system process.
The OS kernel is responsible for generating,
scheduling, and overseeing kernel-level threads
since it controls them directly.
Kernel-level threads are handled directly by
the OS; all thread management is done by the
kernel.
Each kernel-level thread has its own context,
including information about the thread’s status,
such as its name, group, and priority.
Examples – kernel-level threads include Java
threads and POSIX threads on Linux.
Advantages of Kernel-Level Threads
 True parallelism: Kernel threads allow real parallel
execution in multithreading.
 Execution continuity: Other threads can continue to
run even if one is blocked.
 Access to system resources: Kernel threads have
direct access to system-level features, including I/O
operations.
Disadvantages of Kernel-Level Threads
 Management overhead: Kernel threads take more
time to create and manage.
 Kernel mode switching: Requires mode switching to
the kernel, adding overhead.

Thread Scheduling
Scheduling of threads involves two boundary
scheduling.
1. Scheduling of user-level threads (ULT) to kernel-level
threads (KLT) via lightweight process (LWP) by the
application developer.
2. Scheduling of kernel-level threads by the system
scheduler to perform different unique OS functions.
User-level thread
The operating system does not recognize the user-level thread.
User threads can be easily implemented and it is implemented by the user.
If a user-level thread performs a blocking operation, the whole
process is blocked.
The kernel knows nothing about user-level threads.

Advantages of User-level threads


1. The user threads can be easily implemented than the
kernel thread.
2. User-level threads can be applied to such types of
operating systems that do not support threads at the
kernel-level.
3. It is faster and more efficient.
4. Context switch time is shorter than the kernel-level
threads.
5. It does not require modifications of the operating
system.
Disadvantages of User-level threads
1. User-level threads lack coordination between the
thread and the kernel.
2. If a thread causes a page fault, the entire process is
blocked.

Kernel level thread


The operating system recognizes kernel threads.
There is a thread control block and a process control
block in the system for each thread and process in
the kernel-level threading model.
The kernel-level thread is implemented by the
operating system. The kernel knows about all the
threads and manages them.
Examples: Windows, Solaris.
Advantages of Kernel-level threads
1. The kernel-level thread is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a
process that has a large number of threads.
3. Kernel-level threads are good for applications
that block frequently.
Disadvantages of Kernel-level threads
1. The kernel thread manages and schedules all
threads.
2. The implementation of kernel threads is more difficult
than that of user threads.
3. The kernel-level thread is slower than user-level
threads.
Components of Threads
1. Program counter
2. Register set
3. Stack space
Benefits of Threads
o Enhanced throughput of the system: When the process
is split into many threads, and each thread is treated as a
job, the number of jobs done in the unit time increases.

o Effective Utilization of Multiprocessor system: When


you have more than one thread in one process, you
can schedule more than one thread in more than one
processor.
o Faster context switch: The context switching period
between threads is less than the process context
switching.
o Responsiveness: When the process is split into
several threads, and when a thread completes its
execution, that process can be responded to as soon
as possible.
o Communication: Communication between multiple threads is
simple because the threads share the same address
space, while processes must adopt special
inter-process communication strategies to
communicate with each other.
o Resource sharing: Resources can be shared between
all threads within a process, such as code, data, and
files. Note: The stack and registers cannot be shared
between threads; each thread has its own.
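Several of these benefits (shared address space, simple communication) show up in a few lines of threaded code:

```python
import threading

results = []                     # shared data: visible to all threads
lock = threading.Lock()          # protects the shared list

def worker(n):
    # Every thread sees the same 'results' list because threads of one
    # process share the same address space; the lock keeps updates safe.
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))   # [0, 1, 4, 9]
```

No pipes, sockets, or shared-memory segments are needed, unlike communication between separate processes.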

Process Management in Linux


A process means program in execution. It generally
takes an input, processes it and gives us the
appropriate output. Check Introduction to Process
Management for more details about a process. There
are basically 2 types of processes.
1. Foreground processes: Such kind of processes are
also known as interactive processes.
These are the processes which are to be executed or
initiated by the user or the programmer, they can not
be initialized by system services.
Such processes take input from the user and return
the output.
While these processes are running we can not directly
initiate a new process from the same terminal.
2. Background processes: Such kind of processes are
also known as non interactive processes.
These are the processes that are to be executed or
initiated by the system itself or by users, though
they can even be managed by users.
These processes have a unique PID (process ID)
assigned to them, and we can initiate other processes
within the same terminal from which they are
initiated.

Windows
The first version of Windows, released in 1985, was simply
a GUI offered as an extension of Microsoft's existing disk
operating system, or MS-DOS.
Based in part on licensed concepts that Apple Inc. had used
for its Macintosh System Software, Windows for the first
time allowed DOS users to visually navigate a virtual
desktop, opening graphical "windows" displaying the
contents of electronic folders and files with the click of
a mouse button, rather than typing commands and directory
paths at a text prompt.
What is UNIX?
UNIX is a multi-user, multitasking operating system
invented in the late 1960s at AT&T's Bell Laboratories.
It was intended to be a strong, reliable, and versatile
system, and it was initially targeted at servers,
workstations, and academic systems.
What is Windows?
Windows is one of the fastest operating systems
developed and marketed by Microsoft. The first
Windows was launched in 1985. Windows supports a
GUI-based system that is user-friendly even for people
with little or no knowledge of computers.

Parameters: UNIX vs Windows

Basic: UNIX is a command-based operating system. Windows is a
menu-based operating system.

Licensing: UNIX is an open-source system which can be used under the
General Public License. Windows is proprietary software owned by
Microsoft.

User Interface: UNIX has a text-based interface, making it harder to
grasp for newcomers. Windows has a Graphical User Interface, making
it simpler to use.

Processing: UNIX supports multiprocessing. Windows supports
multithreading.

File System: UNIX uses the Unix File System (UFS), which comprises
the STD.ERR and STD.IO file systems. Windows uses the File Allocation
Table (FAT32) and New Technology File System (NTFS).

Security: UNIX is more secure, as all changes to the system require
explicit user permission. Windows is less secure compared to UNIX.

Data Backup & Recovery: It is tedious to create a backup and recovery
system in UNIX, but it is improving with the introduction of new
distributions of Unix. Windows has an integrated backup and recovery
system that makes it simpler to use.

Hardware: Hardware support is limited in the UNIX system; some
hardware might not have drivers built for it. In Windows, drivers
are available for almost all hardware.

Reliability: Unix and its distributions are well known for being very
stable to run. Although Windows has been stable in recent years, it
has yet to match the stability provided by Unix systems.

Case Sensitivity: UNIX is fully case-sensitive, and files with the
same name in different cases can be considered separate files.
Windows has case sensitivity as an option.

Virtual File System


The virtual file system is one of the best features of
Linux that makes Linux an operating system of
choice.
Format a pen drive in any Linux-flavor operating system
and try to view its contents in Windows: Windows does
not show anything.
Now try the reverse, i.e. format a pen drive in Windows,
store some files on it, and then try to view those files
through any Linux operating system.
We will be able to see the contents, and we will also be
able to open and read the files easily.
Two mapping functions are used that transform the
characteristics of the real file system to the
characteristics required by the VFS.
Remote File System (RFS) in File Management
Files can be shared across the network via a variety of
methods:
 Using FTP, i.e., the File Transfer Protocol, to
transfer files from one computer to another.
 Using a distributed file system (DFS), in which remote
directories are visible from the local machine.
 Using a Remote File System (RFS), in which the arrival
of networks has allowed communication between
remote computers.
 These networks allow various hardware and
software resources to be shared throughout the
world.
Remote file sharing (RFS) is a type of distributed file
system technology. It was developed in 1980 by
AT&T.
Later, it was delivered with UNIX System version V
(five) release 3 (SVR3). It enables file and/or data
access to multiple remote users over the Internet or
a network connection.
It is also known as a general process of providing
remote user access to locally stored files and/or
data.

File System Mounting (Disk Mounting)


Mounting is a process in which the operating system
adds the directories and files from a storage device
to the user’s computer file system.
The file system is attached to an empty directory, by
adding so the system user can access the data that
is available inside the storage device through the
system file manager.
Storage devices can be internal hard disks, external
hard disks, USB flash drives, SSDs, memory
cards, network-attached storage devices, CDs and
DVDs, remote file systems, or anything else.
Overview of Disk Mounting
 Mount Point: The location in the directory hierarchy
where the disk is attached (e.g., /mnt/disk1 or
/media/usb in Linux).
 File System: The format used to organize and store
files on the disk (e.g., NTFS, FAT32, ext4).
 Mounting Process:
o The operating system reads the file system's
metadata.
o It attaches the file system to the specified
mount point.
o Files and directories on the disk are now
accessible.
Different modules
File System Interface Module
Space Management Module
File Allocation Module
Access Control and Security Module
File Organization and Mapping Module
I/O Control Module

What is a Hard Disk Drive?


The disk is divided into tracks. Each track
is further divided into sectors.
The point to be noted here is that the outer
tracks are bigger than the inner tracks but
they contain the same number of sectors
and have equal storage capacity.
Hard disk drives (HDDs) serve as secondary
storage, providing a non-volatile medium
for data.
Some important terms must be noted
here:
1. Seek time – The time taken by the R/W
head to reach the desired track from
its current position.
2. Rotational latency – The time taken by
the desired sector to come under the R/W head.
3. Data transfer time – The time taken to
transfer the required amount of data. It
depends upon the rotational speed.
4. Controller time – The processing
time taken by the controller.
5. Average Access time – seek time +
Average Rotational latency + data
transfer time + controller time.
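Putting the terms together with made-up (hypothetical) figures for a 7200-RPM drive:

```python
# All figures below are hypothetical example values, not from the text.
seek_ms = 8.0                            # average seek time
rpm = 7200
avg_rotational_ms = (60_000 / rpm) / 2   # half a revolution on average
transfer_ms = 0.5                        # time to move the requested data
controller_ms = 0.1                      # controller processing time

avg_access_ms = seek_ms + avg_rotational_ms + transfer_ms + controller_ms
print(round(avg_access_ms, 2))           # 12.77
```

The rotational term dominates after seek time, which is why disk schedulers work so hard to reduce head movement.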

What is Multiple-Processor Scheduling?

In systems containing more than one
processor, multiple-processor scheduling
addresses the allocation of tasks to multiple CPUs.
It also involves determining which CPU handles
a particular task and balancing the load
across the available processors.
What is CPU Scheduling?
It is the mechanism through which an
operating system chooses which tasks or
processes are to be executed by the CPU at
any instant in time. The major goal is to
keep the CPU in busy mode by assigning
the time of various processes in an
effective manner so that the overall
performance of the system can be
optimized. There are several algorithms,
like Round Robin or Priority Scheduling,
applied to govern the scheduling of tasks.
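As a sketch of one such algorithm, here is Round Robin with a fixed time quantum (the burst times are made-up example values, with all processes arriving at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the completion time of each process under Round Robin.
    bursts: {pid: CPU burst time}; all processes arrive at time 0."""
    queue = deque(bursts)          # ready queue in arrival order
    remaining = dict(bursts)
    t, finish = 0, {}
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])
        t += run                   # the process runs for one quantum (or less)
        remaining[pid] -= run
        if remaining[pid] == 0:
            finish[pid] = t
        else:
            queue.append(pid)      # preempted: back to the end of the queue
    return finish

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

Every process gets the CPU regularly, at the cost of extra context switches compared with running each burst to completion.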

Introduction to Input-Output Interface


The input-output interface provides a method for
transferring information between internal storage,
i.e. memory, and external peripheral devices.
A peripheral device is one that provides input
and/or output for the computer; such devices are also
called input-output devices.
For Example: A keyboard and mouse
provide Input to the computer are called
input devices while a monitor and printer
that provide output to the computer are
called output devices. Just like the external
hard-drives, there is also availability of
some peripheral devices which are able to
provide both input and output.

Input-Output Interface
In a microcomputer-based system, the
interface's purpose is to provide special
communication links between peripheral
devices and the CPU.
Applications
1. Device Independence
2. Efficient Data Transfer
3. Multiplexing and Sharing
Devices
4. Device Communication
Management
5. Buffering and Caching
6. Error Handling and Recovery
7. Interrupt Handling
8. Standardized Communication
9. File System Access
10. Plug-and-Play Support
11. Synchronization and Control
12. Real-Time Data Processing
13. Virtual Device Emulation
14. Networking
15. Power Management
16. Specialized Hardware
Applications
17. Security
18. Device Drivers
19. Storage Management
20. Multimedia Applications

Different Input-Output Operations

1. Read Operation
 Definition: The process of transferring
data from an I/O device to the system's
memory or a file.
 Example: Reading data from a disk into
RAM or reading characters from a
keyboard.
 System Call: In UNIX-like systems, the
read() system call is used for reading
data from files or devices.

2. Write Operation
 Definition: The process of transferring
data from memory to an I/O device,
such as writing data to a disk or
sending characters to a printer.
 Example: Saving data to a file or
sending output to a screen or printer.
 System Call: In UNIX-like systems, the
write() system call is used for writing
data to files or devices.

3. Control Operation
 Definition: Involves controlling the
behavior of the I/O device. This can
include device-specific operations such
as configuring settings, starting or
stopping a device, or querying the
status of a device.
 Example: Sending a command to a
printer to start printing or changing the
baud rate for a serial port.
 System Call: In UNIX-like systems,
ioctl() (Input/Output Control) is used for
device-specific control operations.

4. Seek Operation
 Definition: A seek operation is used in
storage devices like hard drives or
tapes to position the read/write head to
the correct location on the storage
medium.
 Example: When accessing a file, the
operating system must seek to the file's
data blocks on the disk.
 Context: This is typically used in disk
drives, where the operating system
must find the location of a specific file
or block.


7. Direct Memory Access (DMA)
 Definition: A method where the I/O
controller can transfer data directly to
and from memory, bypassing the CPU.
This is often used for high-speed data
transfer operations.
 Example: Transferring data from a hard
disk to memory without involving the
CPU, allowing the CPU to perform other
tasks simultaneously.
 Advantage: Frees up the CPU and
speeds up data transfer.

8. Buffered I/O
 Definition: In this operation, data is
temporarily stored in a buffer (a region
of memory) before being transferred to
or from an I/O device. Buffering helps
improve performance by reducing the
number of I/O operations.
 Example: When reading or writing large
files, the operating system may use a
buffer to store data before writing it to
disk or after reading it from disk.
 System Call: In many operating
systems, buffered I/O is used with
functions like fread(), fwrite(), or
fgets().
9. Synchronous vs. Asynchronous I/O
 Synchronous I/O: The process waits for
the I/O operation to complete before
proceeding with further execution.
o Example: A file read operation
where the process waits until the
data is completely read before
continuing.
o System Call: Standard read() or
write() calls are typically
synchronous.
 Asynchronous I/O: The process
continues execution while the I/O
operation is performed in the
background. The process is notified
once the operation is complete.
o Example: A network application that
sends data over the network
without waiting for a response,
allowing the application to handle
other tasks.
o System Call: Asynchronous I/O is
often handled using special system
calls or libraries such as aio_read()
in POSIX or ReadFileEx() in
Windows.
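POSIX aio_read() and ReadFileEx() are C interfaces; as a rough stand-in, the asynchronous pattern can be sketched in Python with a worker thread that performs the read while the main thread keeps running:

```python
import concurrent.futures
import os
import tempfile

# Sketch of asynchronous I/O via a thread pool (a stand-in for
# aio_read()-style interfaces): submit the read, keep working, and
# collect the result only when it is actually needed.
fd, path = tempfile.mkstemp()
os.write(fd, b"payload")
os.close(fd)

def read_file(p):
    with open(p, "rb") as f:
        return f.read()

with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(read_file, path)  # I/O proceeds in the background
    other_work = sum(range(100))           # main thread is not blocked
    data = future.result()                 # wait only at the point of use

print(data)  # → b'payload'
os.remove(path)
```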
10. Memory-Mapped I/O
 Definition: In memory-mapped I/O, the
operating system maps an I/O device's
memory into the address space of the
application. The application can then
directly access the device's memory
using regular read/write operations.
 Example: A graphics card's memory
(frame buffer) is mapped to the
application's address space, allowing
direct access to pixel data for
rendering.
 Advantage: Provides a fast, efficient
way for applications to interact with I/O
devices.
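The frame-buffer case needs real hardware, but the mechanism itself can be sketched with Python's mmap module: mapping an ordinary file into the address space lets plain slice operations read and write its bytes directly.

```python
import mmap, os, tempfile

# Sketch of memory-mapped I/O: map a file into the process address
# space, then read and write its contents through ordinary slicing,
# with no explicit read()/write() calls from the application.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")

with mmap.mmap(fd, 0) as mm:
    first = bytes(mm[:5])   # read through the mapping
    mm[0:5] = b"HELLO"      # write through the mapping

os.close(fd)
with open(path, "rb") as f:
    final = f.read()
os.remove(path)
print(first, final)  # → b'hello' b'HELLO world'
```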
11. I/O Scheduling
 Definition: The operating system
organizes I/O operations for efficiency,
particularly for storage devices like
hard drives. The goal is to minimize the
movement of the disk arm (in the case
of mechanical drives) and to maximize
throughput.
 Example: A disk scheduling algorithm
like Elevator (SCAN) or Shortest Seek
Time First (SSTF) organizes I/O requests
to optimize disk access times.
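SSTF can be sketched in a few lines: repeatedly service the pending request nearest the current head position. The cylinder numbers below are made-up example values.

```python
# Minimal Shortest Seek Time First (SSTF) sketch: at each step, service
# the pending request closest to the current head position and track
# the total head movement.
def sstf(head, requests):
    pending, order, moved = list(requests), [], 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        moved += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
        order.append(nxt)
    return order, moved

order, moved = sstf(50, [82, 170, 43, 140, 24, 16, 190])
print(order)  # → [43, 24, 16, 82, 140, 170, 190]
print(moved)  # → 208
```

Note the trade-off: SSTF minimizes each individual seek, but requests far from the head can starve, which is why elevator-style algorithms like SCAN exist.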
12. Blocking vs. Non-blocking I/O
 Blocking I/O: In blocking I/O, the
process is paused until the I/O
operation completes. The process is
blocked and cannot perform other tasks
until the operation finishes.
o Example: Reading data from a disk
where the process is paused until
the data is available.
 Non-blocking I/O: The process continues
execution while the I/O operation is in
progress. The process can check
periodically if the operation is complete
or handle other tasks.
o Example: Non-blocking file reads
where a program checks if the data
is available without pausing the
entire program.
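A minimal Python sketch of the non-blocking case (POSIX assumed), using os.set_blocking on a pipe: when no data is ready, the read returns at once with BlockingIOError instead of pausing the process.

```python
import os

# Sketch of non-blocking input: with the read end of a pipe marked
# non-blocking, read() raises BlockingIOError immediately instead of
# suspending the process when no data is available.
r, w = os.pipe()
os.set_blocking(r, False)

try:
    os.read(r, 1)          # nothing has been written yet
    ready = True
except BlockingIOError:
    ready = False          # the call returned at once; we were not paused
print(ready)  # → False

os.write(w, b"x")
got = os.read(r, 1)        # data is available now, so the read succeeds
print(got)  # → b'x'

os.close(r)
os.close(w)
```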
Kernel I/O Subsystem in Operating System
The kernel provides many services
related to I/O. Services such as
scheduling, caching, spooling, device
reservation, and error handling are
provided by the kernel's I/O subsystem,
which is built on the hardware and
device-driver infrastructure.
An Operating System (OS) is a
complex software program that
manages the hardware and software
resources of a computer system.
One of the critical components of
an OS is the Kernel I/O Subsystem,
which provides an interface
between the operating system and
input/output (I/O) devices.
The Kernel I/O Subsystem is an
essential part of any
modern Operating System.
The Kernel I/O Subsystem also manages
the concurrency and synchronization
issues that arise when multiple
applications try to access the same
device simultaneously.
Blocking and Nonblocking IO in
Operating System
Blocking and non-blocking I/O are two
different methods an OS uses to handle
I/O operations. A blocking call suspends
the calling process until the operation
completes, so the program can do nothing
else while it waits on the system call.
Non-blocking calls (often used together
with asynchronous operation) return
immediately instead of suspending the
thread: the process keeps executing
while the I/O is carried out, and work
in other threads is unaffected.
Difference Between Blocking and
Non-Blocking I/O
 Blocking I/O: the process waits for
the I/O operation to complete before it
can continue; the call does not return
until the transfer is finished.
 Non-blocking I/O: the call returns
immediately; the process continues
executing and checks later whether the
operation has completed.
I/O Buffering and Its Various
Techniques
A buffer is a memory area that
stores data being transferred
between two devices or between a
device and an application. I/O
buffering is a technique used in
computers to manage data transfer
between the computer’s memory
and input/output devices (like hard
drives, printers, or network
devices). It helps make data transfer
more efficient by temporarily
storing data in a buffer, which is a
reserved area of memory. This
allows the CPU and I/O devices to
work at their speeds without having
to wait for each other, improving
overall system performance.
What is I/O Buffering?
I/O buffering is a technique used in
computer systems to improve the
efficiency of input and output (I/O)
operations. It involves the
temporary storage of data in a
buffer, which is a reserved area of
memory, to reduce the number of
I/O operations and manage the flow
of data between fast and slow
devices or processes.
Uses of I/O Buffering
 Buffering is done to deal effectively
with a speed mismatch between
the producer and consumer of the data
stream.
 A buffer is created in main memory to
accumulate the bytes received from the
modem.
 After receiving the data in the buffer,
the data gets transferred to disk from
the buffer in a single operation.
 This process of data transfer is not
instantaneous, therefore the modem
needs another buffer to store additional
incoming data.
 When the first buffer is filled, then it is
requested to transfer the data to disk.
 The modem then starts filling the
additional incoming data in the second
buffer while the data in the first buffer
gets transferred to the disk.
 When both buffers complete their
tasks, then the modem switches back to
the first buffer while the data from the
second buffer gets transferred to the
disk.
 The use of two buffers decouples the
producer from the consumer of the data,
thus relaxing the timing requirements
between them.
 Buffering also accommodates devices
that have different data transfer
sizes.
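The double-buffer hand-off described above can be sketched with a bounded queue of two buffers: the producer (standing in for the modem) fills one buffer while the consumer (standing in for the disk) drains the other.

```python
import queue
import threading

# Double-buffering sketch: a queue bounded at two slots lets the
# producer fill one buffer while the consumer drains the other, so
# neither side waits on a single shared buffer.
buffers = queue.Queue(maxsize=2)  # at most two buffers "in flight"
consumed = []

def producer():
    for chunk in (b"aa", b"bb", b"cc", b"dd"):
        buffers.put(chunk)        # blocks only if both buffers are full
    buffers.put(None)             # end-of-stream marker

def consumer():
    while True:
        buf = buffers.get()
        if buf is None:
            break
        consumed.append(buf)      # "transfer the buffer to disk"

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

result = b"".join(consumed)
print(result)  # → b'aabbccdd'
```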
Types of I/O Buffering Techniques
1. Single Buffer
2. Double Buffer
3. Circular Buffer