Operating System
Characteristics of the
operating system
1. Resource Management
Manages hardware resources like CPU,
memory, disk storage, and I/O devices.
Allocates resources efficiently among multiple
programs to maximize system performance.
4. File System Management
Manages data storage and retrieval in files
and directories.
Provides file access permissions, security,
and naming conventions.
5. Device Management
Controls and communicates with connected
hardware devices.
Uses device drivers to ensure compatibility.
6. User Interface
Provides an interface for user interaction,
either:
o Graphical User Interface (GUI):
Windows, macOS.
o Command-Line Interface (CLI): Linux
terminal, PowerShell.
7. Security and Protection
Protects data and resources from
unauthorized access.
Implements authentication (passwords,
biometrics), encryption, and firewalls.
8. Networking Capabilities
Enables communication and resource
sharing across networks.
Supports protocols like TCP/IP for internet
functionality.
9. Multitasking and Multithreading
Allows multiple programs (or threads) to run
simultaneously.
Ensures efficient CPU utilization through
context switching.
10. Error Detection and Handling
Monitors the system for hardware or
software errors.
Provides recovery mechanisms to maintain
stability.
Networking Services
Communication Protocols: Supports
internet and network connectivity (e.g.,
TCP/IP).
File Sharing: Facilitates sharing of files and resources across the network.
Processor Management
In a multi-programming environment,
the OS decides the order in which
processes have access to the processor,
and how much processing time each
process has. This function of OS is
called Process Scheduling. An
Operating System performs the
following activities for Processor
Management.
Allocates the CPU (the processor) to a
process, and de-allocates the processor
when a process no longer requires it.
Process management
The Two-State Model
The simplest way to think about a
process’s lifecycle is with just two
states:
1. Running: This means the process is
actively using the CPU to do its work.
2. Not Running: This means the
process is not currently using the CPU.
It could be waiting for something, like
user input or data, or it might just be
paused.
Two State Process Model
When a new process is created, it starts
in the not running state and waits in a
queue. A program called the dispatcher
later selects it and switches it to running.
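The two-state lifecycle can be sketched as a tiny simulation; the `Dispatcher` class and the process names below are illustrative, not a real OS API:

```python
from collections import deque

# Two-state model sketch: new processes enter the Not Running queue;
# the dispatcher picks the next one and gives it the CPU (Running).
class Dispatcher:
    def __init__(self):
        self.not_running = deque()   # processes waiting for the CPU
        self.running = None          # at most one process holds the CPU

    def admit(self, pid):
        self.not_running.append(pid)          # new process starts Not Running

    def dispatch(self):
        if self.running is None and self.not_running:
            self.running = self.not_running.popleft()
        return self.running

    def pause(self):
        # the running process gives up the CPU and goes back to Not Running
        if self.running is not None:
            self.not_running.append(self.running)
            self.running = None

d = Dispatcher()
d.admit("P1"); d.admit("P2")
print(d.dispatch())   # P1 gets the CPU
d.pause()
print(d.dispatch())   # P2 runs next
```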
Device Management
An OS manages device communication
via its respective drivers. It performs
the following activities for device
management.
Keeps track of all devices connected to
the system. Designates a program
responsible for every device known as
the Input/Output controller.
Decides which process gets access to a
certain device and for how long.
Allocates devices effectively and
efficiently. Deallocates devices when
they are no longer required.
There are various input and output
devices. An OS controls the working of
these input-output devices.
It receives the requests from these
devices, performs a specific task, and
communicates back to the requesting
process.
File Management
A file system is organized into directories for efficient and easy
navigation and usage. These directories may contain other
directories and other files. An Operating System carries out the
following file management activities. It keeps track of where
information is stored, user access settings, the status of every
file, and more. These facilities are collectively known as the file
system. An OS keeps track of information regarding the creation,
deletion, transfer, copy, and storage of files in an organized way.
It also maintains the integrity of the data stored in these files,
including the file directory structure, by protecting against
unauthorized access.
1. Contiguous Allocation
In contiguous allocation, each file occupies a contiguous
area of secondary storage. The user specifies in advance the
size of the area needed to hold the file to be created;
if the desired amount of contiguous space is not available,
the file cannot be created. The directory entry for each file records:
Address of the starting block
Length of the allocated portion.
The file ‘mail’ in the following figure starts at block
19 with length = 6 blocks. Therefore, it occupies blocks 19, 20, 21,
22, 23 and 24.
Advantages:
Both the Sequential and Direct Accesses are supported by
this. For direct access, the address of the kth block of the
file which starts at block b can easily be obtained as (b+k).
This is extremely fast since the number of seeks required is
minimal, thanks to the contiguous allocation of file blocks.
Disadvantages:
This method suffers from both internal and external
fragmentation. This makes it inefficient in terms of memory
utilization.
Increasing file size is difficult because it depends on the
availability of contiguous memory at a particular instance.
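The (b + k) direct-access rule above can be sketched as follows; the directory entries reuse the figure's illustrative values:

```python
# Contiguous allocation sketch: a directory entry stores only the
# starting block and the length of each file (values are illustrative).
directory = {
    "mail": {"start": 19, "length": 6},   # occupies blocks 19..24
}

def block_address(filename, k):
    """Direct access: the k-th block (0-indexed) of a file that
    starts at block b is simply b + k."""
    entry = directory[filename]
    if k >= entry["length"]:
        raise IndexError("block beyond end of file")
    return entry["start"] + k

print(block_address("mail", 0))   # 19
print(block_address("mail", 5))   # 24
```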
2. Linked List Allocation
In linked list allocation, each file is a linked list of disk
blocks.
These disk blocks may be scattered anywhere on the disk.
A few bytes of each disk block contain the address of the
next block.
The file allocation table needs only a single entry per file,
holding the starting block and the length of the file.
There is no external fragmentation.
This method is best suited for sequential-access files.
Advantages:
This is very flexible in terms of file size. File size can be
increased easily since the system does not have to look for
a contiguous chunk of memory.
This method does not suffer from external fragmentation.
This makes it relatively better in terms of memory
utilization.
Disadvantages:
Because the file blocks are distributed randomly on the
disk, a large number of seeks are needed to access every
block individually. This makes linked allocation slower.
It does not support random or direct access. We can not
directly access the blocks of a file.
Pointers required in the linked allocation incur some extra
overhead.
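A minimal sketch of following a chain of block pointers; the disk dictionary and block numbers are made up for illustration:

```python
# Linked allocation sketch: each disk block holds its data plus the
# index of the next block (-1 marks the end). Blocks may be scattered.
disk = {9: ("A", 16), 16: ("B", 1), 1: ("C", 25), 25: ("D", -1)}

def read_file(start_block):
    """Sequential access only: we must follow the chain of pointers,
    so reaching the k-th block costs k extra seeks."""
    data, block = [], start_block
    while block != -1:
        content, nxt = disk[block]
        data.append(content)
        block = nxt
    return "".join(data)

print(read_file(9))   # ABCD
```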
3. Indexed Allocation
Each file is provided with its own index block, which
is an array of disk-block pointers.
The k-th entry in the index block points to the k-th
disk block of the file.
The file allocation table contains the block number of the
index block.
Indexed allocation brings all the pointers together into one
location known as the index block. Each file has its own
index block, an array of disk-block addresses; the i-th entry
in the index block points to the i-th block of the file.
Advantages:
This supports direct access to the blocks occupied by the
file and therefore provides fast access to the file blocks.
It overcomes the problem of external fragmentation.
Disadvantages:
The pointer overhead for indexed allocation is greater than
linked allocation.
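Direct access through an index block can be sketched like this (the file name and block numbers are illustrative):

```python
# Indexed allocation sketch: each file has one index block holding an
# array of disk-block addresses; the i-th entry points to the i-th
# block of the file.
index_block = {"jeep": [9, 16, 1, 10, 25]}

def block_of(filename, i):
    """Direct access: one lookup in the index block, no chain to follow."""
    return index_block[filename][i]

print(block_of("jeep", 0))   # 9
print(block_of("jeep", 4))   # 25
```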
Paging
Paging divides memory into small fixed-size blocks called
pages. When the computer runs out of RAM, pages that
aren’t currently in use are moved to the hard drive, into an
area called a swap file. The swap file acts as an extension
of RAM.
Segmentation
Segmentation divides virtual memory into segments of different
sizes. Segments that aren’t currently needed can be moved to
the hard drive. The system uses a segment table to keep track of
each segment’s status, including whether it’s in memory, if it’s
been modified, and its physical address.
Benefits of Virtual Memory:
1. Increased Memory Space:
o Programs can run as if there’s more RAM than
physically available.
o Allows large applications to execute even on systems
with limited physical RAM.
2. Multi-tasking:
o Enables the system to run multiple programs
simultaneously by allocating memory dynamically and
efficiently.
3. Isolation and Protection:
o Prevents applications from interfering with each other
by isolating their memory spaces.
o If one program crashes, it won’t affect others directly.
4. Efficient Memory Use:
o Only frequently accessed data stays in RAM; less-used
data resides in the disk.
5. Cost Efficiency:
o Instead of upgrading RAM, virtual memory allows you
to use existing disk space for temporary memory
needs.
6. Support for Complex Programs:
o Virtual memory enables large programs (e.g., image
editing software, databases) to operate on systems
with limited hardware.
Advantages of the Banker's Algorithm
1. It contains various resources that meet the requirements of
each process.
2. Each process should provide information to the operating
system for upcoming resource requests, the number of
resources, and how long the resources will be held.
3. It helps the operating system manage and control process
requests for each type of resource in the computer system.
4. The algorithm has a Max attribute that indicates the
maximum number of resources each process can hold
in the system.
5. It means that in the Banker's Algorithm, the resources are
granted only if there is no possibility of a deadlock when
those resources are to be assigned. Thus, it ensures that
the system runs at optimal performance.
6. The algorithm allows one to avoid the pointless holding of
resources by any process since the algorithm actually
checks whether the granting of resources is feasible or not.
Disadvantages of the Banker's Algorithm
1. It requires a fixed number of processes, and no additional
processes can be started in the system while executing the
process.
2. The algorithm does not allow processes to change their
maximum resource needs while executing their tasks.
3. Each process has to know and state their maximum
resource requirement in advance for the system.
4. Resource requests must be granted within a finite
time, and each process must return its resources
within a finite period.
5. It could be pretty intricate to manage the algorithm, which
is especially known in the case of systems with a vast
quantity of processes and resources. This, consequently,
translates to increased overhead.
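The safety check at the heart of the Banker's Algorithm can be sketched as below, using the classic five-process, three-resource textbook values (illustrative, not tied to any particular system):

```python
# Banker's Algorithm safety check (sketch): a request is granted only
# if the resulting state is safe, i.e. some completion order exists.
def is_safe(available, max_need, allocation):
    n = len(allocation)                    # number of processes
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    order = []                             # a safe completion sequence
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # pretend process i runs to completion and releases
                # everything it currently holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                progressed = True
    return all(finished), order

available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
safe, order = is_safe(available, max_need, allocation)
print(safe, order)   # True [1, 3, 4, 0, 2]
```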
P0          P1
wait(A);    wait(B);
wait(B);    wait(A);
3. Assume 200K bytes of space is available for allocation,
and the following sequence of events occurs:

P0              P1
Request 80KB;   Request 70KB;
Request 60KB;   Request 80KB;
Handling Deadlocks
Deadlock is a situation where a process or a set of
processes is blocked, waiting for some other resource that
is held by some other waiting process. It is an undesirable
state of the system.
In other words, Deadlock is a critical situation in computing
where a process, or a group of processes, becomes unable
to proceed because each is waiting for a resource that is
held by another process in the same group.
Strategies for handling Deadlock
1. Deadlock Ignorance
Deadlock Ignorance is the most widely used approach
among all the mechanisms. It is used by many
operating systems, mainly for end-user systems. In this
approach, the operating system assumes that deadlock
never occurs and simply ignores it. This approach is
best suited to a single end-user system where the user
uses the system only for browsing and other everyday tasks.
2. Deadlock prevention
Deadlock happens only when mutual exclusion, hold and
wait, no preemption, and circular wait all hold simultaneously.
If it is possible to violate any one of these four conditions,
then deadlock can never occur in the system.
3. Deadlock avoidance
In deadlock avoidance, the operating system checks
whether the system is in a safe state or an unsafe state
before every step it performs.
Allocation continues as long as the system remains in a
safe state. If a step would move the system to an unsafe
state, the OS backtracks that step.
In simple words, the OS reviews each allocation so that the
allocation cannot cause a deadlock in the system.
We will discuss Deadlock avoidance later in detail.
4. Deadlock detection and recovery
This approach lets processes fall into deadlock and then
periodically checks whether a deadlock has occurred in the
system.
If one has, it applies recovery methods to rid the system
of the deadlock.
Paging in Memory Management
Paging is a memory management scheme that eliminates
the need for a contiguous allocation of physical memory.
The process of retrieving processes in the form of pages
from secondary storage into main memory is known
as paging. The basic purpose of paging is to separate each
process into pages, while the main memory is split
into frames. This scheme permits the physical
address space of a process to be non-contiguous.
In paging, the physical memory is divided into fixed-size
blocks called page frames, which are the same size as the
pages used by the process. The process’s logical address
space is also divided into fixed-size blocks called pages,
which are the same size as the page frames. When a
process requests memory, the operating system allocates
one or more page frames to the process and maps the
process’s logical pages to the physical page frames.
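Address translation under paging can be sketched as follows; the page size and page-table contents are illustrative assumptions:

```python
# Paging sketch: a logical address is split into (page number, offset);
# the page table maps each page to a frame of the same size.
PAGE_SIZE = 1024                     # bytes per page / frame
page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number

def translate(logical_address):
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]         # a missing page here models a page fault
    return frame * PAGE_SIZE + offset

print(translate(0))      # page 0, offset 0  -> 5120
print(translate(1100))   # page 1, offset 76 -> 2124
```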
Principles of Protection
The principle of least privilege dictates that programs,
users, and systems be given just enough privileges to
perform their tasks.
This ensures that failures do the least possible
amount of harm.
Typically each user is given their own account, and has only
enough privilege to modify their own files.
1. Definition:
The Access Matrix is a two-dimensional table where:
Rows represent subjects (users, processes, or programs).
Columns represent objects (files, directories, devices, etc.).
Each cell in the matrix specifies the set of operations that a
subject can perform on an object.
3. Key Components:
Subjects: Entities that request access to resources (e.g.,
users, processes, programs).
Objects: Resources in the system (e.g., files, devices,
memory segments).
Access Rights: Permissions defining what actions subjects
can perform on objects.
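A minimal sketch of an access matrix as a nested mapping; the subjects, objects, and rights below are illustrative:

```python
# Access matrix sketch: rows are subjects, columns are objects, and
# each cell holds the set of access rights.
access_matrix = {
    "user1":    {"file1": {"read", "write"}, "printer": {"print"}},
    "process2": {"file1": {"read"}},
}

def can_access(subject, obj, right):
    """Look up the cell (subject, obj) and check the requested right."""
    return right in access_matrix.get(subject, {}).get(obj, set())

print(can_access("user1", "file1", "write"))    # True
print(can_access("process2", "file1", "write")) # False
```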
Monitors
Monitors are a programming language component
that aids in the regulation of shared data access.
The Monitor is a package that contains shared data
structures, operations, and synchronization between
concurrent procedure calls.
Therefore, a monitor is also known as a
synchronization tool. Java, C#, Visual Basic, Ada, and
concurrent Euclid are among some of the languages
that allow the use of monitors.
Processes operating outside the monitor can't
access the monitor's internal variables, but they can
call the monitor's procedures.
Characteristics of Monitors in OS
A monitor in OS has the following characteristics:
We can only run one program at a time inside the
monitor.
Monitors in an operating system are defined as a
group of methods and fields that are combined with
a special type of package in the OS.
A program cannot access the monitor's internal
variable if it is running outside the monitor. However,
a program can call the monitor's functions.
Monitors were created to make synchronization
problems less complicated.
Monitors provide a high level of synchronization
between processes.
Components of Monitor in an Operating System
The monitor is made up of four primary parts:
1. Initialization: The code for initialization is included in
the package, and we just need it once when creating
the monitors.
2. Private Data: It is a feature of the monitor in an
operating system to make the data private. It holds
all of the monitor's secret data, which includes
private functions that may only be utilized within the
monitor. As a result, private fields and functions are
not visible outside of the monitor.
3. Monitor Procedure: Procedures or functions that can
be invoked from outside of the monitor are known
as monitor procedures.
Monitors in Process Synchronization
Monitors are a higher-level synchronization construct that
simplifies process synchronization by providing a high-level
abstraction for data access and synchronization.
Monitors are implemented as programming language
constructs, typically in object-oriented languages, and
provide mutual exclusion, condition variables, and data
encapsulation in a single construct.
1. A monitor is essentially a module that encapsulates a
shared resource and provides access to that resource
through a set of procedures.
2. The procedures provided by a monitor ensure that only one
process can access the shared resource at any given time,
and that processes waiting for the resource are suspended
until it becomes available.
3. Monitors are used to simplify the implementation of
concurrent programs by providing a higher-level abstraction
that hides the details of synchronization.
4. Monitors provide a structured way of sharing data and
synchronization information, and eliminate the need for
complex synchronization primitives such as semaphores
and locks.
5. The key advantage of using monitors for process
synchronization is that they provide a simple, high-level
abstraction that can be used to implement complex
concurrent systems.
6. Monitors also ensure that synchronization is encapsulated
within the module, making it easier to reason about the
correctness of the system.
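A monitor-style construct can be sketched in Python with a lock plus a condition variable; `BoundedCounter` and its operations are illustrative, not a standard API:

```python
import threading

# Monitor sketch: shared data, its operations, and the synchronization
# are packaged together. The lock gives mutual exclusion (one caller
# at a time inside the monitor); the condition variable lets callers
# wait until the shared state allows them to proceed.
class BoundedCounter:
    def __init__(self, limit):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._value = 0              # private data, hidden from callers
        self._limit = limit

    def increment(self):             # monitor procedure
        with self._lock:
            while self._value >= self._limit:
                self._not_full.wait()    # releases the lock while waiting
            self._value += 1
            return self._value

    def decrement(self):             # monitor procedure
        with self._lock:
            self._value -= 1
            self._not_full.notify()      # wake one waiting incrementer
            return self._value

c = BoundedCounter(2)
print(c.increment())   # 1
print(c.increment())   # 2
print(c.decrement())   # 1
```

Callers never touch `_value` directly; they can only go through the monitor procedures, which is exactly the encapsulation the text describes.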
Dining-Philosophers Solution Using Monitors
Prerequisite: Monitor, Process Synchronization
Dining-Philosophers Problem – N philosophers seated
around a circular table
User-Level Thread
The User-level Threads are implemented by the
user-level software.
These threads are created and managed by the
thread library, which the operating system
provides as an API for creating, managing, and
synchronizing threads.
User-level threads are faster than kernel-level
threads; each is basically represented by a program
counter, stack, registers, and a PCB.
User-level threads are typically employed in
scenarios where fine control over threading is
necessary, but the overhead of kernel threads is
not desired.
They are also useful in systems that lack native
multithreading support, allowing developers to
implement threading in a portable way.
Example – User threads library includes POSIX
threads, Mach C-Threads
Advantages of User-Level Threads
Quick and easy to create: User-level threads can be
created and managed more rapidly.
Highly portable: They can be implemented across
various operating systems.
No kernel mode privileges required: Context
switching can be performed without transitioning to
kernel mode.
Disadvantages of User-Level Threads
Limited use of multiprocessing: Multithreaded
applications may not fully exploit multiple
processors.
Blocking issues: A blocking operation in one thread
can halt the entire process.
Kernel-Level Thread
Threads are the units of execution within an
operating system process.
The OS kernel is responsible for generating,
scheduling, and overseeing kernel-level threads
since it controls them directly.
Kernel-level threads are handled directly by
the OS; their management is done by the
kernel.
Each kernel-level thread has its own context,
including information about the thread’s status,
such as its name, group, and priority.
Example – Examples of kernel-level threads
are Java threads, POSIX threads on Linux, etc.
Advantages of Kernel-Level Threads
True parallelism: Kernel threads allow real parallel
execution in multithreading.
Execution continuity: Other threads can continue to
run even if one is blocked.
Access to system resources: Kernel threads have
direct access to system-level features, including I/O
operations.
Disadvantages of Kernel-Level Threads
Management overhead: Kernel threads take more
time to create and manage.
Kernel mode switching: Requires mode switching to
the kernel, adding overhead.
Thread Scheduling
Scheduling of threads involves scheduling at two
boundaries:
1. Scheduling of user-level threads (ULT) to kernel-level
threads (KLT) via lightweight process (LWP) by the
application developer.
2. Scheduling of kernel-level threads by the system
scheduler to perform different unique OS functions.
User-level thread
The operating system does not recognize the user-level thread.
User threads can be easily implemented and it is implemented by the user.
If a user performs a user-level thread blocking operation, the whole process is
blocked.
The kernel knows nothing about user-level threads.
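Thread creation and joining can be sketched with Python's threading module, which wraps the platform's kernel-level threads (e.g. POSIX threads on Linux); the worker function is illustrative:

```python
import threading

# Thread sketch: each thread runs a function; join() waits for it to
# finish. A lock protects the shared results list from concurrent
# appends.
results = []
lock = threading.Lock()

def worker(n):
    with lock:                       # mutual exclusion on shared data
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                         # other threads keep running if one blocks

print(sorted(results))   # [0, 1, 4, 9]
```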
Windows
The first version of Windows, released in 1985, was simply a
GUI offered as an extension of Microsoft’s existing disk
operating system, or MS-DOS. Based in part on licensed
concepts that Apple Inc. had used for its Macintosh System
Software, Windows for the first time allowed DOS users to
visually navigate a virtual desktop, opening graphical
“windows” displaying the contents of electronic folders and
files with the click of a mouse button, rather than typing
commands and directory paths at a text prompt.
What is UNIX?
UNIX is a multi-user, multitasking operating system
invented in the late 1960s at Bell Laboratories,
operated by AT&T. It was intended to be a strong,
reliable, and versatile system, and was initially
targeted at servers, workstations, and academic
systems.
What is Windows?
Windows is an operating system developed and
marketed by Microsoft; the first version was launched
in 1985. Windows provides a GUI-based system that is
user-friendly even for people with little or no
knowledge of computers.
Parameters: UNIX vs Windows

Basic: UNIX is a command-based operating system; Windows is a
menu-based operating system.

Licensing: UNIX is an open-source system which can be used under
the General Public License; Windows is proprietary software owned
by Microsoft.

Ease of Use: The UNIX interface is harder to grasp for newcomers;
the Windows interface is simpler to use.

Processing: UNIX supports multiprocessing; Windows supports
multithreading.

File System: UNIX uses the Unix File System (UFS); Windows uses
the File Allocation Table (FAT32) and the New Technology File
System (NTFS).

Security: UNIX is more secure, as all changes to the system
require explicit user permission; Windows is less secure compared
to UNIX.

Hardware: Hardware support is limited in UNIX, and some hardware
might not have drivers built for it; in Windows, drivers are
available for almost all hardware.

Reliability: Unix and its distributions are well known for being
very stable to run; although Windows has been stable in recent
years, it is yet to match the stability provided by Unix systems.

Case Sensitivity: UNIX is fully case-sensitive, and files whose
names differ only in case are considered separate files; Windows
has case sensitivity as an option.
Input-Output Interface
In a microcomputer-based system, peripheral
devices require special communication links for
interfacing them with the CPU; the purpose of the
input-output interface is to provide these links.
Applications
1. Device Independence
2. Efficient Data Transfer
3. Multiplexing and Sharing
Devices
4. Device Communication
Management
5. Buffering and Caching
6. Error Handling and Recovery
7. Interrupt Handling
8. Standardized Communication
9. File System Access
10. Plug-and-Play Support
11. Synchronization and Control
12. Real-Time Data Processing
13. Virtual Device Emulation
14. Networking
15. Power Management
16. Specialized Hardware
Applications
17. Security
18. Device Drivers
19. Storage Management
20. Multimedia Applications
2. Write Operation
Definition: The process of transferring
data from memory to an I/O device,
such as writing data to a disk or
sending characters to a printer.
Example: Saving data to a file or
sending output to a screen or printer.
System Call: In UNIX-like systems, the
write() system call is used for writing
data to files or devices.
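The write() and read() system calls can be sketched at the file-descriptor level; the temporary file below stands in for an output device:

```python
import os
import tempfile

# Sketch of the write()/read() system calls using raw file
# descriptors (the temporary file is created just for illustration).
fd, path = tempfile.mkstemp()
written = os.write(fd, b"hello device\n")   # number of bytes written
print(written)                              # 13

os.lseek(fd, 0, os.SEEK_SET)                # rewind before reading back
data = os.read(fd, 64)
print(data)                                 # b'hello device\n'
os.close(fd)
os.remove(path)
```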
3. Control Operation
Definition: Involves controlling the
behavior of the I/O device. This can
include device-specific operations such
as configuring settings, starting or
stopping a device, or querying the
status of a device.
Example: Sending a command to a
printer to start printing or changing the
baud rate for a serial port.
System Call: In UNIX-like systems,
ioctl() (Input/Output Control) is used for
device-specific control operations.
4. Seek Operation
Definition: A seek operation is used in
storage devices like hard drives or
tapes to position the read/write head to
the correct location on the storage
medium.
Example: When accessing a file, the
operating system must seek to the file's
data blocks on the disk.
Context: This is typically used in disk
drives, where the operating system
must find the location of a specific file
or block.
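A seek can be sketched with os.lseek(), which repositions a file descriptor's offset much as a disk head is positioned before a read; the file contents are illustrative:

```python
import os
import tempfile

# Seek sketch: lseek() moves the read/write offset of a file
# descriptor before the next read, like seeking to a block on disk.
fd, path = tempfile.mkstemp()
os.write(fd, b"block0block1block2")

os.lseek(fd, 6, os.SEEK_SET)         # jump straight to "block1"
chunk = os.read(fd, 6)
print(chunk)                         # b'block1'
os.close(fd)
os.remove(path)
```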
7. Direct Memory Access (DMA)
Definition: A method where the I/O
controller can transfer data directly to
and from memory, bypassing the CPU.
This is often used for high-speed data
transfer operations.
Example: Transferring data from a hard
disk to memory without involving the
CPU, allowing the CPU to perform other
tasks simultaneously.
Advantage: Frees up the CPU and
speeds up data transfer.
8. Buffered I/O
Definition: In this operation, data is
temporarily stored in a buffer (a region
of memory) before being transferred to
or from an I/O device. Buffering helps
improve performance by reducing the
number of I/O operations.
Example: When reading or writing large
files, the operating system may use a
buffer to store data before writing it to
disk or after reading it from disk.
System Call: In many operating
systems, buffered I/O is used with
functions like fread(), fwrite(), or
fgets().
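Buffered I/O can be sketched with Python's buffered open(); the file name and line count are illustrative:

```python
import io
import os
import tempfile

# Buffered I/O sketch: open() keeps an in-memory buffer, so many
# small writes become a few large transfers to the device.
fd, path = tempfile.mkstemp(suffix=".txt")
os.close(fd)

with open(path, "w", buffering=io.DEFAULT_BUFFER_SIZE) as f:
    for i in range(1000):
        f.write(f"line {i}\n")       # accumulates in the buffer
    # data reaches the disk in large chunks, on flush or close

with open(path) as f:
    first = f.readline()
print(first)                         # line 0
os.remove(path)
```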
9. Synchronous vs. Asynchronous I/O
Synchronous I/O: The process waits for
the I/O operation to complete before
proceeding with further execution.
o Example: A file read operation
where the process waits until the
data is completely read before
continuing.
o System Call: Standard read() or
write() calls are typically
synchronous.
Asynchronous I/O: The process
continues execution while the I/O
operation is performed in the
background. The process is notified
once the operation is complete.
o Example: A network application that
sends data over the network
without waiting for a response,
allowing the application to handle
other tasks.
o System Call: Asynchronous I/O is
often handled using special system
calls or libraries such as aio_read()
in POSIX or ReadFileEx() in
Windows.
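The difference can be sketched with a pipe switched to non-blocking mode; this assumes POSIX-style behavior where os.set_blocking works on pipe descriptors:

```python
import os

# Non-blocking sketch: with the read end of a pipe in non-blocking
# mode, read() raises BlockingIOError instead of waiting for data.
r, w = os.pipe()
os.set_blocking(r, False)

got_error = False
try:
    os.read(r, 64)                   # nothing written yet
except BlockingIOError:
    got_error = True
    print("no data yet, doing other work")

os.write(w, b"ready")
data = os.read(r, 64)
print(data)                          # b'ready'
os.close(r)
os.close(w)
```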
10. Memory-Mapped I/O
Definition: In memory-mapped I/O, the
operating system maps an I/O device's
memory into the address space of the
application. The application can then
directly access the device's memory
using regular read/write operations.
Example: A graphics card's memory
(frame buffer) is mapped to the
application's address space, allowing
direct access to pixel data for
rendering.
Advantage: Provides a fast, efficient
way for applications to interact with I/O
devices.
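Memory mapping can be sketched with Python's mmap module; the temporary file below stands in for a device's memory such as a frame buffer:

```python
import mmap
import os
import tempfile

# Memory-mapped I/O sketch: the file's contents appear in the
# process's address space, so plain slice operations read and
# write the "device" without explicit read()/write() calls.
fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")

with mmap.mmap(fd, 10) as mm:
    before = bytes(mm[0:4])          # ordinary read through memory
    mm[0:4] = b"ABCD"                # ordinary write through memory

os.lseek(fd, 0, os.SEEK_SET)
after = os.read(fd, 10)
print(before, after)                 # b'0123' b'ABCD456789'
os.close(fd)
os.remove(path)
```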
Blocking I/O: the process waits until the I/O operation is
complete. Non-Blocking I/O: the call returns immediately,
whether or not the operation is complete.