Os Unit - 5
● Disk Response Time: Response time is the average time a request spends waiting to
perform its I/O operation, and the average response time is the mean over all requests.
Variance of response time measures how individual requests are serviced with respect
to the average response time. So the disk scheduling algorithm that gives the minimum
variance of response time is better.
Goal of Disk Scheduling Algorithms
The various disk scheduling algorithms are listed below. Each algorithm has its own advantages
and disadvantages, and the limitations of each algorithm have led to the evolution of the next.
1. FCFS (First Come First Serve): FCFS is the simplest of all Disk Scheduling Algorithms. In
FCFS, the requests are addressed in the order they arrive in the disk queue. Let us understand
this with the help of an example.
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, and the Read/Write
arm is at 50.
So, the total overhead movement (total distance covered by the disk arm) is calculated as
= |82-50| + |170-82| + |43-170| + |140-43| + |24-140| + |16-24| + |190-16|
= 32 + 88 + 127 + 97 + 116 + 8 + 174
= 642
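To make the arithmetic concrete, here is a minimal C sketch (not part of the original notes) that
replays the request queue in arrival order and sums the head movement; the queue and starting
head position are the ones assumed in this running example.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int requests[] = {82, 170, 43, 140, 24, 16, 190};
    int n = sizeof requests / sizeof requests[0];
    int head = 50, total = 0;

    for (int i = 0; i < n; i++) {
        total += abs(requests[i] - head);  /* seek distance for this request */
        head = requests[i];                /* arm moves to the serviced track */
    }
    printf("FCFS total head movement: %d\n", total);  /* prints 642 */
    return 0;
}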
Advantages of FCFS
● Every request gets a fair chance and no request suffers indefinite postponement (no
starvation).
Disadvantages of FCFS
● It makes no attempt to optimize seek time, so the total head movement is high and some
requests suffer long waiting times.
2. SSTF (Shortest Seek Time First)
In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed first.
So, the seek time of every request is calculated in advance in the queue and then they are
scheduled according to their calculated seek time. As a result, the request near the disk arm will
get executed first. SSTF is certainly an improvement over FCFS as it decreases the average
response time and increases the throughput of the system.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, and the Read/Write
arm is at 50. The requests are then serviced in the order 43, 24, 16, 82, 140, 170, 190, so the
total overhead movement (total distance covered by the disk arm) is calculated as
= (50-43) + (43-24) + (24-16) + (82-16) + (140-82) + (170-140) + (190-170)
= 7 + 19 + 8 + 66 + 58 + 30 + 20
= 208
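The greedy "nearest request first" selection can be sketched in C as follows; again the request
queue and head position are the ones assumed in this running example.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int req[] = {82, 170, 43, 140, 24, 16, 190};
    int n = sizeof req / sizeof req[0];
    int done[7] = {0};           /* which requests have been serviced */
    int head = 50, total = 0;

    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)   /* find the nearest unserved request */
            if (!done[i] && (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                best = i;
        total += abs(req[best] - head);
        head = req[best];
        done[best] = 1;
    }
    printf("SSTF total head movement: %d\n", total);  /* prints 208 */
    return 0;
}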
3. SCAN
In the SCAN algorithm the disk arm moves in a particular direction and services the requests
coming in its path; after reaching the end of the disk, it reverses its direction and again services
the requests arriving in its path. Because this algorithm works like an elevator, it is also known
as the elevator algorithm. As a result, the requests at the mid-range are serviced more, and
those arriving just behind the disk arm have to wait.
Example:
Suppose the requests to be addressed are-82,170,43,140,24,16,190.
And the Read/Write arm is at 50, and it is also given that the disk arm should move “towards the
larger value”.
Therefore, the total overhead movement (total distance covered by the disk arm) is calculated
as
= (199-50) + (199-16) = 332
Advantages of SCAN
● High throughput
● Low variance of response time
● Average response time
Disadvantages of SCAN
● Long waiting time for requests for locations just visited by the disk arm
4. C-SCAN
In the SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing
its direction. So, it may be possible that too many requests are waiting at the other end or there
may be zero or few requests pending at the scanned area.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of
reversing its direction, goes to the other end of the disk and starts servicing the requests from
there. The disk arm therefore moves in a circular fashion; since the algorithm is otherwise
similar to SCAN, it is known as C-SCAN (Circular SCAN).
Example:
Suppose the requests to be addressed are-82,170,43,140,24,16,190.
And the Read/Write arm is at 50, and it is also given that the disk arm should move “towards the
larger value”.
So, the total overhead movement (total distance covered by the disk arm) is calculated as:
=(199-50) + (199-0) + (43-0) = 391
5. LOOK
The LOOK algorithm is similar to the SCAN disk scheduling algorithm except that the disk arm,
instead of going all the way to the end of the disk, goes only up to the last request to be serviced
in front of the head and then reverses its direction from there. It thus avoids the extra delay
caused by unnecessary traversal to the end of the disk.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the Read/Write arm is
at 50, and it is also given that the disk arm should move “towards the larger value”.
So, the total overhead movement (total distance covered by the disk arm) is calculated as
= (190-50) + (190-16) = 314
6. C-LOOK
As LOOK is similar to the SCAN algorithm, C-LOOK is similarly related to the C-SCAN disk
scheduling algorithm. In C-LOOK, the disk arm, instead of going to the end of the disk, goes only
up to the last request to be serviced in front of the head, and from there jumps to the last request
at the other end. It thus also avoids the extra delay caused by unnecessary traversal to the end
of the disk.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. And the Read/Write
arm is at 50, and it is also given that the disk arm should move “towards the larger value”.
So, the total overhead movement (total distance covered by the disk arm) is calculated as
= (190-50) + (190-16) + (43-16) = 341
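Since all four SCAN variants reduce to simple arithmetic over the smallest and largest pending
requests, a single C sketch (not from the original notes) can reproduce the four totals above:
332, 391, 314, and 341. The track range 0..199 is assumed from the example.

#include <stdio.h>

int main(void) {
    int req[] = {82, 170, 43, 140, 24, 16, 190};
    int n = sizeof req / sizeof req[0];
    int head = 50, disk_end = 199;
    int lo = req[0], hi = req[0], below = -1;

    for (int i = 0; i < n; i++) {
        if (req[i] < lo) lo = req[i];
        if (req[i] > hi) hi = req[i];
        if (req[i] < head && req[i] > below) below = req[i];  /* largest request below head */
    }
    /* SCAN: sweep to the disk end, then reverse down to the smallest request */
    printf("SCAN:   %d\n", (disk_end - head) + (disk_end - lo));     /* 332 */
    /* C-SCAN: sweep to the end, jump to track 0, sweep up to 'below' */
    printf("C-SCAN: %d\n", (disk_end - head) + disk_end + below);    /* 391 */
    /* LOOK: only go as far as the largest request, then reverse */
    printf("LOOK:   %d\n", (hi - head) + (hi - lo));                 /* 314 */
    /* C-LOOK: largest request, jump to smallest, then up to 'below' */
    printf("C-LOOK: %d\n", (hi - head) + (hi - lo) + (below - lo));  /* 341 */
    return 0;
}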
File Systems
A file system is a method an operating system uses to store, organize, and manage files and
directories on a storage device. Some common types of file systems include:
● FAT (File Allocation Table): An older file system used by older versions of Windows and
other operating systems.
● NTFS (New Technology File System): A modern file system used by Windows. It
supports features such as file and folder permissions, compression, and encryption.
● ext (Extended File System): A file system commonly used on Linux and Unix-based
operating systems.
● HFS (Hierarchical File System): A file system used by macOS.
● APFS (Apple File System): A new file system introduced by Apple for their Macs and
iOS devices.
A file is a collection of related information that is recorded on secondary storage; in other words,
a file is a collection of logically related entities. From the user’s perspective, a file is the smallest
allotment of logical secondary storage.
The name of the file is divided into two parts as shown below:
● Name
● Extension, separated by a period.
The file system’s job is to keep the files organized in the best way possible.
Free space is created on the hard drive whenever a file is deleted from it. Many of these spaces
may need to be reclaimed so that they can be reallocated to other files. The main issue with files
is choosing where on the hard disk to store them: a single block may or may not suffice to store
a file, and a file may be kept in non-contiguous blocks of the disk, so we must keep track of all
the blocks where the parts of each file are located.
For example, word-processor files commonly use extensions such as wp, tex, rrf, and doc,
covering various word-processor formats.
Directories
The collection of files is a file directory. The directory contains information about the files,
including attributes, location, and ownership. Much of this information, especially that concerned
with storage, is managed by the operating system. The directory is itself a file, accessible by
various file management routines.
Single-Level Directory
All files are contained in the same directory. This is simple to implement and understand, but file
names must be unique and there is no grouping capability.
Two-Level Directory
Each user has a separate user file directory, so different users may use the same file name, but
there is still no grouping capability below the user level.
Tree-Structured Directory
The directory is maintained in the form of a tree. Searching is efficient and also there is grouping
capability. We have absolute or relative path name for a file.
File Allocation Methods
There are several types of file allocation methods. These are mentioned below.
● Contiguous Allocation
● Linked Allocation(Non-contiguous allocation)
● Indexed Allocation
Contiguous Allocation
A single contiguous set of blocks is allocated to a file at the time of file creation. Thus, this is a
pre-allocation strategy, using variable-size portions. The file allocation table needs just a single
entry for each file, showing the starting block and the length of the file. This method is best from
the point of view of the individual sequential file. Multiple blocks can be read in at a time to improve
I/O performance for sequential processing. It is also easy to retrieve a single block. For example,
if a file starts at block b, and the ith block of the file is wanted, its location on secondary storage
is simply b+i-1.
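A small C sketch (not from the original notes) illustrating the b + i - 1 lookup; the structure and
the starting block are illustrative values, not a real file-system layout.

#include <stdio.h>

struct file_entry {
    int start;   /* first disk block of the file (b) */
    int length;  /* number of blocks in the file */
};

/* Returns the disk block holding the i-th block (1-based) of the file,
   or -1 if i is out of range. Matches the b + i - 1 formula above. */
int block_of(const struct file_entry *f, int i) {
    if (i < 1 || i > f->length) return -1;
    return f->start + i - 1;
}

int main(void) {
    struct file_entry f = { .start = 14, .length = 5 };       /* assumed values */
    printf("3rd block is disk block %d\n", block_of(&f, 3));  /* 14 + 3 - 1 = 16 */
    return 0;
}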
Disadvantages of Continuous Allocation
● External fragmentation will occur, making it difficult to find contiguous blocks of space of
sufficient length. A compaction algorithm will be necessary to free up additional space on
the disk.
● Also, with pre-allocation, it is necessary to declare the size of the file at the time of creation.
Linked Allocation (Non-contiguous allocation)
Allocation is on an individual block basis. Each block contains a pointer to the next block in the
chain. Again the file table needs just a single entry for each file, showing the starting block and
the length of the file. Although pre-allocation is possible, it is more common simply to allocate
blocks as needed. Any free block can be added to the chain. The blocks need not be continuous.
An increase in file size is always possible if a free disk block is available. There is no external
fragmentation because only one block at a time is needed but there can be internal fragmentation
but it exists only in the last disk block of the file.
Disadvantages of Linked Allocation (Non-contiguous allocation)
● It efficiently supports only sequential access: to reach the ith block of a file, the chain must be
followed from the starting block, as the sketch below illustrates.
● The pointers consume space in every block, and the loss of a single pointer can make the rest
of the file unreachable.
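The chain traversal can be sketched in C as follows; the chain 9 -> 3 -> 12 is made up for
illustration.

#include <stdio.h>

#define NBLOCKS 16
#define END -1

int next[NBLOCKS];  /* next[b] = block that follows b in the file, or END */

/* Walk i links from the starting block; shows why direct access costs O(i). */
int block_of(int start, int i) {
    int b = start;
    while (i-- > 0 && b != END)
        b = next[b];
    return b;
}

int main(void) {
    /* assumed chain: 9 -> 3 -> 12 -> END */
    next[9] = 3; next[3] = 12; next[12] = END;
    printf("file block 0 is in disk block %d\n", block_of(9, 0));  /* 9  */
    printf("file block 2 is in disk block %d\n", block_of(9, 2));  /* 12 */
    return 0;
}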
Indexed Allocation
It addresses many of the problems of contiguous and chained allocation. In this case, the file
allocation table contains a separate one-level index for each file: The index has one entry for each
block allocated to the file. The allocation may be on the basis of fixed-size blocks or variable-sized
blocks. Allocation by blocks eliminates external fragmentation, whereas allocation by variable-
size blocks improves locality. This allocation technique supports both sequential and direct access
to the file and thus is the most popular form of file allocation.
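For contrast with the linked sketch above, here is a hypothetical index-block lookup showing why
indexed allocation gives direct access in a single step; all values are made up.

#include <stdio.h>

struct inode {
    int index[8];   /* index block: entry i = disk block of file block i */
    int nblocks;    /* number of blocks allocated to the file */
};

int block_of(const struct inode *f, int i) {
    return (i >= 0 && i < f->nblocks) ? f->index[i] : -1;  /* one table lookup */
}

int main(void) {
    struct inode f = { .index = {19, 4, 7, 25}, .nblocks = 4 };     /* assumed */
    printf("file block 2 is in disk block %d\n", block_of(&f, 2));  /* 7 */
    return 0;
}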
Disk Free Space Management
Just as the space that is allocated to files must be managed, so the space that is not currently
allocated to any file must be managed. To perform any of the file allocation techniques, it is
necessary to know what blocks on the disk are available. Thus we need a disk allocation table in
addition to a file allocation table. The following are the approaches used for free space
management.
● Bit Tables: This method uses a vector containing one bit for each block on the disk. Each
entry for a 0 corresponds to a free block and each 1 corresponds to a block in use.
For example 00011010111100110001
In this vector every bit corresponds to a particular block and 0 implies that that particular
block is free and 1 implies that the block is already occupied. A bit table has the advantage
that it is relatively easy to find one or a contiguous group of free blocks. Thus, a bit table
works well with any of the file allocation methods. Another advantage is that it is as small
as possible.
● Free Block List: In this method, each block is assigned a number sequentially and the
list of the numbers of all free blocks is maintained in a reserved block of the disk.
Disadvantages of File Systems
● Compatibility Issues: Different file systems may not be compatible with each other,
making it difficult to transfer data between different operating systems.
● Disk Space Overhead: File systems may use some disk space to store metadata and
other overhead information, reducing the amount of space available for user data.
● Vulnerability: File systems can be vulnerable to data corruption, malware, and other
security threats, which can compromise the stability and security of the system.
The following allocation techniques are commonly used as part of managing disk space:
● Linked Allocation: In this technique, each file is represented by a linked list of disk blocks.
When a file is created, the operating system finds enough free space on the disk and links
the blocks of the file to form a chain. This method is simple to implement but can lead to
fragmentation and waste of space.
● Contiguous Allocation: In this technique, each file is stored as a contiguous block of disk
space. When a file is created, the operating system finds a contiguous block of free space
and assigns it to the file. This method is efficient as it minimizes fragmentation but suffers
from the problem of external fragmentation.
● Indexed Allocation: In this technique, a separate index block is used to store the
addresses of all the disk blocks that make up a file. When a file is created, the operating
system creates an index block and stores the addresses of all the blocks in the file. This
method is efficient in terms of storage space and minimizes fragmentation.
● File Allocation Table (FAT): In this technique, the operating system uses a file allocation
table to keep track of the location of each file on the disk. When a file is created, the
operating system updates the file allocation table with the address of the disk blocks that
make up the file. This method is widely used in Microsoft Windows operating systems.
Overall, free space management is a crucial function of operating systems, as it ensures that
storage devices are utilized efficiently and effectively.
The system keeps tracks of the free disk blocks for allocating space to files when they are created.
Also, to reuse the space released from deleting the files, free space management becomes
crucial. The system maintains a free space list which keeps track of the disk blocks that are not
allocated to some file or directory. The free space list can be implemented mainly as:
1. Bitmap or Bit vector
A Bitmap or Bit Vector is a series or collection of bits in which each bit corresponds to a disk
block. A bit can take two values, 0 and 1: 0 indicates that the block is free and 1 indicates an
allocated block. For instance, a disk of 16 blocks in which blocks 1-4, 8-13, and 16 are allocated
(and the rest are free) can be represented by a 16-bit bitmap as: 1111000111111001.
Advantages:
● Simple to understand.
● Finding the first free block is efficient. It requires scanning the bitmap word by word (a word
being a group of bits, e.g., 8) for a word that is not all 1s; a word of all 1s means every block it
covers is in use. The first free block is then found by scanning for the first 0 bit in that word (see
the sketch after this list).
Disadvantages:
● To find a free block, the Operating System may need to iterate over all the blocks, which is
time-consuming.
● The efficiency of this method reduces as the disk size increases.
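A C sketch (not from the original notes) of the word-at-a-time scan just described, using the
document's 0-means-free convention; block numbers are counted from 0 here, and the two bytes
encode the example vector 1111000111111001.

#include <stdio.h>
#include <stdint.h>

int first_free(const uint8_t *map, int nwords) {
    for (int w = 0; w < nwords; w++) {
        if (map[w] == 0xFF) continue;        /* every block in this word in use */
        for (int bit = 0; bit < 8; bit++)    /* scan the word for a 0 bit */
            if (!(map[w] & (1u << (7 - bit))))
                return w * 8 + bit;          /* block number (0-based) */
    }
    return -1;  /* no free block */
}

int main(void) {
    uint8_t map[2] = { 0xF1, 0xF9 };  /* 1111 0001 1111 1001, as in the text */
    printf("first free block: %d\n", first_free(map, 2));  /* block 4 */
    return 0;
}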
2. Linked List
In this approach, the free disk blocks are linked together, i.e., a free block contains a pointer to
the next free block. The block number of the very first free disk block is stored at a separate
location on disk and is also cached in memory. For example, the free-space list head might point
to Block 5, which points to Block 6, the next free block, and so on. The last free block contains
a null pointer indicating the end of the free list. A drawback of this method is the I/O required to
traverse the free-space list.
Advantages:
● No separate table consumes disk space; the free blocks themselves hold the list, so only a
pointer to the first free block needs to be kept.
Disadvantages:
● As the size of the linked list increases, the overhead of maintaining the pointers also
increases.
● This method is not efficient when each block of memory must be iterated over.
Grouping
This approach stores the address of the free blocks in the first free block. The first free block
stores the address of some, say n free blocks. Out of these n blocks, the first n-1 blocks are
actually free and the last block contains the address of next free n blocks. An advantage of this
approach is that the addresses of a group of free disk blocks can be found easily.
Advantage:
● The addresses of a large group of free blocks can be found easily using this method.
Disadvantage:
● The only disadvantage is that the list has to be updated whenever one of the blocks holding
part of the list becomes occupied.
Counting
This approach stores the address of the first free disk block and a number n of free contiguous
disk blocks that follow the first block. Every entry in the list therefore contains the address of a
first free disk block and the count of contiguous free blocks that follow it.
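A minimal sketch of what a counting entry might look like; the free runs chosen here are
arbitrary, illustrative values.

#include <stdio.h>

struct free_run {
    int first;   /* first free block of the run */
    int count;   /* number of contiguous free blocks in the run */
};

int main(void) {
    /* assumed free runs: blocks 2-5 and blocks 9-10 are free */
    struct free_run list[] = { {2, 4}, {9, 2} };
    int nruns = sizeof list / sizeof list[0];

    for (int i = 0; i < nruns; i++)   /* each entry covers a whole run */
        printf("blocks %d..%d free\n", list[i].first,
               list[i].first + list[i].count - 1);
    return 0;
}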
Advantages:
● Using this method, a whole group of contiguous free blocks can be allocated easily and
quickly.
● The list formed by this method is significantly smaller in size.
Disadvantage:
● The first free block of each run keeps account of the other free blocks in the run, so each
entry needs extra space for the count.
Advantages of Free Space Management
● Efficient Use of Storage Space: Free space management techniques help to optimize the
use of storage space on the hard disk or other secondary storage devices.
● Easy to Implement: Some techniques, such as linked allocation, are simple to implement
and require less overhead in terms of processing and memory resources.
● Faster Access to Files: Techniques such as contiguous allocation can help to reduce
disk fragmentation and improve access time to files.
File Sharing
File Sharing in an Operating System(OS) denotes how information and files are shared between
different users, computers, or devices on a network; and files are units of data that are stored in
a computer in the form of documents/images/videos or any others types of information needed.
For example: imagine letting your computer talk to another computer and exchange pictures,
documents, or any other useful data. This is generally useful when one wants to work on a
project with others, send files to friends, or simply move material to another device. Our OS
provides ways to do this, like email attachments, cloud services, etc., to make the sharing
process easier and more secure.
Now, file sharing is nothing but a magical bridge between Computer A and Computer B, allowing
them to swap files with each other.
● Folder/Directory: It is basically a container for all of our files on a computer. A folder can
contain files and even other folders, maintaining a hierarchical structure for organizing data.
● Networking: It is involved in connecting computers or devices where we need to share
the resources. Networks can be local (LAN) or global (Internet).
● IP Address: It is a numerical label assigned to every device connected to the network.
● Protocol: It is given as the set of rules which drives the communication between devices
on a network. In the context of file sharing, protocols define how files are transferred
between computers.
● File Transfer Protocol (FTP): FTP is a standard network protocol used to transfer files
between a client and a server on a computer network.
Server Message Block (SMB)
SMB is a network-based file sharing protocol mainly used in Windows operating systems. It
allows a computer to share files and printers on a network. SMB is now the standard method for
seamless file transfer and printer sharing on Windows.
Example: Imagine a company where the employees have to share files for a particular project
and collaborate on it. SMB/CIFS is employed to share the files between the Windows-based
computers: users can access shared folders on a server and create, modify, and delete files.
Network File System (NFS)
NFS is a distributed file sharing protocol mainly used in Linux/Unix-based operating systems. It
allows a computer to share files over a network as if they were local. It provides an efficient way
of transferring files between servers and clients.
Example: Many programmers, universities, and research institutions use Unix/Linux-based
operating systems. An institute can put shared datasets on a global server using NFS, and
researchers and students can then access these shared directories and collaborate on them.
File Transfer Protocol (FTP)
It is the most common standard protocol for transferring files between a client and a server on
a computer network. FTP supports both uploading and downloading of files: we can download,
upload, and transfer files from Computer A to Computer B over the internet or between computer
systems.
Example: Suppose a developer makes changes for a website hosted on a server. Using the
FTP protocol, the developer connects to the server, uploads the new website content, and
updates the existing files there.
All these file sharing methods serve different purposes and needs, according to the
requirements and flexibility of the users and the operating system.
Linux File System
The Linux file system has a hierarchical file structure, with a root directory and subdirectories
under it. All other directories can be accessed from the root directory. A partition usually holds
only one file system, though it is possible for it to hold more than one.
A file system is designed to manage and provide space for non-volatile storage of data. Every
file system requires a namespace, that is, a naming and organizational methodology. The
namespace defines the naming process, the length of file names, and the subset of characters
that may be used in them. It also defines the logical structure of files on a memory segment,
such as the use of directories for organizing specific files. Once a namespace is described, a
metadata description must be defined for each particular file.
The data structure needs to support a hierarchical directory structure; this structure is used to
describe the available and used disk space for a particular block. It also has the other details
about the files such as file size, date & time of creation, update, and last modified.
Also, it stores advanced information about the section of the disk, such as partitions and volumes.
The advanced data and the structures that it represents contain the information about the file
system stored on the drive; it is distinct and independent of the file system metadata.
The Linux file system follows a two-part software implementation architecture.
The file system requires an API (application programming interface) so that function calls can
interact with file system components such as files and directories. The API facilitates tasks such
as creating, deleting, and copying files, and it provides for an algorithm that defines the
arrangement of files on the file system.
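As a concrete illustration of such an API, here is a short C sketch using standard POSIX calls
(open, write, close, unlink) to create, write, and delete a file; the file name is of course made up.

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    /* open/write/close: the basic file lifecycle through the API */
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);   /* write six bytes into the new file */
    close(fd);

    unlink("demo.txt");        /* delete the file again */
    return 0;
}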
The first two parts of the file system are together called the Linux virtual file system. It provides
a single set of commands for the kernel and developers to access the file system. This virtual
file system in turn requires a specific system driver to provide an interface to each concrete file
system.
Directory Structure
● The directories help us to store files and locate them when we need them. Directories are
also called folders, as in the physical-desktop analogy they can be thought of as folders in
which files reside. Directories are organized in a tree-like hierarchy in Linux and several other
operating systems.
● The directory structure of Linux is well documented and defined in the Linux FHS
(Filesystem Hierarchy Standard). Directories are referenced by joining sequentially deeper
directory names with the forward slash '/', as in /var/spool/mail and /var/log. These are known
as paths.
● The list below gives the standard, defined, and well-known top-level Linux directories and
their purposes:
● / (root filesystem): It is the top-level filesystem directory. It must include every file needed
to boot the Linux system before other filesystems are mounted. After the system starts, every
other filesystem is mounted on a well-defined, standard mount point under the root filesystem.
● /boot: It includes the static kernel and bootloader configuration and executable files
needed to start a Linux computer.
● /bin: This directory includes user executable files.
● /dev: It includes the device file for all hardware devices connected to the system. These
aren't device drivers; instead, they are files that indicate all devices on the system and
provide access to these devices.
● /etc: It includes the local system configuration files for the host system.
● /lib: It includes shared library files that are needed to start the system.
● /home: The home directory storage is available for user files. All users have a subdirectory
inside /home.
● /mnt: It is a temporary mount point for basic filesystems that can be used at the time when
the administrator is working or repairing a filesystem.
● /media: A place for mounting external removable media devices like USB thumb drives
that might be linked to the host.
● /opt: It contains optional files, such as vendor-supplied application programs.
● /root: It's the home directory for a root user. Keep in mind that it's not the '/' (root) file
system.
● /tmp: It is a temporary directory used by the OS and several programs for storing
temporary files. Also, users may temporarily store files here. Remember that files may be
removed without prior notice at any time in this directory.
● /sbin: These are system binary files. They are executables utilized for system
administration.
● /usr: They are read-only and shareable files, including executable libraries and binaries,
man files, and several documentation types.
● /var: Here, variable data files are saved. It can contain things such as MySQL, log files,
other database files, email inboxes, web server data files, and much more.
● In Linux, the file system forms a tree structure. All files are arranged as a tree and its
branches. The topmost directory is called the root (/) directory. All other directories in Linux
can be accessed from the root directory.
● Specifying paths: Linux does not use the backslash (\) to separate path components; it
uses the forward slash (/) instead. For example, in Windows data may be stored in C:\My
Documents\Work, whereas in Linux it would be stored in /home/My Documents/Work.
● Partitions, Directories, and Drives: Linux does not use drive letters to organize drives as
Windows does. In Linux, we cannot tell from a path whether we are addressing a partition, a
network device, or an "ordinary" directory.
● Case Sensitivity: The Linux file system is case sensitive; it distinguishes between
lowercase and uppercase file names. For example, test.txt and Test.txt are different files in
Linux. This rule also applies to directories and Linux commands.
● File Extensions: In Linux, a file may have an extension such as '.txt', but a file is not
required to have an extension at all. When working with a shell, this creates some problems
for beginners in differentiating between files and directories. A graphical file manager, by
contrast, shows distinct symbols for files and folders.
● Hidden files: Linux distinguishes between standard files and hidden files; mostly the
configuration files are hidden in Linux OS. Usually, we don't need to access or read hidden
files. Hidden files in Linux are indicated by a dot (.) before the file name (e.g., .ignore). To
access them, we need to change the view in the file manager or use a specific command in
the shell (such as ls -a).
When we install the Linux operating system, Linux offers many file systems such as Ext, Ext2,
Ext3, Ext4, JFS, ReiserFS, XFS, btrfs, and swap.
JFS stands for Journaled File System, and it was developed by IBM for AIX Unix. It is an
alternative to the Ext file system. It can also be used in place of Ext4 where stability is needed
with few resources, and it is a handy file system when CPU power is limited.
ReiserFS is an alternative to the Ext3 file system, with improved performance and advanced
features. ReiserFS was once used as the default file system in SUSE Linux, but SUSE later
returned to Ext3 after some of the project's policies changed. ReiserFS supports extending a
file system dynamically, but it has some drawbacks in performance.
In Linux, the "to mount", a filesystem term, refers to the initial days of computing when a
removable disk or tape pack would physically need to be mounted on a correct drive device. On
the disk pack, the filesystem would logically be mounted by the OS to make contents available to
access by application programs, OS, and users after being located on the drive physically.
Simply, a mount point is a directory that's made as a component of the filesystem. For instance,
the home filesystem is placed on the /home directory. Filesystems can be placed on mount points
on many non-root filesystems, but it's less common.
● The root filesystem of Linux is mounted on the / directory (root directory) very early inside
the boot sequence.
● Several filesystems are later mounted by the start-up programs of Linux, either via rc under
SystemV or via systemd in newer Linux versions.
● Filesystem mounting during startup is handled by the configuration file, i.e., /etc/fstab.
● An easy way to remember it: fstab is short for "file system table", and it is a list of the
filesystems that are to be mounted, their designated mount points, and any options that may
be required for particular filesystems.
Filesystems can be mounted on an available mount point/directory with the help of the mount
command. Any directory that is used as a mount point should be empty and contain no other
files. Linux will not prevent users from mounting a filesystem on a directory that already contains
files or on one that already has a filesystem mounted; in that case the original contents are
hidden, and only the content of the freshly mounted filesystem is visible.
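For illustration, a filesystem can also be mounted programmatically with the Linux mount(2)
system call, roughly equivalent to `mount /dev/sdb1 /mnt`. The device, mount point, and
filesystem type below are assumptions, and the call requires root privileges.

#include <sys/mount.h>
#include <stdio.h>

int main(void) {
    /* mount /dev/sdb1 (assumed ext4) read-only on the empty directory /mnt */
    if (mount("/dev/sdb1", "/mnt", "ext4", MS_RDONLY, NULL) != 0) {
        perror("mount");
        return 1;
    }
    printf("mounted /dev/sdb1 on /mnt (read-only)\n");
    umount("/mnt");   /* unmount it again */
    return 0;
}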
Structure of FAT
The File Allocation Table (FAT) has a simple and straightforward structure. It consists of a
sequence of entries, with each entry representing a cluster on the disk. A cluster is a group of
contiguous sectors, which is the smallest unit of disk space that can be allocated to a file. Each
entry in the FAT contains information about the status of the corresponding cluster, such as
whether it is free or allocated to a file. The entries also contain pointers to the next cluster in a
file, allowing the FAT to keep track of the sequence of clusters that make up a file. The first two
entries in the FAT are reserved, while the remaining entries are used for file and directory
clusters. The size and format of the FAT can vary depending on the version of
the file system and the size of the disk. For example, older versions of FAT such as FAT12 and
FAT16 have smaller maximum disk sizes and use shorter entry sizes, while newer versions such
as FAT32 can support larger disks and use longer entry sizes to accommodate more clusters.
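To illustrate the chain-of-clusters idea, here is a small C sketch that walks a made-up FAT16-
style chain; the cluster numbers and the end-of-chain marker value are illustrative.

#include <stdio.h>
#include <stdint.h>

#define EOC 0xFFFF  /* end-of-chain marker (FAT16-style) */

int main(void) {
    uint16_t fat[16] = {0};
    /* assumed file occupying clusters 2 -> 5 -> 6 */
    fat[2] = 5; fat[5] = 6; fat[6] = EOC;

    /* each FAT entry names the next cluster, so reading a file means
       following the chain until the end-of-chain marker */
    for (uint16_t c = 2; c != EOC; c = fat[c])
        printf("cluster %u\n", c);
    return 0;
}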
Types of File Allocation Table(FAT)
● There are three main types of File Allocation Table (FAT) file systems: FAT12, FAT16,
and FAT32.
● FAT12 was the original version of the FAT file system, which was first introduced in 1980
with MSDOS. It was designed for small disks, with a maximum size of 16MB and a cluster
size of 512 bytes. FAT12 is no longer commonly used, but it can still be found on some
older devices such as digital cameras and music players.
● FAT16 was the next version of the FAT file system, which was introduced in 1984 with the
release of MS-DOS 3.0. It supports larger disks than FAT12, with a maximum size of 2GB
and a cluster size of up to 64KB. FAT16 is still used on some devices, but it is not as
common as it used to be.
● FAT32 is the most recent version of the FAT file system, which was introduced in 1996
with the release of Windows 95 OSR2. It was designed to support larger disks than FAT16,
with a maximum size of 2TB and a cluster size of up to 32KB. FAT32 is still widely used
today, particularly on removable storage devices such as USB drives and SD cards.
Importance of FAT in OS
● FAT is a widely used file system that is compatible with many different operating systems,
making it easy to share files between different computers and devices.
● FAT is a simple and easy to implement file system that is suitable for use on a wide range
of storage devices, including hard drives, USB drives, and memory cards.
● FAT supports large disk sizes, making it a suitable file system for modern storage devices
with large capacities.
● FAT helps to minimize disk fragmentation by allocating free clusters that are contiguous,
allowing for efficient use of disk space.
● FAT is a versatile file system that can be used as an intermediary file system for other
types of file systems, allowing for greater flexibility in managing storage devices.
● The simple structure of FAT makes it straightforward for recovery tools to repair the file
system after power failures or other system crashes, helping to protect the integrity of stored
data (FAT itself does not provide journaling).
Distributed Operating Systems
Effective communication channels, like high-speed buses and telephone lines, connect each
processor, equipped with its own local memory, to the other neighboring processors.
Due to its characteristics, a distributed operating system is classified as a loosely coupled system.
It encompasses multiple computers, nodes, and sites, all interconnected through LAN/WAN lines.
The ability of a Distributed OS to share processing resources and I/O files while providing users
with a virtual machine abstraction is an important feature.
There are many types of distributed operating systems; some of them are as follows:
1. Client-Server Systems
In client-server systems, dedicated server machines provide resources and services, such as
files, computation, or printing, to client machines that request them over the network. This gives
centralized control over shared resources.
2. Peer-to-Peer(P2P) Systems
In peer-to-peer (P2P) systems, interconnected nodes directly communicate and collaborate
without centralized control. Each node can act as both a client and a server, sharing resources
and services with other nodes. P2P systems enable decentralized resource sharing, self-
organization, and fault tolerance.
● They support efficient collaboration, scalability, and resilience to failures without relying
on central servers.
● This model facilitates distributed data sharing, content distribution, and computing tasks,
making it suitable for applications like file sharing, content delivery, and blockchain
networks.
3. Middleware
Middleware is a software layer that sits between applications and the underlying operating
systems of the networked machines. It provides common services, such as communication,
naming, and security, so that applications can interoperate across heterogeneous nodes.
4. Three-Tier
In a distributed operating system, the three-tier architecture divides tasks into presentation, logic,
and data layers. The presentation tier, comprising client machines or devices, handles user
interaction. The logic tier, distributed across multiple nodes or servers, executes processing logic
and coordinates system functions.
● The data tier manages storage and retrieval operations, often employing distributed
databases or file systems across multiple nodes.
● This modular approach enables scalability, fault tolerance, and efficient resource
utilization, making it ideal for distributed computing environments.
5. N-Tier
In an N-tier architecture, applications are structured into multiple tiers or layers beyond the
traditional three-tier model. Each tier performs specific functions, such as presentation, logic, data
processing, and storage, with the flexibility to add more tiers as needed. In a distributed operating
system, this architecture enables complex applications to be divided into modular components
distributed across multiple nodes or servers.
● Each tier can scale independently, promoting efficient resource utilization, fault tolerance,
and maintainability.
● N-tier architectures facilitate distributed computing by allowing components to run on
separate nodes or servers, improving performance and scalability.
● This approach is commonly used in large-scale enterprise systems, web applications, and
distributed systems requiring high availability and scalability.
Distributed operating systems find applications across various domains where distributed
computing is essential. Here are some notable applications:
1. Cloud Computing Platforms:
a. Distributed operating systems form the backbone of cloud computing platforms like
Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform
(GCP).
b. These platforms provide scalable, on-demand computing resources distributed
across multiple data centers, enabling organizations to deploy and manage
applications, storage, and services in a distributed manner.
2. Internet of Things (IoT):
a. Distributed operating systems play a crucial role in IoT networks, where numerous
interconnected devices collect and exchange data.
b. These operating systems manage communication, coordination, and data
processing tasks across distributed IoT devices, enabling applications such as
smart home automation, industrial monitoring, and environmental sensing.
3. Distributed Databases:
a. Distributed operating systems are used in distributed database management
systems (DDBMS) to manage and coordinate data storage and processing across
multiple nodes or servers.
b. These systems ensure data consistency, availability, and fault tolerance in
distributed environments, supporting applications such as online transaction
processing (OLTP), data warehousing, and real-time analytics.
4. Content Delivery Networks (CDNs):
a. CDNs rely on distributed operating systems to deliver web content, media, and
applications to users worldwide.
b. These operating systems manage distributed caching, content replication, and
request routing across a network of edge servers, reducing latency and improving
performance for users accessing web content from diverse geographic locations.
5. Peer-to-Peer (P2P) Networks:
a. Distributed operating systems are used in peer-to-peer networks to enable
decentralized communication, resource sharing, and collaboration among
distributed nodes.
b. These systems facilitate file sharing, content distribution, and decentralized
applications (DApps) by coordinating interactions between peers without relying
on centralized servers.
6. High-Performance Computing (HPC):
a. Distributed operating systems are employed in HPC clusters and supercomputers
to coordinate parallel processing tasks across multiple nodes or compute units.
b. These systems support scientific simulations, computational modeling, and data-
intensive computations by distributing workloads and managing communication
between nodes efficiently.
7. Distributed File Systems:
a. Distributed operating systems power distributed file systems like Hadoop
Distributed File System (HDFS), Google File System (GFS), and CephFS.
b. These file systems enable distributed storage and retrieval of large-scale data sets
across clusters of machines, supporting applications such as big data analytics,
data processing, and content storage.
Examples of Distributed Operating Systems
● Solaris: It is designed for SUN multiprocessor workstations.
● OSF/1: It was designed by the Open Software Foundation and is Unix-compatible.
● Micros: The MICROS operating system assigns work to all nodes in the system and
guarantees a balanced data load.
● DYNIX: It was created for the Symmetry multiprocessor computers.
● Locus: Both local and remote files can be accessed simultaneously, without any location
restrictions.
● Mach: It permits the features of multitasking and multithreading.
Security in Distributed Operating Systems
Protection and security are crucial aspects of a Distributed Operating System, especially in
organizational settings. Measures are employed to safeguard the system from potential damage
or loss caused by external sources. Various security measures can be implemented, including
authentication methods such as username/password and user key. One Time Password (OTP)
is also commonly utilized in distributed OS security applications.
Advantages of Distributed Operating Systems
1. It can increase data availability throughout the system by sharing all resources (CPU, disk,
network interface, nodes, computers, and so on) between sites.
2. Because data can be replicated across sites, the probability of data loss is reduced: users
can access the data from another operating site in the event that one site fails.
3. Data transfer from one site to another is accelerated by it.
4. Since it may be accessible from both local and remote sites, it is an open system.
5. It facilitates a reduction in the time needed to process data.
6. The majority of distributed systems are composed of multiple nodes that work together to
provide fault tolerance. Even if one machine malfunctions, the system still functions.
Disadvantages of Distributed Operating Systems
1. The system must determine which tasks need to be completed, when they need to be
completed, and where they need to be completed. The restrictions of a scheduler can result in
unpredictable runtimes and unused hardware.
2. Since the nodes and connections in DOS need to be secured, it is challenging to establish
sufficient security.
3. A database connected to a distributed system is considerably more complex and harder to
maintain than one in a single-user system.
4. Compared to other systems, the underlying software is incredibly sophisticated and poorly
understood.
5. Compiling, analyzing, displaying, and keeping track of hardware utilization metrics for
large clusters may be quite challenging.