Operating System Concepts (2)
Overview
An operating system is software that manages the computer hardware. The hardware
must provide appropriate mechanisms to ensure the correct operation of the computer
system and to prevent user programs from interfering with the proper operation of the
system. Among other things, an operating system:
1. Provides the facilities to create and modify programs and data files using an editor.
2. Provides access to the compiler for translating a user program from a high-level
language to machine language.
3. Provides a loader program to move the compiled program code into the computer's
memory for execution.
4. Provides routines that handle the details of I/O programming.
Every computer must have an operating system to run other programs. The operating system
coordinates the use of the hardware among the various system programs and application
programs for various users. It simply provides an environment within which other programs
can do useful work.
The operating system is a set of special programs, running on a computer system, that allow it
to work properly. It performs basic tasks such as recognizing input from the keyboard,
keeping track of files and directories on the disk, sending output to the display screen, and
controlling peripheral devices.
An OS is designed to serve two basic purposes:
1. It controls the allocation and use of the computing system's resources among the
various users and tasks.
2. It provides an interface between the computer hardware and the programmer that
simplifies and makes feasible the creation, coding, and debugging of application
programs.
A process is a program in execution, and it is more than the program code (known as the text
section). This concept holds under every operating system, because every task the operating
system performs needs a process to carry it out.
A process changes state as it executes. The state of a process is defined by the current
activity of that process.
Each process may be in any one of the following states −
New − The process is being created.
Running − Instructions are being executed.
Waiting − The process is waiting for some event to occur, such as the completion of an
I/O operation or the receipt of a signal.
Ready − The process is waiting to be assigned to a processor.
Terminated − The process has finished execution.
It is important to know that only one process can be running on any processor at any instant.
Many processes may be ready and waiting.
Now let us see the state diagram of these process states −
Explanation
Step 1 − Whenever a new process is created, it is admitted into the ready state.
Step 2 − If no other process is present in the running state, the scheduler's dispatcher moves
a ready process into the running state.
Step 3 − If the running process requests I/O or waits for an event, it moves from the running
state to the waiting state; if a higher-priority process becomes ready, the uncompleted
running process is preempted and returned to the ready state.
Step 4 − Whenever the I/O operation or event completes, an interrupt signals the completion
and the waiting process is sent back to the ready state.
Step 5 − Whenever a process in the running state finishes execution, it exits to the terminated
state, which marks the completion of the process.
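The legal transitions between these states can be summarized in a few lines of code. Below is a minimal sketch in C; the state names mirror the list above, and the can_transition helper is a hypothetical name used only for illustration, not part of any real OS API.

```c
#include <stdio.h>
#include <stdbool.h>

/* The five process states described above. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Returns true if moving from 'from' to 'to' is one of the legal
 * transitions in the process state diagram. */
static bool can_transition(proc_state from, proc_state to) {
    switch (from) {
    case NEW:        return to == READY;                 /* admitted          */
    case READY:      return to == RUNNING;               /* dispatched        */
    case RUNNING:    return to == READY ||               /* preempted         */
                            to == WAITING ||             /* I/O or event wait */
                            to == TERMINATED;            /* exit              */
    case WAITING:    return to == READY;                 /* I/O completed     */
    case TERMINATED: return false;                       /* final state       */
    }
    return false;
}

int main(void) {
    printf("RUNNING -> WAITING legal? %d\n", can_transition(RUNNING, WAITING)); /* 1 */
    printf("WAITING -> RUNNING legal? %d\n", can_transition(WAITING, RUNNING)); /* 0 */
    return 0;
}
```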
1.5 I/O channels
An I/O channel is an extension of the DMA concept. It has the ability to execute I/O
instructions using a special-purpose processor on the I/O channel, giving it complete control
over I/O operations. The main processor does not execute I/O instructions itself; it initiates an
I/O transfer by instructing the I/O channel to execute a program in memory.
The program specifies the device or devices, the area or areas of memory, the priority, and
the actions to take on error conditions.
Types of I/O Channels:
1. Selector Channel:
A selector channel controls multiple high-speed devices but is dedicated to the transfer
of data with one device at a time. Each device is handled by a controller or I/O
module, and the channel controls these I/O controllers, as shown in the figure.
2. Multiplexer Channel
A multiplexer channel is a DMA controller that can handle multiple devices at the same
time. It can do block transfers for several devices at once.
1.6 Memory Management
Memory is an essential part of the computer, used to store data. Its management is
critical to the computer system because the amount of main memory available in a
computer system is very limited. At any time, many processes compete for it.
Moreover, to increase performance, several processes are executed simultaneously.
For this, we must keep several processes in main memory, so it is even more
important to manage them effectively.
Memory management techniques can be classified into several main categories. The single
contiguous memory management scheme is the simplest memory management scheme, used
in the earliest generation of computer systems. In this scheme, main memory is divided into
two contiguous areas or partitions. The operating system resides permanently in one
partition, generally at lower memory, and the user process is loaded into the other partition.
This scheme has several disadvantages:
o Wastage of memory space due to unused memory as the process is unlikely to use all
the available memory space.
o The CPU remains idle, waiting for the disk to load the binary image into the main
memory.
o A program cannot be executed if it is too large to fit in the available main memory
space.
o It does not support multiprogramming, i.e., it cannot handle multiple programs
simultaneously.
A physical address identifies a physical location of the required data in memory. The user
never deals with the physical address directly but can access data through the corresponding
logical address. The user program generates logical addresses and believes the program runs
in this logical address space, but the program needs physical memory for its execution;
therefore, logical addresses must be mapped to physical addresses by the MMU before they
are used. The term physical address space is used for all physical addresses corresponding
to the logical addresses in a logical address space.
Logical to Physical address conversion
1.7 Paging
In operating systems, paging is a storage mechanism used to retrieve processes from
secondary storage into main memory in the form of pages.
The main idea behind paging is to divide each process into pages. The main memory will
likewise be divided into frames.
One page of the process is stored in one of the frames of memory. The pages can be stored
at different locations of the memory, but the priority is always to find contiguous frames or
holes.
Pages of the process are brought into main memory only when they are required; otherwise,
they reside in secondary storage.
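To make the page/frame mapping concrete, here is a minimal sketch of how an MMU conceptually translates a logical address under paging. The 4 KB page size and the page-table contents are assumptions chosen purely for illustration.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096   /* assumed page size: 4 KB, so the offset is 12 bits */

/* A toy page table: page_table[p] holds the frame number that page p
 * currently occupies in main memory. The values are made up. */
static const uint32_t page_table[4] = { 5, 9, 7, 3 };

int main(void) {
    uint32_t logical = 8200;                     /* some logical address          */
    uint32_t page    = logical / PAGE_SIZE;      /* page number: 8200/4096 = 2    */
    uint32_t offset  = logical % PAGE_SIZE;      /* offset within the page: 8     */
    uint32_t frame   = page_table[page];         /* MMU page-table lookup: 7      */
    uint32_t physical = frame * PAGE_SIZE + offset;

    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical);     /* physical = 7*4096 + 8 = 28680 */
    return 0;
}
```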
Paging Example
1.7.1 Demand Paging
Every process in virtual memory contains lots of pages, and in some cases it is not
efficient to swap in all the pages of a process at once, because the program may need only
certain pages to run. For example, suppose there is a 500 MB application that may need as
little as 100 MB of pages to be swapped in; in this case, there is no need to swap in all pages
at once.
The demand paging system is similar to a paging system with swapping, where processes
mainly reside in secondary memory (usually the hard disk). Demand paging solves the above
problem by swapping in pages only on demand. This is also known as lazy swapping (a page
is never swapped into memory unless it is needed). A swapper that deals with the individual
pages of a process is referred to as a pager.
Demand Paging is a technique in which a page is usually brought into the main memory only
when it is needed or demanded by the CPU. Initially, only those pages are loaded that are
required by the process immediately. Those pages that are never accessed are thus never
loaded into the physical memory.
Page replacement is needed in the operating systems that use virtual memory using Demand
Paging. As we know that in Demand paging, only a set of pages of a process is loaded into
the memory. This is done so that we can have more processes in the memory at the same
time.
When a page that is residing in virtual memory is requested by a process for its execution, the
Operating System needs to decide which page will be replaced by this requested page. This
process is known as page replacement and is a vital component in virtual memory
management.
To understand why we need page replacement algorithms, we first need to know about page
faults. Let’s see what is a page fault.
Page Fault: A page fault occurs when a program running on the CPU tries to access a page
that is in the address space of that program, but the requested page is not currently loaded
into the main physical memory, the RAM of the system.
Since the actual RAM is much smaller than the virtual memory, page faults occur. So
whenever a page fault occurs, the Operating system has to replace an existing page
in RAM with the newly requested page. In this scenario, page replacement algorithms help
the Operating System in deciding which page to replace. The primary objective of all the
page replacement algorithms is to minimize the number of page faults.
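As an illustration of how a page replacement algorithm is evaluated, the sketch below counts page faults for a small reference string using FIFO replacement, one of the simplest policies; the reference string and the frame count are made-up values.

```c
#include <stdio.h>

#define FRAMES 3   /* assumed number of physical frames */

/* FIFO page replacement: on a fault, the page that has been resident
 * the longest is evicted. Counts the total number of page faults. */
int main(void) {
    int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3 };  /* reference string */
    int n = sizeof refs / sizeof refs[0];
    int frames[FRAMES] = { -1, -1, -1 };            /* -1 = empty frame */
    int next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {                    /* page fault: load the page,   */
            frames[next] = refs[i];    /* evicting the oldest resident */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults: %d\n", faults);
    return 0;
}
```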
Segmented Paging
A solution to the problem is to use segmentation along with paging to reduce the size of the
page table. Traditionally, a program is divided into four segments, namely the code segment,
data segment, stack segment, and heap segment.
Segments of a process
The size of the page table can be reduced by creating a page table for each segment. To
accomplish this, hardware support is required. The address provided by the CPU is now
partitioned into a segment number, a page number, and an offset.
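A minimal sketch of that partitioning is shown below; the 4/8/20-bit split of a 32-bit address is an assumed layout chosen for illustration, not a fixed standard.

```c
#include <stdio.h>
#include <stdint.h>

/* Assumed (illustrative) split of a 32-bit logical address:
 * 4-bit segment number | 8-bit page number | 20-bit offset. */
#define OFFSET_BITS 20
#define PAGE_BITS    8

int main(void) {
    uint32_t addr    = 0x12345678;
    uint32_t offset  = addr & ((1u << OFFSET_BITS) - 1);
    uint32_t page    = (addr >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1);
    uint32_t segment =  addr >> (OFFSET_BITS + PAGE_BITS);

    /* The segment number selects a per-segment page table; the page
     * number indexes into it; the offset is carried over unchanged. */
    printf("segment %u, page %u, offset 0x%05x\n", segment, page, offset);
    return 0;
}
```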
2. Lack of Frames: If a process has fewer frames, then fewer pages of that process
will be able to reside in memory, and hence more frequent swapping in and out
will be required. This may lead to thrashing. Hence, a sufficient number of frames
must be allocated to each process in order to prevent thrashing.
Recovery from Thrashing:
Do not allow the system to go into thrashing, by instructing the long-term
scheduler not to bring more processes into memory after the threshold is reached.
If the system is already thrashing, instruct the medium-term scheduler to
suspend some of the processes so that the system can recover from thrashing.
2 Process Management and Synchronization
2.1 PCB
While creating a process, the operating system performs several operations. To identify
processes, it assigns a process identification number (PID) to each process. As the
operating system supports multi-programming, it needs to keep track of all the processes.
For this task, the process control block (PCB) is used to track the process's execution
status. Each block of memory contains information about the process state, program
counter, stack pointer, status of opened files, scheduling algorithms, etc. All this
information is required and must be saved when the process is switched from one state to
another. When the process makes a transition from one state to another, the operating
system must update the information in the process's PCB.
A process control block (PCB) contains information about the process, i.e. registers,
quantum, priority, etc. The process table is an array of PCBs; logically, it contains a PCB
for each of the current processes in the system.
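A PCB can be pictured as a plain record type. The sketch below is a simplified, illustrative layout only; real kernels (for example, Linux's task_struct) hold far more fields.

```c
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* A simplified process control block mirroring the fields listed above. */
struct pcb {
    int        pid;              /* process identification number     */
    proc_state state;            /* current process state             */
    uint64_t   program_counter;  /* saved PC for the next instruction */
    uint64_t   stack_pointer;    /* saved SP                          */
    uint64_t   registers[16];    /* saved general-purpose registers   */
    int        priority;         /* scheduling priority               */
    int        open_files[16];   /* handles/status of opened files    */
    struct pcb *next;            /* link in a scheduler queue         */
};

/* The process table is simply an array of PCBs, one slot per process. */
struct pcb process_table[64];

int main(void) {
    process_table[0] = (struct pcb){ .pid = 1, .state = READY, .priority = 10 };
    return 0;
}
```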
2. Shortest Job First (SJF):
Shortest Job First (SJF) is a scheduling algorithm in which the process with the smallest
burst time is selected for execution next.
Characteristics of SJF:
Shortest Job First has the advantage of the minimum average waiting time among
all operating system scheduling algorithms.
Each job is associated with the unit of time it needs to complete.
It may cause starvation if shorter processes keep coming. This problem can be solved
using the concept of ageing.
Advantages of Shortest Job first:
As SJF reduces the average waiting time, it is better than the first come first serve
scheduling algorithm.
SJF is generally used for long-term scheduling.
Disadvantages of SJF:
One of the demerits SJF has is starvation.
Many times it is complicated to predict the length of the upcoming CPU request.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Shortest Job First.
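To see how SJF minimizes average waiting time, here is a small sketch for the non-preemptive case where all jobs arrive at time 0; the burst times 6, 8, 7, and 3 are made-up values, and the program prints an average waiting time of 7.00.

```c
#include <stdio.h>
#include <stdlib.h>

/* Non-preemptive SJF with all jobs arriving at time 0: run jobs in
 * ascending burst-time order and compute the average waiting time. */
static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = { 6, 8, 7, 3 };          /* burst times in time units */
    int n = sizeof burst / sizeof burst[0];
    qsort(burst, n, sizeof burst[0], cmp); /* shortest job first        */

    int wait = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += wait;                /* waiting time of this job  */
        wait += burst[i];                  /* the next job waits longer */
    }
    printf("average waiting time: %.2f\n", (double)total_wait / n);
    return 0;
}
```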
3. Longest Job First(LJF):
The Longest Job First (LJF) scheduling process is just the opposite of Shortest Job First
(SJF); as the name suggests, this algorithm is based on the fact that the process with the
largest burst time is processed first. Longest Job First is non-preemptive in nature.
Characteristics of LJF:
Among all the processes waiting in the waiting queue, the CPU is always assigned to the
process having the largest burst time.
If two processes have the same burst time, the tie is broken using FCFS, i.e. the
process that arrived first is processed first.
LJF scheduling also has a preemptive variant, Longest Remaining Time First (LRTF).
Advantages of LJF:
No other task can be scheduled until the longest job or process has executed completely.
All the jobs or processes finish at approximately the same time.
Disadvantages of LJF:
Generally, the LJF algorithm gives a very high average waiting time and average turn-
around time for a given set of processes.
This may lead to a convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the Longest job first scheduling.
4. Priority Scheduling:
The preemptive priority CPU scheduling algorithm is a pre-emptive method of CPU
scheduling that works based on the priority of a process. In this algorithm, each process is
assigned a priority, and the most important process must be executed first. In the case of a
conflict, that is, where there is more than one process with equal priority, the algorithm
falls back on FCFS (First Come First Serve) ordering.
Characteristics of Priority Scheduling:
Schedules tasks based on priority.
When higher-priority work arrives while a task with lower priority is executing, the
higher-priority work takes the place of the lower-priority one, and
the latter is suspended until the execution is complete.
The lower the number assigned, the higher the priority level of a process.
Advantages of Priority Scheduling:
The average waiting time is less than FCFS
Less complex
Disadvantages of Priority Scheduling:
One of the most common demerits of the preemptive priority CPU scheduling algorithm
is the starvation problem: a low-priority process may have to wait a very long time to get
scheduled onto the CPU.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Priority Preemptive Scheduling algorithm.
5. Round robin:
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a
fixed time slot. It is the preemptive version of First come First Serve CPU Scheduling
algorithm. Round Robin CPU Algorithm generally focuses on Time Sharing technique.
Characteristics of Round robin:
It's simple, easy to use, and starvation-free, as all processes get a balanced CPU
allocation.
It is one of the most widely used methods in CPU scheduling as a core.
It is considered preemptive, as each process is given the CPU for only a limited time.
Advantages of Round robin:
Round robin seems to be fair as every process gets an equal share of CPU.
The newly created process is added to the end of the ready queue.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the Round robin Scheduling algorithm.
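The following sketch simulates round robin for three processes with an assumed time quantum of 4 units (the burst times are made up); waiting time is computed as completion time minus burst time.

```c
#include <stdio.h>

#define TQ 4   /* assumed time quantum */

/* Round-robin simulation: each process, in cyclic order, runs for at
 * most TQ time units, then yields to the next ready process. */
int main(void) {
    int burst[]  = { 10, 5, 8 };           /* burst times                */
    int remain[] = { 10, 5, 8 };           /* remaining time per process */
    int n = 3, done = 0, clock = 0;
    int completion[3] = { 0 };

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remain[i] == 0) continue;
            int slice = remain[i] < TQ ? remain[i] : TQ;
            clock += slice;                /* process i runs for 'slice' */
            remain[i] -= slice;
            if (remain[i] == 0) {          /* process i has finished     */
                completion[i] = clock;
                done++;
            }
        }
    }
    for (int i = 0; i < n; i++)            /* waiting = turnaround - burst */
        printf("P%d: completion %2d, waiting %2d\n",
               i + 1, completion[i], completion[i] - burst[i]);
    return 0;
}
```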
Comparison of the scheduling algorithms discussed above:

Algorithm | Allocation is | Complexity | Average waiting time (AWT) | Preemption | Starvation | Performance
FCFS | According to the arrival time of the processes, the CPU is allocated. | Simple and easy to implement | Large | No | No | Slow performance
SJF | Based on the lowest CPU burst time (BT). | More complex than FCFS | Smaller than FCFS | No | Yes | Minimum average waiting time
LJFS | Based on the highest CPU burst time (BT). | More complex than FCFS | Depending on some measures, e.g., arrival time, process size, etc. | No | Yes | Big turn-around time
LRTF | Same as LJFS: the allocation of the CPU is based on the highest CPU burst time (BT), but it is preemptive. | More complex than FCFS | Depending on some measures, e.g., arrival time, process size, etc. | Yes | Yes | Preference is given to the longer jobs
SRTF | Same as SJF: the allocation of the CPU is based on the lowest CPU burst time (BT), but it is preemptive. | More complex than FCFS | Depending on some measures, e.g., arrival time, process size, etc. | Yes | Yes | Preference is given to the short jobs
RR | According to the order of arrival, with a fixed time quantum (TQ). | The complexity depends on the TQ size | Larger than SJF and priority scheduling | Yes | No | Each process gets a fair share of CPU time
Priority pre-emptive | According to the priority: the bigger-priority task executes first. | This type is less complex | Smaller than FCFS | Yes | Yes | Good performance but contains a starvation problem
Priority non-preemptive | According to the priority, with monitoring of newly incoming higher-priority jobs. | This type is less complex than priority pre-emptive | Smaller than FCFS | No | Yes | Most beneficial with batch systems
MLQ | According to the priority of the queue in which the process resides. | More complex than the priority scheduling algorithms | Smaller than FCFS | No | Yes | Good performance but contains a starvation problem
MFLQ | According to the process of a bigger priority queue. | The most complex, but its complexity rate depends on the TQ size | Smaller than all scheduling types in many cases | No | No | Good performance
In UNIX, a process and all of its children and further descendants together form a
process group. When a user sends a signal from the keyboard, the signal is delivered
to all members of the process group currently associated with the keyboard (usually
all active processes that were created in the current window). Individually, each
process can catch the signal, ignore the signal, or take the default action, which is to
be killed by the signal.
As another example of where the process hierarchy plays a key role, let us look at
how UNIX initializes itself when it is started, just after the computer is booted. A
special process, called init, is present in the boot image. When it starts running, it
reads a file telling how many terminals there are. Then it forks off a new process per
terminal. These processes wait for someone to log in. If a login is successful, the
login process executes a shell to accept commands. These commands may start up
more processes, and so forth. Thus, all the processes in the whole system belong to a
single tree, with init at the root.
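On a UNIX-like system, this tree grows through the fork() system call. The sketch below is a minimal illustration of a parent creating one child, the same primitive init uses to spawn a process per terminal.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Minimal illustration of how the UNIX process tree grows: the parent
 * forks a child, which could in turn exec a shell or fork further
 * descendants, all tracing back to init at the root. */
int main(void) {
    pid_t pid = fork();            /* create a child process */
    if (pid == 0) {
        printf("child:  pid=%d, parent=%d\n", getpid(), getppid());
        _exit(0);
    } else if (pid > 0) {
        waitpid(pid, NULL, 0);     /* parent waits for the child */
        printf("parent: pid=%d created child %d\n", getpid(), pid);
    } else {
        perror("fork");
    }
    return 0;
}
```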
In contrast, Windows has no concept of a process hierarchy. All processes are equal.
The only hint of a process hierarchy is that when a process is created, the parent is
given a special token (called a handle) that it can use to control the child. However, it
is free to pass this token to some other process, thus invalidating the
hierarchy. Processes in UNIX cannot disinherit their children.
2.5 Problems of concurrent processes
Concurrency is the execution of multiple instruction sequences at the same time. It
happens in the operating system when several process threads are running in parallel.
Running process threads communicate with each other through shared memory
or message passing. Concurrency involves sharing of resources, which can result in
problems like deadlocks and resource starvation.
Concurrency underlies techniques such as coordinating the execution of processes, memory
allocation, and execution scheduling to maximize throughput.
Principles of Concurrency :
Both interleaved and overlapped processes can be viewed as examples of concurrent
processes, and they present the same problems.
The relative speed of execution cannot be predicted. It depends on the following:
The activities of other processes
The way operating system handles interrupts
The scheduling policies of the operating system
Problems in Concurrency :
Sharing global resources –
Sharing global resources safely is difficult. If two processes both make use of a
global variable and both perform reads and writes on that variable, then the order in
which the various reads and writes are executed is critical.
Optimal allocation of resources –
It is difficult for the operating system to manage the allocation of resources optimally.
Locating programming errors –
It is very difficult to locate a programming error because reports are usually not
reproducible.
Locking the channel –
It may be inefficient for the operating system to simply lock the channel and prevent
its use by other processes.
Advantages of Concurrency :
Running of multiple applications –
Concurrency enables running multiple applications at the same time.
Better resource utilization –
Resources that are unused by one application can be used by other
applications.
Better average response time –
Without concurrency, each application has to be run to completion before the next one
can be run.
Better performance –
It enables better performance by the operating system. When one application uses
only the processor and another application uses only the disk drive, the time to run
both applications concurrently to completion will be shorter than the time to run each
application consecutively.
Drawbacks of Concurrency :
It is required to protect multiple applications from one another.
It is required to coordinate multiple applications through additional mechanisms.
Additional performance overheads and complexities in operating systems are required
for switching among applications.
Sometimes running too many applications concurrently leads to severely degraded
performance.
Issues of Concurrency :
Non-atomic –
Operations that are non-atomic but interruptible by multiple processes can cause
problems.
Race conditions –
A race condition occurs when the outcome depends on which of several processes gets
to a point first.
Blocking –
Processes can block waiting for resources. A process could be blocked for a long period
of time waiting for input from a terminal. If the process is required to periodically
update some data, this would be very undesirable.
Starvation –
It occurs when a process does not obtain service to progress.
Deadlock –
It occurs when two processes are blocked and hence neither can proceed to execute.
Mutual exclusion is a basic synchronization primitive used to ensure thread safety when
accessing shared variables. The mutual exclusion programming interfaces available in the
five programming environments are reviewed, underlining the common basic features and
the different additional features proposed by each one of them. Then, atomic operations
are introduced as an efficient alternative to mutual exclusion in some specific cases, and
the way they operate in OpenMP is reviewed. Finally, some utilities proposed by
Threading Building Blocks (TBB)—basically, extensions of the Standard Template
Library containers having internally built in thread safety—are introduced.
Mutual exclusion locks are a commonly used mechanism for synchronizing processes or
threads that need access to some shared resource in parallel programs. They work as their
name suggests: if a thread "locks" a resource, another thread that wishes to access it must
wait until the first thread unlocks it. Once that happens, the second thread locks the resource
for as long as it is using it. The program's threads must be disciplined to unlock the resource
as soon as they are done using it, to keep the program execution flowing smoothly.
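A minimal sketch of this discipline using POSIX threads is shown below (compile with -pthread); two threads increment a shared counter, and the lock/unlock pair around the critical section prevents lost updates.

```c
#include <pthread.h>
#include <stdio.h>

/* Two threads increment a shared counter; the mutex guarantees that
 * only one thread at a time executes the read-modify-write. Without
 * the lock/unlock pair, updates could be lost to a race. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* wait until the resource is free */
        counter++;                     /* critical section                */
        pthread_mutex_unlock(&lock);   /* release as soon as possible     */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 2000000 with the mutex */
    return 0;
}
```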
2.7 Synchronization
For example, consider a bank that stores the account balance of each customer in the
same database. Now suppose you initially have x rupees in your account. Now, you
take out some amount of money from your bank account, and at the same time,
someone tries to look at the amount of money stored in your account. As you are
taking out some money from your account, after the transaction, the total balance left
will be lower than x. But, the transaction takes time, and hence the person reads x as
your account balance which leads to inconsistent data. If in some way, we could make
sure that only one process occurs at a time, we could ensure consistent data.
Let us take a look at why exactly we need process synchronization. For example, if
process 1 is trying to read the data present at a memory location while
process 2 is trying to change the data at the same location, there is a
high chance that the data read by process 1 will be incorrect.
2.8 Deadlock
A process in an operating system uses resources in the following way:
1. Requests the resource
2. Uses the resource
3. Releases the resource
Deadlock is a situation where a set of processes is blocked because each process is holding
a resource and waiting for another resource acquired by some other process.
Consider an example where two trains are coming toward each other on the same track and
there is only one track: neither train can move once they are in front of each other. A similar
situation occurs in operating systems when two or more processes hold some resources and
wait for resources held by the other(s). For example, in the diagram below, process 1 is
holding resource 1 and waiting for resource 2, which is acquired by process 2, and process 2
is waiting for resource 1.
4) Banker's Algorithm
Let us consider the following snapshot (the Allocation and Max matrices for three resource
types) for understanding the Banker's algorithm:

Process | Allocation | Max
P1 | 2 1 2 | 3 2 2
P2 | 4 0 1 | 9 0 2
P3 | 0 2 0 | 7 5 3
P4 | 1 1 2 | 1 1 2
Solution:
1. The content of the Need matrix can be calculated by using the formula given below:
Need = Max − Allocation
This gives Need(P1) = (1, 1, 0), Need(P2) = (5, 0, 1), Need(P3) = (7, 3, 3), Need(P4) = (0, 0, 0).
Safe sequence:
Available = (2, 1, 0)
For P2: Need = (5, 0, 1), so Need <= Available is false, and P2 must wait.
For P1: Need = (1, 1, 0) <= (2, 1, 0), so the request of P1 is granted.
Available = (2, 1, 0) + Allocation of P1 = (2, 1, 0) + (2, 1, 2)
Available = (4, 2, 2)
For P4: Need = (0, 0, 0) <= (4, 2, 2), so the request of P4 is granted.
Available = (4, 2, 2) + (1, 1, 2) = (5, 3, 4)
For P2: Need = (5, 0, 1) <= (5, 3, 4), so the request of P2 is now granted.
Available = (5, 3, 4) + (4, 0, 1)
Available = (9, 3, 5)
For P3: Need = (7, 3, 3) <= (9, 3, 5), so the request of P3 is granted.
Available = (9, 3, 5) + (0, 2, 0) = (9, 5, 5)
The safe sequence is <P1, P4, P2, P3>.
The system can allocate all the needed resources to each process, so we can say that the
system is in a safe state.
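The safety check walked through above can also be expressed compactly in code. The sketch below hard-codes the snapshot from this example and prints the same safe sequence <P1, P4, P2, P3>.

```c
#include <stdio.h>

#define P 4   /* processes      */
#define R 3   /* resource types */

/* Banker's safety algorithm: repeatedly pick any unfinished process
 * whose Need fits in Available, pretend it runs to completion, and
 * reclaim its Allocation. */
int main(void) {
    int alloc[P][R] = { {2,1,2}, {4,0,1}, {0,2,0}, {1,1,2} };
    int max[P][R]   = { {3,2,2}, {9,0,2}, {7,5,3}, {1,1,2} };
    int avail[R]    = { 2, 1, 0 };
    int need[P][R], finished[P] = { 0 }, seq[P], count = 0;

    for (int i = 0; i < P; i++)             /* Need = Max - Allocation */
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    while (count < P) {
        int progress = 0;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            int ok = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { ok = 0; break; }
            if (ok) {                        /* Pi can run to completion */
                for (int j = 0; j < R; j++)
                    avail[j] += alloc[i][j]; /* reclaim its allocation   */
                finished[i] = 1;
                seq[count++] = i;
                progress = 1;
            }
        }
        if (!progress) { printf("unsafe state\n"); return 1; }
    }
    printf("safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", seq[i] + 1);
    printf("\n");
    return 0;
}
```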
2.10 Device Management
Device management in an operating system means controlling the Input/Output devices,
such as disks, microphones, keyboards, printers, magnetic tape, USB ports, camcorders,
scanners, and other accessories and supporting units, such as control channels. A process
may require various resources, including main memory, file access, access to disk drives,
and others. If the resources are available, they can be allocated and control returned to the
CPU; otherwise, the process must be postponed until adequate resources become available.
The system has multiple devices, and in order to handle these physical or virtual devices, the
operating system requires a separate program known as a device controller.
It also determines whether the requested device is available.
2.10.1. The fundamentals of I/O devices may be divided into three categories:
1. Boot Device
2. Character Device
3. Network Device
Boot Device
It stores data in fixed-size blocks, each with its unique address. For example- Disks.
Character Device
It delivers or accepts a stream of characters, none of which can be addressed individually.
For example, keyboards and printers.
Network Device
It is used for transmitting data packets.
The operating system (OS) handles communication with the devices via their drivers. The OS
component gives a uniform interface for accessing devices with various physical features.
There are various functions of device management in the operating system. Some of them are
as follows:
1. It keeps track of data, status, location, uses, etc. The file system is a term used to
define a group of facilities.
2. It enforces the pre-determined policies and decides which process receives the device
when and for how long.
3. It improves the performance of specific devices.
4. It monitors the status of every device, including printers, storage drivers, and other
devices.
5. It allocates and effectively deallocates devices. Deallocation occurs at two levels:
first, when an I/O command is issued and the device is temporarily freed;
second, when the job is completed and the device is permanently released.
2.10.2. Types of devices
There are three types of Operating system peripheral devices: dedicated, shared, and virtual.
These are as follows:
1. Dedicated Device
In device management, some devices are allocated or assigned to only one task at a time until
that job releases them. Devices such as plotters, printers, tape drivers, and other similar
devices necessitate such an allocation mechanism because it will be inconvenient if multiple
people share them simultaneously. The disadvantage of such devices is the inefficiency
caused by allocating the device to a single user for the whole duration of task execution, even
if the device is not used 100% of the time.
2. Shared Devices
These devices, such as disks, can be allocated to several processes at a time, and their use is
interleaved according to predetermined policies.
3. Virtual Devices
Virtual devices are a hybrid of the two devices, and they are dedicated devices that have been
transformed into shared devices. For example, a printer can be transformed into a shareable
device by using a spooling program that redirects all print requests to a disk. A print job is
not sent directly to the printer; however, it is routed to the disk until it is fully prepared with
all of the required sequences and formatting, at which point it is transmitted to the printers.
The approach can transform a single printer into numerous virtual printers, improving
performance and ease of use.
Here, you will learn the features of device management in the operating system. Various
features of the device management are as follows:
1. The OS interacts with the device controllers via the device drivers while allocating the
device to the multiple processes executing on the system.
2. Device drivers can also be thought of as system software programs that bridge
processes and device controllers.
3. The device management function's other key job is to implement the API.
4. Device drivers are software programs that allow an operating system to control the
operation of numerous devices effectively.
5. The device controller used in device management operations mainly contains three
registers: command, status, and data.
2.10.3 File Systems
File attributes are configuration and information related to files. These attributes grant or
deny the requests of a user/process to access, modify, relocate, or delete a file. Common
examples of file attributes include the file's name, size, type, owner, and protection settings.
Files can be accessed through the following mechanisms:
1. Direct
This method represents a file's disk model. Just like a disk, the direct access mechanism
allows random access to any file block. A file is divided into a number of blocks of the same
size, and the file is viewed as an ordered sequence of these blocks. Thus the OS can perform
a read or write operation on any block at random. Following are the three operations under
the direct mechanism: read block n, write block n, and position to block n.
2. Sequential Access
This is a simple way to access the information in a file. The contents of a file are accessed
sequentially, one record after another. This method is used by editors and compilers. Tape
drives use a sequential method, processing a memory block at a time; they were a common
form of storage medium in earlier computers. Following are the three operations in
sequential access: read next, write next, and reset (rewind) to the beginning of the file.
3. Indexed Access
This method is a variant of the direct access method. It maintains an index that contains the
addresses of the file blocks. To access a record, the OS first searches for its address in the
index, which then points to the actual address of the block of the file that holds the required
record.
File Types:
There are a large number of file types. Each has a particular purpose. The type of a file
indicates its use cases, contents, etc. Some common types are:
1. Media:
Media files store media data such as images, audio, icons, video, etc. Common
extensions: img, mp3, mp4, jpg, png, flac, etc.
2. Programs:
These files store code, markup, commands, scripts, and are usually executable. Common
extensions: c, cpp, java, xml, html, css, js, ts, py, sql, etc.
3. System:
These files are present with the OS for its internal use. Common extensions: bin, sh, bat, dll,
etc.
5. Document:
These files are used for managing office programs such as documents, spreadsheets, etc.
Common extensions: xl, doc, docx, pdf, ppt, etc.
6. Miscellaneous:
File Types in an OS
There are numerous file types that an operating system uses internally and are not generally
used or required by the system user. These files could be application software files, kernel
files, configuration files, metadata files, etc. Windows supports the following two file types:
1. Regular Files
Regular files consist of information related to the user. The files are usually either ASCII or
binary. ASCII files contain lines of text. The major benefit of an ASCII file is that it can be
displayed or printed as it is, and it can be edited using a text editor.
Binary files on printing may give some random junk content. Usually, a binary file would
have some sort of internal structure that is only known to the program that uses it. A binary
file is a sequence of bytes, which if is in the proper format, can be executed by the operating
system. Regular files are supported by both Windows as well as UNIX-based operating
systems.
2. Directories
A directory in the filesystem is a structure that contains references to other files and possibly
other directories. Files could be arranged by storing related files in the same directory.
Directories are supported by both Windows as well as UNIX-based operating systems.
3. Character Special Files
A character special file provides access to an I/O device. Examples of character special files
include a terminal file, a system console file, a NULL file, a file descriptor file, etc.
Each character special file has a device major number and a device minor number. The
device major number associated with a character special file identifies the device type. The
device minor number associated with a character special file identifies a specific device of a
given device type. Character special files are supported by UNIX-based operating systems.
4. Block Special Files
Block special files enable buffered access to hardware devices. They also provide some
abstraction from device specifics. Unlike character special files, block special files always
allow the programmer to read and write a block of any size or alignment. Block special files
are supported by UNIX-based operating systems.
Functions of a File
1. File
2. Directory
3. Partition
A part of the storage medium is virtually separate from the rest of the storage.
4. Access Mechanism
5. File Extension
A label appended to the name of a file after a dot; it indicates the purpose and contents of
the file.
Space allocation
1. Contiguous
In this method, every file occupies a set of consecutive addresses on the storage (secondary
memory/disk). Each directory entry contains the number of blocks, the starting address of
the first block, and the file name. When writing, if the file size may grow, either extra
space is left after the file, or the file is copied somewhere else with no extra space left behind.
Contiguous Allocation
2. Indexed
A set of pointers is maintained in an index table, which is in turn stored in several
index blocks. In an index block, the i-th entry (the pointer at position i) holds the disk
address of the i-th file block.
Index Allocation
3. Linked
Each data block in the file contains the address of the next block. Each directory entry
contains the file name, a block address, and (optionally) a pointer to the last block. In this
file allocation method, each file is treated as a linked list of disk blocks. In the linked space
allocation method, disk blocks need not be assigned to a file in a contiguous (consecutive)
manner on the disk.
Linked Allocation
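The pointer-chasing involved in linked allocation can be sketched with a small next-block table; all block numbers below are made up for illustration.

```c
#include <stdio.h>

#define BLOCKS 16
#define END_OF_FILE -1   /* marks the last block of a file */

/* Linked allocation in miniature: next_block[b] holds the number of
 * the block that follows block b in the file, so reading a file means
 * chasing pointers from its starting block. Values are made up. */
static int next_block[BLOCKS] = {
    [3] = 7, [7] = 12, [12] = END_OF_FILE   /* file: 3 -> 7 -> 12 */
};

int main(void) {
    int start = 3;                           /* from the directory entry */
    printf("file blocks:");
    for (int b = start; b != END_OF_FILE; b = next_block[b])
        printf(" %d", b);
    printf("\n");                            /* prints: 3 7 12 */
    return 0;
}
```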
Directory Structure
For any particular file system, there is a starting point. From the starting point, we can
locate files and/or directories (folders). The directories themselves can also contain more
directories and files. This system of directories and files is known as a file tree. In nature,
trees begin with a root and grow to leaves; in the computing world this is the same, except
that in a file system the root starts at the top and the tree grows down from it.
The root of a file tree is the beginning of the file system, and all other nodes trace back to it.
In the image below, the node labeled “A” is the root of the system.
Directory Structure
3 Multiprocessor and Multicore Operating Systems
3.1 Introduction
A multiprocessor system with multiple CPUs allows programs to be processed
simultaneously. On the other hand, the multicore system is a single processor with
multiple independent processing units called cores that may read and execute program
instructions.
There are various advantages and disadvantages of the multiprocessor system. Some
advantages and disadvantages of the multiprocessor system are as follows:
Advantages
There are various advantages of the multiprocessor system. Some advantages of the
multiprocessor system are as follows:
1. It is a very reliable system because multiple processors may share their work between
the systems, and the work is completed with collaboration.
2. Parallel processing is achieved via multiprocessing.
3. If multiple processors work at the same time, the throughput may increase.
4. Multiple processors may execute multiple processes simultaneously.
Disadvantages
There are various disadvantages of the multiprocessor system. Some disadvantages of the
multiprocessor system are as follows:
There are various advantages and disadvantages of the multicore system. Some advantages
and disadvantages of the multicore system are as follows:
Advantages
There are various advantages of the multicore system. Some advantages of the multicore
system are as follows:
There are various disadvantages of the multicore system. Some disadvantages of the
multicore system are as follows:
Feature | Multiprocessor system | Multicore system
Traffic | It has higher traffic than the multicore system. | It has less traffic than the multiprocessors.
Cost | It is more expensive as compared to a multicore system. | These are cheaper than the multiprocessors system.
Configuration | It requires complex configuration. | It doesn't need to be configured.
Symmetric Multiprocessing
Asymmetric Multiprocessing
Here, all the processors share the physical memory in a centralized manner, with equal
access time to all the memory words. Each processor may have a private cache memory.
The same rule applies to peripheral devices. When all the processors have equal access
to all the peripheral devices, the system is called a symmetric multiprocessor. When only
one or a few processors can access the peripheral devices, the system is called an
asymmetric multiprocessor. When a CPU wants to access a memory location, it checks
whether the bus is free, then sends the request to the memory interface module and waits for
the requested data to be available on the bus. Multicore processors are small UMA
multiprocessor systems, where the first shared cache is effectively the communication
channel. Fig 1 shows the uniform memory access model. Shared memory can quickly
become a bottleneck for system performance, since all processors must synchronize on
the single bus and memory access.
In the NUMA multiprocessor model, the access time varies with the location of the memory
word. Here, the shared memory is physically distributed among all the processors, as
local memories. The collection of all local memories forms a global address space which
can be accessed by all the processors. NUMA systems also share CPUs and the address
space, but each processor has a local memory, visible to all other processors. In NUMA
systems, access to local memory blocks is quicker than access to remote memory blocks.
Programs written for UMA systems run with no change on NUMA ones, though possibly
with different performance because of slower access times to remote memory blocks. Single-
bus UMA systems are limited in the number of processors, and costly hardware is
necessary to connect more processors. Current technology prevents building UMA
systems with more than 256 processors. To build larger systems, a compromise is
mandatory: not all memory blocks can have the same access time with respect to each
CPU. Since all NUMA systems have a single logical address space shared by all CPUs,
while physical memory is distributed among processors, there are two types of memories:
local and remote memory. The figure represents the non-uniform memory access model.
Caching can alleviate the problem of remote data access, but brings up the cache
coherency issue.
One obvious method to enforce coherency is bus snooping, but this technique gets too
expensive beyond a certain number of CPUs, and it is much too difficult to implement in
systems that do not rely on bus-based interconnections.
The common approach in CC-NUMA systems with many CPUs to enforce cache coherency
is the directory-based protocol.
The basic idea is to associate each node in the system with a directory for its RAM blocks: a
database stating in which cache a block is located, and what its state is.
When a block of memory is addressed, the directory in the node where the block is located is
queried, to learn whether the block is in any cache and, if so, whether it has been changed
with respect to the copy in RAM.
Since a directory is queried on every access by an instruction to the corresponding memory
block, it must be implemented with very fast hardware, for instance with an associative
cache, or at least with static RAM.
Cache-Coherent NUMA
3.4.1 Message Passing
Message passing means the way a message can be sent from one end to the other, whether
between a client and a server or from one node to another node. The formal
model for distributed message passing has two timing models: one synchronous and the
other asynchronous.
The fundamental points of message passing are:
1. In message-passing systems, processors communicate with one another by sending and
receiving messages over a communication channel. So how should the arrangement be
done?
2. The pattern of connections provided by the channels is described by some topology.
3. The collection of channels is called a network.
4. By the definition of distributed systems, we know that they are geographically
distributed sets of computers, so it is not possible for every computer to directly connect
to every other node.
5. So all channels in the message-passing model are private.
6. The sender decides what data has to be sent over the network; an example is making a
phone call.
7. The data is only fully communicated after the destination worker decides to receive the
data, just as when another person receives your call and starts to reply to you.
8. There is no time barrier: it is in the hands of the receiver after how many rings to answer
your call, and they can make you wait forever by not picking up.
9. Successful network communication needs active participation from both sides.
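On a UNIX-like system, a pipe is a simple private channel of exactly this kind. The sketch below sends one message from a child (the sender) to its parent (the receiver); as described above, the transfer only completes when the receiver reads.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* One-directional message passing over a private channel (a pipe):
 * the child is the sender, the parent is the receiver. */
int main(void) {
    int fd[2];
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                       /* child: the sender    */
        close(fd[0]);                        /* close the read end   */
        const char *msg = "hello via message passing";
        write(fd[1], msg, strlen(msg) + 1);  /* send the message     */
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                            /* parent: the receiver */
    ssize_t n = read(fd[0], buf, sizeof buf);
    if (n > 0) printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);                              /* reap the child       */
    return 0;
}
```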
A mobile operating system (OS) is software that allows smartphones, tablet PCs (personal
computers) and other devices to run applications and programs. A mobile OS typically starts
up when a device powers on, presenting a screen with icons or tiles that present information
and provide application access. Mobile operating systems also manage cellular and wireless
network connectivity, as well as phone access.
An operating system has many functions. Let us explore them. Some functions of a mobile
operating system are given below:
1. Device Management: Operations may require the use of devices, and the operating
system is in charge of allocating and controlling them.
4. File Management: On a computer, files are organised into folders, which the operating
system keeps track of.
5. Security: Through authentication, the OS keeps the system and programmes safe and
secure. The user's legitimacy is determined by their user id and password.
6. Other Functions: Other features of the operating system include:
Detection of errors.
Keeping track of the system's performance.
Interaction between various software applications.
As we have discussed above, there are a lot of mobile operating systems in the market.
Some of them are listed below.
Android is an open source and Linux-based Operating System for mobile devices such as
smartphones and tablet computers. Android was developed by the Open Handset Alliance,
led by Google, and other companies.
Android offers a unified approach to application development for mobile devices, which
means developers need only develop for Android, and their applications should be able to run
on different devices powered by Android.
The first beta version of the Android Software Development Kit (SDK) was released by
Google in 2007, whereas the first commercial version, Android 1.0, was released in
September 2008.
On June 27, 2012, at the Google I/O conference, Google announced the next Android
version, 4.1 Jelly Bean. Jelly Bean is an incremental update, with the primary aim of
improving the user interface, both in terms of functionality and performance.
The source code for Android is available under free and open source software licenses.
Google publishes most of the code under the Apache License version 2.0 and the rest, Linux
kernel changes, under the GNU General Public License version 2.
Features of Android
Android is a powerful operating system competing with Apple 4GS and supports great
features. A few of them are listed below −
2 Connectivity
GSM/EDGE, IDEN, CDMA, EV-DO, UMTS, Bluetooth, Wi-Fi, LTE, NFC and
WiMAX.
3 Storage
SQLite, a lightweight relational database, is used for data storage purposes.
4 Media support
H.263, H.264, MPEG-4 SP, AMR, AMR-WB, AAC, HE-AAC, AAC 5.1, MP3, MIDI,
Ogg Vorbis, WAV, JPEG, PNG, GIF, and BMP.
5 Messaging
SMS and MMS
6 Web browser
Based on the open-source WebKit layout engine, coupled with Chrome's V8 JavaScript
engine supporting HTML5 and CSS3.
7 Multi-touch
Android has native support for multi-touch which was initially made available in
handsets such as the HTC Hero.
8 Multi-tasking
Users can jump from one task to another, and at the same time various applications can
run simultaneously.
9 Resizable widgets
Widgets are resizable, so users can expand them to show more content or shrink them
to save space.
10 Multi-Language
Supports single direction and bi-directional text.
11 GCM
Google Cloud Messaging (GCM) is a service that lets developers send short message
data to their users on Android devices, without needing a proprietary sync solution.
12 Wi-Fi Direct
A technology that lets apps discover and pair directly, over a high-bandwidth peer-to-
peer connection.
13 Android Beam
A popular NFC-based technology that lets users instantly share, just by touching two
NFC-enabled phones together.
1. Kernel
A kernel is the core (heart) of an OS. It contains all the functions and operations that manage
the working of the OS.
2. Process Execution
The OS executes various processes so that statements can execute and the application
program is connected to the hardware. Whenever a process executes, it uses memory, space,
and other resources.
3. Interrupt
Interrupts are basically used by hardware devices to communicate with the CPU. An
interrupt is a signal that a device generates to request the CPU's attention. Whenever an
interrupt occurs, the CPU temporarily stops executing its current process.
4. Memory Management
5. Multitasking
Multitasking is performing more than one task at a time. The OS allows the user to work
with more than one process at a time without any problem.
6. Security
The OS keeps the system and programs safe and secure through authentication. A user id and
password decide the authenticity of the user.
7. User Interface
GUI stands for Graphical User Interface. As the name suggests, it provides a graphical interface
for the user to interact with the computer. It uses icons, menus, etc. to interact with the user.
Moreover, the user can easily interact by just clicking these items. Therefore, it is very user
friendly and there is no need to remember any commands.
The distributed operating system consists of different computers connected by a
network within a virtual shell. A user interacts with the virtual shell in order to perform
different tasks as the need arises, while the distributed operating system architecture
delegates the responsibility of performing a task to one or more computers within the virtual
shell.
The dotted circle (boundary) in the figure gives an idea of the user's view of the distributed
operating system; the internal details are not visible to a user. The developer's view, however,
includes the components available within the dotted circle.
The Figure 1.1 shows the availability of components from 1 up to N, which can vary from one
scenario to another. The components within a distributed operating system can be computers
with different operating systems, located at different sites for execution. All the
components are connected using a strong local area network, as the network plays a vital role
in the execution of a task within a distributed architecture. The operating system generally
provides different functions like process management, input/output management, memory
management, file organization, security and protection, and network management. The
distributed operating system has different components which are responsible for different
functions of an operating system, and all these components are connected with a network.
Whenever a user submits a command to a distributed operating system, the instruction may
require the services of one or more than one components of an operating system. The user
gets a feel as if the whole system is a single unit but internally the whole system consists of
sub systems which work in tandem in order to achieve the centralized objective of a
distributed operating system. Plain distributed processing shares the workload among
computers that can communicate with one another. True distributed processing has separate
computers to perform different tasks in such a way that their combined work can contribute
to a goal. The latter type of processing requires a highly structured environment that allows
hardware and software to communicate, share resources, and exchange information freely.
1. Transparency :
An important goal of a distributed system is to hide the fact that its processes and
resources are physically distributed across multiple computers. A distributed system
that is capable of presenting itself to users and applications as if it were a single
computer system is called transparent.
The concept of transparency can be applied to many aspects of a distributed system as
shown in table.
Different Forms of Transparency –
2. Scalability :
The general trend in distributed systems is towards larger systems. This observation
has implications for distributed file system design. Algorithms that work well for
systems with 100 machines may work poorly for systems with 1,000 machines and not
at all for systems with 10,000 machines. For starters, centralized algorithms do not scale
well. If opening a file requires contacting a single centralized server to record the fact
that the file is open, then that server will eventually become a bottleneck as the system
grows.
3. Reliability :
The main goal of building distributed systems was to make them more reliable than
single-processor systems. The idea is that if some machine goes down, some other
machine takes over the job. In other words, theoretically the reliability of the overall
system can be a Boolean OR of the component reliabilities. For example, with four file
servers, each with a 0.95 chance of being up at any instant, the probability of all four
being down simultaneously is 0.05^4 = 0.000006, so the probability of at least one being
available is (1 − 0.000006) = 0.999994, far better than any individual server.
4. Performance :
Building a transparent, flexible, reliable distributed system is useless if it is slow as
molasses. In particular, an application running on a distributed system should not perform
appreciably worse than the same application running on a single processor. Various
performance metrics can be used. Response time is one, but so are throughput, system
utilization, and the amount of network capacity consumed. Furthermore, the results of any
benchmark are often highly dependent on the nature of the benchmark. A benchmark that
involves a large number of independent, highly CPU-bound computations may give
radically different results from a benchmark that consists of scanning a single large file for
some pattern.
There are numerous distributed operating system examples. Here are a few of them:
Solaris – Made for multiprocessor SUN workstations.
OSF/1 – Created by the Open Software Foundation and is Unix compatible.
Micros – While allocating particular jobs to all nodes present in the system, the MICROS OS
ensures a balanced data load.
DYNIX – Created specifically for Symmetry multiprocessor systems.
Locus – It can access both local and distant files at the same time, regardless of location.
Mach – It has multithreading and multitasking capabilities.
Benefits
Let us see the major benefits of virtual machines for operating-system designers and users
which are as follows −
Multiple operating system environments can exist simultaneously on the same
machine, isolated from each other.
A virtual machine can offer an instruction set architecture that differs from that of the
real computer.
Virtual machines allow easy maintenance, application provisioning,
availability, and convenient recovery.
Virtual machines encourage users to go beyond the limitations of hardware to achieve
their goals.
The operating system achieves virtualization with the help of specialized software called a
hypervisor, which completely emulates the PC client or server CPU, memory, hard disk,
network, and other hardware resources, enabling virtual machines to share those resources.
The hypervisor can emulate multiple virtual hardware platforms that are isolated from each
other, allowing virtual machines to run Linux and Windows server operating systems on the
same underlying physical host.
Extra Reading: Load balancing
Load balancing is a core networking solution used to distribute traffic across multiple servers
in a server farm. Load balancers improve application availability and responsiveness and
prevent server overload. Each load balancer sits between client devices and backend servers,
receiving and then distributing incoming requests to any available server capable of fulfilling
them.
How it Works
A load balancer may be:
A physical device, a virtualized instance running on specialized hardware, or a software
process
Incorporated into application delivery controllers (ADCs) designed to more broadly improve
the performance and security of three-tier web and micro services-based applications,
regardless of where they’re hosted
Able to leverage many possible load balancing algorithms, including round robin, server
response time, and the least-connection method, to distribute traffic in line with current
requirements (a round-robin sketch follows this list)
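As an illustration, the sketch below implements the simplest of these, round robin with health checking: requests are handed to backends in cyclic order, skipping servers that failed their last health check. The server addresses and health flags are made-up values.

```c
#include <stdio.h>

#define SERVERS 3

/* Round-robin server selection: requests go to servers in cyclic
 * order, skipping any server currently marked unhealthy, so no single
 * backend is overworked. */
static const char *servers[SERVERS] = { "10.0.0.1", "10.0.0.2", "10.0.0.3" };
static int healthy[SERVERS] = { 1, 0, 1 };   /* health-check results */
static int cursor = 0;

static const char *pick_server(void) {
    for (int tried = 0; tried < SERVERS; tried++) {
        const char *s = servers[cursor];
        int ok = healthy[cursor];
        cursor = (cursor + 1) % SERVERS;     /* advance for next request */
        if (ok) return s;
    }
    return NULL;                             /* no healthy backend */
}

int main(void) {
    for (int req = 0; req < 5; req++)        /* five incoming requests */
        printf("request %d -> %s\n", req, pick_server());
    return 0;
}
```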
Load balancers detect the health of backend resources and do not send traffic to servers that
are not able to fulfil requests. Regardless of whether it’s hardware or software, or what
algorithm(s) it uses, a load balancer disburses traffic to different web servers in the resource
pool to ensure that no single server becomes overworked and subsequently unreliable. It
effectively minimizes server response time and maximizes throughput.
The role of a load balancer is sometimes likened to that of a traffic cop, as it is meant to
systematically route requests to the right locations at any given moment, thereby preventing
costly bottlenecks and unforeseen incidents. Load balancers should ultimately deliver the
performance and security necessary for sustaining complex IT environments, as well as the
intricate workflows occurring within them.
Load balancing is the most scalable methodology for handling the multitude of requests from
modern multi-application, multi-device workflows. In tandem with platforms that enable
seamless access to the numerous applications and desktops within today’s digital workspaces,
load balancing supports a more consistent and dependable end-user experience for
employees.
Real-time operating system (RTOS) is an operating system with two key features:
predictability and determinism. In an RTOS, repeated tasks are performed within a tight time
boundary, while in a general-purpose operating system, this is not necessarily so.
Predictability and determinism, in this case, go hand in hand: We know how long a task will
take, and that it will always produce the same result.
RTOSes are subdivided into "soft" real-time and "hard" real-time systems. Soft real-time
systems operate within a few hundred milliseconds, at the scale of a human reaction. Hard
real-time systems, however, provide responses that are predictable within tens of
milliseconds or less.
An RTOS is a type of operating system, but it is vastly different from the kind most
consumers are familiar with. Operating systems in phones or personal computers are,
comparatively, bloated with apps and features; they must be able to support anything the user
might want to do today. An RTOS, on the other hand, is streamlined, meant to execute its
tasks quickly and effectively. It is a fraction of the size, sometimes only a few megabytes (vs.
more than 20 gigabytes), with a simple graphical interface, and it lacks many familiar
features, such as a web browser.
Characteristics of an RTOS
High performance: RTOS systems are fast and responsive, often executing actions
within a small fraction of the time needed by a general OS.
Safety and security: RTOSes are frequently used in critical systems where failures can
have catastrophic consequences, such as robotics or flight controllers. To protect
those around them, they must have higher security standards and more reliable safety
features.
Small footprint: versus their hefty general OS counterparts, RTOSes weigh in at just a
fraction of the size. For example, Windows 10, with post-install updates, takes up
approximately 20 GB. VxWorks®, on the other hand, is approximately 20,000 times
smaller, measured in the low single-digit megabytes.
Use of RTOS
Due to its benefits, a real-time operating system is most often used in an embedded system —
that is, a system that operates behind the scenes of a larger operation. The RTOS usually has
no graphical interface. Occasionally, multiple OSes are integrated simultaneously, to provide
operational capability coupled with the usability of a general-purpose OS.
RTOSes are often in intelligent edge devices, also known as electromechanical edge or cyber-
physical systems. This means that the device is both producing and operating upon data. So a
car, for example, would be able to monitor its surroundings and act upon them
instantaneously on its own. Such devices often couple artificial intelligence or machine
learning, or both, with real-time components to increase the capabilities of the underlying
structure.
Some companies try to produce their own RTOS in house, tailor-made for their project,
instead of buying a commercial off-the-shelf operating system. This has some advantages:
The operating system is designed specifically for the use case, and the company understands
its mechanics and inner workings. However, this approach is often more expensive and more
time-consuming, and developers who are not used to working on operating systems take a
great deal of time to produce one. Using a commercial system is faster, easier, and brings the
benefit of an experienced technical team that can answer questions and provide support. An
operating system is a tool, much like a hammer or a drill. While you could make one — one
that you would thoroughly understand and that might fit your project better — it would take a
lot of time, without guarantees of performance or capability.