
Bahir Dar University

Bahir Dar Institute of Technology


Faculty of Computing

Department: Software Engineering


Course: Operating Systems and Systems Programming

BiT, Bahir Dar, Ethiopia


Feb 2017 E.C.
Chapter-1
Operating System
An Operating System (OS) is software that manages and handles the hardware and software resources of a
computing device.
 Responsible for managing and controlling all activities and the sharing of computer resources among
different running applications.
 A low-level software layer that provides all the basic functions, such as processor management, memory
management, file management, etc.
 It acts like a government for your system, with different departments managing different resources.
 Examples are Linux, Unix, Windows 11, MS-DOS, Android, macOS, and iOS.

Basics:
1. Introduction
2. Types of OS
3. Functions of OS
4. System Initialization
5. Kernel in OS
6. System Call
7. Privileged Instructions
Types of Operating System

 Batch OS (e.g. transaction processing, payroll systems, etc.)

 Multi-programmed OS (e.g. Windows, UNIX, macOS, etc.)

 Timesharing OS (e.g. Multics, Linux, etc.)

 Real-Time OS (e.g. pSOS, VRTX, etc.)

 Distributed OS (e.g. LOCUS, Solaris, etc.)

Operating System Functions

 Memory and processor Management

 Network Management

 Security Management

 File Management

 Error Detection

 Job Accounting

Introduction to Operating System


An operating system acts as an intermediary between the user of a computer and the computer hardware. In short,
it is an interface between the computer hardware and the user.
 The purpose of an operating system is to provide an environment in which a user can execute programs
conveniently and efficiently.
 An operating system is software that manages computer hardware and software. The hardware must
provide appropriate mechanisms to ensure the correct operation of the computer system and to prevent
user programs from interfering with the proper operation of the system.
 The operating system is the one program running at all times on the computer (usually called the kernel),
with everything else being application programs.
 It is concerned with the assignment of resources among programs, e.g. memory, processors, and
input/output devices.

History of Operating System


The operating system has been evolving through the years. The following table shows the history of OS.

Era           | Key Developments                                                                 | Examples
1956          | The first operating system, GM-NAA I/O, was developed in 1956 for General Motors. | GM-NAA I/O (1956)
1960s         | IBM developed time-sharing systems.                                              | OS/360, DOS/360 and TSS/360
1970s         | Unix popularized simplicity and multitasking; rise of personal computers with basic OSs. | Unix (1971), CP/M (1974)
1980s         | GUI-based OSs gained traction; networking features became standard.             | Apple Macintosh (1984), Windows (1985)
1990s         | Open-source Linux emerged; GUIs in Windows and Mac OS improved.                 | Linux (1991), Windows 95 (1995)
2000s-Present | Mobile OSs dominated; cloud and virtualization technologies advanced computing. | iOS (2007), Android (2008)

An operating system is a type of software that acts as an interface between the user and the hardware. It
is responsible for handling various critical functions of the computer and for utilizing resources very
efficiently, so the operating system is also known as a resource manager. The operating system also acts
like a government: just as the government has authority over everything, the operating system has
authority over all resources. Various tasks handled by the OS are file management, task management,
garbage management, memory management, process management, disk management, I/O management,
peripherals management, etc.
Generations of Operating Systems
 1940s-1950s: Early Beginnings
o Computers operated without operating systems (OS).
o Programs were manually loaded and run, one at a time.
o The first operating system was introduced in 1956. It was a batch processing system, GM-
NAA I/O (1956), that automated job handling.
 1960s: Multiprogramming and Timesharing
o Introduction of multiprogramming to utilize CPU efficiently.
o Timesharing systems, like CTSS (1961) and Multics (1969), allowed multiple users to
interact with a single system.
 1970s: Unix and Personal Computers
o Unix (1971) revolutionized OS design with simplicity, portability, and multitasking.
o Personal computers emerged, leading to simpler OSs like CP/M (1974) and PC-DOS
(1981).
 1980s: GUI and Networking
o Graphical User Interfaces (GUIs) gained popularity with systems like Apple Macintosh
(1984) and Microsoft Windows (1985).
o Networking features, like TCP/IP in Unix, became essential.
 1990s: Linux and Advanced GUIs
o Linux (1991) introduced open-source development.
o Windows and Mac OS refined GUIs and gained widespread adoption.
 2000s-Present: Mobility and Cloud
o Mobile OSs like iOS (2007) and Android (2008) dominate.
o Cloud-based and virtualization technologies reshape computing, with OSs like Windows
Server and Linux driving innovation.
 AI Integration – (Ongoing):

With the growth of time, artificial intelligence came into the picture. Operating systems integrated
AI features such as Siri, Google Assistant, and Alexa and became more powerful and efficient in many
ways. Combined with the operating system, these AI features enable entirely new capabilities like
voice commands, predictive text, and personalized recommendations.
Note: The list above shows how the OS evolved over time by adding new features, but it does not mean
that only new-generation OSs are in use and earlier OSs are not; according to need, all of these OSs
are still used in the software industry.
Operating systems have evolved from basic program execution to complex ecosystems supporting
diverse devices and users.
Function of Operating System
 Memory management
 Process management
 File management
 Device Management
 Deadlock Prevention
 Input/Output device management
History According to Types of Operating Systems
Operating Systems have evolved in past years. It went through several changes before getting its current
form.
1. No OS – (up to the 1940s)
Before the 1940s there were no operating systems. Users had to manually type the instructions for each
task in machine language (a 0/1-based language). At that time it was very hard for users to implement
even a simple task: the process was time-consuming and not user-friendly, because machine language
required a deep level of understanding that not everyone had.
2. Batch Processing Systems – (1950s)
With the growth of time, batch processing systems came onto the market. Users could now write their
programs on punch cards and hand them to the computer operator. The operator grouped jobs of similar
types into batches and then served each batch (group of jobs) to the CPU one by one. The CPU executed
the jobs of one batch and then moved to the jobs of the next batch in a sequential manner.
3. Multiprogramming Systems – (1960s and 1970s)
Multiprogramming was the operating system where the actual revolution began. It gave users the facility
to load multiple programs into memory, with a specific portion of memory allotted to each program.
When one program is waiting for an I/O operation (which takes a long time), the OS lets the CPU switch
from that program to another (the first in the ready queue), so that execution continues without
interruption.
4. Personal Computers Systems -(1970s)
Unix (1971) revolutionized OS design with simplicity, portability, and multitasking. Personal computers
emerged, leading to simpler OSs like CP/M (1974) and PC-DOS (1981).
5. Introduction of GUI – (1980s)
With the growth of time, Graphical User Interfaces (GUIs) arrived. For the first time the OS became more
user-friendly and changed the way people interact with computers. A GUI provides the computer system
with visual elements that make the user's interaction with the computer more comfortable and
user-friendly: users can simply click on visual elements rather than typing commands. Features of the
GUI in Microsoft Windows include icons, menus, and windows.
6. Networked Systems – (1990s)
In the 1990s the popularity of computer networks was at its peak, and a special type of operating system
was needed to manage network communication. OSs like Novell NetWare and Windows NT were developed to
manage network communication, giving users the facility to work in a collaborative environment and
making file sharing and remote access very easy.
7. Mobile Operating Systems – (2000s)
The invention of smartphones created a big revolution in the software industry. To handle the operation
of smartphones, special operating systems such as iOS and Android were developed. These operating
systems were optimized over time and became more powerful.
8. AI Integration – (2010s to ongoing)
With the growth of time, artificial intelligence came into the picture. Operating systems integrated AI
features such as Siri, Google Assistant, and Alexa and became more powerful and efficient in many ways.
Combined with the operating system, these AI features enable entirely new capabilities like voice
commands, predictive text, and personalized recommendations.
Advantages of Operating System
 The operating system manages external and internal devices, for example printers, scanners, and
others.
 The operating system provides interfaces and drivers for proper communication between the system and
hardware devices.
 Allows multiple applications to run simultaneously.
 Manages the execution of processes, ensuring that the system remains responsive.
 Organizes and manages files on storage devices.
 The operating system allocates resources to various applications and ensures their efficient utilization.
Disadvantages of Operating System
 If an error occurs in your operating system, there is a chance that your data may not be
recoverable, so always keep a backup of your data.
 Threats and viruses can attack our operating system at any time, making it challenging for the OS
to keep the system protected from these dangers.
 Learning a new operating system can be time-consuming and challenging, especially for those used
to a particular operating system; for example, switching from Windows to Linux is difficult.
 Keeping an operating system up to date requires regular maintenance, which can be time-
consuming.
 Operating systems consume system resources, including CPU, memory, and storage, which can
affect the performance of other applications.

How has the development of computer hardware been impacted by the evolution of operating systems?
The design and advancement of computer hardware have been significantly influenced by the development of
operating systems. As operating systems improved, hardware producers added new features and capabilities to
their products in order to better support the functionality offered by the operating systems. For instance, the
development of memory management units (MMUs) in hardware to handle memory addressing and protection followed
the introduction of virtual memory in operating systems. Similarly, the demand for multitasking and
multiprocessing support in operating systems prompted the creation of more powerful and efficient processors
and other hardware components.
How has the development of distributed systems impacted how operating systems have changed over time?
Operating systems have been significantly impacted by the rise of distributed systems, such as client-server
architectures and cloud computing. To support network communication, distributed file systems, and resource
sharing across multiple machines, operating systems had to develop. Distributed operating systems also developed
to offer scalability, fault tolerance, and coordination in distributed environments. These modifications improved
the ability to manage resources across interconnected systems by advancing networking protocols, remote
procedure calls, and distributed file systems.

Characteristics of Operating Systems


Let us now discuss some of the important characteristic features of operating systems:
 Device Management: The operating system keeps track of all the devices. So, it is also called the
Input/Output controller that decides which process gets the device, when, and for how much time.
 File Management: It allocates and de-allocates the resources and also decides who gets the resource.
 Job Accounting: It keeps track of time and resources used by various jobs or users.
 Error-detecting Aids: These contain methods that include the production of dumps, traces, error
messages, and other debugging and error-detecting methods.
 Memory Management: It is responsible for managing the primary memory of a computer, including
which parts of it are in use and by whom; it also checks how much memory is free or used and allocates
memory to processes.
 Processor Management: It allocates the processor to a process and then de-allocates the processor
when it is no longer required or the job is done.
 Control on System Performance: It records the delays between a request for a service and the
system's response.
 Security: It prevents unauthorized access to programs and data using passwords or some kind of
protection technique.
 Convenience: An OS makes a computer more convenient to use.
 Efficiency: An OS allows the computer system resources to be used efficiently.
 Ability to Evolve: An OS should be constructed in such a way as to permit the effective development,
testing, and introduction of new system functions at the same time without interfering with service.
 Throughput: An OS should be constructed so that it can give maximum throughput (number of tasks
per unit time).

List of common Operating Systems


There are multiple types of operating systems each having its own unique features:
Windows OS
 Developer : Microsoft
 Main Features : User-friendly interface, software compatibility, hardware support, Strong gaming
support.
 Advantages : Easy to use for most users, Broad support from third-party applications, Frequent updates
and support.
 Typical Use Cases : Personal computing, Business environment, Gaming.
macOS
 Developer : Apple.
 Main Features : Sleek, intuitive user interface, Strong integration with other Apple products, Robust
security features, High performance and stability.
 Advantages : Optimized for Apple hardware, Seamless experience across Apple ecosystem, Superior
graphics and multimedia capabilities.
 Typical Use Cases : Creative industries (design, video editing, music production), Personal computing,
Professional environments.
Linux
 Developer : Community-driven (various distributions).
 Main Features : Open-source and highly customizable, Robust security and stability, Lightweight and
can run on older hardware, Large selection of distributions (e.g., Ubuntu, Fedora, Debian).
 Advantages : Free to use and distribute, Strong community support, Suitable for servers and development
environments.
 Typical Use Cases : Servers and data centers, Development and programming, Personal computing for
tech enthusiasts.
Unix
 Developer: Originally AT&T Bell Labs, various commercial and open-source versions available
 Main Features: Multiuser and multitasking capabilities, Strong security and stability, Powerful
command-line interface, Portability across different hardware platforms
 Advantages: Reliable and robust performance, Suitable for high-performance computing and servers,
Extensive support for networking
 Typical Use Cases: Servers and workstations, Development environments, Research and academic
settings
Functionalities of Operating System
 Resource Management: When parallel access happens, i.e., when multiple users are accessing the
system, the OS works as Resource Manager; its responsibility is to provide hardware to the users. This
decreases the load on the system.
 Process Management: It includes various tasks like scheduling and termination of the process. It is
done with the help of CPU Scheduling algorithms .
 Storage Management: The file system mechanism is used for the management of storage.
NTFS, CIFS, CFS, NFS, etc. are some file systems. All the data is stored on various tracks of
hard disks, all managed by the storage manager.
 Memory Management: Refers to the management of primary memory. The operating system has to keep
track of how much memory has been used and by whom. It has to decide which process needs memory
space and how much. OS also has to allocate and deallocate the memory space.
 Security/Privacy Management: Privacy is also provided by the operating system using passwords, so
that unauthorized applications cannot access programs or data. For example, Windows
uses Kerberos authentication to prevent unauthorized access to data.
The operating system as user interface:
1. User
2. System and application programs
3. Operating system
4. Hardware
Every general-purpose computer consists of hardware, an operating system, system programs, and application
programs. The hardware consists of memory, CPU, ALU, I/O devices, peripheral devices, and storage devices.
The system programs consist of compilers, loaders, editors, the OS, etc.

Conceptual View of Computer System


Every computer must have an operating system to run other programs. The operating system coordinates the use
of the hardware among the various system programs and application programs for various users. It simply provides
an environment within which other programs can do useful work.
An OS is a package of some programs that runs on a computer machine, allowing it to perform efficiently. It
manages the simple tasks of recognizing input from the keyboard, managing files and directories on disk,
displaying output on the screen, and controlling peripheral devices.
Layered Design of Operating System

Fig. Layered OS
The extended machine provides operations like context save, dispatching, swapping, and I/O initiation. The
operating system layer is located on top of the extended machine layer. This arrangement considerably simplifies
the coding and testing of OS modules by separating the algorithm of a function from the implementation of its
primitive operations. It is now easier to test, debug, and modify an OS module than in a monolithic OS. We say
that the lower layer provides an abstraction that is the extended machine. We call the operating system layer the
top layer of the OS.
Purposes and Tasks of Operating Systems
Several tasks are performed by the Operating Systems and it also helps in serving a lot of purposes which are
mentioned below. We will see how Operating System helps us in serving in a better way with the help of the task
performed by it.
Purposes of an Operating System
 It controls the allocation and use of the computing System’s resources among the various user and tasks.
 It provides an interface between the computer hardware and the programmer that simplifies and makes it
feasible for coding and debugging of application programs.
Tasks of an Operating System
1. Provides the facilities to create and modify programs and data files using an editor.
2. Access to the compiler for translating the user program from high-level language to machine language.
3. Provide a loader program to move the compiled program code to the computer’s memory for execution.
4. Provide routines that handle the details of I/O programming.
I/O System Management
The module that keeps track of the status of devices is called the I/O traffic controller. Each I/O device has a
device handler that resides in a separate process associated with that device.
The I/O subsystem consists of
 A memory management component that includes buffering, caching, and spooling.
 A general device driver interface.
Drivers for Specific Hardware Devices
Each specific hardware device requires its own driver. Closely related to the OS are the system programs
that prepare user programs to run on the hardware; below we discuss assemblers, compilers and interpreters,
and loaders.
Assembler
The input to an assembler is an assembly language program. The output is an object program plus information that
enables the loader to prepare the object program for execution. At one time, the computer programmer had at his
disposal a basic machine that interpreted, through hardware, certain fundamental instructions. He would program
this computer by writing a series of ones and zeros (machine language) and placing them into the memory of the
machine. Examples of assembly languages include x86, ARM, and MIPS assembly.
Compiler and Interpreter
The High-level languages – examples are C, C++, Java, Python, etc (around 300+ famous high-level languages)
are processed by compilers and interpreters. A compiler is a program that accepts a source program in a
high-level language and produces machine code in one go. Some of the compiled languages are FORTRAN, COBOL,
C, C++, Rust, and Go. An interpreter is a program that does the same thing but converts high-level code to
machine code line-by-line and not all at once. Examples of interpreted languages are
 Python
 Perl
 Ruby
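
To make the compile-versus-interpret distinction concrete, here is a minimal sketch in C; the file name
hello.c and the gcc commands in the comment are illustrative assumptions, not part of the course material:

    /* hello.c - a tiny program to illustrate the compile-and-run flow.
       A compiler (e.g. gcc) translates the whole file to machine code once:
           gcc hello.c -o hello    (compile step, done once)
           ./hello                 (run the produced machine code)
       An interpreter, by contrast, would translate and execute the source
       statements one at a time on every run. */
    #include <stdio.h>

    int main(void) {
        printf("Hello from compiled machine code!\n");
        return 0;
    }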
Loader
A Loader is a routine that loads an object program and prepares it for execution. There are various loading schemes:
absolute, relocating, and direct-linking. In general, the loader must load, relocate and link the object program. The
loader is a program that places programs into memory and prepares them for execution. In a simple loading scheme,
the assembler outputs the machine language translation of a program on a secondary device and a loader places it
in core memory. The loader places into memory the machine language version of the user's program and transfers
control to it. Since the loader program is much smaller than the assembler, this makes more core memory
available to the user's program.
Components of an Operating Systems
There are two basic components of an Operating System.
 Shell
 Kernel
Shell
The shell is the outermost layer of the operating system, and it handles the interaction with the user. The main
task of the shell is managing the interaction between the user and the OS. The shell enables this communication
by taking input from the user, interpreting it for the OS, and handling the output coming back from the OS. It
works as a channel of communication between the user and the OS.
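
As an illustration, the core of a shell is a read-and-execute loop built on the fork(), execvp(), and
waitpid() system calls. The following is a minimal sketch in C, assuming a Unix-like system; it handles only
single-word commands, with no argument parsing or error recovery:

    /* minishell.c - a minimal sketch of a shell's read-execute loop.
       Reads one command per line and runs it in a child process. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        char line[256];
        for (;;) {
            printf("mini> ");                  /* prompt the user             */
            if (fgets(line, sizeof line, stdin) == NULL)
                break;                         /* EOF (Ctrl-D) ends the shell */
            line[strcspn(line, "\n")] = '\0';  /* strip the trailing newline  */
            if (line[0] == '\0')
                continue;
            pid_t pid = fork();                /* create a child process      */
            if (pid == 0) {                    /* child: replace itself with  */
                char *argv[] = { line, NULL }; /* the requested program       */
                execvp(line, argv);
                perror("execvp");              /* only reached on failure     */
                exit(1);
            }
            waitpid(pid, NULL, 0);             /* parent: wait for the child  */
        }
        return 0;
    }

Real shells add argument parsing, built-in commands, pipes, and job control on top of this same basic loop.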
Kernel
The kernel is the component of the operating system that works as its core. The rest of the
components depend on the kernel for the supply of the important services provided by the operating system.
The kernel is the primary interface between the operating system and the hardware.
Functions of Kernel
The following functions are to be performed by the Kernel.
 It helps in controlling the System Calls.
 It helps in I/O Management.
 It helps in the management of applications, memory, etc.
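
For example, a user program enters the kernel whenever it needs a service such as I/O. A minimal sketch in C
using the POSIX write() system call (file descriptor 1 is standard output):

    /* syscall_demo.c - invoking the kernel through a system call.
       write() traps into the kernel, which performs the I/O on our behalf. */
    #include <unistd.h>

    int main(void) {
        const char msg[] = "written via the write() system call\n";
        /* fd 1 = standard output; the kernel copies the bytes to the terminal */
        write(1, msg, sizeof msg - 1);
        return 0;
    }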
Types of Kernel
There are four types of Kernel that are mentioned below.
 Monolithic Kernel
 Microkernel
 Hybrid Kernel
 Exokernel

Difference Between 32-Bit and 64-Bit Operating Systems

32-Bit Operating System | 64-Bit Operating System
A 32-bit OS is required for 32-bit processors, which are not capable of running a 64-bit OS. | 64-bit processors can run either a 32-bit OS or a 64-bit OS.
A 32-bit OS gives lower performance. | A 64-bit OS provides highly efficient performance.
Less data is handled at a time in a 32-bit OS than in a 64-bit OS. | A larger amount of data can be handled in a 64-bit OS.
A 32-bit OS can address 2^32 bytes of RAM. | A 64-bit OS can address 2^64 bytes of RAM.

Difference Between 32-bit and 64-bit Operating Systems


In computing, there are two types of processors, i.e., 32-bit and 64-bit processors. The processor type tells
us how much memory the processor can address from a CPU register. For instance, a 32-bit system
can access 2^32 different memory addresses, i.e. ideally 4 GB of RAM or physical memory (with special
extensions it can access more than 4 GB of RAM as well).
A 64-bit system can access 2^64 different memory addresses, i.e. around 18 quintillion bytes of RAM. In short,
any amount of memory greater than 4 GB can be easily handled by it.
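
The arithmetic can be checked directly; a minimal sketch in C (the shift expressions compute the same powers
of two discussed above):

    /* addr_space.c - addressable memory for 32-bit vs 64-bit addresses. */
    #include <stdio.h>

    int main(void) {
        unsigned long long bytes32 = 1ULL << 32;    /* 2^32 bytes           */
        printf("32-bit: %llu bytes = %llu GiB\n",
               bytes32, bytes32 >> 30);             /* 4294967296 B = 4 GiB */
        /* 2^64 overflows a 64-bit integer, so express it as 2^34 GiB:      */
        printf("64-bit: 2^64 bytes = %llu GiB (16 EiB)\n", 1ULL << 34);
        printf("this build uses %zu-bit pointers\n", 8 * sizeof(void *));
        return 0;
    }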
 What is a 32-Bit Operating System?
 What is a 64-Bit Operating System?
 Difference Between 32-bit and 64-bit Operating System
 Advantages of 64-bit Over 32-bit
What is a 32-bit Operating System?
Most computers made in the 1990s and early 2000s were 32-bit machines. The CPU register stores memory
addresses, which is how the processor accesses data from RAM. Each distinct value of a 32-bit register can
reference an individual byte in memory, so a 32-bit system can address a maximum of 4 GB (4,294,967,296 bytes)
of RAM. The actual limit is often less, around 3.5 GB, since part of the address space is reserved for uses
other than memory addresses. Most computers released over the past two decades were built on a 32-bit
architecture, hence most operating systems were designed to run on a 32-bit processor.
What is a 64-bit Operating System?
A 64-bit register can theoretically reference 18,446,744,073,709,551,616 bytes or 17,179,869,184 GB (16
exabytes) of memory. This is several million times more than an average workstation would need to access. What’s
important is that a 64-bit computer (which means it has a 64-bit processor) can access more than 4 GB of RAM.
A computer with 8 GB of RAM had better have a 64-bit processor; otherwise, at least 4 GB of the memory will be
inaccessible to the CPU.
Difference Between 32-bit and 64-bit Operating System
A major difference between 32-bit processors and 64-bit processors is the number of calculations per second they
can perform, which affects the speed at which they can complete tasks. 64-bit processors can come in dual-
core, quad-core, six-core, and eight-core versions for home computing. Multiple cores allow for an increased
number of calculations per second that can be performed, which can increase the processing power and help make
a computer run faster. Software programs that require many calculations to function smoothly can operate faster
and more efficiently on multi-core 64-bit processors, for the most part.

Feature             | 32-bit OS                                             | 64-bit OS
Memory              | Maximum of 4 GB RAM                                   | Maximum of several terabytes of RAM
Processor           | Can run on both 32-bit and 64-bit processors          | Requires a 64-bit processor
Performance         | Limited by the maximum amount of RAM it can access    | Can take advantage of more memory, enabling faster performance
Compatibility       | Can run 32-bit and 16-bit applications                | Can run 32-bit and 64-bit applications
Address Space       | Uses 32-bit address space                             | Uses 64-bit address space
Hardware support    | May not support newer hardware                        | Supports newer hardware with 64-bit drivers
Security            | Limited security features                             | More advanced security features, such as hardware-level protection
Application support | Limited support for new software                      | Supports newer software designed for 64-bit architecture
Price               | Less expensive than 64-bit OS                         | More expensive than 32-bit OS
Multitasking        | Can handle multiple tasks but with limited efficiency | Can handle multiple tasks more efficiently
Gaming              | Can run high-graphics games, but may not be as efficient as a 64-bit OS | Can run high-graphics games and handle complex software more efficiently
Virtualization      | Limited support for virtualization                    | Better support for virtualization

Advantages of 64-bit Over 32-bit


 Using 64-bit, one can do a lot of multitasking; the user can easily switch between various applications
without any hanging problems.

 Gamers can easily play high-graphics games like Modern Warfare and GTA V, or use high-end
software like Photoshop or CAD, which take a lot of memory, since a 64-bit system makes multitasking
with big software easy and efficient for users. However, upgrading the video card instead of getting a
64-bit processor may be more beneficial for gaming.

Note:
 A computer with a 64-bit processor can have a 64-bit or 32-bit version of an operating system installed.
However, with a 32-bit operating system, the 64-bit processor would not run at its full capability.
 On a computer with a 64-bit processor, we can’t run a 16-bit legacy program. Many 32-bit programs will
work with a 64-bit processor and operating system, but some older 32-bit programs may not function
properly, or at all, due to limited or no compatibility.

The fundamental goals of operating system are


 Efficient use: Ensure efficient use of a computer’s resources.
 User convenience: Provide convenient methods of using a computer system.
 Non interference: Prevent interference in the activities of its users.
Efficient use
An operating system must ensure efficient use of the fundamental computer system resources of memory, CPU,
and I/O devices such as disks and printers. Poor efficiency can result if a program does not use a resource allocated
to it. Efficient use of resources can be obtained by monitoring use of resources and performing corrective actions
when necessary. However, monitoring use of resources increases the overhead, which lowers efficiency of use. In
practice, operating systems that emphasize efficient use limit their overhead by either restricting their focus to
efficiency of a few important resources, like the CPU and the memory, or by not monitoring the use of resources
at all, and instead handling user programs and resources in a manner that guarantees high efficiency.
User convenience
In the early days of computing, user convenience was synonymous with bare necessity—the mere ability to execute
a program written in a higher level language was considered adequate. Experience with early operating systems
led to demands for better service, which in those days meant only fast response to a user request. Other facets of
user convenience evolved with the use of computers in new fields. Early operating systems had command-line
interfaces, which required a user to type in a command and specify values of its parameters. Users needed
substantial training to learn use of the commands, which was acceptable because most users were scientists or
computer professionals. However, simpler interfaces were needed to facilitate use of computers by new classes of
users. Hence graphical user interfaces (GUIs) were evolved. These interfaces used icons on a screen to represent
programs and files and interpreted mouse clicks on the icons and associated menus as commands concerning them.
In many ways, this move can be compared to the spread of car driving skills in the first half of the twentieth
century. Over a period of time, driving became less of a specialty and more of a skill that could be acquired with
limited training and experience.
Non interference
A computer user can face different kinds of interference in his computational activities. Execution of his program
can be disrupted by actions of other persons, or the OS services which he wishes to use can be disrupted in a similar
manner. The OS prevents such interference by allocating resources for exclusive use of programs and OS services,
and preventing illegal accesses to resources. Another form of interference concerns programs and data stored in
user files.
Advantages of Operating System
 It helps in managing the data present in the device i.e. Memory Management.
 It helps in making the best use of computer hardware.
 It helps in maintaining the security of the device.
 It helps different applications run efficiently.
Disadvantages of Operating System
 Operating Systems can be difficult for someone to use.
 Some OS are expensive and they require heavy maintenance.
 Operating systems can come under threat from hackers.

Types of Operating Systems


Operating systems can be categorized according to different criteria, such as whether an operating system is
for mobile devices (examples: Android and iOS) or for desktops (examples: Windows and Linux). Here we
classify operating systems based on the functionalities they provide.

1. Batch Operating System

This type of operating system does not interact with the computer directly. There is an operator who takes
similar jobs having the same requirements and groups them into batches. It is the responsibility of the operator
to sort jobs with similar needs. A batch operating system is designed to manage and execute a large number of
jobs efficiently by processing them in groups.
Batch Operating System

Advantages of Batch Operating System

 Multiple users can share the batch systems.

 The idle time for the batch system is very low.

 It is easy to manage large work repeatedly in batch systems.

Disadvantages of Batch Operating System

 CPU is not used efficiently: when the current process is doing I/O, the CPU is free and could be
utilized by other waiting processes.

 Other jobs will have to wait for an unknown time if any job fails.

 In a batch operating system, average response time increases as all processes are processed one by one.

Examples of Batch Operating Systems: Payroll Systems, Bank Statements, etc.

2. Multi-Programming Operating System

Multiprogramming Operating Systems can be simply illustrated as more than one program is present in the
main memory and any one of them can be kept in execution. This is basically used for better utilization of
resources.

MultiProgramming
Advantages of Multi-Programming Operating System

 CPU is better utilized and overall performance of the system improves.

 It helps in reducing the response time.

Multi-Tasking/Time-sharing Operating systems

It is a type of multiprogramming system where every process runs in a round-robin manner. Each task is given
some time to execute so that all tasks work smoothly. Each user gets CPU time as they use a single system.
These systems are also known as multitasking systems. Tasks can come from a single user or from different
users. The time that each task gets to execute is called the quantum. After this time interval is over, the
OS switches over to the next task.
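
A minimal sketch of the round-robin idea in C (the burst times and the 2-unit quantum are made-up
illustrative values):

    /* round_robin.c - simulating round-robin time slicing.
       Each task runs for at most one quantum, then waits for its next
       turn, until its burst time is used up. */
    #include <stdio.h>

    int main(void) {
        int burst[] = {5, 3, 8};           /* remaining CPU time per task */
        int n = 3, quantum = 2, done = 0, t = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0) continue;           /* task finished   */
                int slice = burst[i] < quantum ? burst[i] : quantum;
                printf("t=%2d: task %d runs for %d unit(s)\n", t, i, slice);
                t += slice;
                burst[i] -= slice;
                if (burst[i] == 0) done++;
            }
        }
        return 0;
    }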

Advantages of Time-Sharing OS

 Each task gets an equal opportunity.

 Fewer chances of duplication of software.

 CPU idle time can be reduced.

 Resource Sharing: Time-sharing systems allow multiple users to share hardware resources such as the
CPU, memory, and peripherals, reducing the cost of hardware and increasing efficiency.

 Improved Productivity: Time-sharing allows users to work concurrently, thereby reducing the waiting
time for their turn to use the computer. This increased productivity translates to more work getting
done in less time.

 Improved User Experience: Time-sharing provides an interactive environment that allows users to
communicate with the computer in real time, providing a better user experience than batch processing.

Disadvantages of Time-Sharing OS

 Reliability problem.

 One must have to take care of the security and integrity of user programs and data.

 Data communication problem.


 High Overhead: Time-sharing systems have a higher overhead than other operating systems due to the
need for scheduling, context switching, and other overheads that come with supporting multiple users.

 Complexity: Time-sharing systems are complex and require advanced software to manage multiple users
simultaneously. This complexity increases the chance of bugs and errors.

 Security Risks: With multiple users sharing resources, the risk of security breaches increases. Time-
sharing systems require careful management of user access, authentication, and authorization to ensure
the security of data and software.

Examples of Time-Sharing OS with explanation

 IBM VM/CMS : IBM VM/CMS is a time-sharing operating system that was first introduced in 1972. It is
still in use today, providing a virtual machine environment that allows multiple users to run their own
instances of operating systems and applications.

 TSO (Time Sharing Option) : TSO is a time-sharing operating system that was first introduced in the 1960s
by IBM for the IBM System/360 mainframe computer. It allowed multiple users to access the same
computer simultaneously, running their own applications.

 Windows Terminal Services : Windows Terminal Services is a time-sharing operating system that allows
multiple users to access a Windows server remotely. Users can run their own applications and access
shared resources, such as printers and network storage, in real-time.

3. Multi-Processing Operating System

A multi-processing operating system is a type of operating system in which more than one CPU is used for the
execution of processes. It improves the throughput of the system.

Multiprocessing Operating System

Advantages of Multi-Processing Operating System

 It increases the throughput of the system, as processes can run in parallel (see the sketch below).

 As it has several processors, if one processor fails, we can proceed with another processor.
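
To illustrate parallel execution on multiple CPUs, a minimal sketch in C using POSIX threads; the kernel is
free to schedule the two threads on different processors, and the busy-loop work function is purely
illustrative:

    /* parallel.c - two threads that the OS may run on separate CPUs.
       Compile with: gcc parallel.c -o parallel -lpthread              */
    #include <pthread.h>
    #include <stdio.h>

    static void *work(void *arg) {
        long id = (long)arg;
        volatile long sum = 0;
        for (long i = 0; i < 100000000; i++)  /* illustrative CPU-bound work */
            sum += i;
        printf("worker %ld finished\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, work, (void *)1L);  /* may run on CPU 0 */
        pthread_create(&t2, NULL, work, (void *)2L);  /* may run on CPU 1 */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }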
4. Multi User Operating Systems

These systems allow multiple users to be active at the same time. These systems can be either multiprocessor
or single processor with interleaving.


5. Distributed Operating System

This type of operating system is a recent advancement in the world of computer technology and is being
widely accepted all over the world, at a great pace. Various autonomous interconnected computers
communicate with each other using a shared communication network. Independent systems possess their own
memory unit and CPU; these are referred to as loosely coupled systems or distributed systems. These systems'
processors differ in size and function. The major benefit of working with this type of operating system is
that it is always possible for a user to access files or software that are not actually present on his
system but on some other system connected within the network, i.e., remote access is enabled within the
devices connected in that network.

Distributed OS
Advantages of Distributed Operating System

 Failure of one will not affect the other network communication, as all systems are independent of each
other.

 Electronic mail increases the data exchange speed.

 Since resources are being shared, computation is highly fast and durable.

 Load on host computer reduces.

 These systems are easily scalable as many systems can be easily added to the network.

 Delay in data processing reduces.

Disadvantages of Distributed Operating System

 Failure of the main network will stop the entire communication.

 To establish distributed systems, the languages used are not well-defined yet.

 These types of systems are not readily available, as they are very expensive. Moreover, the underlying
software is highly complex and not well understood yet.

Examples of Distributed Operating Systems are LOCUS, etc.

Issues With Distributed Operating Systems

 Networking causes delays in the transfer of data between nodes of a distributed system. Such delays
may lead to an inconsistent view of data located in different nodes, and make it difficult to know the
chronological order in which events occurred in the system.

 Control functions like scheduling, resource allocation, and deadlock detection have to be performed in
several nodes to achieve computation speedup and provide reliable operation when computers or
networking components fail.

 Messages exchanged by processes present in different nodes may travel over public networks and pass
through computer systems that are not controlled by the distributed operating system. An intruder may
exploit this feature to tamper with messages, or create fake messages to fool the authentication
procedure and masquerade as a user of the system.

6. Network Operating System

These systems run on a server and provide the capability to manage data, users, groups, security, applications,
and other networking functions. These types of operating systems allow shared access to files, printers, security,
applications, and other networking functions over a small private network. One more important aspect of
Network Operating Systems is that all the users are well aware of the underlying configuration, of all other users
within the network, their individual connections, etc. and that’s why these computers are popularly known
as tightly coupled systems .
Network Operating System

Advantages of Network Operating System

 Highly stable centralized servers.

 Security concerns are handled through servers.

 New technologies and hardware up-gradation are easily integrated into the system.

 Server access is possible remotely from different locations and types of systems.

Disadvantages of Network Operating System

 Servers are costly.

 User has to depend on a central location for most operations.

 Maintenance and updates are required regularly.

Examples of Network Operating Systems are Microsoft Windows Server 2003, Microsoft Windows Server
2008, UNIX, Linux, Mac OS X, Novell NetWare, BSD, etc.

7. Real-Time Operating System

These types of OSs serve real-time systems. The time interval required to process and respond to inputs is
very small; this time interval is called the response time. Real-time systems are used when the time
requirements are very strict, as in missile systems, air traffic control systems, robots, etc.

Types of Real-Time Operating Systems

 Hard Real-Time Systems: Hard Real-Time OSs are meant for applications where time constraints are
very strict and even the shortest possible delay is not acceptable. These systems are built for saving life
like automatic parachutes or airbags which are required to be readily available in case of an accident.
Virtual memory is rarely found in these systems.

 Soft Real-Time Systems: These OSs are for applications where time-constraint is less strict.
Real-Time Operating System

Advantages of RTOS

 Maximum Consumption: Maximum utilization of devices and systems, thus more output from all the
resources.

 Task Shifting: The time assigned for shifting tasks in these systems is very less. For example, in older
systems, it takes about 10 microseconds in shifting from one task to another, and in the latest systems,
it takes 3 microseconds.

 Focus on Application: Focus on running applications and less importance on applications that are in the
queue.

 Real-time operating system in the embedded system: Since the size of programs is small, RTOS can
also be used in embedded systems like in transport and others.

 Error Free: These types of systems are error-free.

 Memory Allocation: Memory allocation is best managed in these types of systems.

Disadvantages of RTOS

 Limited Tasks: Very few tasks run at the same time and their concentration is very less on a few
applications to avoid errors.

 Use heavy system resources: Sometimes the system resources are not so good and they are expensive
as well.

 Complex Algorithms: The algorithms are very complex and difficult for the designer to write.

 Device driver and interrupt signals: It needs specific device drivers and interrupt signals so that it
can respond to interrupts as early as possible.

 Thread Priority: Setting thread priority is difficult, as these systems are very reluctant to switch
tasks.
Examples of real-time applications include scientific experiments, medical imaging systems, industrial
control systems, weapon systems, robots, air traffic control systems, etc.

8. Mobile Operating Systems

These operating systems are mainly for mobile devices. Examples of such operating systems are Android and
iOS.

Functions of Operating System

An Operating System acts as a communication interface between the user and computer hardware. Its purpose
is to provide a platform on which a user can execute programs conveniently and efficiently. An operating system
is software that manages the allocation of Computer Hardware. The coordination of the hardware must be
appropriate to ensure the computer system’s correct operation and to prevent user programs from interfering
with it. The main goal of the Operating System is to make the computer environment more convenient to use
and the Secondary goal is to use the resources most efficiently. In this article we will see functions of operating
system in detail.

Why Operating Systems Used?

Operating System is used as a communication channel between the Computer hardware and the user. It works
as an intermediate between System Hardware and End-User. Operating System handles the following
responsibilities:

 It controls all the computer resources.

 It provides valuable services to user programs.

 It coordinates the execution of user programs.

 It provides resources for user programs.

 It provides an interface (virtual machine) to the user.

 It hides the complexity of software.

 It supports multiple execution modes.

 It monitors the execution of user programs to prevent errors.

Functions of an Operating System

Memory Management

The operating system manages the Primary Memory or Main Memory. Main memory is made up of a large array
of bytes or words where each byte or word is assigned a certain address. Main memory is fast storage and it can
be accessed directly by the CPU. For a program to be executed, it should be first loaded in the main memory. An
operating system manages the allocation and deallocation of memory to various processes and ensures that the
other process does not consume the memory allocated to one process. An Operating System performs the
following activities for Memory Management:

 It keeps track of primary memory, i.e., which bytes of memory are used by which user program: which
memory addresses have already been allocated and which memory addresses have not yet been used.

 In multiprogramming, the OS decides the order in which processes are granted memory access, and for
how long.

 It allocates memory to a process when the process requests it and deallocates the memory when
the process has terminated or is performing an I/O operation.

Memory Management
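
As a concrete illustration of how a process obtains and releases memory from the OS, a minimal sketch in C
using the POSIX mmap()/munmap() calls (assuming a Unix-like system; the single 4 KiB page is an illustrative
size):

    /* memdemo.c - asking the kernel for memory and giving it back. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 4096;                       /* one typical page size   */
        /* ask the OS to map a page of zeroed, private memory */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "page allocated by the OS");   /* use the memory          */
        printf("%s\n", p);

        munmap(p, len);                          /* return it to the OS     */
        return 0;
    }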

Processor Management

In a multi-programming environment, the OS decides the order in which processes have access to the processor,
and how much processing time each process has. This function of OS is called Process Scheduling. An Operating
System performs the following activities for Processor Management.

An operating system manages the processor’s work by allocating various jobs to it and ensuring that each process
receives enough time from the processor to function properly.

It keeps track of the status of processes; the program which performs this task is known as the traffic
controller. It allocates the CPU (the processor) to a process, and de-allocates the processor when the
process no longer requires it.
Process management
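
On Unix-like systems, the processes that the OS schedules are created with the fork() system call; a minimal
sketch in C:

    /* procdemo.c - creating a process for the OS to schedule. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                 /* duplicate the current process  */
        if (pid == 0) {
            printf("child  pid=%d\n", getpid());  /* scheduled as new process */
        } else {
            waitpid(pid, NULL, 0);          /* parent waits; OS reclaims child */
            printf("parent pid=%d reaped child %d\n", getpid(), pid);
        }
        return 0;
    }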

Device Management

An OS manages device communication via its respective drivers. It performs the following activities for device
management.

 Keeps track of all devices connected to the system. Designates a program responsible for every device
known as the Input/Output controller.

 Decide which process gets access to a certain device and for how long.

 Allocates devices effectively and efficiently. Deallocates devices when they are no longer required.

 There are various input and output devices. An OS controls the working of these input-output devices.

 It receives the requests from these devices, performs a specific task, and communicates back to the
requesting process.

File Management

A file system is organized into directories for efficient or easy navigation and usage. These directories may contain
other directories and other files. An Operating System carries out the following file management activities. It
keeps track of where information is stored, user access settings, the status of every file, and more. These facilities
are collectively known as the file system. An OS keeps track of information regarding the creation, deletion,
transfer, copy, and storage of files in an organized way. It also maintains the integrity of the data stored in these
files, including the file directory structure, by protecting against unauthorized access.
File Management
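
A minimal sketch in C of the file services the OS exposes (the file name demo.txt is an illustrative
assumption):

    /* filedemo.c - creating, writing, and reading a file through the OS. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* ask the OS to create/open a file; it returns a descriptor */
        int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        write(fd, "hello file system\n", 18);
        close(fd);

        char buf[64];
        fd = open("demo.txt", O_RDONLY);          /* reopen for reading */
        ssize_t n = read(fd, buf, sizeof buf - 1);
        close(fd);
        if (n < 0) n = 0;
        buf[n] = '\0';
        printf("read back: %s", buf);
        return 0;
    }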

I/O Management

I/O management is an important function of the operating system; it refers to how the OS
handles input and output operations between the computer and external devices, such as keyboards, mice,
printers, hard drives, and monitors.

User Interface or Command Interpreter

The user interacts with the computer system through the operating system; hence, the OS acts as an interface
between the user and the computer hardware. This user interface is offered through a set of commands or a
graphical user interface (GUI). Through this interface, the user interacts with the applications and the
machine hardware.

Command interpreter

Booting the Computer

The process of starting or restarting the computer is known as booting. If the computer is switched off completely
and if turned on then it is called cold booting. Warm booting is a process of using the operating system to restart
the computer.

Security

The operating system uses password protection and similar techniques to protect user data; it also prevents
unauthorized access to programs and user data. The operating system provides various techniques which assure
the integrity and confidentiality of user data. The following security measures are used to protect user data:

 Protection against unauthorized access through login.

 Protection against intrusion by keeping the firewall active.

 Protecting the system memory against malicious access.

 Displaying messages related to system vulnerabilities.


Control Over System Performance

Operating systems play a pivotal role in controlling and optimizing system performance. They act as
intermediaries between hardware and software, ensuring that computing resources are efficiently utilized. One
fundamental aspect is resource allocation, where the OS allocates CPU time, memory, and I/O devices to
different processes, striving to provide fair and optimal resource utilization. Process scheduling, a critical
function, decides which processes or threads should run and when, preventing any single task from monopolizing
the CPU and enabling effective multitasking.

Job Accounting

The operating system keeps track of the time and resources used by various tasks and users; this information
can be used to track resource usage for a particular user or group of users. In a multitasking OS where
multiple programs run simultaneously, the OS determines which applications should run in which order and how
much time should be allocated to each application.

Error-Detecting Aids

The operating system constantly monitors the system to detect errors and avoid malfunctioning computer
systems. From time to time, the operating system checks the system for any external threat or malicious software
activity. It also checks the hardware for any type of damage. This process displays several alerts to the user so
that the appropriate action can be taken against any damage caused to the system.

Coordination Between Other Software and Users

Operating systems also coordinate and assign interpreters, compilers, assemblers, and other software to the
various users of the computer systems. In simpler terms, think of the operating system as the traffic cop of your
computer. It directs and manages how different software programs can share your computer’s resources without
causing chaos. It ensures that when you want to use a program, it runs smoothly without crashing or causing
problems for others. So, it’s like the friendly officer ensuring a smooth flow of traffic on a busy road, making sure
everyone gets where they need to go without any accidents or jams.

Performs Basic Computer Tasks

The management of various peripheral devices such as the mouse, keyboard, and printer is carried out by the
operating system. Today most operating systems are plug-and-play. These operating systems automatically
recognize and configure the devices with no user interference.

Network Management

 Network Communication: Think of them as traffic cops for your internet traffic. Operating systems help
computers talk to each other and the internet. They manage how data is packaged and sent over the
network, making sure it arrives safely and in the right order.

 Settings and Monitoring: Think of them as the settings and security guard for your internet connection.
They also let you set up your network connections, like Wi-Fi or Ethernet, and keep an eye on how your
network is doing. They make sure your computer is using the network efficiently and securely, like
adjusting the speed of your internet or protecting your computer from online threats.

Services Provided by an Operating System

The Operating System provides certain services to the users which can be listed in the following manner:

 User Interface: Almost all operating systems have a user interface (UI). This interface can take several
forms. One is a command-line interface(CLI), which uses text commands and a method for entering them
(say, a keyboard for typing in commands in a specific format with specific options). Another is a batch
interface, in which commands and directives to control those commands are entered into files, and those
files are executed. Most commonly, a graphical user interface (GUI) is used: the interface is a window
system with a pointing device to direct I/O, choose from menus, and make selections, and a keyboard to
enter text.

 Program Execution: The Operating System is responsible for the execution of all types of programs
whether it be user programs or system programs. The Operating System utilizes various resources
available for the efficient running of all types of functionalities.

 Handling Input/Output Operations: The Operating System is responsible for handling all sorts of inputs,
i.e., from the keyboard, mouse, desktop, etc. The Operating System does all interfacing most
appropriately regarding all kinds of Inputs and Outputs.
For example, there is a difference between all types of peripheral devices such as mice or keyboards, the
Operating System is responsible for handling data between them.
 Manipulation of File System: The Operating System is responsible for making decisions regarding the
storage of all types of data or files, i.e., floppy disk/hard disk/pen drive, etc. The Operating System
decides how the data should be manipulated and stored.

 Resource Allocation: The Operating System ensures the proper use of all the resources available by
deciding which resource to be used by whom for how much time. All the decisions are taken by the
Operating System.

 Accounting: The Operating System tracks an account of all the functionalities taking place in the
computer system at a time. All the details such as the types of errors that occurred are recorded by the
Operating System.

 Information and Resource Protection: The Operating System is responsible for using all the information
and resources available on the machine in the most protected way. The Operating System must foil an
attempt from any external resource to hamper any sort of data or information.

 Communication: The operating system implements communication between one process to another
process to exchange information. Such communication may occur between processes that are executing
on the same computer or between processes that are executing on different computer systems tied
together by a computer network.

 System Services: The operating system provides various system services, such as printing, time and date
management, and event logging.

 Error Detection: The operating system needs to detect and correct errors constantly. Errors
may occur in the CPU and memory hardware (e.g., a memory error or a power failure), in I/O devices
(such as a parity error on disk, a connection failure on a network, or a lack of paper in the printer), and
in the user program (an arithmetic overflow, an attempt to access an illegal memory location, or
excessive use of CPU time). For each type of error, the operating system should take the appropriate action
to ensure correct and consistent computing.

All these services are ensured by the Operating System for the convenience of the users to make the
programming task easier. All different kinds of Operating Systems more or less provide the same services.

Characteristics of Operating System

 Virtualization: Operating systems can provide Virtualization capabilities, allowing multiple operating
systems or instances of an operating system to run on a single physical machine. This can improve
resource utilization and provide isolation between different operating systems or applications.

 Networking: Operating systems provide networking capabilities, allowing the computer system to
connect to other systems and devices over a network. This can include features such as network
protocols, network interfaces, and network security.

 Scheduling: Operating systems provide scheduling algorithms that determine the order in which tasks
are executed on the system. These algorithms prioritize tasks based on their resource requirements and
other factors to optimize system performance.

 Interprocess Communication: Operating systems provide mechanisms for applications to communicate
with each other, allowing them to share data and coordinate their activities.
 Performance Monitoring: Operating systems provide tools for monitoring system performance,
including CPU usage, memory usage, disk usage, and network activity. This can help identify performance
bottlenecks and optimize system performance.

 Backup and Recovery: Operating systems provide backup and recovery mechanisms to protect data in
the event of system failure or data loss.

 Debugging: Operating systems provide debugging tools that allow developers to identify and fix
software bugs and other issues in the system.

What happens when we turn on a computer?

A computer without a program running is just an inert hunk of electronics. The first thing a computer has to do
when it is turned on is to start up a special program called an operating system. The operating system’s job is to
help other computer programs work by handling the messy details of controlling the computer’s hardware.

1. The power supply sends electricity to the components of the computer, such as the motherboard, hard
drive, and fans.

2. The BIOS (basic input/output system) or UEFI initializes and performs a power-on self-test (POST), which
checks the basic hardware components to ensure they are working properly. If any issues are detected,
error messages may be displayed.

3. The operating system (OS), such as Windows or macOS, is loaded from the hard drive or another storage
device into the computer’s RAM (random access memory).

4. The OS then initializes its own components and drivers and presents the login screen or desktop
environment to the user.

An overview of the boot process


The boot process is something that happens every time you turn your computer on. You don't really see it,
because it happens so fast. You press the power button and come back a few seconds (or minutes on slow storage
such as an HDD) later, and Windows 10, Windows 11, or whatever operating system you use is fully loaded.

The BIOS chip tells the CPU to look in a fixed place, usually on the lowest-numbered hard disk (the boot disk), for a
special program called a boot loader (under Linux the boot loader is called GRUB or LILO). The boot loader is pulled
into memory and started. The boot loader's job is to start the real operating system.

Functions of BIOS

1. POST (Power On Self Test): The Power-On Self-Test happens each time you turn your computer on. It is the first
stage of the considerable amount of work your computer does at startup.

 It initializes the various hardware devices.

 It is an important process to ensure that all the devices operate smoothly without any conflicts. BIOSes
following ACPI create tables describing the devices in the computer.

 The POST first checks the BIOS itself and then tests the CMOS RAM.

 If there is no problem with this then POST continues to check the CPU, hardware devices such as the
Video Card, and the secondary storage devices such as the Hard Drive, Floppy Drives, Zip Drive, or
CD/DVD Drives.

 If some errors are found then an error message is displayed on the screen or a number of beeps are
heard.

 These beeps are known as POST beep codes.

2. Master Boot Record: The Master Boot Record (MBR) is a special boot sector at the beginning of the disk. The
MBR contains the code that loads the rest of the OS, known as the boot loader. This complicated process (called the
Boot Process) starts with the POST (Power On Self Test) and ends when the BIOS searches for the MBR on the
Hard Drive, which is generally located in the first sector, first head, first cylinder (cylinder 0, head 0, sector 1).
The bootstrap loader is stored in the computer’s EPROM, ROM, or another non-volatile memory. When the
computer is turned on or restarted, it first performs the power-on-self-test, also known as POST. If the POST is
successful and no issues are found, the bootstrap loader will load the operating system for the computer into
memory. The computer will then be able to quickly access, load, and run the operating system.

3. init: init is the last step of the kernel boot sequence. It looks for the file /etc/inittab to see if there is an entry
for initdefault, which determines the initial run level of the system. A run level decides the initial
state of the operating system (a sample inittab entry is shown after the list below).
Some of the run levels are:

 Level 0: System Halt.

 Level 1: Single user mode.

 Level 2: Full multiuser mode without network.

 Level 3: Full multiuser mode with network.

 Level 4: user definable.

 Level 5: Full multiuser mode with network and X display manager.


 Level 6: Reboot.
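
For illustration, a minimal (hypothetical) /etc/inittab excerpt that sets the default run level to 5 follows; each
entry has the form id:runlevels:action:process.

# /etc/inittab (excerpt)
id:5:initdefault: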

The above design of init is called SysV, pronounced "System Five". Several other implementations of init have
since been written; popular ones include systemd and Upstart. Ubuntu adopted Upstart in 2006, though most
major distributions, including Ubuntu, have since moved to systemd.

The next step of init is to start up various daemons that support networking and other services. X server daemon
is one of the most important daemons. It manages the display, keyboard, and mouse. When X server daemon is
started you see a Graphical Interface and a login screen is displayed.

4. System Configuration:
The BIOS allows the user to configure various system settings, such as:

1. Boot order: This determines the order in which the computer checks for bootable devices. For example,
if the boot order is set to “hard drive” first, the computer will try to boot from the hard drive before
checking other devices such as a CD/DVD drive or a USB drive.

2. Time and date: The BIOS stores the time and date information, which can be set and adjusted by the
user. This information is used by the operating system and various applications.

3. Hardware settings: The BIOS provides options to configure various hardware settings such as CPU
voltage, clock speed, and memory timings. These settings can be used to optimize system performance,
but should only be changed by advanced users with the proper knowledge.

5. Security:
The BIOS can also provide security features such as:

1. Password protection: The BIOS can be set to require a password to access certain features or to prevent
unauthorized booting of the computer. This can be useful in preventing unauthorized access to sensitive
data.

2. Secure boot: Secure boot is a feature that ensures that only trusted operating system boot loaders,
drivers, and firmware are loaded during the boot process. This helps to prevent malware and other
unauthorized software from running on the system.

3. TPM (Trusted Platform Module): Some modern motherboards have a built-in TPM that provides
hardware-based security features such as encryption, digital certificates, and secure key storage. This
can help to protect sensitive data and prevent unauthorized access to the system.

Kernel in Operating System

A kernel is the core part of an operating system. It acts as a bridge between software applications and the
hardware of a computer.

 The kernel manages system resources, such as the CPU, memory, and devices, ensuring everything
works together smoothly and efficiently.

 It handles tasks like running programs, accessing files, and connecting to devices like printers and
keyboards.
 An Operating System includes the kernel as its core, but also provides a user interface, file system
management, network services, and various utility applications that allow users to interact with the
system.

 Facilitates communication between hardware and user applications.

 Ensures efficient and secure multitasking.

 Manages system stability and prevents unauthorized resource access.

Types of Kernel

The kernel manages the system's resources and facilitates communication between hardware and software
components. Kernels come in different types; let's discuss each type along with its advantages and
disadvantages:

1. Monolithic Kernel

It is the type of kernel in which all operating system services operate in kernel space. Its components are tightly
interdependent, and its code base is very large, which makes it complex.

Example:

Unix, Linux, OpenVMS, XTS-400, etc.

Advantages

 Efficiency: Monolithic kernels are generally faster than other types of kernels because they don’t have
to switch between user and kernel modes for every system call, which can cause overhead.

 Tight Integration: Since all the operating system services are running in kernel space, they can
communicate more efficiently with each other, making it easier to implement complex functionalities
and optimizations.

 Simplicity: Monolithic kernels are simpler to design, implement, and debug than other types of kernels
because they have a unified structure that makes it easier to manage the code.
 Lower latency: Monolithic kernels have lower latency than other types of kernels because system calls
and interrupts can be handled directly by the kernel.

Disadvantages

 Stability Issues: Monolithic kernels can be less stable than other types of kernels because any bug or
security vulnerability in a kernel service can affect the entire system.

 Security Vulnerabilities: Since all the operating system services are running in kernel space, any
security vulnerability in one of the services can compromise the entire system.

 Maintenance Difficulties: Monolithic kernels can be more difficult to maintain than other types of
kernels because any change in one of the services can affect the entire system.

 Limited Modularity: Monolithic kernels are less modular than other types of kernels because all the
operating system services are tightly integrated into the kernel space. This makes it harder to add or
remove functionality without affecting the entire system.

2. Micro Kernel

It is the type of kernel that takes a minimalist approach: only core mechanisms such as virtual memory and thread
scheduling run in kernel space. A microkernel is more stable because fewer services run in kernel space; the rest
are pushed into user space. It is often used in small and embedded operating systems.
Example :

Mach, L4, AmigaOS, Minix, K42 etc.

Advantages

 Reliability: Microkernel architecture is designed to be more reliable than monolithic kernels. Since
most of the operating system services run outside the kernel space, any bug or security vulnerability in
a service won’t affect the entire system.

 Flexibility : Microkernel architecture is more flexible than monolithic kernels because it allows different
operating system services to be added or removed without affecting the entire system.

 Modularity: Microkernel architecture is more modular than monolithic kernels because each operating
system service runs independently of the others. This makes it easier to maintain and debug the
system.

 Portability: Microkernel architecture is more portable than monolithic kernels because most of the
operating system services run outside the kernel space. This makes it easier to port the operating
system to different hardware architectures.

Disadvantages

 Performance: Microkernel architecture can be slower than monolithic kernels because it requires more
context switches between user space and kernel space.

 Complexity: Microkernel architecture can be more complex than monolithic kernels because it requires
more communication and synchronization mechanisms between the different operating system
services.
 Development Difficulty: Developing operating systems based on microkernel architecture can be more
difficult than developing monolithic kernels because it requires more attention to detail in designing
the communication and synchronization mechanisms between the different services.

 Higher Resource Usage: Microkernel architecture can use more system resources, such as memory and
CPU, than monolithic kernels because it requires more communication and synchronization
mechanisms between the different operating system services.

3. Hybrid Kernel

It is a combination of the monolithic kernel and the microkernel: it aims for the speed and simple design of a
monolithic kernel together with the modularity and stability of a microkernel.
Example :

Windows NT, Netware, BeOS etc.

Advantages

 Performance: Hybrid kernels can offer better performance than microkernels because they reduce the
number of context switches required between user space and kernel space.

 Reliability: Hybrid kernels can offer better reliability than monolithic kernels because they
isolate drivers and other kernel components in separate protection domains.

 Flexibility: Hybrid kernels can offer better flexibility than monolithic kernels because they allow
different operating system services to be added or removed without affecting the entire system.

 Compatibility: Hybrid kernels can be more compatible than microkernels because they can support a
wider range of device drivers.

Disadvantages

 Complexity: Hybrid kernels can be more complex than monolithic kernels because they include both
monolithic and microkernel components, which can make the design and implementation more
difficult.

 Security: Hybrid kernels can be less secure than microkernels because they have a larger attack surface
due to the inclusion of monolithic components.

 Maintenance: Hybrid kernels can be more difficult to maintain than microkernels because they have a
more complex design and implementation.

 Resource Usage: Hybrid kernels can use more system resources than microkernels because they
include both monolithic and microkernel components.

4. Exo Kernel

It is the type of kernel that follows the end-to-end principle: it provides as few hardware abstractions as possible
and allocates physical resources directly to applications.

Example :
Nemesis, ExOS etc.

Advantages

 Flexibility: Exokernels offer the highest level of flexibility, allowing developers to customize and
optimize the operating system for their specific application needs.

 Performance: Exokernels are designed to provide better performance than traditional kernels because
they eliminate unnecessary abstractions and allow applications to directly access hardware resources.

 Security: Exokernels provide better security than traditional kernels because they allow for fine-grained
control over the allocation of system resources, such as memory and CPU time.

 Modularity: Exokernels are highly modular, allowing for the easy addition or removal of operating
system services.

Disadvantages

 Complexity: Exokernels can be more complex to develop than traditional kernels because they require
greater attention to detail and careful consideration of system resource allocation.

 Development Difficulty: Developing applications for exokernels can be more difficult than for
traditional kernels because applications must be written to directly access hardware resources.

 Limited Support: Exokernels are still an emerging technology and may not have the same level of
support and resources as traditional kernels.

 Debugging Difficulty: Debugging applications and operating system services on exokernels can be more
difficult than on traditional kernels because of the direct access to hardware resources.

5. Nano Kernel

It is the type of kernel that offers hardware abstraction but without system services. Since a microkernel also
lacks most system services in kernel space, the terms microkernel and nanokernel are sometimes used almost
interchangeably.

Example :

EROS etc.

Advantages

 Small Size: Nanokernels are designed to be extremely small, providing only the most essential functions
needed to run the system. This can make them more efficient and faster than other kernel types.

 High Modularity: Nanokernels are highly modular, allowing for the easy addition or removal of
operating system services, making them more flexible and customizable than traditional monolithic
kernels.

 Security: Nanokernels provide better security than traditional kernels because they have a smaller
attack surface and a reduced risk of errors or bugs in the code.

 Portability: Nanokernels are designed to be highly portable, allowing them to run on a wide range of
hardware architectures.
Disadvantages

 Limited Functionality: Nanokernels provide only the most essential functions, making them unsuitable
for more complex applications that require a broader range of services.

 Complexity: Because nanokernels provide only essential functionality, they can be more complex to
develop and maintain than other kernel types.

 Performance: While nanokernels are designed for efficiency, their minimalist approach may not be
able to provide the same level of performance as other kernel types in certain situations.

 Compatibility: Because of their minimalist design, nanokernels may not be compatible with all
hardware and software configurations, limiting their practical use in certain contexts.

Functions of Kernel

The kernel is responsible for various critical functions that ensure the smooth operation of the computer
system. These functions include:

1. Process Management

 Scheduling and execution of processes.

 Context switching between processes.

 Process creation and termination.

2. Memory Management

 Allocation and deallocation of memory space.

 Managing virtual memory.

 Handling memory protection and sharing.

3. Device Management

 Managing input/output devices.

 Providing a unified interface for hardware devices.

 Handling device driver communication.

4. File System Management

 Managing file operations and storage.

 Handling file system mounting and unmounting.

 Providing a file system interface to applications.

5. Resource Management

 Managing system resources (CPU time, disk space, network bandwidth).

 Allocating and deallocating resources as needed.

 Monitoring resource usage and enforcing resource limits.

6. Security and Access Control

 Enforcing access control policies.

 Managing user permissions and authentication.

 Ensuring system security and integrity.

7. Inter-Process Communication

 Facilitating communication between processes.

 Providing mechanisms like message passing and shared memory.

Working of Kernel

 The kernel is loaded into memory first when an operating system starts and remains in memory until the
operating system is shut down again. It is responsible for various tasks such as disk management, task
management, and memory management.

 The kernel has a process table that keeps track of all active processes.

 The process table contains a per-process region table whose entries point to entries in the region table.

 The kernel loads an executable file into memory during the exec() system call.

 It decides which process should be allocated to the processor for execution and which processes should be
kept in main memory. It basically acts as an interface between user applications and
hardware. The major aim of the kernel is to manage communication between software, i.e., user-level
applications, and hardware, i.e., the CPU and disk memory.

Objectives of Kernel

 To establish communication between user-level applications and hardware.


 To decide the state of incoming processes.
 To control disk management.
 To control memory management.
 To control task management.

Introduction of System Call

A system call is a programmatic way in which a computer program requests a service from the kernel of the
operating system on which it is executed. It is the way programs interact with the operating system: a
program makes a system call when it requires a service from the kernel. System calls provide the
services of the operating system to user programs via the Application Program Interface (API). System calls
are the only entry points into the kernel and are executed in kernel mode.
 A user program can interact with the operating system using a system call. The program requests a number
of services, and the OS responds by invoking a series of system calls to fulfill the
request.

 A system call can be written in a high-level language like C or Pascal or in assembly language. If a high-
level language is used, system calls can be invoked directly as predefined
functions.

 A system call is initiated by the program executing a specific instruction, which triggers a switch to kernel
mode, allowing the program to request a service from the OS. The OS then handles the request, performs
the necessary operations, and returns the result back to the program.

 System calls are essential for the proper functioning of an operating system, as they provide a
standardized way for programs to access system resources. Without system calls, each program would
need to implement its methods for accessing hardware and system services, leading to inconsistent and
error-prone behavior.

Services Provided by System Calls


 Process Creation and Management
 Main Memory Management
 File Access, Directory, and File System Management
 Device Handling (I/O)
 Protection
 Networking, etc.
These services are typically grouped into the following system-call categories:
o Process Control: end, abort, create, terminate, allocate, and free memory.
o File Management: create, open, close, delete, read files, etc.
o Device Management
o Information Maintenance
o Communication
Features of System Calls

 Interface: System calls provide a well-defined interface between user programs and the operating
system. Programs make requests by calling specific functions, and the operating system responds by
executing the requested service and returning a result.
 Protection: System calls are used to access privileged operations that are not available to normal user
programs. The operating system uses this privilege to protect the system from malicious or unauthorized
access.

 Kernel Mode: When a system call is made, the program is temporarily switched from user mode
to kernel mode. In kernel mode, the program has access to all system resources, including hardware,
memory, and other processes.

 Context Switching: A system call requires a context switch, which involves saving the state of the current
process and switching to the kernel mode to execute the requested service. This can introduce overhead,
which can impact system performance.

 Error Handling: System calls can return error codes to indicate problems with the requested service.
Programs must check for these errors and handle them appropriately.

 Synchronization: System calls can be used to synchronize access to shared resources, such as files or
network connections. The operating system provides synchronization mechanisms, such as locks
or semaphores, to ensure that multiple programs can access these resources safely.

How does System Call Work?

Here is a step-by-step explanation of how system calls work:

 Users need special resources: Sometimes programs need to do some special things that can’t be done
without the permission of the OS like reading from a file, writing to a file, getting any information from
the hardware, or requesting a space in memory.

 The program makes a system call request: There are special predefined instructions for making a request
to the operating system. These instructions are simply "system calls". The program uses these
system calls in its code when needed.

 Operating system sees the system call: When the OS sees the system call, it recognizes that the
program needs help, so it temporarily stops the program's execution and hands control
to a special part of itself called the kernel, which then services the program's request.

 The operating system performs the operations: Now the operating system performs the operation that
is requested by the program. Example: reading content from a file etc.

 Operating system gives control back to the program: After performing the requested operation, the OS
returns control to the program so it can continue executing.
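
As a minimal sketch of this flow, the C program below requests an I/O service from the kernel through the
write() system call (here invoked via its C library wrapper); the message text is an arbitrary example.

#include <unistd.h>

int main(void)
{
    /* write() traps into the kernel; file descriptor 1 is standard output */
    const char msg[] = "Hello from a system call\n";
    write(1, msg, sizeof(msg) - 1);   /* the kernel performs the I/O and returns */
    return 0;
}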

Different Types of System Calls in OS

System calls are interfaces provisioned by the operating system to allow user-level applications to interact with
low-level hardware components & make use of all the services provided by the kernel, which is a core component
and the heart of an operating system that manages all the hardware and the services provided by the OS.
These system calls are essential for every process to interact with the kernel and properly use the services
provided by it. System calls are an interface between a process and the operating system, and they are the only
controlled way to switch from user mode to kernel mode.

Types of System Calls

Services provided by an OS are typically related to any kind of operation that a user program can perform like
creation, termination, forking, moving, communication, etc. Similar types of operations are grouped into one
single system call category. System calls are classified into the following categories:


1. File System Operations

These system calls are made while working with files in the OS: file manipulation operations such as creation,
deletion, reading, and writing.

 open(): Opens a file for reading or writing. The file could be of any type, such as a text file or an audio file.

 read(): Reads data from a file. After a file has been opened through the open() system call, a
process that wants to read data from it makes a read() system call.

 write(): Writes data to a file. Whenever the user modifies a file and saves it,
this is the call that is made.

 close(): Closes a previously opened file.

 seek(): Moves the file pointer within a file. This call is typically made when the user wants to read
data from a specific position in a file. For example, to read from line 47, the file pointer is moved
from line 1, or wherever it was previously, to line 47.
Algorithm for Reading a File

algorithm read
input: user file descriptor, address of buffer in user process, number of bytes to read
output: count of bytes copied into user space
{
    get file table entry from user file descriptor;
    check file accessibility;
    set parameters in u area for user address, byte count, I/O to user;
    get inode from file table;
    lock inode;
    set byte offset in u area from file table offset;
    while (count not satisfied)
    {
        convert file offset to disk block (algorithm bmap);
        calculate offset into block, number of bytes to read;
        if (number of bytes to read is 0)
            break;    /* end of file reached */
        read block (algorithm breada if with read ahead, algorithm bread otherwise);
        copy data from system buffer to user address;
        update u area fields for file byte offset, read count, address to write into user space;
        release buffer;
    }
    unlock inode;
    update file table offset for next read;
    return (total number of bytes read);
}
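
For contrast with the kernel-side pseudocode above, the user-space side of the same operation is a simple
loop; a minimal sketch (the file name example.txt is a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[512];
    ssize_t n;

    int fd = open("example.txt", O_RDONLY);    /* ask the kernel for a descriptor */
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* each read() call runs the kernel's read algorithm shown above */
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        write(1, buf, (size_t)n);              /* echo the data to standard output */

    close(fd);
    return 0;
}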

2. Process Control

These types of system calls deal with process creation, process termination, process allocation, deallocation, etc.
In short, they manage all the processes that are part of the OS.
 fork(): Creates a new process (child) by duplicating the current process (parent). This call is made when
a process wants a copy of itself. After fork(), the parent and child run concurrently; typically the parent
waits (see wait()) until the child process finishes its execution.

 exec(): Loads and runs a new program in the current process, replacing the current process image. All of
its data, such as the stack, registers, and heap memory, is replaced by the new program; this is known
as an overlay. For example, when you execute Java byte code with the command java "filename", an
exec() call is made in the background to run the JVM on that file.

 wait(): The primary purpose of this call is to ensure that the parent process doesn't proceed further with
its execution until all its child processes have finished their execution. This call is made when one or more
child processes are forked.

 exit(): It simply terminates the current process.

 kill(): This call sends a signal to a specific process for various purposes, including requesting it to quit
voluntarily, forcing it to quit, or asking it to reload its configuration.
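
A minimal sketch combining the calls above (the program being executed, /bin/ls, is just an example):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                /* duplicate the current process */

    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: replace this process image with /bin/ls */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");               /* reached only if exec fails */
        _exit(1);
    }

    wait(NULL);                        /* parent: block until the child exits */
    printf("child %d finished\n", (int)pid);
    return 0;
}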

3. Memory Management

These types of system calls deal with memory allocation, deallocation, and dynamically changing the size of the
memory allocated to a process. In short, the overall management of memory is done by making these system
calls.

 brk(): Changes the size of a process's data segment (the heap). It takes an address as an argument
defining the new end of the heap, explicitly setting the heap's size.

 sbrk(): This call is also for heap memory management; it takes an integer argument (positive or
negative) specifying whether to grow or shrink the heap, respectively.

 mmap(): Memory Map. It maps a file or device into main memory and into a process's
address space so it can be operated on as ordinary memory. For a shared mapping, any changes made to
the mapped region are reflected in the actual file.

 munmap(): Unmaps a memory-mapped file from a process's address space and out of main memory

 mlock() and munlock(): A memory lock is a mechanism through which certain pages stay resident in memory
and are not swapped out to the swap space on disk. This can be done to avoid page faults. munlock() is the
opposite of mlock(): it releases the lock previously acquired on pages.
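
A minimal sketch of mmap() in action, mapping a file read-only and printing its contents (the file name
example.txt is a placeholder; an empty file would make the mapping fail):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.txt", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) == -1) { perror("fstat"); return 1; }

    /* map the whole file read-only into this process's address space */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    write(1, data, st.st_size);        /* the file's bytes, no read() needed */

    munmap(data, st.st_size);          /* unmap when done */
    close(fd);
    return 0;
}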

4. Interprocess Communication (IPC)

When two or more processes need to communicate, the OS provides various IPC mechanisms, which involve
making several system calls. Some of them are:

 pipe(): Creates a unidirectional communication channel between processes. For example, a parent
process may communicate with its child process through a pipe, making the parent process the input
source of its child process.
 socket(): Creates a network socket for communication. Processes on the same or different networks can
communicate through this socket, provided they have the necessary network permissions.

 shmget(): It is short for - 'shared-memory-get'. It allows one or more processes to share a portion of
memory and achieve interprocess communication.

 semget(): It is short for 'semaphore-get'. This call typically manages the coordination of multiple
processes while they access a shared resource, that is, the critical section.

 msgget(): It is short for 'message-get'. One of the fundamental IPC concepts is the 'message queue', a
queue data structure in memory through which various processes communicate with each other. This
call allocates a message queue, giving processes a structured way to exchange data.
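
A minimal sketch of pipe()-based IPC: the parent writes a message and the child reads it (the message text is
arbitrary):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                        /* fds[0]: read end, fds[1]: write end */
    char buf[64];

    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {
        /* child: read what the parent writes */
        close(fds[1]);
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
        _exit(0);
    }

    /* parent: send a message through the pipe */
    close(fds[0]);
    const char msg[] = "hello via pipe";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}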

5. Device Management

The device management system calls are used to interact with the various peripheral devices attached to the
computer, or even to manage the current device.

 SetConsoleMode(): This call is made to set the mode of the console (input or output). It allows a process to
control various console modes. In Windows, it is used to control the behavior of the command line.

 WriteConsole(): It allows us to write data on console screen.

 ReadConsole(): It allows us to read data from console screen (if any arguments are provided).

 open(): This call is made whenever a device or a file is opened. A unique file descriptor is created to
maintain the control access to the opened file or device.

 close(): This call is made when the system or the user closes the file or device.
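
On Unix systems the general-purpose device-control call is ioctl(). As a small sketch (assuming a Unix/Linux
terminal), the program below asks the terminal driver for its current window size:

#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    struct winsize ws;

    /* ask the terminal device driver for its current dimensions */
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1) {
        perror("ioctl");
        return 1;
    }
    printf("terminal: %d rows x %d columns\n", ws.ws_row, ws.ws_col);
    return 0;
}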

Importance of System Calls

 Efficient Resource Management: System Calls help your computer manage its resources efficiently. They
allocate and manage memory so programs run smoothly without using up too many resources. This is
important for multitasking and overall performance.

 Security and Isolation: System Calls ensure that one program cannot interfere with or access the
memory of another program. This enhances the security and stability of your device.

 Multitasking Capabilities: System Calls support multitasking, allowing multiple programs to run
simultaneously. This improves productivity and makes it easy to switch between applications.

 Enhanced Control: System Calls provide a high level of control over your device’s operations. They allow
you to start and stop processes, manage files, and perform various system-related tasks.

 Input/Output (I/O) Operations: System Calls enable communication with input and output devices, such
as your keyboard, mouse, and screen. They ensure that these devices work effectively.

 Networking and Communication: System Calls facilitate networking and communication between
different applications. They make it easy to transfer data over networks, browse the web, send emails,
and connect online.
What is The Purpose of System Calls in OS?

System Calls act as a bridge between an operating system (OS) and a running program. They are usually written
as assembly language instructions and are detailed in manuals for programmers working with assembly
language.

When a program running in user mode needs to access a resource, it makes a System Call. This request is sent to
the OS kernel to obtain the needed resource.

System Calls are used for various tasks, such as:


 Creating or executing files in the file system.
 Reading from and writing to files.
 Creating and managing new processes in programs.
 Making network connections, including sending and receiving data packets.
 Accessing hardware devices like printers and scanners.

Examples of a System Call in Windows and Unix

System calls for Windows and Unix come in many different forms. These are listed in the table below as follows:

Category                   Windows                            Unix

Process Control            CreateProcess()                    fork()
                           ExitProcess()                      exit()
                           WaitForSingleObject()              wait()

File Manipulation          CreateFile()                       open()
                           ReadFile()                         read()
                           WriteFile()                        write()
                                                              close()

Device Management          SetConsoleMode()                   ioctl()
                           ReadConsole()                      read()
                           WriteConsole()                     write()

Information Maintenance    GetCurrentProcessID()              getpid()
                           SetTimer()                         alarm()
                           Sleep()                            sleep()

Communication              CreatePipe()                       pipe()
                           CreateFileMapping()                shmget()
                           MapViewOfFile()                    mmap()

Protection                 SetFileSecurity()                  chmod()
                           InitializeSecurityDescriptor()     umask()
                           SetSecurityDescriptorGroup()       chown()

open(): Accessing a file on a file system is possible with the open() system call. It allocates the resources the file
needs and returns a handle (file descriptor) that the process can use. A file can be opened by multiple processes
simultaneously or by just one process, depending on the file's structure and the file system.

read(): Retrieves data from a file on the file system. In general, it accepts three arguments:

 A file descriptor.

 A buffer in which to store the data read.

 The number of bytes to read from the file.

Before reading, the file must have been opened using open(); it is then identified by its file descriptor.

wait(): In some systems, a process might need to hold off until another process has finished running before
continuing. When a parent process creates a child process, the parent can suspend its own execution with the
wait() system call until the child process is complete. The parent process regains control once the child process
has finished running.

write(): Writes data from a user buffer to a device such as a file. It is one way a program can produce data.
In general, there are three arguments:

 A file descriptor.

 A reference to the buffer where the data is stored.

 The amount of data to be written from the buffer, in bytes.

fork(): The fork() system call is used by a process to create a copy of itself; it is one of the most frequently used
process-creation mechanisms in operating systems. After fork(), the parent and child run concurrently; typically
the parent calls wait() to suspend its execution until the child is finished, regaining control once the child process
has completed.

exit(): The exit() system call is used to terminate a program. In environments with multiple threads, this call
indicates that the thread's execution is finished. After the exit() system call, the operating system
reclaims the resources used by the process.

Methods to Pass Parameters to OS

When a system call occurs, we have to pass parameters to the kernel part of the operating system.
For example, look at the open() system call:
//function call example
#include <fcntl.h>

int open(const char *pathname, int flags, mode_t mode);


Here pathname, flags, and mode are the parameters.
Note that:
 We can't pass the parameters directly as in an ordinary function call.
 In kernel mode, a function call works differently.
The kernel does not run in the normal address space that the process created, so we cannot simply place the
parameters on top of the user stack; that stack is not directly available to the kernel for processing. We therefore
have to adopt other methods to pass the parameters to the kernel of the OS.
This can be done through:
 Passing parameters in registers.
 Passing the address of a block of parameters in a register.
 Pushing parameters onto a stack.
Let us discuss each method in detail:
1. Passing Parameters in Registers
 It is the simplest method of the three.
 Here we pass the parameters directly in registers.
 However, it is limited when the number of parameters is greater than the number of registers.
 Here is the C program code:
// Passing parameters in registers.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    const char* pathname = "example.txt";
    int flags = O_RDONLY;
    mode_t mode = 0644;

    int fd = open(pathname, flags, mode);

    // in the function call open(), the parameters pathname, flags, and mode
    // are passed to the kernel directly (in registers)

    if (fd == -1) {
        perror("Error opening file");
        return 1;
    }

    // File operations here...

    close(fd);
    return 0;
}
2. Address of the Block is Passed as a Parameter
 It can be applied when the number of parameters is greater than the number of registers.
 Parameters are stored in a block, or table, in memory.
 The address of the block is passed in a register as a parameter.
 This method is most commonly used in Linux and Solaris.
 Here is the C program code:
//Address of the block is passed as a parameter

#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>

// Parameters grouped into a single block of memory
struct open_params {
    const char *pathname;
    int flags;
    mode_t mode;
};

int main() {
    // Block of data (parameters)
    struct open_params params = { "example.txt", O_RDONLY, 0644 };

    // Conceptually, the address of this block is what is handed to the kernel.
    // Linux's actual convention passes each value in a register, so the
    // syscall() wrapper below unpacks the block. (SYS_open exists on x86-64;
    // some newer architectures provide only SYS_openat.)
    int fd = syscall(SYS_open, params.pathname, params.flags, params.mode);

    if (fd == -1) {
        perror("Error opening file");
        return 1;
    }

    // File operations here...

    close(fd);
    return 0;
}
3. Parameters Are Pushed onto a Stack
 In this method, parameters are pushed onto a stack by the program and popped off by the operating
system.
 The kernel can then easily access the data by retrieving it from the top of the stack.
 Here is the C program code (note that Linux on x86-64 actually passes system-call parameters in
registers, so the inline-assembly sketch below shows the trap into the kernel with register passing):
// invoking the system call directly; Linux x86-64 passes the parameters in
// registers (rdi, rsi, rdx), while a stack-based convention would push them here

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main() {
    const char *pathname = "example.txt";
    long flags = O_RDONLY;
    long mode = 0644;
    long fd;

    __asm__ volatile(
        "syscall"                  // trap into the kernel
        : "=a" (fd)                // return value comes back in rax
        : "a" (2L),                // rax = 2: SYS_open on x86-64
          "D" (pathname),          // rdi = pathname
          "S" (flags),             // rsi = flags
          "d" (mode)               // rdx = mode
        : "rcx", "r11", "memory"); // the syscall instruction clobbers rcx and r11

    if (fd < 0) {                  // a raw syscall returns -errno on failure
        fprintf(stderr, "Error opening file\n");
        return 1;
    }

    // File operations here...

    close((int)fd);
    return 0;
}
Advantages of System Calls
 Access to Hardware Resources: System calls allow programs to access hardware resources such as disk
drives, printers, and network devices.
 Memory Management: System calls provide a way for programs to allocate and deallocate memory, as
well as access memory-mapped hardware devices.
 Process Management: System calls allow programs to create and terminate processes, as well as manage
inter-process communication.
 Security: System calls provide a way for programs to access privileged resources, such as the ability to
modify system settings or perform operations that require administrative permissions.
 Standardization: System calls provide a standardized interface for programs to interact with the
operating system, ensuring consistency and compatibility across different hardware platforms and
operating system versions.
Disadvantages of System Call
 Performance Overhead: System calls involve switching between user mode and kernel mode, which can
slow down program execution.
 Security Risks: Improper use or vulnerabilities in system calls can lead to security breaches or
unauthorized access to system resources.
 Error Handling Complexity: Handling errors in system calls, such as resource allocation failures or
timeouts, can be complex and require careful programming.
 Compatibility Challenges: System calls may vary between different operating systems, requiring
developers to write code that works across multiple platforms.
 Resource Consumption: System calls can consume significant system resources, especially in
environments with many concurrent processes making frequent calls.

Privileged and Non-Privileged Instructions in Operating System


In an operating system, instructions executed by the CPU can be classified into privileged and non-privileged
instructions. These classifications help the operating system ensure security, stability, and efficient resource
management.
What are Privileged Instructions?
Privileged instructions are those that can only be executed by the operating system kernel or a privileged process,
such as a device driver. These instructions typically perform operations that require direct access to hardware or
other privileged resources, such as setting up memory mappings or accessing I/O devices. The Instructions that
can run only in Kernel Mode are called Privileged Instructions. Privileged Instructions possess the following
characteristics:
 If any attempt is made to execute a Privileged Instruction in User Mode, it is not executed; the
hardware traps it to the Operating System, treating it as an illegal instruction.
 Before transferring the control to any User Program, it is the responsibility of the Operating System to
ensure that the Timer is set to interrupt. Thus, if the timer interrupts then the Operating System regains
control.
 Thus, any instruction which can modify the contents of the Timer is a Privileged Instruction.
 Privileged Instructions are used by the Operating System to achieve correct operation.
 Various examples of Privileged Instructions include:
o I/O instructions and Halt instructions
o Turn off all Interrupts
o Set the Timer
o Context Switching
o Clear the Memory or Remove a process from the Memory
o Modify entries in the Device-status table
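
As a small illustration (a sketch assuming x86-64 Linux), the program below attempts to execute the
privileged cli instruction (disable interrupts) from user mode; the hardware traps the attempt and the
process is killed with a fatal signal instead of being allowed to run it:

#include <stdio.h>

int main(void)
{
    printf("about to execute a privileged instruction in user mode...\n");

    /* cli (clear interrupt flag) is privileged; in user mode the CPU raises
       a general-protection fault, which the OS delivers as a fatal signal */
    __asm__ volatile ("cli");

    printf("never reached\n");
    return 0;
}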

Role of OS in managing Privileged Instructions


 Access control: The operating system employs access control mechanisms to limit access to privileged
instructions. These mechanisms are restricted to authorized processes or users with elevated privileges.
This guarantees that privileged instructions can only be executed by trusted processes and thwarts
unauthorized access of malicious programs to system resources.
 Memory protection: By deploying memory protection techniques, the operating system restricts
processes from accessing any memory location that belongs to it or other processes. This aids in
preventing tampering with the operating system or other processes by ensuring that processes cannot
do so. Furthermore, it hinders malevolent programs from resulting in system crashes or risking the
system’s safety.
 Interrupt handling: The operating system handles the execution of privileged operations such as system
calls and exceptions through interrupt handling to ensure safety and accuracy. When an interrupt occurs
during the execution of a process, the process's state is saved before control is transferred to a suitable
handler, and restored once the handler completes.
 Virtualization: Using virtualization techniques allows the operating system to create a simulated
environment where processes can execute privileged instructions without having direct access to the
underlying hardware thus creating a more secure and isolated execution environment for privileged
instructions by limiting process access to authorized hardware resources only.

What are Non-Privileged Instructions?


Non-privileged instructions are those that can be executed by any process, including user-level processes. These
instructions are typically used for performing computations, accessing user-level resources such as files and
memory, and managing process control. Non-privileged instructions are executed in user mode, which provides
limited access to system resources and ensures that processes cannot interfere with one another. The
Instructions that can run only in User Mode are called Non-Privileged Instructions.
Various examples of Non-Privileged Instructions include:
 Reading the status of Processor
 Reading the System Time
 Generate any Trap Instruction
 Sending the final printout to the printer
Also, it is important to note that the switch from user (non-privileged) mode to kernel (privileged) mode happens
through a trap or interrupt, while the return from kernel mode to user mode is itself performed by a privileged
instruction.

Differences between Privileged and Non-Privileged Instructions

Criteria                 Privileged Instructions                      Non-Privileged Instructions

Access to Resources      Direct access to system resources            Limited access to system resources

Execution Mode           Executed in kernel mode                      Executed in user mode

Execution Permissions    Require special permissions to execute       Do not require special permissions to execute

Purpose                  Used for performing low-level system         Used for general-purpose computing
                         operations

Risks                    Higher risk of causing system crashes or     Less risky in terms of system crashes or
                         security vulnerabilities                     security vulnerabilities

Advantages of Privileged and Non-Privileged Instructions


 Security: This separation ensures that unauthorized programs cannot access system resources: only the
operating system or other trusted processes can execute privileged instructions, while user programs can
run only non-privileged instructions.
 Performance: Trusted kernel code can access hardware resources directly, reducing overhead and
latency.
 Stability: Non-privileged instructions are limited in accessing system resources while privileged
instructions can only be executed by trusted processes. The distinction between user programs and
critical system functions keeps any harm caused by the user program at bay.
 Flexibility: Operating systems support a variety of applications and hardware devices because they have
both types of instructions. Developers find it easy to create new applications and hardware through an
interface defined within the operating system for compatibility purposes.
 Debugging: With a clear demarcation between trusted and untrusted processes, this makes it easier to
debug the operating system itself plus the applications running on top of it. To resolve issues, developers
must identify them first which is made possible through this separation.

Disadvantages of Privileged and Non-Privileged Instructions


 Overhead: Switching between privileged and non-privileged modes takes time, and frequent mode
switches can noticeably degrade system performance.
 Complexity: Operating systems using such instructions with distinct access levels (privileged or non-
privileged) become extremely complex and challenging to develop and maintain. The necessity of both
types of instructions makes it even harder to design system features and ensure system stability.
 Compatibility: Privileged and non-privileged instructions may bring compatibility problems among
different hardware platforms or operating systems. Diverse implementations of privileged instructions
make it hard to create applications that work across platforms.
 Vulnerabilities: Privileged resources are prime targets for attackers who exploit security flaws in the
operating system.
Chapter-2
System Programming
This chapter provides an overview of the main topics in system programming, focusing on the UNIX environment. It
covers UNIX architecture, the C library, the C compiler, the API and ABI, file and directory management, UNIX
standardization efforts, and finally a comparative analysis of various Unix system implementations.

I. UNIX Architecture
 What is System Programming?
o The art of writing system software
o System software lives at a low level, interfacing directly with the kernel and core system libraries.
o System software includes your shell and your text editor, your compiler and your debugger, your
core utilities and system daemons.
o These components are entirely system software, based on the kernel and the C library.
o Classic examples of traditional system programming include Apache, bash, cp, Emacs, init, gcc,
gdb, glibc, ls, mv, vim, and X.
o The umbrella of system programming often includes kernel development, or at least device
driver writing
o There are three cornerstones to system programming in Linux: system calls, the C library, and
the C compiler. Each deserves an introduction.
o Programming directly interacting with the operating system kernel.
o Focus on low-level resource management, hardware access, and process control.
o Essential for developing system utilities, device drivers, and embedded systems.
 Monolithic Kernel vs. Microkernel:
o Monolithic: All OS services run in kernel space (e.g., Linux).
 Advantages: Performance, simplicity.
 Disadvantages: Stability, security (one crash can bring down the whole system).
o Microkernel: Minimal kernel functionalities, most services run in user space (e.g., Mach, QNX).
 Advantages: Stability, security, modularity.
 Disadvantages: Performance overhead due to inter-process communication.
 Layered Structure of Unix:
o Hardware: The physical computer system.
o Kernel: The core of the OS, responsible for:
 Process management (scheduling, context switching).
 Memory management (virtual memory, allocation).
 File system management (storage, access control).
 Device drivers (hardware interaction).
 Inter-process communication (IPC).
o System Calls: The interface between user-space programs and the kernel.
 Examples: open(), read(), write(), fork(), exec(), exit().
o Shell: A command-line interpreter that allows users to interact with the OS.
o User Applications: Programs built on top of the OS.
 Main Concepts:
o Everything is a file: Treating hardware devices and inter-process communication mechanisms as
files.
o Hierarchical file system: A tree-like structure for organizing files and directories.
o Pipes and Redirection: Connecting processes through pipelines and redirecting input/output.
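The "everything is a file" idea is easy to see in a small sketch: a device node is opened and read with the same
system calls as a regular file (assuming /dev/urandom is present, as on Linux):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[8];

    /* a device is opened and read exactly like a regular file */
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    if (read(fd, buf, sizeof(buf)) == (ssize_t)sizeof(buf)) {
        for (size_t i = 0; i < sizeof(buf); i++)
            printf("%02x", buf[i]);        /* print the random bytes in hex */
        printf("\n");
    }
    close(fd);
    return 0;
}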
II. The C Library (libc)
 Purpose: A set of pre-written functions that provide common functionalities to C programs.
 Standardization: Defined by ANSI/ISO C standards.
 Major Functionality:
o Input/Output (I/O): stdio.h (e.g., printf(), scanf(), fopen(), fread(), fwrite()).
o String Manipulation: string.h (e.g., strcpy(), strlen(), strcmp()).
o Memory Management: stdlib.h (e.g., malloc(), calloc(), free()).
o Mathematics: math.h (e.g., sin(), cos(), sqrt()).
o Time and Date: time.h (e.g., time(), localtime()).
o System Calls (Wrappers): The C library often provides wrappers around system calls, making
them easier to use.
 Implementation Variations: Different Unix systems have different implementations of libc (e.g., glibc on
Linux, musl on Alpine Linux, bionic on Android). These can have slight behavioral differences.
 Static vs. Dynamic Linking:
o Static: The library code is copied into the executable at compile time. Larger executable, but no
external dependencies.
o Dynamic: The library is linked at runtime. Smaller executable, but requires the library to be
present on the system.
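For example (gcc assumed; hello.c is a placeholder source file), the two linking modes look like this:

gcc -o hello hello.c           # default: dynamically linked against libc
gcc -static -o hello hello.c   # static: the library code is copied into the executable
ldd hello                      # lists the shared libraries a dynamically linked binary needs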
III. The C Compiler (gcc/clang)
 Purpose: Translates C source code into executable machine code.
 Compilation Process:
o Preprocessing: Handles preprocessor directives (e.g., #include, #define).
o Compilation: Translates preprocessed code into assembly code.
o Assembly: Converts assembly code into object code (machine code modules).
o Linking: Combines object code modules and libraries to create the final executable.
 Key Compiler Options:
o -c: Compile to object code only (no linking).
o -o <output_file>: Specify the output file name.
o -I <include_path>: Add a directory to the include path.
o -L <library_path>: Add a directory to the library path.
o -l <library_name>: Link with a specific library.
o -Wall: Enable all warnings. Crucial for writing robust code.
o -Werror: Treat warnings as errors.
o -O0, -O1, -O2, -O3: Optimization levels.
o -g: Include debugging information.
 Understanding Linking Errors: Common problems and how to resolve them.
 Compiler Standards Support: -std=c99, -std=c11, -std=gnu99, etc. Specify which C standard the compiler
should adhere to.
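Putting the options together, a typical build might look like this (file and directory names are placeholders):

gcc -std=c11 -Wall -Werror -O2 -g -Iinclude -c src/main.c -o main.o
gcc -o app main.o -Llib -lm    # link main.o against the math library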
IV. API and ABI
 API (Application Programming Interface): The set of functions, procedures, data structures, and
protocols that a software component provides to support requests from other software components. It
is a source-level interface.
 ABI (Application Binary Interface): The low-level interface between two binary program modules. It
defines:
o Data types: Sizes, alignment, and representations of data.
o Calling conventions: How arguments are passed to functions, and how return values are
returned.
o System call conventions: How user-space programs make requests to the kernel.
o Object file format: Structure and format of object files and executables.
 Importance: ABI compatibility is crucial for ensuring that pre-compiled libraries and executables can run
correctly across different versions of a system or on different systems. An incompatible ABI will result in
crashes or undefined behavior.
 Impact of Standards: Standards like POSIX and the Single UNIX Specification aim to standardize both APIs
and ABIs to promote portability.
 Example Differences: Word sizes (32-bit vs. 64-bit), endianness (big-endian vs. little-endian), structure
packing.
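These differences are easy to observe from C. The sketch below prints values that are fixed by the ABI rather
than by the C language (on an LP64 system such as 64-bit Linux, long and pointers are 8 bytes; on ILP32 they
are 4):

#include <stdio.h>
#include <stddef.h>

struct sample {
    char c;     /* 1 byte, but padding follows so that l is properly aligned */
    long l;
};

int main(void)
{
    printf("sizeof(long)          = %zu\n", sizeof(long));
    printf("sizeof(void *)        = %zu\n", sizeof(void *));
    printf("offsetof(sample, l)   = %zu\n", offsetof(struct sample, l));
    printf("sizeof(struct sample) = %zu\n", sizeof(struct sample));
    return 0;
}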
V. Files and Directories
 File System Hierarchy:
o /: Root directory.
o /bin: Essential user command binaries.
o /sbin: Essential system administration binaries.
o /usr: User-related programs and data.
o /etc: System configuration files.
o /home: User home directories.
o /tmp: Temporary files.
o /var: Variable data (logs, databases).
 File Attributes:
o Name, size, modification time, permissions (read, write, execute).
o User ID (UID), Group ID (GID).
o File type (regular file, directory, symbolic link, device file, etc.).
 System Calls for File Manipulation (several of these are exercised in the sketch at the end of this section):
o open(): Open a file for reading, writing, or both.
o close(): Close a file descriptor.
o read(): Read data from a file.
o write(): Write data to a file.
o lseek(): Move the file pointer.
o stat(), fstat(), lstat(): Get file status information.
o unlink(): Delete a file.
o rename(): Rename a file.
o chmod(): Change file permissions.
o chown(): Change file owner and group.
 System Calls for Directory Manipulation:
o mkdir(): Create a directory.
o rmdir(): Remove a directory.
o opendir(): Open a directory stream.
o readdir(): Read a directory entry.
o closedir(): Close a directory stream.
o chdir(): Change the current working directory.
o getcwd(): Get the current working directory.
 File Permissions and Access Control: Understanding the Unix permission model (user, group, others;
read, write, execute). Using chmod and chown.
 Symbolic Links vs. Hard Links: Differences and use cases.
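A minimal sketch exercising several of the calls listed above (assumes a POSIX system; the file name demo.txt is illustrative):
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <dirent.h>

int main(void) {
    /* Create a file, write to it, and query its attributes with stat(). */
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }
    write(fd, "hello\n", 6);
    close(fd);

    struct stat sb;
    if (stat("demo.txt", &sb) == 0)
        printf("demo.txt: %lld bytes, mode %o\n",
               (long long)sb.st_size, (unsigned)(sb.st_mode & 0777));

    /* List the current directory with opendir()/readdir(). */
    DIR *dir = opendir(".");
    if (dir != NULL) {
        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL)
            printf("%s\n", entry->d_name);
        closedir(dir);
    }

    unlink("demo.txt"); /* delete the file again */
    return 0;
}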
VI. Unix Standardization
 Motivation: To ensure portability of applications across different Unix systems.
 Key Standards:
o ISO-C (ANSI C): Standardizes the C programming language itself. Ensures that C code written
according to the standard will compile and run correctly on any platform with a conforming C
compiler.
o IEEE POSIX (Portable Operating System Interface): Defines a set of APIs and command-line
interfaces for Unix-like operating systems. Focuses on operating system services, such as file I/O,
process management, and inter-process communication.
o The Single UNIX Specification (SUS): A stricter superset of POSIX. It defines a complete system,
including the C library, system calls, and utilities. A system that conforms to SUS is branded as
"UNIX."
o FIPS (Federal Information Processing Standards): Standards developed by the U.S. National
Institute of Standards and Technology (NIST) for use in U.S. federal government computer
systems. Often references or incorporates other standards (like POSIX).
 Relationship between Standards:
o ISO C defines the C programming language.
o POSIX defines a set of operating system interfaces that are typically accessed through the C
programming language (using the C library).
o SUS builds upon POSIX, adding further requirements and specifications for a complete Unix
system.
 Impact on System Programming: Standards provide a common baseline for developers, making it easier
to write portable code.
 Limitations of Standards: Standards can sometimes lag behind the latest technological developments.
Also, not all Unix systems fully conform to all standards.
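As a small illustration of how these standards appear in code, a program can ask both the headers and the running system which POSIX revision they support (the feature-test macro _POSIX_C_SOURCE must be defined before any #include):
#define _POSIX_C_SOURCE 200809L /* request POSIX.1-2008 interfaces */
#include <stdio.h>
#include <unistd.h>

int main(void) {
#ifdef _POSIX_VERSION
    printf("compile-time _POSIX_VERSION: %ld\n", (long)_POSIX_VERSION);
#endif
    printf("run-time sysconf(_SC_VERSION): %ld\n", sysconf(_SC_VERSION));
    return 0;
}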
VII. Unix System Implementations
 BSD (Berkeley Software Distribution):
o Descended from the original Unix developed at Bell Labs.
o Known for its permissive licensing, which has allowed it to be incorporated into many other
systems.
o Examples: FreeBSD, NetBSD, OpenBSD, macOS (Darwin kernel).
 Linux:
o An open-source Unix-like operating system kernel.
o Developed by Linus Torvalds.
o Used in a wide range of devices, from embedded systems to servers.
o Often combined with the GNU userland tools (e.g., bash, coreutils, gcc).
 macOS (previously OS X):
o Based on the Darwin kernel, which is a hybrid kernel derived from BSD and Mach.
o Provides a user-friendly graphical interface on top of a Unix-like operating system.
 Solaris:
o Developed by Sun Microsystems (now Oracle).
o Known for its advanced features, such as ZFS file system and DTrace dynamic tracing.
 Comparative Analysis:
o Kernel architecture: Monolithic vs. hybrid vs. microkernel.
o File system: Ext4 (Linux), ZFS (Solaris), APFS (macOS), UFS (BSD).
o Process management: Scheduling algorithms, memory management techniques.
o Security features: Access control mechanisms, security modules.
o Standards compliance: POSIX, SUS, etc.
VIII. Limits, Options, and Primitive System Datatypes
 Limits: Maximum values for various system resources (e.g., maximum number of open files, maximum
size of a file).
o Compile-time constants are defined in limits.h (e.g., OPEN_MAX, ARG_MAX, CHILD_MAX).
o Per-process resource limits are examined and changed at run time with getrlimit() and setrlimit().
 Options: Features supported by a specific Unix system implementation.
o Determined using sysconf(), pathconf(), and fpathconf().
o Defined in unistd.h.
o Examples: _POSIX_VERSION, _XOPEN_VERSION.
 Primitive System Datatypes: Integer and other data types defined by POSIX standards to ensure
portability across different architectures.
o Defined in <sys/types.h> and other system headers.
o Examples: pid_t, uid_t, gid_t, off_t, size_t, ssize_t, time_t.
 Importance: Using limits, options, and standard data types ensures that your code is more portable and
robust.
 Example Scenario: Determining the maximum length of a file name on a specific file system.
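For this scenario, a minimal sketch using pathconf(), with the compile-time constant from limits.h as a fallback:
#include <stdio.h>
#include <unistd.h>
#include <limits.h>

int main(void) {
    /* Maximum file-name length on the file system holding ".". */
    long name_max = pathconf(".", _PC_NAME_MAX);
    if (name_max == -1)
        name_max = NAME_MAX; /* fall back to the compile-time limit */
    printf("NAME_MAX here: %ld\n", name_max);

    /* A run-time limit queried with sysconf(). */
    printf("max open files (_SC_OPEN_MAX): %ld\n", sysconf(_SC_OPEN_MAX));
    return 0;
}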
IX. Conflicts Between Standards and Implementations
 Version Differences: Older systems may not fully implement newer versions of POSIX or SUS.
 Optional Features: POSIX allows some features to be optional. An application relying on an optional
feature may not be portable.
 Implementation-Specific Extensions: Unix systems often add their own extensions to the standard APIs.
These extensions can provide additional functionality, but they can also reduce portability.
 Conflicting Definitions: Sometimes, different standards (or different versions of the same standard) may
define the same function or constant with conflicting meanings.
 Practical Implications: Porting code between different Unix systems can be challenging due to these
conflicts. Careful testing and conditional compilation may be necessary. Use Autoconf/Automake to
automatically configure your build environment.
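One common pattern is to isolate implementation differences behind the compiler's predefined platform macros. A minimal sketch:
#include <stdio.h>

int main(void) {
    /* These macros are predefined by the compiler on each platform. */
#if defined(__linux__)
    printf("compiled for Linux\n");
#elif defined(__APPLE__)
    printf("compiled for macOS\n");
#elif defined(__FreeBSD__)
    printf("compiled for FreeBSD\n");
#else
    printf("unknown platform\n");
#endif
    return 0;
}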
Chapter-3
Process Management
3.1. Process Introduction
3.2. Process Creation and Deletion
3.3. States of a Process
3.4. Process Implementation (Process Table and Control Block)
3.5. Types of Processes in Process Table
3.6. Inter Process Communication
3.7. Multithreading
3.8. CPU Scheduling
3.9. Deadlock

3.1. Process introduction


A process is a program in execution. For example, when we write a program in C or C++ and compile it, the
compiler creates binary code. The original code and binary code are both programs. When we actually run the
binary code, it becomes a process.
 A process is an 'active' entity instead of a program, which is considered a 'passive' entity.
 A single program can create many processes when run multiple times; for example, when we open a .exe or binary file multiple times, multiple instances begin (multiple processes are created).
How Does a Process Look Like in Memory?
A process in memory is divided into several distinct sections, each serving a different purpose. Here's how a
process typically looks in memory:

 Text Section: The text or code segment contains the executable instructions. It is typically read-only.
 Stack: The stack contains temporary data, such as function parameters, return addresses, and local variables.
 Data Section: Contains global and static variables.
 Heap Section: Memory dynamically allocated to the process at run time.
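A small sketch that makes these sections visible (the printed addresses are illustrative and vary between runs, especially with address space layout randomization):
#include <stdio.h>
#include <stdlib.h>

int global_counter = 42; /* data section (initialized global variable) */

int main(void) { /* the compiled code itself lives in the text section */
    int local = 7; /* stack: local variable */
    int *heap_ptr = malloc(sizeof(int)); /* heap: dynamic allocation */

    printf("data address: %p\n", (void *)&global_counter);
    printf("stack address: %p\n", (void *)&local);
    printf("heap address: %p\n", (void *)heap_ptr);

    free(heap_ptr);
    return 0;
}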

Attributes of a Process
A process has several important attributes that help the operating system manage and control it. These
attributes are stored in a structure called the Process Control Block (PCB) (sometimes called a task control block).
The PCB keeps all the key information about the process, including:
1. Process ID (PID): A unique number assigned to each process so the operating system can identify it.
2. Process State: This shows the current status of the process, like whether it is running, waiting, or ready
to execute.
3. Priority and other CPU Scheduling Information: Data that helps the operating system decide which
process should run next, like priority levels and pointers to scheduling queues.
4. I/O Information: Information about input/output devices the process is using.
5. File Descriptors: Information about open files and network connections.
6. Accounting Information: Tracks how long the process has run, the amount of CPU time used, and other
resource usage data.
7. Memory Management Information: Details about the memory space allocated to the process, including
where it is loaded in memory and the structure of its memory layout (stack, heap, etc.).
These attributes in the PCB help the operating system control, schedule, and manage each process effectively.

States of Process
A process is in one of the following states:
 New: Newly Created Process (or) being-created process.
 Ready: After creation, the process moves to the Ready state, i.e. the process is ready for execution.
 Running: Currently running process in CPU (only one process at a time can be under execution in a single
processor).
 Wait (or Block): When a process requests I/O access.
 Complete (or Terminated): The process completed its execution.
 Suspended Ready: When the ready queue becomes full, some processes are moved to a suspended
ready state
 Suspended Block: When the waiting queue becomes full.

3.2. Process Creation and Deletions in Operating Systems


A process is an instance of a program running, and its lifecycle includes various stages such as creation, execution,
and deletion.
 The operating system handles process creation by allocating necessary resources and assigning each
process a unique identifier.
 Process deletion involves releasing resources once a process completes its execution.
 Processes are often organized in a hierarchy, where parent processes create child processes, forming a
tree-like structure.

Process Creation
As discussed above, processes in most operating systems (both Windows and Linux) form a hierarchy, so a new process is always created by a parent process. The process that creates the new one is called the parent
process, and the newly created process is called the child process. A process can create multiple new processes
while it’s running by using system calls to create them.
1. When a new process is created, the operating system assigns a unique Process Identifier (PID) to it and
inserts a new entry in the primary process table.
2. Then required memory space for all the elements of the process such as program, data, and stack is allocated
including space for its Process Control Block (PCB).
3. Next, the various values in the PCB are initialized:
 The process identification part is filled with the PID assigned in step (1), along with its parent’s PID.
 The processor register values are mostly filled with zeroes, except for the stack pointer and the program counter. The stack pointer is filled with the address of the stack allocated in step (2), and the program counter is filled with the address of the program entry point.
 The process state is set to ‘New’.
 The priority is lowest by default, but the user can specify any priority during creation.
4. The operating system then links the process into the scheduling queue and changes its state from ‘New’ to ‘Ready’. The process now competes for the CPU.
5. Additionally, the operating system creates other data structures, such as log or accounting files, to keep track of process activity.

Understanding System Calls for Process Creation in UNIX Operating System:


Process creation is achieved through the fork() system call. The new process that gets created is called the child
process, and the one that started it (the one that was already running) is called the parent process. After
the fork() call, you end up with two processes: the parent and the child, both running independently.
 The fork() system call creates a copy of the current process, including all its resources, but with just one
thread.
 The exec() family of calls (execl(), execv(), execlp(), etc.) replaces the current process’s memory with the code and data from a specified executable file. On success it does not return; instead, it “transfers” the process to the new program.
 The waitpid() function makes the parent process wait until a specific child process finishes executing.
Process creation in Unix
Example:
#include <sys/types.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
    {
        /* Child process: replace its image with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(1); /* reached only if exec fails */
    }
    else
    {
        /* Parent process: wait for the child to terminate */
        int status;
        waitpid(pid, &status, 0);
    }
    return 0;
}

Understanding System Calls for Process Creation in Windows Operating System:


In Windows, the system call used for process creation is CreateProcess(). This function is responsible for creating
a new process, initializing its memory, and loading the specified program into the process’s address space.
 CreateProcess() in Windows combines the functionality of both UNIX’s fork() and exec(). It creates a new
process with its own memory space rather than duplicating the parent process like fork() does. It also
allows specifying which program to run, similar to how exec() works in UNIX.
 When you use CreateProcess(), you need to provide some extra details to handle any changes between
the parent and child processes. These details control things like the process’s environment, security
settings, and how the child process works with the parent or other processes. It gives you more control
and flexibility compared to the UNIX system.
Process Deletion
Processes terminate themselves when they finish executing their last statement, after which the operating system uses the exit() system call to delete their context. All the resources held by that process, like physical and virtual memory, I/O buffers, open files, etc., are then reclaimed by the operating system. A process P can be terminated either by the operating system or by the parent process of P.
A parent may terminate a process due to one of the following reasons:
1. When the task given to the child is no longer required.
2. When the child has taken more resources than its limit.
3. When the parent of the process is exiting; as a result, all its children are deleted. This is called cascaded termination.
A process can be terminated/deleted in many ways. Some of the ways are:
1. Normal termination: The process completes its task and calls an exit() system call. The operating system
cleans up the resources used by the process and removes it from the process table.
2. Abnormal termination/Error exit: A process may terminate abnormally if it encounters an error or needs
to stop immediately. This can happen through the abort() system call.
3. Termination by parent process: A parent process may terminate a child process when the child finishes its task. This is done using the kill() system call.
4. Termination by signal: The parent process can also send specific signals like SIGSTOP to pause the child
or SIGKILL to immediately terminate it.

3.3. States of a Process


In an operating system, a process is a program that is being executed. During its execution, a process goes through different states. Understanding these states helps us see how the operating system manages processes, ensuring that the computer runs efficiently.
Each process goes through several stages throughout its life cycle. In this section, we discuss the different states of a process in detail.
Process Lifecycle
When you run a program (which becomes a process), it goes through different phases before its completion. These phases, or states, can vary depending on the operating system, but the most common process lifecycle includes two, five, or seven states. Here’s a simple explanation of these states:
The Two-State Model
The simplest way to think about a process’s lifecycle is with just two states:
1. Running: This means the process is actively using the CPU to do its work.
2. Not Running: This means the process is not currently using the CPU. It could be waiting for something,
like user input or data, or it might just be paused.

Two State Process Model


When a new process is created, it starts in the not running state. Initially, the process waits in a queue until a scheduling program called the dispatcher selects it for the CPU.
Here’s what happens step by step:
1. Not Running State: When the process is first created, it is not using the CPU.
2. Dispatcher Role: The dispatcher checks if the CPU is free (available for use).
3. Moving to Running State: If the CPU is free, the dispatcher lets the process use the CPU, and it moves
into the running state.
4. CPU Scheduler Role: When the CPU is available, the CPU scheduler decides which process gets to run
next. It picks the process based on a set of rules called the scheduling scheme, which varies from one
operating system to another.

The Five-State Model


The five-state process lifecycle is an expanded version of the two-state model. The two-state model works well
when all processes in the not running state are ready to run. However, in some operating systems, a process may
not be able to run because it is waiting for something, like input or data from an external device. To handle this
situation better, the not running state is divided into two separate states:

Five state Process Model


Here’s a simple explanation of the five-state process model:
 New: This state represents a newly created process that hasn’t started running yet. It has not been
loaded into the main memory, but its process control block (PCB) has been created, which holds
important information about the process.
 Ready: A process in this state is ready to run as soon as the CPU becomes available. It is waiting for the
operating system to give it a chance to execute.
 Running: This state means the process is currently being executed by the CPU. Since we’re assuming
there is only one CPU, at any time, only one process can be in this state.
 Blocked/Waiting: This state means the process cannot continue executing right now. It is waiting for
some event to happen, like the completion of an input/output operation (for example, reading data from
a disk).
 Exit/Terminate: A process in this state has finished its execution or has been stopped by the user for
some reason. At this point, it is released by the operating system and removed from memory.

The Seven-State Model


The states of a process are as follows:
 New State: The process is about to be created but does not yet exist as a process; it is the program in secondary memory that will be picked up by the OS to create the process.
 Ready State: New -> Ready to run. After the creation of a process, the process enters the ready state i.e.
the process is loaded into the main memory. The process here is ready to run and is waiting to get the
CPU time for its execution. Processes that are ready for execution by the CPU are maintained in a queue
called a ready queue for ready processes.
 Run State: The process is chosen from the ready queue by the OS for execution and the instructions
within the process are executed by any one of the available processors.
 Blocked or Wait State: Whenever the process requests I/O, needs input from the user, or needs access to a critical region (whose lock is already acquired), it enters the blocked or wait state. The process continues to wait in main memory and does not require the CPU. Once the I/O operation is completed, the process goes to the ready state.
 Terminated or Completed State: Process is killed as well as PCB is deleted. The resources allocated to
the process will be released or deallocated.
 Suspend Ready: A process that was initially in the ready state but was swapped out of main memory (refer to the Virtual Memory topic) and placed in external storage by the scheduler is said to be in the suspend ready state. It transitions back to the ready state whenever it is brought into main memory again.
 Suspend Wait or Suspend Blocked: Similar to suspend ready, but applies to a process that was performing an I/O operation when a shortage of main memory caused it to be moved to secondary memory. When its work is finished, it may move to the suspend ready state.

 CPU and I/O Bound Processes: If a process is intensive in terms of CPU operations, it is called a CPU-bound process. Similarly, if a process is intensive in terms of I/O operations, it is called an I/O-bound process.
How Does a Process Move From One State to Other State?
A process can move between different states in an operating system based on its execution status and resource
availability. Here are some examples of how a process can move between different states:
 New to Ready: When a process is created, it is in a new state. It moves to the ready state when the
operating system has allocated resources to it and it is ready to be executed.
 Ready to Running: When the CPU becomes available, the operating system selects a process from the
ready queue depending on various scheduling algorithms and moves it to the running state.
 Running to Blocked: When a process needs to wait for an event to occur (I/O operation or system call),
it moves to the blocked state. For example, if a process needs to wait for user input, it moves to the
blocked state until the user provides the input.
 Running to Ready: When a running process is preempted by the operating system, it moves to the ready
state. For example, if a higher-priority process becomes ready, the operating system may preempt the
running process and move it to the ready state.
 Blocked to Ready: When the event a blocked process was waiting for occurs, the process moves to the
ready state. For example, if a process was waiting for user input and the input is provided, it moves to
the ready state.
 Running to Terminated: When a process completes its execution or is terminated by the operating
system, it moves to the terminated state.

3.4. Process Implementation


Process Table and Process Control Block (PCB)

While creating a process, the operating system performs several operations. To identify the processes, it assigns
a process identification number (PID) to each process. As the operating system supports multi-programming, it
needs to keep track of all the processes. For this task, the process control block (PCB) is used to track the process’s
execution status. Each PCB contains information about the process state, program counter, stack pointer, status of opened files, scheduling information, etc.
All this information is required and must be saved when the process is switched from one state to another. When
the process makes a transition from one state to another, the operating system must update information in the
process’s PCB. A Process Control Block (PCB) contains information about the process, i.e. registers, quantum,
priority, etc. The Process Table is an array of PCBs, which logically contains a PCB for all of the current processes
in the system.

Structure of the Process Control Block


A Process Control Block (PCB) is a data structure used by the operating system to manage information about a process. The PCB keeps track of many important pieces of information needed to manage processes efficiently, including the following key data items.

 Pointer: It is a stack pointer that is required to be saved when the process is switched from one state to
another to retain the current position of the process.
 Process state: It stores the respective state of the process.
 Process number: Every process is assigned a unique id known as process ID or PID which stores the
process identifier.
 Program counter: Program Counter stores the counter, which contains the address of the next
instruction that is to be executed for the process.
 Registers: When a process is running and its time slice expires, the current values of the process-specific registers are saved in the PCB and the process is swapped out. When the process is scheduled to run again, these register values are read from the PCB and written back to the CPU registers. This is the main purpose of the registers field in the PCB.
 Memory limits: This field contains information for the memory-management system used by the operating system, such as page tables or segment tables.
 List of Open files: This information includes the list of files opened for a process.

Additional Points to Consider for Process Control Block (PCB)


 Interrupt Handling: The PCB also contains information about the interrupts that a process may have
generated and how they were handled by the operating system.
 Context Switching: The process of switching from one process to another is called context switching. The
PCB plays a crucial role in context switching by saving the state of the current process and restoring the
state of the next process.
 Real-Time Systems: Real-time operating systems may require additional information in the PCB, such as
deadlines and priorities, to ensure that time-critical processes are executed in a timely manner.
 Virtual Memory Management: The PCB may contain information about a process’s virtual memory management, such as page tables and page-fault handling.
 Fault Tolerance: Some operating systems may use multiple copies of the PCB to provide fault tolerance
in case of hardware failures or software errors.

Location of the Process Control Block


The Process Control Block (PCB) is stored in a special part of memory that normal users can’t access. This is
because it holds important information about the process. Some operating systems place the PCB at the start of
the kernel stack for the process, as this is a safe and secure spot.
Advantages
Advantages of Process Table
 Keeps Track of Processes: It helps the operating system know which processes are running, waiting, or
completed.
 Helps in Scheduling: The process table provides information needed to decide which process should run
next.
 Easy Process Management: It organizes all the details about processes in one place, making it simple for
the OS to manage them.
Advantages of Process Control Block (PCB)
 Stores Process Details: PCB keeps all the important information about a process, like its state, ID, and
resources it uses.
 Helps Resume Processes: When a process is paused, PCB saves its current state so it can continue later
without losing data.
 Ensures Smooth Execution: By storing all the necessary details, PCB helps the operating system run
processes efficiently and without interruptions.
Disadvantages
Disadvantages of Process Table
 Takes Up Memory: The process table needs space to store information about all processes, which can use a lot of memory in systems with many processes.
 Slower Operations: When there are too many processes, searching or updating the table can take more time, slowing down the system.
 Extra Work for the System: The operating system has to constantly update the process table, which adds extra work and can reduce overall system performance.
Disadvantages of Process Control Block (PCB)
 Uses More Memory: Each process needs its own PCB, so having many processes can consume a lot of memory.
 Slows Context Switching: During context switching, the system has to update the PCB of the old process and load the PCB of the new one, which takes time and affects performance.
 Security Risks: If the PCB is not well-protected, someone could access or modify it, causing security problems for processes.

3.5. Types of Processes in Process Table


The process table is a data structure used by the operating system to keep track of all processes. It is the collection of Process Control Blocks (PCBs), which contain information about each process, such as its ID (PID), current state (e.g., running, ready, waiting), CPU usage, memory allocation, and open files.
 The process table helps the operating system manage processes efficiently by tracking their progress and
resource usage.
 Each entry in the table corresponds to one process, and the operating system updates it as the process moves through different states in its lifecycle.
Process states
The states of a process represent the different stages a process goes through during its lifecycle in an operating
system. A process can be in states such as new, ready, running, waiting, or terminated, depending on what it is
doing and the resources it needs. These states help the operating system manage processes efficiently, ensuring
that tasks are executed smoothly and system resources are used effectively.
Different types of processes in process table can be:
1. New Process
2. Ready Process
3. Running Process
4. Blocked / Waiting Process
5. Terminated Process
6. Zombie Process
7. Orphan Process
8. Daemon Process
1. New (Created) Process
A process that has been created but is not yet ready for execution. It remains in this state until the operating
system assigns the necessary resources.
2. Ready Process
A process that has been loaded into main memory and is ready to run, waiting to get CPU time for its execution. Processes that are ready for execution by the CPU are maintained in a queue called the ready queue.
3. Running Process
A process that is currently being executed by the CPU. Only one process (or more in multicore systems) can be
in the running state at a time.
4. Blocked (Waiting) Process
A process that is waiting for an event to occur, such as input/output (I/O) completion or a resource becoming
available. It cannot continue execution until the required condition is met.
5. Terminated Process
A process that has completed its execution or has been explicitly killed. It remains in the process table briefly
until the operating system removes it.
6. Zombie Process
A zombie process is a process that has completed its execution but still remains in the process table because its
parent process has not yet read its exit status. It is called a "zombie" because it is no longer active or running,
but it still exists as a placeholder in the system.
The entry of the child process remains in the process table until the parent retrieves its exit status (e.g., via wait()). During this time, the child process is referred to as a zombie process. This happens because the operating system keeps the process table entry so the parent can gather information about the terminated child.
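A minimal sketch that manufactures a zombie on a Unix system (while the parent sleeps, the exited child appears as "defunct" in the output of ps):
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        _exit(0); /* child terminates immediately */

    /* The parent deliberately does not call wait(), so the child's
       process table entry lingers as a zombie until the parent exits. */
    sleep(30);
    return 0;
}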
7. Orphan Process
An orphan process is a child process that is still executing but whose parent has finished its execution and terminated, leaving the process table without waiting for the child to finish. Orphan processes remain active and continue to run normally, but they no longer have their original parent process to monitor or control them.
8. Daemon Process
A daemon process is a background process that runs independently of any user control and performs specific
tasks for the system. Daemons are usually started when the system starts, and they run until the system stops.
A daemon process typically performs system services and is available at all times to more than one task or user.
Daemon processes are often started by the root user or an init system with root privileges; such daemons can be stopped only by the root user.
3.6. Inter Process Communication (IPC)
Processes need to communicate with each other in many situations. For example, to count occurrences of a word in a text file, the output of the grep command needs to be given to the wc command, something like grep -o -i <word> <file> | wc -l. Inter-Process Communication (IPC) is a mechanism that allows processes to communicate. It helps processes synchronize their activities, share information, and avoid conflicts while accessing shared resources.

Types of Process
Let us first talk about the types of processes.
 Independent process: An independent process is not affected by the execution of other processes. Independent processes do not share any data or resources with other processes. No inter-process communication is required here.
 Co-operating process: Co-operating processes interact with each other and share data or resources. A co-operating process can
be affected by other executing processes. Inter-process communication (IPC) is a mechanism that allows
processes to communicate with each other and synchronize their actions. The communication between
these processes can be seen as a method of cooperation between them.

3.6.1. Inter Process Communication


Inter process communication (IPC) allows different programs or processes running on a computer to share
information with each other. IPC allows processes to communicate by using different techniques like sharing
memory, sending messages, or using files. It ensures that processes can work together without interfering with
each other. Cooperating processes require an Inter Process Communication (IPC) mechanism that will allow them
to exchange data and information.
The two fundamental models of Inter Process Communication are:
 Shared Memory
 Message Passing
In the shared memory method, processes communicate through a common region of memory; in the message passing method, they exchange messages through the kernel.
An operating system can implement both methods of communication. First, we will discuss the shared memory
methods of communication and then message passing. Communication between processes using shared memory
requires processes to share some variable, and it completely depends on how the programmer will implement
it. One way of communication using shared memory can be imagined like this: Suppose process1 and process2
are executing simultaneously, and they share some resources or use some information from another process.
Process1 generates information about certain computations or resources being used and keeps it as a record in
shared memory. When process2 needs to use the shared information, it will check in the record stored in shared
memory and take note of the information generated by process1 and act accordingly. Processes can use shared
memory for extracting information as a record from another process as well as for delivering any specific
information to other processes.
Methods in Inter process Communication
Inter-Process Communication refers to the techniques and methods that allow processes to exchange data and
coordinate their activities. Since processes typically operate independently in a multitasking environment, IPC is
essential for them to communicate effectively without interfering with one another. There are several methods
of IPC, each designed to suit different scenarios and requirements. These methods include shared memory,
message passing, semaphores, and signals, etc.

Shared Memory
IPC through Shared Memory is a method where multiple processes are given access to the same region of
memory. This shared memory allows the processes to communicate with each other by reading and writing data
directly to that memory area.
Shared memory in IPC can be visualized as being like global variables, which are shared across an entire program. However, shared memory in IPC goes beyond global variables: it allows multiple processes to share data through a common memory space, whereas global variables are restricted to a single process.
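A minimal sketch of the shared memory model using an anonymous shared mapping (MAP_ANONYMOUS is a widespread extension on Linux and the BSDs rather than strict POSIX):
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void) {
    /* A region of memory shared between parent and child. */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED)
        return 1;

    *shared = 0;
    if (fork() == 0) {
        *shared = 123; /* child writes into the shared region */
        _exit(0);
    }
    wait(NULL); /* parent waits for the child, then reads the value */
    printf("parent read: %d\n", *shared);
    munmap(shared, sizeof(int));
    return 0;
}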

Message Passing
IPC through Message Passing is a method where processes communicate by sending and receiving messages to
exchange data. In this method, one process sends a message, and the other process receives it, allowing them to
share information. Message Passing can be achieved through different methods like Sockets, Message Queues
or Pipes.
Sockets provide an endpoint for communication, allowing processes to send and receive messages over a
network. In this method, one process (the server) opens a socket and listens for incoming connections, while the
other process (the client) connects to the server and sends data. Sockets can use different communication
protocols, such as TCP (Transmission Control Protocol) for reliable, connection-oriented communication
or UDP (User Datagram Protocol) for faster, connectionless communication.
Different methods of Inter process Communication (IPC) are as follows:
1. Pipes – A pipe is a unidirectional communication channel used for IPC between two related processes. One process writes to the pipe, and the other process reads from it (a working sketch follows this list). Types of pipes are anonymous pipes and named pipes (FIFOs).
2. Sockets – Sockets are used for network communication between processes running on different hosts.
They provide a standard interface for communication, which can be used across different platforms and
programming languages.
3. Shared memory – In shared memory IPC, multiple processes are given access to a common memory
space. Processes can read and write data to this memory, enabling fast communication between them.
4. Semaphores – Semaphores are used for controlling access to shared resources. They are used to prevent
multiple processes from accessing the same resource simultaneously, which can lead to data corruption.
5. Message Queuing – This allows messages to be passed between processes using either a single queue or several message queues. The queues are managed by the system kernel, and processes send and receive messages through an API.
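As referenced in the list above, here is a minimal sketch of a pipe between a parent and a child process (assumes a POSIX system):
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[32];

    pipe(fd); /* fd[0] is the read end, fd[1] is the write end */
    if (fork() == 0) {
        close(fd[0]); /* the child only writes */
        write(fd[1], "hello", 6);
        _exit(0);
    }
    close(fd[1]); /* the parent only reads */
    read(fd[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    wait(NULL);
    return 0;
}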

Inter Process Communication across the System


Inter-Process Communication (IPC) across the system refers to the methods that allow processes to communicate
and exchange data, even when they are running on different machines or in a distributed environment.
Using Remote Procedure calls
Remote Procedure Calls (RPC) allows a program to call a procedure (or function) on another machine in a
network, as though it were a local call. It abstracts the details of communication and makes distributed systems
easier to use. RPC is a technique used for distributed computing. It allows processes running on different hosts
to call procedures on each other as if they were running on the same host.
Using Remote Method Invocation
Remote Method Invocation (RMI) is a Java-based technique used for Inter-Process Communication (IPC) across
systems, specifically for calling methods on objects located on remote machines. It allows a program running on
one computer (the client) to execute a method on an object residing on another computer (the server), as if it
were a local method call.
Each method of IPC has its own advantages and disadvantages, and the choice of which method to use depends
on the specific requirements of the application. For example, if high-speed communication is required between
processes running on the same host, shared memory may be the best choice. On the other hand, if
communication is required between processes running on different hosts, sockets or RPC may be more
appropriate.

Role of Synchronization in IPC


In IPC, synchronization is essential for controlling access to shared resources and guaranteeing that processes do not conflict with one another. Proper synchronization ensures data consistency and avoids problems like race conditions.
Advantages of IPC
 Enables processes to communicate with each other and share resources, leading to increased efficiency
and flexibility.
 Facilitates coordination between multiple processes, leading to better overall system performance.
 Allows for the creation of distributed systems that can span multiple computers or networks.
 Can be used to implement various synchronization and communication protocols, such as semaphores,
pipes, and sockets.
Disadvantages of IPC
 Increases system complexity, making it harder to design, implement, and debug.
 Can introduce security vulnerabilities, as processes may be able to access or modify data belonging to
other processes.
 Requires careful management of system resources, such as memory and CPU time, to ensure that IPC
operations do not degrade overall system performance.
 Can lead to data inconsistencies if multiple processes try to access or modify the same data at the same time.
Overall, the advantages of IPC outweigh the disadvantages, as it is a necessary mechanism for modern operating systems and enables processes to work together and share resources in a flexible and efficient manner. However, care must be taken to design and implement IPC systems carefully, in order to avoid potential security vulnerabilities and performance issues.

3.6.2. Introduction of Process Synchronization


Process Synchronization is used in a computer system to ensure that multiple processes or threads can run
concurrently without interfering with each other.
The main objective of process synchronization is to ensure that multiple processes access shared resources
without interfering with each other and to prevent the possibility of inconsistent data due to concurrent access.
To achieve this, various synchronization techniques such as semaphores, monitors, and critical sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to avoid the
risk of deadlocks and other synchronization problems. Process synchronization is an important aspect of modern
operating systems, and it plays a crucial role in ensuring the correct and efficient functioning of multi-process
systems.
On the basis of synchronization, processes are categorized as one of the following two types:
 Independent Process: The execution of one process does not affect the execution of other processes.
 Cooperative Process: A process that can affect or be affected by other processes executing in the system.
Process synchronization problem arises in the case of Cooperative processes also because resources are shared
in Cooperative processes.

Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a multi-process system to
ensure that they access shared resources in a controlled and predictable manner. It aims to resolve the problem
of race conditions and other synchronization issues in a concurrent system.
Lack of synchronization in an inter-process communication environment leads to the following problems:
1. Inconsistency: When two or more processes access shared data at the same time without proper
synchronization. This can lead to conflicting changes, where one process’s update is overwritten by
another, causing the data to become unreliable and incorrect.
2. Loss of Data: Loss of data occurs when multiple processes try to write or modify the same shared
resource without coordination. If one process overwrites the data before another process finishes,
important information can be lost, leading to incomplete or corrupted data.
3. Deadlock: Lack of Synchronization leads to Deadlock which means that two or more processes get stuck,
each waiting for the other to release a resource. Because none of the processes can continue, the system
becomes unresponsive and none of the processes can complete their tasks.

Types of Process Synchronization


The two primary type of process Synchronization in an Operating System are:
1. Competitive: Two or more processes are said to be in competitive synchronization if and only if they compete for access to a shared resource. Lack of synchronization among competing processes may lead to either inconsistency or loss of data.
2. Cooperative: Two or more processes are said to be in cooperative synchronization if and only if they affect each other, i.e. the execution of one process affects the other. Lack of synchronization among cooperating processes may lead to deadlock.

Example:
Consider the following Linux command pipeline:
ps | grep "chrome" | wc
 The ps command produces the list of processes running on Linux.
 The grep command filters the lines containing "chrome" from the output of ps.
 The wc command counts the lines, words, and characters in its input.
Therefore, three processes are created which are ps, grep and wc. grep takes input from ps and wc takes input
from grep.
From this example, we can understand the concept of cooperative processes, where some processes produce
and others consume, and thus work together. This type of problem must be handled by the operating system, as
it is the manager.
Conditions That Require Process Synchronization
1. Critical Section: It is that part of the program where shared resources are accessed. Only one process
can execute the critical section at a given point of time. If there are no shared resources, then no need
of synchronization mechanisms.
2. Race Condition: A situation wherein processes access the critical section concurrently and the final result depends on the order in which they finish their updates. Process synchronization mechanisms need to ensure that instructions are executed in the required order only.
3. Preemption: Preemption is when the operating system stops a running process to give the CPU to another process. This allows the system to make sure that important tasks get enough CPU time. It matters here because issues mainly arise when a process has not finished its job on a shared resource and gets preempted. The other process might end up reading an inconsistent value if process synchronization is not done.

What is Race Condition?


A race condition is a situation that may occur inside a critical section. This happens when the result of multiple
process/thread execution in the critical section differs according to the order in which the threads execute. Race
conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Also, proper
thread synchronization using locks or atomic variables can prevent race conditions.
Let us consider the following example.
 There is a shared variable balance with value 100.
 There are two processes deposit(10) and withdraw(10). The deposit process does balance = balance + 10 and the withdraw process does balance = balance - 10.
 Suppose these processes run in an interleaved manner. The deposit() fetches the balance as 100, then
gets preempted.
 Now withdraw() get scheduled and makes balance 90.
 Finally deposit is rescheduled and makes the value 110. This value is incorrect, as the balance after both operations should be 100.
We can see that different interleavings of the two processes would give different values of balance.
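The interleaving above can be reproduced with two threads sharing a balance variable. A sketch (compile with gcc -pthread; whether the race is visible on a particular run depends on the scheduler):
#include <stdio.h>
#include <pthread.h>

static int balance = 100; /* shared variable */

static void *deposit(void *arg) {
    for (int i = 0; i < 100000; i++)
        balance = balance + 1; /* not atomic: load, add, store */
    return NULL;
}

static void *withdraw(void *arg) {
    for (int i = 0; i < 100000; i++)
        balance = balance - 1;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, withdraw, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final balance: %d\n", balance); /* expected 100, often not */
    return 0;
}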

3.6.3. Critical Section Problem


A critical section is a code segment that can be accessed by only one process at a time. The critical section
contains shared variables that need to be synchronized to maintain the consistency of data variables. So the
critical section problem means designing a way for cooperative processes to access shared resources without
creating data inconsistencies.
In the above example, the operations that involve the balance variable should be put in the critical sections of both deposit and withdraw. In the entry section, a process requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
 Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to
execute in the critical section.
 Progress: If no process is executing in the critical section and other processes are waiting outside the
critical section, then only those processes that are not executing in their remainder section can
participate in deciding which will enter the critical section next, and the selection cannot be postponed
indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and before that
request is granted.

Classical IPC Problems


Various classical Inter-Process Communication (IPC) problems include:
 Producer Consumer Problem
 Readers-Writers Problem
 Dining Philosophers Problem

Producer Consumer Problem


The Producer-Consumer Problem is a classic example of process synchronization. It describes a situation where two types of processes, producers and consumers, share a common, limited-size buffer or storage.
 Producer: A producer creates or generates data and puts it into the shared buffer. It continues to produce
data as long as there is space in the buffer.
 Consumer: A consumer takes data from the buffer and uses it. It continues to consume data as long as
there is data available in the buffer.
The challenge arises because both the producer and the consumer need to access the same buffer at the same
time, and if they do not properly synchronize their actions, issues can occur.
Key Problems in the Producer-Consumer Problem:
1. Buffer Overflow: If the producer tries to add data when the buffer is full, there will be no space for new
data, causing the producer to be blocked.
2. Buffer Underflow: If the consumer tries to consume data when the buffer is empty, it has nothing to
consume, causing the consumer to be blocked.
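A sketch of one classical remedy: a bounded buffer guarded by two counting semaphores and a mutex (POSIX semaphores, compile with gcc -pthread on Linux; the buffer size N and the item count are illustrative):
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 8 /* buffer capacity (illustrative) */

static int buffer[N];
static int in = 0, out = 0;
static sem_t empty, full; /* count free and filled slots */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty); /* block while the buffer is full */
        pthread_mutex_lock(&lock);
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&full); /* announce a filled slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full); /* block while the buffer is empty */
        pthread_mutex_lock(&lock);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&empty); /* announce a free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N); /* all slots start free */
    sem_init(&full, 0, 0);  /* no slots start filled */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
The semaphores prevent both overflow (the producer blocks on empty) and underflow (the consumer blocks on full), while the mutex protects the buffer indices themselves.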

Readers-Writers Problem
The Readers-Writers Problem is a classic synchronization problem where multiple processes are involved in
reading and writing data from a shared resource. This problem typically involves two types of processes:
 Readers: These processes only read data from the shared resource and do not modify it.
 Writers: These processes modify or write data to the shared resource.
The challenge in the Reader-Writer problem is to allow multiple readers to access the shared data simultaneously
without causing issues. However, only one writer should be allowed to write at a time, and no reader should be
allowed to read while a writer is writing. This ensures the integrity and consistency of the data.

Dining Philosophers Problem


The Dining Philosophers Problem is a well-known example that shows the difficulties of sharing resources and
preventing deadlock when multiple processes are involved. The problem involves a set of philosophers sitting
around a dining table. Each philosopher thinks deeply, but when they are hungry, they need to eat. However, to
eat, they must pick up two forks from the table, one from the left and one from the right.
Problem Setup:
 There are five philosophers sitting around a circular table.
 Each philosopher has a plate of food in front of them and a fork to their left and right.
 A philosopher needs both forks to eat. If they pick up a fork, they hold it until they finish eating.
 After eating, they put down both forks and start thinking again.
The problem arises when multiple philosophers try to pick up forks at the same time, which can lead to a situation
where each philosopher holds one fork but cannot get the second fork, leading to a deadlock. Additionally,
there’s a risk of starvation if some philosophers are continually denied the opportunity to eat.
Advantages of Process Synchronization
 Ensures data consistency and integrity
 Avoids race conditions
 Prevents inconsistent data due to concurrent access
 Supports efficient and effective use of shared resources
Disadvantages of Process Synchronization
 Adds overhead to the system
 This can lead to performance degradation
 Increases the complexity of the system
 Can cause deadlock if not implemented properly.

Critical Section in Synchronization


A critical section is a part of a program where shared resources like memory or files are accessed by multiple
processes or threads. To avoid issues like data inconsistency or race conditions, synchronization techniques
ensure that only one process or thread uses the critical section at a time.
 The critical section contains shared variables or resources that need to be synchronized to maintain the
consistency of data variables.
 In simple terms, a critical section is a group of instructions/statements or regions of code that need to
be executed atomically, such as accessing a resource (file, input or output port, global data, etc.)
In concurrent programming, if one process tries to change the value of shared data at the same time as
another thread tries to read the value (i.e., data race across threads), the result is unpredictable. The
access to such shared variables (shared memory, shared files, shared port, etc.) is to be synchronized.
Few programming languages have built-in support for synchronization. It is critical to understand the importance of race conditions while writing kernel-mode programs (a device driver, kernel thread, etc.), since the programmer can directly access and modify kernel data structures.
Any solution handling code in the critical section should satisfy the following properties:
1. Mutual Exclusion: If process Pi is executing in its critical section, then no other processes can be
executing in their critical sections.
2. Progress: If no process is executing in its critical section and some processes wish to enter their critical
sections, then only those processes that are not executing in their remainder sections can participate in
deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound, or limit, on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical section and
before that request is granted.
Two general approaches are used to handle critical sections:
1. Preemptive kernels: A preemptive kernel allows a process to be preempted while it is running in kernel
mode.
2. Non-preemptive kernels: A non-preemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU. A non-preemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time.
Critical Section Problem
The use of critical sections in a program can cause a number of issues, including:
 Deadlock: When two or more threads or processes wait for each other to release a critical section, it can
result in a deadlock situation in which none of the threads or processes can move. Deadlocks can be
difficult to detect and resolve, and they can have a significant impact on a program’s performance and
reliability.
 Starvation: When a thread or process is repeatedly prevented from entering a critical section, it can
result in starvation, in which the thread or process is unable to progress. This can happen if the critical
section is held for an unusually long period of time, or if a high-priority thread or process is always given
priority when entering the critical section.
 Overhead: When using critical sections, threads or processes must acquire and release locks or
semaphores, which can take time and resources. This may reduce the program’s overall performance.

Critical section
It could be visualized using the pseudo-code below –
do {
    // entry section: request and wait for permission to enter
    // critical section: access the shared resources
    // exit section: release the critical section
    // remainder section: other work
} while (true);

Solution to Critical Section Problem


A simple solution to the critical section can be thought of as shown below,
acquireLock();
Process Critical Section
releaseLock();
A thread must acquire a lock prior to executing a critical section. The lock can be acquired by only one thread.
There are various ways to implement locks in the above pseudo-code.
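One concrete way to realize acquireLock()/releaseLock() is a POSIX mutex. A minimal sketch (compile with gcc -pthread):
#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* acquireLock() */
        shared_counter++;            /* critical section */
        pthread_mutex_unlock(&lock); /* releaseLock() */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", shared_counter); /* reliably 200000 */
    return 0;
}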

Strategies for avoiding problems: While deadlocks, starvation, and overhead are mentioned as potential issues,
there are more specific strategies for avoiding or mitigating these problems. For example, using timeouts to
prevent deadlocks, implementing priority inheritance to prevent priority inversion and starvation, or optimizing
lock implementation to reduce overhead.
Examples of critical sections in real-world applications: Critical sections appear wherever shared state is updated, for example in database management systems (updating a shared record or index) and in web servers (updating shared counters or session tables).
Impact on scalability: The use of critical sections can impact the scalability of a program, particularly in
distributed systems where multiple nodes are accessing shared resources.
In process synchronization, a critical section is a section of code that accesses shared resources such as variables
or data structures, and which must be executed by only one process at a time to avoid race conditions and other
synchronization-related issues.
A critical section can be any section of code where shared resources are accessed, and it typically consists of two
parts: the entry section and the exit section. The entry section is where a process requests access to the critical
section, and the exit section is where it releases the resources and exits the critical section.
To ensure that only one process can execute the critical section at a time, process synchronization mechanisms
such as semaphores and mutexes are used. A semaphore is a variable that is used to indicate whether a resource
is available or not, while a mutex is a semaphore that provides mutual exclusion to shared resources.
When a process enters a critical section, it must first request access to the semaphore or mutex associated with
the critical section. If the resource is available, the process can proceed to execute the critical section. If the
resource is not available, the process must wait until it is released by the process currently executing the critical
section.
Once the process has finished executing the critical section, it releases the semaphore or mutex, allowing another
process to enter the critical section if necessary.
Proper use of critical sections and process synchronization mechanisms is essential in concurrent programming
to ensure proper synchronization of shared resources and avoid race conditions, deadlocks, and other
synchronization-related issues.
Advantages of Critical Section in Process Synchronization
1. Prevents race conditions: By ensuring that only one process can execute the critical section at a time,
race conditions are prevented, ensuring consistency of shared data.
2. Provides mutual exclusion: Critical sections provide mutual exclusion to shared resources, preventing
multiple processes from accessing the same resource simultaneously and causing synchronization-
related issues.
3. Reduces CPU utilization: By allowing processes to wait without wasting CPU cycles, critical sections can
reduce CPU utilization, improving overall system efficiency.
4. Simplifies synchronization: Critical sections simplify the synchronization of shared resources, as only one
process can access the resource at a time, eliminating the need for more complex synchronization
mechanisms.
Disadvantages of Critical Section in Process Synchronization
1. Overhead: Implementing critical sections using synchronization mechanisms like semaphores and
mutexes can introduce additional overhead, slowing down program execution.
2. Deadlocks: Poorly implemented critical sections can lead to deadlocks, where multiple processes are
waiting indefinitely for each other to release resources.
3. Can limit parallelism: If critical sections are too large or are executed frequently, they can limit the
degree of parallelism in a program, reducing its overall performance.
4. Can cause contention: If multiple processes frequently access the same critical section, contention for
the critical section can occur, reducing performance.
Important Points Related to Critical Section in Process Synchronization
1. Understanding the concept of critical section and why it’s important for synchronization.
2. Familiarity with the different synchronization mechanisms used to implement critical sections, such as
semaphores, mutexes, and monitors.
3. Knowledge of common synchronization problems that can arise in critical sections, such as race
conditions, deadlocks, and live locks.
4. Understanding how to design and implement critical sections to ensure proper synchronization of shared
resources and prevent synchronization-related issues.
5. Familiarity with best practices for using critical sections in concurrent programming.
3.6.4. Peterson’s Algorithm in Process Synchronization
Peterson’s Algorithm is a classic solution to the critical section problem in process synchronization. It ensures
mutual exclusion meaning only one process can access the critical section at a time and avoids race conditions.
The algorithm uses two shared variables to manage the turn-taking mechanism between two processes ensuring
that both processes follow a fair order of execution. It’s simple and effective for solving synchronization issues in
two-process scenarios. In this section we will look at Peterson’s Algorithm, how it works, and practical examples
to help understand its use in process synchronization.
What is Peterson’s Algorithm?
Peterson’s Algorithm is a well-known solution for ensuring mutual exclusion in process synchronization. It is
designed to manage access to shared resources between two processes in a way that prevents conflicts or data
corruption. The algorithm ensures that only one process can enter the critical section at any given time while the
other process waits its turn. Peterson’s Algorithm uses two simple variables one to indicate whose turn it is to
access the critical section and another to show if a process is ready to enter. This method is often used in
scenarios where two processes need to share resources or data without interfering with each other. It is simple,
easy to understand, and serves as a foundational concept in process synchronization.
Algorithm for process Pi:

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);

    // critical section

    flag[i] = false;

    // remainder section
} while (true);

Algorithm for process Pj:

do {
    flag[j] = true;
    turn = i;
    while (flag[i] && turn == i);

    // critical section

    flag[j] = false;

    // remainder section
} while (true);
Peterson’s Algorithm Explanation
Peterson’s Algorithm is a mutual exclusion solution used to ensure that two processes do not enter into the
critical sections at the same time. The algorithm uses two main components: a turn variable and a flag array.
 The turn variable is an integer that indicates whose turn it is to enter the critical section.
 The flag array contains Boolean values for each process, indicating whether a process wants to enter the
critical section.
Here’s how Peterson’s Algorithm works step-by-step:
 Initial Setup: Initially, both processes set their respective flag values to false, meaning neither wants to
enter the critical section. The turn variable is set to the ID of one of the processes (either 0 or 1),
indicating that it’s that process’s turn to enter.
 Intention to Enter: When a process wants to enter the critical section, it sets its flag value
to true signaling its intent to enter.
 Set the Turn: The process then sets the turn variable to the other process’s ID, indicating that the other
process may enter the critical section first.
 Waiting Loop: Both processes enter a loop where they check the flag of the other process and the turn
variable:
o If the other process wants to enter (i.e., flag[1 - processID] == true), and
o It’s the other process’s turn (i.e., turn == 1 - processID), then the process waits, allowing the
other process to enter the critical section.
This loop ensures that only one process can enter the critical section at a time, preventing a race condition.
 Critical Section: Once a process successfully exits the loop, it enters the critical section, where it can
safely access or modify the shared resource without interference from the other process.
 Exiting the Critical Section: After finishing its work in the critical section, the process resets its flag
to false. This signals that it no longer wants to enter the critical section, and the other process can now
have its turn.
By alternating turns and using these checks, Peterson’s algorithm ensures mutual exclusion, meaning only one
process can access the critical section at a time, and both processes get an equal opportunity to do so.
Example of Peterson’s Algorithm
Peterson’s solution is often used as a simple example of mutual exclusion in concurrent programming. Here are
a few scenarios where it can be applied:
 Accessing a shared printer: Peterson’s solution ensures that only one process can access the printer at
a time when two processes are trying to print documents.
 Reading and writing to a shared file: It can be used when two processes need to read from and write to
the same file, preventing concurrent access issues.
 Competing for a shared resource: When two processes are competing for a limited resource, such as a
network connection or critical hardware, Peterson’s solution ensures mutual exclusion to avoid conflicts.
Peterson’s algorithm –
Python
import threading

N = 2  # Number of threads (producer and consumer)

flag = [False] * N  # flag[k] is True when thread k wants to enter its critical section
turn = 0            # Whose turn it is to wait; ties are broken in the other thread's favour

# Function for producer thread
def producer(j):
    global turn
    while True:
        flag[j] = True   # Producer j is ready to produce
        turn = 1 - j     # Give priority to the consumer
        while flag[1 - j] and turn == 1 - j:
            # Busy-wait: the consumer is ready and it's the consumer's turn
            pass

        # Critical Section: Producer produces an item and puts it into the buffer

        flag[j] = False  # Producer is out of the critical section

        # Remainder Section: Additional actions after critical section

# Function for consumer thread
def consumer(i):
    global turn
    while True:
        flag[i] = True   # Consumer i is ready to consume
        turn = 1 - i     # Give priority to the producer
        while flag[1 - i] and turn == 1 - i:
            # Busy-wait: the producer is ready and it's the producer's turn
            pass

        # Critical Section: Consumer consumes an item from the buffer

        flag[i] = False  # Consumer is out of the critical section

        # Remainder Section: Additional actions after critical section

# Create producer and consumer threads
producer_thread = threading.Thread(target=producer, args=(0,))
consumer_thread = threading.Thread(target=consumer, args=(1,))

# Start the threads
producer_thread.start()
consumer_thread.start()

# Wait for the threads to finish (they loop forever in this sketch)
producer_thread.join()
consumer_thread.join()
Explanation of the above code:
This code shows a simple version of Peterson’s Algorithm for synchronizing two processes: a producer and a
consumer. The goal is to make sure that both processes don’t interfere with each other while accessing a shared
resource, which is a buffer in this case. The producer creates items, and the consumer takes them.
Here’s a detailed explanation of the code:
Producer (j)
 Producer is ready to produce:
o flag[j] = true;: This indicates that the producer (process j) is ready to produce an item.
The flag array holds the intentions of both processes (producer and consumer). When the
producer sets its flag to true, it means it’s willing to access the shared resource (the buffer).
 Set the turn for consumer:
o turn = i;: The turn variable is used to decide whose turn it is to enter the critical section (where
the shared resource is accessed). Here, the producer is setting the turn to the consumer
(process i), meaning the producer is willing to wait if the consumer wants to consume an item.
 Wait until the consumer is done:
o while (flag[i] == true && turn == i) : This is the crucial part of the algorithm. The producer checks
if the consumer has indicated that it is ready (flag[i] == true) and whether it’s the consumer’s
turn (turn == i). If both conditions are true the producer must wait (it cannot enter the critical
section). This ensures that the consumer gets a chance to consume before the producer
produces a new item.
 Producer produces an item:
o Once the condition in the while loop is no longer true (meaning either the consumer is not ready
or it is the producer’s turn), the producer can safely enter the critical section, produce an item,
and place it into the buffer.
 Exit the critical section:
o flag[j] = false;: After the producer finishes its work in the critical section, it sets its flag to false,
indicating that it is done and no longer wants to produce. This allows the consumer to have the
opportunity to consume the item next.
Consumer (i)
 Consumer is ready to consume:
o flag[i] = true;: The consumer sets its flag to true, indicating that it is ready to consume an item
from the buffer.
 Set the turn for producer:
o turn = j;: The consumer sets the turn variable to the producer’s process ID (j). This indicates that
it is the producer’s turn to produce if the consumer is done consuming.
 Wait until the producer is done:
o while (flag[j] == true && turn == j) : This is similar to the producer’s waiting condition. The
consumer checks if the producer is ready to produce (flag[j] == true) and whether it’s the
producer’s turn (turn == j). If both conditions are true, the consumer must wait, allowing the
producer to produce an item.
 Consumer consumes an item:
o Once the while loop exits (meaning the consumer is allowed to consume), the consumer enters
the critical section and consumes an item from the buffer.
 Exit the critical section:
o flag[i] = false;: After consuming the item, the consumer sets its flag to false, indicating it no longer
wants to consume. This allows the producer to have the opportunity to produce the next item.
Advantages of Peterson’s Solution
1. With Peterson’s solution, multiple processes can access and share a resource without causing any
resource conflicts.
2. Every process has a chance to be carried out.
3. It uses straightforward logic and is easy to put into practice.
4. Since it is entirely software based and operates in user mode, it can be used with any hardware.
5. It eliminates the chance of a deadlock between the two processes.
Disadvantages of Peterson’s Solution
1. Waiting for the other process to exit the critical region may take a long time; this is called busy waiting.
2. On systems with multiple CPUs, the algorithm may not function as written: modern processors can
reorder the writes to flag and turn, so memory barriers or atomic operations are required (see the sketch below).
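A minimal sketch of Peterson’s entry and exit code for two threads using C11 atomics (stdatomic.h), whose default sequentially consistent ordering prevents the reordering that breaks the naive version on multiprocessors (the function names are illustrative):
C
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool flag[2];  // flag[k]: thread k wants to enter
static atomic_int turn;      // whose turn it is to wait

void peterson_lock(int self)
{
    int other = 1 - self;
    atomic_store(&flag[self], true);  // I want to enter
    atomic_store(&turn, other);       // but you may go first
    // Wait while the other thread wants in and it is its turn.
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;  // busy-wait
}

void peterson_unlock(int self)
{
    atomic_store(&flag[self], false);  // leave the critical section
}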
3.6.5. Lock Variable Synchronization Mechanism
A lock variable provides the simplest synchronization mechanism for processes. Some noteworthy points
regarding Lock Variables are-
1. It’s a software mechanism implemented in user mode, i.e. no support required from the Operating
System.
2. It’s a busy waiting solution (process continuously checks for a condition and hence wastes CPU cycles).
3. It can be used for more than two processes.
Lock = 0 means the critical section is vacant (the initial value), while Lock = 1 means the critical section is occupied.
The pseudocode looks something like this –
// Entry section
while (lock != 0); // busy-wait; note the semicolon
lock = 1;

// Critical section
...............................

// Exit section
lock = 0;
A more formal approach to the Lock Variable method for process synchronization can be seen in the following
code snippet :
C++
#include <mutex>
#include <condition_variable>

const int SIZE = 1024; // buffer capacity (any fixed size)

char buffer[SIZE];
int count = 0,
start = 0,
end = 0;

std::mutex mtx;
std::condition_variable cv;

void put(char c)
{
std::unique_lock<std::mutex> lock(mtx);

while (count == SIZE) {


cv.wait(lock);
}

count++;
buffer[start] = c;
start++;

if (start == SIZE) {
start = 0;
}

cv.notify_all();
}

char get()
{
std::unique_lock<std::mutex> lock(mtx);

while (count == 0) {
cv.wait(lock);
}

count--;
char c = buffer[end];
end++;

if (end == SIZE) {
end = 0;
}

cv.notify_all();

return c;
}
Here we can see a classic bounded-buffer (producer-consumer) implementation. The buffer is the shared memory,
and many processes are either trying to put a character into it or get a character from it. To prevent any ambiguity
of data, we restrict concurrent access with a lock (the mutex), while the condition variable lets processes wait
efficiently when the buffer is full or empty instead of spinning.
Now every Synchronization mechanism is judged on the basis of three primary parameters :
1. Mutual Exclusion.
2. Progress.
3. Bounded Waiting.
Of which mutual exclusion is the most important of all parameters. The Lock Variable doesn’t provide mutual
exclusion in some cases. This fact can be best verified by writing its pseudo-code in the form of an assembly
language code as given below.
1. Load Lock, R0 ; (Store the value of Lock in Register R0.)
2. CMP R0, #0 ; (Compare the value of register R0 with 0.)
3. JNZ Step 1 ; (Jump to step 1 if value of R0 is not 0.)
4. Store #1, Lock ; (Set new value of Lock as 1.)
Enter critical section
5. Store #0, Lock ; (Set the value of lock as 0 again.)
Now let’s suppose that processes P1 and P2 are competing for Critical Section and their sequence of execution
be as follows (initial value of Lock = 0) –
1. P1 executes statement 1 and gets pre-empted.
2. P2 executes statement 1, 2, 3, 4 and enters Critical Section and gets pre-empted.
3. P1 executes statement 2, 3, 4 and also enters Critical Section.
Initially, P1 loads the lock value 0 into its register R0 but is preempted before it can set the lock to 1. When P2
runs, it also sees the lock value as 0, sets it to 1, and enters the critical section. The real problem arises when P1
resumes: it does not re-read the updated value of Lock; it only checks the stale value 0 already stored in R0, and
so it also enters the critical section.
This is only one possible sequence of execution among many. Some interleavings may happen to preserve mutual
exclusion, but we cannot rely on that: according to Murphy’s Law, “anything that can go wrong will go wrong.” So,
like most simple approaches, the Lock Variable method comes with its fair share of demerits, but it is a good starting
point for developing better synchronization algorithms, such as the atomic test-and-set sketched below.
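The root cause above is that the load-test-store sequence on the lock is not atomic. A minimal sketch of the standard hardware fix, assuming C11’s atomic_flag, whose test-and-set executes as a single indivisible instruction:
C
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;  // clear = unlocked

void acquire(void)
{
    // Atomically set the flag and return its previous value;
    // spin until the previous value was 'clear' (unlocked).
    // The load and the store can no longer be separated by a preemption.
    while (atomic_flag_test_and_set(&lock))
        ;  // busy-wait
}

void release(void)
{
    atomic_flag_clear(&lock);  // unlock
}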
3.6.6. Semaphores in Process Synchronization
Semaphores are a tool used in operating systems to help manage how different processes (or programs) share
resources, like memory or data, without causing conflicts. A semaphore is a special kind of synchronization data
that can be used only through specific synchronization primitives. Semaphores are used to implement critical
sections, which are regions of code that must be executed by only one process at a time. By using semaphores,
processes can coordinate access to shared resources, such as shared memory or I/O devices.
What is a Semaphore?
A semaphore is a synchronization tool used in concurrent programming to manage access to shared resources.
It is a lock-based mechanism designed to achieve process synchronization, built on top of basic locking
techniques.
Semaphores use a counter to control access, allowing synchronization over multiple instances of a resource:
processes can attempt to access one instance and, if it is not available, try another. Unlike basic locks, which admit
only one process to a single instance of a resource, semaphores can handle more complex synchronization
scenarios involving multiple processes or threads. They help prevent problems like race conditions by controlling
when and how processes access shared data.
Using a semaphore involves two operations:
 wait (P): The wait operation decrements the value of the semaphore
 signal (V): The signal operation increments the value of the semaphore.
When the value of the semaphore is zero, any process that performs a wait operation will be blocked until
another process performs a signal operation.
When a process performs a wait operation on a semaphore, the operation checks whether the value of the
semaphore is >0. If so, it decrements the value of the semaphore and lets the process continue its execution;
otherwise, it blocks the process on the semaphore. A signal operation on a semaphore activates a process
blocked on the semaphore if any, or increments the value of the semaphore by 1. Due to these semantics,
semaphores are also called counting semaphores. The initial value of a semaphore determines how many
processes can get past the wait operation.
Semaphores are required for process synchronization to make sure that multiple processes can safely share
resources without interfering with each other. They help control when a process can access a shared resource,
preventing issues like race conditions.
Types of Semaphores
Semaphores are of two Types:
 Binary Semaphore: This is also known as a mutex lock, as they are locks that provide mutual exclusion.
It can have only two values – 0 and 1. Its value is initialized to 1. It is used to implement the solution of
critical section problems with multiple processes and a single resource.
 Counting Semaphore: Counting semaphores can be used to control access to a given resource consisting
of a finite number of instances. The semaphore is initialized to the number of resources available. Its
value can range over an unrestricted domain.
Working of Semaphore
A semaphore is a simple yet powerful synchronization tool used to manage access to shared resources in a system
with multiple processes. It works by maintaining a counter that controls access to a specific resource, ensuring
that no more than the allowed number of processes access the resource at the same time.
There are two primary operations that a semaphore can perform:
1. Wait (P operation): This operation checks the semaphore’s value. If the value is greater than 0, the
process is allowed to continue, and the semaphore’s value is decremented by 1. If the value is 0, the
process is blocked (waits) until the semaphore value becomes greater than 0.
2. Signal (V operation): After a process is done using the shared resource, it performs the signal operation.
This increments the semaphore’s value by 1, potentially unblocking other waiting processes and allowing
them to access the resource.
Now let us see how it does so. First, look at the two operations that are used to access and change the value of
the semaphore variable: wait (P) and signal (V), as defined above. A critical section is surrounded by both
operations: wait is executed before entering the critical section, and signal after leaving it. This is the basic
mechanism by which semaphores control access to a critical section in a multi-process environment, ensuring
that only one process accesses the shared resource at a time.
Now, let us see how it implements mutual exclusion. Let there be two processes P1 and P2, and a semaphore s
initialized to 1. If P1 enters its critical section, the value of semaphore s becomes 0. If P2 then wants to enter its
critical section, it must wait until s > 0, which can only happen when P1 finishes its critical section and calls the V
operation on semaphore s. This way mutual exclusion is achieved; this is the behaviour of a binary semaphore.
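A minimal sketch of this two-process pattern using POSIX semaphores, shown with two threads for simplicity (sem_wait is the P operation and sem_post is the V operation; the shared variable is illustrative):
C
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t s;         // binary semaphore, initialized to 1
static int shared = 0;  // the shared resource

static void *process(void *arg)
{
    sem_wait(&s);   // P: s becomes 0, others must wait
    shared++;       // critical section
    sem_post(&s);   // V: s becomes 1, a waiter may proceed
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    sem_init(&s, 0, 1);  // 0 = shared between threads, initial value 1
    pthread_create(&p1, NULL, process, NULL);
    pthread_create(&p2, NULL, process, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("shared = %d\n", shared);  // always 2
    sem_destroy(&s);
    return 0;
}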
Implementation – Binary Semaphores
C++
struct semaphore {

    int value; // 0 or 1 for a binary semaphore

    // q contains all Process Control Blocks (PCBs)
    // corresponding to processes that got blocked
    // while performing the down (P) operation.
    Queue<process> q;
};

P(semaphore s)
{
    if (s.value == 1) {
        s.value = 0;
    }
    else {
        // add the calling process to the waiting queue and block it
        s.q.push(currentProcess);
        sleep();
    }
}

V(semaphore s)
{
    if (s.q.empty()) {
        s.value = 1;
    }
    else {
        // select a process from the waiting queue
        Process p = s.q.front();
        // remove it from the queue, as it has been
        // sent on to its critical section
        s.q.pop();
        wakeup(p);
    }
}
The description above is for binary semaphore which can take only two values 0 and 1 and ensure mutual
exclusion. There is one other type of semaphore called counting semaphore which can take values greater than
one.
Now suppose there is a resource with 4 instances. We initialize S = 4, and the rest is the same as for the binary
semaphore. Whenever a process wants the resource, it calls the P (wait) operation, and when it is done it calls
the V (signal) operation. If the value of S becomes zero, a process has to wait until S becomes positive. For
example, suppose there are 4 processes P1, P2, P3, and P4, and they all call the wait operation on S (initialized
to 4). If another process P5 then wants the resource, it must wait until one of the four processes calls the signal
operation and the value of the semaphore becomes positive.
Limitations
 One of the biggest limitations of semaphore is priority inversions.
 Deadlock: if wait and signal operations are used in the wrong order, or a wakeup is sent to a process that
is not yet asleep, the signal can be lost and processes may remain blocked indefinitely.
 The operating system has to keep track of all calls to wait and signal the semaphore.
The main problem with simple semaphore implementations is busy waiting: if a process is in the critical section,
other processes trying to enter it keep looping, continuously re-checking the semaphore value (a loop such as
while (s == 0); in the P operation) and wasting CPU cycles.
There is also a chance of a “spinlock”, as the processes keep spinning while waiting for the lock. To avoid this,
the blocking implementation below uses a waiting queue instead.
Implementation – Counting semaphore
C++
struct Semaphore {

    int value;

    // q contains all Process Control Blocks (PCBs)
    // corresponding to processes that got blocked
    // while performing the down (P) operation.
    Queue<process> q;
};

P(Semaphore s)
{
    s.value = s.value - 1;
    if (s.value < 0) {
        // add the calling process p to the waiting queue and block it
        s.q.push(p);
        block();
    }
    else
        return;
}

V(Semaphore s)
{
    s.value = s.value + 1;
    if (s.value <= 0) {
        // remove a process p from the waiting queue and wake it up
        Process p = s.q.front();
        s.q.pop();
        wakeup(p);
    }
    else
        return;
}
In this implementation whenever the process waits it is added to a waiting queue of processes associated with
that semaphore. This is done through the system call block() on that process. When a process is completed it
calls the signal function and one process in the queue is resumed. It uses the wakeup() system call.
Uses of Semaphores
 Mutual Exclusion : Semaphore ensures that only one process accesses a shared resource at a time.
 Process Synchronization : Semaphore coordinates the execution order of multiple processes.
 Resource Management : Limits access to a finite set of resources, like printers, devices, etc.
 Reader-Writer Problem : Allows multiple readers but restricts the writers until no reader is present.
 Avoiding Deadlocks : Prevents deadlocks by controlling the order of allocation of resources.
Advantages of Semaphores
 Semaphore is a simple and effective mechanism for process synchronization
 Supports coordination between multiple processes. By controlling the access to critical sections,
semaphores help in managing multiple processes without them interfering with each other.
 When used correctly, semaphores can help avoid deadlocks by managing access to resources efficiently
and ensuring that no process is indefinitely blocked from accessing necessary resources.
 Semaphores help prevent race conditions by ensuring that only one process can access a shared resource
at a time.
 Provides a flexible and robust way to manage shared resources.
Disadvantages of Semaphores
 It can lead to performance degradation due to the overhead associated with wait and signal operations.
 If semaphores are not managed carefully, they can lead to deadlock. This often occurs when semaphores
are not released properly or when processes acquire semaphores in an inconsistent order.
 It can cause performance issues in a program if not used properly.
 It can be difficult to debug and maintain. Debugging systems that rely heavily on semaphores can be
challenging, as it is hard to track the state of each semaphore and ensure that all processes are correctly
synchronized
 It can be prone to race conditions and other synchronization problems if not used correctly.
 It can be vulnerable to certain types of attacks, such as denial of service attacks.

Classical Synchronization Problems Using the Semaphore Concept
These are the classical synchronization problems solved using the concept of semaphores.
1. Producer-Consumer Problem
The producer-consumer problem involves two types of processes: producers, which generate data, and consumers,
which process that data. The Producer-Consumer Problem is like a restaurant where the chef (producer) makes food
and the customer (consumer) eats it. The counter (buffer: A fixed-size queue where producers place items and
consumers remove items.) holds food temporarily. A special lock (semaphore) ensures the chef doesn’t overflow
the counter and the customer doesn’t take food when it’s not available. In This way, everything runs smoothly
and efficiently and gives faster results.
Semaphores Used
 empty: Counts the number of empty slots in the buffer
 full: Counts the number of full slots in the buffer
 mutex: Locks access to the buffer (mutual exclusion)
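A minimal sketch that puts these three semaphores together over a fixed-size circular buffer (BUF_SIZE and the integer items are illustrative):
C
#include <semaphore.h>
#include <pthread.h>

#define BUF_SIZE 8

static int buffer[BUF_SIZE];
static int in = 0, out = 0;

static sem_t empty_slots;  // counts empty slots, initialized to BUF_SIZE
static sem_t full_slots;   // counts full slots, initialized to 0
static sem_t mutex;        // binary semaphore guarding the buffer, initialized to 1

static void *producer(void *arg)
{
    for (int item = 0; ; item++) {
        sem_wait(&empty_slots);   // wait for a free slot
        sem_wait(&mutex);         // lock the buffer
        buffer[in] = item;
        in = (in + 1) % BUF_SIZE;
        sem_post(&mutex);         // unlock the buffer
        sem_post(&full_slots);    // announce a new item
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (;;) {
        sem_wait(&full_slots);    // wait for an item
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUF_SIZE;
        sem_post(&mutex);
        sem_post(&empty_slots);   // announce a free slot
        (void)item;               // ... process the item ...
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty_slots, 0, BUF_SIZE);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);        // never returns in this sketch
    pthread_join(c, NULL);
    return 0;
}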
2. Traffic Light Control
Description: Traffic lights at an intersection can be managed using semaphores to control the flow of traffic and
ensure safe crossing.
Example:
Traffic Lights: Represented by semaphores that control the green, yellow, and red lights for different directions.
Semaphores Used: Each light direction is controlled by a semaphore that manages the timing and transitions
between light states.
Implementation
Light Controller: Uses semaphores to cycle through green, yellow, and red lights. The controller ensures that only
one direction has the green light at a time and manages transitions to avoid conflicts.
3. Bank Transaction Processing
 Multiple Transactions: Processes that need to access shared resources (accounts)
 Account Balance: Shared resource represented by a semaphore
 Semaphore value: 1 (only one transaction can access and modify an account at a time is crucial for
maintaining data integrity).
Process
 Acquire Semaphore: A transaction must need to acquire the semaphore first before modifying an
account balance.
 Perform the Operation: then operations such as debiting or crediting the account are performed by the transaction.
 Release Semaphore: After Completing the transactions the semaphore is released to allow other
transactions to proceed.
4. Print Queue Management
 Multiple Print Jobs: Processes that need to access shared resources (printers)
 Printer: Shared resource represented by a semaphore
 Semaphore Value: 1 (only one print job can access the printer at a time)
Process
 Acquire Semaphore: First, acquire a semaphore for the printer to begin the print job.
 Print Job Execution: The printer processes the print job.
 Release Semaphore: Now the semaphore is released after the job is done.
5. Railway Track Management
 Multiple Trains: Processes that need to access shared resources (tracks)
 Track: Shared resource represented by a semaphore
 Semaphore Value: 1 (only one train can access the track at a time)
Process
 Acquire Semaphore: A train acquires the semaphore before entering a track.
 Travel Across Track: train Travels across the track.
 Release Semaphore: The semaphores are released after the train passes the track.
6. Dining Philosopher’s Problem
 Multiple Philosophers: Process that needs to access shared resources(forks).
 Forks: Shared resources, each represented by a semaphore. Each philosopher needs two forks (left and right)
to eat.
 Semaphore Value: 1 per fork (only one philosopher can hold a given fork at a time; the order of acquisition
must be designed to prevent deadlock and starvation).
Process
 Acquire Forks: Philosophers acquire the semaphore for both the left and right forks before eating.
 Eat: when both forks have been acquired, the philosopher eats.
 Release Forks: After eating, both the forks are released for others to use them.
Solution – Dining Philosopher Problem Using Semaphores
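A minimal deadlock-free sketch with one semaphore per fork, using the classic asymmetry of always picking up the lower-numbered fork first so that a circular wait cannot form (N and the thread setup are illustrative):
C
#include <semaphore.h>
#include <pthread.h>

#define N 5  /* number of philosophers */

static sem_t fork_sem[N];  /* one binary semaphore per fork */

static void *philosopher(void *arg)
{
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;
    /* Always pick up the lower-numbered fork first; this breaks the
       circular wait, so deadlock cannot occur. */
    int first  = left < right ? left : right;
    int second = left < right ? right : left;

    for (;;) {
        /* think */
        sem_wait(&fork_sem[first]);   /* take the lower-numbered fork  */
        sem_wait(&fork_sem[second]);  /* then the higher-numbered fork */
        /* eat */
        sem_post(&fork_sem[second]);  /* put both forks back           */
        sem_post(&fork_sem[first]);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int k = 0; k < N; k++)
        sem_init(&fork_sem[k], 0, 1);  /* every fork starts free */
    for (int k = 0; k < N; k++) {
        id[k] = k;
        pthread_create(&t[k], NULL, philosopher, &id[k]);
    }
    for (int k = 0; k < N; k++)
        pthread_join(t[k], NULL);      /* the philosophers loop forever */
    return 0;
}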
7. Reader-Writer Problem
In Reader-Writer problem Synchronizing access to shared data where multiple readers can read the data
simultaneously, but writers need exclusive access to modify it. In simple terms imagine a library where multiple
readers and writers come and all readers want to read a book, and some people(writers) want to update or edit
the book. That’s why we need a system to ensure that these actions are done smoothly without errors or
conflicts.
 Multiple Readers and Writers: Processes that need to access a shared resource(data).
 Data: Shared resources represented by a Semaphore.
 Semaphore Value for Readers: processes that read the shared data can access it simultaneously (value > 1,
so more than one reader can be active at the same time).
 Semaphore Value for Writers: processes that modify the shared data need exclusive access (value = 1, one
writer at a time).
Solution –
 Readers Preference Solution
 Writers Preference Solution
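A minimal sketch of the readers-preference variant with POSIX semaphores, both initialized to 1 (the reader and writer bodies are illustrative):
C
#include <semaphore.h>

static sem_t mutex;          /* protects read_count; initialized to 1        */
static sem_t wrt;            /* exclusive access for writers; initialized to 1 */
static int read_count = 0;   /* number of readers currently reading          */

void init(void)
{
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
}

void reader(void)
{
    sem_wait(&mutex);
    read_count++;
    if (read_count == 1)     /* the first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    /* ... read the shared data (many readers may be here at once) ... */

    sem_wait(&mutex);
    read_count--;
    if (read_count == 0)     /* the last reader lets writers in again */
        sem_post(&wrt);
    sem_post(&mutex);
}

void writer(void)
{
    sem_wait(&wrt);          /* writers get exclusive access */
    /* ... modify the shared data ... */
    sem_post(&wrt);
}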
3.6.7. Classical IPC Problems
Inter-Process Communication (IPC) is necessary for processes to communicate and share data. While basic
communication between processes may sound simple, certain situations can cause issues that need specific
solutions. These situations are known as Classical IPC Problems, which involve managing synchronization,
avoiding deadlock, and ensuring that resources are accessed in a controlled manner.
Some of the well-known Inter Process Communication problems are:
1. Producer Consumer Problem
2. Readers-Writers Problem
3. Dining Philosophers Problem
Producer Consumer Problem
The Producer-Consumer Problem involves two types of processes: the Producer, which creates data, and the
Consumer, which processes that data. The challenge is ensuring that the Producer doesn't overfill the buffer, and
the Consumer doesn't try to consume data from an empty buffer.
Key Problems in the Producer-Consumer Problem:
1. Buffer Overflow: If the producer tries to add data when the buffer is full, there will be no space for new
data, causing the producer to be blocked.
2. Buffer Underflow: If the consumer tries to consume data when the buffer is empty, it has nothing to
consume, causing the consumer to be blocked.
Solution to Producer Consumer Problem
This problem can be solved using synchronization techniques such as semaphores or mutexes to control access
to the shared buffer and ensure proper synchronization between the Producer and Consumer.
Producer-Consumer Problem - Solution (using Semaphores)
Reader-Writer Problem
The Reader-Writer Problem involves multiple processes that need to read from and write to shared data. Here,
we have two types of processes: Readers, which only read the data, and Writers, which can modify the data.
 Readers: These processes only read data from the shared resource and do not modify it.
 Writers: These processes modify or write data to the shared resource.
The challenge in the Reader-Writer problem is to allow multiple readers to access the shared data simultaneously
without causing issues. However, only one writer should be allowed to write at a time, and no reader should be
allowed to read while a writer is writing. This ensures the integrity and consistency of the data.
Solution to Reader-Writer Problem
There are two fundamental solutions to the Readers-Writers problem:
 Readers Preference: In this solution, readers are given preference over writers. That means that till
readers are reading, writers will have to wait. The Writers can access the resource only when no reader
is accessing it.
 Writer’s Preference: Preference is given to the writers. It simply means that, after arrival, the writers can
go ahead with their operations, though perhaps there are readers currently accessing the resource.
Readers-Writers Problem - Solution (Readers Preference Solution)
Readers-Writers Problem - Solution (Writers Preference Solution)
Dining Philosophers Problem
The Dining Philosopher Problem is a classic synchronization and concurrency problem that deals with resource
sharing, deadlock, and starvation in systems where multiple processes require limited resources. In this section,
we discuss the Dining Philosopher Problem in detail along with a proper implementation.
The Dining Philosopher Problem involves ‘n’ philosophers sitting around a circular table. Each philosopher
alternates between two states: thinking and eating. To eat, a philosopher needs two chopsticks, one on their left
and one on their right. However, the number of chopsticks is equal to the number of philosophers, and each
chopstick is shared between two neighboring philosophers.
The standard problem considers the value of ‘n’ as 5 i.e. we deal with 5 Philosophers sitting around a circular
table.
Solution to Dining Philosophers Problem
Solutions to this problem typically involve using synchronization techniques like semaphores or mutexes to
ensure that the philosophers don’t deadlock. One common approach is to use a monitor to coordinate access to
the forks, allowing each philosopher to pick up and put down forks in a way that prevents deadlock.
Dining Philosophers Problem - Solution (using Semaphores)
Dining-Philosophers Problem - Solution (Using Monitors)
Sleeping Barber Problem
In the Sleeping Barber Problem, there is a barber shop with one barber and several chairs for customers. The
barber sleeps while there are no customers. When a customer arrives, they sit in an empty chair. If all chairs are
taken, the customer leaves. If the barber is sleeping, the customer wakes him up; if the barber is busy, the
customer waits.
The main challenge is to avoid deadlock (where no customer can get a haircut) and to ensure fairness, so that
customers don’t starve while waiting for service.
Solution to Sleeping Barber Problem
This problem can be solved using synchronization mechanisms like semaphores. A semaphore can be used to
manage the availability of the barber and the chairs. Another semaphore is used to manage the customers
waiting in line. The barber uses a semaphore to control whether he is sleeping or awake, and customers use
another semaphore to wait for their turn.
Sleeping Barber Problem - Solution (using Semaphores)
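A minimal sketch with POSIX semaphores and a mutex, following the classic formulation: customers counts waiting customers (the barber sleeps on it), barber_ready signals that the barber is free, and waiting is protected by the mutex (CHAIRS and the hair-cutting bodies are illustrative):
C
#include <semaphore.h>
#include <pthread.h>

#define CHAIRS 5  /* waiting-room chairs */

static sem_t customers;     /* waiting customers; the barber sleeps on this (init 0) */
static sem_t barber_ready;  /* the barber signals readiness on this (init 0)         */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int waiting = 0;     /* customers currently in the waiting chairs             */

void init(void)
{
    sem_init(&customers, 0, 0);
    sem_init(&barber_ready, 0, 0);
}

void *barber(void *arg)
{
    for (;;) {
        sem_wait(&customers);       /* sleep until a customer arrives        */
        pthread_mutex_lock(&m);
        waiting--;                  /* take one customer from the chairs     */
        pthread_mutex_unlock(&m);
        sem_post(&barber_ready);    /* invite the customer to the barber chair */
        /* ... cut hair ... */
    }
    return NULL;
}

void customer(void)
{
    pthread_mutex_lock(&m);
    if (waiting < CHAIRS) {
        waiting++;
        sem_post(&customers);       /* wake the barber if he is asleep   */
        pthread_mutex_unlock(&m);
        sem_wait(&barber_ready);    /* wait until the barber is ready    */
        /* ... get a haircut ... */
    } else {
        pthread_mutex_unlock(&m);   /* shop full: leave without a haircut */
    }
}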
3.6.8. Communication Between Two Processes Using Signals in C
Prerequisite: C signal handling. In this section, communication between parent and child processes is done
using the kill(), signal(), and fork() system calls.
 fork() creates the child process from the parent. The returned pid tells each process which one it is:
pid == 0 in the child, and pid > 0 in the parent (where pid holds the child’s process id).
 The parent can then send signals to the child using the pid and kill().
 The child picks up these signals with signal() and calls the appropriate handler functions.
Example of how 2 processes can talk to each other using kill() and signal():
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

// function declarations (signal handlers receive the signal number)
void sighup(int sig);
void sigint(int sig);
void sigquit(int sig);

int main()
{
int pid;

/* get child process */


if ((pid = fork()) < 0) {
perror("fork");
exit(1);
}

/* child */
if (pid == 0) {
signal(SIGHUP, sighup);
signal(SIGINT, sigint);
signal(SIGQUIT, sigquit);
for (;;)
    ; /* loop forever */
}

/* parent */
else

/* pid holds id of child */


{
write(STDOUT_FILENO, "\nPARENT: sending SIGHUP\n\n", 25);
kill(pid, SIGHUP);

/* pause for 3 secs */


sleep(3);
write(STDOUT_FILENO, "\nPARENT: sending SIGINT\n\n", 25);
kill(pid, SIGINT);

/* pause for 3 secs */


sleep(3);
write(STDOUT_FILENO, "\nPARENT: sending SIGQUIT\n\n", 26);
kill(pid, SIGQUIT);
sleep(3);
}
}

// sighup() function definition


void sighup(int sig)
{

/* reset signal */
signal(SIGHUP, sighup);
write(STDOUT_FILENO, "CHILD: I have received a SIGHUP\n", 32);
}

// sigint() function definition


void sigint(int sig)
{

/* reset signal */
signal(SIGINT, sigint);
write(STDOUT_FILENO, "CHILD: I have received a SIGINT\n", 32);
}

// sigquit() function definition


void sigquit(int sig)
{
write(STDOUT_FILENO, "My DADDY has Killed me!!!\n", 26);
exit(0);
}
Output: the parent’s three “PARENT: sending ...” messages appear about three seconds apart, each followed by the child’s matching “CHILD: I have received a ...” message; on SIGQUIT the child prints “My DADDY has Killed me!!!” and exits.
3.6.9. Mutex vs Semaphore
In the Operating System, Mutex and Semaphores are kernel resources that provide synchronization services
(also known as synchronization primitives). Synchronization is required when multiple processes are executing
concurrently, to avoid conflicts between processes using shared resources. In this article we will see differences
between Mutex and Semaphore, their advantages and disadvantages.
What is a Mutex?
A mutex (Mutual Exclusion Object) is a locking mechanism, which makes it different from a binary semaphore. A
mutex is mainly used to provide mutual exclusion for a specific portion of code, so that only one process can
execute that section at a particular time. A mutex enforces strict ownership: only the thread that locks the mutex
can unlock it. It is specifically used for locking a resource to ensure that only one thread accesses it at a time.
Because of this strict ownership, a mutex is not typically used for signaling between threads; it is used purely for
mutual exclusion.
Mutex implementations often use a priority inheritance mechanism to counter priority inversion issues. Priority
inheritance keeps higher-priority processes in the blocked state for the minimum possible time; it cannot
eliminate the priority inversion problem entirely, but it reduces its effect to an extent.
Advantages of Mutex
 No race condition arises, as only one process is in the critical section at a time.
 Data remains consistent and it helps in maintaining integrity.
 It is a simple locking mechanism: the lock is acquired on entering a critical section and released on
leaving it.
Disadvantages of Mutex
 If after entering into the critical section, the thread sleeps or gets preempted by a high-priority process,
no other thread can enter into the critical section. This can lead to starvation.
 When the previous thread leaves the critical section, then only other processes can enter into it, there is
no other mechanism to lock or unlock the critical section.
 Implementation of mutex can lead to busy waiting, which leads to the wastage of the CPU cycle.
Using Mutex
The producer-consumer problem: Consider the standard producer-consumer problem. Assume, we have a
buffer of 4096-byte length. A producer thread collects the data and writes it to the buffer. A consumer thread
processes the collected data from the buffer. The objective is, that both the threads should not run at the same
time.
Solution: A mutex provides mutual exclusion; either the producer or the consumer can hold the key (the mutex)
and proceed with its work. While the producer is filling the buffer, the consumer must wait, and vice versa. At
any point in time, only one thread works with the entire buffer. The concept can be generalized using a
semaphore.
What is a Semaphore?
A semaphore is a non-negative integer variable that is shared between various threads. Semaphore works upon
signaling mechanism, in this a thread can be signaled by another thread. It provides a less restrictive control
mechanism. Any thread can invoke signal() (also known as release() or up()), and any other thread can
invoke wait() (also known as acquire() or down()). There is no strict ownership in semaphores, meaning the
thread that signals doesn’t necessarily have to be the same one that waited. Semaphores are often used for
coordinating signaling between threads. Semaphore uses two atomic operations for process synchronisation:
 Wait (P)
 Signal (V)
Advantages of Semaphore
 With a counting semaphore, multiple processes can be in their critical sections at the same time, up to
the semaphore’s initial value; a binary semaphore admits only one at a time.
 Semaphores are machine-independent, as they are implemented in the machine-independent code of
the kernel.
 Flexible and robust management of shared resources.
Disadvantages of Semaphore
 It has priority inversion.
 Semaphore operations (Wait, Signal) must be implemented in the correct manner to avoid deadlock.
 It leads to a loss of modularity, so semaphores can’t be used for large-scale systems.
 Semaphore is prone to programming error and this can lead to deadlock or violation of mutual exclusion
property.
 Operating System has to track all the calls to wait and signal operations.
Using Semaphore
The producer-consumer problem: Consider the standard producer-consumer problem. Assume, we have a
buffer of 4096-byte length. A producer thread collects the data and writes it to the buffer. A consumer thread
processes the collected data from the buffer. The objective is, both the threads should not run at the same time.
Solution: A semaphore is a generalized mutex. Instead of a single buffer, we can split the 4 KB buffer into four 1 KB
buffers (identical resources). A semaphore can be associated with these four buffers, so the consumer and producer
can work on different buffers at the same time.
Difference Between Mutex and Semaphore
 A mutex is an object, while a semaphore is an integer.
 A mutex works upon a locking mechanism, while a semaphore uses a signaling mechanism.
 Operations on a mutex are lock and unlock; operations on a semaphore are wait and signal.
 A mutex does not have any subtypes, while semaphores are of two types: counting semaphores and
binary semaphores.
 A mutex can only be modified by the process that is requesting or releasing the resource, while a
semaphore is modified through the two atomic operations (wait, signal).
 If a mutex is locked, the process must wait in the process queue, and the mutex can be accessed only
once the lock is released; with a semaphore, if no resource instance is free, the process must perform a
wait operation until the semaphore value is greater than zero.
Misconception of Mutex and Semaphore
There is an ambiguity between binary semaphores and mutexes. We might have come across the claim that a
mutex is a binary semaphore, but it is not! The purposes of a mutex and a semaphore are different; perhaps, due
to the similarity in their implementations, a mutex is sometimes referred to as a binary semaphore. A mutex is a
locking mechanism used to synchronize access to a resource. Only one task (a thread or process, depending on
the OS abstraction) can acquire the mutex: there is ownership associated with a mutex, and only the owner can
release the lock (mutex).
A semaphore is a signaling mechanism used to control access to shared resources in an operating system. For
example, imagine you are downloading a large file on your computer (Task A) while simultaneously trying to print
a document (Task B). When the print job is initiated, it triggers a semaphore that checks if the download is
complete.
If the download is still in progress, the semaphore prevents the print task from proceeding, effectively saying,
“Wait until the download finishes.” Once the download completes, the semaphore signals that Task B can start
printing. This ensures that both tasks do not interfere with each other and helps manage system resources
efficiently, allowing tasks to run smoothly without conflict.
3.6.10. Monitors in Process Synchronization
Monitors are a higher-level synchronization construct that simplifies process synchronization by providing a high-
level abstraction for data access and synchronization. Monitors are implemented as programming language
constructs, typically in object-oriented languages, and provide mutual exclusion, condition variables, and data
encapsulation in a single construct.
1. A monitor is essentially a module that encapsulates a shared resource and provides access to that
resource through a set of procedures. The procedures provided by a monitor ensure that only one
process can access the shared resource at any given time, and that processes waiting for the resource
are suspended until it becomes available.
2. Monitors are used to simplify the implementation of concurrent programs by providing a higher-level
abstraction that hides the details of synchronization. Monitors provide a structured way of sharing data
and synchronization information, and eliminate the need for complex synchronization primitives such as
semaphores and locks.
3. The key advantage of using monitors for process synchronization is that they provide a simple, high-level
abstraction that can be used to implement complex concurrent systems. Monitors also ensure that
synchronization is encapsulated within the module, making it easier to reason about the correctness of
the system.
However, monitors have some limitations. For example, they can be less efficient than lower-level
synchronization primitives such as semaphores and locks, as they may involve additional overhead due to their
higher-level abstraction. Additionally, monitors may not be suitable for all types of synchronization problems,
and in some cases, lower-level primitives may be required for optimal performance.
The monitor is one of the ways to achieve process synchronization. Monitors are supported by programming
languages to achieve mutual exclusion between processes; for example, Java’s synchronized methods, together
with its wait() and notify() constructs.
1. It is the collection of condition variables and procedures combined together in a special kind of module
or a package.
2. The processes running outside the monitor can’t access the internal variable of the monitor but can call
procedures of the monitor.
3. Only one process at a time can execute code inside monitors.
Syntax: a typical monitor declaration is sketched below.
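(A pseudocode sketch in the common textbook form; the procedure names and parameters are placeholders.)

monitor monitor_name
{
    // shared variable declarations
    condition x, y;

    procedure P1(...) { ... }
    procedure P2(...) { ... }

    initialization_code(...) { ... }
}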
Condition Variables: Two different operations are performed on the condition variables of the monitor.
Wait.
Signal.
Say we have two condition variables:
condition x, y; // declaring the condition variables
Wait operation, x.wait(): a process performing a wait operation on a condition variable is suspended and placed
in the block queue of that condition variable. Note: each condition variable has its own block queue.
Signal operation, x.signal(): when a process performs a signal operation on a condition variable, one of the
blocked processes is given a chance to resume:
if (x's block queue is empty)
    // ignore the signal
else
    // resume a process from the block queue
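Monitors are language constructs, but the same discipline can be approximated by hand. A minimal sketch in C, using a mutex as the implicit monitor lock and a condition variable playing the role of condition x (the one-slot buffer and the procedure names are illustrative):
C
#include <pthread.h>

/* "Monitor" state: the mutex acts as the implicit monitor lock and
   the condition variable acts as 'condition x'. */
static pthread_mutex_t mon_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t x = PTHREAD_COND_INITIALIZER;
static int item_available = 0;

void deposit(void)                        /* monitor procedure */
{
    pthread_mutex_lock(&mon_lock);        /* enter the monitor */
    item_available = 1;
    pthread_cond_signal(&x);              /* x.signal(): wake one blocked process */
    pthread_mutex_unlock(&mon_lock);      /* leave the monitor */
}

void withdraw(void)                       /* monitor procedure */
{
    pthread_mutex_lock(&mon_lock);        /* enter the monitor */
    while (!item_available)
        pthread_cond_wait(&x, &mon_lock); /* x.wait(): suspend in x's queue */
    item_available = 0;
    pthread_mutex_unlock(&mon_lock);      /* leave the monitor */
}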
Advantages of Monitor: Monitors make parallel programming easier and less error-prone than lower-level
techniques such as semaphores.
Disadvantages of Monitor: Monitors have to be implemented as part of the programming language; the compiler
must generate code for them. This gives the compiler the additional burden of having to know what operating
system facilities are available to control access to critical sections in concurrent processes. Some languages that
do support monitors are Java, C#, Visual Basic, Ada, and Concurrent Euclid.
Reading Assignment
3.6.11. Dekker’s algorithm
3.6.12. Bakery Algorithm
3.6.13. Mutex lock for Linux Thread Synchronization
3.6.14. Priority Inversion
3.7. Multithreading
3.7.1. Thread
3.7.2. Threads and its types
3.7.3. User Level Thread vs Kernel Level Thread
3.7.4. Process-based and Thread-based Multitasking
3.7.5. Multi-threading models
3.7.6. Benefits of Multithreading
3.7.7. Remote Procedure Call (RPC)
3.7.1. Thread
A thread is a single sequence stream within a process. Threads are also called lightweight processes as they
possess some of the properties of processes. Each thread belongs to exactly one process.
 In an operating system that supports multithreading, a process can consist of many threads. But threads
run truly in parallel only when there is more than one CPU; on a single CPU, the threads must share it
through context switching.
 All threads belonging to the same process share – code section, data section, and OS resources (e.g. open
files and signals)
 But each thread has its own (thread control block) – thread ID, program counter, register set, and a stack
 Any operating system process can execute threads; in other words, a single process can have multiple
threads.
Why Do We Need Thread?
 Threads run in a concurrent manner, which improves application performance. Each thread has its own
CPU state and stack, but all threads share the address space of the process and its environment. For
example, when we work in Microsoft Word or Google Docs, we notice that while we are typing, multiple
things happen together (formatting is applied, the page is changed, and auto-save happens).
 Threads can share common data so they do not need to use inter-process communication. Like the
processes, threads also have states like ready, executing, blocked, etc.
 Priority can be assigned to the threads just like the process, and the highest priority thread is scheduled
first.
 Each thread has its own Thread Control Block (TCB). Like the process, a context switch occurs for the
thread, and register contents are saved in (TCB). As threads share the same address space and resources,
synchronization is also required for the various activities of the thread.
Components of Threads
These are the basic components of the Operating System.
 Stack Space: Stores local variables, function calls, and return addresses specific to the thread.
 Register Set: Hold temporary data and intermediate results for the thread’s execution.
 Program Counter: Tracks the current instruction being executed by the thread.
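A minimal sketch of these components with POSIX threads: the global variable lives in the data section shared by every thread of the process, while each thread’s local variable lives on that thread’s own stack (the names are illustrative):
C
#include <pthread.h>
#include <stdio.h>

static int shared = 0;  // data section: shared by every thread in the process

static void *run(void *arg)
{
    int local = *(int *)arg;  // lives on this thread's private stack
    shared++;                 // safe here only because the threads below run one at a time
    printf("thread %d: local=%d shared=%d\n", local, local, shared);
    return NULL;
}

int main(void)
{
    pthread_t t;
    int id = 1;
    pthread_create(&t, NULL, run, &id);  // new thread: own stack, program counter, registers
    pthread_join(t, NULL);               // wait before starting the next thread
    id = 2;
    pthread_create(&t, NULL, run, &id);
    pthread_join(t, NULL);
    return 0;
}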
Types of Thread in Operating System
Threads are of two types. These are described below.
 User Level Thread
 Kernel Level Thread
1. User Level Thread
A user-level thread is a type of thread that is not created using system calls; the kernel plays no part in the
management of user-level threads. They are implemented entirely by a user-space threading library, so the
kernel sees only the containing process. Let’s look at the advantages and disadvantages of user-level threads.
Advantages of User-Level Threads
 Implementation of the User-Level Thread is easier than Kernel Level Thread.
 Context Switch Time is less in User Level Thread.
 User-Level Thread is more efficient than Kernel-Level Thread.
 Because of the presence of only Program Counter, Register Set, and Stack Space, it has a simple
representation.
Disadvantages of User-Level Threads
 The operating system is unaware of user-level threads, so kernel-level optimizations, like load balancing
across CPUs, are not utilized.
 If a user-level thread makes a blocking system call, the entire process (and all its threads) is blocked,
reducing efficiency.
 User-level thread scheduling is managed by the application, which can become complex and may not be
as optimized as kernel-level scheduling.
2. Kernel Level Threads
A kernel-level thread is a type of thread that the operating system itself creates and manages. The kernel keeps
a thread table to track all threads in the system and handles their scheduling. Kernel-level threads have somewhat
longer context-switching times because the kernel is involved in the switch.
Advantages of Kernel-Level Threads
 Kernel-level threads can run on multiple processors or cores simultaneously, enabling better utilization
of multicore systems.
 The kernel is aware of all threads, allowing it to manage and schedule them effectively across available
resources.
 Applications that block frequently are better handled by kernel-level threads: when one thread blocks,
the kernel can schedule another thread of the same process.
 The kernel can distribute threads across CPUs, ensuring optimal load balancing and system performance.
Disadvantages of Kernel-Level threads
 Context switching between kernel-level threads is slower compared to user-level threads because it
requires mode switching between user and kernel space.
 Managing kernel-level threads involves frequent system calls and kernel interactions, leading to
increased CPU overhead.
 A large number of threads may overload the kernel scheduler, leading to potential performance
degradation in systems with many threads.
 Implementation of this type of thread is a little more complex than a user-level thread.
Difference Between Process and Thread
The primary difference is that threads within the same process run in a shared memory space, while processes
run in separate memory spaces. Threads are not independent of one another like processes are, and as a result,
threads share with other threads their code section, data section, and OS resources (like open files and signals).
But, like a process, a thread has its own program counter (PC), register set, and stack space.
What is Multi-Threading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a process into
multiple threads. For example, in a browser, multiple tabs can be different threads. MS Word uses multiple
threads: one thread to format the text, another thread to process inputs, etc. More advantages of multithreading
are discussed below.
Multithreading is a technique used in operating systems to improve the performance and responsiveness of
computer systems. Multithreading allows multiple threads (i.e., lightweight processes) to share the same
resources of a single process, such as the CPU, memory, and I/O devices.
Single Threaded vs Multi-threaded Process
Multithreading can be done without OS support, as seen in Java’s original “green threads” model, where threads
were implemented by the Java Virtual Machine (JVM), which provided its own thread management. Such threads,
also called user-level threads, are managed independently of the underlying operating system.
Application itself manages the creation, scheduling, and execution of threads without relying on the operating
system’s kernel. The application contains a threading library that handles thread creation, scheduling, and
context switching. The operating system is unaware of User-Level threads and treats the entire process as a
single-threaded entity.
Benefits of Thread in Operating System
 Responsiveness: If a process is divided into multiple threads and one thread completes its work, its output can be returned immediately while the remaining threads continue.
 Faster context switch: Context-switch time between threads is lower than between processes, since process context switching requires more overhead from the CPU.
 Effective utilization of multiprocessor systems: If a single process has multiple threads, they can be scheduled on multiple processors, making execution faster.
 Resource sharing: Resources like code, data, and files can be shared among all threads within a process. Note: stacks and registers cannot be shared; each thread has its own stack and registers (see the sketch after this list).
 Communication: Communication between multiple threads is easier because the threads share a common address space, whereas communication between two processes requires specific inter-process communication techniques.
 Enhanced throughput of the system: If a process is divided into multiple threads and each thread's function is considered one job, the number of jobs completed per unit time increases, raising the throughput of the system.
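The note on per-thread state above can be made concrete with C11 thread-local storage, as the sketch here shows (illustrative names; compile with gcc -pthread): each thread gets a private copy of tls_value, while shared_value is common to the whole process.

#include <pthread.h>
#include <stdio.h>

_Thread_local int tls_value = 0;   /* one private copy per thread (C11) */
int shared_value = 0;              /* one copy shared by every thread */

void *work(void *name) {
    tls_value++;                                   /* touches this thread's copy */
    printf("%s: tls_value = %d, shared_value = %d\n",
           (char *)name, tls_value, shared_value); /* tls_value is always 1 */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    shared_value = 7;                              /* visible to both workers */
    pthread_create(&a, NULL, work, "thread A");
    pthread_create(&b, NULL, work, "thread B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("main: tls_value = %d\n", tls_value);   /* main's copy is still 0 */
    return 0;
}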

3.7.2. Threads and its types
A thread is a single sequence stream within a process. Threads have the same properties as the process so they
are called lightweight processes. On a single processor, threads execute one after another, but the rapid switching gives the illusion that they are executing in parallel. Each thread has its own state. In this session, we are going to discuss threads in detail
along with similarities between Threads and Processes, Differences Between Threads and Processes.
What are Threads?
Threads are small units of a computer program that can run independently. They allow a program to perform
multiple tasks at the same time, like having different parts of the program run simultaneously. This makes
programs more efficient and responsive, especially for tasks that can be divided into smaller parts.
Each thread has:
 A program counter
 A register set
 A stack space
Threads are not independent of each other as they share the code, data, OS resources, etc.

Similarity Between Threads and Process
 On a single CPU core, only one thread or process is active at any instant.
 Within the process, both execute in a sequential manner.
 Both can create children.
 Both can be scheduled by the operating system: Both threads and processes can be scheduled by the operating system to execute on the CPU. The operating system is responsible for assigning CPU time to the threads and processes based on various scheduling algorithms.
 Both have their own execution context: Each thread and process has its own execution context, which
includes its own register set, program counter, and stack. This allows each thread or process to execute
independently and make progress without interfering with other threads or processes.
 Both can communicate with each other: Threads and processes can communicate with each other using various inter-process communication (IPC) mechanisms such as shared memory, message queues, and pipes. This allows threads and processes to share data and coordinate their activities.
 Both can be preempted: Threads and processes can be preempted by the operating system, which means
that their execution can be interrupted at any time. This allows the operating system to switch to another
thread or process that needs to execute.
 Both can be terminated: Threads and processes can be terminated by the operating system or by other
threads or processes. When a thread or process is terminated, all of its resources, including its execution
context, are freed up and made available to other threads or processes.

Differences Between Threads and Process
 Resources: Processes have their own address space and resources, such as memory and file handles,
whereas threads share memory and resources with the program that created them.
 Scheduling: Processes are scheduled to use the processor by the operating system, whereas threads are
scheduled to use the processor by the operating system or the program itself.
 Creation: The operating system creates and manages processes, whereas the program or the operating
system creates and manages threads.
 Communication: Because processes are isolated from one another and must rely on inter-process communication mechanisms, they generally have more difficulty communicating with one another than threads do. Threads, on the other hand, can interact directly with other threads within the same program.
Threads, in general, are lighter than processes and are better suited for concurrent execution within a single program. Processes are commonly used to run separate programs or to isolate resources between programs.

Types of Threads
There are two main types of threads, User-Level Threads and Kernel-Level Threads; let's discuss each in detail:
User Level Thread (ULT)
User-level threads are implemented in a user-level library; they are not created using system calls. Thread switching does not need to call the OS or cause an interrupt to the kernel. The kernel does not know about user-level threads and manages them as if they were single-threaded processes.
Advantages of ULT
 Can be implemented on an OS that doesn't support multithreading.
 Simple representation, since each thread has only a program counter, register set, and stack space.
 Simple to create since no intervention of kernel.
 Thread switching is fast since no OS calls need to be made.
Disadvantages of ULT
 Little or no coordination between the threads and the kernel.
 If one thread causes a page fault, the entire process blocks.
Kernel Level Thread (KLT)
The kernel knows about and manages the threads. Instead of a thread table in each process, the kernel itself keeps a master thread table that tracks all the threads in the system. In addition, the kernel maintains the traditional process table to keep track of processes. The OS kernel provides system calls to create and manage threads.
Advantages of KLT
 Since the kernel has full knowledge of the threads in the system, the scheduler may decide to give more time to a process having a large number of threads.
 Good for applications that frequently block.
Disadvantages of KLT
 Slower and less efficient than user-level threads.
 Each thread requires a kernel thread control block, which adds overhead.
Threading Issues
 The fork() and exec() System Calls: The semantics of the fork() and exec() system calls change in a multithreaded program. If one thread in a program calls fork(), does the new process duplicate all threads, or is the new process single-threaded? Some UNIX systems have chosen to have two versions of fork(), one that duplicates all threads and another that duplicates only the thread that invoked fork(). The exec() system call works the same way as in a single-threaded process: if a thread invokes exec(), the program specified in the parameter to exec() replaces the entire process, including all threads.
 Signal Handling: A signal is used in UNIX systems to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signaled. All signals, whether synchronous or asynchronous, follow the same pattern: (1) a signal is generated by the occurrence of a particular event; (2) the signal is delivered to a process; (3) once delivered, the signal must be handled. A signal may be handled by one of two possible handlers: a default signal handler or a user-defined signal handler. Every signal has a default signal handler that the kernel runs when handling that signal; this default action can be overridden by a user-defined signal handler that is called to handle the signal.
 Thread Cancellation: Thread cancellation involves terminating a thread before it has completed. For example, if multiple threads are concurrently searching through a database and one thread returns the result, the remaining threads might be cancelled. Another situation occurs when a user presses a button on a web browser that stops a web page from loading further: a page often loads using several threads, with each image loaded in a separate thread, and pressing the stop button cancels all threads loading the page. A thread that is to be cancelled is referred to as the target thread. Cancellation of a target thread may occur in two different ways: (1) asynchronous cancellation, where one thread immediately terminates the target thread; and (2) deferred cancellation, where the target thread periodically checks whether it should terminate, giving it an opportunity to exit in an orderly fashion (see the sketch after this list).
 Thread-Local Storage : Threads belonging to a process share the data of the process. Indeed, this data
sharing provides one of the benefits of multithreaded programming. However, in some circumstances,
each thread might need its own copy of certain data. We will call such data thread-local storage (or TLS).
For example, in a transaction-processing system, we might service each transaction in a separate thread.
Furthermore, each transaction might be assigned a unique identifier. To associate each thread with its
unique identifier, we could use thread-local storage.
 Scheduler Activations : One scheme for communication between the user-thread library and the kernel
is known as scheduler activation. It works as follows: The kernel provides an application with a set of
virtual processors (LWPs), and the application can schedule user threads onto an available virtual
processor.
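To make deferred cancellation concrete, here is a minimal POSIX sketch (illustrative names; compile with gcc -pthread). The worker polls pthread_testcancel() at safe points, so the pthread_cancel() request takes effect there rather than at an arbitrary instruction.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *worker(void *arg) {
    int old;
    /* Deferred cancellation is the pthreads default; set it explicitly for
       clarity: the thread can only be cancelled at cancellation points. */
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &old);
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();        /* explicit cancellation point */
        usleep(1000);
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(t);               /* a request, not immediate termination */
    pthread_join(t, NULL);           /* worker exits at its next check */
    printf("worker cancelled in an orderly fashion\n");
    return 0;
}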
Advantages of Threading
 Responsiveness: A multithreaded application increases responsiveness to the user.
 Resource Sharing: Resources like code and data are shared between threads, thus allowing a
multithreaded application to have several threads of activity within the same address space.
 Increased Concurrency: Threads may run in parallel on different processors, increasing concurrency on a multiprocessor machine.
 Lower Cost: It costs less to create and context-switch threads than processes.
 Lower Context-Switch Time: Threads take less time to context-switch than processes.
Disadvantages of Threading
 Complexity: Threading can make programs more complicated to write and debug, because threads must synchronize their actions to avoid conflicts (see the mutex sketch after this list).
 Resource Overhead: Each thread consumes memory and processing power, so having too many threads
can slow down a program and use up system resources.
 Difficulty in Optimization: It can be challenging to optimize threaded programs for different hardware
configurations, as thread performance can vary based on the number of cores and other factors.
 Debugging Challenges: Identifying and fixing issues in threaded programs can be more difficult
compared to single-threaded programs, making troubleshooting complex.
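The synchronization referred to under "Complexity" usually means protecting shared data with locks. Here is a minimal POSIX sketch (illustrative names; compile with gcc -pthread) in which a mutex keeps two threads from losing updates to a shared counter.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* without the lock, updates are lost */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* reliably 200000 with the lock */
    return 0;
}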

3.7.3. User-Level Thread vs Kernel-Level Thread
User-level threads are managed entirely by a user-level thread library, without any direct intervention from the operating system's kernel, whereas kernel-level threads are managed directly by the kernel. In this session we give an overview of both kinds of thread and the basic terms required.
User-Level Thread
User-level threads are implemented by user-level software. They are created and managed by a thread library, which the operating system provides as an API for creating, managing, and synchronizing threads. They are faster to manage than kernel-level threads and are basically represented by a program counter, stack, register set, and control block.
User-level threads are typically employed in scenarios where fine control over threading is necessary, but the
overhead of kernel threads is not desired. They are also useful in systems that lack native multithreading support,
allowing developers to implement threading in a portable way.
Examples – User-level thread libraries include POSIX Pthreads and Mach C-threads.
Advantages of User-Level Threads
 Quick and easy to create: User-level threads can be created and managed more rapidly.
 Highly portable: They can be implemented across various operating systems.
 No kernel mode privileges required: Context switching can be performed without transitioning to kernel
mode.
Disadvantages of User-Level Threads
 Limited use of multiprocessing: Multithreaded applications may not fully exploit multiple processors.
 Blocking issues: A blocking operation in one thread can halt the entire process.
Kernel-Level Thread
Threads are the units of execution within an operating system process. The OS kernel generates, schedules, and oversees kernel-level threads, since it controls them directly; all of their management is done by the kernel.
Each kernel-level thread has its own context, including information about the thread’s status, such as its name,
group, and priority.
Example – Kernel-level threads include Java threads, POSIX threads on Linux, etc.
Advantages of Kernel-Level Threads
 True parallelism: Kernel threads allow real parallel execution in multithreading.
 Execution continuity: Other threads can continue to run even if one is blocked.
 Access to system resources: Kernel threads have direct access to system-level features, including I/O
operations.
Disadvantages of Kernel-Level Threads
 Management overhead: Kernel threads take more time to create and manage.
 Kernel mode switching: Requires mode switching to the kernel, adding overhead.

Difference Between User-Level Thread and Kernel-Level Thread

 Implemented by: User threads are implemented by user-level libraries; kernel threads are implemented by the operating system (OS).
 Recognition: The operating system does not recognize user-level threads directly; kernel threads are recognized by the operating system.
 Implementation: Implementing user threads is easy; implementing kernel-level threads is complicated.
 Context-switch time: Less for user-level threads; more for kernel-level threads.
 Hardware support: No hardware support is required for switching between user-level threads; kernel-level switching needs hardware support.
 Blocking operation: If one user-level thread performs a blocking operation, the entire process is blocked; if one kernel thread performs a blocking operation, another thread can continue execution.
 Multithreading: Multithreaded applications built on user-level threads cannot take full advantage of multiprocessing; kernels themselves can be multithreaded.
 Creation and management: User-level threads can be created and managed more quickly; kernel-level threads take more time to create and manage.
 Operating system: Any operating system can support user-level threads; kernel-level threads are operating-system specific.
 Thread management: User-level threads are managed by a thread library at the user level; with kernel-level threads, the application code contains no thread-management code, only calls to an API into kernel mode.
 Example: POSIX threads and Mach C-threads (user level); Java threads and POSIX threads on Linux (kernel level).
 Advantages: User-level threads are simple and quick to create, more portable, and need no kernel-mode privileges for context switching; kernel-level threads allow true parallelism and multithreading in kernel routines, and execution can continue if one thread blocks.
 Disadvantages: User-level threads cannot fully utilize multiprocessing, and the entire process blocks if one thread blocks; kernel-level threads require more time to create and manage and involve mode switching to kernel mode.
 Memory management: User-level threads each have their own stack but share the same address space; kernel-level threads also have their own stacks and are managed independently by the kernel, giving better isolation between them.
 Fault tolerance: User-level threads are less fault-tolerant; if one crashes, it can bring down the entire process, whereas kernel-level threads are managed independently, so one crashing does not necessarily affect the others.
 Resource utilization: User-level threads have limited access to system resources and cannot directly perform I/O operations; kernel-level threads can access system-level features such as I/O operations.
 Portability: User-level threads are more portable; kernel-level threads are less portable because they depend on OS-specific kernel implementations.

3.7.4. Process-based and Thread-based Multitasking

A multitasking operating system gives you the perception of two or more tasks/jobs/processes running simultaneously. It does this by dividing system resources among these tasks and switching between them over and over while they execute. The CPU processes only one task at a time, but the switching is so fast that it looks as if the CPU is executing multiple processes simultaneously. Multitasking systems can support either preemptive multitasking, where the OS doles out time to applications (virtually all modern OSes), or cooperative multitasking, where the OS waits for the program to give back control (Windows 3.x, Mac OS 9 and earlier), an approach prone to hangs and crashes when a program misbehaves. Also known as timesharing, multitasking is a logical extension of multiprogramming.
Multitasking Programming has Two Types:
1. Process-based Multitasking
2. Thread-based Multitasking
Process-Based Multitasking vs Thread-Based Multitasking
 Concurrency: In process-based multitasking, two or more processes or programs run concurrently; in thread-based multitasking, two or more threads run concurrently.
 Smallest unit: In process-based multitasking, a process or program is the smallest unit; in thread-based multitasking, a thread is the smallest unit.
 Unit size: The program is the bigger unit; the thread is the smaller unit.
 Overhead: Process-based multitasking requires more overhead; thread-based multitasking requires less.
 Address space: Each process requires its own address space; threads share the same address space.
 Communication: Process-to-process communication is expensive; thread-to-thread communication is not.
 Idle CPU time: Process-based multitasking is unable to exploit the idle time of the CPU; thread-based multitasking can take advantage of it.
 Weight: Process-based multitasking is comparatively heavyweight; thread-based multitasking is comparatively lightweight.
 Data rate: Process-based multitasking has a faster data rate because two or more processes or programs can run simultaneously; thread-based multitasking has a comparatively slower data rate.
 Example: Listening to music while browsing the internet is process-based multitasking; the music player and the browser are separate processes. Navigating a web page while downloading a file in the same browser is thread-based multitasking, where navigation is one thread and downloading is another; similarly, in a word processor such as MS Word, one thread accepts typed text while another thread runs the spell checker.

3.7.5. Multi-threading models
Multithreading is the ability of a process to run multiple threads at the same time. There are two main threading models in process management: user-level threads and kernel-level threads.
User-level threads: In this model, the operating system does not directly support threads. Instead, threads are
managed by a user-level thread library, which is part of the application. The library manages the threads and
schedules them on available processors. The advantages of user-level threads include greater flexibility and
portability, as the application has more control over thread management. However, the disadvantage is that
user-level threads are not as efficient as kernel-level threads, as they rely on the application to manage thread
scheduling.
Kernel-level threads: In this model, the operating system directly supports threads as part of the kernel. Each
thread is a separate entity that can be scheduled and executed independently by the operating system. The
advantages of kernel-level threads include better performance and scalability, as the operating system can
schedule threads more efficiently. However, the disadvantage is that kernel-level threads are less flexible and
portable than user-level threads, as they are managed by the operating system.
There are also hybrid models that combine elements of both user-level and kernel-level threads. For example,
some operating systems use a hybrid model called the “two-level model”, where each process has one or more
user-level threads, which are mapped to kernel-level threads by the operating system.
Overall, the choice of threading model depends on the requirements of the application and the capabilities of
the underlying operating system.

Here are some advantages and disadvantages of each threading model:
User-level threads:
Advantages:
Greater flexibility and control: User-level threads provide more control over thread management, as the thread
library is part of the application. This allows for more customization and control over thread scheduling.
Portability: User-level threads can be more easily ported to different operating systems, as the thread library is
part of the application.
Disadvantages:
Lower performance: User-level threads rely on the application to manage thread scheduling, which can be less
efficient than kernel-level thread scheduling. This can result in lower performance for multithreaded
applications.
Limited parallelism: User-level threads within one process cannot run in parallel, because the kernel sees the process as a single schedulable entity and will not place its threads on multiple processors.
Kernel-level threads:
Advantages:
Better performance: Kernel-level threads are managed by the operating system, which can schedule threads
more efficiently. This can result in better performance for multithreaded applications.
Greater parallelism: Kernel-level threads can be scheduled on multiple processors, which allows for greater
parallelism and better use of available resources.
Disadvantages:
Less flexibility and control: Kernel-level threads are managed by the operating system, which provides less
flexibility and control over thread management compared to user-level threads.
Less portability: Kernel-level threads are more tightly coupled to the operating system, which can make them
less portable to different operating systems.
Hybrid models:
Advantages:
Combines advantages of both models: Hybrid models combine the advantages of user-level and kernel-level
threads, providing greater flexibility and control while also improving performance.
More scalable: Hybrid models can scale to larger numbers of threads and processors, which allows for better use
of available resources.
Disadvantages:
More complex: Hybrid models are more complex than either user-level or kernel-level threading, which can make
them more difficult to implement and maintain.
Requires more resources: Hybrid models require more resources than either user-level or kernel-level threading,
as they require both a thread library and kernel-level support.

Many operating systems support kernel threads and user threads in a combined way; Solaris is an example of such a system. There are three multithreading models:
 Many-to-many model
 Many-to-one model
 One-to-one model

Many to Many Model
In this model, multiple user threads are multiplexed onto the same or a smaller number of kernel-level threads; the number of kernel-level threads is specific to the machine. The advantage of this model is that if one user thread blocks, the other user threads can be scheduled onto other kernel threads, so the system does not block when a particular thread blocks. It is considered the best of the multithreading models.

Many to One Model
In this model, multiple user threads are mapped to a single kernel thread. When a user thread makes a blocking system call, the entire process blocks. Because there is only one kernel thread, only one user thread can access the kernel at a time, so threads cannot run on multiple processors simultaneously. Thread management is done at the user level, which makes it efficient.
One to One Model
In this model, there is a one-to-one relationship between user threads and kernel threads, so multiple threads can run on multiple processors. The drawback of this model is that creating a user thread requires creating the corresponding kernel thread. Because each user thread is backed by a different kernel thread, if one user thread makes a blocking system call, the other user threads are not blocked.

Thread Libraries:
A thread library provides the programmer with an API for creating and managing threads. There are two primary
ways of implementing a thread library. The first approach is to provide a library entirely in user space with no
kernel support. All code and data structures for the library exist in user space. This means that invoking a function
in the library results in a local function call in user space and not a system call. The second approach is to
implement a kernel-level library supported directly by the operating system. In this case, code and data structures
for the library exist in kernel space. Invoking a function in the API for the library typically results in a system call
to the kernel.
Three main thread libraries are in use today: POSIX Pthreads, Windows, and Java. Pthreads, the threads extension
of the POSIX standard, may be provided as either a user-level or a kernel-level library. The Windows thread library
is a kernel-level library available on Windows systems. The Java thread API allows threads to be created and
managed directly in Java programs.
Advantages of Multithreading in OS:
 Minimized context-switching time: context switching stores the state of a thread so that it can be reloaded when required, and switching between threads is cheaper than switching between processes.
 Concurrency within a process: threads provide concurrency, the execution of multiple instruction sequences at the same time.
 Economical creation and switching: thread switching is very efficient because only a small amount of state, such as the program counter, registers, and stack pointer, has to be swapped.
 Greater utilization of multiprocessor architectures.
Disadvantages of Multithreading in OS:
 Threads are not isolated from one another: because they share an address space, an error in one thread can affect every other thread in the process.
 The code might become more intricate to comprehend.
 The costs associated with handling many threads can be excessive for straightforward tasks.
 Identifying and resolving problems may become more demanding due to the intricate nature of the code.

3.7.6. Benefits of Multithreading
Multithreading is a crucial concept in modern computing that allows multiple threads to execute concurrently,
enabling more efficient utilization of system resources. By breaking down tasks into smaller threads,
applications can achieve higher performance, better responsiveness, and enhanced scalability. Whether it’s
handling multiple user requests or performing complex operations in parallel, multithreading is an essential
technique in both single-processor and multi-processor systems. This session explores the key benefits of
multithreading and how it contributes to optimizing program execution.
1. Responsiveness
Multithreading in an interactive application may allow a program to continue running even if part of it is blocked or performing a lengthy operation, thereby increasing responsiveness to the user. In a non-multithreaded environment, a server listens on a port for a request; when the request arrives, the server processes it and only then resumes listening. The time taken to process a request makes other users wait unnecessarily. A better approach is to pass the request to a worker thread and continue listening on the port, as shown in the sketch below. Similarly, a multithreaded web browser allows user interaction in one thread while a video loads in another, so instead of waiting for the whole web page to load, the user can continue viewing the portion already loaded.
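Here is a minimal sketch of the worker-thread pattern just described. The request handling is simulated with a print, and all names are illustrative; a real server would accept network connections in the main loop (compile with gcc -pthread).

#include <pthread.h>
#include <stdio.h>

void *handle_request(void *arg) {
    int id = *(int *)arg;
    printf("processing request %d in a worker thread\n", id);
    return NULL;
}

int main(void) {
    pthread_t workers[3];
    int ids[3] = {1, 2, 3};
    /* The "listener" hands each request to a worker and immediately
       goes back to accepting the next one. */
    for (int i = 0; i < 3; i++)
        pthread_create(&workers[i], NULL, handle_request, &ids[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(workers[i], NULL);
    return 0;
}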
2. Resource Sharing
Processes may share resources only through explicit techniques such as the following, which must be organized by the programmer:
 Message Passing
 Shared Memory
Threads, however, share the memory and the resources of the process to which they belong by default. The benefit of sharing code and data is that it allows an application to have several threads of activity within the same address space.
3. Economy
Allocating memory and resources for process creation is costly in both time and space. Since threads share the memory of the process to which they belong, it is more economical to create and context-switch threads. In general, much more time is consumed creating and managing processes than threads; in Solaris, for example, creating a process is about thirty times slower than creating a thread, and context switching is about five times slower.
4. Scalability
The benefits of multithreading increase greatly on a multiprocessor architecture, where threads may run in parallel on multiple processors. With only one thread, it is not possible to divide a process into smaller tasks that different processors can perform: a single-threaded process can run on only one processor, regardless of how many are available. Multithreading on a multi-CPU machine increases parallelism.
5. Better Communication System
Thread synchronization functions can be used to improve inter-process communication. Moreover, when huge amounts of data must be shared across multiple threads of execution inside the same address space, the shared memory provides extremely high bandwidth and low-latency communication among the various tasks within the application.
6. Multiprocessor Architecture Utilization
Each thread can execute in parallel on a distinct processor, so the benefits of multithreading are considerably amplified on a multiprocessor architecture. On a single-processor architecture, the CPU switches among threads so quickly that it creates the illusion of parallelism, but at any particular time only one thread is actually running.
7. Minimized system resource usage
Threads have a minimal impact on system resources: the overhead of creating, maintaining, and managing threads is lower than that of full processes.
8. Enhanced Concurrency
Multithreading can enhance the concurrency of a multi-CPU machine, because it allows every thread to execute in parallel on a distinct processor.
9. Reduced Context Switching Time
Threads minimize context-switching time because, in a thread context switch, the virtual memory space remains the same.

3.7.7. Remote Procedure Call (RPC)

Remote Procedure Call (RPC) is a powerful technique for constructing distributed, client-server based
applications. It is based on extending the conventional local procedure calling so that the called procedure does
not exist in the same address space as the calling procedure. The two processes may be on the same system, or
they may be on different systems with a network connecting them.
What is Remote Procedure Call (RPC)?
Remote Procedure Call (RPC) is a type of technology used in computing to enable a program to request a service
from software located on another computer in a network without needing to understand the network’s details.
RPC abstracts the complexities of the network by allowing the developer to think in terms of function calls rather
than network details, facilitating the process of making a piece of software distributed across different systems.
RPC works by allowing one program (a client) to directly call procedures (functions) on another machine (the
server). The client makes a procedure call that appears to be local but is run on a remote machine. When an RPC
is made, the calling arguments are packaged and transmitted across the network to the server. The server
unpacks the arguments, performs the desired procedure, and sends the results back to the client.
Working of a RPC
1. The client invokes a client stub procedure, passing parameters in the usual way. The client stub resides within the client's own address space.
2. The client stub marshalls (packs) the parameters into a message. Marshalling includes converting the representation of the parameters into a standard format and copying each parameter into the message.
3. The client stub passes the message to the transport layer, which sends it to the remote server machine. On the server, the transport layer passes the message to a server stub, which demarshalls (unpacks) the parameters and calls the desired server routine using the regular procedure call mechanism.
4. When the server procedure completes, it returns to the server stub (e.g., via a normal procedure call return), which marshalls the return values into a message.
5. The server stub then hands the message to the transport layer. The transport layer sends the result message back to the client transport layer, which hands the message back to the client stub.
6. The client stub demarshalls the return parameters and execution returns to the caller.
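The stub behaviour in steps 1 to 6 can be sketched in a few lines of C. Everything here is illustrative: the "network hop" is just a direct function call, whereas a real RPC system would send the request buffer over a socket and take care of byte order, binding, and errors.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* the remote procedure, as implemented on the server */
static int32_t add(int32_t a, int32_t b) { return a + b; }

/* server stub: demarshall arguments, call the procedure, marshall the result */
static void server_dispatch(const uint8_t *req, uint8_t *resp) {
    int32_t a, b, r;
    memcpy(&a, req, 4);
    memcpy(&b, req + 4, 4);
    r = add(a, b);
    memcpy(resp, &r, 4);
}

/* client stub: looks like a local call, but builds and "sends" a message */
static int32_t rpc_add(int32_t a, int32_t b) {
    uint8_t req[8], resp[4];
    int32_t r;
    memcpy(req, &a, 4);                     /* marshall the parameters */
    memcpy(req + 4, &b, 4);
    server_dispatch(req, resp);             /* stand-in for the network hop */
    memcpy(&r, resp, 4);                    /* demarshall the result */
    return r;
}

int main(void) {
    printf("rpc_add(2, 3) = %d\n", rpc_add(2, 3));   /* prints 5 */
    return 0;
}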
How to Make a Remote Procedure Call?
The calling environment is suspended, procedure parameters are transferred across the network to the
environment where the procedure is to execute, and the procedure is executed there. When the procedure
finishes and produces its results, its results are transferred back to the calling environment, where execution
resumes as if returning from a regular procedure call.
Note : RPC is especially well suited for client-server (e.g. query-response) interaction in which the flow of control
alternates between the caller and callee. Conceptually, the client and server do not both execute at the same
time. Instead, the thread of execution jumps from the caller to the callee and then back again.
Types of RPC
 Callback RPC: Callback RPC allows processes to act as both clients and servers. It helps with remote
processing of interactive applications. The server gets a handle to the client, and the client waits during
the callback. This type of RPC manages callback deadlocks and enables peer-to-peer communication
between processes.
 Broadcast RPC: In Broadcast RPC, a client’s request is sent to all servers on the network that can handle
it. This type of RPC lets you specify that a client’s message should be broadcast. You can set up
special broadcast ports. Broadcast RPC helps reduce the load on the network.
 Batch-mode RPC: Batch-mode RPC collects multiple RPC requests on the client side and sends them to
the server in one batch. This reduces the overhead of sending many separate requests. Batch-mode RPC
works best for applications that don’t need to make calls very often. It requires a reliable way to send
data.
What Does RPC do?
RPC stands for Remote Procedure Call. It lets a program on one computer use code on another computer as if it
were on the same computer. When a program with RPC is made ready to run, it includes a helper part called
a stub. This stub acts like the remote code. When the program runs and tries to use the remote code, the stub
gets this request. It then sends it to another helper program on the same computer. The first time this happens,
the helper program asks a special computer where to find the remote code.
The helper program then sends a message over the internet to the other computer, asking it to run the remote
code. The other computer also has helper programs that work with the remote code. When the remote code is
done, it sends the results back the same way. This whole process makes it seem like the remote code is running
on the local computer, even though it’s actually running somewhere else.
Issues of the RPC
RPC Runtime: RPC run-time system is a library of routines and a set of services that handle the network
communications that underlie the RPC mechanism. In the course of an RPC call, client-side and server-side run-
time systems’ code handle binding, establish communications over an appropriate protocol, pass call data
between the client and server, and handle communications errors.
Stub: The function of the stub is to provide transparency to the programmer-written application code. On the
client side, the stub handles the interface between the client’s local procedure call and the run-time system,
marshalling and unmarshalling data, invoking the RPC run-time protocol, and if requested, carrying out some of
the binding steps.
On the server side, the stub provides a similar interface between the run-time system and the local manager
procedures that are executed by the server.
Binding: The most flexible solution is to use dynamic binding and find the server at run time when the RPC is first
made. The first time the client stub is invoked, it contacts a name server to determine the transport address at
which the server resides. Binding consists of two parts
 Naming: A Server having a service to offer exports an interface for it. Exporting an interface registers it
with the system so that clients can use it.
 Locating: A Client must import an (exported) interface before communication can begin.
The call semantics associated with RPC
 Retry Request Message: Whether to retry sending a request message when a server has failed or the
receiver didn’t receive the message.
 Duplicate Filtering: Remove the duplicate server requests.
 Retransmission of Results: To resend lost messages without re-executing the operations at the server
side.
Advantages
 Easy Communication: RPC lets clients talk to servers using normal procedure calls in high-level
programming languages. This makes it simple for programmers to work with.
 Hidden Complexity: RPC hides the details of how messages are sent between computers. This means
programmers don’t need to worry about the underlying network communication.
 Flexibility: RPC can be used in both local and distributed environments. This makes it versatile for
different types of applications.
Disadvantages
 Limited Parameter Passing: RPC can only pass parameters by value. It can’t pass pointers, which limits
what can be sent between computers.
 Slower Than Local Calls: Remote procedure calls take longer than local procedure calls because they
involve network communication.
 Vulnerable to Failures: RPC depends on network connections, other machines, and separate processes.
This makes it more likely to fail than local procedure calls.
RPC vs REST
RPC and REST are two ways to make computer programs talk to each other over the internet. They’re different,
but both are useful. RPC is good for some things, and REST is good for others. Some companies need RPC, while
others prefer REST. Sometimes, developers use both RPC and REST in the same project, but not in the same part
of the program. RPC is an old idea, but new versions like gRPC and DRPC are making it popular again. Developers
are still using and improving these new types of RPC.
It’s hard to say which one is better – RPC or REST. They’re both good when used the right way. The best choice
depends on what you’re trying to do with your program.

3.8. Scheduling

3.9. Deadlock

Chapter-4
Memory Management
