
ADVANCED OPERATING SYSTEMS-23PCS06

UNIT – 1

Basics of Operating Systems: What is an Operating System? – Mainframe Systems – Desktop
Systems – Multiprocessor Systems – Distributed Systems – Clustered Systems – Real-Time
Systems – Handheld Systems – Feature Migration – Computing Environments – Process
Scheduling – Cooperating Processes – Inter Process Communication – Deadlocks – Prevention
– Avoidance – Detection – Recovery.

BASICS OF OPERATING SYSTEMS:

WHAT IS AN OPERATING SYSTEM?

 An operating system acts as an interface between the software and the different parts of
the computer, that is, the computer hardware.
 The operating system is designed in such a way that it can manage the overall
resources and operations of the computer.
 Operating System is a fully integrated set of specialized programs that handle all the
operations of the computer.
 It controls and monitors the execution of all other programs that reside in the
computer, which also includes application programs and other system software of the
computer. Examples of Operating Systems are Windows, Linux, Mac OS, etc.
An Operating System (OS) is a collection of software that manages computer hardware
resources and provides common services for computer programs. The operating system is the
most important type of system software in a computer system.
What is an Operating System Used for?
 The operating system helps in improving the computer software as well as the hardware.
 Without an OS, it becomes very difficult for any application to be user-friendly. The
operating system provides the user with an interface that makes any application
attractive and user-friendly.
 The operating system comes with a large number of device drivers that make OS
services reachable to the hardware environment.
 Each and every application present in the system requires the operating system.
 The operating system works as a communication channel between system hardware
and system software.
 The operating system lets an application use the hardware without the application
knowing the actual hardware configuration.
 It is one of the most important parts of the system and hence is present in every
device, whether large or small.

Functions of the Operating System


 Resource Management: The operating system manages and allocates memory,
CPU time, and other hardware resources among the various programs and
processes running on the computer.
 Process Management: The operating system is responsible for starting,
stopping, and managing processes and programs. It also controls the scheduling
of processes and allocates resources to them.
 Memory Management: The operating system manages the computer’s primary
memory and provides mechanisms for optimizing memory usage.
 Security: The operating system provides a secure environment for the user,
applications, and data by implementing security policies and mechanisms such
as access controls and encryption.
 Job Accounting: It keeps track of time and resources used by various jobs or
users.
 File Management: The operating system is responsible for organizing and
managing the file system, including the creation, deletion, and manipulation of
files and directories.
 Device Management: The operating system manages input/output devices such
as printers, keyboards, mice, and displays. It provides the necessary drivers and
interfaces to enable communication between the devices and the computer.
 Networking: The operating system provides networking capabilities such as
establishing and managing network connections, handling network protocols,
and sharing resources such as printers and files over a network.
 User Interface: The operating system provides a user interface that enables
users to interact with the computer system. This can be a Graphical User
Interface (GUI), a Command-Line Interface (CLI), or a combination of both.
 Backup and Recovery: The operating system provides mechanisms for backing
up data and recovering it in case of system failures, errors, or disasters.
 Virtualization: The operating system provides virtualization capabilities that
allow multiple operating systems or applications to run on a single physical
machine. This can enable efficient use of resources and flexibility in managing
workloads.
 Performance Monitoring: The operating system provides tools for monitoring
and optimizing system performance, including identifying bottlenecks,
optimizing resource usage, and analyzing system logs and metrics.
 Time-Sharing: The operating system enables multiple users to share a computer
system and its resources simultaneously by providing time-sharing mechanisms
that allocate resources fairly and efficiently.
 System Calls: The operating system provides a set of system calls that enable
applications to interact with the operating system and access its resources.
System calls provide a standardized interface between applications and the
operating system, enabling portability and compatibility across different
hardware and software platforms (a brief sketch follows this list).
 Error-detecting Aids: These contain methods that include the production of
dumps, traces, error messages, and other debugging and error-detecting
methods.
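To make the System Calls entry above concrete, here is a minimal sketch in C, assuming a
POSIX-like environment; the file name sample.txt is only an illustrative placeholder. Every
service the program uses (opening a file, reading it, writing to the screen) is requested
through a system call rather than by touching the hardware directly.

    #include <fcntl.h>      /* open()                     */
    #include <unistd.h>     /* read(), write(), close()   */

    int main(void)
    {
        char buf[128];

        int fd = open("sample.txt", O_RDONLY);    /* system call: open a file      */
        if (fd < 0)
            return 1;                             /* the OS reported an error      */

        ssize_t n = read(fd, buf, sizeof(buf));   /* system call: read from it     */
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n); /* system call: write to stdout  */

        close(fd);                                /* system call: release the file */
        return 0;
    }
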
Objectives of Operating Systems
Let us now see some of the objectives of the operating system, which are mentioned below.
 Convenient to use: One of the objectives is to make the computer system more
convenient to use in an efficient manner.
 User Friendly: To make the computer system more interactive with a more
convenient interface for the users.
 Easy Access: To provide easy access to users for using resources by acting as an
intermediary between the hardware and its users.
 Management of Resources: For managing the resources of a computer in a
better and faster way.
 Controls and Monitoring: By keeping track of who is using which resource,
granting resource requests, and mediating conflicting requests from different
programs and users.
 Fair Sharing of Resources: Providing efficient and fair sharing of resources
between the users and programs.
Types of Operating Systems
 Batch Operating System: A Batch Operating System is a type of operating
system that does not interact with the computer directly. There is an operator
who takes similar jobs having the same requirements and groups them into
batches.
 Time-sharing Operating System: Time-sharing Operating System is a type of
operating system that allows many users to share computer resources (maximum
utilization of the resources).
 Distributed Operating System: A Distributed Operating System is a type of
operating system that manages a group of different computers and makes them
appear to be a single computer. These operating systems are designed to operate on a
network of computers. They allow multiple users to access shared resources and
communicate with each other over the network. Examples include Microsoft
Windows Server and various distributions of Linux designed for servers.
 Network Operating System: Network Operating System is a type of operating
system that runs on a server and provides the capability to manage data, users,
groups, security, applications, and other networking functions.
 Real-time Operating System: Real-time Operating System is a type of
operating system that serves a real-time system and the time interval required to
process and respond to inputs is very small. These operating systems are
designed to respond to events in real time. They are used in applications that
require quick and deterministic responses, such as embedded systems, industrial
control systems, and robotics.
 Multiprocessing Operating System: A multiprocessing operating system uses
multiple CPUs within a single computer system to boost performance. The CPUs
are linked together so that a job can be divided among them and executed more
quickly.
 Single-User Operating Systems: Single-User Operating Systems are designed
to support a single user at a time. Examples include Microsoft Windows for
personal computers and Apple macOS.
 Multi-User Operating Systems: Multi-User Operating Systems are designed to
support multiple users simultaneously. Examples include Linux and Unix.
 Embedded Operating Systems: Embedded Operating Systems are designed to
run on devices with limited resources, such as smartphones, wearable devices,
and household appliances. Examples include Google’s Android and Apple’s
iOS.
 Cluster Operating Systems: Cluster Operating Systems are designed to run on
a group of computers, or a cluster, to work together as a single system. They are
used for high-performance computing and for applications that require high
availability and reliability. Examples include Rocks Cluster Distribution and
OpenMPI.
How to Choose an Operating System?
There are many factors to be considered while choosing the best Operating System for our
use. These factors are mentioned below.
 Price Factor: Price is one of the factors in choosing the correct Operating System,
as some operating systems are free, like Linux, while others, such as Windows and
macOS, are paid.
 Accessibility Factor: Some Operating Systems are easy to use, like macOS and
iOS, but some, like Linux, are a little more complex to understand. So, you must
choose the Operating System that you find most accessible.
 Compatibility Factor: Some Operating Systems support fewer applications,
whereas others support more. You must choose an OS that supports the
applications you require.
 Security Factor: Security is also a factor in choosing the correct OS; for example,
macOS provides some additional security features, while Windows has somewhat
fewer.
Examples of Operating Systems
 Windows (GUI-based, PC)
 GNU/Linux (Personal, Workstations, ISP, File, and print server, Three-tier
client/Server)
 macOS (Macintosh), used for Apple’s personal computers and workstations
(MacBook, iMac).
 Android (Google’s Operating System for smartphones/tablets/smartwatches)
 iOS (Apple’s OS for iPhone, iPad, and iPod Touch)
MAIN FRAME SYSTEMS

What Is a Mainframe?

 A mainframe computer, often colloquially known as "big iron," is typically used by
large enterprises for mission-critical applications.
 This involves processing massive amounts of data for activities like censuses, industry
and consumer analytics, enterprise resource planning, or large transaction processing.
 Today’s mainframes are far smaller than the “Big Iron” giants of the past.
 The most recent mainframe might cohabit with various systems in the data center
using a 19-inch rack.
 Modern mainframes are also referred to as data servers (even though servers are not
identical to mainframes).
 This is because they are meant to execute up to 1 trillion daily online transactions
with the highest degrees of safety and dependability.
 Mainframes have a high degree of availability, as they are frequently used for
applications in which downtime would be expensive and, at times, challenging for an
organization.
Reliability, availability, and serviceability (RAS) is the distinguishing feature of mainframe
computers. Other primary features include:
 Mainframes may boost or modify system capacity on the go without
interrupting system operations. Its precision and granularity offer expertise and
sophistication uncommon amongst server solutions.
 Modern mainframes, such as the IBM zSeries, provide two virtualization levels:
logical partitions and virtual machines. Many mainframe users maintain two
machines: one at their primary data center and the other in their backup data
center, which may be fully active, partly active, or in standby mode in the event
of a disaster affecting the primary data center.
 Testing, developing, training and production workloads for applications and
databases may work on a single system unless the demand is exceedingly high
and the machine’s capacity is exhausted. This configuration of two mainframes
may enable continuous business service, preventing both planned and
unscheduled interruptions.
 Mainframes are intended to manage very large input and output (I/O) volumes
and prioritize throughput. Ever since the 1950s, mainframe architectures have
included auxiliary hardware to control I/O devices, freeing the CPU to focus
solely on high-speed memory.
 It is typical for mainframes to administer enormous databases and files. Records
of gigabyte to terabyte-size capacity are quite common. Mainframes often
contain enormous volumes of online data repositories compared to a regular PC
and can also be accessed rapidly.
Evolution of mainframe computers
 From the 1950s to the early twenty-first century, many manufacturers and their
successors built mainframe computers, with the number steadily decreasing as the
cloud matured.
 The 700/7000 series and subsequent production of the 360 series mainframes led to
IBM’s unquestionable ascendancy.
 Their present zSeries mainframe computers have continued to advance from the later
design.
 Germany’s Siemens and Telefunken, U.K.’s ICL, and Japan’s Fujitsu and Hitachi
were notable foreign manufacturers.
 During the Cold War, the Soviet Union and Warsaw Pact nations produced
indistinguishable clones of IBM mainframes.
 In the 1980s, minicomputer-led systems became increasingly advanced and were able
to replace the lower portions of the mainframes.
 And over the next few decades, businesses discovered that servers built on
microcomputer designs could be implemented for a fraction of the purchase cost and
offer local users far more autonomy over their own systems, given the then-current IT
policies and practices.
 Personal computers progressively replaced the terminals used for communicating with
mainframe systems.
 As a result, demand decreased, and future mainframe installations were mainly
limited to the financial sector and the government.
 IBM unveiled their most recent mainframe system, the IBM z16, in April 2022, which
had an on-chip artificial intelligence (AI) accelerator and a new CPU, giving
mainframes a fresh lease on life.
What do mainframes do?
Mainframes carry out three essential tasks. Let’s understand each one in detail.
 Act as a data warehouse orchestration system: Every computer has a hard
drive for long-term data storage, but mainframe systems store all of the data and
applications inside themselves. When remote users with linked terminals log in,
the mainframe grants each remote terminal access to its files and applications.
 Help enforce authentication and access permissions: Storing data and
software files on a single mainframe system may increase efficiency, but it may
also jeopardize data security. In mainframe systems, administrators have
control over programs and data. They can also determine the individuals who
have access. Therefore, mainframes may serve as firewalls against intruder
attacks.
 Allocate processor time and resources: Mainframe systems can divide a finite
amount of processing power among all concurrently logged-in users.
Consequently, the mainframe determines which types of priority correlate to
which types of users. The mainframe administrator has the authority to
determine these priorities and allot processor resources.
Features of a Mainframe Computer
A mainframe computer offers the following features.

1. Presence of two processors


There are two types of processors in mainframe computers: the primary processor and the
system assistance processor, or SAP. The latter doesn’t process data but transfers it from one
location to another as quickly as possible. Each CPU may contain seven to ten specially
built cores for increased throughput.
2. Multiple input/output (I/O) cards
Each mainframe may contain as many as 160 I/O cards because they are designed for
redundancy. This means that if one card malfunctions, others will take up its tasks until it is
replaced.
3. High storage capacity
These systems have tremendous storage capacity, allowing them to process massive volumes
of data on demand. It can store a vast quantity of data and interpret it according to user
specifications. After data processing, the system can provide accurate findings with zero data
inaccuracies.
4. RAS-based performance
All applications on mainframes are designed with reliability, availability, and serviceability
(RAS) in mind, which distinguishes the machine from other systems. With the aid of these
computers, data processing is simple, and businesses use the scalability characteristic of the
system to work with varying storage capacities. The CPUs within the system sustain the
computational power of all of these apps.
5. No interruptions in the functioning
When updating software on a mainframe, workloads are distributed across the processors so
that productivity is not hindered. In other cases, pausing the system might be prohibitively
expensive for the business. If the organization is a financial institution, it could even
endanger national security because of the inability to process applications. The primary
function of mainframes is to make important systems accessible around the clock.
6. Multiple operating systems on the same machine
Multiple operating systems may be hosted on a particular mainframe. For instance, it is
typical to utilize z/OS alongside Linux on a single mainframe. z/VM, z/VSE, Linux for
System z, and z/TPF are the four dominant operating systems for mainframes, along with
z/OS.
7. Throughput-driven fault-tolerant computing
A substantial quantity of output and input data is sent to the system. This means that
mainframes must be able to manage all of this data, applications, and processes with ease.
The quantity of data transported to or from a system does not affect mainframes. In addition,
the mainframe ensures no errors occur while moving massive volumes of data inside its
database. This feature is known as fault-tolerant computing.
8. Clustering technology
Mainframe systems support clustering technologies with close coupling (called Parallel
Sysplex in an IBM environment). This capability enables the operation of up to 32 machines
as a unified system configuration. Even if a system crashes, work will be completed
seamlessly on the subsequent live system with no performance loss.
9. Centralization of computing processes
The mainframe system centralizes the administration of computing tasks. This implies that all
activities occur in the mainframe’s processing section, and the results are shown on a client’s
desktop monitor. The user may interact with an application or utility operating on the desktop
while the mainframe operates in the background.
10. A move towards flexibility
Today, however, the difference between centralized and distributed computing is rapidly
diminishing. Consequently, mainframes are routinely combined with clusters of simpler
servers in a range of topologies. Modern mainframe hardware and software assets (like
processors, storage, and device interfaces) may be reconfigured dynamically while programs
continue to operate. This highlights the adaptable and evolving nature of modern mainframes.
11. Performance advantages over servers
The properties of mainframes must be comprehended in relation to servers and their intrinsic
differences. Although the words are often used interchangeably, mainframes and servers are
unique in the following ways:
 Size: Physically, a standard commodity server is smaller than any mainframe.
These days, mainframe computers are roughly the size of a refrigerator, while a
server tray of the same size might accommodate around 12 low-cost servers.
Mainframes will almost certainly be bulkier than conventional servers because of
the computing hardware resources they contain.
 Throughput: If a standard server can process 300 transactions per second, this
translates to around 26 million transactions each day. This is a substantial
figure, but it pales compared to the trillions a mainframe can manage. IBM
claims that Z13 mainframes can process 2.5 billion daily transactions.
 Versatility: It is not possible to migrate mainframe workloads to commodity
servers. However, you may shift tasks to a mainframe that would ordinarily be
executed on a commodity server. What this means is that mainframes offer the
best of both worlds. Users can access mission-critical applications that cannot
operate elsewhere and manage server workloads on commodity hardware.
Benefits of Mainframes
1. Enable cloud-ready and scalable infrastructure
For cloud deployment, mainframes enable a range of highly secure virtualized environments.
This comprises the z/VM operating system, blade servers, hypervisors, as well as logical
partitions (LPARs). In addition to supporting millions of users with greater speed,
mainframes are the best platform for big data analytics, data management, and web
applications. Consequently, the technology is highly scalable.
2. Maintain compliance and security
Mainframes support industry standards, compliance regulations, and best practices with the
help of data encryption, role segregation, privileged user monitoring, secure communication
systems, audit reporting, and other mechanisms. It provides enterprise-wide visibility and a
high degree of security transparency, enabling improved control. In addition, private clouds
built on mainframes may reduce the inherent security risks of public cloud services with open
networks.
3. Simplify the migration and consolidation of workloads
Transferring dispersed tasks to the mainframe setup is simple. This decreases the number of
distributed systems that must be controlled. When your virtual environment has been
optimized, it is simple to consolidate various tasks on the mainframe while maintaining the
necessary separation between systems. This also minimizes the license expenses that
dispersed systems would incur.
4. Reduce the total cost of ownership
The biggest benefit of mainframe computers is their unparalleled longevity. These computers
have an average lifetime of over ten years. Until that point, mainframe computers are often
problem-free. Once the average lifetime has been achieved, consumers can choose between
replacing or upgrading the unit.
In addition, there is a threshold at which increasing server numbers becomes more expensive
than operating the workload on a mainframe. Research on security management determined
that the total cost of ownership (TCO) over three years for a private cloud built on IBM
zEnterprise systems was 76% lower than for a public cloud offered by a third-party service
provider.
5. Ensure compatibility across generations
The operating system for mainframe computers supports a vast array of software and
hardware. However, a mainframe will support most software, irrespective of the OS version.
Even after an update, the system is still capable of running legacy programs. In addition,
mainframe computers don’t limit the number of concurrent operating systems. Multiple
operating systems can be created to function, thereby enhancing the system’s overall
performance.
6. Compatible with blockchain technology
Blockchain is among the most fascinating new applications for which mainframes are an
ideal match. In terms of reaction speed, transaction throughput, scalability, or security, the
mainframe is the perfect blockchain host over x86 servers.
Additionally, its security advantage is a decisive advantage. The blockchain approach is
predicated on transaction data carried in a network of immutable data blocks that cannot be
altered once assembled. Mainframes can deliver 100% encryption without affecting
performance due to their higher computing capability.
While mainframes remain essential for the reasons mentioned above, they also have a few
drawbacks. Before setting up a mainframe computer system, one should examine the
following:
 Complex implementation: Due to its physical components, establishing a
mainframe computer is more challenging than installing a typical computer.
 High initial outlay: The initial outlay of a mainframe is substantially more than
that of a standard server or the cloud.
 Complex maintenance: The management of mainframe computers cannot be
undertaken by ordinary IT personnel. It requires operations management and, in
particular, system debugging.
 Environmental conditions: Mainframes have additional environmental
limitations like maintenance of temperature and humidity.
Examples of Mainframes
While mainframe-like computing techniques are widely used, actual mainframe computers
are not very commonly seen in circulation (apart from IBM models). Keeping this in mind,
here are a few notable examples of mainframes:
1. IBM Z
IBM refers to all of its z/Architecture mainframe machines as IBM Z. In July 2017, with the
introduction of a new generation of products, IBM z Systems was rebranded as IBM Z. The
IBM Z line of mainframes presently comprises the latest model, IBM z16, along with z15,
z14, and z13, as well as IBM zEnterprise, IBM System z10, IBM System z9, and IBM
eServer zSeries models.
The IBM Z family preserves 100% backward compatibility. In practice, modern systems are
the direct offspring of the System/360, which was introduced in 1964. Half a century later,
the newest IBM Z system is compatible with most software created for older systems.
2. FUJITSU Server GS21
FUJITSU Server GS21 is ideal for mission-critical corporate and social infrastructure
systems that must operate 24×7. Fujitsu has been continuously enhancing mainframe
processing speed, functionality, and standards over the last 50 years to meet emerging
demands.
The FUJITSU Server GS21 can manage massive amounts of data and ensure high availability
at a reduced total cost of ownership. However, Fujitsu has declared that it would cease selling
mainframes in 2030, with maintenance & support ending in 2035.
3. UNIVAC 9400
Several decades ago, the 9400 was created for mid-sized organizations seeking simple system
expansion. In the 1960s, a UNIVAC 9400 mainframe was used in the computer center of a
Cologne industrial complex. After being replaced by new technologies and hardware, the
system was donated to a school in Cologne. From there, it was moved to the technikum29, a
German computer museum, in 2005, where it remains functional to this day.
DESKTOP SYSTEMS

 An operating system (OS) acts as an interface between the hardware and software of a
desktop system. It manages system resources, facilitates software execution, and
provides a user-friendly environment.
 Different operating systems offer distinct features, compatibility, and performance,
catering to the diverse needs and preferences of users.

Components of a Desktop System

 Central Processing Unit (CPU): The CPU is the brain of a desktop system,
responsible for executing instructions and performing calculations. It processes data
and carries out tasks based on the instructions provided by software programs. The
CPU’s performance is measured by its clock speed, number of cores, and cache size.
 Random Access Memory (RAM): RAM is a type of volatile memory that
temporarily stores data and instructions for the CPU to access quickly. It allows for
efficient multitasking and faster data retrieval, significantly impacting the overall
performance of the system. The amount of RAM in a desktop system determines its
capability to handle multiple programs simultaneously.
 Storage Devices: Desktop systems utilize various storage devices to store and
retrieve data. Hard Disk Drives (HDDs) are the traditional storage medium, offering
large capacities but slower read/write speeds. Solid-State Drives (SSDs) are a newer
technology that provides faster data access, enhancing the system’s responsiveness
and reducing loading times.
 Graphics Processing Unit (GPU): The GPU is responsible for rendering images,
videos, and animations on the computer screen. It offloads the graphical processing
tasks from the CPU, ensuring smooth visuals and enabling resource-intensive
applications such as gaming, video editing, and 3D modeling. High-performance
GPUs are essential for users who require demanding graphical capabilities.
 Input and Output Devices: Desktop systems are equipped with various input and
output devices. Keyboards and mice are the primary input devices, allowing users to
interact with the system and input commands. Monitors, printers, speakers, and
headphones serve as output devices, providing visual or auditory feedback based on
the system’s output.



Evolution of Desktop Systems

Desktop systems have evolved significantly over the years. From the bulky and
limited-capability systems of the past to the sleek and powerful computers of today,
technological advancements have revolutionized the desktop computing experience.

Smaller form factors, increased processing power, improved storage technologies, and
enhanced user interfaces are some of the notable advancements that have shaped the
evolution of desktop systems.

Popular Desktop Operating Systems

 Windows: Windows, developed by Microsoft, is one of the most widely used desktop
operating systems globally.
 macOS: macOS is the operating system designed specifically for Apple’s Mac
computers. Known for its sleek and intuitive interface, macOS offers seamless
integration with other Apple devices and services.
 Linux: Linux is an open-source operating system that provides a high degree of
customization and flexibility. It is favored by developers, system administrators, and
tech enthusiasts due to its stability, security, and vast array of software options.

Future Trends in Desktop Systems

The future of desktop systems holds exciting possibilities. As technology continues to


advance, we can expect further improvements in processing power, storage capacities, and
energy efficiency.

Virtual reality (VR) and augmented reality (AR) integration, cloud-based computing,
artificial intelligence (AI) integration, and seamless connectivity across devices are some of
the trends that will shape the future of desktop systems.

MULTIPROCESSOR SYSTEMS

 Multiple CPUs are interconnected so that a job can be divided among them for
faster execution.
 When a job finishes, results from all CPUs are collected and compiled to give
the final output. Jobs needed to share main memory and they may also share
other system resources among themselves.
 Multiple CPUs can also be used to run multiple jobs simultaneously.

For Example: UNIX Operating system is one of the most widely used multiprocessing
systems.

The basic organization of a typical multiprocessing system is shown in the given figure

To employ a multiprocessing operating system effectively, the computer system must
have the following things:

o A motherboard capable of handling multiple processors in a multiprocessing operating
system.

o Processors that are capable of being used in a multiprocessing system.

Advantages of multiprocessing operating system are:


o Increased reliability: Due to the multiprocessing system, processing tasks can be
distributed among several processors. This increases reliability as if one processor fails;
the task can be given to another processor for completion.

o Increased throughput: As the number of processors increases, more work can be done
in less time.

o Economy of scale: As multiprocessor systems share peripherals, secondary storage
devices, and power supplies, they are relatively cheaper than single-processor systems.

Disadvantages of Multiprocessing operating System


o Operating system of multiprocessing is more complex and sophisticated as it takes care
of multiple CPUs at the same time.

Types of multiprocessing systems

o Symmetrical multiprocessing operating system


o Asymmetric multiprocessing operating system

Symmetrical multiprocessing operating system:

In a Symmetrical multiprocessing system, each processor executes the same copy of the
operating system, takes its own decisions, and cooperates with other processes to smooth the
entire functioning of the system. The CPU scheduling policies are very simple. Any new job
submitted by a user can be assigned to any processor that is least burdened. It also results in a
system in which all processors are equally burdened at any time.
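
The following is a minimal sketch in C (an illustration only, not from the source) of how an
SMP scheduler might pick the least-burdened processor for a newly submitted job; the
cpu_load values and the NCPUS constant are assumptions made for the example.

    #include <stdio.h>

    #define NCPUS 4

    static int cpu_load[NCPUS] = {3, 1, 5, 2};    /* jobs currently queued per CPU */

    /* Return the index of the CPU with the shortest job queue. */
    static int least_burdened_cpu(void)
    {
        int best = 0;
        for (int i = 1; i < NCPUS; i++)
            if (cpu_load[i] < cpu_load[best])
                best = i;
        return best;
    }

    int main(void)
    {
        int target = least_burdened_cpu();
        cpu_load[target]++;                        /* enqueue the new job there */
        printf("New job assigned to CPU %d\n", target);
        return 0;
    }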

The symmetric multiprocessing operating system is also known as a "shared everything"
system, because the processors share memory and the input/output bus or data path. In such
systems, the number of processors does not usually exceed 16.
Characteristics of Symmetrical multiprocessing operating system:

o In this system, any processor can run any job or process.

o In this, any processor initiates an Input and Output operation.

Advantages of Symmetrical multiprocessing operating system:

o These systems are fault-tolerant. Failure of a few processors does not bring the entire
system to a halt.

Disadvantages of Symmetrical multiprocessing operating system:

o It is very difficult to balance the workload among processors rationally.

o Specialized synchronization schemes are necessary for managing multiple processors.

Asymmetric multiprocessing operating system

In an asymmetric multiprocessing system, there is a master-slave relationship between the
processors.

Further, one processor may act as a master (or supervisor) processor while the others are
treated as slave processors, as shown in the figure below.

In the figure, the asymmetric processing system shows that one CPU acts as a supervisor
whose function is to control the other processors.
In this type of system, each processor is assigned a specific task, and there is a designated
master processor that controls the activities of other processors.

For example, we have a math co-processor that can handle mathematical jobs better than the
main CPU. Similarly, we have an MMX processor that is built to handle multimedia-related
jobs. Similarly, we have a graphics processor to handle the graphics-related job better than the
main processor. When a user submits a new job, the OS has to decide which processor can
perform it better, and then that processor is assigned that newly arrived job. This processor acts
as the master and controls the system. All other processors look for masters for instructions or
have predefined tasks. It is the responsibility of the master to allocate work to other processors.
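
As a rough illustration of the master's dispatching role described above, here is a minimal
sketch in C; the job types and processor names are assumptions made for the example, not
real hardware identifiers.

    #include <stdio.h>

    typedef enum { JOB_MATH, JOB_MULTIMEDIA, JOB_GRAPHICS, JOB_GENERAL } job_type;

    /* The master decides which processor is best suited for an arriving job. */
    static const char *pick_processor(job_type t)
    {
        switch (t) {
        case JOB_MATH:       return "math co-processor";
        case JOB_MULTIMEDIA: return "multimedia (MMX) processor";
        case JOB_GRAPHICS:   return "graphics processor";
        default:             return "main CPU";
        }
    }

    int main(void)
    {
        job_type incoming = JOB_GRAPHICS;          /* newly arrived job */
        printf("Master assigns the job to: %s\n", pick_processor(incoming));
        return 0;
    }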

Advantages of Asymmetric multiprocessing operating system:

o In this type of system, the execution of an input/output operation or an application
program may be faster in some situations because many processors may be available
for a single job.

Disadvantages of Asymmetric multiprocessing operating system:

o In this type of multiprocessing operating system, the processors are unequally burdened.
One processor may have a long job queue while another sits idle.

o In this system, if the process handling a specific work fails, the entire system will go
down.

DISTRIBUTED SYSTEMS

 Distributed System is a collection of autonomous computer systems that are


physically separated but are connected by a centralized computer network that is
equipped with distributed system software.
 The autonomous computers will communicate among each system by sharing
resources and files and performing the tasks assigned to them.

Example of Distributed System:


Any Social Media can have its Centralized Computer Network as its Headquarters and
computer systems that can be accessed by any user and using their services will be the
Autonomous Systems in the Distributed System Architecture.
 Distributed System Software: This software enables computers to coordinate
their activities and to share resources such as hardware, software, data, etc.
 Database: It is used to store the processed data produced by each node/system
of the distributed system connected to the centralized network.
 Each autonomous system runs a common application and can have its own data,
which is shared with the centralized database system.
 To transfer the data to the autonomous systems, the centralized system should
have a middleware service and should be connected to a network.
 Middleware services provide capabilities that are not present by default in the
local systems or the centralized system, by acting as an interface between the
centralized system and the local systems. Using middleware components, the
systems communicate and manage data.
 The data transferred through the database is divided into segments or modules
and shared with the autonomous systems for processing (a brief sketch of this
segmentation follows the list).
 The data is processed, transferred back to the centralized system through the
network, and stored in the database.
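
The segmentation step in the list above can be sketched as follows in C (an illustration only;
the node count, data values, and round-robin assignment are assumptions, and real middleware
such as sockets or message queues would carry each segment over the network).

    #include <stdio.h>

    #define DATA_LEN 10
    #define NODES     3

    int main(void)
    {
        int data[DATA_LEN] = {4, 8, 15, 16, 23, 42, 7, 9, 11, 5};

        /* The centralized system splits the data set and assigns element i
         * to autonomous node i % NODES for processing. */
        for (int i = 0; i < DATA_LEN; i++) {
            int node = i % NODES;
            printf("data element %2d (value %2d) -> autonomous node %d\n",
                   i, data[i], node);
        }

        /* Each node would process its segment and return the result to the
         * centralized database over the network. */
        return 0;
    }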
Characteristics of Distributed System:
 Resource Sharing: It is the ability to use any Hardware, Software, or Data
anywhere in the System.
 Openness: It is concerned with Extensions and improvements in the system (i.e.,
How openly the software is developed and shared with others)
 Concurrency: It is naturally present in Distributed Systems, that deal with the
same activity or functionality that can be performed by separate users who are in
remote locations. Every local system has its independent Operating Systems and
Resources.
 Scalability: The scale of the system can be increased; as more processors are
added, the system can serve more users while remaining responsive.
 Fault tolerance: This concerns the reliability of the system; if there is a failure in
hardware or software, the system continues to operate properly without
degrading performance.
 Transparency: It hides the complexity of the distributed system from users
and application programs, preserving the privacy of each system.
 Heterogeneity: Networks, computer hardware, operating systems, programming
languages, and developer implementations can all vary and differ among
dispersed system components.
Advantages of Distributed System:
 Applications in Distributed Systems are Inherently Distributed Applications.
 Information in Distributed Systems is shared among geographically distributed
users.
 Resource Sharing (Autonomous systems can share resources from remote
locations).
 It has a better price performance ratio and flexibility.
 It has shorter response time and higher throughput.
 It has higher reliability and availability against component failure.
 It has extensibility so that systems can be extended in more remote locations and
also incremental growth.
Disadvantages of Distributed System:
 Adequate software for distributed systems does not yet exist.
 Security poses a problem due to easy access to data, as resources are shared
across multiple systems.
 Network saturation may hinder data transfer, i.e., if there is lag in the network,
the user will face problems accessing data.
 In comparison to a single user system, the database associated with distributed
systems is much more complex and challenging to manage.
 If every node in a distributed system tries to send data at once, the network may
become overloaded.
Applications Area of Distributed System:
 Finance and Commerce: Amazon, eBay, Online Banking, E-Commerce
websites.
 Information Society: Search Engines, Wikipedia, Social Networking, Cloud
Computing.
 Cloud Technologies: AWS, Salesforce, Microsoft Azure, SAP.
 Entertainment: Online Gaming, Music, YouTube.
 Healthcare: Online patient records, Health Informatics.
 Education: E-learning.
 Transport and logistics: GPS, Google Maps.
 Environment Management: Sensor technologies.
Challenges of Distributed Systems:

While distributed systems offer many advantages, they also present some challenges that
must be addressed. These challenges include:

 Network latency: The communication network in a distributed system can


introduce latency, which can affect the performance of the system.
 Distributed coordination: Distributed systems require coordination among the
nodes, which can be challenging due to the distributed nature of the system.
 Security: Distributed systems are more vulnerable to security threats than
centralized systems due to the distributed nature of the system.
 Data consistency: Maintaining data consistency across multiple nodes in a
distributed system can be challenging.

CLUSTERED SYSTEMS

 Cluster systems are similar to parallel systems because both systems use multiple CPUs.
 The primary difference is that clustered systems are made up of two or more
independent systems linked together.
 They have independent computer systems and a shared storage media, and all systems
work together to complete all tasks.
 All cluster nodes use two different approaches to interact with one another, such
as the message passing interface (MPI) and the parallel virtual machine (PVM); a
minimal MPI sketch is shown below.
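
The sketch below assumes an MPI implementation (such as MPICH or Open MPI) is installed;
it is compiled with mpicc and launched with mpirun. Each process, standing in for a cluster
node, simply reports its rank, which illustrates message-passing-style coordination rather
than shared memory.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                   /* join the cluster job         */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* which node/process am I?     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);     /* how many processes in total? */

        printf("Node %d of %d is up\n", rank, size);

        MPI_Finalize();                           /* leave the cluster job        */
        return 0;
    }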

This section covers the clustered operating system, its types, classification, advantages, and
disadvantages.

What is the Clustered Operating System?


 Cluster operating systems are a combination of software and hardware clusters.
 Hardware clusters aid in the sharing of high-performance disks among all computer
systems, while software clusters give a better environment for all systems to operate.
 A cluster system consists of various nodes, each of which contains its cluster software.
 The cluster software is installed on each node in the clustered system, and it monitors
the cluster system and ensures that it is operating properly.
 If one of the clustered system's nodes fails, the other nodes take over its storage and
resources and try to restart.
 Cluster components are generally linked via fast area networks, and each node
executing its instance of an operating system.
 In most cases, all nodes share the same hardware and operating system, although
different hardware or different operating systems could be used in other cases.
 The primary purpose of using a cluster system is to assist with weather forecasting,
scientific computing, and supercomputing systems.

There are two clusters available to make a more efficient cluster. These are as follows:

1. Software Cluster

2. Hardware Cluster
Software Cluster

The Software Clusters allows all the systems to work together.

Hardware Cluster

It helps to allow high-performance disk sharing among systems.

Types of Clustered Operating System

There are mainly three types of the clustered operating system:

1. Asymmetric Clustering System

2. Symmetric Clustering System


3. Parallel Cluster System

Asymmetric Clustering System

In the asymmetric cluster system, one node out of all the nodes is in hot standby mode, while
the remaining nodes run the essential applications. The hot standby node is a fail-safe
component of the cluster system: it does nothing but monitor the active servers, and if one of
them comes to a halt, the hot standby node takes over its role.

Symmetric Clustering System

Multiple nodes help run all applications in this system, and it monitors all nodes
simultaneously. Because it uses all hardware resources, this cluster system is more reliable than
asymmetric cluster systems.

Parallel Cluster System

A parallel cluster system enables several users to access the same data on a shared storage
system. Such access is made possible by specialized software versions and supporting
applications.

Classification of clusters
Computer clusters are managed to support various purposes, from general-purpose business
requirements like web-service support to computation-intensive scientific calculations. There
are various classifications of clusters. Some of them are as follows:

1. Fail Over Clusters

The process of moving applications and data resources from a failed system to another system
in the cluster is referred to as fail-over. These clusters are used for mission-critical databases,
application servers, mail servers, and file servers.

2. Load Balancing Cluster

This type of cluster provides load-balancing abilities amongst all available computer systems.
All nodes in this type of cluster can share their computing workload with other nodes, resulting
in better overall performance. For example, a web-based cluster can allot different web queries
to different nodes, which helps to improve system speed. For distributing incoming requests, a
few cluster systems use the round-robin method; a brief sketch follows.
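
The round-robin idea mentioned above can be sketched in C as follows (illustrative only, not a
real load balancer; the node and request counts are assumed values).

    #include <stdio.h>

    #define NODES     3
    #define REQUESTS 10

    int main(void)
    {
        int next = 0;                             /* index of the next node to use */

        for (int req = 1; req <= REQUESTS; req++) {
            printf("request %2d -> node %d\n", req, next);
            next = (next + 1) % NODES;            /* advance the round-robin pointer */
        }
        return 0;
    }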

3. High Availability Clusters

These are also referred to as "HA clusters". They provide a high probability that all resources
will be available. If a failure occurs, such as a system failure or the loss of a disk volume, the
queries in the process are lost. If a lost query is retried, it will be handled by a different cluster
computer. It is widely used in news, email, FTP servers, and the web.

Advantages and Disadvantages of Cluster Operating System

Various advantages and disadvantages of the Clustered Operating System are as follows:

Advantages

Various advantages of Clustered Operating System are as follows:

1. High Availability
Although every node in a cluster is a standalone computer, the failure of a single node doesn't
mean a loss of service. A single node can be pulled down for maintenance while the remaining
nodes take over its load.

2. Cost Efficiency

When compared to highly reliable and larger storage mainframe computers, these types of
cluster computing systems are thought to be more cost-effective and cheaper. Furthermore,
most of these systems outperform mainframe computer systems in terms of performance.

3. Additional Scalability

A cluster is set up in such a way that more systems could be added to it in minor increments.
Clusters may add systems in a horizontal fashion. It means that additional systems could be
added to clusters to improve their performance, fault tolerance, and redundancy.

4. Fault Tolerance

Clustered systems are quite fault-tolerant, and the loss of a single node does not result in the
system's failure. They might also have one or more nodes in hot standby mode, which allows
them to replace failed nodes.

5. Performance

Clusters are commonly used to improve availability and performance over single computer
systems, while usually being much more cost-effective than a single computer system of
comparable speed or availability.

6. Processing Speed

The processing speed is also similar to mainframe systems and other types of supercomputers
on the market.

Disadvantages

Various disadvantages of the Clustered Operating System are as follows:

1. High Cost
One major disadvantage of this design is that it is not cost-effective. The cost is high, and a
cluster will be more expensive than a non-clustered server management design since it requires
good hardware and a careful design.

2. Required Resources

Clustering necessitates the use of additional servers and hardware, making monitoring and
maintenance difficult. As a result, infrastructure must be improved.

3. Maintenance

It is not easy to establish, monitor, and maintain this system.

REAL-TIME SYSTEMS

 Real-time operating systems (RTOS) are used in environments where a large


number of events, mostly external to the computer system, must be accepted and
processed in a short time or within certain deadlines.
 Such applications include industrial control, telephone switching equipment, flight
control, and real-time simulations.
 With an RTOS, the processing time is measured in tenths of seconds.
 This system is time-bound and has a fixed deadline.
 The processing in this type of system must occur within the specified constraints.
Otherwise, This will lead to system failure.
Examples of real-time operating systems are airline traffic control systems, Command
Control Systems, airline reservation systems, Heart pacemakers, Network Multimedia
Systems, robots, etc.
Real-time operating systems can be of the following types:

1. Hard Real-Time Operating System: These operating systems guarantee that
critical tasks are completed within a strict range of time.
For example, a robot is hired to weld a car body. If the robot welds too early or
too late, the car cannot be sold, so this is a hard real-time system that requires
the robot to complete the weld exactly on time. Other examples include
scientific experiments, medical imaging systems, industrial control systems,
weapon systems, robots, air traffic control systems, etc.

2. Soft Real-Time Operating System: This operating system provides some
relaxation in the time limit.
For example: multimedia systems, digital audio systems, etc. Explicit,
programmer-defined, and controlled processes are encountered in real-time
systems. A separate process is charged with handling a single external event; the
process is activated upon the occurrence of the related event, signaled by an
interrupt.
Multitasking operation is accomplished by scheduling processes for execution
independently of each other. Each process is assigned a level of priority that
corresponds to the relative importance of the event that it services. The
processor is always allocated to the highest-priority process. This type of
scheduling, called priority-based preemptive scheduling, is used by real-time
systems (a brief sketch follows this list).
3. Firm Real-time Operating System: RTOS of this type have to follow deadlines
as well. In spite of its small impact, missing a deadline can have unintended
consequences, including a reduction in the quality of the product. Example:
Multimedia applications.
4. Deterministic Real-Time Operating System: Consistency is the main key in this
type of real-time operating system. It ensures that all tasks and processes
execute with predictable timing all the time, which makes it more suitable for
applications in which timing accuracy is very
important. Examples: INTEGRITY, PikeOS.
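
The priority-based preemptive scheduling mentioned in the soft real-time item can also be
requested from a general-purpose kernel. The sketch below assumes a Linux/POSIX system and
uses the real-time SCHED_FIFO policy; the priority value 50 is only an illustrative choice, and
running it normally requires root privileges.

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param param;
        param.sched_priority = 50;                /* higher value = higher priority */

        /* pid 0 means "the calling process". Under SCHED_FIFO the kernel always
         * runs the highest-priority runnable task and preempts lower ones. */
        if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
            perror("sched_setscheduler");         /* typically EPERM without root */
            return 1;
        }

        printf("Now running under priority-based preemptive scheduling\n");
        return 0;
    }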

Advantages:
The advantages of real-time operating systems are as follows-
1. Maximum Consumption: Maximum utilization of devices and the system, and
thus more output from all resources.

2. Task Shifting: The time needed for shifting from one task to another in these
systems is very small. For example, older systems take about 10 microseconds to
shift one task to another, while the latest systems take about 3 microseconds.

3. Focus On Application: Focus on running applications and less importance to


applications that are in the queue.

4. Real-Time Operating System In Embedded Systems: Since the size of the
programs is small, an RTOS can also be used in embedded systems, such as in
transport and others.

5. Error Free: These types of systems are error-free.

6. Memory Allocation: Memory allocation is best managed in these types of


systems.
Disadvantages:
The disadvantages of real-time operating systems are as follows-
1. Limited Tasks: Very few tasks run simultaneously, and the system concentrates
on only a few applications to avoid errors.

2. Heavy Use of System Resources: Sometimes the required system resources are
not readily available, and they are expensive as well.

3. Complex Algorithms: The algorithms are very complex and difficult for the
designer to write.

4. Device Drivers And Interrupt Signals: It needs specific device drivers and
interrupt signals to respond to interrupts as early as possible.

5. Thread Priority: It is not good to set thread priorities, as these systems are very
little prone to switching tasks.

6. Minimum Switching: RTOS performs minimal task switching.


Comparison of Regular and Real-Time operating systems:

Regular OS                  Real-Time OS (RTOS)
----------------------      --------------------------------
Complex                     Simple
Best effort                 Guaranteed response
Fairness                    Strict timing constraints
Average bandwidth           Minimum and maximum limits
Unknown components          Components are known
Unpredictable behavior      Predictable behavior
Plug and play               RTOS is upgradeable


HANDHELD SYSTEMS

Handheld operating systems are available in all handheld devices such as smartphones and
tablets. Such a device is sometimes also known as a Personal Digital Assistant (PDA). The
popular handheld operating systems in today's world are Android and iOS. These operating
systems need a high-performance processor and are also embedded with various types of sensors.

Some points related to Handheld operating systems are as follows:

1. Since the development of handheld computers in the 1990s, the demand for
software to operate and run on these devices has increased.
2. Three major competitors have emerged in the handheld PC world with three
different operating systems for these handheld PCs.
3. Out of the three companies, the first was the Palm Corporation with their PalmOS.
4. Microsoft also released what was originally called Windows CE. Microsoft’s
recently released operating system for the handheld PC comes under the name of
Pocket PC.
5. More recently, some companies producing handheld PCs have also started
offering a handheld version of the Linux operating system on their machines.
Features of Handheld Operating System:
1. Its work is to provide real-time operations.
2. There is direct usage of interrupts.
3. Input/Output device flexibility.
4. Configurability.
Types of Handheld Operating Systems:
Types of Handheld Operating Systems are as follows:

1. Palm OS
2. Symbian OS
3. Linux OS
4. Windows
5. Android
Palm OS:

 Since the Palm Pilot was introduced in 1996, the Palm OS platform has provided
various mobile devices with essential business tools, as well as the capability that
they can access the internet via a wireless connection.
 These devices have mainly concentrated on providing basic personal-information-
management applications. The latest Palm products have progressed a lot, packing
in more storage, wireless internet, etc.

Symbian OS:

 It has been the most widely-used smartphone operating system because of its
ARM architecture before it was discontinued in 2014. It was developed by
Symbian Ltd.
 This operating system consists of two subsystems: the first is the microkernel-based
operating system with its associated libraries, and the second is the interface of the
operating system with which a user can interact.
 Since this operating system consumes very less power, it was developed for
smartphones and handheld devices.
 It has good connectivity as well as stability.
 It can run applications that are written in Python, Ruby, .NET, etc.

Linux OS:

 Linux OS is an open-source, cross-platform operating system project that was
developed based on UNIX. It was originally developed by Linus Torvalds.
 It is system software that allows applications and users to perform tasks on the PC.
 Linux is free and can be easily downloaded from the internet and it is considered
that it has the best community support.
 Linux is portable which means it can be installed on different types of devices like
mobile, computers, and tablets.
 It is a multi-user operating system.
 Linux interpreter program which is called BASH is used to execute commands.
 It provides user security using authentication features.

Windows OS:

 Windows is an operating system developed by Microsoft. Its interface which is


called Graphical User Interface eliminates the need to memorize commands for
the command line by using a mouse to navigate through menus, dialog boxes, and
buttons.
 It is named Windows because its programs are displayed in the form of on-screen
windows. It has been designed for both beginners and professionals.
 It comes preloaded with many tools which help the users to complete all types of
tasks on their computer, mobiles, etc.
 It has a large user base so there is a much larger selection of available software
programs.
 One great feature of Windows is that it is backward compatible which means that
its old programs can run on newer versions as well.

Android OS:

 It is a Google Linux-based operating system that is mainly designed for
touchscreen devices such as phones, tablets, etc.
 Three architectures, ARM, Intel, and MIPS, are used by the hardware for
supporting Android. These devices let users interact intuitively, with finger
movements that mirror common motions such as swiping, tapping, etc.
 Android operating system can be used by anyone because it is an open-source
operating system and it is also free.
 It offers 2D and 3D graphics, GSM connectivity, etc.
 There is a huge list of applications for users since Play Store offers over one
million apps.
 Professionals who want to develop applications for the Android OS can download
the Android Development Kit. By downloading it they can easily develop apps
for android.
Advantages of Handheld Operating System:
Some advantages of a Handheld Operating System are as follows:
1. Less Cost.
2. Less weight and size.
3. Less heat generation.
4. More reliability.
Disadvantages of Handheld Operating System:
Some disadvantages of Handheld Operating Systems are as follows:

1. Less speed.
2. Small size.
3. Limited input/output and memory (less memory is available).
How Handheld operating systems are different from Desktop operating systems?
 Since handheld operating systems are mainly designed to run on machines with
lower-speed resources and less memory, they are designed to use less memory
and require fewer resources.
 They are also designed to work with different types of hardware than standard
desktop operating systems. This is because the power requirements of standard
CPUs far exceed what handheld devices can supply.
 Handheld devices cannot dissipate the large amounts of heat generated by
desktop CPUs. To deal with this problem, companies like Intel and Motorola
have designed smaller CPUs with lower power requirements and lower heat
generation. Many handheld devices rely entirely on flash memory cards for their
internal storage because large hard drives do not fit into handheld devices.
FEATURE MIGRATION
A process is essentially a program in execution. The execution of a process should advance
in a sequential fashion. A process is defined as an entity that represents the basic unit
of work to be executed in the system.

Process migration is a particular form of process management by which processes are moved
from one computing environment to another.

There are two types of process migration:

 Non-preemptive migration: if a process is moved before it begins execution on its
source node, the migration is non-preemptive.
 Preemptive migration: if a process is moved while it is executing, the migration is
preemptive. Preemptive process migration is more expensive than non-preemptive
migration because the process environment must accompany the process to its new node.
Why use Process Migration?
The reasons to use process migration are:

 Dynamic load balancing: it permits processes to take advantage of less loaded nodes
by migrating from overloaded ones.
 Availability: processes running on faulty nodes can be moved to healthy nodes.
 System administration: processes that reside on a node undergoing system
maintenance can be moved to other nodes.
 Locality of data: processes can exploit the locality of data or other special
capabilities of a particular node.
 Mobility: processes can be migrated from a handheld device or computer to a
server-based computer before the device gets disconnected from the network.
 Fault recovery: the mechanism to stop, transport and resume a process is valuable
for fault recovery in applications that are based on transactions.
What are the steps involved in Process Migration?
The steps involved in migrating a process are:

 The process is selected for migration.
 The destination node for the process is chosen.
 The selected process is moved to the destination node.

The subtasks involved in migrating a process are:

 The process is halted on its source node and restarted on its destination node.
 The address space of the process is transferred from its source node to its
destination node.
 Message forwarding is arranged for the migrated process.
 The communication between cooperating processes that have been separated by the
migration is managed.
Methods of Process Migration

The methods of Process Migration are:

1. Homogeneous Process Migration: Homogeneous process migration means relocating a
process in a homogeneous environment, where all systems have the same operating system
and architecture. There are two strategies for performing process migration:
i) user-level process migration and ii) kernel-level process migration.
 User-level process migration: in this approach, process migration is handled
without modifying the operating system kernel. User-level migration implementations
are simpler to build and maintain but usually have two problems: i) the kernel state
is not accessible to them, and ii) they must cross the kernel boundary using kernel
requests, which are slow and expensive.
 Kernel-level process migration: in this approach, process migration is done by
modifying the operating system kernel. As a result, process migration becomes
simpler and more efficient. This facility allows the migration to be done faster
and lets more types of processes be relocated.
Homogeneous Process Migration Algorithms:

There are five fundamental calculations for homogeneous process migration:

 Total Copy Algorithm
 Pre-Copy Algorithm
 Demand Page Algorithm
 File Server Algorithm
 Freeze Free Algorithm
2. Heterogeneous Process Migration: Heterogeneous process migration is the relocation of
a process across different machine architectures and operating systems. It is clearly more
complex than the homogeneous case, since it must account for the differing machine and
operating system designs and characteristics, while still transferring the same data as
homogeneous process migration, including process state, address space, files, and
communication data. Heterogeneous process migration is particularly relevant in mobile
environments, where it is almost certain that the mobile unit and the base support station
will be different machine types. It is desirable to be able to migrate a process from the
mobile unit to the base station, and vice versa, during a computation; in most cases this
cannot be accomplished by homogeneous migration. There are four basic types of
heterogeneous migration:
 Passive object: The information is moved and should be translated
 Active object, move when inactive: The process is relocated at the point when it
isn’t executing. The code exists in the two areas, and just the information is moved
and translated.
 Active object, interpreted code: The process is executing through an interpreter
so just information and interpreter state need to be moved.
 Active object, native code: both code and data must be translated, because they
are compiled for a particular architecture.
COMPUTING ENVIRONMENTS
Computing environments refer to the technology infrastructure and software platforms that
are used to develop, test, deploy, and run software applications. There are several types of
computing environments, including:

1. Mainframe: A large and powerful computer system used for critical applications
and large-scale data processing.
2. Client-Server: A computing environment in which client devices access
resources and services from a central server.
3. Cloud Computing: A computing environment in which resources and services
are provided over the Internet and accessed through a web browser or client
software.
4. Mobile Computing: A computing environment in which users access
information and applications using handheld devices such as smartphones and
tablets.
5. Grid Computing: A computing environment in which resources and services are
shared across multiple computers to perform large-scale computations.
6. Embedded Systems: A computing environment in which software is integrated
into devices and products, often with limited processing power and memory.
Each type of computing environment has its own advantages and disadvantages, and the
choice of environment depends on the specific requirements of the software application and
the resources available.

In today's world of technology, where almost every task is performed with the help of
computers, computers have become a part of human life. Computing is the process of
completing a task by using computer technology, and it may involve computer hardware
and/or software. Computing uses some form of computer system to manage, process,
and communicate information. With this idea of computing in mind, let us now understand
computing environments.
Computing Environments: When a problem is solved by a computer, many devices, arranged
in different ways, work together to solve it. This constitutes a computing environment,
in which a number of computing devices are arranged in different ways to solve different
types of problems. In different computing environments, computer devices are arranged in
different ways and exchange information with each other to process and solve a problem.
A computing environment consists of many computers, other computational devices, software
and networks that support processing, the sharing of information and the solving of tasks.
Based on the organization of the devices and their communication, there exist multiple
types of computing environments.
Now let us look at the different types of computing environments.

Types of Computing Environments: The various types of computing environments are:

1. Personal Computing Environment : In a personal computing environment there
is a stand-alone machine. The complete program resides on the computer and is
executed there. The stand-alone machines that constitute a personal computing
environment are laptops, mobiles, printers, computer systems, scanners, etc.
that we use at our homes and offices.
2. Time-Sharing Computing Environment : In a time-sharing computing environment
multiple users share the system simultaneously. Different users (different
processes) are allotted different time slices, and the processor switches
rapidly among the users accordingly. For example, a student may listen to music
while coding something in an IDE. Windows 95 and later versions, Unix, iOS, and
Linux are examples of operating systems used in this time-sharing computing
environment.
3. Client-Server Computing Environment : In a client-server computing
environment two machines are involved, a client machine and a server machine;
sometimes the same machine serves as both client and server. In this computing
environment the client requests a resource or service and the server provides
that resource or service. A server can serve multiple clients at a time, and
communication mainly happens over a computer network.
4. Distributed Computing Environment : In a distributed computing environment
multiple nodes are connected together by a network but are physically
separated. A single task is performed by different functional units on
different nodes of the distributed system. Different programs of an application
run simultaneously on different nodes, and the nodes communicate over the
network to complete the task.
5. Grid Computing Environment : In a grid computing environment, multiple
computers at different locations work on a single problem. A set of computer
nodes running as a cluster jointly perform a given task by applying the
resources of multiple computers/nodes. It is a networked computing environment
where several scattered resources provide a running environment for a single
task.
6. Cloud Computing Environment : In a cloud computing environment, computer
system resources such as processing and storage are made available on demand.
Computing is not done on an individual machine; rather, it is performed in a
cloud of computers where all required resources are provided by a cloud vendor.
This environment primarily comprises three services, i.e., software-as-a-service
(SaaS), infrastructure-as-a-service (IaaS), and platform-as-a-service (PaaS).
7. Cluster Computing Environment : In a cluster computing environment the task
is performed by a cluster, which is a set of loosely or tightly connected
computers that work together. The cluster is viewed as a single system and
performs tasks in parallel, which is why it is similar to a parallel computing
environment. Cluster-aware applications are especially used in this environment.

Advantages of different computing environments:

1. Mainframe: High reliability, security, and scalability, making it suitable for
mission-critical applications.
2. Client-Server: Easy to deploy, manage and maintain, and provides a centralized
point of control.
3. Cloud Computing: Cost-effective and scalable, with easy access to a wide range
of resources and services.
4. Mobile Computing: Allows users to access information and applications from
anywhere, at any time.
5. Grid Computing: Provides a way to harness the power of multiple computers for
large-scale computations.
6. Embedded Systems: Enable the integration of software into devices and
products, making them smarter and more functional.

Disadvantages of different computing environments:

1. Mainframe: High cost and complexity, with a significant learning curve for
developers.
2. Client-Server: Dependence on network connectivity, and potential security risks
from centralized data storage.
3. Cloud Computing: Dependence on network connectivity, and potential security
and privacy concerns.
4. Mobile Computing: Limited processing power and memory compared to other
computing environments, and potential security risks.
5. Grid Computing: Complexity in setting up and managing the grid infrastructure.
6. Embedded Systems: Limited processing power and memory, and the need for
specialized skills for software development.
PROCESS SCHEDULING

Definition

The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory at a
time, and the loaded processes share the CPU using time multiplexing.
Categories of Scheduling

There are two categories of scheduling:

1. Non-preemptive: Here the resource cannot be taken from a process until the process
completes execution. The switching of resources occurs when the running process
terminates and moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of
time. During resource allocation, the process switches from the running state to the
ready state or from the waiting state to the ready state. This switching occurs because
the CPU may give priority to other processes and replace the currently running process
with a higher-priority one.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS
maintains a separate queue for each of the process states and PCBs of all processes in the
same execution state are placed in the same queue. When the state of a process is changed, its
PCB is unlinked from its current queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues −

 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.).
The OS scheduler determines how to move processes between the ready queue and the run
queue, which can have only one entry per processor core on the system.
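As an illustration of how a ready queue can be managed, the following C sketch simulates a
round-robin policy over a small array of PCBs. The PCB fields, the 3-tick time slice and the
burst times are illustrative assumptions, not taken from any particular operating system.

#include <stdio.h>

#define MAX_PROC   4
#define TIME_SLICE 3

struct pcb { int pid; int remaining; };   /* remaining CPU burst in ticks */

int main(void)
{
    struct pcb ready[MAX_PROC] = { {1, 5}, {2, 8}, {3, 2}, {4, 6} };
    int count = MAX_PROC;

    while (count > 0) {
        /* dequeue the process at the head of the ready queue */
        struct pcb p = ready[0];
        for (int i = 0; i < count - 1; i++)
            ready[i] = ready[i + 1];
        count--;

        int run = (p.remaining < TIME_SLICE) ? p.remaining : TIME_SLICE;
        p.remaining -= run;
        printf("PID %d ran for %d ticks, %d ticks left\n", p.pid, run, p.remaining);

        if (p.remaining > 0)
            ready[count++] = p;           /* time slice expired: back of the queue */
        /* otherwise the process has terminated and is not re-enqueued */
    }
    return 0;
}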

Two-State Process Model

Two-state process model refers to running and non-running states which are described below

1. Running: When a new process is created, it enters the system in the running state.

2. Not Running: Processes that are not running are kept in a queue, waiting for their turn
to execute. Each entry in the queue is a pointer to a particular process. The queue is
implemented using a linked list. The dispatcher works as follows: when a process is
interrupted, that process is transferred to the waiting queue; if the process has completed
or aborted, it is discarded. In either case, the dispatcher then selects a process from the
queue to execute.

Schedulers
Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run. Schedulers are of three types −

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads them into
memory for execution. Process loads into the memory for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.

On some systems, the long-term scheduler may not be available or minimal. Time-sharing
operating systems have no long term scheduler. When a process changes the state from new
to ready, then there is use of long-term scheduler.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It carries out the change of a process from the
ready state to the running state. The CPU scheduler selects a process among the processes
that are ready to execute and allocates the CPU to one of them.

Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler


Medium-term scheduling is a part of swapping. It removes the processes from the memory. It
reduces the degree of multiprogramming. The medium-term scheduler is in-charge of
handling the swapped out-processes.

A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to secondary
storage. This is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.

Comparison among Scheduler

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler;
the medium-term scheduler is a process-swapping scheduler.

2. The long-term scheduler is slower than the short-term scheduler; the short-term scheduler
is the fastest of the three; the medium-term scheduler's speed lies between the other two.

3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler
provides less control over the degree of multiprogramming; the medium-term scheduler reduces
the degree of multiprogramming.

4. The long-term scheduler is almost absent or minimal in time-sharing systems; the
short-term scheduler is also minimal in time-sharing systems; the medium-term scheduler is a
part of time-sharing systems.

5. The long-term scheduler selects processes from the pool and loads them into memory for
execution; the short-term scheduler selects those processes which are ready to execute; the
medium-term scheduler can re-introduce a process into memory so that its execution can be
continued.

Context Switching
Context switching is the mechanism of storing and restoring the state (context) of a CPU in
the Process Control Block so that process execution can be resumed from the same point at a
later time. Using this technique, a context switcher enables multiple processes to share a
single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to execute another, the
state from the current running process is stored into the process control block. After this, the
state for the process to run next is loaded from its own PCB and used to set the PC, registers,
etc. At that point, the second process can start executing.
Context switches are computationally intensive, since register and memory state must be
saved and restored. To reduce the time spent on context switching, some hardware systems
employ two or more sets of processor registers. When the process is switched, the following
information is stored for later use; a small illustrative sketch follows the list.

 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information
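The following C sketch shows the kind of per-process record a context switch might save and
restore, based on the list above. The structure layout and field names are purely
illustrative assumptions and do not correspond to any real kernel's PCB format.

#include <stdint.h>

/* Saved CPU context: what must be reloaded to resume the process later. */
struct saved_context {
    uint64_t program_counter;      /* where to resume execution            */
    uint64_t general_regs[16];     /* currently used CPU registers         */
    uint64_t base_register;        /* base and limit register values       */
    uint64_t limit_register;
};

/* Illustrative Process Control Block. */
struct pcb {
    int                  pid;
    int                  state;          /* scheduling information / changed state */
    int                  priority;       /* scheduling information                 */
    struct saved_context context;        /* restored on the next dispatch          */
    int                  open_fds[16];   /* I/O state information                  */
    long                 cpu_time_used;  /* accounting information                 */
};

/* Conceptually, on a switch the kernel saves the CPU registers into
 * current->context, selects the next process from the ready queue, and then
 * loads next->context back into the CPU before jumping to its program counter. */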
COOPERATING PROCESSES

Before learning about Cooperating processes in operating systems let's learn a bit
about Operating Systems and Processes.

There are two types of software first is the application software and the other is the system
software.

Application software performs tasks for the user, while system software operates and
controls the computer system and works as an interface to run the application software.

An operating system is system software that manages the resources of a computer system,
both hardware and software. It works as an interface between the user and the hardware so
that the user can interact with the hardware. It provides a convenient environment in which
a user can execute programs. An operating system is a resource manager, and it hides the
internal complexity of the hardware so that users can perform specific tasks without
difficulty.

Some widely used operating systems are:

 Single-process operating system
 Batch-processing operating system
 Multiprogramming operating system
 Multitasking operating system
 Multi-processing operating system
 Distributed operating system
 Real-Time operating system

Now, let's talk about why the process is the most important part of an operating system:

A program under execution is known as a process. Every task in an operating system is
converted into a process. A process passes through several states from its creation to its
termination. After a new process is created, it is admitted into the ready queue by the job
scheduler (long-term scheduler), where every process waits, ready for execution. The
processes in the ready queue are then admitted into the running state by the CPU scheduler
(short-term scheduler). After execution, the process terminates.

Before termination, there may be multiple processes being executed in a system. There are
two modes in which the processes can be executed:

1. Serial mode
2. Parallel mode

In serial mode, the processes are executed one after the other, which means the next process
cannot be executed until the previous process has terminated.

In parallel mode, several processes may be executing at the same time. Processes executing
in a system can be of two types: cooperating processes or independent processes.

Cooperating Process in the operating system is a process that gets affected by other
processes under execution or can affect any other process under execution. It shares data with
other processes in the system by directly sharing a logical space which is
both code and data or by sharing data through files or messages.

Whereas, an independent process in an operating system is one that does not affect or
impact any other process of the system. It does not share any data with other processes.

METHODS OF COOPERATING PROCESS IN OS

Cooperating processes in an OS require a communication method that allows the processes
to exchange data and information.

There are two methods by which the cooperating process in OS can communicate:

 Cooperation by Sharing
 Cooperation by Message Passing

Details about the methods are given below:

Cooperation by Sharing

The cooperating processes in an OS can communicate with each other using shared
resources, which include data, memory, variables, files, etc.

Processes can then exchange information by reading or writing data in the shared region.
A critical section can be used to provide data integrity and avoid data inconsistency.

To understand communication through a shared region more clearly, consider the following:

 Two processes, A and B, communicate with each other through a shared region of memory.
 Process A writes information into the shared region, and Process B then reads that
information from the shared memory; that is how communication between cooperating
processes takes place by sharing. A minimal code sketch of this pattern is given below.
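A minimal C sketch of cooperation by sharing, using POSIX shared memory between a parent
(standing in for Process A) and a child (Process B). The object name "/demo_shm" and the
message text are arbitrary examples, and error handling is omitted for brevity (on some
Linux systems, link with -lrt).

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shm";
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);   /* create the shared object */
    ftruncate(fd, 4096);                                /* give it a size           */
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {                        /* Process A: writes into the region */
        strcpy(region, "hello from process A");
        return 0;
    }

    wait(NULL);                               /* Process B: reads after A has written */
    printf("B read: %s\n", region);

    munmap(region, 4096);
    shm_unlink(name);
    return 0;
}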

Cooperation by Message Passing

The cooperating processes in an OS can also communicate with each other with the help of
message passing. The producer process sends a message and the consumer process receives it.

There is no concept of shared memory here; instead, the producer process first sends the
message to the kernel and then the kernel delivers that message to the consumer process.

A kernel is known as the heart and core of an operating system. The kernel interacts with the
hardware to execute the processes given by the user space. It works as a bridge between the
user space and hardware. Functions of the kernel include process management, file
management, memory management, and I/O management.

If a consumer process waits for a message from another process to execute a particular task
then this may cause a problem of deadlock and if the consumer process does not receive the
message then this may cause a problem of process starvation.

To understand cooperation by message passing more clearly, consider processes A and B
communicating with each other. Process A first sends the message to the kernel, and the
kernel determines that this message is meant for Process B.

The kernel then delivers the message to Process B, and that is how communication between
cooperating processes takes place by message passing.

Need of Cooperating Processes in OS

For example, one process may write to a file while another process reads from it. Therefore,
every process in the system can be affected by other processes.

The need for cooperating processes in OS can be divided into four types:

1. Information Sharing
2. Computation Speed
3. Convenience
4. Modularity

Let's learn about them in detail.

Information Sharing

As we know the cooperating process in OS shares data and information between other
processes. There may be a possibility that different processes are accessing the same file.
Processes can access the files concurrently which makes the execution of the process more
efficient and faster.

Computation Speed

When a task is divided into several subtasks and starts executing them parallelly, this
improves the computation speed of the execution and makes it faster. Computation speed can
be achieved if a system has multiple CPUs and input/output devices.

When the tasks are assigned into several subtasks they become several different processes
that need to communicate with each other. That's why we need cooperating processes in the
operating system.
Convenience

A user may be performing several tasks at the same time which leads to the running of
different processes concurrently. These processes need to cooperate so that every process can
run smoothly without interrupting each other.

Modularity

We want to divide a system of complex tasks into several different modules and later they
will be established together to achieve a goal. This will help in completing tasks with more
efficiency as well as speed.

Advantages of Cooperating Process in Operating System

Let's discuss several advantages of cooperating processes in the operating system:

 With help of data and information sharing, the processes can be executed with much
faster speed and efficiency as processes can access the same files concurrently.
 Modularity gives the advantage of breaking a complex task into several modules
which are later put together to achieve the goal of faster execution of processes.
 Cooperating processes provide convenience as different processes running at the same
time can cooperate without any interruption among them.
 The computation speed of the processes increases by dividing processes into different
subprocesses and executing them parallelly at the same time.

Disadvantages of Cooperating Process in Operating System

Let's discuss the disadvantages of cooperating processes in the operating system:

 During the communication method of the cooperation processes, there may be a


problem of deadlock if a consumer process waits for the message from the production
process and the message does not receive at the consuming end.
 There may be a condition of process starvation where the next process will have to
wait until the message is received by the previous consuming process.
 The cooperating processes in the operating system can damage the data which may
occur due to modularity.
 During information sharing, it may also share any sensitive data of the user with the
other process that the user might not want to share.

Example of Cooperating Process in Operating System

Let's take the example of the producer-consumer problem, which is also known as
the bounded buffer problem, to understand cooperating processes in more detail:

Producer:

The process which generates the message that a consumer may consume is known as
the producer.

Consumer:

The process which consumes the information generated by the producer.

A producer produces a piece of information and stores it in a buffer (the critical section),
and the consumer consumes that information.

For Example, A web server produces web pages that are consumed by the client. A compiler
produces an assembly code that is consumed by the assembler.

Buffer memory can be of two types:

 Unbounded buffer: a buffer that has no practical limit on its size. The producer can
always produce new items, but the consumer might have to wait for them.
 Bounded buffer: a buffer of fixed size. Here the consumer has to wait if the buffer
is empty, while the producer has to wait if the buffer is full. In the producer-consumer
problem below we use a bounded buffer.

The producer and consumer processes execute simultaneously. A problem arises when the
consumer wants to consume an item while the buffer is empty (there is nothing to be
consumed), or when the producer produces an item while the buffer is already full.
Producer Process:

while (true)
{
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ;                              /* busy-wait while the buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Consumer Process:

while (true)
{
    while (counter == 0)
        ;                              /* busy-wait while the buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}

In the above producer code and consumer code, we have the following variables:

 counter: counts the number of items currently in the buffer; it is shared by the
producer and consumer processes.
 in: used by the producer to point to the next empty slot in the buffer.
 out: used by the consumer to point to the slot from which the next item is consumed.

Shared Resources:

In the process-consumer problem we have used two types of shared resources:

1. Buffer
2. Counter
If the producer and consumer processes do not execute in a properly synchronized way, the
process data may become inconsistent. In particular, the value of the counter variable may
become incorrect when the producer and consumer execute concurrently.

The producer and consumer processes share the following variables:

var n;
type item = .....;
var buffer : array [0..n-1] of item;
    in, out : 0..n-1;

The shared buffer is implemented as a circular array with two logical pointers, in and out.
By default, both variables (in and out) are initialized to 0. As discussed earlier, the out
variable points to the first filled location in the buffer, while the in variable points to
the first free location. The buffer is empty when in = out, and the buffer is full when
(in + 1) mod n = out.
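The busy-waiting code above is subject to exactly this race on counter. The following C
sketch shows one common way to remove it, using counting semaphores and a mutex with POSIX
threads. The buffer size, item values and thread structure are illustrative assumptions
(compile with -pthread on Linux).

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5
#define ITEMS 10

int buffer[BUFFER_SIZE];
int in = 0, out = 0;

sem_t empty_slots, full_slots;                      /* counting semaphores        */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* protects buffer, in, out   */

void *producer(void *arg)
{
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty_slots);          /* wait if the buffer is full  */
        pthread_mutex_lock(&lock);
        buffer[in] = i;                  /* produce item i              */
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&full_slots);           /* signal one filled slot      */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);           /* wait if the buffer is empty */
        pthread_mutex_lock(&lock);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&empty_slots);          /* signal one free slot        */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUFFER_SIZE);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}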

INTER PROCESS COMMUNICATION

What is Inter Process Communication?

In general, Inter Process Communication is a mechanism usually provided by the operating
system (OS). The main aim of this mechanism is to provide communication between several
processes. In short, inter-process communication allows one process to let another process
know that some event has occurred.

Let us now look at the general definition of inter-process communication, which will explain
the same thing that we have discussed above.

Definition

"Inter-process communication is used for exchanging useful information between numerous
threads in one or more processes (or programs)."

Role of Synchronization in Inter Process Communication

Synchronization is one of the essential parts of inter-process communication. Typically, it
is provided by the inter-process communication control mechanisms, but sometimes it can also
be handled by the communicating processes themselves.

The following methods are used to provide synchronization:

1. Mutual Exclusion
2. Semaphore

3. Barrier
4. Spinlock

Mutual Exclusion:-

It is generally required that only one process thread can enter the critical section at a time. This
also helps in synchronization and creates a stable state to avoid the race condition.

Semaphore:-

Semaphore is a type of variable that usually controls the access to the shared resources by
several processes. Semaphore is further divided into two types which are as follows:

1. Binary Semaphore

2. Counting Semaphore
Barrier:-

A barrier does not allow an individual process to proceed until all the processes have
reached it. Many parallel languages and collective routines impose barriers.

Spinlock:-

A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock
waits in a loop while repeatedly checking whether the lock is available. This is known as
busy waiting because, even though the process is active, it does not perform any useful
work.

Approaches to Interprocess Communication

We will now discuss some different approaches to inter-process communication which are as
follows:

These are a few different approaches for Inter- Process Communication:

1. Pipes
2. Shared Memory
3. Message Queue

4. Direct Communication

5. Indirect communication
6. Message Passing

7. FIFO

To understand them in more detail, we will discuss each of them individually.

Pipe:-

The pipe is a type of data channel that is unidirectional in nature, meaning that data in
this type of channel can move in only a single direction at a time. Still, two channels of
this type can be used so that data can be sent and received between two processes.
Typically, a pipe uses the standard methods for input and output. Pipes are used in all
types of POSIX systems and in different versions of the Windows operating systems as well.
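A minimal C sketch of a pipe used between a parent and a child process; the message text is
arbitrary and error handling is omitted for brevity.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                     /* fds[0] = read end, fds[1] = write end */
    pipe(fds);

    if (fork() == 0) {              /* child: writes into the pipe */
        close(fds[0]);
        const char *msg = "data through the pipe";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        return 0;
    }

    close(fds[1]);                  /* parent: reads from the pipe */
    char buf[64];
    read(fds[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    close(fds[0]);
    wait(NULL);
    return 0;
}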

Shared Memory:-

It can be referred to as a type of memory that can be used or accessed by multiple processes
simultaneously. It is primarily used so that the processes can communicate with each other.
Therefore the shared memory is used by almost all POSIX and Windows operating systems as
well.

Message Queue:-

In general, several processes are allowed to read and write messages to the message queue.
Messages are stored in the queue until their recipients retrieve them. In short, the message
queue is very helpful for inter-process communication and is used by all operating systems.

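A minimal C sketch of the message-queue idea using the POSIX mq_* interface; the queue name
"/demo_queue", the message and the attribute values are arbitrary assumptions, and error
handling is omitted (link with -lrt on Linux).

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);

    if (fork() == 0) {                          /* sender process */
        mq_send(q, "report ready", strlen("report ready") + 1, 0);
        return 0;
    }

    char buf[64];
    mq_receive(q, buf, sizeof(buf), NULL);      /* message stays queued until it is read */
    printf("receiver got: %s\n", buf);

    wait(NULL);
    mq_close(q);
    mq_unlink("/demo_queue");
    return 0;
}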
Message Passing:-

It is a mechanism that allows processes to synchronize and communicate with each other.
Using message passing, the processes can communicate with each other without resorting to
shared variables.

Usually, the inter-process communication mechanism provides two operations:

o send (message)

o receive (message)

Note: The size of the message can be fixed or variable.

Direct Communication:-

In this type of communication process, usually, a link is created or established between two
communicating processes. However, in every pair of communicating processes, only one link
can exist.

Indirect Communication

Indirect communication can only exist or be established when processes share a common
mailbox, and each pair of these processes shares multiple communication links. These shared
links can be unidirectional or bi-directional.
FIFO:-

It is a form of general communication between two unrelated processes. It can also be
considered full-duplex, which means that one process can communicate with another process
and vice versa.
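A minimal C sketch of a FIFO (named pipe); here a parent and child stand in for the two
unrelated processes that would normally open the same path independently, and the path
"/tmp/demo_fifo" is an arbitrary assumption. Error handling is omitted for brevity.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/demo_fifo";
    mkfifo(path, 0600);                      /* create the named pipe in the filesystem */

    if (fork() == 0) {                       /* writer process */
        int fd = open(path, O_WRONLY);       /* blocks until a reader opens the FIFO */
        write(fd, "hello via FIFO", 15);
        close(fd);
        return 0;
    }

    int fd = open(path, O_RDONLY);           /* reader process */
    char buf[32];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    buf[n > 0 ? n : 0] = '\0';
    printf("read from FIFO: %s\n", buf);
    close(fd);
    wait(NULL);
    unlink(path);
    return 0;
}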

Some other different approaches

o Socket:-

It acts as an endpoint for sending or receiving data in a network. Sockets work both for
data sent between processes on the same computer and for data sent between different
computers on the same network. Hence, they are used by several types of operating systems.

o File:-

A file is a type of data record or a document stored on the disk and can be acquired on demand
by the file server. Another most important thing is that several processes can access that file as
required or needed.

o Signal:-

As the name implies, signals are used for inter-process communication in a minimal way.
They are system messages sent from one process to another. Therefore, they are not usually
used to transfer data but to deliver remote commands between processes.

Why we need interprocess communication?

There are numerous reasons to use inter-process communication for sharing the data. Here are
some of the most important reasons that are given below:

o It helps to speed up modularity
o Computational speedup
o Privilege separation
o Convenience
o It helps processes to communicate with each other and synchronize their actions.

DEADLOCKS

Every process needs some resources to complete its execution. However, resources are
granted in a sequential order:

1. The process requests some resource.
2. The OS grants the resource if it is available; otherwise the process waits.
3. The process uses the resource and releases it on completion.

A deadlock is a situation where each process in a set waits for a resource that is assigned
to some other process. In this situation, none of the processes gets executed, since the
resource it needs is held by another process which is itself waiting for some other resource
to be released.

Let us assume that there are three processes P1, P2 and P3. There are three different resources
R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.

After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it
cannot complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also
stops its execution because it cannot continue without R3. P3 then demands R1, which is
being held by P1, so P3 also stops its execution.

In this scenario, a cycle is formed among the three processes. None of the processes is
progressing and they are all waiting. The computer becomes unresponsive since all the
processes are blocked.
Difference between Starvation and Deadlock

1. Deadlock is a situation where every process in the set is blocked and no process
proceeds. Starvation is a situation where low-priority processes get blocked while
high-priority processes proceed.

2. Deadlock is an infinite wait. Starvation is a long wait, but not an infinite one.

3. Every deadlock is a starvation. Every starvation need not be a deadlock.

4. In deadlock, the requested resource is blocked by another (also waiting) process. In
starvation, the requested resource is continuously used by higher-priority processes.

5. Deadlock happens when mutual exclusion, hold and wait, no preemption and circular wait
occur simultaneously. Starvation occurs due to uncontrolled priority and resource
management.

Necessary conditions for Deadlocks

1. Mutual Exclusion

A resource can only be shared in a mutually exclusive manner, which implies that two
processes cannot use the same resource at the same time.

2. Hold and Wait

A process waits for some resources while holding another resource at the same time.

3. No preemption

The process which once scheduled will be executed till the completion. No other
process can be scheduled by the scheduler meanwhile.

4. Circular Wait

All the processes must be waiting for the resources in a cyclic manner so that the last
process is waiting for the resource which is being held by the first process.

PREVENTION

Introduction to Deadlock

Consider a narrow single-lane road with two cars approaching from opposite directions,
blocking each other. The road is the resource, and crossing it represents a process. Since
only one car can use the lane at a time, neither car can move, leading to a deadlock.

A deadlock in an operating system is an indefinite blocking of processes competing for


resources. It occurs when two or more processes require resources that can't be shared
simultaneously.
There are four necessary conditions for deadlock. Deadlock happens only when all four
conditions occur simultaneously for unshareable single instance resources.

Deadlock Characteristics/Conditions

1. Mutual exclusion
2. Hold and wait
3. No preemption
4. Circular wait.

There are three ways to handle deadlock:

1. Deadlock prevention: The possibility of deadlock is excluded before requests are made,
by eliminating one of the necessary conditions for deadlock.

Example: Only allowing traffic from one direction, will exclude the possibility
of blocking the road.

2. Deadlock avoidance: The Operating system runs an algorithm on requests to check for
a safe state. Any request that may result in a deadlock is not granted.

Example: Checking each car and not allowing any car that can block the road. If
there is already traffic on the road, then a car coming from the opposite direction can
cause blockage.

3. Deadlock detection & recovery: The OS detects deadlock by regularly checking the
system state, and recovers to a safe state using recovery techniques.

Example: Unblocking the road by backing the cars out from one side.

Deadlock prevention and deadlock avoidance are carried out before a deadlock occurs.

In this article, we will learn about deadlock prevention in OS.

Deadlock prevention is a set of methods used to ensure that all requests are safe, by
eliminating at least one of the four necessary conditions for deadlock.

Deadlock Prevention in Operating System

A process is a set of instructions. When a process runs, it needs resources like CPU cycles,
Files, or Peripheral device access.

Some of the requests for resources can lead to deadlock.

Deadlock prevention is eliminating one of the necessary conditions of deadlock so that only
safe requests are made to OS and the possibility of deadlock is excluded before making
requests.

As now requests are made carefully, the operating system can grant all requests safely.

Here OS does not need to do any additional tasks as it does in deadlock avoidance by running
an algorithm on requests checking for the possibility of deadlock.

Deadlock Prevention Techniques

Deadlock prevention techniques refer to violating any one of the four necessary conditions.
We will see one by one how we can violate each of them to make safe requests and which is
the best approach to prevent deadlock.

Mutual Exclusion

Some resources are inherently unshareable, for example, Printers. For unshareable resources,
processes require exclusive control of the resources.

Mutual exclusion means that unshareable resources cannot be accessed simultaneously by
processes.

Shared resources do not cause deadlock but some resources can't be shared among processes,
leading to a deadlock.

For Example: read operation on a file can be done simultaneously by multiple processes, but
write operation cannot. Write operation requires sequential access, so, some processes have
to wait while another process is doing a write operation.

It is not possible to eliminate mutual exclusion, as some resources are inherently
non-shareable. For example, only one process can access data from a tape drive at a time.

For other resources like printers, we can use a technique called Spooling.

Spooling: It stands for Simultaneous Peripheral Operations online.

A Printer has associated memory which can be used as a spooler directory (memory that is
used to store files that are to be printed next).

In spooling, when multiple processes request the printer, their jobs ( instructions of the
processes that require printer access) are added to the queue in the spooler directory.

The printer is allocated to jobs on a first come first serve (FCFS) basis. In this way, the
process does not have to wait for the printer and it continues its work after adding its job to
the queue.


Challenges of Spooling:

 Spooling can only be used for resources with associated memory, like a printer.
 It may also cause a race condition. A race condition is a situation where two or more
processes access a resource concurrently and the final result cannot be definitively
determined.
For example: in printer spooling, if process A overwrites the job of process B in the
queue, then process B will never receive its output.
 It is not a foolproof method: once the queue becomes full, incoming processes go into
a waiting state.
For example: if the size of the queue is 10 blocks, then whenever there are more than 10
processes, the extra ones will go into a waiting state.

Hold and Wait

Hold and wait is a condition in which a process is holding one resource while simultaneously
waiting for another resource that is being held by another process. The
process cannot continue till it gets all the required resources.

Consider the following allocation:

 Resource 1 is allocated to Process 2
 Resource 2 is allocated to Process 1
 Resource 3 is allocated to Process 1
 Process 1 is waiting for Resource 1 and holding Resource 2 and Resource 3
 Process 2 is waiting for Resource 2 and holding Resource 1

There are two ways to eliminate hold and wait:-

1. By eliminating wait:

The process specifies the resources it requires in advance so that it does not have to
wait for allocation after execution starts.

For Example: Process1 declares in advance that it requires both Resource1 and
Resource2

2. By eliminating hold:

The process has to release all resources it is currently holding before making a new
request.

For Example: Process1 has to release Resource2 and Resource3 before making
request for Resource1

Challenges:

 As a process executes instructions one by one, it cannot know about all required
resources before execution.
 Releasing all the resources a process is currently holding is also problematic as they
may not be usable by other processes and are released unnecessarily.

For example: When Process1 releases both Resource2 and Resource3, Resource3 is
released unnecessarily as it is not required by Process2.

No preemption

Preemption is temporarily interrupting an executing task and later resuming it.

For example, if process P1 is using a resource and a high-priority process P2 requests for the
resource, process P1 is stopped and the resources are allocated to P2.
There are two ways to eliminate this condition by preemption:

1. If a process is holding some resources and waiting for other resources, then it should
release all previously held resources and put a new request for the required resources
again. The process can resume once it has all the required resources.

For example: If a process has resources R1, R2, and R3 and it is waiting for
resource R4, then it has to release R1, R2, and R3 and put a new request of all
resources again.

2. If a process P1 is waiting for some resource, and there is another process P2 that is
holding that resource but is itself blocked waiting for some other resource, then the
resource is taken from P2 and allocated to P1. In this way process P2 is preempted, and it
must request its required resources again in order to resume its task.

These approaches are possible only for resources whose states are easily saved and
restored, such as memory and registers.

Challenges:

 These approaches are problematic as the process might be actively using these
resources and halting the process by preempting can cause inconsistency.

For example: If a process is writing to a file and its access is revoked for the process
before it completely updates the file, the file will remain unusable and in an
inconsistent state.

 Putting requests for all resources again is inefficient and time-consuming.

Circular Wait

In circular wait, two or more processes wait for resources in a circular chain, each
holding a resource that the next one needs.

To eliminate circular wait, we assign a number (priority) to each resource. A process can
only request resources in increasing order of that number.

For example, suppose resource R1 has a lower number than resource R3. If process P3 already
holds R3, then a request by P3 for R1 is invalid and is not granted, because R1's number is
lower than that of a resource P3 already holds (and R1 may already be allocated to another
process, such as P1). A small code sketch of this ordering rule applied to locks is given
after the challenges below.

Challenges:

 It is difficult to assign a relative priority to resources, as one resource can be
prioritized differently by different processes.

For example: a media player may give a lower priority to a printer, while a document
processor might give it a higher priority. The priority of a resource differs according to
the situation and use case.
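The following C sketch illustrates the resource-ordering rule with two pthread mutexes
standing in for two numbered resources; because every thread acquires them in the same
increasing order, no thread ever holds the higher-numbered resource while waiting for the
lower one, so no circular wait can form. The names and thread bodies are illustrative
assumptions (compile with -pthread).

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;   /* number 1 (lower)  */
pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;   /* number 2 (higher) */

void *task(void *arg)
{
    /* Every thread requests resources in increasing order: R1 before R2. */
    pthread_mutex_lock(&resource1);
    pthread_mutex_lock(&resource2);
    printf("thread %ld holds R1 and R2\n", (long)arg);   /* critical work here */
    pthread_mutex_unlock(&resource2);
    pthread_mutex_unlock(&resource1);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, task, (void *)1L);
    pthread_create(&b, NULL, task, (void *)2L);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}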

Feasibility of Deadlock Prevention

 Mutual exclusion cannot be eliminated completely, because some resources are
inherently non-shareable.
 Hold and wait cannot be eliminated, as we cannot know all required resources in
advance to prevent waiting, and it is inefficient to prevent holding by releasing all
resources whenever a new one is requested.
 Preempting processes can cause inconsistency, and restarting a process by requesting
all its resources again is inefficient.
 Eliminating circular wait is the only practical way to prevent deadlock.

AVOIDANCE

Deadlock Avoidance is a process used by the Operating System to avoid Deadlock. Let's
first understand what is Deadlock in an Operating System is. Deadlock is a situation that
occurs in the Operating System when any Process enters a waiting state because another
waiting process is holding the demanded resource. Deadlock is a common problem in multi-
processing where several processes share a specific type of mutually exclusive resource
known as a soft lock or software.

But how can an Operating System avoid Deadlock?

The operating system avoids Deadlock by knowing the maximum resource requirements of
the processes initially, and also, the Operating System knows the free resources available at
that time. The operating system tries to allocate the resources according to the process
requirements and checks if the allocation can lead to a safe state or an unsafe state. If the
resource allocation leads to an unsafe state, then the Operating System does not proceed
further with the allocation sequence.
How does Deadlock Avoidance Work?

Let's understand the working of Deadlock Avoidance with the help of an intuitive example.

Process    Maximum Required    Currently Allocated
P1         9                   5
P2         5                   2
P3         3                   1

Let's consider three processes P1, P2, P3. Some more information on which the processes tell
the Operating System are :

 The P1 process needs a maximum of 9 resources (resources can be any software or
hardware resources like a tape drive, printer, etc.) to complete its execution. P1 is
currently allocated 5 resources and needs 4 more to complete its execution.
 P2 process needs a maximum of 5 resources and is currently allocated with 2
resources. So it needs 3 more resources to complete its execution.
 P3 process needs a maximum of 3 resources and is currently allocated with 1
resource. So it needs 2 more resources to complete its execution.
 The Operating System knows that only 2 resources out of the total available resources
are currently free.

But only 2 resources are free now. Can P1, P2, and P3 satisfy their requirements? Let's try
to find out.

As only 2 resources are free for now, only P3 can satisfy its need for 2 resources. If P3 takes
2 resources and completes its execution, then P3 can release its 3 (1+2) resources. Now the
three free resources that P3 released can satisfy the need of P2. Now, P2 after taking the three
free resources, can complete its execution and then release 5 (2+3) resources. Now five
resources are free. P1 can now take 4 out of the 5 free resources and complete its execution.
So, with 2 free resources available initially, all the processes were able to complete their
execution leading to a Safe State. The order of execution of the processes was <P3, P2, P1>.

What if initially there was only 1 free resource available? None of the processes would be
able to complete its execution. Thus leading to an unsafe state.
We use two words, safe and unsafe states. What are those states? Let's understand these
concepts.

Safe State and Unsafe State

Safe State - In the above example, we saw that the Operating System was able to satisfy the
needs of all three processes, P1, P2, and P3, with their resource requirements. So all the
processes were able to complete their execution in a certain order like P3->P2->P1.

So, If the Operating System is able to allocate or satisfy the maximum resource
requirements of all the processes in any order then the system is said to be in Safe State.

So safe state does not lead to Deadlock.

Unsafe State - If the Operating System is not able to prevent Processes from requesting
resources which can also lead to a Deadlock, then the System is said to be in an Unsafe State.

An unsafe state does not necessarily cause a deadlock; it may or may not lead to one.

So, a system can be in one of three states: safe, unsafe, or deadlocked. An unsafe state
does not always cause a deadlock; only some unsafe states eventually lead to deadlock.

Deadlock Avoidance Example


Let's take an example that has multiple resource requirements for every Process. Let there be
three Processes P1, P2, P3, and 4 resources R1, R2, R3, R4. The maximum resource
requirements of the Processes are shown in the below table.

Process R1 R2 R3 R4
P1 3 2 3 2
P2 2 3 1 4
P3 3 1 5 0

The numbers of resources currently allocated to the processes are:

Process R1 R2 R3 R4
P1 1 2 3 1
P2 2 1 0 2
P3 2 0 1 0

The total number of resources in the System are :

R1 R2 R3 R4
7 4 5 4

We can find the number of available instances of each resource R1, R2, R3, and R4 by
subtracting the currently allocated resources from the total resources.

Available Resources are :

R1 R2 R3 R4
2 1 1 1

Now, The need for the resources for the processes can be calculated by :

Need = Maximum Resources Requirement - Currently Allocated Resources.

The need for the Resources is shown below:


Process R1 R2 R3 R4
P1 2 1 0 1
P2 0 2 1 2
P3 1 1 4 0

The available free resources are <2,1,1,1> of resources of R1, R2, R3, and R4 respectively,
which can be used to satisfy only the requirements of process P1 only initially as process P2
requires 2 R2 resources which are not available. The same is the case with Process P3, which
requires 4 R3 resources which is not available initially.

The Steps for resources allotment is explained below:

1. Firstly, Process P1 will take the available resources and satisfy its resource need,
complete its execution and then release all its allocated resources. Process P1 is
initially allocated <1,2,3,1> resources of R1, R2, R3, and R4 respectively.
Process P1 needs <2,1,0,1> resources of R1, R2, R3 and R4 respectively to complete
its execution. So, process P1 takes the available free resources <2,1,1,1> resources
of R1, R2, R3, R4 respectively and can complete its execution and then release its
current allocated resources and also the free resources it used to complete its
execution. Thus P1 releases <1+2,2+1,3+1,1+1> = <3,3,4,2> resources of R1, R2, R3,
and R4 respectively.
2. After step 1 now, available resources are now <3,3,4,2>, which can satisfy the need of
Process P2 as well as process P3. After process P2 uses the available Resources and
completes its execution, the available resources are now <5,4,4,4>.
3. Now, the available resources are <5,4,4,4>, and the only Process left for execution is
Process P3, which requires <1,1,4,0> resources each of R1, R2, R3, and R4. So it can
easily use the available resources and complete its execution. After P3 is executed, the
resources available are <7,4,5,4>, which is equal to the maximum resources or total
resources available in the System.

So, the process execution sequence in the above example was <P1, P2, P3>. But it could
also have been <P1, P3, P2> if process P3 would have been executed before process P2,
which was possible as there were sufficient resources available to satisfy the need of both
Process P2 and P3 after step 1 above.
Deadlock Avoidance Solution

Deadlock Avoidance can be solved by two different algorithms:

 Resource allocation Graph


 Banker's Algorithm

Both approaches are described below.

Resource Allocation Graph

Resource Allocation Graph (RAG) is used to represent the state of the System in the form of a graph. The graph contains every process, the resources allocated to it, and the resources it is requesting. When the number of processes is small, we can identify a deadlock in the System simply by inspecting the graph, which cannot be done as easily with the tables used in Banker's algorithm.

A Resource Allocation Graph has a process vertex, represented by a circle, and a resource vertex, represented by a box. Each instance of a resource is represented by a dot inside the box, so a resource may have a single instance or multiple instances.
Banker's Algorithm

Banker's algorithm formalizes what we did by hand in the Deadlock Avoidance example above. The algorithm determines in advance whether the System will remain in a safe state by simulating the allocation of resources to the processes according to their maximum requirements and the currently available resources. It performs this safe-state ("s-state") check before actually allocating the resources to the Processes.

Banker's Algorithm is particularly useful when there are many Processes and many Resources, where inspecting a Resource Allocation Graph becomes impractical.
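The safety check at the heart of Banker's Algorithm can be sketched in a few lines of code. The C++ fragment below is a minimal, illustrative sketch (the matrix values are taken from the worked example above; the variable and function names are our own, not part of any standard API): it repeatedly looks for a process whose remaining need fits within the available resources, pretends that process runs to completion, and reclaims its allocation.

// Minimal sketch of the Banker's safety check; the matrices mirror the
// worked example above, and all names here are illustrative.
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    const int P = 3, R = 4;
    // Need = Maximum - Allocated (taken from the tables above)
    int need[P][R]  = { {2, 1, 0, 1}, {0, 2, 1, 2}, {1, 1, 4, 0} };
    int alloc[P][R] = { {1, 2, 3, 1}, {2, 1, 0, 2}, {2, 0, 1, 0} };
    int avail[R]    = { 2, 1, 1, 1 };

    vector<bool> finished(P, false);
    vector<int> safeSeq;

    // Repeatedly look for a process whose need can be met by what is available.
    for (int round = 0; round < P; round++) {
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool canRun = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { canRun = false; break; }
            if (canRun) {
                // Pretend the process runs to completion and releases everything.
                for (int j = 0; j < R; j++) avail[j] += alloc[i][j];
                finished[i] = true;
                safeSeq.push_back(i + 1);
            }
        }
    }

    if ((int)safeSeq.size() == P) {
        cout << "System is in a safe state. Safe sequence: ";
        for (int p : safeSeq) cout << "P" << p << " ";
        cout << "\n";
    } else {
        cout << "System is in an unsafe state.\n";
    }
    return 0;
}

Run on the example data, this sketch reports the safe sequence <P1, P2, P3>, matching the hand computation above.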

DETECTION

Deadlock Detection and Recovery

In this approach, the OS applies no mechanism to avoid or prevent deadlocks; it accepts that a deadlock may occur. Instead, the OS periodically checks the system for a deadlock, and if it finds one, it recovers the system using some recovery technique.
The first task of the OS is therefore to detect deadlocks, which it can do with the help of the Resource Allocation Graph.

For single-instance resource types, a cycle in the graph means there is definitely a deadlock. For multiple-instance resource types, detecting a cycle is not sufficient: we must apply a safety-style detection algorithm by converting the resource allocation graph into an allocation matrix and a request matrix.
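For the single-instance case, a minimal sketch of cycle detection on a wait-for graph is shown below. The graph, its adjacency-list representation, and the function names are illustrative assumptions: a depth-first search that finds a back edge has found a cycle, and hence a deadlock.

// Sketch: cycle detection in a wait-for graph using DFS.
// (Illustrative assumption: an edge i -> j means process i is waiting
// for a resource currently held by process j.)
#include <iostream>
#include <vector>
using namespace std;

bool dfs(int u, const vector<vector<int>>& g, vector<int>& state)
{
    state[u] = 1;                       // 1 = on the current DFS path
    for (int v : g[u]) {
        if (state[v] == 1) return true; // back edge => cycle => deadlock
        if (state[v] == 0 && dfs(v, g, state)) return true;
    }
    state[u] = 2;                       // 2 = fully explored
    return false;
}

int main()
{
    int n = 3;
    vector<vector<int>> waitFor(n);
    waitFor[0] = {1};   // P0 waits for P1
    waitFor[1] = {2};   // P1 waits for P2
    waitFor[2] = {0};   // P2 waits for P0 -> cycle, hence deadlock

    vector<int> state(n, 0);
    bool deadlock = false;
    for (int i = 0; i < n && !deadlock; i++)
        if (state[i] == 0) deadlock = dfs(i, waitFor, state);

    cout << (deadlock ? "Deadlock detected\n" : "No deadlock\n");
    return 0;
}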

To recover the system from a deadlock, the OS acts on either resources or processes.

For Resource

Preempt the resource

We can preempt (snatch) a resource from the process that currently owns it and give it to another process, with the expectation that the second process will complete its execution and release the resource soon. Choosing which resource to preempt, however, is difficult.

Rollback to a safe state

The system passes through various states before it reaches the deadlock state. The operating system can roll the system back to the previous safe state; for this purpose, the OS needs to implement checkpointing at every state. The moment a deadlock is detected, all allocations are rolled back to reach the previous safe state.

For Process

Kill a process

Killing a process can solve the problem, but the harder question is deciding which process to kill. Generally, the operating system kills the process that has done the least amount of work so far.

Kill all processes

This is not an advisable approach, but it can be used if the problem becomes very serious. Killing all the processes leads to inefficiency because every process must then execute again from the beginning.

5 MARKS

1. What is an Operating System?
2. Give some benefits of Mainframe Systems.
3. What are the basics of Desktop Systems?
4. What are Clustered Systems?
5. Write about Real-Time Systems.


10 MARKS

1. What do you mean by an operating system? What are its basic functions?
2. What is a Deadlock? Explain it.
3. What is Inter Process Communication?
4. Explain the concept of IPC and its importance in systems programming.
5. Explain about Handheld Systems.

MCQ

1.Handheld systems include ?


A. PFAs
B. PDAs
C. PZAs
D. PUAs
Ans : B
Explanation: Handheld systems include Personal Digital Assistants(PDAs)
2. Which of the following is an example of PDAs?
A. Palm-Pilots
B. Cellular Telephones
C. Both A and B
D. None of the above
Ans : C
Explanation: Handheld systems include Personal Digital Assistants(PDAs), such as Palm-
Pilots or Cellular Telephones with connectivity to a network such as the Internet.
3. Many handheld devices have between ___________ of memory
A. 256 KB and 8 MB
B. 512 KB and 2 MB
C. 256 KB and 4 MB
D. 512 KB and 8 MB
Ans : D
Explanation: Many handheld devices have between 512 KB and 8 MB of memory.
4. Handheld devices do not use virtual memory techniques.
A. TRUE
B. FALSE
C. Can be true or false
D. Can not say
Ans : A
Explanation: True, Currently, many handheld devices do not use virtual memory techniques,
thus forcing program developers to work within the confines of limited physical memory
5. Faster processors require ________ power
A. very less
B. less
C. more
D. very more
Ans : C
Explanation: Faster processors require more power.
6. To include a faster processor in a handheld device would require a ________ battery.
A. very small
B. small
C. medium
D. larger
Ans : D
Explanation: To include a faster processor in a handheld device would require a larger battery
that would have to be replaced more frequently.
7. Some handheld computers contains features of tiny built-in keyboards or microphone that
allows
A. text input
B. data input
C. print input
D. voice input
Ans : D
Explanation: Some handheld computers contains features of tiny built-in keyboards or
microphone that allows voice input.
8. Popular type of handheld computer is
A. Smart phone
B. Laptops
C. Personal digital assistant
D. TVs
Ans : C
Explanation: Popular type of handheld computer is Personal digital assistant.
9. A mobile device that functions as a personal information manager is
A. LPDs
B. CRTs
C. UCDs
D. PDAs
Ans : D
Explanation: A mobile device that functions as a personal information manager is PDAs
10. Some handheld devices may use wireless technology such as BlueTooth, allowing remote
access to e-mail and web browsing.
A. Yes
B. No
C. Can be yes or no
D. Can not say
Ans : A
Explanation: Yes, Some handheld devices may use wireless technology such as BlueTooth,
allowing remote access to e-mail and web browsing.
11. What is true about OS?
A. An operating system (OS) is a collection of software.
B. The operating system is a vital component of the system software in a computer system.
C. An Operating System (OS) is an interface between a computer user and computer
hardware.
D. All of the above
Ans : D
Explanation: An operating system is a program that acts as an interface between the user and
the computer hardware and controls the execution of all kinds of programs.
12. Which of the following is not a type of operating system?
A. OS/400
B. OS/200
C. VMS
D. z/OS
Ans : B
Explanation: OS/200 is not a type of operating system.
13. Which of the following is not an important functions of an operating System?
A. Memory Management
B. File Management
C. Virus Protection
D. Processor Management
Ans : C
Explanation: Virus Protection is not an important functions of an operating System.
14. In OS, Memory management refers to management of?
A. Primary Memory
B. Main Memory
C. Secondary Memory
D. Both A and B
Ans : D
Explanation: Memory management refers to management of Primary Memory or Main
Memory.
15. In multiprogramming environment, the OS decides which process gets the processor
when and for how much time. This function is called _____________.
A. process scheduling
B. process rescheduling
C. traffic controller
D. Processor Management
Ans : A
Explanation: In multiprogramming environment, the OS decides which process gets the
processor when and for how much time. This function is called process scheduling
16. Keeps tracks of processor and status of process. The program responsible for this task is
known as?
A. track manager
B. processor controller
C. traffic manager
D. traffic controller
Ans : D
Explanation: Keeps tracks of processor and status of process. The program responsible for
this task is known as traffic controller.
17. What does I/O controller do?
A. Keeps tracks of primary memory
B. Keeps tracks of all devices
C. Keeps tracks of processes
D. All of the above
Ans : B
Explanation: Keeps tracks of all devices. Program responsible for this task is known as the
I/O controller.
18. What does file system do?
A. Keeps track of information
B. Keeps track of location
C. Keeps track of information status
D. All of the above
Ans : D
Explanation: Keeps track of information, location, uses, status etc. The collective facilities
are often known as file system.
19. Which OS is mostly used?
A. IOS
B. Linux
C. Windows
D. AIX
Ans : C
Explanation: Windows is the most popular operating system for desktop and laptop
computers.
20. The first operating system created by Microsoft was?
A. Windows
B. MS-DOS
C. Seattle
D. AIX
Ans : B
Explanation: The first operating system created by Microsoft was not called Windows, it was
called MS-DOS and it was built in 1981.
21.The OS maintains all PCBs in?
A. Process Scheduling Queues
B. Job queue
C. Ready queue
D. Device queues
Ans : A
Explanation: The OS maintains all PCBs in Process Scheduling Queues
22. The processes which are blocked due to unavailability of an I/O device constitute this
queue.
A. Process Scheduling Queues
B. Job queue
C. Ready queue
D. Device queues
Ans : D
Explanation: Device queues : The processes which are blocked due to unavailability of an I/O
device constitute this queue.
23. Two-state process model refers to?
A. running states
B. non-running states
C. Both A and B
D. None of the above
Ans : C
Explanation: Two-state process model refers to running and non-running states
24. Which is not a type of Schedulers?
A. Long-Term Scheduler
B. Short-Term Scheduler
C. Medium-Term Scheduler
D. None of the above
Ans : D
Explanation: All of the above are types of Schedulers. Schedulers are of three types: Long-Term Scheduler, Short-Term Scheduler and Medium-Term Scheduler.
25. Which scheduler is also called a job scheduler?
A. Long-Term Scheduler
B. Short-Term Scheduler
C. Medium-Term Scheduler
D. All of the above
Ans : A
Explanation: Long Term Scheduler is also called a job scheduler.
26. When the suspended process is moved to the secondary storage. This process is called?
A. process mix.
B. swapping
C. Swap-In
D. Swap-Out
Ans : B
Explanation: To remove the process from memory and make space for other processes, the
suspended process is moved to the secondary storage. This process is called swapping
27. Which scheduler Speed is fastest?
A. Long-Term Scheduler
B. Short-Term Scheduler
C. Medium-Term Scheduler
D. Swapping
Ans : B
Explanation: The Short-Term Scheduler is the fastest of the three.
28. Which Schedular is a part of Time sharing systems?
A. Long-Term Scheduler
B. Short-Term Scheduler
C. Medium-Term Scheduler
D. Swapping
Ans : C
Explanation: The Medium-Term Scheduler is a part of time-sharing systems.
29. A_________ is the mechanism to store and restore the state
A. PCB
B. Program Counter
C. Scheduling information
D. context switch
Ans : D
Explanation: A context switch is the mechanism to store and restore the state or context of a
CPU in Process Control block so that a process execution can be resumed from the same
point at a later time.
30. Which of the following information is stored when the process is switched?
A. I/O State information
B. Accounting information
C. Base and limit register value
D. All of the above
Ans : D
Explanation: When the process is switched, the following information is stored for later use :
Program Counter, Scheduling information, Base and limit register value, Currently used
register, Changed State, I/O State information, Accounting information
31. A distributed system contains _____ nodes.
A. zero node
B. one node
C. two node
D. multiple node
Ans : D
Explanation: A distributed system contains multiple nodes that are physically separate but
linked together using the network.
32. All the nodes in distributed system communicate with each other and handle processes in
tandem.
A. TRUE
B. FALSE
C. Can be true or false
D. Can not say
Ans : A
Explanation: True, All the nodes in this system communicate with each other and handle
processes in tandem.
33. The nodes in the distributed systems can be arranged in the form of?
A. client/server systems
B. peer to peer systems
C. Both A and B
D. None of the above
Ans : C
Explanation: The nodes in the distributed systems can be arranged in the form of client/server
systems or peer to peer systems.
34. In which system, tasks are equally divided between all the nodes?
A. client/server systems
B. peer to peer systems
C. user to client system
D. All of the above
Ans : B
Explanation: The peer to peer systems contains nodes that are equal participants in data
sharing. All the tasks are equally divided between all the nodes
35. Which of the following is not an Advantages of Distributed Systems?
A. All the nodes in the distributed system are connected to each other
B. It can be scaled as required
C. Failure of one node does not lead to the failure of the entire distributed system
D. Some messages and data can be lost in the network while moving from one node to
another
Ans : D
Explanation: Some messages and data can be lost in the network while moving from one
node to another is disadvantages of Distributed Systems.
36. In distributed system, each processor has its own ___________
A. local memory
B. clock
C. Both A and B
D. None of the above
Ans : C
Explanation: In distributed system, each processor has its own local memory and clock.
37. Which routing technique is used in a distributed system?
A. fixed routing
B. virtual routing
C. dynamic routing
D. All of the above
Ans : D
Explanation: All of the above routing technique is used in a distributed system.
38. In distributed systems, link and site failure is detected by ___________
A. handshaking
B. polling
C. token passing
D. None of the above
Ans : A
Explanation: In distributed systems, link and site failure is detected by handshaking.
39. A server may serve _________ clients at the same time
A. No
B. Single
C. Multiple
D. Can not say
Ans : C
Explanation: A server may serve multiple clients at the same time while a client is in contact
with only one server.
40. It is easy to provide adequate security in distributed systems.
A. TRUE
B. FALSE
C. Can be true or false
D. Can not say
Ans : B
Explanation: False, it is difficult to provide adequate security in distributed systems because the nodes as well as the connections need to be secured.
41.Clustered systems have ___________ CPUs.
A. zero
B. one
C. two
D. multiple
Ans : D
Explanation: Clustered systems are similar to parallel systems as they both have multiple
CPUs
42. Clustered systems are created by ________ individual computer systems.
A. one
B. two
C. two or more
D. None of the above
Ans : C
Explanation: clustered systems are created by two or more individual computer systems
merged together
43. The clustered systems are a combination of hardware clusters and software clusters
A. TRUE
B. FALSE
C. Can be true or false
D. Can not say
Ans : A
Explanation: True, The clustered systems are a combination of hardware clusters and
software clusters
44. How many types of clustered systems?
A. 1
B. 2
C. 3
D. 4
Ans : B
Explanation: There are primarily two types of clustered systems i.e. asymmetric clustering
system and symmetric clustering system
45. In which clustering system two or more nodes all run applications as well as monitor each
other?
A. Asymmetric
B. Simple
C. Symmetric
D. All of the above
Ans : C
Explanation: In symmetric clustering system two or more nodes all run applications as well
as monitor each other.
46. Which of the following are Benefits of Clustered Systems?
A. Clustered systems result in high performance
B. Clustered systems are quite fault tolerant
C. Clustered systems are quite scalable
D. All of the above
Ans : D
Explanation: All of the above are benefits of clustered systems.
47. In which type of clusters, the nodes in the system share the workload to provide a better
performance?
A. High Availability Clusters
B. Load Balancing Clusters
C. Both A and B
D. None of the above
Ans : B
Explanation: In this type of clusters, the nodes in the system share the workload to provide a
better performance
48. Symmetric clustering system more efficient than asymmetric system.
A. TRUE
B. FALSE
C. Can be true or false
D. Can not say
Ans : A
Explanation: True, a symmetric clustering system is more efficient than an asymmetric system as it uses all the hardware and doesn't keep a node merely as a hot standby.
49. Each node in the clustered systems contains the cluster?
A. Hardware
B. software
C. Both A and B
D. Can not say
Ans : B
Explanation: Each node in the clustered systems contains the cluster software. This software
monitors the cluster system and makes sure it is working as required.
50. The hot standby mode is a failsafe
A. Yes
B. No
C. Can be yes or no
D. Can not say
Ans : A

Explanation: True, the hot standby mode is a failsafe in which a hot standby node is part of the system.
UNIT-2
Distributed Operating Systems: Issues – Communication Primitives – Lamport's Logical Clocks – Deadlock handling strategies – Issues in deadlock detection and resolution – Distributed file systems – Design issues – Case studies – The Sun Network File System – Coda.

DISTRIBUTED OPERATING SYSTEMS:

ISSUES:

What is a distributed system?

 “A collection of autonomous computers linked by a network with software designed to


produce an integrated facility”
 “A collection of independent computers that appear to the users of the system as a single
computer”

Examples

Distributed systems

 Department computing cluster


 Corporate systems
 Cloud systems (e.g. Google, Microsoft, etc.)

Application examples

 Email
 News
 Multimedia information systems - video conferencing
 Airline reservation system
 Banking system
 File downloads (BitTorrent)
 Messaging

Design Issues

 Openness
 Resource Sharing
 Concurrency
 Scalability
 Fault-Tolerance
 Transparency
 High-Performance

Issues arising from Distributed Systems

 Naming - How to uniquely identify resources


 Communication - How to exchange data and information reliably with good
performance
 Software Structure - How to make software open, extensible, scalable, with high-
performance
 Workload Allocation - Where to perform computations and various services
 Consistency Maintenance - How to keep consistency at a reasonable cost

Naming

 A resource must have a name (or identifier) for access


 Name: Can be interpreted by user, e.g., a file name
 Identifier - Interpreted by programs, e.g., port number

Naming - Name Resolution

 A name is "resolved" when it is translated into a form that can be used to invoke an action on the resource
 The resolved form is usually a communication identifier PLUS other attributes
 E.g., an Internet communication id
 host id:port no
 also known as "IP address:port no"
 e.g., 192.130.228.6:8000
 Name resolution may involve several translation steps

Naming - Design Considerations

 Name space for each type of resource


 e.g., files, ports, printers, etc.
 Must be resolvable to communication Ids
 typically achieved by names and their translation in a “name service”
 You must have come across “DNS” when using the WWW!!
 Frequently accessed resources, e.g., files are resolved by resource manager for
efficiency
 Hierarchical Name Space - each part is resolved relative to current context, e.g., file
names in UNIX

Communication

Communication is an essential part of distributed systems - e.g., clients and servers must
communicate for request and response

Communication normally involves the transfer of data from sender to receiver and synchronization among processes

Communication accomplished by message passing

Synchronous or blocking - sender waits for receiver to execute a receive operation

Asynchronous or non-blocking

Types of Communication

 Client-Server
 Group Multicast
 Function Shipping
 Performance of distributed systems depends critically on communication performance
 We will study the software components involved in communication

Client-Server Communication

 Client sends request to server process


 Server executes the request
 Server transmits a reply and data, e.g., file servers, web server…

Client-Server Communication

 Message Passing Operations


 send
 receive
 Remote Procedure Call (RPC)
 hides communication behind procedure call abstraction
 e.g., read(fp,buffer,….)
 Files reside with the server, thus there will be communication between client
and server to satisfy this request
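The following conceptual sketch shows how an RPC stub such as read() can hide the send/receive exchange behind an ordinary procedure call. All names here (Request, Reply, rpc_read, serverHandle) are hypothetical, and the network is simulated by an in-process function; a real RPC system would marshal the request, transmit it to the server machine, and block until the reply message arrives.

// Conceptual sketch of an RPC-style read(): the client calls an ordinary
// function, while the stub hides the request/reply message exchange.
// All names are hypothetical; the "network" is simulated in-process.
#include <algorithm>
#include <cstring>
#include <iostream>
#include <string>
using namespace std;

struct Request { string op; string file; size_t length; };
struct Reply   { string data; bool ok; };

// Stand-in for the server side; in a real system this runs on another host.
Reply serverHandle(const Request& req)
{
    if (req.op == "read")
        return { string("contents of ") + req.file, true };
    return { "", false };
}

// Stand-in for send()/receive(): here it is just a local call.
Reply sendAndWaitForReply(const Request& req) { return serverHandle(req); }

// Client-side stub: looks like a local read(), but really does messaging.
size_t rpc_read(const string& file, char* buffer, size_t length)
{
    Request req { "read", file, length };
    Reply rep = sendAndWaitForReply(req);   // marshal, send, block, unmarshal
    if (!rep.ok) return 0;
    size_t n = min(length - 1, rep.data.size());
    memcpy(buffer, rep.data.data(), n);
    buffer[n] = '\0';
    return n;
}

int main()
{
    char buffer[64];
    size_t n = rpc_read("notes.txt", buffer, sizeof(buffer));
    cout << "Read " << n << " bytes: " << buffer << "\n";
    return 0;
}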

Group Multicast

 A very important primitive for distributed systems


 Target of a message is a group of processes
 e.g., chat room, I sending a message to class list, video conference
 Where is multicast useful?
 Locating objects - client multicasts a message to many servers; server that can
satisfy request responds
 Fault-tolerance - more than one server does a job; even if one fails, results still
available
 Multiple updates
 Hardware support may or may not be available
 if no hardware support, each recipient is sent a message


Software Structure

 In a centralized system, O/S manages resources and provides essential services


 Basic resource management
 memory allocation and protection
 process creation and processor scheduling
 peripheral device handling
 User and application services
 user authentication and access control (e.g., login)
 file management and access facilities
 clock facilities
Distributed System Software Structure

 It must be easy to add new services (flexibility, extensibility, openness requirements)


 Kernel is normally restricted to
 memory allocation
 process creation and scheduling
 interprocess communication
 peripheral device handling
 E.g., Microkernels - represent light weight O/S, most services provided as applications
on top of microkernels


Consistency Management

 When do consistency problems arise?


 concurrency
 sharing data
 caching
 Why cache data?
 for performance, scalability
 How?
 Subsequent requests (many of them) need not go over the NETWORK to
SERVERS
 better utilized servers, network and better response
 Caching is normally transparent, but creates consistency problems

Caching

 Suppose your program (pseudocode) adds 1000 numbers stored in a file as follows (assume each number is 4 bytes):
 for i = 1, 1000
 tmp = read next number from file
 sum = sum + tmp
 end for
 With no caching, each read goes over the network, which sends back one new 4-byte number. Assuming 1 millisecond (ms) to fetch each number, it requires a total of 1 s to get all of the numbers.
 With caching, assuming 1000-byte pages, each fetched page holds 250 numbers, so 249 of every 250 reads are local requests served from the cache; only 4 of the 1000 reads actually go over the network.

Consistency

 Update consistency
 when multiple processes access and update data concurrently
 effect should be such that all processes sharing data see the same values
(consistent image)
 E.g., sharing data in a database
 Replication consistency
 when data replicated and once process updates it
 All other processes should see the updated data immediately
 e.g., replicated files, electronic bulletin board
 Cache consistency
 When data (normally at different levels of granularity, such as pages, disk
blocks, files…) is cached and updates by one process, it must be invalidated or
updated by others
 When and how depends on the consistency models used

Workload Allocation

 In distributed systems many resources (e.g., other workstations, servers etc.) may be
available for “computing”
 The processing capacity and memory size of a workstation or server may determine which applications are able to run on it
 Parts of applications may be run on different workstations for parallelism (e.g.,
compiling different files of the same program)
 Some workstations or servers may have special hardware to do certain types of
applications fast (e.g., video compression)
 Idle workstations may be utilized for better performance and utilization

Processor Pool Model

In a processor pool model, processes are allocated to processors for their lifetime (e.g., the Amoeba research O/S supports this concept).

Quality-of-Service

Quality of Service (a.k.a. QoS) refers to performance and other service expectations of a client
or an application.

 Performance
 Reliability and availability
 security

Examples where this is important.

 Voice over IP (VOIP) and telephony


 Video (e.g. Netflix and friends)

Issues in Distributed Systems

 the lack of global knowledge


 naming
 scalability
 compatibility
 process synchronization (requires global knowledge)
 resource management (requires global knowledge)
 security
 fault tolerance, error recovery

Lack of Global Knowledge

 Communication delays are at the core of the problem


 Information may become false before it can be acted upon
 these create some fundamental problems:
o no global clock -- scheduling based on fifo queue?
o no global state -- what is the state of a task? What is a correct program?

Naming

 named objects: computers, users, files, printers, services


 namespace must be large
 unique (or at least unambiguous) names are needed
 logical to physical mapping needed
 mapping must be changeable, expandable, reliable, fast

Scalability

 How large is the system designed for?


 How does increasing number of hosts affect overhead?
 broadcasting primitives, directories stored at every computer -- these design options
will not work for large systems.

Compatibility

 Binary level: same architecture (object code)


 Execution level: same source code can be compiled and executed (source code).
 Protocol level: only requires all system components to support a common set of
protocols.

Process synchronization

 test-and-set instruction won't work.


 Need all new synchronization mechanisms for distributed systems.

Distributed Resource Management

 Data migration: data are brought to the location that needs them.
o distributed filesystem (file migration)
o distributed shared memory (page migration)
 Computation migration: the computation migrates to another location.
o remote procedure call: computation is done at the remote machine.
o processes migration: processes are transferred to other processors.

Security

 Authentication: guaranteeing that an entity is what it claims to be.


 Authorization: deciding what privileges an entity has and making only those privileges
available.

Structuring

 the monolithic kernel: one piece


 the collective kernel structure: a collection of processes
 object oriented: the services provided by the OS are implemented as a set of objects.
 client-server: servers provide the services and clients use the services.

Communication Networks

 WAN and LAN


 traditional operating systems implement the TCP/IP protocol stack: host to network
layer, IP layer, transport layer, application layer.
 Most distributed operating systems are not concerned with the lower layer
communication primitives.

Communication Models

 message passing
 remote procedure call (RPC)
Message Passing Primitives

 Send (message, destination), Receive (source, buffer)


 buffered vs. unbuffered
 blocking vs. nonblocking
 reliable vs. unreliable
 synchronous vs. asynchronous

Example: Unix socket I/O primitives


#include <sys/socket.h>

ssize_t sendto(int socket, const void *message, size_t length, int flags,
               const struct sockaddr *dest_addr, size_t dest_len);
ssize_t recvfrom(int socket, void *buffer, size_t length, int flags,
                 struct sockaddr *address, size_t *address_len);
int poll(struct pollfd fds[], nfds_t nfds, int timeout);
int select(int nfds, fd_set *readfds, fd_set *writefds,
           fd_set *errorfds, struct timeval *timeout);

You can find more information on these and other socket I/O operations in the Unix man pages.
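As a brief illustration of these primitives, the sketch below sends a single datagram and then blocks in recvfrom() until a reply arrives, using the POSIX UDP socket API declared above. The address 127.0.0.1 and port 9000 are arbitrary example values and error handling is kept minimal; this is a sketch of the send/receive pattern, not a complete client.

// Sketch: message passing with UDP sockets (POSIX API).
// The address/port are arbitrary examples; error handling is minimal.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstring>
#include <iostream>

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in dest {};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9000);
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

    const char* msg = "request";
    // Send: the datagram is handed to the kernel and the caller continues.
    sendto(sock, msg, strlen(msg), 0,
           reinterpret_cast<sockaddr*>(&dest), sizeof(dest));

    char buffer[256];
    sockaddr_in from {};
    socklen_t fromLen = sizeof(from);
    // Blocking receive: the caller waits here until a reply datagram arrives.
    ssize_t n = recvfrom(sock, buffer, sizeof(buffer) - 1, 0,
                         reinterpret_cast<sockaddr*>(&from), &fromLen);
    if (n >= 0) {
        buffer[n] = '\0';
        std::cout << "received: " << buffer << "\n";
    }

    close(sock);
    return 0;
}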

COMMUNICATION PRIMITIVES

 Message Passing

 Locking

 Leader Election

 Atomic

 Consensus

 Replication

Message Passing

 A distributed system's nodes communicate with one another by sending and receiving messages. Message passing permits communication between nodes that may be geographically dispersed, run different operating systems, be written in different programming languages, and have different processing power.
 e.g., message passing can be used in a microservices architecture to facilitate communication between several services that each carry out particular tasks. When Service B receives a message from Service A, it may process it and reply to Service A. This enables services to function independently of one another and provides flexible connectivity between them.

Locking

 In a distributed system, locking is a strategy for synchronizing access to resources in order to avoid conflicts caused by many nodes trying to access the same resource concurrently. It is frequently used to guard against conflicting updates and to guarantee fair access to data.

 e.g For instance, locking can be used in a distributed database to prevent multiple nodes
from writing to the same database record at the same time. The other nodes must wait
for the lock to be released, while only one node can acquire the lock and execute the
write operation.
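A hypothetical, centralized sketch of this idea is shown below: a lock manager grants a record lock to the first node that asks and refuses it to others until it is released. In a real distributed database the requests and grants would travel as messages between nodes; here the manager is a plain in-process class and all names are illustrative.

// Sketch of a centralized lock manager for record-level locks.
// All names are illustrative; requests would normally arrive as messages.
#include <iostream>
#include <map>
#include <string>
using namespace std;

class LockManager {
    map<string, int> owner;   // record id -> node currently holding the lock
public:
    bool acquire(const string& record, int node) {
        if (owner.count(record)) return false;   // already held: caller must wait
        owner[record] = node;
        return true;
    }
    void release(const string& record, int node) {
        auto it = owner.find(record);
        if (it != owner.end() && it->second == node) owner.erase(it);
    }
};

int main() {
    LockManager lm;
    cout << boolalpha;
    cout << "node 1 acquires row#42: " << lm.acquire("row#42", 1) << "\n"; // true
    cout << "node 2 acquires row#42: " << lm.acquire("row#42", 2) << "\n"; // false, must wait
    lm.release("row#42", 1);
    cout << "node 2 retries row#42:  " << lm.acquire("row#42", 2) << "\n"; // true
    return 0;
}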

Leader Election

 A distributed system’s leader node is chosen using the leader election protocol to control
coordination and decision-making. It’s frequently used in fault-tolerant systems to make
sure that only one node is in charge of managing operations and making decisions.

 e.g For instance, a leader election protocol can be used in a distributed system with
numerous nodes to guarantee that one node is designated as the major node in charge of
coordinating operations. Another node can be chosen as the new leader to take over the
coordinating and decision-making duties if the primary node fails.

Atomic Transactions

Atomic transactions are a method for ensuring that several activities are carried out as a
single, indivisible unit, thereby ensuring consistency and dependability. Atomic transactions in
a distributed system guarantee that a set of operations will either succeed completely or fail
completely.

e.g An atomic transaction, for instance, can be used in a banking application to guarantee
that a money transfer between two accounts is successful or completely unsuccessful. To
maintain consistency and dependability, the entire transaction is rolled back if any portion of it
fails.

Consensus

Consensus is a procedure that allows a group of nodes in a distributed system to come


to an understanding even when there are failures. Consensus methods guarantee that the agreed-
upon value is trustworthy and consistent and that all nodes concur on it.

e.g For instance, a consensus mechanism like proof of work or proof of stake is used in
a blockchain network to make sure that all nodes concur on the network’s state, the sequencing
of transactions, and the generation of new blocks.

Replication

To provide fault tolerance and scalability in a distributed system, replication is a


technology used to duplicate data or services across several nodes.

Through replication, it is made possible for another node to take over processing duties
in the event of a failed node without affecting the system’s overall performance.

e.g Replication, for instance, can be used in a web application to guarantee that many
instances of the program are active at once, offering high availability and scalability. Without
affecting the general user experience, processing can continue if one instance fails.

LAMPORT'S LOGICAL CLOCKS

Lamport's Logical Clock was created by Leslie Lamport. It is a procedure for determining the order in which events occur, and it provides the basis for the more advanced Vector Clock algorithm. Because a Distributed Operating System has no global clock, Lamport's Logical Clock is needed to order events.
Algorithm:
 Happened before relation(->): a -> b, means ‘a’ happened before ‘b’.
 Logical Clock: The criteria for the logical clocks are:
 [C1]: Ci (a) < Ci(b), [ Ci -> Logical Clock, If ‘a’ happened before ‘b’,
then time of ‘a’ will be less than ‘b’ in a particular process. ]
 [C2]: If 'a' is the sending of a message by process Pi and 'b' is the receipt of that message by process Pj, then Ci(a) < Cj(b).
Reference:
 Process: Pi
 Event: Eij, where i is the process in number and j: jth event in the ith process.
tm: timestamp carried by message m.
 Ci: the logical clock associated with process Pi; Ci(a) is the clock value Pi assigns to event a.
 d: increment (drift) value; generally d is 1.
Implementation Rules[IR]:
 [IR1]: If a -> b [‘a’ happened before ‘b’ within the same process]
then, Ci(b) =Ci(a) + d
 [IR2]: Cj = max(Cj, tm + d) [If there’s more number of processes, then tm = value
of Ci(a), Cj = max value between Cj and tm + d]
For Example:

 Take the starting value as 1, since it is the 1st event and there is no incoming value
at the starting point:
 e11 = 1
 e21 = 1
 The value of the next point will go on increasing by d (d = 1), if there is no
incoming value i.e., to follow [IR1].
 e12 = e11 + d = 1 + 1 = 2
 e13 = e12 + d = 2 + 1 = 3
 e14 = e13 + d = 3 + 1 = 4
 e15 = e14 + d = 4 + 1 = 5
 e16 = e15 + d = 5 + 1 = 6
 e22 = e21 + d = 1 + 1 = 2
 e24 = e23 + d = 3 + 1 = 4
 e26 = e25 + d = 6 + 1 = 7
 When there will be incoming value, then follow [IR2] i.e., take the maximum
value between Cj and Tm + d.
 e17 = max(7, 5) = 7, [e16 + d = 6 + 1 = 7, e24 + d = 4 + 1 = 5, maximum
among 7 and 5 is 7]
 e23 = max(3, 3) = 3, [e22 + d = 2 + 1 = 3, e12 + d = 2 + 1 = 3, maximum
among 3 and 3 is 3]
 e25 = max(5, 6) = 6, [e24 + 1 = 4 + 1 = 5, e15 + d = 5 + 1 = 6, maximum
among 5 and 6 is 6]
Limitation:
 If a -> b, then C(a) < C(b) is always true (the clock condition holds).
 However, the converse does not hold: C(a) < C(b) does not imply a -> b. Lamport clocks therefore give only a partial ordering of events, which is what motivates Vector Clocks.

Below is a C++ program to illustrate Lamport's Logical Clock:


// C++ program to illustrate Lamport's Logical Clock

#include <bits/stdc++.h>
using namespace std;

// Function to find the maximum timestamp between 2 events
int max1(int a, int b)
{
    // Return the greater of the two
    if (a > b)
        return a;
    else
        return b;
}

// Function to display the logical timestamps
void display(int e1, int e2, int p1[5], int p2[3])
{
    int i;

    cout << "\nThe time stamps of events in P1:\n";
    for (i = 0; i < e1; i++)
        cout << p1[i] << " ";

    cout << "\nThe time stamps of events in P2:\n";
    for (i = 0; i < e2; i++)
        cout << p2[i] << " ";
}

// Function to find the timestamps of the events
void lamportLogicalClock(int e1, int e2, int m[5][3])
{
    int i, j, k, p1[e1], p2[e2];

    // Initialize p1[] and p2[]
    for (i = 0; i < e1; i++)
        p1[i] = i + 1;
    for (i = 0; i < e2; i++)
        p2[i] = i + 1;

    // Print the message matrix
    cout << "\t";
    for (i = 0; i < e2; i++)
        cout << "\te2" << i + 1;
    for (i = 0; i < e1; i++) {
        cout << "\n e1" << i + 1 << "\t";
        for (j = 0; j < e2; j++)
            cout << m[i][j] << "\t";
    }

    for (i = 0; i < e1; i++) {
        for (j = 0; j < e2; j++) {

            // Change the timestamp if the message is sent
            if (m[i][j] == 1) {
                p2[j] = max1(p2[j], p1[i] + 1);
                for (k = j + 1; k < e2; k++)
                    p2[k] = p2[k - 1] + 1;
            }

            // Change the timestamp if the message is received
            if (m[i][j] == -1) {
                p1[i] = max1(p1[i], p2[j] + 1);
                for (k = i + 1; k < e1; k++)
                    p1[k] = p1[k - 1] + 1;
            }
        }
    }

    // Function call
    display(e1, e2, p1, p2);
}

// Driver code
int main()
{
    int e1 = 5, e2 = 3, m[5][3];

    /* m[i][j] = 1,  if a message is sent from event e1(i+1) to event e2(j+1)
       m[i][j] = -1, if a message is received by event e1(i+1) from event e2(j+1)
       m[i][j] = 0,  otherwise */
    m[0][0] = 0; m[0][1] = 0; m[0][2] = 0;
    m[1][0] = 0; m[1][1] = 0; m[1][2] = 1;
    m[2][0] = 0; m[2][1] = 0; m[2][2] = 0;
    m[3][0] = 0; m[3][1] = 0; m[3][2] = 0;
    m[4][0] = 0; m[4][1] = -1; m[4][2] = 0;

    // Function call
    lamportLogicalClock(e1, e2, m);

    return 0;
}


Output

        e21  e22  e23
 e11     0    0    0
 e12     0    0    1
 e13     0    0    0
 e14     0    0    0
 e15     0   -1    0
The time stamps of events in P1:
1 2 3 4 5
The time stamps of events in P2:
1 2 3
Time Complexity: O(e1 * e2 * (e1 + e2))
Auxiliary Space: O(e1 + e2)
DEADLOCK HANDLING STRATEGIES

The following are the strategies used for Deadlock Handling in Distributed System:
 Deadlock Prevention
 Deadlock Avoidance
 Deadlock Detection and Recovery
1. Deadlock Prevention: As the name implies, this strategy ensures that deadlock can never
happen because system designing is carried out in such a way. If any one of the deadlock-
causing conditions is not met then deadlock can be prevented. Following are the three
methods used for preventing deadlocks by making one of the deadlock conditions to be
unsatisfied:
 Collective Requests: In this strategy, every process declares all the resources required for its execution beforehand and is allowed to execute only if all of those resources are available. Resources are released only when the process finishes. Hence, the hold-and-wait condition of deadlock is prevented.
 The issue is that the resource requirements declared before a process starts are based on an estimate, not on what will actually be used. Resources may therefore be occupied unnecessarily, and such prior allocation also reduces potential concurrency.
 Ordered Requests: In this strategy, ordering is imposed on the resources and
thus, process requests for resources in increasing order. Hence, the circular wait
condition of deadlock can be prevented.
 An ordering strictly indicates that a process never asks for a low
resource while holding a high one.
 There are two more ways of dealing with global timing and
transactions in distributed systems, both of which are based on the
principle of assigning a global timestamp to each transaction as soon
as it begins.
 During the execution of a process, if a process seems to be blocked
because of the resource acquired by another process then the timestamp
of the processes must be checked to identify the larger timestamp
process. In this way, cycle waiting can be prevented.
 It is better to give priority to the old processes because of their long
existence and might be holding more resources.
 It also eliminates starvation issues as the younger transaction will
eventually be out of the system.
 Preemption: Resource allocation strategies that violate the no-preemption condition can be used to prevent deadlocks. Two timestamp-based rules are commonly used (a small code sketch of both appears after this list):
 Wait-die: If an older process requests a resource held by a younger process, the older process waits. If a younger process requests a resource held by an older process, the younger process is killed ("dies") and is restarted later.
 Wound-wait: If an older process requests a resource held by a younger process, the younger process is preempted ("wounded") and killed, and the older process proceeds. If a younger process requests a resource held by an older process, the younger process waits.
2. Deadlock Avoidance: In this strategy, deadlock can be avoided by examining the state of
the system at every step. The distributed system reviews the allocation of resources and
wherever it finds an unsafe state, the system backtracks one step and again comes to the safe
state. For this, resource allocation takes time whenever requested by a process. Firstly, the
system analysis occurs whether the granting of resources will make the system in a safe state
or unsafe state then only allocation will be made.
 A safe state refers to the state when the system is not in deadlocked state and order
is there for the process regarding the granting of requests.
 An unsafe state refers to the state when no safe sequence exists for the system.
Safe sequence implies the ordering of a process in such a way that all the processes
run to completion in a safe state.
3. Deadlock Detection and Recovery: In this strategy, deadlock is detected and an attempt
is made to resolve the deadlock state of the system. These approaches rely on a Wait-For-
Graph (WFG), which is generated and evaluated for cycles in some methods. The following
two requirements must be met by a deadlock detection algorithm:
 Progress: In a given period, the algorithm must find all existing deadlocks. There
should be no deadlock existing in the system which is undetected under this
condition. To put it another way, after all, wait-for dependencies for a deadlock
have arisen, the algorithm should not wait for any additional events to detect the
deadlock.
 No False Deadlocks: Deadlocks that do not exist should not be reported by the
algorithm which is called phantom or false deadlocks.
There are different types of deadlock detection techniques:
 Centralized Deadlock Detector: The resource graph for the entire system is
managed by a central coordinator. When the coordinator detects a cycle, it
terminates one of the processes involved in the cycle to break the deadlock.
Messages must be passed when updating the coordinator’s graph. Following are
the methods:
 A message must be provided to the coordinator whenever an arc is
created or removed from the resource graph.
 Every process can transmit a list of arcs that have been added or
removed since the last update periodically.
 When information is needed, the coordinator asks for it.
 Hierarchical Deadlock Detector: In this approach, deadlock detectors are
arranged in a hierarchy. Here, only those deadlocks can be detected that fall within
their range.
 Distributed Deadlock Detector: In this approach, detectors are distributed so that all the sites can fully participate in detecting and resolving the deadlock. A probe-based scheme can be used for this purpose: local WFGs are used to detect local deadlocks, and probe messages are used to detect global deadlocks. Distributed detection algorithms fall into the four classes listed below.
There are four classes for the Distributed Detection Algorithm:
 Path-pushing: In path-pushing algorithms, the detection of distributed deadlocks
is carried out by maintaining an explicit global WFG.
 Edge-chasing: In an edge-chasing algorithm, probe messages are used to detect
the presence of a cycle in a distributed graph structure along the edges of the
graph.
 Diffusion computation: Here, the computation for deadlock detection is
dispersed throughout the system’s WFG.
 Global state detection: The detection of Distributed deadlocks can be made by
taking a snapshot of the system and then inspecting it for signs of a deadlock.
To recover from a deadlock, one of the methods can be followed:
 Termination of one or more processes that created the unsafe state.
 Using checkpoints for the periodic checking of the processes so that whenever
required, rollback of processes that makes the system unsafe can be carried out
and hence, maintained a safe state of the system.
 Breaking of existing wait-for relationships between the processes.
 Rollback of one or more blocked processes and allocating their resources to
stopped processes, allowing them to restart operation.
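The sketch below illustrates the wait-die and wound-wait decisions described under Preemption above, using transaction timestamps where a smaller timestamp means an older process. The function and variable names are illustrative assumptions, not part of any real system.

// Sketch of the wait-die and wound-wait decisions based on timestamps
// (smaller timestamp = older process). All names are illustrative.
#include <iostream>
using namespace std;

enum class Action { Wait, Abort };

// The requester asks for a resource held by the holder.
Action waitDie(int requesterTs, int holderTs)
{
    // Older requester waits; younger requester dies (is aborted and restarted).
    return (requesterTs < holderTs) ? Action::Wait : Action::Abort;
}

Action woundWait(int requesterTs, int holderTs)
{
    // Older requester wounds (preempts/aborts) the younger holder;
    // younger requester waits for the older holder.
    return (requesterTs < holderTs) ? Action::Abort : Action::Wait;
}

int main()
{
    int oldTs = 10, youngTs = 42;   // the older process started earlier

    cout << "wait-die,   old requests from young: "
         << (waitDie(oldTs, youngTs) == Action::Wait ? "requester waits" : "requester aborted") << "\n";
    cout << "wait-die,   young requests from old: "
         << (waitDie(youngTs, oldTs) == Action::Wait ? "requester waits" : "requester aborted") << "\n";
    cout << "wound-wait, old requests from young: "
         << (woundWait(oldTs, youngTs) == Action::Wait ? "requester waits" : "holder preempted") << "\n";
    cout << "wound-wait, young requests from old: "
         << (woundWait(youngTs, oldTs) == Action::Wait ? "requester waits" : "holder preempted") << "\n";
    return 0;
}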

ISSUES IN DEADLOCK DETECTION AND RESOLUTION

Deadlock


Deadlock is a fundamental problem in distributed systems.
 A process may request resources in any order, which may not be known a priori, and a process can request a resource while holding others.
 A deadlock can occur if the sequence of resource allocations to the processes is not controlled.
 A deadlock is a state where a set of processes request resources that are held by other processes in the set.
DEADLOCK DETECTION:

1. Resource Allocation Graph (RAG) Algorithm:

 Deadlock detection typically involves constructing a resource allocation graph


based on the current resource allocation and request status.
 The RAG algorithm identifies cycles in the graph, indicating the presence of a
potential deadlock.
 However, the RAG algorithm suffers from scalability issues in large systems
due to the overhead of maintaining the graph.
2. Resource-Requesting Algorithms:

Another approach is to periodically check the state of resource requests and
allocations to identify potential deadlocks.
 This approach involves tracking the resource allocation state and examining
resource requests to detect circular waits.
 However, this method may have high overhead and can only identify deadlocks
when they occur during the detection phase.
DEADLOCK RESOLUTION:

1. Deadlock Prevention:

 Prevention involves ensuring that at least one of the necessary conditions for
deadlock (mutual exclusion, hold and wait, no preemption, circular wait) is not
satisfied.
 By carefully managing resource allocation and enforcing certain policies,
deadlocks can be avoided altogether.
 However, prevention methods can be complex, restrictive, and may limit system
performance or resource utilization.
2. Deadlock Avoidance:

 Avoidance involves dynamically analyzing resource requests and allocations to


ensure that the system avoids entering an unsafe state where a deadlock can
occur.
 Resource allocation is made based on resource requirement forecasts and
resource availability to prevent circular waits.
 Avoidance requires a safe state detection algorithm to determine if a resource
allocation will lead to a deadlock.
 However, avoidance techniques may suffer from increased overhead and may
limit system responsiveness.
3. Deadlock Detection with Recovery:

 Deadlock detection algorithms can be used to periodically check the system’s


state for potential deadlocks.
 Once a deadlock is detected, recovery mechanisms can be employed to resolve
the deadlock.
 Recovery may involve aborting one or more processes, rolling back their
progress, and reallocating resources to allow the system to continue.
 However, recovery mechanisms can be complex and may result in data loss or
system instability.

What is Distributed File System?

A distributed file system (DFS) is a file system that is distributed on various file servers and
locations. It permits programs to access and store isolated data in the same method as in the
local files. It also permits the user to access files from any system. It allows network users to
share information and files in a regulated and permitted manner. Although, the servers have
complete control over the data and provide users access control.

DFS's primary goal is to enable users of physically distributed systems to share resources and
information through the Common File System (CFS). It is a file system that runs as a part of
the operating systems. Its configuration is a set of workstations and mainframes that a LAN
connects. The process of creating a namespace in DFS is transparent to the clients.

DFS has two components in its services, and these are as follows:

1. Local Transparency
2. Redundancy

Local Transparency

It is achieved via the namespace component.

Redundancy

It is achieved via a file replication component.

In the case of failure or heavy load, these components work together to increase data
availability by allowing data from multiple places to be logically combined under a single
folder known as the "DFS root".

It is not required to use both DFS components simultaneously; the namespace component can
be used without the file replication component, and the file replication component can be used
between servers without the namespace component.

Features

There are various features of the DFS. Some of them are as follows:

Transparency

There are mainly four types of transparency. These are as follows:

1. Structure Transparency

The client does not need to be aware of the number or location of file servers and storage
devices. In structure transparency, multiple file servers must be given to adaptability,
dependability, and performance.

2. Naming Transparency
There should be no hint of the file's location in the file's name. When the file is transferred from one node to another, the file name should not change.

3. Access Transparency

Local and remote files must be accessible in the same method. The file system must
automatically locate the accessed file and deliver it to the client.

4. Replication Transparency

When a file is replicated across various nodes, the existence of the copies and their locations must be hidden from the clients.

Scalability

The distributed system will inevitably increase over time when more machines are added to the
network, or two networks are linked together. A good DFS must be designed to scale rapidly
as the system's number of nodes and users increases.

Data Integrity

Many users usually share a file system. The file system needs to secure the integrity of data
saved in a transferred file. A concurrency control method must correctly synchronize
concurrent access requests from several users who are competing for access to the same file. A
file system commonly provides users with atomic transactions that are high-level concurrency
management systems for data integrity.

High Reliability

The risk of data loss must be limited as much as feasible in an effective DFS. Users must not
feel compelled to make backups of their files due to the system's unreliability. Instead, a file
system should back up key files so that they may be restored if the originals are lost. As a high-
reliability strategy, many file systems use stable storage.

High Availability

A DFS should be able to function in the case of a partial failure, like a node failure, a storage
device crash, and a link failure.

Ease of Use

The UI of a file system in multiprogramming must be simple, and the commands in the file
must be minimal.
Performance

The average time it takes to satisfy a client request is used to assess performance. It must perform comparably to a centralized file system.

Distributed File System Replication

Initial versions of DFS used Microsoft's File Replication Service (FRS), enabling basic file
replication among servers. FRS detects new or altered files and distributes the most recent
versions of the full file to all servers.

Windows Server 2003 R2 developed the "DFS Replication" (DFSR). It helps to enhance
FRS by only copying the parts of files that have changed and reducing network traffic with
data compression. It also gives users the ability to control network traffic on a configurable
schedule using flexible configuration options.

History of Distributed File System

The DFS's server component was firstly introduced as an additional feature. When it
was incorporated into Windows NT 4.0 Server, it was called "DFS 4.1". Later, it was
declared a standard component of all Windows 2000 Server editions. Windows NT 4.0 and
later versions of Windows have client-side support.

Linux kernels 2.6.14 and later include a DFS-compatible SMB client VFS known
as "cifs". DFS is available in versions Mac OS X 10.7 (Lion) and later.

Working of Distributed File System

There are two methods of DFS in which they might be implemented, and these are as follows:

1. Standalone DFS namespace


2. Domain-based DFS namespace

Standalone DFS namespace

It does not use Active Directory and only permits DFS roots that exist on the local system. A standalone DFS namespace may only be accessed on the system that created it. It offers no fault tolerance and may not be linked to any other DFS.

Domain-based DFS namespace

It stores the DFS configuration in Active Directory and creates the namespace root at \\<domainname>\<dfsroot> or \\<FQDN>\<dfsroot>.

DFS namespace
Traditional file shares, which are linked to a single server, use SMB paths of the form:

\\<SERVER>\<path>\<subpath>

Domain-based DFS file share paths are identified by using the domain name in place of the server's name, in the form:

\\<DOMAIN.NAME>\<dfsroot>\<path>

When users access such a share, either directly or through mapping a disk, their computer
connects to one of the accessible servers connected with that share, based on rules defined by
the network administrator. For example, the default behavior is for users to access the nearest
server to them; however, this can be changed to prefer a certain server.

Applications of Distributed File System

There are several applications of the distributed file system. Some of them are as follows:

Hadoop

Hadoop is a collection of open-source software services. It is a software framework that


uses the MapReduce programming style to allow distributed storage and management of large
amounts of data. Hadoop is made up of a storage component known as Hadoop Distributed
File System (HDFS). It is an operational component based on the MapReduce programming
model.

NFS (Network File System)

A client-server architecture enables a computer user to store, update, and view files
remotely. It is one of various DFS standards for Network-Attached Storage.

SMB (Server Message Block)

IBM developed the SMB protocol for file sharing. It permits systems to read and write files on a remote host across a LAN. The remote host's directories that may be accessed through SMB are known as "shares".

NetWare

It is a discontinued computer network operating system developed by Novell, Inc. It mainly used the IPX network protocol and cooperative multitasking to run many services on a computer system.
CIFS (Common Internet File System)

CIFS is a dialect of SMB. The CIFS protocol is a Microsoft-designed implementation of the SMB protocol.

Advantages and Disadvantages of Distributed File System

There are various advantages and disadvantages of the distributed file system. These
are as follows:

Advantages

There are various advantages of the distributed file system. Some of the advantages are as
follows:

1. It allows users to access and store data from any node.
2. It helps to improve access time, network efficiency, and availability of files.
3. It provides transparency of data even if a server or disk fails.
4. It permits data to be shared remotely.
5. It enhances the ability to scale the amount of data and to exchange data.

Disadvantages

There are various disadvantages of the distributed file system. Some of the disadvantages are
as follows:

1. In a DFS, the database connection is complicated.


2. In a DFS, database handling is also more complex than in a single-user system.
3. If all nodes try to transfer data simultaneously, there is a chance that overloading will
happen.
4. There is a possibility that messages and data would be missed in the network while
moving from one node to another.

DESIGN ISSUES

Design issues of the distributed system –


1. Heterogeneity: Heterogeneity is applied to the network, computer hardware,
operating system, and implementation of different developers. A key component
of the heterogeneous distributed system client-server environment is middleware.
Middleware is a set of services that enables applications and end-user to interact
with each other across a heterogeneous distributed system.
2. Openness: The openness of the distributed system is determined primarily by the
degree to which new resource-sharing services can be made available to the users.
Open systems are characterized by the fact that their key interfaces are published.
It is based on a uniform communication mechanism and published interface for
access to shared resources. It can be constructed from heterogeneous hardware
and software.
3. Scalability: The scalability of the system should remain efficient even with a
significant increase in the number of users and resources connected. It shouldn’t
matter if a program has 10 or 100 nodes; performance shouldn’t vary. A
distributed system’s scaling requires consideration of a number of elements,
including size, geography, and management.
4. Security: The security of an information system has three components:
confidentiality, integrity, and availability. Encryption protects shared resources
and keeps sensitive information secret when transmitted.
5. Failure Handling: When some faults occur in hardware and the software
program, it may produce incorrect results, or the components may stop before they
have completed the intended computation, so corrective measures should be
implemented to handle such cases. Failure handling is difficult in distributed
systems because the failure is partial, i.e., some components fail while others
continue to function.
6. Concurrency: There is a possibility that several clients will attempt to access a
shared resource at the same time. Multiple users make requests on the same
resources, i.e. read, write, and update. Each resource must be safe in a concurrent
environment. Any object that represents a shared resource in a distributed system
must ensure that it operates correctly in a concurrent environment.
7. Transparency: Transparency ensures that the distributed system should be
perceived as a single entity by the users or the application programmers rather
than a collection of cooperating autonomous systems. The user should
be unaware of where the services are located, and the transfer from a local machine
to a remote one should be transparent.

CASE STUDIES

The Google File System (GFS) is a file system that serves very large data files (hundreds of
gigabytes or terabytes). The architecture presented here is a slightly simplified description of
the Google File System and of several of its descendants, including the Hadoop Distributed
File System (HDFS), available as an open-source project.

The technical environment is that of a high-speed local network connecting a cluster of
servers. The file system is designed to satisfy some specific requirements:

(i) we need to handle very large collections of unstructured to semi-structured documents;
(ii) data collections are written once and read many times; and
(iii) the infrastructure that supports these components consists of thousands of connected
machines, with high failure probability.

These particularities make common distributed system tools only partially appropriate.
THE SUN NETWORK FILE SYSTEM

What is NFS?
Network File System (NFS) is a networking protocol for distributed file sharing. A file
system defines the way data in the form of files is stored and retrieved from storage devices,
such as hard disk drives, solid-state drives and tape drives. NFS is a network file sharing
protocol that defines the way files are stored and retrieved from storage devices across
networks.

The NFS protocol defines a network file system, originally developed for local file
sharing among Unix systems and released by Sun Microsystems in 1984. The NFS protocol
specification was first published by the Internet Engineering Task Force (IETF) as an internet
protocol in RFC 1094 in 1989. The current version of the NFS protocol is documented in RFC
7530, which documents the NFS version 4 (NFSv4) Protocol.

NFS enables system administrators to share all or a portion of a file system on a


networked server to make it accessible to remote computer users. Clients with authorization to
access the shared file system can mount NFS shares, also known as shared file systems. NFS
uses Remote Procedure Calls (RPCs) to route requests between clients and servers.

NFS is one of the most widely used protocols for file servers. NFS implementations are
available for most modern operating systems (OSes), including the following:

 Hewlett Packard Enterprise HP-UX


 IBM AIX
 Microsoft Windows
 Linux
 Oracle Solaris

Cloud vendors also implement the NFS protocol for cloud storage, including Amazon
Elastic File System, NFS file shares in Microsoft Azure and Google Cloud Filestore.

Any device that can be attached to an NFS host file system can be shared through NFS.
This includes hard disks, solid state drives, tape drives, printers and other peripherals. Users
with appropriate permissions can access resources from their client machines as if those
resources are mounted locally.

NFS is an application layer protocol, meaning that it can operate over any transport or
network protocol stack. However, in most cases NFS is implemented on systems running
the TCP/IP protocol suite. The original intention for NFS was to create a simple
and stateless protocol for distributed file system sharing.
Early versions of NFS used the User Datagram Protocol (UDP) for its transport layer.
This eliminated the need to define a stateful storage protocol; however, NFS now supports both
the Transmission Control Protocol (TCP) and UDP. Support for TCP as a transport layer
protocol was added to NFS version 3 (NFSv3) in 1995.

How does the Network File System work?


NFS is a client-server protocol. An NFS server is a host that meets the following requirements:

 has NFS server software installed;


 has at least one network connection for sharing NFS resources; and
 is configured to accept and respond to NFS requests over the network connection.

An NFS client is a host that meets the following requirements:

 has NFS client software installed;


 has network connectivity to an NFS server;
 is authorized to access resources on the NFS server; and
 is configured to send and receive NFS requests over the network connection.

NFS was initially conceived as a method for sharing file systems across workgroups using
Unix. It is still often used for ad hoc sharing of resources.

The process of setting up NFS service includes the following three steps, whether on an
enterprise file server or on a local workstation:

1. Verify that rpc.mountd or just mountd is installed and working. This is the NFS
daemon -- the program that listens to the network for NFS requests.
2. Create or choose a shared directory on the server. This is the NFS mount point.
Using the mount point and the server host name or address uniquely identifies the
NFS resource.
3. Configure permissions on the NFS server to enable authorized users to read, write
and execute files in the file system.

Setting up an NFS client machine to access an NFS server can be done manually using
the mount command, or persistently using an entry in the client's file system table
(/etc/fstab); the directories a server makes available for mounting are listed in the server's
/etc/exports file. Each client-side entry contains a mount point, an IP address or a host
domain name and any configuration metadata needed to access the file system.
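The structure of such a client-side entry can be illustrated with a short sketch. The following
Python snippet is only an illustration of the fields described above (server, exported path,
local mount point, and options); the entry string, field names and helper function are
hypothetical and not part of any NFS implementation.

# Illustrative sketch only: parse a hypothetical fstab-style NFS entry into the
# fields described above. The entry format shown is an example, not an official API.
from typing import List, NamedTuple

class NFSMount(NamedTuple):
    server: str            # NFS server host name or IP address
    export: str            # directory exported by the server
    mount_point: str       # local directory where the share is mounted
    options: List[str]     # configuration metadata, e.g. ["rw", "vers=4.2"]

def parse_nfs_entry(line: str) -> NFSMount:
    """Parse one entry like 'fileserver:/srv/share /mnt/share nfs rw,vers=4.2 0 0'."""
    device, mount_point, fstype, opts = line.split()[:4]
    if fstype != "nfs":
        raise ValueError("not an NFS entry")
    server, export = device.split(":", 1)
    return NFSMount(server, export, mount_point, opts.split(","))

print(parse_nfs_entry("fileserver:/srv/share /mnt/share nfs rw,vers=4.2 0 0"))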
NFS enables networked resource sharing, just like Microsoft's Server Message Block (SMB)
protocol. SMB and NFS are implemented on many different OSes.

Versions of NFS
NFSv4, the current version of NFS, and other versions subsequent to NFS version 2 (NFSv2)
are usually compatible after client and server machines negotiate a connection.

NFS versions from the earliest to the current one are as follows:

Sun Network Filesystem released March 1984

Sun Microsystems published the first implementation of its network file system in March 1984.
The objective was to provide transparent, remote access to file systems. Sun intended to
differentiate its NFS project from other Unix file systems by designing it to be easily portable
to other OSes and machine architectures.

NFSv2 released March 1989

NFSv2 is specified in RFC 1094. Its key features included the following:

 It uses UDP as its transport protocol. This enables keeping the server stateless, with
file locking implemented outside of the core protocol.
 Its file offsets are limited to a 32-bit quantity, making the maximum size of files
clients can access 4.2 GB.
 Its data transfer size is limited to 8 KB, and it requires that NFS servers commit
data written by a client to a disk or non-volatile random-access memory (NVRAM)
before responding.

NFSv2 is obsolete and should not be used.

NFSv3 released June 1995

Specified in RFC 1813, NFSv3 incorporated the following new features and updates:

 It extended file offsets from 32- to 64-bits, which removed the 4.2 GB maximum
file size limit.
 It relaxed the 8 KB data transfer limitation rule to enable larger read and write
transfers.
 TCP was added as a transport layer protocol option in NFSv3. TCP transport makes
it easier to use NFS over a wide area network (WAN) and enhances read and write
transfer capabilities.
 Added a COMMIT operation enabling reliable asynchronous writes, and an
ACCESS RPC that improves support for access control lists, or ACLs, and power
users.
 The server replies to WRITE RPCs instantly in NFSv3, without syncing to a disk
or NVRAM. To ensure data is on stable storage, the client only needs to send a
COMMIT RPC.

NFSv3 is reported to still be in widespread use. It is interoperable with NFSv4 but lacks support
for many of the new and improved features rolled out with later versions.

NFSv4 released April 2003

The update to NFSv4 was first documented in RFC 3010 in 2000. This is the first version of
the NFS specification that the IETF published as a proposed standard; prior versions were
published as informational.

New and improved features in this update included the following:

 support for strong authentication, integrity and privacy;


 support for advanced file caching;
 improved internationalization capability;
 better interoperability with Microsoft Windows filesharing was added;
 better support for integrated locking was added; and
 improved performance and reliability because communication was handled with
compound RPCs and TCP use was required.

A new API was included for future additions of new security mechanisms.

A slightly-updated version of the NFS specification was republished in 2003 as RFC 3530, to
correct errors in the first version and add some improvements to the protocol.

NFS version 4.1 (NFSv4.1) released January 2010

A minor version protocol, NFSv4.1 published as RFC 5661, added new features including the
following:

 It enabled the use of NFS on global WANs.


 It standardized parallel NFS to address bandwidth and scalability issues.
 It internationalized support using UTF-8 encoding for file names, directories and
other identifiers. UTF-8 replaces the ASCII character set. It is a variable-width
character encoding that is as compact as ASCII but can also represent Unicode characters.
 It added a new session model to maintain the server's state relative to the
connections belonging to the client.
 Directory delegation added the ability to delegate file operations to the accessing
client.

NFS version 4.2 (NFSv4.2) released November 2016

NFSv4.2 is documented in RFC 7862. It added the following new features and updates:

 enhanced modern scale-out storage architectures;


 support for server-side copy, which enables cloning and snapshots of files by any
NFSv4.2 storage server;
 space reservations to ensure a file will have storage available;
 support for sparse files, which contain large blocks of zero data that are transferred
as zeros when read from the file;
 support for application data block support, which defines the format of a file;
 support for labeled NFS, which supports additional security when used
with Security-Enhanced Linux.
Benefits of NFS
Among many benefits for organizations using NFS are the following:

 Mature. NFS is a mature protocol, which means most aspects of implementing,


securing and using it are well understood, as are its potential weaknesses.
 Open. NFS is an open protocol, with its continued development documented in
internet specifications as a free and open network protocol.
 Cost-effective. NFS is a low-cost solution for network file sharing that is easy to
set up because it uses the existing network infrastructure.
 Centrally managed. NFS's centralized management decreases the need for added
software and disk space on individual user systems.
 User-friendly. The protocol is easy to use and enables users to access remote files
on remote hosts in the same way they access local ones.
 Distributed. NFS can be used as a distributed file system, reducing the need
for removable media storage devices.
 Secure. With NFS, there is less removable media like CDs, DVDs, Blu-
ray disks, diskettes and USB drives in circulation, making the system more
secure.
Disadvantages of NFS
Some of the drawbacks of using NFS include the following:

 Dependence on RPCs makes NFS inherently insecure; it should only be used
on a trusted network behind a firewall. Otherwise, NFS will be vulnerable to
internet threats.
 Some reviews of NFSv4 and NFSv4.1 suggest that these versions have limited
bandwidth and scalability and that NFS slows down during heavy network
traffic. The bandwidth and scalability issue is reported to have improved with
NFSv4.2.
CODA

Why is Coda promising and potentially very important?

Coda is a distributed filesystem with its origin in AFS2. It has many features that are very
desirable for network filesystems. Currently, Coda has several features not found elsewhere.

1. disconnected operation for mobile computing


2. is freely available under a liberal license
3. high performance through client side persistent caching
4. server replication
5. security model for authentication, encryption and access control
6. continued operation during partial network failures in server network
7. network bandwidth adaptation
8. good scalability
9. well defined semantics of sharing, even in the presence of network failures

Current activities on Coda.

CMU is making a serious effort to improve Coda. We believe that the system needs to be
taken from its current status to a widely available system. The research to date has produced a
lot of information regarding performance and implementation on which the design was based.
We are now in a position to further develop and adapt the system for wider use. We will
emphasize:

 reliability and performance


 ports to important platforms
 documentation, mailing groups
 extensions in functionality

5 MARKS

1. What is a Distributed Operating System?

2. Explain about distributed operating system issues.

3. Discuss about communication primitives.

4. What are the case studies?

10 MARKS

1. Explain about distributed operating systems.

2. Discuss about the Sun Network File System.

3. Explain about deadlock handling strategies.

4. Explain about Lamport's Logical Clocks.

MCQ:

1. In distributed system, each processor has its own ___________


a) local memory
b) clock
c) both local memory and clock
d) none of the mentioned
Answer: c
Explanation: None.

2. If one site fails in distributed system then ___________


a) the remaining sites can continue operating
b) all the sites will stop working
c) directly connected sites will stop working
d) none of the mentioned
Answer: a
Explanation: None.
3. Network operating system runs on ___________
a) server
b) every system in the network
c) both server and every system in the network
d) none of the mentioned
Answer: a
Explanation: None.

4. Which technique is based on compile-time program transformation for accessing remote


data in a distributed-memory parallel system?
a) cache coherence scheme
b) computation migration
c) remote procedure call
d) message passing
Answer: b
Explanation: None.

5. Logical extension of computation migration is ___________


a) process migration
b) system migration
c) thread migration
d) data migration
Answer: a
Explanation: None.

6. Processes on the remote systems are identified by ___________


a) host ID
b) host name and identifier
c) identifier
d) process ID
Answer: b
Explanation: None.

7. Which routing technique is used in a distributed system?


a) fixed routing
b) virtual routing
c) dynamic routing
d) all of the mentioned
Answer: d
Explanation: None.

8. In distributed systems, link and site failure is detected by ___________


a) polling
b) handshaking
c) token passing
d) none of the mentioned
Answer: b
Explanation: None.

9. The capability of a system to adapt the increased service load is called ___________
a) scalability
b) tolerance
c) capacity
d) none of the mentioned
Answer: a
Explanation: None.
10. Internet provides _______ for remote login.
a) telnet
b) http
c) ftp
d) rpc
Answer: a
Explanation: None.
11. What is not true about a distributed system?
a) It is a collection of processor
b) All processors are synchronized
c) They do not share memory
d) None of the mentioned
Answer: b
Explanation: None.
12. What are the characteristics of processor in distributed system?
a) They vary in size and function
b) They are same in size and function
c) They are manufactured with single purpose
d) They are real-time devices
Answer: a
Explanation: None.

13. What are the characteristics of a distributed file system?


a) Its users, servers and storage devices are dispersed
b) Service activity is not carried out across the network
c) They have single centralized data repository
d) There are multiple dependent storage devices
Answer: a
Explanation: None.
14. What is not a major reason for building distributed systems?
a) Resource sharing
b) Computation speedup
c) Reliability
d) Simplicity
Answer: d
Explanation: None.
15. What are the types of distributed operating system?
a) Network Operating system
b) Zone based Operating system
c) Level based Operating system
d) All of the mentioned
Answer: a
Explanation: None
16. What are characteristic of Network Operating Systems?
a) Users are aware of multiplicity of machines
b) They are transparent
c) They are simple to use
d) All of the mentioned
Answer: a
Explanation: None.
17. How is access to resources of various machines is done?
a) Remote logging using ssh or telnet
b) Zone are configured for automatic access
c) FTP is not used
d) All of the mentioned
Answer: a
Explanation: None.
18. What are the characteristics of Distributed Operating system?
a) Users are aware of multiplicity of machines
b) Access is done like local resources
c) Users are aware of multiplicity of machines
d) They have multiple zones to access files.
Answer: b
Explanation: None.

19. What are the characteristics of data migration?


a) transfer data by entire file or immediate portion required
b) transfer the computation rather than the data
c) execute an entire process or parts of it at different sites
d) none of the mentioned
Answer: a
Explanation: None.
20. What are the characteristics of computation migration?
a) transfer data by entire file or immediate portion required
b) transfer the computation rather than the data
c) execute an entire process or parts of it at different sites
d) none of the mentioned
Answer: b
Explanation: None.
21. What are the characteristics of process migration?
a) transfer data by entire file or immediate portion required
b) transfer the computation rather than the data
c) execute an entire process or parts of it at different sites
d) none of the mentioned
Answer: c
Explanation: None.
22.What are the different ways in which clients and servers are dispersed across machines?
a) Servers may not run on dedicated machines
b) Servers and clients can be on same machines
c) Distribution cannot be interposed between a OS and the file system
d) OS cannot be distributed with the file system a part of that distribution
Answer: b
Explanation: None.

23. What are not the characteristics of a DFS?


a) login transparency and access transparency
b) Files need not contain information about their physical location
c) No Multiplicity of users
d) No Multiplicity if files
Answer: c
Explanation: None.
24. What are characteristic of a DFS?
a) Fault tolerance
b) Scalability
c) Heterogeneity of the system
d) Upgradation
Answer: d
Explanation: None.
25. What are the different ways file accesses take place?
a) sequential access
b) direct access
c) indexed sequential access
d) all of the mentioned
Answer: d
Explanation: None.
26. Which is not a major component of a file system?
a) Directory service
b) Authorization service
c) Shadow service
d) System service
Answer: c
Explanation: None.

27. What are the different ways mounting of the file system?
a) boot mounting
b) auto mounting
c) explicit mounting
d) all of the mentioned

Answer: d
Explanation: None.

28. What is the advantage of caching in remote file access?


a) Reduced network traffic by retaining recently accessed disk blocks
b) Faster network access
c) Copies of data creates backup automatically
d) None of the mentioned
Answer: a
Explanation: None.
29. What is networked virtual memory?
a) Caching
b) Segmentation
c) RAM disk
d) None of the mentioned
Answer: a
Explanation: None.

30. What are the examples of state information?


a) opened files and their clients
b) file descriptors and file handles
c) current file position pointers
d) all of the mentioned
Answer: d
Explanation: None.

31. Which is not an example of state information?


a) Mounting information
b) Description of HDD space
c) Session keys
d) Lock status
Answer: b
Explanation: None.

32. What is a stateless file server?


a) It keeps tracks of states of different objects
b) It maintains internally no state information at all
c) It maintains some information in them
d) None of the mentioned
Answer: b
Explanation: None.
33. What are the characteristics of the stateless server?
a) Easier to implement
b) They are not fault-tolerant upon client or server failures
c) They store all information file server
d) They are redundant to keep data safe
Answer: a
Explanation: None.
34. Implementation of a stateless file server must not follow?
a) Idempotency requirement
b) Encryption of keys
c) File locking mechanism
d) Cache consistency
Answer: b
Explanation: None.
35. What are the advantages of file replication?
a) Improves availability & performance
b) Decreases performance
c) They are consistent
d) Improves speed
Answer: a
Explanation: None.
36. What are characteristic of NFS protocol?
a) Search for file within directory
b) Read a set of directory entries
c) Manipulate links and directories
d) All of the mentioned
Answer: d
Explanation: None.
37. What is the coherency of replicated data?
a) All replicas are identical at all times
b) Replicas are perceived as identical only at some points in time
c) Users always read the most recent data in the replicas
d) All of the mentioned
Answer: d
Explanation: None.
38. What are the three popular semantic modes?
a) Unix, Coherent & Session semantics
b) Unix, Transaction & Session semantics
c) Coherent, Transaction & Session semantics
d) Session, Coherent semantics
Answer: b
Explanation: None.

39. What are the characteristics of Unix semantics?


a) Easy to implement in a single processor system
b) Data cached on a per process basis using write through case control
c) Write-back enhances access performance
d) All of the mentioned
Answer: d
Explanation: None.

40. What are the characteristics of transaction semantics?


a) Suitable for applications that are concerned about coherence of data
b) The users of this model are interested in the atomicity property for their transaction
c) Easy to implement in a single processor system
d) Write-back enhances access performance
Answer: b
Explanation: None.

41. What are non characteristics of session semantics?


a) Each client obtains a working copy from the server
b) When file is closed, the modified file is copied to the file server
c) The burden of coordinating file sharing is ignored by the system
d) Easy to implement in a single processor system.
Answer: d
Explanation: None.

42. The NFS servers ____________


a) are stateless
b) save the current state of the request
c) maybe stateless
d) none of the mentioned
Answer: a
Explanation: None.
43. Every NFS request has a _________ allowing the server to determine if a request is
duplicated or if any are missing.
a) name
b) transaction
c) sequence number
d) all of the mentioned
Answer: c
Explanation: None.
44. A server crash and recovery will __________ to a client.
a) be visible
b) affect
c) be invisible
d) harm
Answer: c
Explanation: All blocks that the server is managing for the client will be intact.
45. The server must write all NFS data ___________
a) synchronously
b) asynchronously
c) index-wise
d) none of the mentioned
Answer: a
Explanation: None.

46. A single NFS write procedure ____________


a) can be atomic
b) is atomic
c) is non atomic
d) none of the mentioned
Answer: b
Explanation: None.
47. The NFS protocol __________ concurrency control mechanisms.
a) provides
b) does not provide
c) may provide
d) none of the mentioned
Answer: b
Explanation: None.
48. _______________ in NFS involves the parsing of a path name into separate directory
entries – or components.
a) Path parse
b) Path name parse
c) Path name translation
d) Path name parsing
Answer: c
Explanation: None.
49. For every pair of component and directory vnode after path name translation
____________
a) a single NFS lookup call is used sequentially
b) a single NFS lookup call is used beginning from the last component
c) at least two NFS lookup calls per component are performed
d) a separate NFS lookup call is performed
Answer: d
Explanation: None.

50. When a client has a cascading mount _______ server(s) is/are involved in a path name
traversal.
a) at least one
b) more than one
c) more than two
d) more than three
Answer: b
Explanation: None.
UNIT-3
Realtime Operating Systems : Introduction – Applications of Real Time Systems – Basic
Model of Real Time System – Characteristics – Safety and Reliability - Real Time Task
Scheduling

REALTIME OPERATING SYSTEMS :

INTRODUCTION:

 A real-time operating system (RTOS) is a special-purpose operating system used in


computers that has strict time constraints for any job to be performed.
 It is employed mostly in those systems in which the results of the computations are
used to influence a process while it is executing.
 Whenever an event external to the computer occurs, it is communicated to the
computer with the help of some sensor used to monitor the event.
 The sensor produces the signal that is interpreted by the operating system as an
interrupt. On receiving an interrupt, the operating system invokes a specific process or
a set of processes to serve the interrupt.

 This process is completely uninterrupted unless a higher priority interrupt occurs


during its execution.
 Therefore, there must be a strict hierarchy of priority among the interrupts. The
interrupt with the highest priority must be allowed to initiate the process, while lower
priority interrupts should be kept in a buffer and handled later.
 Interrupt management is important in such an operating system.

Real-time systems employ special-purpose operating systems because conventional
operating systems do not provide the required performance.

The various examples of Real-time operating systems are:


o MTS
o Lynx
o QNX
o VxWorks etc.

APPLICATIONS OF REAL-TIME OPERATING SYSTEM (RTOS):

RTOS is used in real-time applications that must work within specific deadlines. Following
are the common areas of applications of Real-time operating systems are given below.

o Real-time operating systems are used in radar systems.
o Real-time operating systems are used in missile guidance.
o Real-time operating systems are used in online stock trading.
o Real-time operating systems are used in cell phone switching systems.
o Real-time operating systems are used in air traffic control systems.
o Real-time operating systems are used in medical imaging systems.
o Real-time operating systems are used in fuel injection systems.
o Real-time operating systems are used in traffic control systems.
o Real-time operating systems are used in autopilot flight simulators.

Types of Real-time operating system

Following are the three types of RTOS systems:


Hard Real-Time operating system:

 In Hard RTOS, all critical tasks must be completed within the specified time duration,
i.e., within the given deadline. Not meeting the deadline would result in critical
failures such as damage to equipment or even loss of human life.

For Example,

 Let's take the example of airbags provided by carmakers along with a handle in the
driver's seat. When the driver applies the brakes at a particular instant, the airbags inflate
and prevent the driver's head from hitting the handle. Had there been a delay, even of
milliseconds, it would have resulted in an accident.

Soft Real-Time operating system:

 Soft RTOS accepts a few delays by the operating system. In this kind
of RTOS, there may be a deadline assigned for a particular job, but a delay for a
small amount of time is acceptable. So, deadlines are handled softly by
this kind of RTOS.

For Example,

 This type of system is used in online transaction systems and stock price
quotation systems.

Firm Real-Time operating system:

 A Firm RTOS also needs to observe deadlines. However, missing a deadline
may not have a massive effect, but it could cause undesired results, such as a
significant reduction in the quality of a product.

For Example, this system is used in various forms of Multimedia applications.

Advantages of Real-time operating system:

The benefits of real-time operating system are as follows-:

o It is easy to design, develop and execute real-time applications under a real-time
operating system.
o Real-time operating systems are more compact, so these systems require less
memory space.
o A real-time operating system makes maximum utilization of devices and systems.
o It focuses on running applications and gives less importance to applications that are
waiting in the queue.
o Since the size of programs is small, an RTOS can also be used in embedded systems,
such as those in transport and others.
o These types of systems are error-free.
o Memory allocation is best managed in these types of systems.

Disadvantages of Real-time operating system:

The disadvantages of real-time operating systems are as follows-

o Real-time operating systems have complicated design principles and are very costly to
develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.

BASIC MODEL OF REAL TIME SYSTEM


 Basic Model of a Real-time System: The basic model of a real-time system
presents the overview of all the components involved in a real-time system.
 A real-time system includes various hardware and software embedded in such a way
that specific tasks can be performed within the time constraints allowed.
 The accuracy and correctness involved in real-time system makes the model
complex.
 There are various models of real-time system which are more complex and are hard
to understand.
 Here we will discuss a basic model of real-time system which has some commonly
used terms and hardware.
 Following diagram represents a basic model of Real-time System:

Sensor: A sensor is used for the conversion of some physical events or characteristics
into electrical signals. Sensors are hardware devices that take input from the
environment and give it to the system after converting it. For example, a thermometer
takes the temperature as a physical characteristic and then converts it into electrical
signals for the system.
Actuator: An actuator is the reverse device of a sensor. Where a sensor converts physical
events into electrical signals, an actuator does the reverse. It converts electrical signals into
physical events or characteristics. It takes the input from the output interface of the
system. The output from the actuator may be in any form of physical action. Some of the
commonly used actuators are motors and heaters.
Signal Conditioning Unit: When the sensor converts physical actions into electrical
signals, the computer can't use them directly. Hence, after the conversion of physical
actions into electrical signals, conditioning is needed. Similarly, when electrical signals are
sent to the actuator as output, conditioning is required again.
Therefore, signal conditioning is of two types:
 Input Conditioning Unit: It is used for conditioning the electrical signals
coming from sensor.
 Output Conditioning Unit: It is used for conditioning the electrical signals
coming from the system.
Interface Unit: Interface units are basically used for the conversion of digital signals to analog
and vice versa. Signals coming from the input conditioning unit are analog, and the system
operates on digital signals only, so the interface unit is used to change the analog signals to
digital signals. Similarly, while transmitting signals to the output conditioning unit, the
signals are changed from digital to analog. On this basis, the interface unit is also of two
types (a conceptual sketch of this conversion follows the list below):
 Input Interface: It is used for conversion of analog signals to digital.
 Output Interface: It is used for conversion of digital signals to analog.
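The following small Python sketch is only a conceptual illustration (not a device driver) of
what the input and output interfaces do: quantizing an analog voltage into an n-bit digital
code and converting a code back to an approximate voltage. The reference voltage and bit
width used here are assumed values.

# Conceptual sketch, assuming a 5 V reference and a 10-bit converter.
def analog_to_digital(voltage, v_ref=5.0, bits=10):
    """Map a 0..v_ref analog voltage to an integer code in 0..2**bits - 1."""
    levels = (1 << bits) - 1
    voltage = max(0.0, min(voltage, v_ref))   # clamp to the measurable range
    return round(voltage / v_ref * levels)

def digital_to_analog(code, v_ref=5.0, bits=10):
    """Map an integer code back to an approximate analog voltage."""
    levels = (1 << bits) - 1
    return code / levels * v_ref

print(analog_to_digital(2.5))   # 512 for a 10-bit converter
print(digital_to_analog(512))   # approximately 2.5 V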
CHARACTERISTICS
Characteristics of Real-time System:
Following are the some of the characteristics of Real-time System:
1. Time Constraints: Time constraints in real-time systems refer to the time
interval allotted for the response of the ongoing program. This deadline means
that the task should be completed within this time interval. The real-time
system is responsible for the completion of all tasks within their time
intervals.
2. Correctness: Correctness is one of the prominent parts of real-time systems.
A real-time system produces a correct result within the given time interval. If the
result is not obtained within the given time interval, it is not considered
correct even if it is logically right. In real-time systems, correctness of a result
means obtaining the correct result within the time constraint.
3. Embedded: All the real-time systems are embedded now-a-days. Embedded
system means that combination of hardware and software designed for a specific
purpose. Real-time systems collect the data from the environment and passes to
other components of the system for processing.
4. Safety: Safety is necessary for any system but real-time systems provide critical
safety. Real-time systems also can perform for a long time without failures. It
also recovers very soon when failure occurs in the system and it does not cause
any harm to the data and information.
5. Concurrency: Real-time systems are concurrent that means it can respond to a
several number of processes at a time. There are several different tasks going on
within the system and it responds accordingly to every task in short intervals.
This makes the real-time systems concurrent systems.
6. Distributed: In various real-time systems, all the components of the systems are
connected in a distributed way. The real-time systems are connected in such a
way that different components are at different geographical locations. Thus all
the operations of real-time systems are operated in distributed ways.
7. Stability: Even when the load is very heavy, real-time systems respond within the
time constraint, i.e. real-time systems do not delay the results of tasks even
when there are several tasks going on at the same time. This brings stability to
real-time systems.
8. Fault tolerance: Real-time systems must be designed to tolerate and recover
from faults or errors. The system should be able to detect errors and recover
from them without affecting the system’s performance or output.
9. Determinism: Real-time systems must exhibit deterministic behavior, which
means that the system’s behavior must be predictable and repeatable for a given
input. The system must always produce the same output for a given input,
regardless of the load or other factors.
10. Real-time communication: Real-time systems often require real-time
communication between different components or devices. The system must
ensure that communication is reliable, fast, and secure.
11. Resource management: Real-time systems must manage their resources
efficiently, including processing power, memory, and input/output devices. The
system must ensure that resources are used optimally to meet the time
constraints and produce correct results.
12. Heterogeneous environment: Real-time systems may operate in a
heterogeneous environment, where different components or devices have
different characteristics or capabilities. The system must be designed to handle
these differences and ensure that all components work together seamlessly.
13. Scalability: Real-time systems must be scalable, which means that the system
must be able to handle varying workloads and increase or decrease its resources
as needed.
14. Security: Real-time systems may handle sensitive data or operate in critical
environments, which makes security a crucial aspect. The system must ensure
that data is protected and access is restricted to authorized users only.
REAL TIME TASK SCHEDULING
Tasks in Real-Time Systems

A real-time operating system (RTOS) serves real-time applications that process data
without any buffering delay. In an RTOS, the processing time requirement is measured in
tenths-of-a-second increments. It is a time-bound system defined by fixed time
constraints. In this type of system, processing must be done within the specified constraints.
Otherwise, the system will fail.

Real-time tasks are the tasks associated with the quantitative expression of time. This
quantitative expression of time describes the behavior of the real-time tasks. Real-time tasks
are scheduled to finish all the computation events involved in it into timing constraint. The
timing constraint related to the real-time tasks is the deadline. All the real-time tasks need to
be completed before the deadline. For example, Input-output interaction with devices, web
browsing, etc.

Types of Tasks in Real-Time Systems

There are the following types of tasks in real-time systems, such as:

1. Periodic Task

In periodic tasks, jobs are released at regular intervals. A periodic task repeats itself after a
fixed time interval. A periodic task is denoted by a four-tuple: Ti = < Φi, Pi, ei, Di >

Where,

o Φi: It is the phase of the task, and phase is the release time of the first job in the task.
If the phase is not mentioned, then the release time of the first job is assumed to be
zero.
o Pi: It is the period of the task, i.e., the time interval between the release times of two
consecutive jobs.
o ei: It is the execution time of the task.
o Di: It is the relative deadline of the task.

For example: Consider the task Ti with period = 5 and execution time = 3

Phase is not given so, assume the release time of the first job as zero. So the job of this task is
first released at t = 0, then it executes for 3s, and then the next job is released at t = 5, which
executes for 3s, and the next job is released at t = 10. So jobs are released at t = 5k where k =
0, 1. . . N

Hyper period of a set of periodic tasks is the least common multiple of all the tasks in that set.
For example, two tasks T1 and T2 having period 4 and 5 respectively will have a hyper
period, H = lcm(p1, p2) = lcm(4, 5) = 20. The hyper period is the time after which the pattern
of job release times starts to repeat.
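As a quick illustration of these definitions, the following Python sketch (illustrative only)
lists the release instants of the two example tasks within one hyper period.

# Illustrative sketch of the periodic task model: release instants and hyper period.
from math import lcm

def release_times(phase, period, hyper_period):
    """Release instants of a periodic task within one hyper period."""
    return list(range(phase, hyper_period, period))

p1, p2 = 4, 5                 # periods of T1 and T2 from the example above
H = lcm(p1, p2)               # hyper period = 20
print("Hyper period:", H)
print("T1 releases:", release_times(0, p1, H))   # [0, 4, 8, 12, 16]
print("T2 releases:", release_times(0, p2, H))   # [0, 5, 10, 15]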

2. Dynamic Tasks

It is a sequential program that is invoked by the occurrence of an event. An event may be


generated by the processes external to the system or by processes internal to the system.
Dynamically arriving tasks can be categorized on their criticality and knowledge about their
occurrence times.

1. Aperiodic Tasks: In this type of task, jobs are released at arbitrary time intervals.
Aperiodic tasks have soft deadlines or no deadlines.
2. Sporadic Tasks:They are similar to aperiodic tasks, i.e., they repeat at random
instances. The only difference is that sporadic tasks have hard deadlines. Three tuples
denote a sporadic task: Ti =(ei, gi, Di)
o Where
o ei: It is the execution time of the task.
o gi: It is the minimum separation between the occurrence of two consecutive
instances of the task.
o Di: It is the relative deadline of the task.

3. Critical Tasks
Critical tasks are those whose timely executions are critical. If deadlines are missed,
catastrophes occur.

For example, life-support systems and the stability control of aircraft. If critical tasks are
executed at a higher frequency, then it is necessary.

4. Non-critical Tasks

Non-critical tasks are real times tasks. As the name implies, they are not critical to the
application. However, they can deal with time, varying data, and hence they are useless if not
completed within a deadline. The goal of scheduling these tasks is to maximize the
percentage of jobs successfully executed within their deadlines.

Task Scheduling

Real-time task scheduling essentially refers to determining how the various tasks are picked
for execution by the operating system. Every operating system relies on one or more task
schedulers to prepare the schedule of execution of various tasks needed to run. Each task
scheduler is characterized by the scheduling algorithm it employs. A large number of
algorithms for real-time scheduling tasks have so far been developed.

Classification of Task Scheduling

The following concepts classify task scheduling in a real-time system:
1. Valid Schedule: A valid schedule for a set of tasks is one where at most one task is
assigned to a processor at a time, no task is scheduled before its arrival time, and the
precedence and resource constraints of all tasks are satisfied.
2. Feasible Schedule: A valid schedule is called a feasible schedule only if all tasks
meet their respective time constraints in the schedule.
3. Proficient Scheduler: A task scheduler S1 is more proficient than another scheduler
S2 if S1 can feasibly schedule all task sets that S2 can feasibly schedule, but not vice
versa. S1 can feasibly schedule all task sets that S2 can, but there is at least one task
set that S2 cannot feasibly schedule, whereas S1 can. If S1 can feasibly schedule all
task sets that S2 can feasibly schedule and vice versa, then S1 and S2 are called
equally proficient schedulers.
4. Optimal Scheduler: A real-time task scheduler is called optimal if it can feasibly
schedule any task set that any other scheduler can feasibly schedule. In other words, it
would not be possible to find a more proficient scheduling algorithm than an optimal
scheduler. If an optimal scheduler cannot schedule some task set, then no other
scheduler should produce a feasible schedule for that task set.
5. Scheduling Points: The scheduling points of a scheduler are the points on a timeline
at which the scheduler makes decisions regarding which task is to be run next. It is
important to note that a task scheduler does not need to run continuously, and the
operating system activates it only at the scheduling points to decide which task to run
next. The scheduling points are defined as instants marked by interrupts generated by
a periodic timer in a clock-driven scheduler. The occurrence of certain events
determines the scheduling points in an event-driven scheduler.
6. Preemptive Scheduler: A preemptive scheduler is one that, when a higher priority
task arrives, suspends any lower priority task that may be executing and takes up the
higher priority task for execution. Thus, in a preemptive scheduler, it cannot be the
case that a higher priority task is ready and waiting for execution, and the lower
priority task is executing. A preempted lower priority task can resume its execution
only when no higher priority task is ready.
7. Utilization: The processor utilization (or simply utilization) of a task is the average
time for which it executes per unit time interval. In notations:
for a periodic task Ti, the utilization ui = ei/pi, where
o ei is the execution time and
o pi is the period of Ti.

For a set of periodic tasks {Ti}, the total utilization due to all tasks is U = Σ(i=1 to n) ei/pi.
Any good scheduling algorithm's objective is to feasibly schedule even those task sets
with very high utilization, i.e., utilization approaching 1. Of course, on a uniprocessor,
it is not possible to schedule task sets having utilization of more than 1. (A short
computation sketch appears after this list.)
8. Jitter
Jitter is the deviation of a periodic task from its strict periodic behavior. The arrival
time jitter is the deviation of the task from the precise periodic time of arrival. It may
be caused by imprecise clocks or other factors such as network congestions. Similarly,
completion time jitter is the deviation of the completion of a task from precise
periodic points.
The completion time jitter may be caused by the specific scheduling algorithm
employed, which takes up a task for scheduling as per convenience and the load at an
instant, rather than scheduling at some strict time instants. Jitters are undesirable for
some applications.
Sometimes the actual release time of a job is not known; we only know that ri is in a range
[ri-, ri+]. This range is known as release time jitter. Here
o ri- is how early a job can be released and,
o ri+ is how late a job can be released.

Only the range [ei-, ei+] of the execution time of a job is known. Here

o ei- is the minimum amount of time required by a job to complete its execution
and,
o ei+ is the maximum amount of time required by a job to complete its
execution.
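The computation sketch referred to in item 7 above is given here. It is an illustrative Python
snippet that sums ei/pi for a task set and checks the necessary uniprocessor condition
U <= 1; the example numbers match the exercise that appears later in this unit (e = 25,
p = 50 and e = 35, p = 80).

# Illustrative sketch: total utilization U = sum(ei/pi) and a necessary
# uniprocessor feasibility condition (U must not exceed 1).
def total_utilization(tasks):
    """tasks: list of (execution_time, period) pairs."""
    return sum(e / p for e, p in tasks)

tasks = [(25, 50), (35, 80)]           # P1 and P2 from the example later in this unit
U = total_utilization(tasks)
print(f"U = {U:.4f}")                  # 0.9375, i.e. about 0.94
print("U <= 1 on a uniprocessor:", U <= 1)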

Precedence Constraint of Jobs

Jobs in a task are independent if they can be executed in any order. If there is a specific order
in which jobs must be executed, then jobs are said to have precedence constraints. For
representing precedence constraints of jobs, a partial order relation < is used, and this is
called precedence relation. A job Ji is a predecessor of job Jj if Ji < Jj, i.e., Jj cannot begin its
execution until Ji completes. Ji is an immediate predecessor of Jj if Ji < Jj, and there is no
other job Jk such that Ji < Jk < Jj. Ji and Jj are independent if neither Ji < Jj nor Jj < Ji is true.

An efficient way to represent precedence constraints is by using a directed graph G = (J, <)
where J is the set of jobs. This graph is known as the precedence graph. Vertices of the graph
represent jobs, and precedence constraints are represented using directed edges. If there is a
directed edge from Ji to Jj, it means that Ji is the immediate predecessor of Jj.

For example: Consider a task T having 5 jobs J1, J2, J3, J4, and J5, such that J2 and J5 cannot
begin their execution until J1 completes and there are no other constraints. The precedence
constraints for this example are:

J1 < J2 and J1 < J5


Set representation of precedence graph:

1. < (1) = { }
2. < (2) = {1}
3. < (3) = { }
4. < (4) = { }
5. < (5) = {1}
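The precedence graph for the J1..J5 example above can also be represented directly as sets
of immediate predecessors. The Python sketch below is illustrative only; the helper
function simply lists which jobs are allowed to start, given the jobs already completed.

# Illustrative sketch: precedence constraints J1 < J2 and J1 < J5, stored as
# immediate-predecessor sets, plus a helper that respects the precedence relation.
predecessors = {
    "J1": set(),
    "J2": {"J1"},
    "J3": set(),
    "J4": set(),
    "J5": {"J1"},
}

def ready_jobs(completed):
    """Jobs whose predecessors have all completed and that are not yet done."""
    return [j for j, pred in predecessors.items()
            if pred <= completed and j not in completed]

print(ready_jobs(set()))     # ['J1', 'J3', 'J4'] -- J2 and J5 must wait for J1
print(ready_jobs({"J1"}))    # ['J2', 'J3', 'J4', 'J5']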

Consider another example where a precedence graph is given, and you have to find
precedence constraints.

From the above graph, we derive the following precedence constraints:

1. J1< J2
2. J2< J3
3. J2< J4
4. J3< J4
5 MARKS

1. What is a Real-time Operating System?

2. Discuss about the characteristics of real-time systems.

3. What are the applications of real-time systems?

10 MARKS

1. Explain about real-time operating systems.

2. Explain about the basic model of a real-time system.

3. Explain about real-time task scheduling.

MCQ

1. In real time operating system ____________


a) all processes have the same priority
b) a task must be serviced by its deadline period
c) process scheduling can be done only once
d) kernel is not required
Answer: b
Explanation: None.
2. Hard real time operating system has ______________ jitter than a soft real time operating
system.
a) less
b) more
c) equal
d) none of the mentioned
Answer: a
Explanation: Jitter is the undesired deviation from the true periodicity.
3. For real time operating systems, interrupt latency should be ____________
a) minimal
b) maximum
c) zero
d) dependent on the scheduling
Answer: a
Explanation: Interrupt latency is the time duration between the generation of interrupt and
execution of its service.
4. In rate monotonic scheduling ____________
a) shorter duration job has higher priority
b) longer duration job has higher priority
c) priority does not depend on the duration of the job
d) none of the mentioned
Answer: a
Explanation: None.
5. In which scheduling certain amount of CPU time is allocated to each process?
a) earliest deadline first scheduling
b) proportional share scheduling
c) equal share scheduling
d) none of the mentioned
Answer: b
Explanation: None.
6. The problem of priority inversion can be solved by ____________
a) priority inheritance protocol
b) priority inversion protocol
c) both priority inheritance and inversion protocol
d) none of the mentioned
Answer: a
Explanation: None.

7. Time duration required for scheduling dispatcher to stop one process and start another is
known as ____________
a) process latency
b) dispatch latency
c) execution latency
d) interrupt latency
Answer: b
Explanation: None.

8. Time required to synchronous switch from the context of one thread to the context of
another thread is called?
a) threads fly-back time
b) jitter
c) context switch time
d) none of the mentioned
Answer: c
Explanation: None.
9. Which one of the following is a real time operating system?
a) RTLinux
b) VxWorks
c) Windows CE
d) All of the mentioned
Answer: d
Explanation: None.

10. VxWorks is centered around ____________


a) wind microkernel
b) linux kernel
c) unix kernel
d) none of the mentioned
Answer: a
Explanation: None.
11. In a real time system the computer results ____________
a) must be produced within a specific deadline period
b) may be produced at any time
c) may be correct
d) all of the mentioned
Answer: a
Explanation: None.
12. In a safety critical system, incorrect operation ____________
a) does not affect much
b) causes minor problems
c) causes major and serious problems
d) none of the mentioned
Answer: c
Explanation: None.
13. Antilock brake systems, flight management systems, pacemakers are examples of
____________
a) safety critical system
b) hard real time system
c) soft real time system
d) safety critical system and hard real time system
Answer: d
Explanation: None.

14. In a ______ real time system, it is guaranteed that critical real time tasks will be
completed within their deadlines.
a) soft
b) hard
c) critical
d) none of the mentioned
Answer: b
Explanation: None.
15. Some of the properties of real time systems include ____________
a) single purpose
b) inexpensively mass produced
c) small size
d) all of the mentioned
Answer: d
Explanation: None.
16. The amount of memory in a real time system is generally ____________
a) less compared to PCs
b) high compared to PCs
c) same as in PCs
d) they do not have any memory
Answer: a
Explanation: None.
17. What is the priority of a real time task?
a) must degrade over time
b) must not degrade over time
c) may degrade over time
d) none of the mentioned
Answer: b
Explanation: None.

18. Memory management units ____________


a) increase the cost of the system
b) increase the power consumption of the system
c) increase the time required to complete an operation
d) all of the mentioned
Answer: d
Explanation: None.

19. The technique in which the CPU generates physical addresses directly is known as
____________
a) relocation register method
b) real addressing
c) virtual addressing
d) none of the mentioned
Answer: b
Explanation: None.
20. Earliest deadline first algorithm assigns priorities according to ____________
a) periods
b) deadlines
c) burst times
d) none of the mentioned
Answer: b
Explanation: None.

21. A process P1 has a period of 50 and a CPU burst of t1 = 25, P2 has a period of 80 and a
CPU burst of 35. The total CPU utilization is ____________
a) 0.90
b) 0.74
c) 0.94
d) 0.80
Answer: c
Explanation: None.

22. A process P1 has a period of 50 and a CPU burst of t1 = 25, P2 has a period of 80 and a
CPU burst of 35., the priorities of P1 and P2 are?
a) remain the same throughout
b) keep varying from time to time
c) may or may not be change
d) none of the mentioned
Answer: b
Explanation: None.
23. A process P1 has a period of 50 and a CPU burst of t1 = 25, P2 has a period of 80 and a
CPU burst of 35., can the two processes be scheduled using the EDF algorithm without
missing their respective deadlines?
a) Yes
b) No
c) Maybe
d) None of the mentioned
Answer: a
Explanation: None.

24. Using EDF algorithm practically, it is impossible to achieve 100 percent utilization due to
__________
a) the cost of context switching
b) interrupt handling
c) power consumption
d) all of the mentioned
Answer: a
Explanation: None.

25. T shares of time are allocated among all processes out of N shares in __________
scheduling algorithm.
a) rate monotonic
b) proportional share
c) earliest deadline first
d) none of the mentioned
Answer: b
Explanation: None.

26. If there are a total of T = 100 shares to be divided among three processes, A, B and C. A
is assigned 50 shares, B is assigned 15 shares and C is assigned 20 shares.
A will have ______ percent of the total processor time.
a) 20
b) 15
c) 50
d) none of the mentioned
Answer: c
Explanation: None.
27. If there are a total of T = 100 shares to be divided among three processes, A, B and C. A
is assigned 50 shares, B is assigned 15 shares and C is assigned 20 shares.
B will have ______ percent of the total processor time.
a) 20
b) 15
c) 50
d) none of the mentioned
Answer: b
Explanation: None.
28. If there are a total of T = 100 shares to be divided among three processes, A, B and C. A
is assigned 50 shares, B is assigned 15 shares and C is assigned 20 shares.
C will have ______ percent of the total processor time.
a) 20
b) 15
c) 50
d) none of the mentioned
Answer: a
Explanation: None.

29. If there are a total of T = 100 shares to be divided among three processes, A, B and C. A
is assigned 50 shares, B is assigned 15 shares and C is assigned 20 shares.
If a new process D requested 30 shares, the admission controller would __________
a) allocate 30 shares to it
b) deny entry to D in the system
c) all of the mentioned
d) none of the mentioned
Answer: b
Explanation: None.
30. CPU scheduling is the basis of ___________
a) multiprocessor systems
b) multiprogramming operating systems
c) larger memory sized systems
d) none of the mentioned
Answer: b
Explanation: None.

31. With multiprogramming ______ is used productively.


a) time
b) space
c) money
d) all of the mentioned
Answer: a
Explanation: None.
32. What are the two steps of a process execution?
a) I/O & OS Burst
b) CPU & I/O Burst
c) Memory & I/O Burst
d) OS & Memory Burst
Answer: b
Explanation: None.
33. An I/O bound program will typically have ____________
a) a few very short CPU bursts
b) many very short I/O bursts
c) many very short CPU bursts
d) a few very short I/O bursts
Answer: c
Explanation: None.
34. A process is selected from the ______ queue by the ________ scheduler, to be executed.
a) blocked, short term
b) wait, long term
c) ready, short term
d) ready, long term
Answer: c
Explanation: None.
35. In the following cases non – preemptive scheduling occurs?
a) When a process switches from the running state to the ready state
b) When a process goes from the running state to the waiting state
c) When a process switches from the waiting state to the ready state
d) All of the mentioned
Answer: b
Explanation: There is no other choice.
36. The switching of the CPU from one process or thread to another is called ____________
a) process switch
b) task switch
c) context switch
d) all of the mentioned
Answer: d
Explanation: None.

37. What is Dispatch latency?


a) the speed of dispatching a process from running to the ready state
b) the time of dispatching a process from running to ready state and keeping the CPU idle
c) the time to stop one process and start running another one
d) none of the mentioned

Answer: c
Explanation: None.
38. Scheduling is done so as to ____________
a) increase CPU utilization
b) decrease CPU utilization
c) keep the CPU more idle
d) none of the mentioned
Answer: a
Explanation: None.

39. Scheduling is done so as to ____________


a) increase the throughput
b) decrease the throughput
c) increase the duration of a specific amount of work
d) none of the mentioned
Answer: a
Explanation: None.

40. What is Turnaround time?


a) the total waiting time for a process to finish execution
b) the total time spent in the ready queue
c) the total time spent in the running queue
d) the total time from the completion till the submission of a process
Answer: d
Explanation: None.

41. Scheduling is done so as to ____________


a) increase the turnaround time
b) decrease the turnaround time
c) keep the turnaround time same
d) there is no relation between scheduling and turnaround time
Answer: b
Explanation: None.
42. What is Waiting time?
a) the total time in the blocked and waiting queues
b) the total time spent in the ready queue
c) the total time spent in the running queue
d) the total time from the completion till the submission of a process
Answer: b
Explanation: None.
43. Scheduling is done so as to ____________
a) increase the waiting time
b) keep the waiting time the same
c) decrease the waiting time
d) none of the mentioned
Answer: c
Explanation: None.
44. What is Response time?
a) the total time taken from the submission time till the completion time
b) the total time taken from the submission time till the first response is produced
c) the total time taken from submission time till the response is output
d) none of the mentioned
Answer: b
Explanation: None.

45. What is the disadvantage of real addressing mode?


a) there is a lot of cost involved
b) time consumption overhead
c) absence of memory protection between processes
d) restricted access to memory locations by processes
Answer: c
Explanation: None.

46. Preemptive, priority based scheduling guarantees ____________


a) hard real time functionality
b) soft real time functionality
c) protection of memory
d) none of the mentioned
Answer: b
Explanation: None.

47. Real time systems must have ____________


a) preemptive kernels
b) non preemptive kernels
c) preemptive kernels or non preemptive kernels
d) neither preemptive nor non preemptive kernels
Answer: a
Explanation: None.

48. What is Event latency?


a) the amount of time an event takes to occur from when the system started
b) the amount of time from the event occurrence till the system stops
c) the amount of time from event occurrence till the event crashes
d) the amount of time that elapses from when an event occurs to when it is serviced.
Answer: d
Explanation: None.

49. Interrupt latency refers to the period of time ____________


a) from the occurrence of an event to the arrival of an interrupt
b) from the occurrence of an event to the servicing of an interrupt
c) from arrival of an interrupt to the start of the interrupt service routine
d) none of the mentioned
Answer: c
Explanation: None.

50. Real time systems need to __________ the interrupt latency.


a) minimize
b) maximize
c) not bother about
d) none of the mentioned
Answer: a
Explanation: None.
UNIT 4
Handheld Operating System:
 Handheld operating systems run on handheld devices such as smartphones and tablets; such a device is sometimes also known as a Personal Digital Assistant (PDA). The most popular handheld operating systems in today's world are Android and iOS. These operating systems need high-performance processors and are also embedded with various types of sensors.
Some points related to Handheld operating systems are as follows:

1. Since the development of handheld computers in the 1990s, the demand for software to operate and run on
these devices has increased.
2. Three major competitors have emerged in the handheld PC world with three different operating systems
for these handheld PCs.
3. Out of the three companies, the first was the Palm Corporation with their PalmOS.
4. Microsoft also released what was originally called Windows CE. Microsoft’s recently released operating
system for the handheld PC comes under the name of Pocket PC.
5. More recently, some companies producing handheld PCs have also started offering a handheld version of
the Linux operating system on their machines.
Features of Handheld Operating System:
1. It provides real-time operation.
2. There is direct usage of interrupts.
3. Input/Output device flexibility.
4. Configurability
Types of Handheld Operating Systems:
Types of Handheld Operating Systems are as follows:
1. Palm OS
2. Symbian OS
3. Linux OS
4. Windows
5. Android
Palm OS:
 Since the Palm Pilot was introduced in 1996, the Palm OS platform has provided various mobile devices with essential business tools, as well as the capability to access the internet via a wireless connection.
 These devices have mainly concentrated on providing basic personal information-management applications. The
latest Palm products have progressed a lot, packing in more storage, wireless internet, etc.
Symbian OS:
 It was the most widely used smartphone operating system, running on the ARM architecture, before it was discontinued in 2014. It was developed by Symbian Ltd.
 This operating system consists of two subsystems where the first one is the microkernel-based operating system
which has its associated libraries and the second one is the interface of the operating system with which a user can
interact.
 Since this operating system consumes very little power, it was developed for smartphones and handheld devices.
 It has good connectivity as well as stability.
 It can run applications that are written in Python, Ruby, .NET, etc.
Linux OS:
 Linux OS is an open-source operating system project and a cross-platform system that was developed based on UNIX. It was developed by Linus Torvalds. It is system software that basically allows apps and users to perform tasks on the PC.
 Linux is free and can be easily downloaded from the internet and it is considered that it has the best community
support. Linux is portable which means it can be installed on different types of devices like mobile, computers, and
tablets.
 It is a multi-user operating system.
 The Linux command interpreter program, which is called BASH, is used to execute commands.
 It provides user security using authentication features.
Windows OS:
 Windows is an operating system developed by Microsoft. Its interface which is called Graphical User Interface
eliminates the need to memorize commands for the command line by using a mouse to navigate through menus,
dialog boxes, and buttons.
 It is named Windows because its programs are displayed in the form of rectangular windows on the screen. It has been designed for both beginners as well as professionals.
 It comes preloaded with many tools which help the users to complete all types of tasks on their computer, mobiles,
etc.
 It has a large user base so there is a much larger selection of available software programs.
Android OS:
 It is a Google Linux-based operating system that is mainly designed for touchscreen devices such as phones, tablets, etc. Three architectures, ARM, Intel, and MIPS, are used by the hardware to support Android.
 It lets users manipulate their devices intuitively, with finger movements that mirror common motions such as swiping, tapping, etc.
 The Android operating system can be used by anyone because it is an open-source operating system and it is also free.
 It offers 2D and 3D graphics, GSM connectivity, etc.
 There is a huge list of applications for users since Play Store offers over one million apps.
 Professionals who want to develop applications for the Android OS can download the Android Software Development Kit (SDK). By downloading it, they can easily develop apps for Android.
REQUIREMENTS IN HAND HELD OS
 Installations of handheld computers are progressing in a variety of fields such as logistics and manufacturing with
applications including inventory management, data verification, process management, traceability, and shipping
mistake prevention.
 This section explains the environment that is required in order to actually install handheld computers.
1. Requirements for Operating Handheld Computers
2. Determining the Hardware Configuration
3. Power Supply Environment
4. Printers and Other Peripheral Equipment
5. Developing Software
6. KEYENCE Enables Easy Software Development With No Programming Required
1.Requirements for Operating Handheld Computers
 The advantage of a handheld computer is its ability to perform multiple functions as a standalone device, filling
many roles such as reading various codes as well as collecting, sending, and receiving data. However, before
handheld computers can be installed, it is necessary to organize the surrounding environment.
 The necessity of preparing both hardware and software: for operation, a variety of equipment is necessary.
Examples include the PC or server to communicate with, the battery that supplies the power and the dedicated
battery charger, and the dedicated printer used to output the recorded data. It is also necessary to develop
software to provide system functions and operability that match the usage environment and the purpose.
2. Determining the Hardware Configuration

Communication environment

 Handheld computers can read and accumulate data in a standalone manner, but integrating these
devices with PCs and servers is essential in aggregating data, sharing data with different
departments, and making use of data from other departments. The problem is determining which
method to use to communicate between handheld computers and PCs/servers.
 The answer is determined by the usage environment and generally is selected from one of two
options: using a communication unit and using a wireless LAN.
Use a communication unit when the usage location is limited
 If the usage location is limited and is fixed, select the communication unit method. Use a LAN cable or a USB
cable to connect the communication unit to a PC.

Use a wireless LAN in large warehouses, stores, and factories


 If you want to use handheld computers while moving throughout a large warehouse, store, or factory, select the wireless type. It is necessary to build an on-site LAN by installing dedicated access points. When building an environment with a wireless LAN, it is often the case that the handheld computers are used over a wide range, so measure the environment and determine the number and installation locations of the access points required to match the operating environment.

3.Power Supply Environment

 Regarding portability and ease of use, handheld computers are cordless and battery powered. There are various types of batteries that are used, including dedicated rechargeable batteries and general-purpose dry cell batteries.
 When just using handheld computers within a company or facility, it is sufficient to prepare dedicated
cradles that automatically charge the handheld computers when they are docked such as at the end of
work.
 In situations where it is expected that technicians and sales personnel will take the handheld
computers outside of the company, it is most common to select a handheld computer type that can
use dry cell batteries or general purpose rechargeable batteries that can be purchased immediately
when outside of the office in order to replace dead batteries instead of selecting a handheld computer
type that uses a dedicated battery charger.

4.Printers and Other Peripheral Equipment


One common thing that is seen is home delivery drivers printing off data when delivering packages. An advantage of handheld computers is that in addition to being able to read and aggregate data, they can print data by connecting a printer. A separate printer is required in order to print labels with data read in a warehouse or factory, affix these labels to cardboard boxes, and issue barcodes for shipping. Compact, portable printers that can be used with handheld computers are also available, so use these printers to match the application.
5. Developing Software
Difficulty of developing dedicated software

 The development of dedicated software is more difficult than establishing the hardware environment that includes the handheld computers and peripheral equipment such as communication equipment, batteries, and printers. This is because system construction, such as determining how to aggregate and process the read data and how to implement on-screen operations, is essentially the domain of system engineers. Naturally, the development costs during hardware installation require a large investment. There is no shortage of cases in which operators want to install handheld computers to make work more efficient but run into the bottleneck of software development and are unable to reach their expected efficiency.

General software development methods

The handheld computer installation conditions vary depending on the specifications and on whether a
corporate system is present, but the development methods can generally be separated into the four listed
below.

Embedded applications
 With this pattern, the application to execute is embedded in the handheld computer. This is the optimal
method for corporations that want to accumulate data, implement rich device control, and develop
applications easily.

Web applications

With this pattern, the browser on the handheld computer accesses web pages on a web server. This is the
optimal method for corporations that want to use or are already using web applications and want to manage
applications in a centralized manner.
Terminal services
With this pattern, the handheld computer emulates PC applications. This is the optimal method for
corporations that want to use PC applications as-is and manage applications in a centralized manner.

Terminal emulators/middleware

This pattern uses third-party terminal emulators/middleware. This is the optimal method for corporations that want to create a fast and rich web system, use AS/400 and SAP emulators on handheld computers, and manage applications in a centralized manner.

Selecting the development method according to needs


It is necessary to select the development method from the four listed above by finding the method that matches the on-site needs, that is, what operators want to do with handheld computers. This section introduces the main functions that are required of handheld computers and the development methods that can be used to realize these needs.

6.KEYENCE Enables Easy Software Development With No Programming Required

 When using handheld computers, there are different software development methods such as embedded applications, web applications, terminal services, and terminal emulators/middleware. However, all methods incur development costs and have their own delivery dates. KEYENCE's development tools solve this problem and make it possible to more easily develop dedicated software on your own.
 The greatest characteristic of these tools is their simple visual development. Anyone, even people with absolutely no knowledge of difficult computer languages, can develop dedicated software just by selecting the required functions, icons, and other such items from the rich templates and GUI (graphical user interface) tools displayed on the PC screen.
 This eliminates waste by reducing the hassle, cost, and time required to order development from dedicated vendors and engineers. What's more, systems can be developed easily and quickly on your own, which makes it possible to support low-cost, short-term system projects without difficulty.
Introduction To Mobile Operating System – PALM OS

 PALM OS is an operating system for personal digital assistants, designed for touch screen. It consists of a limited
number of features designed for low memory and processor usage which in turn helps in getting longer battery life.

Features of PALM OS
 Elementary memory management system.
 Provides PALM Emulator.
 Handwriting recognition is possible.
 Supports recording and playback.
 Supports C and C++ software.

Palm Architecture

 The User Interface in the architecture is used for graphical input-output.


 The Memory Management section is used for maintaining databases, global variables, etc.
 System Management’s job is to maintain events, calendars, dates, times, etc.
 Communication TCP/IP as the name denotes is simply used for communication.
 Microkernel is an essential tool in architecture. It is responsible for providing the mechanism needed for the proper
functioning of an operating system.

Development Cycle

For the development of the PALM OS, these are the phases it has to go through before it can be used in the market:
 Editing the code for the operating system, that is, checking for errors and correcting them.
 Compile and Debug the code to check for bugs and correct functioning of the code.
 Run the program on a mobile device or related device.
 If all the above phases are passed, we can finally have our finished product which is the operating system for mobile
devices named PALM OS.
Advantages

 Its limited feature set is designed for low memory and processor usage, which means longer battery life.
 No need to upgrade the operating system as it is handled automatically in PALM OS.
 More applications are available for users.
 Extended connectivity for users. Users can now connect to wide areas.

Disadvantages

 The user cannot download applications using the external memory in PALM OS. It will be a disadvantage for users with
limited internal memory.
 System features and extended connectivity are limited compared to what is offered by other operating systems.
SYMBIAN OS

The Origin of Symbian OS

Symbian is a discontinued mobile operating system developed and sold by Symbian Ltd. It was a closed-source mobile operating system designed for smartphones, introduced in 1998. Symbian OS was designed to be used on higher-end mobile phones. It was an operating system for mobile devices with limited resources, multitasking needs, and soft real-time requirements.

The Symbian operating system evolved from Psion's EPOC, which ran on ARM processors. In June 1998, Psion Software was renamed Symbian Ltd. as the result of a joint venture between Psion and the phone manufacturers Ericsson, Motorola, and Nokia.

In 2008, Nokia announced the acquisition of Symbian Ltd., and a new open-source, non-profit organization called the Symbian Foundation was established.

In May 2014 the development of Symbian OS was discontinued.

 In the 1990s, the software company Psion was actively working on the development of innovative mobile operating systems. Their earlier products were 16-bit systems, but in 1994 they began working on a 32-bit version programmed in C++, and it was named EPOC32. Then, in 1998, Psion rebranded as Symbian Ltd. in collaboration with the popular mobile phone brands Nokia, Ericsson, and Motorola.
 Symbian Ltd. began upgrading EPOC32, and the new version was named Symbian OS.

Features of Symbian OS

Symbian OS was equipped with the following features:

User Interface

 Symbian offered an interactive graphical user interface for mobile phones with the AVKON toolkit, also called S60. However, it was designed mainly to be operated with a keyboard. As the demand for touchscreen phones increased, Symbian shifted to the Qt framework to design a better user interface for touchscreen phones.
Browser

Initially, Symbian phones came with Opera as the default browser. Later on, a built-in browser was developed for the Symbian OS based on WebKit. In phones built on the S60 platform, this browser was simply named Web Browser for S60. It boasted faster speed and a better interface.
App Development

 The standard software development kit to build apps for Symbian OS was Qt, with the C++ programming language. UIQ and S60 also provided SDKs for app development on Symbian, but Qt became the standard later on. As for the programming language, even though C++ is preferred, it is also possible to build with Python, Java, and Adobe Flash Lite.
Multimedia

 To fulfill consumer demand for entertainment, Symbian OS supported high-quality recording and
playback of audio and video, along with image conversion features. It expanded the ability of
mobile phones to handle multimedia files.
Security

 As security is one of the most important things to consider for an operating system, Symbian
offered strong protection against malware and came with reliable security certificates. It proved
to be a secure operating system for phones and a safe platform for app development.
Open Source

 After Nokia acquired Symbian Ltd., the Symbian Foundation was formed, and Symbian OS was
made open source. It opened doors of opportunity for developers to contribute to this operating
system's growth and develop innovative mobile applications.
Advantages of Symbian OS
 It has a greater range of applications.
 Connectivity was a lot easier.
 It has a better built-in WAP browser.
 It has an open platform based on C++.
 It provides a feature for power saving.
 It provides fully multitaskable processing.

The disadvantages of the Symbian OS were:

 It was not available on the PC.


 Symbian OS has less accuracy as compared to Android.
 It has security issues as it can be easily affected by viruses.

Android Operating System


 Android is a mobile operating system based on a modified version of the Linux kernel and other open-source software, designed primarily for touchscreen mobile devices such as smartphones and tablets. Android is developed by a partnership of developers known as the Open Handset Alliance and commercially sponsored by Google. It was unveiled in November 2007, with the first commercial Android device, the HTC Dream, launched in September 2008.
 About 70% of Android smartphones run Google's ecosystem, some with a vendor-customized user interface and software suite, such as TouchWiz and later One UI by Samsung, and HTC Sense.

Features of Android Operating System

Below are the following unique features and characteristics of the android operating system, such as:

1. Near Field Communication (NFC)

Most Android devices support NFC, which allows electronic devices to interact across short distances easily. The main goal here is to create
a payment option that is simpler than carrying cash or credit cards, and while the market hasn't exploded as many experts had predicted, there
may be an alternative in the works, in the form of Bluetooth Low Energy (BLE).

2. Infrared Transmission

The Android operating system supports a built-in infrared transmitter that allows you to use your phone or tablet as a remote control.

3. Automation

The Tasker app allows control of app permissions and also automates them.

4. Wireless App Downloads

You can download apps on your PC by using the Android Market or third-party options like AppBrain. Then it automatically syncs them to your Droid, and no plugging is required.

5. Storage and Battery Swap

Android phones also have unique hardware capabilities. Google's OS makes it possible to upgrade, replace, and remove your battery that no longer holds a charge. In addition, Android
phones come with SD card slots for expandable storage.

6. Custom Home Screens

While it's possible to hack certain phones to customize the home screen, Android comes with this capability from the get-go. Download a third-party launcher like Apex, Nova, and you can
add gestures, new shortcuts, or even performance enhancements for older-model devices.

7. Widgets

Apps are versatile, but sometimes you want information at a glance instead of having to open an app and wait for it to load. Android widgets let you display just about any feature you
choose on the home screen, including weather apps, music widgets, or productivity tools that helpfully remind you of upcoming meetings or approaching deadlines.

8. Custom ROMs

Because the Android operating system is open-source, developers can twist the current OS and build their versions, which users can
download and install in place of the stock OS. Some are filled with features, while others change the look and feel of a device. Chances are,
if there's a feature you want, someone has already built a custom ROM for it.
Architecture of Android OS

The android architecture contains a different number of components to support any android device needs. Android software contains an
open-source Linux Kernel with many C/C++ libraries exposed through application framework services.

Among all the components, Linux Kernel provides the main operating system functions to Smartphone and Dalvik Virtual Machine (DVM)
to provide a platform for running an android application. An android operating system is a stack of software components roughly divided
into five sections and four main layers, as shown in the below architecture diagram.

o Applications
o Application Framework
o Android Runtime
o Platform Libraries
o Linux Kernel

1. Applications

An application is the top layer of the android architecture. The pre-installed applications like camera, gallery, home, contacts, etc., and third-
party applications downloaded from the play store like games, chat applications, etc., will be installed on this layer.

It runs within the Android run time with the help of the classes and services provided by the application framework.

2. Application framework

Application Framework provides several important classes used to create an Android application. It provides a generic abstraction for hardware access and helps in managing the user interface with application resources. Generally, it provides the services through which we can create a particular class and make that class useful for application creation.

It includes different types of services, such as the activity manager, notification manager, view system, package manager, etc., which are helpful for developing our application according to our requirements.
The Application Framework layer provides many higher-level services to applications in the form of Java classes. Application developers are
allowed to make use of these services in their applications. The Android framework includes the following key services:

o Activity Manager: Controls all aspects of the application lifecycle and activity stack.
o Content Providers: Allows applications to publish and share data with other applications.
o Resource Manager: Provides access to non-code embedded resources such as strings, colour settings and user interface layouts.
o Notifications Manager: Allows applications to display alerts and notifications to the user.
o View System: An extensible set of views used to create application user interfaces.

3. Application runtime

Android Runtime environment contains components like core libraries and the Dalvik virtual machine (DVM). It
provides the base for the application framework and powers our application with the help of the core libraries.

Like Java Virtual Machine (JVM), Dalvik Virtual Machine (DVM) is a register-based virtual machine designed and
optimized for Android to ensure that a device can run multiple instances efficiently.

It depends on the layer Linux kernel for threading and low-level memory management. The core libraries enable us to
implement android applications using the standard JAVA or Kotlin programming languages.

4. Platform libraries

The Platform Libraries include various C/C++ core libraries and Java-based libraries such as Media, Graphics,
Surface Manager, OpenGL, etc., to support Android development.

o app: Provides access to the application model and is the cornerstone of all Android applications.
o content: Facilitates content access, publishing and messaging between applications and application components.
o database: Used to access data published by content providers and includes SQLite database, management classes.
o OpenGL: A Java interface to the OpenGL ES 3D graphics rendering API.
o os: Provides applications with access to standard operating system services, including messages, system services and
inter-process communication.
o text: Used to render and manipulate text on a device display.
o view: The fundamental building blocks of application user interfaces.
o widget: A rich collection of pre-built user interface components such as buttons, labels, list views, layout managers,
radio buttons etc.
o WebKit: A set of classes intended to allow web-browsing capabilities to be built into applications.
o media: Media library provides support to play and record an audio and video format.
o surface manager: It is responsible for managing access to the display subsystem.
o SQLite: It provides database support, and FreeType provides font support.
o SSL: Secure Sockets Layer is a security technology to establish an encrypted link between a web server and a web
browser.

5. Linux Kernel

Linux Kernel is the heart of the android architecture. It manages all the available drivers such as display, camera, Bluetooth, audio, memory,
etc., required during the runtime.

The Linux Kernel will provide an abstraction layer between the device hardware and the other android architecture components. It is
responsible for the management of memory, power, devices etc. The features of the Linux kernel are:

o Security: The Linux kernel handles the security between the application and the system.
o Memory Management: It efficiently handles memory management, thereby providing the freedom to develop our apps.
o Process Management: It manages the process well, allocates resources to processes whenever they need them.
o Network Stack: It effectively handles network communication.
o Driver Model: It ensures that the application works properly on the device and hardware manufacturers responsible for building
their drivers into the Linux build.

Android Applications

Android applications are usually developed in the Java language using the Android Software Development Kit. Once developed, Android
applications can be packaged easily and sold out either through a store such as Google Play, SlideME, Opera Mobile Store, Mobango, F-
droid or the Amazon Appstore.

Android powers hundreds of millions of mobile devices in more than 190 countries around the world. It's the largest installed base of any
mobile platform and growing fast. Every day more than 1 million new Android devices are activated worldwide.

Android Emulator

The Emulator is an application provided with the Android SDK. It is used to develop and test Android applications without using any physical device.

The android emulator has all of the hardware and software features like mobile devices except phone calls. It provides a variety of navigation
and control keys. It also provides a screen to display your application. The emulators utilize the android virtual device configurations. Once
your application is running on it, it can use services of the android platform to help other applications, access the network, play audio, video,
store, and retrieve the data.

SECURING HANDHELD SYSTEM.

1. Enable user authentication


2. Use a password manager
3. Always run updates
4. Avoid public wi-fi
5. Enable remote lock
6. Cloud backups
7. Use MDM/MAM
1. Enable User Authentication On
 It's so easy for company laptops, tablets, and smartphones to get lost or stolen as we leave them in taxi cabs, restaurants, airplanes... the list goes on.
 The first thing to do is to ensure that all your mobile user devices have the screen lock turned on and that
they require a password or PIN to gain entry. There is a ton of valuable information on the device!
 Most devices have biometric security options like FaceID and TouchID, which definitely makes the device
more accessible, but not necessarily more secure.
 That's why it is a good idea to take your mobile security practices a step further and implement a Multi-Factor Authentication (MFA, also known as Two-Factor Authentication) policy for all end-users as an additional layer of security.
2. Use A Password Manager

 Let's be honest, passwords are not disappearing any time soon, and most of us find them hard to remember. We're also asked to change them frequently, which makes the whole process even more painful.

 Enter the password manager, which you can think of as a "book of passwords" locked by a master key
that only you know.

 Not only do they store passwords, but they also generate strong, unique passwords that save you from
using your cat's name or child's birthday...over and over.

 Although Microsoft has enabled password removal on their Microsoft 365 accounts, we're still far from being rid of them forever! As long as we have sensitive data and corporate data to protect, passwords will be a critical security measure.
3. Update Your Operating Systems (OS) Regularly

 If you're using outdated software, your risk of getting hacked skyrockets. Vendors such as Apple (iOS), Google, and Microsoft constantly provide security updates to stay ahead of security vulnerabilities.

 Don't ignore those alerts to upgrade your laptop, tablet, or smartphone. To help with this, ensure you have
automatic software updates turned on by default on your mobile devices. Regularly updating your
operating system ensures you have the latest security configurations available!

 When it comes to your laptop, your IT department or your IT services provider should be pushing you
appropriate software updates on a regular basis.

4. Avoid Public Wi-Fi

 Although it's very tempting to use that free Wi-Fi at the coffee shop, airport or hotel lobby - don't do it.

 Anytime you connect to another organization's network, you're increasing your risk of exposure to malware and hackers. There are so many online videos and easily accessible tools that even a novice hacker can intercept traffic flowing over Wi-Fi, accessing valuable information such as credit card numbers, bank account numbers, passwords, and other private data.

 Interesting but disturbing fact: although public Wi-Fi and Bluetooth are a considerable security gap and
most of us (91%) know it, 89% of us ignore it. Choose to be in the minority here!

5.Remote Lock and Data Wipe


 Every business should have a Bring Your Own Device (BYOD) policy that includes a strict remote
lock and data wipe policy.

 Under this policy, whenever a mobile device is believed to be stolen or lost, the business can protect the lost
data by remotely wiping the device or, at minimum, locking access.
 Where this gets a bit sticky is that you're essentially giving the business permission to delete all personal data as well, as typically in a BYOD situation the employee is using the device for both work and play.

 Most IT security experts view remote lock and data wipe as a basic and necessary security caution, so
employees should be educated and made aware of any such policy in advance.

6.Cloud Security and Data Backup

 Keep in mind that your public cloud-based apps and services are also being accessed by employee-owned mobile devices, increasing your company's risk of data loss.

 That's why, for starters, back up your cloud data! If your device is lost or stolen, you'll still want to be able to access any data that might have been compromised as quickly as possible.

 Select a cloud platform that maintains a version history of your files and allows you to roll back to those earlier versions, at least for the past 30 days.

 Google's G Suite, Microsoft Office 365, and Dropbox support this.

 Once those 30 days have elapsed, deleted files or earlier versions are gone for good.

 You can safeguard against this by investing in a cloud-to-cloud backup solution, which will back up your
data for a relatively nominal monthly fee.

7. Understand and Utilize Mobile Device Management (MDM) and Mobile Application
Management (MAM)
 Mobile security has become the hottest topic in the IT world. How do we allow users to access the data they
need remotely, while keeping that data safe from whatever lurks around on these potentially unprotected
devices?

 The solution is two-fold: Mobile Device Management (MDM) and Mobile Application Management
(MAM).

 Mobile Device Management is the configuration, monitoring, and management of your employees'
personal devices, such as phones, tablets, and laptops.

 Mobile Application Management is configuring, monitoring, and managing the applications on those
mobile devices. This includes things like Microsoft 365 and authenticator apps.

 When combined, MDM and MAM can become powerful security solutions, preventing
unauthorized devices from accessing your company network of applications and data.

 Note that both solutions should be sourced, implemented, and managed by IT experts-in-house or
outsourced-familiar with mobile security. For example ,you can look at this short case study on how we
 Implemented Microsoft In tune MDM for a healthcare provider, including the details behind the
implementation.

 Implementing these 7 best practices for your employees and end-users, and enforcing strong
mobile security policies, will go a long way to keeping your mobile device security in check.
1 mark
1. Handheld systems include ?

A. PFAs
B. PDAs
C. PZAs
D. PUAs
Ans : B
2. Which of the following is an example of PDAs?

A. Palm-Pilots
B. Cellular Telephones
C. Both A and B
D. None of the above
Ans : C
3. Many handheld devices have between ___________ of memory

A. 256 KB and 8 MB
B. 512 KB and 2 MB
C. 256 KB and 4 MB
D. 512 KB and 8 MB
Ans : D
4. Handheld devices do not use virtual memory techniques.

A. TRUE
B. FALSE
C. Can be true or false
D. Can not say
Ans : A

5. To include a faster processor in a handheld device would require a ________ battery.

A. very small
B. small
C. medium
D. larger
Ans : D
6. Some handheld devices may use wireless technology such as BlueTooth, allowing remote access to e-mail and web browsing.

A. Yes
B. No
C. Can be yes or no
D. Can not say
Ans : A

6) Android is –

a. an operating system

b. a web browser

c. a web server

d. None of the above

7) Under which of the following Android is licensed?


a. OSS
b. Sourceforge
c. Apache/MIT
d. None of the above

8) For which of the following Android is mainly developed?

a. Servers
b. Desktops
c. Laptops
d. Mobile devices

9) Which of the following is the first mobile phone released that ran the Android OS?

a. HTC Hero
b. Google gPhone
c. T - Mobile G1
d. None of the above

10) Which of the following virtual machine is used by the Android operating system?

a. JVM
b. Dalvik virtual machine
c. Simple virtual machine
d. None of the above

11) Android is based on which of the following language?

a. Java
b. C++
c. C
d. None of the above

12) APK stands for -

a. Android Phone Kit


b. Android Page Kit
c. Android Package Kit
d. None of the above

13) What does API stand for?

a. Application Programming Interface


b. Android Programming Interface
c. Android Page Interface
d. Application Page Interface

14) Which of the following converts Java byte code into Dalvik byte code?

a. Dalvik converter
b. Dex compiler
c. Mobile interpretive compiler (MIC)
d. None of the above

15) How can we stop the services in android?

a. By using the stopSelf() and stopService() method


b. By using the finish() method
c. By using system.exit() method
d. None of the above

16) What is an activity in android?

a. android class
b. android package
c. A single screen in an application with supporting java code
d. None of the above

17) How can we kill an activity in android?

a. Using finish() method


b. Using finishActivity(int requestCode)
c. Both (a) and (b)
d. Neither (a) nor (b)

18) On which of the following, developers can test the application, during developing the android applications?

a. Third-party emulators
b. Emulator included in Android SDK
c. Physical android phone
d. All of the above

19) Which of the following kernel is used in Android?

a. MAC
b. Windows
c. Linux
d. Redhat

20) Which of the following is the parent class of Activity?

a. context
b. object
c. contextThemeWrapper
d. None of the above
UNIT-5

INTRODUCTION- Linux Operating System


 The Linux Operating System is a type of operating system that is similar to Unix,
and it is built upon the Linux Kernel. The Linux Kernel is like the brain of the
operating system because it manages how the computer interacts with its hardware
and resources.
 It makes sure everything works smoothly and efficiently. But the Linux Kernel
alone is not enough to make a complete operating system. To create a full and
functional system, the Linux Kernel is combined with a collection of software
packages and utilities, which are together called Linux distributions.

 These distributions make the Linux Operating System ready for users to run their
applications and perform tasks on their computers securely and effectively. Linux
distributions come in different flavors, each tailored to suit the specific needs and
preferences of users.

 Linux is a powerful and flexible family of operating systems that are free to use and
share. It was created by a person named Linus Torvalds in 1991. What’s cool is that
anyone can see how the system works because its source code is open for everyone
to explore and modify. This openness encourages people from all over the world to
work together and make Linux better and better.
Linux Distribution
 Linux distribution is an operating system that is made up of a collection of
software based on Linux kernel or you can say distribution contains the Linux
kernel and supporting libraries and software. And you can get Linux based
operating system by downloading one of the Linux distributions and these
distributions are available for different types of devices like embedded devices,
personal computers, etc.
 Around 600 + Linux Distributions are available and some of the popular Linux
distributions are:
 MX Linux
 Manjaro
 Linux Mint
 elementary
 Ubuntu
 Debian
Architecture of Linux
Linux architecture has the following components:
1. Kernel: Kernel is the core of the Linux based operating system. It virtualizes the
common hardware resources of the computer to provide each process with its virtual
resources. This makes the process seem as if it is the sole process running on the
machine. The kernel is also responsible for preventing and mitigating conflicts
between different processes. Different types of the kernel are:
 Monolithic Kernel
 Hybrid kernels
 Exokernels
 Microkernels
2. System Library: Linux uses system libraries, also known as shared libraries, to
implement various functionalities of the operating system. These libraries contain
pre-written code that applications can use to perform specific tasks. By using these
libraries, developers can save time and effort, as they don’t need to write the same
code repeatedly. System libraries act as an interface between applications and the
kernel, providing a standardized and efficient way for applications to interact with
the underlying system.
3. Shell: The shell is the user interface of the Linux Operating System. It allows users
to interact with the system by entering commands, which the shell interprets and
executes. The shell serves as a bridge between the user and the kernel, forwarding
the user’s requests to the kernel for processing. It provides a convenient way for
users to perform various tasks, such as running programs, managing files, and
configuring the system.
4. Hardware Layer: The hardware layer encompasses all the physical components of
the computer, such as RAM (Random Access Memory), HDD (Hard Disk Drive),
CPU (Central Processing Unit), and input/output devices. This layer is responsible
for interacting with the Linux Operating System and providing the necessary
resources for the system and applications to function properly. The Linux kernel and
system libraries enable communication and control over these hardware components,
ensuring that they work harmoniously together.
5. System Utility: System utilities are essential tools and programs provided by the
Linux Operating System to manage and configure various aspects of the system.
These utilities perform tasks such as installing software, configuring network
settings, monitoring system performance, managing users and permissions, and
much more. System utilities simplify system administration tasks, making it easier
for users to maintain their Linux systems efficiently.
Advantages of Linux
 The main advantage of Linux is it is an open-source operating system. This means
the source code is easily available for everyone and you are allowed to contribute,
modify and distribute the code to anyone without any permissions.
 In terms of security, Linux is more secure than any other operating system. It does
not mean that Linux is 100 percent secure, it has some malware for it but is less
vulnerable than any other operating system. So, it does not require any anti-virus
software.
 The software updates in Linux are easy and frequent.
 Various Linux distributions are available so that you can use them according to your
requirements or according to your taste.
 Linux is freely available to use on the internet.
 It has large community support.
 It provides high stability. It rarely slows down or freezes and there is no need to
reboot it after a short time.
 It maintains the privacy of the user.
 The performance of the Linux system is much higher than other operating systems. It
allows a large number of people to work at the same time and it handles them
efficiently.
 It is network friendly.
Disadvantages of Linux
 It is not very user-friendly. So, it may be confusing for beginners.
 It has fewer peripheral hardware drivers as compared to Windows.

Linux Memory Management


The subsystem of Linux memory management is responsible to manage the memory
inside the system. It contains the implementation of demand paging and virtual memory.

Also, it contains memory allocation for user space programs and kernel internal
structures. Linux memory management subsystem includes files mapping into the address
space of the processes and several other things.
Huge Pages

The translation of virtual addresses requires several memory accesses, and these memory accesses are very slow compared to the speed of the CPU. To avoid spending precious processor cycles on address translation, CPUs maintain a cache of such translations known as the Translation Lookaside Buffer (TLB). Huge pages (for example 2 MB instead of 4 KB on x86-64) let a single TLB entry cover far more memory, which reduces the pressure on this cache.
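As a rough illustration of how an application can ask for huge pages, the following C sketch (an assumption-laden example, not taken from the text above) requests a 2 MB huge-page-backed anonymous mapping with mmap(2) and falls back to normal pages if none are available. It assumes an x86-64 Linux system on which huge pages have been reserved by the administrator, for example via /proc/sys/vm/nr_hugepages; MAP_HUGETLB is Linux-specific.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (2 * 1024 * 1024)   /* one 2 MB huge page on x86-64 */

int main(void) {
    /* Try a huge-page mapping first: one TLB entry then covers 2 MB. */
    void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");        /* probably no huge pages reserved */
        p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);   /* fall back to 4 KB pages */
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
    }
    memset(p, 0, LEN);                      /* touch the memory */
    munmap(p, LEN);
    return 0;
}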

Virtual Memory Primer

In a computer system, physical memory is a limited resource. The physical memory isn't necessarily contiguous; it may be accessible as a set of distinct address ranges. Besides, different CPU architectures, and even different implementations of the same architecture, have different views of how these ranges are defined.

This makes dealing with physical memory directly quite difficult, and to hide this complexity the virtual memory mechanism was devised.

Zones

Linux groups memory pages into zones according to their possible usage. For example, ZONE_HIGHMEM contains memory that isn't permanently mapped into the kernel's address space, ZONE_DMA contains memory that can be used by devices for DMA, and ZONE_NORMAL contains normally addressed pages.

Page Cache

Since physical memory is volatile, the common way to get data into memory is to read it from files.

Whenever a file is read, the data is put into the page cache to avoid expensive disk accesses on subsequent reads.

Similarly, whenever a file is written, the data is placed in the page cache and is eventually written back to the backing storage device.
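The effect of the page cache can be seen with a small C sketch that reads the same file twice and times both passes; on most systems the second pass is served from the page cache and is noticeably faster for a file that was not already cached. The default path /etc/os-release and the 4 KB buffer are arbitrary choices for this illustration.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* Read the whole file and return the elapsed time in seconds. */
static double read_whole_file(const char *path) {
    char buf[4096];
    struct timespec t0, t1;
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); exit(1); }
    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (read(fd, buf, sizeof buf) > 0)
        ;                                   /* discard the data; we only time the I/O */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(int argc, char **argv) {
    const char *path = (argc > 1) ? argv[1] : "/etc/os-release";
    printf("first read : %f s (may go to disk)\n", read_whole_file(path));
    printf("second read: %f s (usually served from the page cache)\n",
           read_whole_file(path));
    return 0;
}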
Nodes

Many multi-processor machines are NUMA (Non-Uniform Memory Access) systems. In such systems the memory is organized into banks that have different access latency depending on their "distance" from the processor. Each bank is referred to as a node, and for each node Linux constructs an independent memory management subsystem. A node has its own set of zones, lists of free and used pages, and various statistics counters.

Anonymous Memory

An anonymous mapping, or anonymous memory, refers to memory that isn't backed by any file system. Such mappings are created implicitly for the program's heap and stack, or explicitly by calls to the mmap(2) system call.

Anonymous mappings usually only define the areas of virtual memory that a program is permitted to access.
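A minimal C sketch of an explicit anonymous mapping, assuming a Linux system: the mapping is not backed by any file, and physical pages are only allocated when the memory is first touched.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4096;                       /* one regular page */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS,        /* no file backing */
                   -1, 0);                             /* hence fd = -1 */
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    strcpy(p, "anonymous memory");           /* first write faults a page in */
    printf("%s at %p\n", p, (void *)p);
    munmap(p, len);
    return 0;
}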

OOM killer

It is possible that the kernel cannot reclaim enough memory to continue operating and that the memory of a heavily loaded machine is exhausted. In that case the OOM (out-of-memory) killer selects a process and terminates it in order to free memory so that the system can continue running.

Compaction

As the system runs, tasks allocate and free memory, and over time it becomes fragmented. Although virtual memory can present scattered physical pages as a contiguous range, sometimes large physically contiguous areas are needed. Memory compaction addresses this fragmentation problem.

Reclaim

Linux memory management treats pages differently according to their usage. Pages that can be freed, either because they cache data that also exists elsewhere on disk or because they can be swapped out to disk again, are called reclaimable.

PROCESS SCHEDULING IN LINUX


 Process scheduling is one of the most important aspects or roles of any operating
system.
 Process Scheduling is an important activity performed by the process manager of
the respective operating system.
 Scheduling in Linux deals with the removal of the current process from the CPU
and selecting another process for execution.
Scheduling Process Types in Linux

In the LINUX operating system, we have mainly two types of processes namely - Real-
time Process and Normal Process. Let us learn more about them in detail.

Real time Process

Real-time processes are processes that cannot be delayed in any situation. Real-time
processes are referred to as urgent processes.

There are mainly two types of real-time processes in LINUX namely:

 SCHED_FIFO
 SCHED_RR.

A real-time process will preempt all other running processes that have a lower priority.

For example, A migration process that is responsible for the distribution of the processes
across the CPU is a real-time process. Let us learn about different scheduling policies
used to deal with real-time processes briefly.

SCHED_FIFO
FIFO in SCHED_FIFO means First In First Out. Hence, the SCHED_FIFO policy
schedules the processes according to the arrival time of the process.

SCHED_RR
RR in SCHED_RR means Round Robin. The SCHED_RR policy schedules the
processes by giving them a fixed amount of time for execution. This fixed time is known
as time quantum.

Note: Real-time processes have priority ranging between 1 and 99.


Hence, SCHED_FIFO, and SCHED_RR policies deal with processes having a priority
higher than 0.

Normal Process

Normal Processes are the opposite of real-time processes. Normal processes will execute
or stop according to the time assigned by the process scheduler. Hence, a normal process
can suffer some delay if the CPU is busy executing other high-priority processes. Let us
learn about different scheduling policies used to deal with the normal processes in detail.

Normal (SCHED_NORMAL or SCHED_OTHER)


SCHED_NORMAL / SCHED_OTHER is the default or standard scheduling policy used
in the LINUX operating system. A time-sharing mechanism is used in the normal policy.
A time-sharing mechanism means assigning some specific amount of time to a process
for its execution. Normal policy deals with all the threads of processes that do not
need any real-time mechanism.

Batch (SCHED_BATCH)
As the name suggests, the SCHED_BATCH policy is used for executing a batch of
processes. This policy is somewhat similar to the Normal policy. SCHED_BATCH
policy deals with the non-interactive processes that are useful in optimizing the CPU
throughput time. SCHED_BATCH scheduling policy is used for a group of processes
having priority: 0.
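The policies described above can be selected from user space with the sched_setscheduler(2) system call. The sketch below, assuming a Linux system, reads the current policy of the calling process and tries to switch it to SCHED_FIFO; switching to a real-time policy normally requires root privileges (or CAP_SYS_NICE), and the priority value 50 is just one example from the real-time range 1-99.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static void show_policy(void) {
    int p = sched_getscheduler(0);           /* 0 means the calling process */
    printf("current policy: %s\n",
           p == SCHED_FIFO  ? "SCHED_FIFO"  :
           p == SCHED_RR    ? "SCHED_RR"    :
           p == SCHED_BATCH ? "SCHED_BATCH" : "SCHED_OTHER (normal)");
}

int main(void) {
    struct sched_param sp = { .sched_priority = 50 };   /* 1..99 for RT policies */
    show_policy();                                      /* usually SCHED_OTHER */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1)
        perror("sched_setscheduler(SCHED_FIFO)");       /* likely EPERM without root */
    show_policy();
    sp.sched_priority = 0;                              /* must be 0 for SCHED_OTHER */
    sched_setscheduler(0, SCHED_OTHER, &sp);            /* back to the default policy */
    return 0;
}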

ACCESSING LINUX FILES

 Linux file access permissions are used to control who is able to read,
write and execute a certain file.
 This is an important consideration due to the multi-user nature of Linux systems and as a security mechanism to protect the critical system files both from individual users and from malicious software or viruses.
 Access permissions are implemented at a file level with the
appropriate permission set based on the file owner, the group owner
of the file and worldwide access.
 In Linux, directories are also files and therefore the file permissions apply on a directory level as well, although some permissions are applied differently depending upon whether the file is a regular file or a directory.
 As devices are also represented as files, the same permission commands can be applied to control access to certain resources or external devices.

Basic File Permissions

Permission Groups
Each file and directory has three user based permission groups:

 Owner - The Owner permissions apply only to the owner of the file or directory; they will not impact the actions of other users.

 Group - The Group permissions apply only to the group that has been assigned to the file or directory; they will not affect the actions of other users.
 All users - The All Users permissions apply to all other users on the system; this is the permission group that you want to watch the most.

Permission Types
Each file or directory has three basic permission types:
 Read - The Read permission refers to a user's capability to read the
contents of the file.
 Write - The Write permissions refer to a user's capability to write or
modify a file or directory.
 Execute - The Execute permission affects a user's capability to
execute a file or view the contents of a directory.

Viewing the Permissions


You can view the permissions by checking the file or directory permissions in your favorite GUI file manager (which I will not cover here) or by reviewing the output of the "ls -l" command while in the terminal and while working in the directory which contains the file or folder.
The permissions on the command line are displayed as: _rwxrwxrwx 1 owner:group
 User rights/Permissions
o The first character, which I marked with an underscore, is the special permission flag that can vary.
o The following set of three characters (rwx) is for the owner permissions.
o The second set of three characters (rwx) is for the Group permissions.
o The third set of three characters (rwx) is for the All Users permissions.
 Following that grouping, the integer/number displays the number of hard links to the file.
 The last piece is the Owner and Group assignment, formatted as Owner:Group.

1. Modifying the Permissions
When in the command line, the permissions are edited by using the command chmod. You can assign the permissions explicitly or by using a binary reference as described below.
2. Explicitly Defining Permissions
To explicitly define permissions we need to reference the Permission Groups and Permission Types.
The Permission Groups used are:
 u - Owner
 g - Group
 o or a - All Users
The potential Assignment Operators are + (plus) and - (minus); these are used to tell the system whether to add or remove the specific permissions.
The Permission Types that are used are:
 r - Read
 w - Write
 x - Execute
So for an example, let's say we have a file named file1 that currently has the permissions set to _rw_rw_rw, which means that the owner, group and all users have read and write permission. Now we want to remove the read and write permissions from the all users group.
To make this modification we would invoke the command: chmod a-rw file1
To add the permissions back, we would invoke the command: chmod a+rw file1
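The same modification can also be made programmatically. The C sketch below, assuming a file named file1 exists in the current directory, prints the rwx permission string the way "ls -l" does and then clears the read and write bits for the owner, group and all other users, which is what "chmod a-rw file1" does.

#include <stdio.h>
#include <sys/stat.h>

/* Print the 9-character rwx string for a mode, ls -l style. */
static void print_rwx(mode_t m) {
    char s[10] = "---------";
    if (m & S_IRUSR) s[0] = 'r';  if (m & S_IWUSR) s[1] = 'w';  if (m & S_IXUSR) s[2] = 'x';
    if (m & S_IRGRP) s[3] = 'r';  if (m & S_IWGRP) s[4] = 'w';  if (m & S_IXGRP) s[5] = 'x';
    if (m & S_IROTH) s[6] = 'r';  if (m & S_IWOTH) s[7] = 'w';  if (m & S_IXOTH) s[8] = 'x';
    printf("%s\n", s);
}

int main(void) {
    struct stat st;
    if (stat("file1", &st) == -1) { perror("stat"); return 1; }
    print_rwx(st.st_mode);                               /* e.g. rw-rw-rw- */

    /* Clear read and write for owner, group and others: "chmod a-rw file1". */
    mode_t newmode = st.st_mode & ~(S_IRUSR | S_IWUSR |
                                    S_IRGRP | S_IWGRP |
                                    S_IROTH | S_IWOTH);
    if (chmod("file1", newmode) == -1) { perror("chmod"); return 1; }

    stat("file1", &st);
    print_rwx(st.st_mode);                               /* read/write bits now cleared */
    return 0;
}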

IOS
Architecture of IOS Operating System
 IOS is a Mobile Operating System that was developed by Apple Inc. for iPhones,
iPads, and other Apple mobile devices. iOS is the second most popular and most
used Mobile Operating System after Android.
 The structure of the iOS operating system is layer-based. Applications do not communicate with the hardware directly; the layers between the Application Layer and the Hardware layer handle the communication. The lower layers give basic services on which all applications rely, and the higher-level layers provide graphics and interface-related services. Most of the system interfaces come with a special package called a framework.
 A framework is a directory that holds dynamic shared libraries like .a files,
header files, images, and helper apps that support the library. Each layer has a set
of frameworks that are helpful for developers.
CORE OS Layer:
All the IOS technologies are built under the lowest level layer i.e. Core OS layer.
These technologies include:
1. Core Bluetooth Framework
2. External Accessories Framework
3. Accelerate Framework
4. Security Services Framework
5. Local Authentication Framework, etc.
It supports 64 bit which enables the application to run faster.
CORE SERVICES Layer:
Some important frameworks present in the CORE SERVICES layer help the iOS operating system provide its core services and better functionality. It is the second lowest layer in the architecture, as shown above. Below are some important frameworks present in this layer:
1. Address Book Framework-
The Address Book Framework provides access to the contact details of the user.
2. Cloud Kit Framework-
This framework provides a medium for moving data between your app and iCloud.
3. Core Data Framework-
This is the technology that is used for managing the data model of a Model View
Controller app.
4. Core Foundation Framework-
This framework provides data management and service features for iOS applications.
5. Core Location Framework-
This framework helps to provide the location and heading information to the
application.
6. Core Motion Framework-
All the motion-based data on the device is accessed with the help of the Core Motion
Framework.
7. Foundation Framework-
Provides Objective-C wrappers over many of the features found in the Core Foundation framework.
8. HealthKit Framework-
This framework handles the health-related information of the user.
9. HomeKit Framework-
This framework is used for communicating with and controlling connected devices within
the user's home.
10. Social Framework-
It is simply an interface that will access users’ social media accounts.
11. StoreKit Framework-
This framework provides support for buying content and services from inside iOS apps.
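As a rough Swift sketch of how an app uses one of these Core Services frameworks, the snippet below asks Core Location for location updates; the class name LocationProvider is an illustrative assumption, and a real app would also need the usual location-usage description in its Info.plist.

import CoreLocation

// Minimal sketch of the Core Location framework from the Core Services
// layer: request permission and start receiving location updates.
final class LocationProvider: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.requestWhenInUseAuthorization()   // ask the user for permission
        manager.startUpdatingLocation()           // begin delivering location events
    }

    // Called by the framework whenever new location data is available.
    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        if let latest = locations.last {
            print("Current position: \(latest.coordinate.latitude), \(latest.coordinate.longitude)")
        }
    }
}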
MEDIA Layer:
With the help of the Media layer, the system enables all of its graphics, video, and audio
technology. It sits directly below the Cocoa Touch layer in the architecture. The different
frameworks of the MEDIA layer are listed below, followed by a short playback sketch:
1. UIKit Graphics-
This framework provides support for designing images and animating the view
content.
2. Core Graphics Framework-
This framework supports 2D vector and image-based rendering and is the native
drawing engine for iOS.
3. Core Animation-
This framework helps in optimizing the animation experience of the apps in iOS.
4. Media Player Framework-
This framework provides support for playing the playlist and enables the user to use
their iTunes library.
5. AV Kit-
This framework provides various easy-to-use interfaces for video presentation,
recording, and playback of audio and video.
6. Open AL-
This framework is an Industry Standard Technology for providing Audio.
7. Core Image-
This framework provides advanced support for still images.
8. GL Kit-
This framework manages advanced 2D and 3D rendering by hardware-accelerated
interfaces.
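To illustrate the Media layer, the following Swift sketch plays a video with AV Kit; the URL is only a placeholder and the function name is an assumption made for this example.

import AVFoundation
import AVKit
import UIKit

// Minimal sketch of the Media layer's AV Kit framework: present a
// standard video player for a remote file.
func playSampleVideo(from presenter: UIViewController) {
    guard let url = URL(string: "https://example.com/sample.mp4") else { return }

    let player = AVPlayer(url: url)                 // AVFoundation playback object
    let playerController = AVPlayerViewController() // standard playback UI from AV Kit
    playerController.player = player

    presenter.present(playerController, animated: true) {
        player.play()                               // start playback once presented
    }
}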
COCOA TOUCH:
COCOA Touch is also known as the application layer which acts as an interface for the
user to work with the iOS Operating system. It supports touch and motion events and
many more features. The COCOA TOUCH layer provides the following frameworks :
1. EventKit Framework-
This framework shows a standard system interface using view controllers for
viewing and changing events.
2. GameKit Framework-
This framework provides support for users to share their game-related data online
using a Game Center.
3. MapKit Framework-
This framework provides a scrollable map that can be included in the app's user
interface (see the sketch after this list).
4. PushKit Framework-
This framework provides registration support for specialized push notifications (for
example, VoIP notifications).
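Below is a minimal Swift sketch of the MapKit framework mentioned in this layer; the view controller name and the hard-coded coordinate are only illustrative assumptions.

import MapKit
import UIKit

// Minimal sketch of the Cocoa Touch layer's MapKit framework: embed a
// scrollable map centred on an example coordinate.
final class MapViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let mapView = MKMapView(frame: view.bounds)
        mapView.autoresizingMask = [.flexibleWidth, .flexibleHeight]

        // Example coordinate, chosen only for illustration.
        let center = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
        let region = MKCoordinateRegion(center: center,
                                        latitudinalMeters: 1000,
                                        longitudinalMeters: 1000)
        mapView.setRegion(region, animated: false)

        view.addSubview(mapView)
    }
}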
Features of iOS operating System:
Let us discuss some features of the iOS operating system-
1. Highly secure compared to other operating systems.
2. iOS provides multitasking features; while working in one application we can easily
switch to another application.
3. iOS’s user interface includes multiple gestures like swipe, tap, pinch, Reverse pinch.
4. iBooks, iStore, iTunes, Game Center, and Email are user-friendly.
5. It provides Safari as a default Web Browser.
6. It has a powerful API and a Camera.
7. It has deep hardware and software integration
Applications of IOS Operating System:
Here are some applications of the iOS operating system-
1. iOS Operating System is the Commercial Operating system of Apple Inc. and is
popular for its security.
2. iOS operating system comes with pre-installed apps which were developed by Apple
like Mail, Map, TV, Music, Wallet, Health, and Many More.
3. Swift Programming language is used for Developing Apps that would run on IOS
Operating System.
4. In the iOS Operating System we can multitask, for example chatting while surfing the
Internet.
Advantages of IOS Operating System:
The iOS operating system has some advantages over other operating systems available
in the market especially the Android operating system. Here are some of them-
1. More secure than other operating systems.
2. Excellent UI and fluid responsiveness.
3. Suits best for Business and Professionals
4. Generate Less Heat as compared to Android.
Disadvantages of IOS Operating System:
Let us have a look at some disadvantages of the iOS operating system-
1. More Costly.
2. Less User Friendly as Compared to Android Operating System.
3. Not flexible, as it supports only iOS devices.
4. Battery Performance is poor.

FILE SYSTEM
Since iOS 10.3, the Apple File System (APFS) has been the default file system that
handles persistent storage of data files.
The iOS file system contains two volumes.
 The system volume contains the operating system and for this reason cannot
be completely erased. Only iOS system data can be written to this location.
 The user volume contains user data. Information stored on the user volume
is encrypted only when the device is protected with a passcode.
 All third-party apps exist within an app sandbox to prevent them from accessing or
modifying the contents of other apps without permission from the user.
 App data is information created and stored by an app during use.
 App data might include the high score of a game or the contents of a
document.
 App data is stored within the app directory and, potentially, backed up by
iCloud or another server.
 Local app data is removed when an app is deleted. However, data may still
exist in iCloud or a third-party server.
 Using Shared iPad divides the data differently on an iOS device.
 When a student logs in, apps that support cloud storage sync their app data
in the background. Other app data is cached locally on the iPad and, if
necessary, continues to push to the cloud even after logout.
 Shared iPad allows administrators to designate how many users can share an
iPad; local storage is then divided to provide space for each user.
 Shared iPad is enabled via a mobile device management (MDM) solution. In
Jamf Pro, Shared iPad is enabled as part of a PreStage enrollment.
The sandbox directory

When it comes to reading and writing files, each iOS application has its own sandbox
directory.

For security reasons, every interaction of the iOS app with the file system is limited to
this sandbox directory. Exceptions are access requests to user data like photos, music,
contacts etc.

The structure of the sandbox directory looks as follows:

 Bundle Container Directory


o contains the app's bundle ExampleApp.app with all of its resource files that
we included within the app like images, string files, localized resources etc.
o has read-only access.
 Data Container Directory
o holds data for both the app and the user.
o is divided into the following directories:
 Documents - to store user-generated content.
 Library - to store app files that should not be exposed to user.
 tmp - for temporary files. The system periodically purges these files.

Let's take a closer look at the listed directories.

The Documents directory

Apple recommends using the Documents directory for user-generated content.

This includes anything a user might create, view or delete through our app, for example
text files, drawings, videos, images, audio files etc. We can add subdirectories to organise
this content.

The system additionally creates the Documents/Inbox directory which we can use to
access files that our app was asked to open by other applications. We can read and delete
files in this directory but cannot edit or create new files.
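As a small Swift sketch of working with the Documents directory described above (the file name notes.txt and its contents are illustrative assumptions):

import Foundation

// Minimal sketch: write and read a small user-generated text file in the
// app's Documents directory inside the sandbox.
func saveAndLoadNote() {
    let fileManager = FileManager.default

    // Locate the Documents directory of the app's Data Container.
    guard let documentsURL = fileManager.urls(for: .documentDirectory,
                                              in: .userDomainMask).first else { return }

    let noteURL = documentsURL.appendingPathComponent("notes.txt")

    do {
        // Write user-generated content.
        try "Remember to back up app data.".write(to: noteURL, atomically: true, encoding: .utf8)

        // Read it back.
        let contents = try String(contentsOf: noteURL, encoding: .utf8)
        print("Read from Documents: \(contents)")
    } catch {
        print("File operation failed: \(error)")
    }
}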

The Library directory

The Library directory contains standard subdirectories we can use to store app support
files. The most used subdirectories are listed below, followed by a short lookup sketch:

 Library/Application Support/ - to store any files the app needs that should not be
exposed to the user, for example configuration files, templates etc.
 Library/Caches/ - to cache data that can be recreated and needs to persist longer
than files in the tmp directory. The system may delete the directory on rare
occasions to free up disk space.
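The lookup sketch referenced above, in Swift; it only prints the sandbox paths and creates Application Support if it does not exist yet (the function name is an assumption).

import Foundation

// Minimal sketch: locate the Library/Caches, Library/Application Support
// and tmp directories inside the app sandbox.
func sandboxSupportDirectories() {
    let fileManager = FileManager.default

    if let cachesURL = fileManager.urls(for: .cachesDirectory,
                                        in: .userDomainMask).first {
        print("Caches directory: \(cachesURL.path)")
    }

    if let supportURL = fileManager.urls(for: .applicationSupportDirectory,
                                         in: .userDomainMask).first {
        // Application Support may not exist until the app creates it.
        try? fileManager.createDirectory(at: supportURL, withIntermediateDirectories: true)
        print("Application Support directory: \(supportURL.path)")
    }

    // tmp: temporary files that the system may purge at any time.
    print("Temporary directory: \(fileManager.temporaryDirectory.path)")
}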
1 MARK
1. A file can be recognized as an ordinary file or directory by ____ symbol.
a) $
b) –
c) *
d) /
Answer: b
2. Which command is used to display the operating system name
a) os
b) unix
c) kernel
d) uname
Answer: d
3. Which command is used to print a file
a) print
b) ptr
c) lpr
d) none of the mentioned
Answer: c
4. How many types of permissions a file has in UNIX?
a) 1
b) 2
c) 3
d) 4
Answer: c
5. Permissions of a file are represented by which of the following characters?
a) r,w,x
b) e,w,x
c) x,w,e
d) e,x,w
Answer: a
6. Which of the following symbols is used to indicate the absence of a permission on a file?
a) $
b) &
c) +
d) –
Answer: d
7. When we create a file, we are the owner of a file.
a) True
b) False
Answer: a
8. What is group ownership?
a) group of users who can access the file
b) group of users who can create the file
c) group of users who can edit the file
d) group of users who can delete the file
Answer: a
9. A file has permissions as rwx r-- ---. A user other than the owner cannot edit the file.
a) True
b) False
Answer: a
10. If a file is read protected, we can write to the file.
a) True
b) False
Answer: b
11. The write permission for a directory determines that ____________
a) we can write to a directory file
b) we can read the directory file
c) we can execute the directory file
d) we can add or remove files to it
Answer: d
12. If the file is write-protected and the directory has write permission, then we cannot delete the file.
a) True
b) False
Answer: b
13. What is execute permission?
a) permission to execute the file
b) permission to delete the file
c) permission to rename the file
d) permission to search or navigate through the directory
Answer: d
14. Which of the following is default permission set for ordinary files?
a) rw-rw-rw-
b) rwxrwxrwx
c) r--r--r--
d) rw-rw-rwx
Answer: a
15. Which of the following is default permission set for directories?
a) rw-rw-rw-
b) rwxrwxrwx
c) r--r--r--
d) rw-rw-rwx
Answer: b

16. Which of the following is an application development environment for iOS?

A. Cocoa
B. Cocoa touch
C. Cocoa iOS
D. Cocoa begin
Ans : B

17. Is Cocoa Touch used to refer to application development using any programmatic interface?

A. TRUE
B. FALSE
C. Can be true or false
D. Can not say
Ans : A
18. Which JSON framework is supported by iOS?

A. UIKit
B. Django
C. SBJson
D. UCJson
Ans : C
19. ___________ is a two-part string used to identify one or more apps from a single development team.

A. bundle ID
B. app ID
C. team ID
D. All of the above
Ans : B
20. How many ways to achieve concurrency in iOS?

A. 1
B. 2
C. 3
D. 4
Ans : C
21. When is an app said to be in the not running state?

A. When it is launched
B. When it is not launched
C. When it gets terminated by the system during running
D. Both B and C
Ans : D
22. In which state is the app running in the foreground and receiving events?

A. Not running
B. Inactive
C. Background
D. Active
Ans : D
23. UIKit classes should be used only from an application’s main thread.

A. TRUE
B. FALSE
C. Can be true or false
D. Can not say
Ans : A
24. ARC stands for?

A. Auto Reference Counting


B. Automatic Reference Counting
C. Automatic Reference Cloud
D. Automatic Reference Cocoa
Ans : B
25. Through _________ users can dispatch queues, prioritize queues, POSIX threads, and thread objects.

A. POS
B. DOS
C. IOS
D. QOS
Ans : D

26. __________ is used to detect modifications to a value or property.

A. KVA
B. KVO
C. KVR
D. KVZ
Ans : B
27. Which company introduced iOS?

A)IBM

B)Intel

C)Apple

D)Google

28. When was iOS first introduced?

A)June 29, 2007

B)June 29, 2008

C)June 29, 2009

D)June 29, 2010

29. Which country is the iPhone originally from?

A)UK

B)USA

C)China

30. iOS stands for?

A)Internet Operating System

B)Internetwork Operating System

C)iPhone Operating System

D)None of the Above


31. Which framework is not used in iOS?

A)CoreMotion Framework

B)Foundation Framework

C)UIKit Framework

D)AppKit Framework


32. Which of the following is an application development environment for iOS?

A)Cocoa

B)Cocoa iOS

C)Cocoa begin

D)Cocoa touch

33. Is Cocoa Touch used to refer to application development using any programmatic interface?

A)TRUE

B)FALSE

C)None of the Above

D)—


34. iOS is written in which programming languages?

A)HTML,CSS,Angular

B)React, Redux,Swift

C)C#,Node,Objective-C, Swift

D)C, C++, Objective-C, Swift


35. iOS remote push notifications were introduced by Apple in?

A)iOS 1.0

B)iOS 2.0

C)iOS 3.0
D)iOS 4.0

36. __________ is a two-part string used to identify one or more apps from a single development team.

A)app ID

B)team ID

C)bundle ID

D)All of the Above

37. Which of the following is a default UI property in iOS?

A)atomic

B)assign

C)non-atomic

D)None of the Above

38. To create an emulator, you need an AVD. What does it stand for?

A)Active Virtual Device

B)Android Virtual Display

C)Application Virtual Display

D)Android Virtual Device

39. The iPhone has a _________ that activates when you rotate the device from portrait to landscape.

A)Special Sensor

B)Accelerometer

C)Shadow detector

D)None of the above


40. Which iOS framework is a commonly used third-party library?

A)AVFoundation.framework

B)AFNetwork.framework

C)Audiotoolbox.framework

D)CFNetwork.framework
