Advanced Operating Systems-23Pcs06 Unit - 1
UNIT – 1
Basics of Operating Systems: What is an Operating System? – Main frame Systems –Desktop
Systems – Multiprocessor Systems – Distributed Systems – Clustered Systems –Real-Time
Systems – Handheld Systems – Feature Migration – Computing Environments -Process
Scheduling – Cooperating Processes – Inter Process Communication- Deadlocks –Prevention
– Avoidance – Detection – Recovery.
An operating system acts as an interface between the software and different parts of
the computer or the computer hardware.
The operating system is designed in such a way that it can manage the overall
resources and operations of the computer.
Operating System is a fully integrated set of specialized programs that handle all the
operations of the computer.
It controls and monitors the execution of all other programs that reside in the
computer, which also includes application programs and other system software of the
computer. Examples of Operating Systems are Windows, Linux, Mac OS, etc.
An Operating System (OS) is a collection of software that manages computer hardware
resources and provides common services for computer programs. The operating system is the
most important type of system software in a computer system.
What is an Operating System Used for?
The operating system improves the use of both computer software and hardware.
Without an OS, it would be very difficult for any application to be user-friendly. The
Operating System provides the user with an interface that makes any application
attractive and user-friendly.
The operating System comes with a large number of device drivers that make OS
services reachable to the hardware environment.
Each and every application present in the system requires the Operating System.
The operating system works as a communication channel between system hardware
and system software.
The operating system helps an application with the hardware part without knowing
about the actual hardware configuration.
It is one of the most important parts of the system and hence it is present in every
device, whether large or small.
DESKTOP SYSTEMS
An operating system (OS) acts as an interface between the hardware and software of a
desktop system. It manages system resources, facilitates software execution, and
provides a user-friendly environment.
Different operating systems offer distinct features, compatibility, and performance,
catering to the diverse needs and preferences of users.
Central Processing Unit (CPU): The CPU is the brain of a desktop system,
responsible for executing instructions and performing calculations. It processes data
and carries out tasks based on the instructions provided by software programs. The
CPU’s performance is measured by its clock speed, number of cores, and cache size.
Random Access Memory (RAM): RAM is a type of volatile memory that
temporarily stores data and instructions for the CPU to access quickly. It allows for
efficient multitasking and faster data retrieval, significantly impacting the overall
performance of the system. The amount of RAM in a desktop system determines its
capability to handle multiple programs simultaneously.
Storage Devices: Desktop systems utilize various storage devices to store and
retrieve data. Hard Disk Drives (HDDs) are the traditional storage medium, offering
large capacities but slower read/write speeds. Solid-State Drives (SSDs) are a newer
technology that provides faster data access, enhancing the system’s responsiveness
and reducing loading times.
Graphics Processing Unit (GPU): The GPU is responsible for rendering images,
videos, and animations on the computer screen. It offloads the graphical processing
tasks from the CPU, ensuring smooth visuals and enabling resource-intensive
applications such as gaming, video editing, and 3D modeling. High-performance
GPUs are essential for users who require demanding graphical capabilities.
Input and Output Devices: Desktop systems are equipped with various input and
output devices. Keyboards and mice are the primary input devices, allowing users to
interact with the system and input commands. Monitors, printers, speakers, and
headphones serve as output devices, providing visual or auditory feedback based on
the system’s output.
Desktop systems have evolved significantly over the years. From the bulky and
limited-capability systems of the past to the sleek and powerful computers of today,
technological advancements have revolutionized the desktop computing experience.
Smaller form factors, increased processing power, improved storage technologies, and
enhanced user interfaces are some of the notable advancements that have shaped the
evolution of desktop systems.
Windows: Windows, developed by Microsoft, is one of the most widely used desktop
operating systems globally.
macOS: macOS is the operating system designed specifically for Apple’s Mac
computers. Known for its sleek and intuitive interface, macOS offers seamless
integration with other Apple devices and services.
Linux: Linux is an open-source operating system that provides a high degree of
customization and flexibility. It is favored by developers, system administrators, and
tech enthusiasts due to its stability, security, and vast array of software options.
Virtual reality (VR) and augmented reality (AR) integration, cloud-based computing,
artificial intelligence (AI) integration, and seamless connectivity across devices are some of
the trends that will shape the future of desktop systems.
MULTIPROCESSOR SYSTEMS
Multiple CPUs are interconnected so that a job can be divided among them for
faster execution.
When a job finishes, results from all CPUs are collected and compiled to give
the final output. Jobs may need to share main memory, and they may also share
other system resources among themselves.
Multiple CPUs can also be used to run multiple jobs simultaneously.
For Example: UNIX Operating system is one of the most widely used multiprocessing
systems.
The basic organization of a typical multiprocessing system is shown in the given figure.
o Increased throughput: As the number of processors increases, more work can be done in less time.
o Economy of scale: Because multiprocessor systems share peripherals, secondary
storage devices, and power supplies, they are relatively cheaper than multiple single-processor
systems.
In a symmetric multiprocessing system, each processor executes the same copy of the
operating system, makes its own decisions, and cooperates with the other processors to smooth the
entire functioning of the system. The CPU scheduling policies are very simple: any new job
submitted by a user can be assigned to the processor that is least burdened. This results in a
system in which all processors are equally burdened at any time.
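As a rough sketch of this least-burdened assignment policy, each processor can be represented by its ready-queue length and a new job sent to the shortest queue. The CPU names and job labels below are made up for illustration:

```python
# Sketch of the SMP "least burdened" policy described above:
# a new job goes to the processor with the shortest ready queue.

def assign_job(ready_queues, job):
    """Append job to the shortest ready queue and return the chosen CPU."""
    cpu = min(ready_queues, key=lambda c: len(ready_queues[c]))
    ready_queues[cpu].append(job)
    return cpu

queues = {"cpu0": ["j1", "j2"], "cpu1": ["j3"], "cpu2": []}
chosen = assign_job(queues, "j4")
print(chosen)            # cpu2 -- the idle processor gets the new job
```

Repeated over many jobs, this policy keeps all queue lengths roughly equal, which is the "equally burdened" property mentioned above.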
o These systems are fault-tolerant. Failure of a few processors does not bring the entire
system to a halt.
Further, one processor may act as a master or supervisor processor while the others are
treated as slave processors, as shown below.
In the above figure of an asymmetric multiprocessing system, CPU 1 acts as a supervisor
that controls the other processors.
In this type of system, each processor is assigned a specific task, and there is a designated
master processor that controls the activities of other processors.
For example, we have a math co-processor that can handle mathematical jobs better than the
main CPU. Similarly, we have an MMX processor that is built to handle multimedia-related
jobs. Similarly, we have a graphics processor to handle the graphics-related job better than the
main processor. When a user submits a new job, the OS has to decide which processor can
perform it better, and then that processor is assigned that newly arrived job. This processor acts
as the master and controls the system. All other processors look for masters for instructions or
have predefined tasks. It is the responsibility of the master to allocate work to other processors.
o In this type of multiprocessing operating system, the processors are unequally burdened:
one processor may have a long job queue while another sits idle.
o In this system, if the processor handling a specific task fails, the entire system will go
down.
DISTRIBUTED SYSTEMS
While distributed systems offer many advantages, they also present some challenges that
must be addressed. These challenges include:
CLUSTERED SYSTEMS
Cluster systems are similar to parallel systems because both systems use multiple CPUs.
The primary difference is that clustered systems are made up of two or more
independent systems linked together.
They have independent computer systems and a shared storage media, and all systems
work together to complete all tasks.
All cluster nodes use two different approaches to interact with one another,
like message passing interface (MPI) and parallel virtual machine (PVM).
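MPI and PVM are full message-passing frameworks; as a toy illustration of the underlying idea, the following sketch exchanges one message between two "nodes" over a loopback TCP socket (the message text is made up, and a real cluster would of course communicate across the network):

```python
import socket
import threading

# Toy illustration of node-to-node message passing using a loopback
# TCP socket; real clusters would use MPI or PVM over a network.

def node_b(server_sock, results):
    conn, _ = server_sock.accept()      # wait for node A to connect
    with conn:
        results.append(conn.recv(1024).decode())

server = socket.socket()
server.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
server.listen(1)
received = []
t = threading.Thread(target=node_b, args=(server, received))
t.start()

# "Node A" connects and sends a message.
with socket.socket() as client:
    client.connect(("127.0.0.1", server.getsockname()[1]))
    client.sendall(b"task: invert matrix block 3")
t.join()
server.close()
print(received[0])       # task: invert matrix block 3
```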
In this section, you will learn about the clustered operating system, its types, classification, advantages, and
disadvantages.
Clusters can be combined in two ways to make a more efficient cluster. These are as follows:
1. Software Cluster
2. Hardware Cluster
Software Cluster
Hardware Cluster
Asymmetric Cluster System
In the asymmetric cluster system, one node out of all nodes is in hot standby mode, while the
remaining nodes run the essential applications. The hot standby node is a fail-safe component
of the cluster system: it monitors all server functions and takes over if an active node
comes to a halt.
Symmetric Cluster System
In this system, multiple nodes help run all applications, and they monitor one another
simultaneously. Because it uses all hardware resources, this cluster system is more reliable than
asymmetric cluster systems.
Parallel Cluster System
A parallel cluster system enables several users to access the same data on a shared storage
system. The system is made possible by a special software version and other applications.
Classification of clusters
Computer clusters are managed to support various purposes, from general-purpose business
requirements like web-service support to computation-intensive scientific calculations. There
are various classifications of clusters. Some of them are as follows:
Fail-over Clusters
The process of moving applications and data resources from a failed system to another system
in the cluster is referred to as fail-over. Such clusters are used for mission-critical
databases, application servers, mail servers, and file servers.
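The fail-over idea can be sketched in a few lines: when a node dies, everything it was running is reassigned to another node. The node and application names below are illustrative:

```python
# Illustrative fail-over sketch: the applications of a failed node are
# moved onto a surviving (here, standby) node.

cluster = {
    "node1": ["database", "mail"],
    "node2": ["web"],
    "standby": [],
}

def fail_over(cluster, failed, target):
    """Move everything the failed node was running onto the target node."""
    cluster[target].extend(cluster.pop(failed))

fail_over(cluster, "node1", "standby")
print(cluster["standby"])   # ['database', 'mail']
```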
Load-balancing Clusters
This type of cluster distributes the workload among all available computer systems. All
nodes in the cluster can share their computing workload with other nodes, resulting in
better overall performance. For example, a web-based cluster can allot different web queries to
different nodes, which helps improve system speed. Some cluster systems use the
round-robin method to distribute incoming requests.
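The round-robin distribution mentioned above simply hands each incoming query to the next node in rotation. A minimal sketch, with made-up node and query names:

```python
from itertools import cycle

# Round-robin request distribution: web queries are handed to
# cluster nodes in strict rotation.
nodes = cycle(["node1", "node2", "node3"])
assignments = [(f"query{i}", next(nodes)) for i in range(5)]
print(assignments)
# [('query0', 'node1'), ('query1', 'node2'), ('query2', 'node3'),
#  ('query3', 'node1'), ('query4', 'node2')]
```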
These are also referred to as "HA clusters". They provide a high probability that all resources
will be available. If a failure occurs, such as a system failure or the loss of a disk volume, the
queries in the process are lost. If a lost query is retried, it will be handled by a different cluster
computer. It is widely used in news, email, FTP servers, and the web.
Various advantages and disadvantages of the Clustered Operating System are as follows:
Advantages
1. High Availability
Although every node in a cluster is a standalone computer, the failure of a single node doesn't
mean a loss of service. A single node could be pulled down for maintenance while the
remaining clusters take on a load of that single node.
2. Cost Efficiency
When compared to highly reliable, large-storage mainframe computers, cluster computing
systems are considered more cost-effective. Furthermore, most of these systems outperform
mainframe computer systems in terms of performance.
3. Additional Scalability
A cluster is set up in such a way that more systems could be added to it in minor increments.
Clusters may add systems in a horizontal fashion. It means that additional systems could be
added to clusters to improve their performance, fault tolerance, and redundancy.
4. Fault Tolerance
Clustered systems are quite fault-tolerant, and the loss of a single node does not result in the
system's failure. They might also have one or more nodes in hot standby mode, which allows
them to replace failed nodes.
5. Performance
Clusters are commonly used to improve availability and performance over single
computer systems, while usually being much more cost-effective than a single computer
system of comparable speed or availability.
6. Processing Speed
The processing speed is also similar to mainframe systems and other types of supercomputers
on the market.
Disadvantages
1. High Cost
One major disadvantage of this design is that it is not cost-effective. The cost is high, and a
cluster will be more expensive than a non-clustered server management design since it requires
good hardware and a careful design.
2. Required Resources
Clustering necessitates the use of additional servers and hardware, making monitoring and
maintenance difficult. As a result, infrastructure must be improved.
3. Maintenance
REAL-TIME SYSTEMS
Advantages:
The advantages of real-time operating systems are as follows-
1. Maximum consumption: Maximum utilization of devices and systems. Thus
more output from all the resources.
2. Task Shifting: The time assigned for shifting tasks in these systems is very small.
For example, in older systems it takes about 10 microseconds to shift one task
to another, while in the latest systems it takes 3 microseconds.
Disadvantages:
The disadvantages of real-time operating systems are as follows-
1. Use Heavy System Resources: Sometimes the system resources are not so good
and they are expensive as well.
2. Complex Algorithms: The algorithms are very complex and difficult for the
designer to write.
3. Device Drivers and Interrupt Signals: It needs specific device drivers and
interrupt signals to respond to interrupts at the earliest.
4. Thread Priority: It is not good to set thread priority as these systems are very
less prone to switching tasks.
HANDHELD SYSTEMS
Handheld operating systems are available in all handheld devices like smartphones and
tablets. Such a device is sometimes also known as a Personal Digital Assistant (PDA). The popular
handheld operating systems in today's world are Android and iOS. These operating systems need
a high-performance processor and are also embedded with various types of sensors.
1. Since the development of handheld computers in the 1990s, the demand for
software to operate and run on these devices has increased.
2. Three major competitors have emerged in the handheld PC world with three
different operating systems for these handheld PCs.
3. Out of the three companies, the first was the Palm Corporation with their PalmOS.
4. Microsoft also released what was originally called Windows CE. Microsoft’s
recently released operating system for the handheld PC comes under the name of
Pocket PC.
5. More recently, some companies producing handheld PCs have also started
offering a handheld version of the Linux operating system on their machines.
Features of Handheld Operating System:
1. Its work is to provide real-time operations.
2. There is direct usage of interrupts.
3. Input/Output device flexibility.
4. Configurability.
Types of Handheld Operating Systems:
Types of Handheld Operating Systems are as follows:
1. Palm OS
2. Symbian OS
3. Linux OS
4. Windows
5. Android
Palm OS:
Since the Palm Pilot was introduced in 1996, the Palm OS platform has provided
various mobile devices with essential business tools, as well as the capability to
access the internet via a wireless connection.
These devices have mainly concentrated on providing basic personal-information-
management applications. The latest Palm products have progressed a lot, packing
in more storage, wireless internet, etc.
Symbian OS:
It has been the most widely-used smartphone operating system because of its
ARM architecture before it was discontinued in 2014. It was developed by
Symbian Ltd.
This operating system consists of two subsystems: the first is the
microkernel-based operating system with its associated libraries, and the
second is the interface of the operating system with which a user can interact.
Since this operating system consumes very little power, it was developed for
smartphones and handheld devices.
It has good connectivity as well as stability.
It can run applications that are written in Python, Ruby, .NET, etc.
Linux OS:
Windows OS:
Android OS:
Advantages of Handheld Operating System:
Some advantages of Handheld Operating Systems are as follows:
1. Less Cost.
2. Less weight and size.
3. Less heat generation.
4. More reliability.
Disadvantages of Handheld Operating System:
Some disadvantages of Handheld Operating Systems are as follows:
1. Less Speed.
2. Small Size.
3. Input / Output System (memory issue or less memory is available).
How Handheld operating systems are different from Desktop operating systems?
Since handheld operating systems are mainly designed to run on machines with
slower processors and less memory, they are designed to use less memory and
require fewer resources.
They are also designed to work with different types of hardware as compared to
standard desktop operating systems.
This is because the power requirements of standard desktop CPUs far exceed what
handheld devices can supply.
Handheld devices aren’t able to dissipate large amounts of heat generated by
CPUs. To deal with such kind of problem, big companies like Intel and Motorola
have designed smaller CPUs with lower power requirements and also lower heat
generation. Many handheld devices fully depend on flash memory cards for their
internal memory because large hard drives do not fit into handheld devices.
FEATURE MIGRATION
Process migration is a form of process management by which processes are moved
from one computing environment to another.
There are two types of process migration:
1. Non-preemptive migration: the process is migrated before it begins executing on its source node.
2. Preemptive migration: the process is migrated during the course of its execution.
Migrating a process involves the following steps:
The process is halted on its source node and is restarted on its destination node.
The address space of the process is transferred from its source node to its
destination node.
Message forwarding is implied for the transferred process.
The communication between cooperating processes that have been separated by
process migration must be managed.
COMPUTING ENVIRONMENTS
Computing environments refer to the technology infrastructure and software platforms that
are used to develop, test, deploy, and run software applications. There are several types of
computing environments, including:
1. Mainframe: A large and powerful computer system used for critical applications
and large-scale data processing.
2. Client-Server: A computing environment in which client devices access
resources and services from a central server.
3. Cloud Computing: A computing environment in which resources and services
are provided over the Internet and accessed through a web browser or client
software.
4. Mobile Computing: A computing environment in which users access
information and applications using handheld devices such as smartphones and
tablets.
5. Grid Computing: A computing environment in which resources and services are
shared across multiple computers to perform large-scale computations.
6. Embedded Systems: A computing environment in which software is integrated
into devices and products, often with limited processing power and memory.
Each type of computing environment has its own advantages and disadvantages, and the
choice of environment depends on the specific requirements of the software application and
the resources available.
In the world of technology, where almost every task is performed with the help of computers,
computers have become a part of human life. Computing is the process of
completing a task using computer technology, and it may involve computer hardware
and/or software. Computing uses some form of computer system to manage, process,
and communicate information. Now that we have some idea about computing, let us
understand computing environments.
Computing Environments: When a problem is solved by a computer, the computer uses
many devices, arranged in different ways, which work together to solve it. This
constitutes a computing environment, in which a number of computer devices are
arranged in different ways to solve different types of problems. In different
computing environments, the devices exchange information among themselves to
process and solve problems. A computing environment consists of many computers and
other computational devices, software, and networks that support processing,
sharing information, and completing tasks. Based on the organization of the devices
and their communication, there exist multiple types of computing environments.
Each of these computing environments also has its own limitations:
1. Mainframe: High cost and complexity, with a significant learning curve for
developers.
2. Client-Server: Dependence on network connectivity, and potential security risks
from centralized data storage.
3. Cloud Computing: Dependence on network connectivity, and potential security
and privacy concerns.
4. Mobile Computing: Limited processing power and memory compared to other
computing environments, and potential security risks.
5. Grid Computing: Complexity in setting up and managing the grid infrastructure.
6. Embedded Systems: Limited processing power and memory, and the need for
specialized skills for software development
PROCESS SCHEDULING
Definition
The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
1. Non-preemptive: Here the resource can’t be taken from a process until the process
completes execution. The switching of resources occurs when the running process
terminates and moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of
time. During resource allocation, the process switches from the running state to the ready
state or from the waiting state to the ready state. This switching occurs because the CPU may
give priority to other processes and replace the running process with a higher-priority one.
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS
maintains a separate queue for each of the process states and PCBs of all processes in the
same execution state are placed in the same queue. When the state of a process is changed, its
PCB is unlinked from its current queue and moved to its new state queue.
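The per-state queues described above can be sketched with one queue per state: changing a process's state unlinks its PCB from one queue and appends it to another. The PCB labels are illustrative:

```python
from collections import deque

# Sketch of per-state scheduling queues: on a state change, the PCB is
# removed from its current queue and appended to the new state's queue.
queues = {
    "ready": deque(["pcb_a", "pcb_b"]),
    "running": deque(),
    "waiting": deque(),
}

def change_state(queues, pcb, old, new):
    queues[old].remove(pcb)
    queues[new].append(pcb)

change_state(queues, "pcb_a", "ready", "running")    # dispatched to the CPU
change_state(queues, "pcb_a", "running", "waiting")  # blocks on I/O
print(list(queues["waiting"]))   # ['pcb_a']
```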
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.).
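The Round Robin policy, for instance, can be simulated in a few lines: each process runs for at most one quantum, and an unfinished process goes to the back of the ready queue. Burst times and the quantum below are illustrative:

```python
from collections import deque

# Minimal Round Robin simulation over a ready queue.
def round_robin(bursts, quantum):
    """Return the order in which processes occupy the CPU."""
    queue = deque(bursts.items())
    order = []
    while queue:
        pid, remaining = queue.popleft()
        order.append(pid)                       # run for one quantum
        if remaining > quantum:
            queue.append((pid, remaining - quantum))  # back of the queue
    return order

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# ['P1', 'P2', 'P3', 'P1', 'P2', 'P1']
```

A FIFO queue falls out as the special case where the quantum is at least as long as every burst, so no process is ever requeued.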
The OS scheduler determines how to move processes between the ready and run queues
which can only have one entry per processor core on the system; in the above diagram, it has
been merged with the CPU.
Two-state process model refers to the running and not-running states, which are described
below −
Running
When a new process is created, it enters the system in the running state.
Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each
entry in the queue is a pointer to a particular process. The queue is implemented using a
linked list. The dispatcher works as follows: when a process is interrupted, that process is
transferred to the waiting queue. If the process has completed or aborted, the process is
discarded. In either case, the dispatcher then selects a process from the queue to execute.
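The dispatcher logic just described can be sketched directly: pull the next process from the not-running queue, and requeue it if it is interrupted. Process names are illustrative:

```python
from collections import deque

# Two-state model sketch: a dispatcher pulls the next process from the
# not-running queue; an interrupted process rejoins the back of the queue,
# a completed one is simply discarded (never re-appended).
not_running = deque(["P1", "P2", "P3"])

def dispatch(queue):
    """Select the next process to run, or None if the queue is empty."""
    return queue.popleft() if queue else None

running = dispatch(not_running)       # P1 starts executing
not_running.append(running)           # P1 is interrupted, back to the queue
running = dispatch(not_running)       # dispatcher picks P2
print(running, list(not_running))     # P2 ['P3', 'P1']
```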
Schedulers
Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long-Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads them into
memory for execution. Process loads into the memory for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may not be available or minimal. Time-sharing
operating systems have no long term scheduler. When a process changes the state from new
to ready, then there is use of long-term scheduler.
Short-Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It is the change of ready state to running state of
the process. CPU scheduler selects a process among the processes that are ready to execute
and allocates CPU to one of them.
Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.
Medium-Term Scheduler
A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to the
secondary storage. This process is called swapping, and the process is said to be swapped out
or rolled out. Swapping may be necessary to improve the process mix.
Comparison among Schedulers
S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler | It is a CPU scheduler | It is a process swapping scheduler
4 | It is almost absent or minimal in a time-sharing system | It is also minimal in a time-sharing system | It is a part of time-sharing systems
Context Switching
Context switching is the mechanism of storing and restoring the state or context of a CPU in a
Process Control Block so that a process execution can be resumed from the same point at a
later time. Using this technique, a context switcher enables multiple processes to share a
single CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the
state of the currently running process is stored into its process control block. After this, the
state of the process to run next is loaded from its own PCB and used to set the PC, registers,
etc. At that point, the second process can start executing.
Context switches are computationally intensive, since register and memory state must be
saved and restored. To reduce context-switching time, some hardware systems
employ two or more sets of processor registers. When a process is switched out, the following
information is stored for later use.
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
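The save/restore steps can be sketched with a simplified PCB holding just a program counter and registers (the field names mirror the list above; the concrete values are made up):

```python
from dataclasses import dataclass, field

# Sketch of a context switch: save the outgoing process's CPU state into
# its PCB, then load the incoming process's saved state onto the CPU.

@dataclass
class PCB:
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, current, nxt):
    # Save the CPU state of the outgoing process into its PCB...
    current.program_counter = cpu["pc"]
    current.registers = dict(cpu["regs"])
    # ...then restore the incoming process's saved state onto the CPU.
    cpu["pc"] = nxt.program_counter
    cpu["regs"] = dict(nxt.registers)

cpu = {"pc": 104, "regs": {"r0": 7}}
p1 = PCB(pid=1)
p2 = PCB(pid=2, program_counter=500, registers={"r0": 42})
context_switch(cpu, p1, p2)
print(cpu["pc"], p1.program_counter)   # 500 104
```

When P1 is later switched back in, its saved program counter (104) is restored, so it resumes exactly where it left off.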
COOPERATING PROCESSES
Before learning about Cooperating processes in operating systems let's learn a bit
about Operating Systems and Processes.
There are two types of software first is the application software and the other is the system
software.
Operating system is system software that manages the resources of a computer system that
is both hardware and software. It works as an interface between the user and the hardware so
that the user can interact with the hardware. It provides a convenient environment in which a
user can execute the programs. An operating system is a resource manager and
it hides the internal working complexity of the hardware so that users can perform a
specific task without any difficulty.
Now, let's talk about how the process is the most important part of an operating system:
A program under execution is known as the process. Every task in an operating system is
converted into a process. A process has several states from its start to its termination. After a
new process is generated the process gets admitted into a ready queue by job scheduler or
long-term scheduler where every process is ready for execution. Then the processes inside
the ready queue are admitted into the execution state by CPU scheduler or short-term
scheduler. After execution, the process gets terminated.
At any given time, there may be multiple processes being executed in a system. There are
two modes in which processes can be executed. These two modes are:
1. Serial mode
2. Parallel mode
In serial mode, the processes are executed one after the other, meaning the next process
cannot be executed until the previous process has terminated.
On the contrary, in parallel mode, several processes may be executing during the same
time quantum. The processes in a system can thus be of two types:
either cooperating processes or independent processes.
Cooperating Process in the operating system is a process that gets affected by other
processes under execution or can affect any other process under execution. It shares data with
other processes in the system by directly sharing a logical space which is
both code and data or by sharing data through files or messages.
Whereas, an independent process in an operating system is one that does not affect or
impact any other process of the system. It does not share any data with other processes.
There are two methods by which the cooperating process in OS can communicate:
Cooperation by Sharing
Cooperation by Message Passing
Cooperation by Sharing
The cooperation processes in OS can communicate with each other using the shared
resource which includes data, memory, variables, files, etc.
Processes can then exchange the information by reading or writing data to the shared region.
We can use a critical section that provides data integrity and avoids data inconsistency.
Let's see a diagram to understand more clearly the communication by shared region:
In the above diagram, We have two processes A and B which are communicating with
each other through a shared region of memory.
Process A will write the information in the shared region and then Process B will read
the information from the shared memory and that's how the process of communication
takes place between the cooperating processes by sharing.
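This write-then-read pattern can be sketched as follows. Real cooperating processes would use an OS shared-memory region; to keep the sketch self-contained, threads stand in for processes A and B, a dictionary stands in for the shared region, and a lock plays the role of the critical section:

```python
import threading

# Sketch of cooperation by sharing: "process" A writes into the shared
# region inside a critical section, and B reads it back. Threads and a
# dict are stand-ins for real processes and OS shared memory.
shared_region = {"data": None}
lock = threading.Lock()

def process_a():
    with lock:                         # critical section: write
        shared_region["data"] = "hello from A"

writer = threading.Thread(target=process_a)
writer.start()
writer.join()

with lock:                             # critical section: read (process B)
    received = shared_region["data"]
print(received)    # hello from A
```

The lock ensures A's write and B's read never interleave, which is exactly the data-integrity role of the critical section mentioned above.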
Cooperation by Message Passing
The cooperating processes in OS can communicate with each other with the help of message
passing. The producer process will send the message and the consumer process will receive it.
There is no concept of shared memory instead the producer process will first send the
message to the kernel and then the kernel sends that message to the consumer process.
A kernel is known as the heart and core of an operating system. The kernel interacts with the
hardware to execute the processes given by the user space. It works as a bridge between the
user space and hardware. Functions of the kernel include process management, file
management, memory management, and I/O management.
If a consumer process waits for a message from another process to execute a particular task
then this may cause a problem of deadlock and if the consumer process does not receive the
message then this may cause a problem of process starvation.
The kernel then delivers the message to the consumer process, and that is how
communication takes place between cooperating processes by message passing.
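The producer–kernel–consumer relay can be sketched with a queue standing in for the kernel's message buffer and threads standing in for the two processes; note there is no directly shared data structure between producer and consumer other than the intermediary:

```python
import queue
import threading

# Sketch of cooperation by message passing: the producer hands a message
# to an intermediary buffer (standing in for the kernel), and the consumer
# receives it from there -- no memory is shared directly.
kernel_buffer = queue.Queue()

def producer():
    kernel_buffer.put("message 1")     # send via the "kernel"

def consumer(out):
    out.append(kernel_buffer.get())    # blocks until a message arrives

delivered = []
c = threading.Thread(target=consumer, args=(delivered,))
p = threading.Thread(target=producer)
c.start()
p.start()
c.join()
p.join()
print(delivered)   # ['message 1']
```

The blocking `get()` also illustrates the waiting behavior described above: if the producer never sends, the consumer waits indefinitely.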
Cooperation can also happen through files: one process writes to a file and another
process reads it. In this way, every process in the system can be affected by another process.
The need for cooperating processes in OS arises for four reasons:
1. Information Sharing
2. Computation Speed
3. Convenience
4. Modularity
Information Sharing
As we know the cooperating process in OS shares data and information between other
processes. There may be a possibility that different processes are accessing the same file.
Processes can access the files concurrently which makes the execution of the process more
efficient and faster.
Computation Speed
When a task is divided into several subtasks and starts executing them parallelly, this
improves the computation speed of the execution and makes it faster. Computation speed can
be achieved if a system has multiple CPUs and input/output devices.
When a task is divided into several subtasks, they become several different processes
that need to communicate with each other. That's why we need cooperating processes in the
operating system.
Convenience
A user may be performing several tasks at the same time which leads to the running of
different processes concurrently. These processes need to cooperate so that every process can
run smoothly without interrupting each other.
Modularity
We want to divide a system of complex tasks into several different modules that are later
combined to achieve a goal. This helps in completing tasks with more efficiency as well as
speed.
With help of data and information sharing, the processes can be executed with much
faster speed and efficiency as processes can access the same files concurrently.
Modularity gives the advantage of breaking a complex task into several modules
which are later put together to achieve the goal of faster execution of processes.
Cooperating processes provide convenience as different processes running at the same
time can cooperate without any interruption among them.
The computation speed of the processes increases by dividing processes into different
subprocesses and executing them parallelly at the same time.
Let's take the example of the producer-consumer problem, also known as
the bounded buffer problem, to understand cooperating processes in more detail:
Producer:
The process which generates the message that a consumer may consume is known as
the producer.
Consumer:
A producer produces a piece of information and stores it in a buffer(critical section) and the
consumer consumes that information.
For Example, A web server produces web pages that are consumed by the client. A compiler
produces an assembly code that is consumed by the assembler.
Unbounded buffer: It is a kind of buffer that has no practical limit on its size.
The producer can produce new information but the consumer might have to wait for
them.
Bounded buffer: It is a kind of buffer that assumes a fixed size. Here, the consumer
has to wait if the buffer is empty, while the producer has to wait if the buffer is full.
But here, in the producer-consumer problem, we have used a bounded buffer.
The producer and consumer processes execute simultaneously. The problem arises when a
consumer wants to consume information while the buffer is empty (there is nothing to be
consumed), or when a producer produces a piece of information while the buffer is already
full.
Producer Process:
while (true)
{
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing: buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Consumer Process:
while (true)
{
    while (counter == 0)
        ;   /* do nothing: buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}
In the above producer code and consumer code, we have the following variables:
counter: counter tracks the number of items currently in the buffer and is used by
the producer as well as the consumer process.
in: the in variable is used by the producer to point to the next empty slot in
the buffer region.
out: the consumer uses the out variable to point to the slot holding the next item to consume.
Shared Resources:
1. Buffer
2. Counter
If the producer and consumer processes do not execute at the right times, this may
cause inconsistency. In particular, the value of the counter variable can become incorrect
when the producer and consumer processes execute concurrently and their updates interleave.
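To see why, note that counter++ and counter-- are not atomic: each expands into a load, an arithmetic step, and a store. The following sketch (plain Python, with hypothetical "register" variables p_reg and c_reg) replays one unlucky interleaving by hand:

```python
# Simulate one unlucky interleaving of "counter++" (producer)
# and "counter--" (consumer). Each is really three machine steps:
# load, modify, store.
counter = 5

p_reg = counter      # producer loads 5
c_reg = counter      # consumer loads 5
p_reg = p_reg + 1    # producer computes 6
c_reg = c_reg - 1    # consumer computes 4
counter = p_reg      # producer stores 6
counter = c_reg      # consumer stores 4, overwriting the producer's store

print(counter)       # 4, although the correct result is 5
```

One of the two updates is lost, which is exactly the race condition a critical section prevents.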
var n : integer;
type item = ..... ;
var buffer : array [0..n-1] of item;
    in, out : 0..n-1;
The shared buffer holds two logical pointers, in and out, and is implemented as a
circular array. By default, the values of both variables (in and out) are initialized
to 0. As we discussed earlier, the out variable points to the first filled location in the buffer
while the in variable points to the first free location in the buffer. The buffer is empty
if in = out, and the buffer is full if (in + 1) mod n = out.
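The in/out bookkeeping above can be sketched as a small circular buffer. This is an illustrative Python model, not the textbook pseudocode; with this convention one slot is sacrificed so that "full" and "empty" can be told apart:

```python
N = 4  # buffer size; one slot is sacrificed to distinguish full from empty
buffer = [None] * N
in_ptr = out_ptr = 0   # both initialized to 0, as in the text

def is_empty():
    return in_ptr == out_ptr            # in = out  => empty

def is_full():
    return (in_ptr + 1) % N == out_ptr  # (in+1) mod n = out  => full

def produce(item):
    global in_ptr
    assert not is_full(), "producer must wait"
    buffer[in_ptr] = item               # in points to the first free slot
    in_ptr = (in_ptr + 1) % N

def consume():
    global out_ptr
    assert not is_empty(), "consumer must wait"
    item = buffer[out_ptr]              # out points to the first filled slot
    out_ptr = (out_ptr + 1) % N
    return item

produce("a"); produce("b"); produce("c")
assert is_full()            # only N-1 = 3 slots are usable
assert consume() == "a"     # items come out in FIFO order
```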
Let us now look at the general definition of inter-process communication, which will explain
the same thing that we have discussed above.
Definition
To understand inter-process communication, consider the following diagram, which
illustrates the importance of inter-process communication:
Role of Synchronization in Inter Process Communication
It is one of the essential parts of inter process communication. Typically, this is provided by
interprocess communication control mechanisms, but sometimes it can also be controlled by
communication processes.
The following methods are used to provide synchronization:
1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock
Mutual Exclusion:-
It is generally required that only one process or thread can enter the critical section at a
time. This helps in synchronization and creates a stable state by avoiding race conditions.
Semaphore:-
Semaphore is a type of variable that usually controls the access to the shared resources by
several processes. Semaphore is further divided into two types which are as follows:
1. Binary Semaphore
2. Counting Semaphore
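As a rough sketch of how a binary semaphore protects a critical section, the following uses Python's threading.Semaphore (a counting semaphore initialized to 1 behaves as a binary semaphore; the "printers" semaphore is only declared to show the counting variant):

```python
import threading

# Binary semaphore: initial value 1, guards a critical section.
mutex = threading.Semaphore(1)
# Counting semaphore: initial value 3, e.g. three identical printers
# (declared here only for contrast; not exercised below).
printers = threading.Semaphore(3)

shared = 0

def worker():
    global shared
    for _ in range(10000):
        mutex.acquire()    # wait (P): enter the critical section
        shared += 1        # only one thread at a time executes this
        mutex.release()    # signal (V): leave the critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)  # 40000: no updates are lost
```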
Barrier:-
A barrier does not allow an individual process to proceed until all participating processes
reach it. It is used by many parallel languages, and collective routines impose barriers.
Spinlock:-
A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock
waits in a loop while repeatedly checking whether the lock is available. This is known as
busy waiting because, even though the process is active, it does not perform any useful
work.
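Busy waiting can be sketched as follows. Real spinlocks rely on an atomic test-and-set instruction; here it is emulated with a non-blocking acquire attempt on an ordinary lock:

```python
import threading

lock = threading.Lock()

def spin_acquire():
    # Busy-wait: keep trying until the non-blocking attempt succeeds.
    # The thread stays active but does no useful work while spinning.
    while not lock.acquire(blocking=False):
        pass

counter = 0

def worker():
    global counter
    for _ in range(1000):
        spin_acquire()
        counter += 1       # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```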
We will now discuss some different approaches to inter-process communication which are as
follows:
1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO
Pipe:-
The pipe is a type of data channel that is unidirectional in nature, meaning data in this
type of channel can move in only a single direction at a time. Still, one can use two
channels of this type so that data can be sent and received between two processes. Typically,
a pipe uses the standard methods for input and output. Pipes are used in all types of POSIX
systems and in different versions of the Windows operating system as well.
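A minimal sketch of a unidirectional pipe, using the POSIX-style os.pipe call from Python's standard library (both ends are used in one process here for brevity; normally the read and write ends belong to different processes, e.g. after a fork):

```python
import os

# Create a unidirectional pipe: r is the read end, w is the write end.
r, w = os.pipe()

os.write(w, b"hello through the pipe")
os.close(w)                 # writer closes its end; the reader then sees EOF

data = os.read(r, 1024)     # the message was small, so one read gets it all
os.close(r)
print(data.decode())        # hello through the pipe
```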
Shared Memory:-
It can be referred to as a type of memory that can be used or accessed by multiple processes
simultaneously. It is primarily used so that the processes can communicate with each other.
Shared memory is supported by almost all POSIX and Windows operating systems as well.
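A sketch of shared-memory IPC using Python's multiprocessing.shared_memory module; the "second process" is simulated in the same script by attaching to the segment by its name:

```python
from multiprocessing import shared_memory

# Process A creates a named shared segment and writes into it.
seg = shared_memory.SharedMemory(create=True, size=16)
seg.buf[:5] = b"hello"

# Process B (simulated here in the same script) attaches by name and reads.
other = shared_memory.SharedMemory(name=seg.name)
msg = bytes(other.buf[:5])

other.close()
seg.close()
seg.unlink()               # free the segment
print(msg)                 # b'hello'
```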
Message Queue:-
In general, several different processes are allowed to read and write data to the message
queue. The messages are stored in the queue until their recipients retrieve them. In short,
we can say that the message queue is very helpful for inter-process communication and is
used by all operating systems.
To understand the concept of Message queue and Shared memory in more detail, let's take a
look at its diagram given below:
Message Passing:-
It is a type of mechanism that allows processes to synchronize and communicate with each
other. By using message passing, the processes can communicate with each other
without resorting to shared variables.
Usually, the inter-process communication mechanism provides two operations that are as
follows:
o send (message)
o receive (message)
Direct Communication:-
In this type of communication process, usually, a link is created or established between two
communicating processes. However, in every pair of communicating processes, only one link
can exist.
Indirect Communication
Indirect communication can only exist or be established when processes share a common
mailbox, and each pair of these processes shares multiple communication links. These shared
links can be unidirectional or bi-directional.
FIFO:-
A FIFO (named pipe) is a pipe that has a name in the file system, so it can be used for
communication between two unrelated processes. Some other mechanisms are also used for
inter-process communication:
o Socket:-
It acts as a type of endpoint for receiving or sending data in a network. It works both for data
sent between processes on the same computer and for data sent between different computers
on the same network. Hence, it is used by several types of operating systems.
o File:-
A file is a type of data record or a document stored on the disk and can be acquired on demand
by the file server. Another most important thing is that several processes can access that file as
required or needed.
o Signal:-
As the name implies, signals are used in inter-process communication in a minimal
way. Typically, they are system messages sent by one process to another.
Therefore, they are not used for sending data but for delivering remote commands between
processes.
There are numerous reasons to use inter-process communication for sharing the data. Here are
some of the most important reasons that are given below:
o Computational speedup
o Privilege separation
o Convenience
o Helps operating system to communicate with each other and synchronize their actions
as well.
DEADLOCKS
Every process needs some resources to complete its execution. However, resources are
granted in a sequential order.
A deadlock is a situation where each process waits for a resource that is assigned to
another process. In this situation, none of the processes gets executed,
since the resource each one needs is held by some other process that is also waiting for
some other resource to be released.
Let us assume that there are three processes P1, P2 and P3. There are three different resources
R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.
After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it
can't complete without R2. P2 in turn demands R3, which is being used by P3. P2 also stops
its execution because it can't continue without R3. P3 then demands R1, which is being used
by P1, so P3 also stops its execution.
In this scenario, a cycle is formed among the three processes. None of the processes is
progressing and they are all waiting. The computer becomes unresponsive since all the
processes are blocked.
Necessary Conditions for Deadlock
1. Mutual Exclusion
A resource can be used only in a mutually exclusive manner, i.e., two processes
cannot use the same resource at the same time.
2. Hold and Wait
A process waits for some resources while holding another resource at the same time.
3. No preemption
A resource, once allocated to a process, cannot be forcibly taken away from it. The
resource is released only voluntarily, after the process has finished using it.
4. Circular Wait
All the processes must be waiting for the resources in a cyclic manner so that the last
process is waiting for the resource which is being held by the first process.
PREVENTION
Introduction to Deadlock
Consider a one-way road with two cars approaching from opposite directions, blocking each
other. The road is the resource, and crossing it represents a process. Since it's a one-way road,
both cars can't move simultaneously, leading to a deadlock.
Deadlock Characteristics/Conditions
1. Mutual exclusion
2. Hold and wait
3. No preemption
4. Circular wait.
1. Deadlock prevention: The system is designed so that at least one of the four
necessary conditions can never hold, so every request that is made is safe.
Example: Only allowing traffic from one direction will exclude the possibility
of blocking the road.
2. Deadlock avoidance: The Operating system runs an algorithm on requests to check for
a safe state. Any request that may result in a deadlock is not granted.
Example: Checking each car and not allowing any car that can block the road. If
there is already traffic on the road, then a car coming from the opposite direction can
cause blockage.
Deadlock prevention is a set of methods used to ensure that all requests are safe, by
eliminating at least one of the four necessary conditions for deadlock.
Deadlock prevention is eliminating one of the necessary conditions of deadlock so that only
safe requests are made to OS and the possibility of deadlock is excluded before making
requests.
As now requests are made carefully, the operating system can grant all requests safely.
Here OS does not need to do any additional tasks as it does in deadlock avoidance by running
an algorithm on requests checking for the possibility of deadlock.
Deadlock prevention techniques refer to violating any one of the four necessary conditions.
We will see one by one how we can violate each of them to make safe requests and which is
the best approach to prevent deadlock.
Mutual Exclusion
Some resources are inherently unshareable, for example, Printers. For unshareable resources,
processes require exclusive control of the resources.
Shared resources do not cause deadlock but some resources can't be shared among processes,
leading to a deadlock.
For Example: read operation on a file can be done simultaneously by multiple processes, but
write operation cannot. Write operation requires sequential access, so, some processes have
to wait while another process is doing a write operation.
It is not possible to eliminate mutual exclusion, as some resources are inherently non-
shareable,
For Example Tape drive, as only one process can access data from a Tape drive at a time.
For other resources like printers, we can use a technique called Spooling.
A Printer has associated memory which can be used as a spooler directory (memory that is
used to store files that are to be printed next).
In spooling, when multiple processes request the printer, their jobs ( instructions of the
processes that require printer access) are added to the queue in the spooler directory.
The printer is allocated to jobs on a first come first serve (FCFS) basis. In this way, the
process does not have to wait for the printer and it continues its work after adding its job to
the queue.
We can understand the workings of the Spooler directory better with the diagram given
below:
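The spooler directory can be sketched as a FIFO queue: each process enqueues its job and continues with its own work, while the printer serves jobs in first-come-first-served order (the process and file names are illustrative):

```python
from collections import deque

# The spooler directory is modeled as a FIFO queue of submitted jobs.
spooler = deque()

def submit(process, job):
    # The process adds its job and immediately continues its own work.
    spooler.append((process, job))

def print_next():
    # The printer serves jobs on a first-come-first-served (FCFS) basis.
    return spooler.popleft()

submit("P1", "report.pdf")
submit("P2", "photo.png")
submit("P3", "notes.txt")

assert print_next() == ("P1", "report.pdf")   # FCFS order
assert print_next() == ("P2", "photo.png")
```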
Challenges of Spooling:
Spooling can only be used for resources with associated memory, like a Printer.
It may also cause a race condition. A race condition is a situation
where two or more processes access a resource concurrently and the final result cannot
be definitively determined.
For Example: In printer spooling, if process A overwrites the job of process B in
the queue, then process B will never receive the output.
It is not a foolproof method: once the queue becomes full, incoming processes go
into a waiting state.
For Example: If the size of the queue is 10 blocks then whenever there are more
than 10 processes, they will go in a waiting state.
Hold and wait is a condition in which a process holds one resource while simultaneously
waiting for another resource that is held by a different process. The
process cannot continue until it gets all the required resources. There are two ways
to eliminate this condition:
1. By eliminating wait:
The process specifies the resources it requires in advance so that it does not have to
wait for allocation after execution starts.
For Example: Process1 declares in advance that it requires both Resource1 and
Resource2
2. By eliminating hold:
The process has to release all resources it is currently holding before making a new
request.
For Example: Process1 has to release Resource2 and Resource3 before making
request for Resource1
Challenges:
As a process executes instructions one by one, it cannot know about all required
resources before execution.
Releasing all the resources a process is currently holding is also problematic as they
may not be usable by other processes and are released unnecessarily.
For example: When Process1 releases both Resource2 and Resource3, Resource3 is
released unnecessarily as it is not required by Process2.
No preemption
Preemption means forcibly taking a resource away from a process before it has finished
using it. For example, if process P1 is using a resource and a high-priority process P2
requests the resource, process P1 is stopped and the resource is allocated to P2.
There are two ways to eliminate this condition by preemption:
1. If a process is holding some resources and waiting for other resources, then it should
release all previously held resources and put a new request for the required resources
again. The process can resume once it has all the required resources.
For example: If a process has resources R1, R2, and R3 and it is waiting for
resource R4, then it has to release R1, R2, and R3 and put a new request of all
resources again.
2. If a process P1 is waiting for some resource, and there is another process P2 that is
holding that resource and is itself blocked waiting for some other resource, then the
resource is taken from P2 and allocated to P1. In this way process P2 is preempted, and
it requests its required resources again in order to resume the task. These
approaches are possible only for resources whose states are easily saved and restored,
such as memory and registers.
Challenges:
These approaches are problematic as the process might be actively using these
resources and halting the process by preempting can cause inconsistency.
For example: If a process is writing to a file and its access is revoked for the process
before it completely updates the file, the file will remain unusable and in an
inconsistent state.
Circular Wait
In circular wait, two or more processes wait for resources in a circular order. We can
understand this better by the diagram given below:
To eliminate circular wait, we assign a number (priority) to each resource. A process can
only request resources in increasing order of that numbering.
In the example above, process P3 is requesting resource R1, which has a number lower than
that of resource R3, which is already allocated to P3. So this request is invalid and cannot
be made, and the circular wait is broken.
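The resource-ordering rule can be sketched in code: give every lock a fixed number and always acquire locks in increasing order of that number, so a circular wait cannot form (the numbering here is illustrative):

```python
import threading

# Fixed global ordering: each resource gets a number.
R1, R2, R3 = threading.Lock(), threading.Lock(), threading.Lock()
order = {id(R1): 1, id(R2): 2, id(R3): 3}

def acquire_in_order(*locks):
    # Sort requests by the global numbering before acquiring:
    # every process climbs the same ladder, so no circular wait can form.
    for lock in sorted(locks, key=lambda l: order[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

def worker(a, b):
    for _ in range(1000):
        acquire_in_order(a, b)
        release_all(a, b)

# Two threads request the same pair of locks in opposite orders.
# Without ordering this pattern can deadlock; with it, both finish.
t1 = threading.Thread(target=worker, args=(R1, R2))
t2 = threading.Thread(target=worker, args=(R2, R1))
t1.start(); t2.start()
t1.join(); t2.join()
print("no deadlock")
```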
Challenges:
The appropriate numbering of resources differs according to the situation and use case.
For example, a media player may give a printer a lower priority, while a document
processor might give it a higher one.
AVOIDANCE
Deadlock Avoidance is a process used by the Operating System to avoid deadlock. Let's
first understand what deadlock in an Operating System is. Deadlock is a situation that
occurs in the Operating System when a process enters a waiting state because another
waiting process is holding the resource it demands. Deadlock is a common problem in
multiprocessing, where several processes share a specific type of mutually exclusive
resource known as a software lock.
The operating system avoids Deadlock by knowing the maximum resource requirements of
the processes initially, and also, the Operating System knows the free resources available at
that time. The operating system tries to allocate the resources according to the process
requirements and checks if the allocation can lead to a safe state or an unsafe state. If the
resource allocation leads to an unsafe state, then the Operating System does not proceed
further with the allocation sequence.
How does Deadlock Avoidance Work?
Let's understand the working of Deadlock Avoidance with the help of an intuitive example.
Let's consider three processes P1, P2, and P3, which tell the Operating System their
maximum resource requirements in advance. Suppose that, to finish, P1 still needs 4 more
resources, P2 needs 3 more, and P3 needs 2 more. But only 2 resources are free now. Can
P1, P2, and P3 satisfy their requirements? Let's try to find out.
As only 2 resources are free for now, only P3 can satisfy its need for 2 resources. If P3 takes
2 resources and completes its execution, then P3 can release its 3 (1+2) resources. Now the
three free resources that P3 released can satisfy the need of P2. Now, P2 after taking the three
free resources, can complete its execution and then release 5 (2+3) resources. Now five
resources are free. P1 can now take 4 out of the 5 free resources and complete its execution.
So, with 2 free resources available initially, all the processes were able to complete their
execution leading to a Safe State. The order of execution of the processes was <P3, P2, P1>.
What if initially there was only 1 free resource available? None of the processes would be
able to complete its execution, thus leading to an unsafe state.
We use two words, safe and unsafe states. What are those states? Let's understand these
concepts.
Safe State - In the above example, we saw that the Operating System was able to satisfy the
needs of all three processes, P1, P2, and P3, with their resource requirements. So all the
processes were able to complete their execution in a certain order like P3->P2->P1.
So, If the Operating System is able to allocate or satisfy the maximum resource
requirements of all the processes in any order then the system is said to be in Safe State.
Unsafe State - If the Operating System is not able to prevent processes from requesting
resources in a way that can lead to a deadlock, then the system is said to be in an Unsafe
State. An unsafe state does not necessarily cause a deadlock; it may or may not lead to one.
The diagram above shows the three states of the system. An unsafe state does not always
cause a deadlock; only some unsafe states lead to one, as shown in the diagram.
Maximum resource requirements:
Process R1 R2 R3 R4
P1 3 2 3 2
P2 2 3 1 4
P3 3 1 5 0
Currently allocated resources:
Process R1 R2 R3 R4
P1 1 2 3 1
P2 2 1 0 2
P3 2 0 1 0
Total resources in the system:
R1 R2 R3 R4
7 4 5 4
We can find the number of available resources by subtracting the currently allocated
resources from the total resources.
Available resources:
R1 R2 R3 R4
2 1 1 1
Now, the need of each process can be calculated as Need = Maximum requirement - Currently allocated:
Process R1 R2 R3 R4
P1 2 0 0 1
P2 0 2 1 2
P3 1 1 4 0
The available free resources are <2,1,1,1> of resources of R1, R2, R3, and R4 respectively,
which can be used to satisfy only the requirements of process P1 only initially as process P2
requires 2 R2 resources which are not available. The same is the case with Process P3, which
requires 4 R3 resources which is not available initially.
1. Firstly, Process P1 will take the available resources and satisfy its resource need,
complete its execution and then release all its allocated resources. Process P1 is
initially allocated <1,2,3,1> resources of R1, R2, R3, and R4 respectively.
Process P1 needs <2,0,0,1> more resources of R1, R2, R3, and R4 respectively to complete
its execution. So, process P1 takes the available free resources <2,1,1,1>
of R1, R2, R3, R4 respectively, completes its execution, and then releases both its
currently allocated resources and the free resources it used to complete its
execution. Thus P1 releases <1+2, 2+1, 3+1, 1+1> = <3,3,4,2> resources of R1, R2, R3,
and R4 respectively.
2. After step 1 now, available resources are now <3,3,4,2>, which can satisfy the need of
Process P2 as well as process P3. After process P2 uses the available Resources and
completes its execution, the available resources are now <5,4,4,4>.
3. Now, the available resources are <5,4,4,4>, and the only Process left for execution is
Process P3, which requires <1,1,4,0> resources each of R1, R2, R3, and R4. So it can
easily use the available resources and complete its execution. After P3 is executed, the
resources available are <7,4,5,4>, which is equal to the maximum resources or total
resources available in the System.
So, the process execution sequence in the above example was <P1, P2, P3>. But it could
also have been <P1, P3, P2> had process P3 been executed before process P2,
which was possible since there were sufficient resources available to satisfy the needs of
both P2 and P3 after step 1 above.
Deadlock Avoidance Solution
Resource Allocation Graph (RAG) is used to represent the state of the System in the form of
a Graph. The Graph contains all processes and resources which are allocated to them and also
the requesting resources of every Process. Sometimes if the number of processes is less, We
can easily identify a deadlock in the System just by observing the Graph, which can not be
done easily by using tables that we use in Banker's algorithm.
Resource Allocation Graph has a process vertex represented by a circle and a resource vertex
represented by a box. The instance of the resources is represented by a dot inside the box. The
instance can be single or multiple instances of the resource. An example of RAG is shown
below.
Banker's Algorithm
Banker's algorithm does the same thing we explained in the deadlock-avoidance example
above. The algorithm predetermines whether the system will be in a safe state by
simulating the allocation of resources to the processes according to their maximum
resource requirements. It makes a safe-state check before actually allocating the resources
to the processes.
Banker's Algorithm is particularly useful when there are many processes and many
resources.
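The safe-state check can be written out directly using the Maximum and Allocation tables from the example above (Need = Maximum - Allocation, Available = Total minus everything allocated). This is a sketch of the safety part of Banker's algorithm:

```python
maximum    = [[3, 2, 3, 2], [2, 3, 1, 4], [3, 1, 5, 0]]  # rows: P1, P2, P3
allocation = [[1, 2, 3, 1], [2, 1, 0, 2], [2, 0, 1, 0]]
total      = [7, 4, 5, 4]

n, m = len(maximum), len(total)
need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
available = [total[j] - sum(allocation[i][j] for i in range(n)) for j in range(m)]
# available is now [2, 1, 1, 1], matching the table in the text

finished, sequence = [False] * n, []
progress = True
while progress:
    progress = False
    for i in range(n):
        if not finished[i] and all(need[i][j] <= available[j] for j in range(m)):
            # Pi can run to completion and then release everything it holds.
            available = [available[j] + allocation[i][j] for j in range(m)]
            finished[i], progress = True, True
            sequence.append(f"P{i + 1}")

safe = all(finished)
print(safe, sequence)  # True ['P1', 'P2', 'P3']
```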
DETECTION
In deadlock detection, the OS doesn't apply any mechanism to avoid or prevent deadlocks.
Instead, the system assumes that a deadlock will definitely occur at some point. To get rid
of deadlocks, the OS periodically checks the system for any deadlock. If it finds one, the
OS recovers the system using some recovery techniques.
The main task of the OS here is detecting the deadlocks, which it can do with the help
of the resource allocation graph.
For single-instance resource types, if a cycle is formed in the graph then there will
definitely be a deadlock. For multiple-instance resource types, on the other hand,
detecting a cycle is not enough. We have to apply the safety algorithm to the system by
converting the resource allocation graph into an allocation matrix and a request matrix.
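For single-instance resources, detection reduces to finding a cycle in the wait-for graph. A sketch using depth-first search, with the P1 -> P2 -> P3 -> P1 cycle from the earlier example:

```python
# Wait-for graph from the earlier example: an edge A -> B means
# "process A is waiting for a resource held by process B".
wait_for = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}

def has_cycle(graph):
    # Depth-first search: revisiting a node on the current path means a cycle.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY                       # node is on the current path
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:    # back edge: cycle found
                return True
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK                      # fully explored, no cycle here
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

assert has_cycle(wait_for)                       # deadlock detected
assert not has_cycle({"P1": ["P2"], "P2": []})   # no cycle, no deadlock
```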
RECOVERY
For Resource
Preempt the resource
We can snatch one of the resources from its owner (a process) and give
it to another process, with the expectation that the latter will complete its execution and
release the resource sooner. Choosing which resource to snatch, however, can be
difficult.
Rollback to a safe state
The system passes through various states before getting into the deadlock state. The
operating system can roll the system back to a previous safe state. For this purpose, the OS
needs to implement checkpointing at every state.
The moment we get into deadlock, we roll back all the allocations to get into the
previous safe state.
For Process
Kill a process
Killing a process can solve our problem, but the bigger concern is deciding which
process to kill. Generally, the operating system kills the process which has done the least
amount of work so far.
Kill all processes
This is not a suggested approach, but it can be used if the problem becomes very
serious. Killing all processes leads to inefficiency in the system because all the processes
will have to execute again from the start.
5 MARKS
1. What do you mean by an operating system? What are its basic functions?
2. What is a deadlock? Explain.
UNIT-2
Distributed Operating Systems: Issues – Communication Primitives – Lamport's Logical
Clocks – Deadlock handling strategies – Issues in deadlock detection and resolution-distributed
file systems –design issues – Case studies – The Sun Network File System-Coda.
ISSUES:
Examples
Distributed systems
Application examples
Email
News
Multimedia information systems - video conferencing
Airline reservation system
Banking system
File downloads (BitTorrent)
Messaging
Illustration
Design Issues
Openness
Resource Sharing
Concurrency
Scalability
Fault-Tolerance
Transparency
High-Performance
Naming
Communication
Communication is an essential part of distributed systems - e.g., clients and servers must
communicate for request and response
Asynchronous or non-blocking
Types of Communication
Client-Server
Group Multicast
Function Shipping
Performance of distributed systems depends critically on communication performance
We will study the software components involved in communication
Client-Server Communication
Group Multicast
Software Structure
Consistency Management
Caching
Suppose your program (pseudocode) adds numbers stored in a file as follows (assume
each number is 4 bytes):
for i = 1 to 1000
    tmp = read next number from file
    sum = sum + tmp
end for
With no caching, each read goes over the network, which sends back one new 4-byte
number. Assuming 1 millisecond (ms) to get a number, it requires a total of 1 s to get all
of the numbers.
With caching, assuming 1000-byte pages, each page holds 250 numbers, so 249 of
every 250 reads will be local requests (served from the cache).
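The arithmetic above can be checked with a short calculation (assuming the 1 ms network cost applies to each fetch, whether it brings one number or one whole page):

```python
NUMBERS = 1000
NUMBER_SIZE = 4          # bytes per number
PAGE_SIZE = 1000         # bytes per cached page
REMOTE_MS = 1            # cost of one network fetch, in milliseconds

# Without caching: every read goes over the network.
no_cache_ms = NUMBERS * REMOTE_MS                  # 1000 ms = 1 s

# With caching: one remote fetch brings in a whole page of numbers.
per_page = PAGE_SIZE // NUMBER_SIZE                # 250 numbers per page
pages = NUMBERS * NUMBER_SIZE // PAGE_SIZE         # 4 page fetches in total
with_cache_ms = pages * REMOTE_MS                  # 4 ms of network time

print(no_cache_ms, per_page, pages)                # 1000 250 4
```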
Consistency
Update consistency
when multiple processes access and update data concurrently
effect should be such that all processes sharing data see the same values
(consistent image)
E.g., sharing data in a database
Replication consistency
when data replicated and once process updates it
All other processes should see the updated data immediately
e.g., replicated files, electronic bulletin board
Cache consistency
When data (normally at different levels of granularity, such as pages, disk
blocks, files…) is cached and updates by one process, it must be invalidated or
updated by others
When and how depends on the consistency models used
Workload Allocation
In distributed systems many resources (e.g., other workstations, servers etc.) may be
available for “computing”
Capacity and memory size of a workstation or server may determine which
applications are able to run
Parts of applications may be run on different workstations for parallelism (e.g.,
compiling different files of the same program)
Some workstations or servers may have special hardware to do certain types of
applications fast (e.g., video compression)
Idle workstations may be utilized for better performance and utilization
In a processor pool model, processes are allocated to processors for their lifetime (e.g.,
the Amoeba research OS supports this concept).
Quality-of-Service
Quality of Service (a.k.a. QoS) refers to performance and other service expectations of a client
or an application.
Performance
Reliability and availability
Security
Naming
Scalability
Compatibility
Process synchronization
Data migration: data are brought to the location that needs them.
o distributed filesystem (file migration)
o distributed shared memory (page migration)
Computation migration: the computation migrates to another location.
o remote procedure call: computation is done at the remote machine.
o process migration: processes are transferred to other processors.
Security
Structuring
Communication Networks
Communication Models
message passing
remote procedure call (RPC)
Message Passing Primitives
You can find more information on these and other socket I/O operations in the Unix man pages.
COMMUNICATION PRIMITIVES
Message Passing
Locking
Leader Election
Atomic Transactions
Consensus
Replication
Message Passing
A distributed system’s nodes can communicate with one another by using a protocol
called message passing. It permits communication between nodes that might be
dispersed geographically, run different operating systems or programming languages,
and have different processing powers.
e.g Message passing, for instance, can be used in a microservices architecture to
facilitate communication between several services that each carry out particular tasks.
When Service B receives a message from Service A, it may process it and reply to
Service A. This enables services to function independently of one another and provides
for flexible connectivity between them.
Locking
Locking is a mechanism that grants one node at a time exclusive access to a shared
resource, so that concurrent updates do not conflict.
For instance, locking can be used in a distributed database to prevent multiple nodes
from writing to the same database record at the same time. The other nodes must wait
for the lock to be released, while only one node can acquire the lock and execute the
write operation.
Leader Election
A distributed system’s leader node is chosen using the leader election protocol to control
coordination and decision-making. It’s frequently used in fault-tolerant systems to make
sure that only one node is in charge of managing operations and making decisions.
For instance, a leader election protocol can be used in a distributed system with
numerous nodes to guarantee that one node is designated as the primary node in charge of
coordinating operations. If the primary node fails, another node can be elected as the new
leader to take over the coordination and decision-making duties.
Atomic Transactions
Atomic transactions are a method for ensuring that several activities are carried out as a
single, indivisible unit, thereby ensuring consistency and dependability. Atomic transactions in
a distributed system guarantee that a set of operations will either succeed completely or fail
completely.
For instance, an atomic transaction can be used in a banking application to guarantee
that a money transfer between two accounts either succeeds completely or fails completely. To
maintain consistency and dependability, the entire transaction is rolled back if any portion of it
fails.
Consensus
For instance, a consensus mechanism such as proof of work or proof of stake is used in
a blockchain network to make sure that all nodes agree on the network’s state, the ordering
of transactions, and the generation of new blocks.
Replication
Through replication, it is made possible for another node to take over processing duties
in the event of a failed node without affecting the system’s overall performance.
For instance, replication can be used in a web application to guarantee that many
instances of the program are active at once, offering high availability and scalability. If one
instance fails, processing can continue without affecting the overall user experience.
Take the starting value as 1, since it is the 1st event and there is no incoming value
at the starting point:
e11 = 1
e21 = 1
The value of the next point will go on increasing by d (d = 1), if there is no
incoming value i.e., to follow [IR1].
e12 = e11 + d = 1 + 1 = 2
e13 = e12 + d = 2 + 1 = 3
e14 = e13 + d = 3 + 1 = 4
e15 = e14 + d = 4 + 1 = 5
e16 = e15 + d = 5 + 1 = 6
e22 = e21 + d = 1 + 1 = 2
e24 = e23 + d = 3 + 1 = 4
e26 = e25 + d = 6 + 1 = 7
When there is an incoming value, follow [IR2], i.e., take the maximum
between Cj and Tm + d.
e17 = max(7, 5) = 7, [e16 + d = 6 + 1 = 7, e24 + d = 4 + 1 = 5, maximum
among 7 and 5 is 7]
e23 = max(3, 3) = 3, [e22 + d = 2 + 1 = 3, e12 + d = 2 + 1 = 3, maximum
among 3 and 3 is 3]
e25 = max(5, 6) = 6, [e24 + 1 = 4 + 1 = 5, e15 + d = 5 + 1 = 6, maximum
among 5 and 6 is 6]
Limitation:
In case of [IR1], if a -> b, then C(a) < C(b) is always true.
In case of [IR2], if a -> b, then C(a) < C(b) may or may not be true.
#include <bits/stdc++.h>
using namespace std;

// m[i][j] = 1  : event i of P1 sends a message to event j of P2
// m[i][j] = -1 : event i of P1 receives a message from event j of P2
void lamportLogicalClock(int e1, int e2, int m[5][3])
{
    vector<int> p1(e1), p2(e2);
    for (int i = 0; i < e1; i++) p1[i] = i + 1; // [IR1] for P1
    for (int i = 0; i < e2; i++) p2[i] = i + 1; // [IR1] for P2

    // [IR2]: on a receive, the clock becomes max(own value, sender's value + d), d = 1
    for (int i = 0; i < e2; i++)
        for (int j = 0; j < e1; j++) {
            if (m[j][i] == 1) { // P1's event j sends to P2's event i
                p2[i] = max(p2[i], p1[j] + 1);
                for (int k = i + 1; k < e2; k++) p2[k] = p2[k - 1] + 1;
            }
            if (m[j][i] == -1) { // P1's event j receives from P2's event i
                p1[j] = max(p1[j], p2[i] + 1);
                for (int k = j + 1; k < e1; k++) p1[k] = p1[k - 1] + 1;
            }
        }

    cout << "The time stamps of events in P1:" << endl;
    for (int i = 0; i < e1; i++) cout << p1[i] << " ";
    cout << endl << "The time stamps of events in P2:" << endl;
    for (int i = 0; i < e2; i++) cout << p2[i] << " ";
    cout << endl;
}

// Driver Code
int main()
{
    int e1 = 5, e2 = 3;
    // e12 sends a message to e23 (m[1][2] = 1);
    // e15 receives a message from e22 (m[4][1] = -1)
    int m[5][3] = { { 0, 0, 0 }, { 0, 0, 1 }, { 0, 0, 0 },
                    { 0, 0, 0 }, { 0, -1, 0 } };
    lamportLogicalClock(e1, e2, m);
    return 0;
}
Output
The time stamps of events in P1:
1 2 3 4 5
The time stamps of events in P2:
1 2 3
Time Complexity: O(e1 * e2 * (e1 + e2))
Auxiliary Space: O(e1 + e2)
DEADLOCK HANDLING STRATEGIES
The following are the strategies used for Deadlock Handling in Distributed System:
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection and Recovery
1. Deadlock Prevention: As the name implies, this strategy designs the system in such a way
that deadlock can never happen: if any one of the deadlock-causing conditions can never be
met, deadlock is prevented. The following three methods prevent deadlock by making one
of the deadlock conditions unsatisfiable:
Collective Requests: In this strategy, every process declares all the resources
required for its execution beforehand and is allowed to execute only if all of
the required resources are available. Resources are released only when the
process finishes. Hence, the hold-and-wait condition of deadlock is prevented.
But the issue is that the initial resource requirements of a process are based on
an estimate made before it starts, not on what it will actually need. So resources
may be unnecessarily occupied by a process, and prior allocation of resources also
reduces potential concurrency.
Ordered Requests: In this strategy, an ordering is imposed on the resources and
thus, each process requests resources in increasing order. Hence, the circular wait
condition of deadlock can be prevented.
The ordering strictly implies that a process never asks for a lower-ordered
resource while holding a higher-ordered one.
There are two more ways of dealing with global timing and
transactions in distributed systems, both of which are based on the
principle of assigning a global timestamp to each transaction as soon
as it begins.
During the execution of a process, if a process appears to be blocked
on a resource acquired by another process, the timestamps of the two
processes are compared to identify the process with the larger
timestamp. In this way, cyclic waiting can be prevented.
It is better to give priority to old processes because of their long
existence and because they might be holding more resources.
This also eliminates starvation, since a restarted younger transaction
keeps its original timestamp and eventually becomes the oldest in the system.
Preemption: Resource allocation strategies that reject the no-preemption condition
can be used to avoid deadlocks.
Wait-die: If an older process requests a resource held by a younger
process, the older process waits. If a younger process requests a
resource held by an older process, the younger process is killed ("dies").
Wound-wait: If an older process requests a resource held by a younger
process, the younger process is preempted ("wounded") and killed, and
the older process acquires the resource. If a younger process requests a
resource held by an older process, it waits.
2. Deadlock Avoidance: In this strategy, deadlock is avoided by examining the state of the
system at every step. The distributed system reviews the allocation of resources, and
wherever granting a request would lead to an unsafe state, the request is declined so that
the system stays in a safe state. Hence, resource allocation takes time whenever a process
makes a request: the system first analyzes whether granting the resources will leave the
system in a safe or an unsafe state, and only then is the allocation made.
A safe state is a state in which the system is not deadlocked and there exists an
order in which the processes' requests can be granted.
An unsafe state is a state for which no safe sequence exists.
A safe sequence is an ordering of the processes such that all the processes can
run to completion in a safe state.
3. Deadlock Detection and Recovery: In this strategy, deadlock is detected and an attempt
is made to resolve the deadlock state of the system. These approaches rely on a Wait-For
Graph (WFG), which in some methods is generated and evaluated for cycles. The following
two requirements must be met by a deadlock detection algorithm:
Progress: The algorithm must find all existing deadlocks in finite time. There
should be no deadlock existing in the system which goes undetected under this
condition. To put it another way, after all wait-for dependencies for a deadlock
have arisen, the algorithm should not wait for any additional events to detect the
deadlock.
No False Deadlocks: Deadlocks that do not exist should not be reported by the
algorithm; such reports are called phantom or false deadlocks.
There are different types of deadlock detection techniques:
Centralized Deadlock Detector: The resource graph for the entire system is
managed by a central coordinator. When the coordinator detects a cycle, it
terminates one of the processes involved in the cycle to break the deadlock.
Messages must be passed when updating the coordinator’s graph. Following are
the methods:
A message must be provided to the coordinator whenever an arc is
created or removed from the resource graph.
Every process can transmit a list of arcs that have been added or
removed since the last update periodically.
When information is needed, the coordinator asks for it.
Hierarchical Deadlock Detector: In this approach, deadlock detectors are
arranged in a hierarchy. Here, only those deadlocks can be detected that fall within
their range.
Distributed Deadlock Detector: In this approach, detectors are distributed so
that all the sites can fully participate in resolving the deadlock state. A probe-
based scheme can be used for this purpose: it follows local WFGs to detect local
deadlocks and uses probe messages to detect global deadlocks.
There are four classes for the Distributed Detection Algorithm:
Path-pushing: In path-pushing algorithms, the detection of distributed deadlocks
is carried out by maintaining an explicit global WFG.
Edge-chasing: In an edge-chasing algorithm, probe messages are used to detect
the presence of a cycle in a distributed graph structure along the edges of the
graph.
Diffusion computation: Here, the computation for deadlock detection is
dispersed throughout the system’s WFG.
Global state detection: The detection of Distributed deadlocks can be made by
taking a snapshot of the system and then inspecting it for signs of a deadlock.
To recover from a deadlock, one of the following methods can be used:
Terminate one or more of the processes that created the unsafe state.
Use checkpoints for periodic saving of process state, so that whenever required,
processes that make the system unsafe can be rolled back and a safe state of the
system maintained.
Break existing wait-for relationships between the processes.
Roll back one or more blocked processes and allocate their resources to other
blocked processes, allowing them to resume operation.
Deadlock
Deadlock is a fundamental problem in distributed systems.
A process may request resources in any order, which may not be known a priori,
and a process can request resources while holding others.
If the sequence of allocations of resources to the processes is not controlled,
deadlocks can occur.
A deadlock is a state where a set of processes request resources that are held by
other processes in the set.
DEADLOCK HANDLING APPROACHES:
1. Deadlock Prevention:
Prevention involves ensuring that at least one of the necessary conditions for
deadlock (mutual exclusion, hold and wait, no preemption, circular wait) is not
satisfied.
By carefully managing resource allocation and enforcing certain policies,
deadlocks can be avoided altogether.
However, prevention methods can be complex, restrictive, and may limit system
performance or resource utilization.
2. Deadlock Avoidance: Avoidance examines each allocation request and grants it only
if the resulting state is safe, as described in the strategy above.
DISTRIBUTED FILE SYSTEM
A distributed file system (DFS) is a file system that is distributed across various file servers
and locations. It permits programs to access and store remote data in the same way as local
files. It also permits the user to access files from any system. It allows network users to
share information and files in a regulated and permitted manner. However, the servers have
complete control over the data and provide users with access control.
DFS's primary goal is to enable users of physically distributed systems to share resources and
information through the Common File System (CFS). It is a file system that runs as a part of
the operating systems. Its configuration is a set of workstations and mainframes that a LAN
connects. The process of creating a namespace in DFS is transparent to the clients.
DFS has two components in its services, and these are as follows:
1. Local Transparency
2. Redundancy
Local Transparency
Redundancy
In the case of failure or heavy load, these components work together to increase data
availability by allowing data from multiple places to be logically combined under a single
folder known as the "DFS root".
It is not required to use both DFS components simultaneously; the namespace component can
be used without the file replication component, and the file replication component can be used
between servers without the namespace component.
Features
There are various features of the DFS. Some of them are as follows:
Transparency
1. Structure Transparency
The client does not need to be aware of the number or location of file servers and storage
devices. For structure transparency, multiple file servers should be provided for adaptability,
dependability, and performance.
2. Naming Transparency
There should be no hint of the file's location in the file's name. When the file is transferred
from one node to another, the file name should not change.
3. Access Transparency
Local and remote files must be accessible in the same method. The file system must
automatically locate the accessed file and deliver it to the client.
4. Replication Transparency
When a file is replicated across various nodes, the copies and their locations must be hidden
from the clients.
Scalability
The distributed system will inevitably increase over time when more machines are added to the
network, or two networks are linked together. A good DFS must be designed to scale rapidly
as the system's number of nodes and users increases.
Data Integrity
Many users usually share a file system. The file system needs to secure the integrity of data
saved in a transferred file. A concurrency control method must correctly synchronize
concurrent access requests from several users who are competing for access to the same file. A
file system commonly provides users with atomic transactions that are high-level concurrency
management systems for data integrity.
High Reliability
The risk of data loss must be limited as much as feasible in an effective DFS. Users must not
feel compelled to make backups of their files due to the system's unreliability. Instead, a file
system should back up key files so that they may be restored if the originals are lost. As a high-
reliability strategy, many file systems use stable storage.
High Availability
A DFS should be able to function in the case of a partial failure, like a node failure, a storage
device crash, and a link failure.
Ease of Use
The UI of a file system in multiprogramming must be simple, and the commands in the file
must be minimal.
Performance
Performance is assessed by the average time it takes to satisfy a client request. It must be
comparable to that of a centralized file system.
Initial versions of DFS used Microsoft's File Replication Service (FRS), enabling basic file
replication among servers. FRS detects new or altered files and distributes the most recent
versions of the full file to all servers.
Windows Server 2003 R2 developed the "DFS Replication" (DFSR). It helps to enhance
FRS by only copying the parts of files that have changed and reducing network traffic with
data compression. It also gives users the ability to control network traffic on a configurable
schedule using flexible configuration options.
The DFS's server component was first introduced as an additional feature. When it
was incorporated into Windows NT 4.0 Server, it was called "DFS 4.1". Later, it was
declared a standard component of all Windows 2000 Server editions. Windows NT 4.0 and
later versions of Windows have client-side support.
Linux kernels 2.6.14 and later include a DFS-compatible SMB client VFS known
as "cifs". DFS is available in versions Mac OS X 10.7 (Lion) and later.
There are two methods by which DFS might be implemented, and these are as follows:
Standalone DFS namespace: It does not use Active Directory and only permits DFS roots
that exist on the local system. A standalone DFS may only be accessed on the system that
created it. It offers no fault tolerance and may not be linked to other DFS.
Domain-based DFS namespace: It stores the DFS configuration in Active Directory and
creates a namespace root at \\<domainname>\<dfsroot> or \\<FQDN>\<dfsroot>.
DFS namespace
Traditional file shares that are linked to a single server use SMB paths of the form:
\\<SERVER>\<path>\<subpath>
Domain-based DFS file share paths are identified by using the domain name in place of the
server's name, in the form:
\\<DOMAIN.NAME>\<dfsroot>\<path>
When users access such a share, either directly or through mapping a disk, their computer
connects to one of the accessible servers connected with that share, based on rules defined by
the network administrator. For example, the default behavior is for users to access the nearest
server to them; however, this can be changed to prefer a certain server.
There are several applications of the distributed file system. Some of them are as follows:
Hadoop: The Hadoop Distributed File System (HDFS) stores very large files across
clusters of commodity servers.
NFS: A client-server architecture enables a computer user to store, update, and view files
remotely. It is one of several DFS standards for network-attached storage.
SMB: IBM developed the SMB protocol for file sharing. It was designed to permit systems
to read and write files on a remote host across a LAN. The remote host's directories that
may be accessed through SMB are known as "shares".
NetWare
There are various advantages and disadvantages of the distributed file system. These
are as follows:
Advantages
There are various advantages of the distributed file system. Some of the advantages are as
follows:
Disadvantages
There are various disadvantages of the distributed file system. Some of the disadvantages are
as follows:
DESIGN ISSUES
CASE STUDIES
Google File System (GFS): a file system that serves very large data files (hundreds of
gigabytes or terabytes). The architecture presented here is a slightly simplified description
of the Google File System and of several of its descendants, including the Hadoop
Distributed File System (HDFS), available as an open-source project.
The technical environment is that of a high-speed local network connecting a cluster of
servers. The file system is designed to satisfy some specific requirements:
What is NFS?
Network File System (NFS) is a networking protocol for distributed file sharing. A file
system defines the way data in the form of files is stored and retrieved from storage devices,
such as hard disk drives, solid-state drives and tape drives. NFS is a network file sharing
protocol that defines the way files are stored and retrieved from storage devices across
networks.
The NFS protocol defines a network file system, originally developed for local file
sharing among Unix systems and released by Sun Microsystems in 1984. The NFS protocol
specification was first published by the Internet Engineering Task Force (IETF) as an internet
protocol in RFC 1094 in 1989. The NFS version 4 (NFSv4) protocol is documented in RFC
7530, with later minor versions specified in RFC 5661 (NFSv4.1) and RFC 7862 (NFSv4.2).
NFS is one of the most widely used protocols for file servers. NFS implementations are
available for most modern operating systems (OSes), including the following:
Cloud vendors also implement the NFS protocol for cloud storage, including Amazon
Elastic File System, NFS file shares in Microsoft Azure and Google Cloud Filestore.
Any device that can be attached to an NFS host file system can be shared through NFS.
This includes hard disks, solid state drives, tape drives, printers and other peripherals. Users
with appropriate permissions can access resources from their client machines as if those
resources are mounted locally.
NFS is an application layer protocol, meaning that it can operate over any transport or
network protocol stack. However, in most cases NFS is implemented on systems running
the TCP/IP protocol suite. The original intention for NFS was to create a simple
and stateless protocol for distributed file system sharing.
Early versions of NFS used the User Datagram Protocol (UDP) for its transport layer.
This eliminated the need to define a stateful storage protocol; however, NFS now supports both
the Transmission Control Protocol (TCP) and UDP. Support for TCP as a transport layer
protocol was added to NFS version 3 (NFSv3) in 1995.
NFS was initially conceived as a method for sharing file systems across workgroups using
Unix. It is still often used for ad hoc sharing of resources.
The process of setting up NFS service includes the following three steps, whether on an
enterprise file server or on a local workstation:
1. Verify that rpc.mountd or just mountd is installed and working. This is the NFS
mount daemon -- the program that listens to the network for NFS mount requests.
2. Create or choose a shared directory on the server. This is the NFS mount point.
Using the mount point and the server host name or address uniquely identifies the
NFS resource.
3. Configure permissions on the NFS server to enable authorized users to read, write
and execute files in the file system.
Setting up an NFS client machine to access an NFS server can be done manually using
the mount command, or persistently with an entry in the client's /etc/fstab file. On the
server side, the /etc/exports configuration file lists the exported directories: each line
contains a directory (the mount point), the client IP addresses or host names allowed to
access it, and any configuration options needed to control access to the file system.
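As a sketch of the steps above (the paths, host names, and network range here are invented example values, not taken from the text):

```shell
# On the server: list the shared directory in /etc/exports.
# Format: <directory>  <allowed clients>(<options>)
#   /srv/share  192.168.1.0/24(rw,sync)

# Reload the export table after editing /etc/exports:
exportfs -ra

# On the client: mount the share manually...
mount -t nfs server.example.com:/srv/share /mnt/share

# ...or persistently, with a line in the client's /etc/fstab:
#   server.example.com:/srv/share  /mnt/share  nfs  defaults  0 0
```

Both commands require root privileges, and the exact option set (rw, sync, defaults) depends on the access policy the administrator wants.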
NFS enables networked resource sharing, just like Microsoft's Server Message Block (SMB)
protocol. SMB and NFS are implemented on many different OSes.
Versions of NFS
NFSv4, the current version of NFS, and other versions subsequent to NFS version 2 (NFSv2)
are usually compatible after client and server machines negotiate a connection.
NFS versions from the earliest to the current one are as follows:
Sun Microsystems published the first implementation of its network file system in March 1984.
The objective was to provide transparent, remote access to file systems. Sun intended to
differentiate its NFS project from other Unix file systems by designing it to be easily portable
to other OSes and machine architectures.
NFSv2 is specified in RFC 1094. Its key features included the following:
It uses UDP as its transport protocol. This enables keeping the server stateless, with
file locking implemented outside of the core protocol.
Its file offsets are limited to a 32-bit quantity, making the maximum size of files
clients can access 4.2 GB.
Its data transfer size is limited to 8 KB, and it requires that NFS servers commit
data written by a client to a disk or non-volatile random-access memory (NVRAM)
before responding.
Specified in RFC 1813, NFSv3 incorporated the following new features and updates:
It extended file offsets from 32 to 64 bits, which removed the 4.2 GB maximum
file size limit.
It relaxed the 8 KB data transfer limitation rule to enable larger read and write
transfers.
TCP was added as a transport layer protocol option in NFSv3. TCP transport makes
it easier to use NFS over a wide area network (WAN) and enhances read and write
transfer capabilities.
Added a COMMIT operation enabling reliable asynchronous writes, and an
ACCESS RPC that improves support for access control lists, or ACLs, and power
users.
The server replies to WRITE RPCs instantly in NFSv3, without syncing to a disk
or NVRAM. To ensure data is on stable storage, the client only needs to send a
COMMIT RPC.
NFSv3 is reported to still be in widespread use. It is interoperable with NFSv4 but lacks support
for many of the new and improved features rolled out with later versions.
The update to NFSv4 was first documented in RFC 3010 in 2000. This is the first version of
the NFS specification that the IETF published as a proposed standard; prior versions were
published as informational.
A new API was included for future additions of new security mechanisms.
A slightly updated version of the NFS specification was republished in 2003 as RFC 3530
to correct errors in the first version and add some improvements to the protocol.
A minor version protocol, NFSv4.1 published as RFC 5661, added new features including the
following:
NFSv4.2 is documented in RFC 7862. It added the following new features and updates:
Dependence on RPCs makes NFS inherently insecure; it should only be used
on a trusted network behind a firewall. Otherwise, NFS is vulnerable to
internet threats.
Some reviews of NFSv4 and NFSv4.1 suggest that these versions have limited
bandwidth and scalability and that NFS slows down during heavy network
traffic. The bandwidth and scalability issue is reported to have improved with
NFSv4.2.
CODA
Coda is a distributed filesystem with its origin in AFS2. It has many features that are very
desirable for network filesystems. Currently, Coda has several features not found elsewhere.
CMU is making a serious effort to improve Coda. We believe that the system needs to be
taken from its current status to a widely available system. The research to date has produced a
lot of information regarding performance and implementation on which the design was based.
We are now in a position to further develop and adapt the system for wider use. We will
emphasize:
5 MARKS
10 MARKS
MCQ:
9. The capability of a system to adapt the increased service load is called ___________
a) scalability
b) tolerance
c) capacity
d) none of the mentioned
Answer: a
Explanation: None.
10. Internet provides _______ for remote login.
a) telnet
b) http
c) ftp
d) rpc
Answer: a
Explanation: None.
11. What is not true about a distributed system?
a) It is a collection of processor
b) All processors are synchronized
c) They do not share memory
d) None of the mentioned
Answer: b
Explanation: None.
12. What are the characteristics of processor in distributed system?
a) They vary in size and function
b) They are same in size and function
c) They are manufactured with single purpose
d) They are real-time devices
Answer: a
Explanation: None.
27. What are the different ways mounting of the file system?
a) boot mounting
b) auto mounting
c) explicit mounting
d) all of the mentioned
Answer: d
Explanation: None.
50. When a client has a cascading mount _______ server(s) is/are involved in a path name
traversal.
a) at least one
b) more than one
c) more than two
d) more than three
Answer: b
Explanation: None.
UNIT-3
Realtime Operating Systems : Introduction – Applications of Real Time Systems – Basic
Model of Real Time System – Characteristics – Safety and Reliability - Real Time Task
Scheduling
INTRODUCTION:
RTOS is used in real-time applications that must work within specific deadlines. Real-time
operating systems are commonly classified into hard, soft, and firm types, described below.
In Hard RTOS, all critical tasks must be completed within the specified time duration,
i.e., within the given deadline. Not meeting the deadline would result in critical
failures such as damage to equipment or even loss of human life.
For Example,
Let's take the example of the airbags provided by carmakers along with the steering
wheel in front of the driver's seat. When the driver brakes hard at a particular instant, the
airbags inflate and prevent the driver's head from hitting the steering wheel. Had there
been a delay of even milliseconds, it would have resulted in an injury.
A Soft RTOS accepts a few delays on the part of the operating system. In this kind
of RTOS, there may be a deadline assigned for a particular job, but a delay for a
small amount of time is acceptable. So, deadlines are handled softly by this kind
of RTOS.
For Example,
This type of system is used in Online Transaction systems and Livestock price
quotation Systems.
A Firm RTOS also needs to observe deadlines. However, missing a deadline may
not have a major impact, but it can cause undesired effects, like a significant
reduction in the quality of a product.
o It is easy to design, develop and execute real-time applications under a real-time
operating system.
o Real-time operating systems are more compact, so these systems require less
memory space.
o A real-time operating system makes maximum utilization of devices and the system.
o Focus is on running applications, with less importance given to applications in the
queue.
o Since the size of programs is small, an RTOS can also be used in embedded systems
like transport and others.
o These types of systems are relatively error-free.
o Memory allocation is best managed in these types of systems.
o Real-time operating systems have complicated design principles and are very costly to
develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.
Sensor: A sensor is used for the conversion of physical events or characteristics
into electrical signals. Sensors are hardware devices that take input from the
environment and give it to the system after converting it. For example, a thermometer
takes the temperature as a physical characteristic and then converts it into electrical
signals for the system.
Actuator: An actuator is the reverse of a sensor. Where a sensor converts physical
events into electrical signals, an actuator does the reverse: it converts electrical signals into
physical events or actions. It takes its input from the output interface of the
system. The output of an actuator may be any form of physical action. Some
commonly used actuators are motors and heaters.
Signal Conditioning Unit: When the sensor converts physical actions into electrical
signals, the computer can't use them directly. Hence, after the conversion of physical
actions into electrical signals, conditioning is needed. Similarly, when electrical signals
are sent to the actuator on output, conditioning is also required.
Therefore, signal conditioning is of two types:
Input Conditioning Unit: It is used for conditioning the electrical signals
coming from sensor.
Output Conditioning Unit: It is used for conditioning the electrical signals
coming from the system.
Interface Unit: Interface units are used for the conversion of digital signals to analog and
vice versa. Signals coming from the input conditioning unit are analog, while the system
operates on digital signals only, so the interface unit changes the analog signals to digital.
Similarly, while transmitting signals to the output conditioning unit, the signals are
changed from digital to analog. On this basis, the interface
unit is also of two types:
Input Interface: It is used for conversion of analog signals to digital.
Output Interface: It is used for conversion of digital signals to analog.
CHARACTERISTICS
Characteristics of Real-time System:
Following are the some of the characteristics of Real-time System:
1. Time Constraints: Time constraints in real-time systems refer to the time
interval allotted for the response of the ongoing program. The deadline
means that the task should be completed within this time interval. The real-
time system is responsible for the completion of all tasks within their time
intervals.
2. Correctness: Correctness is one of the prominent parts of real-time systems.
A real-time system produces a correct result only within the given time
interval; a result obtained after the deadline is not considered correct, even
if it is logically right. In real-time systems, correctness means obtaining the
correct result within the time constraint.
3. Embedded: All real-time systems are embedded nowadays. An embedded
system is a combination of hardware and software designed for a specific
purpose. Real-time systems collect data from the environment and pass it to
other components of the system for processing.
4. Safety: Safety is necessary for any system, but real-time systems provide
critical safety. Real-time systems can also run for a long time without failure.
A real-time system recovers very quickly when a failure occurs, without
causing any harm to the data and information.
5. Concurrency: Real-time systems are concurrent, which means they can respond
to several processes at a time. Several different tasks go on within the system,
and it responds to every task within short intervals. This makes real-time
systems concurrent systems.
6. Distributed: In various real-time systems, the components of the system are
connected in a distributed way, with different components at different
geographical locations. Thus all the operations of such real-time systems are
carried out in a distributed manner.
7. Stability: Even when the load is very heavy, real-time systems respond within the
time constraint, i.e., they do not delay the results of tasks even when several
tasks are going on at the same time. This brings stability to real-time systems.
8. Fault tolerance: Real-time systems must be designed to tolerate and recover
from faults or errors. The system should be able to detect errors and recover
from them without affecting the system’s performance or output.
9. Determinism: Real-time systems must exhibit deterministic behavior, which
means that the system’s behavior must be predictable and repeatable for a given
input. The system must always produce the same output for a given input,
regardless of the load or other factors.
10. Real-time communication: Real-time systems often require real-time
communication between different components or devices. The system must
ensure that communication is reliable, fast, and secure.
11. Resource management: Real-time systems must manage their resources
efficiently, including processing power, memory, and input/output devices. The
system must ensure that resources are used optimally to meet the time
constraints and produce correct results.
12. Heterogeneous environment: Real-time systems may operate in a
heterogeneous environment, where different components or devices have
different characteristics or capabilities. The system must be designed to handle
these differences and ensure that all components work together seamlessly.
13. Scalability: Real-time systems must be scalable, which means that the system
must be able to handle varying workloads and increase or decrease its resources
as needed.
14. Security: Real-time systems may handle sensitive data or operate in critical
environments, which makes security a crucial aspect. The system must ensure
that data is protected and access is restricted to authorized users only.
REAL TIME TASK SCHEDULING
Tasks in Real-Time Systems
A real-time operating system (RTOS) serves real-time applications that process data
without any buffering delay. In an RTOS, processing time requirements are measured in
tenths-of-seconds increments of time. It is a time-bound system with defined, fixed time
constraints. In this type of system, processing must be done within the specified
constraints; otherwise, the system will fail.
Real-time tasks are tasks associated with a quantitative expression of time, which
describes the behavior of the task. Real-time tasks are scheduled so that all the
computation events involved in them finish within the timing constraint. The timing
constraint associated with a real-time task is its deadline: every real-time task needs to
be completed before its deadline. Examples include input-output interaction with devices,
web browsing, etc.
There are the following types of tasks in real-time systems, such as:
1. Periodic Task
In periodic tasks, jobs are released at regular intervals. A periodic task repeats itself after a
fixed time interval. A periodic task is denoted by four tuples: Ti = < Φi, Pi, ei, Di >
Where,
o Φi: It is the phase of the task, and phase is the release time of the first job in the task.
If the phase is not mentioned, then the release time of the first job is assumed to be
zero.
o Pi: It is the period of the task, i.e., the time interval between the release times of two
consecutive jobs.
o ei: It is the execution time of the task.
o Di: It is the relative deadline of the task.
For example: Consider the task Ti with period = 5 and execution time = 3
Phase is not given, so assume the release time of the first job is zero. The first job of this
task is released at t = 0 and executes for 3s; the next job is released at t = 5 and
executes for 3s, and the next job is released at t = 10. So jobs are released at t = 5k,
where k = 0, 1, ..., n.
Hyper period of a set of periodic tasks is the least common multiple of all the tasks in that set.
For example, two tasks T1 and T2 having period 4 and 5 respectively will have a hyper
period, H = lcm(p1, p2) = lcm(4, 5) = 20. The hyper period is the time after which the pattern
of job release times starts to repeat.
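The release times and the hyper period above can be checked with a short sketch. The periods 4 and 5 are taken from the example; the phases, execution times, and deadlines below are illustrative assumptions:

```python
from math import lcm  # Python 3.9+

# Hypothetical periodic task set: name -> (phase, period, execution time, deadline).
# Periods 4 and 5 match the hyper period example; the other values are assumed.
tasks = {"T1": (0, 4, 1, 4), "T2": (0, 5, 2, 5)}

# Jobs of a periodic task are released at t = phi + k * P, for k = 0, 1, 2, ...
for name, (phi, period, e, d) in tasks.items():
    releases = [phi + k * period for k in range(4)]
    print(name, "released at", releases)

# The hyper period is the least common multiple of all the periods; after it,
# the pattern of job release times starts to repeat.
hyper = lcm(*(period for (_, period, _, _) in tasks.values()))
print("hyper period =", hyper)  # lcm(4, 5) = 20
```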
2. Dynamic Tasks
1. Aperiodic Tasks: In this type of task, jobs are released at arbitrary time intervals.
Aperiodic tasks have soft deadlines or no deadlines.
2. Sporadic Tasks: They are similar to aperiodic tasks, i.e., they repeat at random
instances. The only difference is that sporadic tasks have hard deadlines. A
sporadic task is denoted by three tuples: Ti = (ei, gi, Di)
Where
o ei: It is the execution time of the task.
o gi: It is the minimum separation between the occurrence of two consecutive
instances of the task.
o Di: It is the relative deadline of the task.
3. Critical Tasks
Critical tasks are those whose timely executions are critical. If deadlines are missed,
catastrophes occur.
For example, life-support systems and the stability control of aircraft. For safety, critical
tasks are often executed at a higher frequency than is strictly necessary.
4. Non-critical Tasks
Non-critical tasks are real-time tasks that, as the name implies, are not critical to the
application. However, they deal with time-varying data, and hence they are useless if not
completed within a deadline. The goal of scheduling these tasks is to maximize the
percentage of jobs successfully executed within their deadlines.
Task Scheduling
Real-time task scheduling essentially refers to determining how the various tasks are picked
for execution by the operating system. Every operating system relies on one or more task
schedulers to prepare the schedule of execution of the various tasks it needs to run. Each
task scheduler is characterized by the scheduling algorithm it employs. A large number of
algorithms for scheduling real-time tasks have so far been developed.
Here are the following types of task scheduling in a real-time system, such as:
1. Valid Schedule: A valid schedule for a set of tasks is one where at most one task is
assigned to a processor at a time, no task is scheduled before its arrival time, and the
precedence and resource constraints of all tasks are satisfied.
2. Feasible Schedule: A valid schedule is called a feasible schedule only if all tasks
meet their respective time constraints in the schedule.
3. Proficient Scheduler: A task scheduler S1 is more proficient than another scheduler
S2 if S1 can feasibly schedule all task sets that S2 can feasibly schedule, and there
is at least one task set that S1 can feasibly schedule but S2 cannot. If S1 and S2
can each feasibly schedule exactly the same task sets, then S1 and S2 are called
equally proficient schedulers.
4. Optimal Scheduler: A real-time task scheduler is called optimal if it can feasibly
schedule any task set that any other scheduler can feasibly schedule. In other words, it
would not be possible to find a more proficient scheduling algorithm than an optimal
scheduler. If an optimal scheduler cannot schedule some task set, then no other
scheduler should produce a feasible schedule for that task set.
5. Scheduling Points: The scheduling points of a scheduler are the points on a timeline
at which the scheduler makes decisions regarding which task is to be run next. It is
important to note that a task scheduler does not need to run continuously, and the
operating system activates it only at the scheduling points to decide which task to run
next. The scheduling points are defined as instants marked by interrupts generated by
a periodic timer in a clock-driven scheduler. The occurrence of certain events
determines the scheduling points in an event-driven scheduler.
6. Preemptive Scheduler: A preemptive scheduler is one that, when a higher priority
task arrives, suspends any lower priority task that may be executing and takes up the
higher priority task for execution. Thus, in a preemptive scheduler, it cannot be the
case that a higher priority task is ready and waiting for execution, and the lower
priority task is executing. A preempted lower priority task can resume its execution
only when no higher priority task is ready.
7. Utilization: The processor utilization (or simply utilization) of a task is the average
time for which it executes per unit time interval. In notations:
for a periodic task Ti, the utilization ui = ei/pi, where
o ei is the execution time and
o pi is the period of Ti.
For a set of periodic tasks {Ti}, the total utilization due to all tasks is U = Σi=1..n ei/pi.
Any good scheduling algorithm's objective is to feasibly schedule even those task sets
with very high utilization, i.e., utilization approaching 1. Of course, on a uniprocessor,
it is not possible to schedule task sets having utilization of more than 1.
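The utilization formula can be computed directly, as in this sketch. The task with e = 3, p = 5 is from the periodic task example above; the second task is an assumed value:

```python
# Utilization: u_i = e_i / p_i for each periodic task; U is their sum.
# (execution time, period) pairs; (3, 5) is from the periodic task example,
# (1, 4) is an assumed second task.
task_set = [(3, 5), (1, 4)]

def total_utilization(tasks):
    """Return U = sum of e_i / p_i over all periodic tasks."""
    return sum(e / p for e, p in tasks)

U = total_utilization(task_set)
print(f"U = {U:.2f}")  # 3/5 + 1/4 = 0.85
# On a uniprocessor, no scheduler can feasibly schedule a set with U > 1.
assert U <= 1
```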
8. Jitter
Jitter is the deviation of a periodic task from its strict periodic behavior. The arrival
time jitter is the deviation of the task from the precise periodic time of arrival. It may
be caused by imprecise clocks or other factors such as network congestions. Similarly,
completion time jitter is the deviation of the completion of a task from precise
periodic points.
The completion time jitter may be caused by the specific scheduling algorithm
employed, which takes up a task for scheduling as per convenience and the load at an
instant, rather than scheduling at some strict time instants. Jitters are undesirable for
some applications.
Sometimes the actual release time of a job is not known; it is only known that ri lies
in a range [ri-, ri+]. This range is known as release time jitter. Here
o ri- is how early a job can be released and,
o ri+ is how late a job can be released.
Only the range [ei-, ei+] of the execution time of a job is known. Here
o ei- is the minimum amount of time required by a job to complete its execution
and,
o ei+ is the maximum amount of time required by a job to complete its
execution.
Jobs in a task are independent if they can be executed in any order. If there is a specific order
in which jobs must be executed, then jobs are said to have precedence constraints. For
representing precedence constraints of jobs, a partial order relation < is used, and this is
called precedence relation. A job Ji is a predecessor of job Jj if Ji < Jj, i.e., Jj cannot begin its
execution until Ji completes. Ji is an immediate predecessor of Jj if Ji < Jj, and there is no
other job Jk such that Ji < Jk < Jj. Ji and Jj are independent if neither Ji < Jj nor Jj < Ji is true.
An efficient way to represent precedence constraints is by using a directed graph G = (J, <)
where J is the set of jobs. This graph is known as the precedence graph. Vertices of the graph
represent jobs, and precedence constraints are represented using directed edges. If there is a
directed edge from Ji to Jj, it means that Ji is the immediate predecessor of Jj.
For example: Consider a task T having 5 jobs J1, J2, J3, J4, and J5, such that J2 and J5 cannot
begin their execution until J1 completes and there are no other constraints. The precedence
constraints for this example are:
1. < (1) = { }
2. < (2) = {1}
3. < (3) = { }
4. < (4) = { }
5. < (5) = {1}
Consider another example where a precedence graph is given, and you have to find
precedence constraints.
1. J1< J2
2. J2< J3
3. J2< J4
4. J3< J4
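The precedence constraints of the five-job example above can be sketched as a simple adjacency structure. This is a minimal illustration of the idea, not any particular RTOS API:

```python
# Immediate predecessors of each job, from the example: J2 and J5
# cannot begin until J1 completes; J3 and J4 are unconstrained.
pred = {"J1": set(), "J2": {"J1"}, "J3": set(), "J4": set(), "J5": {"J1"}}

def ready(job, completed):
    """A job is ready to execute once all of its predecessors have completed."""
    return pred[job].issubset(completed)

print(ready("J2", set()))    # False: J1 has not completed yet
print(ready("J2", {"J1"}))   # True
print(ready("J3", set()))    # True: J3 has no predecessors
```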
5 MARKS
10 MARKS
MCQ
7. Time duration required for scheduling dispatcher to stop one process and start another is
known as ____________
a) process latency
b) dispatch latency
c) execution latency
d) interrupt latency
Answer: b
Explanation: None.
8. Time required to synchronously switch from the context of one thread to the context of
another thread is called?
a) threads fly-back time
b) jitter
c) context switch time
d) none of the mentioned
Answer: c
Explanation: None.
9. Which one of the following is a real time operating system?
a) RTLinux
b) VxWorks
c) Windows CE
d) All of the mentioned
Answer: d
Explanation: None.
14. In a ______ real time system, it is guaranteed that critical real time tasks will be
completed within their deadlines.
a) soft
b) hard
c) critical
d) none of the mentioned
Answer: b
Explanation: None.
15. Some of the properties of real time systems include ____________
a) single purpose
b) inexpensively mass produced
c) small size
d) all of the mentioned
Answer: d
Explanation: None.
16. The amount of memory in a real time system is generally ____________
a) less compared to PCs
b) high compared to PCs
c) same as in PCs
d) they do not have any memory
Answer: a
Explanation: None.
17. What is the priority of a real time task?
a) must degrade over time
b) must not degrade over time
c) may degrade over time
d) none of the mentioned
Answer: b
Explanation: None.
19. The technique in which the CPU generates physical addresses directly is known as
____________
a) relocation register method
b) real addressing
c) virtual addressing
d) none of the mentioned
Answer: b
Explanation: None.
20. Earliest deadline first algorithm assigns priorities according to ____________
a) periods
b) deadlines
c) burst times
d) none of the mentioned
Answer: b
Explanation: None.
21. A process P1 has a period of 50 and a CPU burst of t1 = 25, P2 has a period of 80 and a
CPU burst of 35. The total CPU utilization is ____________
a) 0.90
b) 0.74
c) 0.94
d) 0.80
Answer: c
Explanation: None.
22. A process P1 has a period of 50 and a CPU burst of t1 = 25, P2 has a period of 80 and a
CPU burst of 35., the priorities of P1 and P2 are?
a) remain the same throughout
b) keep varying from time to time
c) may or may not be change
d) none of the mentioned
Answer: b
Explanation: None.
23. A process P1 has a period of 50 and a CPU burst of t1 = 25, P2 has a period of 80 and a
CPU burst of 35., can the two processes be scheduled using the EDF algorithm without
missing their respective deadlines?
a) Yes
b) No
c) Maybe
d) None of the mentioned
Answer: a
Explanation: None.
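Questions 21-23 above can be verified with the EDF utilization test: for periodic tasks whose deadlines equal their periods, EDF produces a feasible schedule exactly when the total utilization U ≤ 1. A small sketch using the numbers from the questions:

```python
# EDF schedulability test for periodic tasks with deadline = period:
# feasible iff U = sum(e_i / p_i) <= 1.
def edf_feasible(procs):
    return sum(e / p for e, p in procs) <= 1

# P1: CPU burst 25, period 50; P2: CPU burst 35, period 80 (questions 21-23).
procs = [(25, 50), (35, 80)]
U = sum(e / p for e, p in procs)
print(round(U, 2))          # 0.94, matching the answer to question 21
print(edf_feasible(procs))  # True: both deadlines can be met (question 23)
```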
24. Using EDF algorithm practically, it is impossible to achieve 100 percent utilization due to
__________
a) the cost of context switching
b) interrupt handling
c) power consumption
d) all of the mentioned
Answer: a
Explanation: None.
25. T shares of time are allocated among all processes out of N shares in __________
scheduling algorithm.
a) rate monotonic
b) proportional share
c) earliest deadline first
d) none of the mentioned
Answer: b
Explanation: None.
26. If there are a total of T = 100 shares to be divided among three processes, A, B and C. A
is assigned 50 shares, B is assigned 15 shares and C is assigned 20 shares.
A will have ______ percent of the total processor time.
a) 20
b) 15
c) 50
d) none of the mentioned
Answer: c
Explanation: None.
27. If there are a total of T = 100 shares to be divided among three processes, A, B and C. A
is assigned 50 shares, B is assigned 15 shares and C is assigned 20 shares.
B will have ______ percent of the total processor time.
a) 20
b) 15
c) 50
d) none of the mentioned
Answer: b
Explanation: None.
28. If there are a total of T = 100 shares to be divided among three processes, A, B and C. A
is assigned 50 shares, B is assigned 15 shares and C is assigned 20 shares.
C will have ______ percent of the total processor time.
a) 20
b) 15
c) 50
d) none of the mentioned
Answer: a
Explanation: None.
29. If there are a total of T = 100 shares to be divided among three processes, A, B and C. A
is assigned 50 shares, B is assigned 15 shares and C is assigned 20 shares.
If a new process D requested 30 shares, the admission controller would __________
a) allocate 30 shares to it
b) deny entry to D in the system
c) all of the mentioned
d) none of the mentioned
Answer: b
Explanation: None.
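The arithmetic behind questions 26-29 (proportional share scheduling with T = 100) can be sketched as:

```python
# Proportional share scheduling: a process holding N of T shares
# receives N/T of the processor time.
T = 100
shares = {"A": 50, "B": 15, "C": 20}

for name, n in shares.items():
    print(name, "gets", 100 * n // T, "percent of processor time")

def admit(request):
    """Admission controller: accept a new process only if total shares stay within T."""
    return sum(shares.values()) + request <= T

print(admit(30))  # False: 85 + 30 > 100, so process D is denied entry
```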
30. CPU scheduling is the basis of ___________
a) multiprocessor systems
b) multiprogramming operating systems
c) larger memory sized systems
d) none of the mentioned
Answer: b
Explanation: None.
38. Scheduling is done so as to ____________
a) increase CPU utilization
b) decrease CPU utilization
c) keep the CPU more idle
d) none of the mentioned
Answer: a
Explanation: None.
1. Since the development of handheld computers in the 1990s, the demand for software to operate and run on
these devices has increased.
2. Three major competitors have emerged in the handheld PC world with three different operating systems
for these handheld PCs.
3. Out of the three companies, the first was the Palm Corporation with their PalmOS.
4. Microsoft also released what was originally called Windows CE. Microsoft’s recently released operating
system for the handheld PC comes under the name of Pocket PC.
5. More recently, some companies producing handheld PCs have also started offering a handheld version of
the Linux operating system on their machines.
Features of Handheld Operating System:
1. Its work is to provide real-time operations.
2. There is direct usage of interrupts.
3. Input/Output device flexibility.
4. Configurability
Types of Handheld Operating Systems:
Types of Handheld Operating Systems are as follows:
1. Palm OS
2. Symbian OS
3. Linux OS
4. Windows
5. Android
Palm OS:
Since the Palm Pilot was introduced in 1996, the Palm OS platform has provided various mobile devices with essential
business tools, as well as the capability to access the internet via a wireless connection.
These devices have mainly concentrated on providing basic personal information-management applications. The
latest Palm products have progressed a lot, packing in more storage, wireless internet, etc.
Symbian OS:
It was the most widely used smartphone operating system, running on the ARM architecture, before it was
discontinued in 2014. It was developed by Symbian Ltd.
This operating system consists of two subsystems where the first one is the microkernel-based operating system
which has its associated libraries and the second one is the interface of the operating system with which a user can
interact.
Since this operating system consumes very little power, it was developed for smartphones and handheld devices.
It has good connectivity as well as stability.
It can run applications that are written in Python, Ruby, .NET, etc.
Linux OS:
Linux OS is an open-source operating system project: a cross-platform system developed based on
UNIX. It was developed by Linus Torvalds. It is system software that allows apps and users to perform
tasks on the PC.
Linux is free and can be easily downloaded from the internet and it is considered that it has the best community
support. Linux is portable which means it can be installed on different types of devices like mobile, computers, and
tablets.
It is a multi-user operating system.
The Linux command interpreter, called Bash, is used to execute commands.
It provides user security using authentication features.
Windows OS:
Windows is an operating system developed by Microsoft. Its interface which is called Graphical User Interface
eliminates the need to memorize commands for the command line by using a mouse to navigate through menus,
dialog boxes, and buttons.
It is named Windows because its programs are displayed in the form of a square. It has been designed for both
beginners and professionals.
It comes preloaded with many tools which help the users to complete all types of tasks on their computer, mobiles,
etc.
It has a large user base so there is a much larger selection of available software programs.
Android OS:
It is a Linux-based operating system from Google that is mainly designed for touchscreen devices such as phones, tablets,
etc. Three architectures, ARM, Intel, and MIPS, are used by the hardware for supporting Android.
These let users manipulate their devices intuitively, with finger movements that mirror common
motions such as swiping, tapping, etc.
The Android operating system can be used by anyone because it is an open-source operating system, and it is also free.
It offers 2D and 3D graphics, GSM connectivity, etc.
There is a huge list of applications for users since Play Store offers over one million apps.
Professionals who want to develop applications for the Android OS can download the Android Development Kit, with
which they can easily develop apps for Android.
REQUIREMENTS IN HAND HELD OS
Installations of handheld computers are progressing in a variety of fields such as logistics and manufacturing with
applications including inventory management, data verification, process management, traceability, and shipping
mistake prevention.
This section explains the environment that is required in order to actually install handheld computers.
1. Requirements for Operating Handheld Computers
2. Determining the Hardware Configuration
3. Power Supply Environment
4. Printers and Other Peripheral Equipment
5. Developing Software
6. KEYENCE Enables Easy Software Development With No Programming Required
1.Requirements for Operating Handheld Computers
The advantage of a handheld computer is its ability to perform multiple functions as a standalone device, filling
many roles such as reading various codes as well as collecting, sending, and receiving data. However, before
handheld computers can be installed, it is necessary to organize the surrounding environment.
The necessity of preparing both hardware and software
For operation, a variety of equipment is necessary. Examples include the PC or server to communicate with, the
battery that supplies the power and the dedicated battery charger, and the dedicated printer used to output the
recorded data. It is also necessary to develop software to provide system functions and operability that match the
usage environment and the purpose.
2. Determining the Hardware Configuration
Communication environment
Handheld computers can read and accumulate data in a standalone manner, but integrating these
devices with PCs and servers is essential in aggregating data, sharing data with different
departments, and making use of data from other departments. The problem is determining which
method to use to communicate between handheld computers and PCs/servers.
The answer is determined by the usage environment and generally is selected from one of two
options: using a communication unit and using a wireless LAN.
Use a communication unit when the usage location is limited
If the usage location is limited and is fixed, select the communication unit method. Use a LAN cable or a USB
cable to connect the communication unit to a PC.
3. Power Supply Environment
Regarding portability and ease of use, handheld computers are cordless and battery powered. There are
various types of batteries that are used, including dedicated rechargeable batteries and general-purpose dry
cell batteries.
When just using handheld computers within a company or facility, it is sufficient to prepare dedicated
cradles that automatically charge the handheld computers when they are docked such as at the end of
work.
In situations where it is expected that technicians and sales personnel will take the handheld
computers outside of the company, it is most common to select a handheld computer type that can
use dry cell batteries or general purpose rechargeable batteries that can be purchased immediately
when outside of the office in order to replace dead batteries instead of selecting a handheld computer
type that uses a dedicated battery charger.
5. Developing Software
The development of dedicated software is more difficult than establishing the hardware
environment that includes the handheld computers and peripheral equipment such as
communication equipment, batteries, and printers. This is because system construction, such as
determining how to aggregate and process the read data and how to implement on-screen
operations, is essentially the domain of system engineers. Naturally, the development costs
during hardware installation require a large investment. There is no shortage of cases in which
operators want to install handheld computers to make work more efficient but
run into the bottleneck of software development and are unable to reach their expected efficiency.
The handheld computer installation conditions vary depending on the specifications and on whether a
corporate system is present, but the development methods can generally be separated into the four listed
below.
Embedded applications
With this pattern, the application to execute is embedded in the handheld computer. This is the optimal
method for corporations that want to accumulate data, implement rich device control, and develop
applications easily.
Web applications
With this pattern, the browser on the handheld computer accesses web pages on a web server. This is the
optimal method for corporations that want to use or are already using web applications and want to manage
applications in a centralized manner.
Terminal services
With this pattern, the handheld computer emulates PC applications. This is the optimal method for
corporations that want to use PC applications as-is and manage applications in a centralized manner.
Terminal emulators/middleware
When using handheld computers, there are different software development methods such as embedded
applications, web applications, terminal services, and terminal emulators/middleware. However, all
methods incur development costs and have their own delivery dates. KEYENCE's development tools solve
this problem and make it possible to more easily develop dedicated software on your own.
The greatest characteristic of these tools is their simple visual development. Anyone, even people with absolutely
no knowledge of difficult computer languages, can develop dedicated software just by selecting the
required functions, icons, and other such items from the rich templates and GUI (graphical user interface)
tools displayed on the PC screen.
This eliminates waste by reducing the hassle, cost, and time required to order development from dedicated
vendors and engineers. What's more, systems can be developed easily and quickly on your own, which
makes it possible to support low-cost, short-term system projects without difficulty.
Introduction To Mobile Operating System – PALM OS
PALM OS is an operating system for personal digital assistants, designed for touch screens. It consists of a limited
number of features designed for low memory and processor usage, which in turn helps in getting longer battery life.
Features of PALM OS
Elementary memory management system.
Provides PALM Emulator.
Handwriting recognition is possible.
Supports recording and playback.
Supports C and C++ software.
Palm Architecture
Development Cycle
For the development of the PALM OS, these are the phases it has to go through before it can be used in the market:
Editing the code for the operating system that is checking for errors and correcting errors.
Compile and Debug the code to check for bugs and correct functioning of the code.
Run the program on a mobile device or related device.
If all the above phases are passed, we can finally have our finished product which is the operating system for mobile
devices named PALM OS.
Advantages
Its limited feature set is designed for low memory and processor usage, which means longer battery life.
No need to upgrade the operating system as it is handled automatically in PALM OS.
More applications are available for users.
Extended connectivity for users. Users can now connect to wide areas.
Disadvantages
The user cannot download applications using the external memory in PALM OS. It will be a disadvantage for users with
limited internal memory.
Systems and extended connectivity are less compared to what is offered by other operating systems.
SYMBIAN OS
Symbian is a discontinued mobile operating system developed and sold by Symbian Ltd. It was a
closed-source mobile operating system designed for smartphones, introduced in 1998. Symbian OS was designed to be used
on higher-end mobile phones: an operating system for mobile devices with limited resources,
multitasking needs, and soft real-time requirements.
The Symbian operating system evolved from Psion's EPOC, which ran on ARM
processors. In June 1998, Psion Software was renamed Symbian Ltd. as the result of a joint venture
between Psion and the phone manufacturers Ericsson, Motorola, and Nokia.
In the 1990s, the software company Psion was actively working on the development of innovative mobile
operating systems. Their earlier products were 16-bit systems, but in 1994 they began working on a 32-bit version
programmed in C++, named EPOC32. Then in 1998, Psion rebranded as Symbian Ltd. in collaboration with
the popular mobile phone brands Nokia, Ericsson, and Motorola.
Symbian Ltd. began upgrading EPOC32, and the new version was named Symbian OS.
Features of Symbian OS
User Interface
Symbian offered an interactive graphical user interface for mobile phones with the AVKON toolkit,
also called S60. However, it was designed mainly to be operated with a keyboard. As the demand for
touch screen phones increased, Symbian shifted to the Qt framework to design a better user
interface for touch screen phones.
Browser
Initially, Symbian phones came with Opera as the default browser. Later on, a built-in browser was
developed for the Symbian OS based on WebKit. In phones built on the S60 platform, this browser
was simply named Web Browser for S60. It boasted faster speed and a better interface.
App Development
The standard software development kit to build apps for Symbian OS was Qt, with C++
programming language. UIQ and S60 also provided SDKs for app development on Symbian, but Qt
became the standard later on. As for the programming language, even though C++ is preferred,
it's also possible to build with Python, Java, and Adobe Flash Lite.
Multimedia
To fulfill consumer demand for entertainment, Symbian OS supported high- quality recording and
playback of audio and video, along with image conversion features. It expanded the ability of
mobile phones to handle multimedia files.
Security
As security is one of the most important things to consider for an operating system, Symbian
offered strong protection against malware and came with reliable security certificates. It proved
to be a secure operating system for phones and a safe platform for app development.
Open Source
After Nokia acquired Symbian Ltd., the Symbian Foundation was formed, and Symbian OS was
made open source. It opened doors of opportunity for developers to contribute to this operating
system's growth and develop innovative mobile applications.
Advantages of Symbian OS
It has a greater range of applications.
Connectivity was a lot easier.
It consists of a better built-in WAP browser.
It has an open platform based on C++.
It provides a feature for power saving.
It provides fully multitaskable processing.
Below are the unique features and characteristics of the Android operating system:
1. Near Field Communication (NFC)
Most Android devices support NFC, which allows electronic devices to interact across short distances easily. The main goal here is to create
a payment option that is simpler than carrying cash or credit cards, and while the market hasn't exploded as many experts had predicted, there
may be an alternative in the works, in the form of Bluetooth Low Energy (BLE).
2. Infrared Transmission
The Android operating system supports a built-in infrared transmitter that allows you to use your phone or tablet as a remote control.
3. Automation
The Tasker app allows control of app permissions and also automates them.
4. Wireless App Downloads
You can download apps on your PC by using the Android Market or third-party options like AppBrain. They are then automatically synced to your Droid; no plugging in is required.
5. Storage and Battery Swap
Android phones also have unique hardware capabilities. Google's OS makes it possible to upgrade, replace, and remove a battery that no longer holds a charge. In addition, Android
phones come with SD card slots for expandable storage.
6. Custom Home Screens
While it's possible to hack certain phones to customize the home screen, Android comes with this capability from the get-go. Download a third-party launcher like Apex or Nova, and you can
add gestures, new shortcuts, or even performance enhancements for older-model devices.
7. Widgets
Apps are versatile, but sometimes you want information at a glance instead of having to open an app and wait for it to load. Android widgets let you display just about any feature you
choose on the home screen, including weather apps, music widgets, or productivity tools that helpfully remind you of upcoming meetings or approaching deadlines.
8. Custom ROMs
Because the Android operating system is open-source, developers can tweak the current OS and build their own versions, which users can
download and install in place of the stock OS. Some are filled with features, while others change the look and feel of a device. Chances are,
if there's a feature you want, someone has already built a custom ROM for it.
Architecture of Android OS
The android architecture contains a number of different components to support any android device's needs. Android software contains an
open-source Linux Kernel with many C/C++ libraries exposed through application framework services.
Among all the components, the Linux Kernel provides the main operating system functions to the smartphone, and the Dalvik Virtual Machine (DVM)
provides a platform for running Android applications. The Android operating system is a stack of software components roughly divided
into five sections and four main layers, as shown in the below architecture diagram.
o Applications
o Application Framework
o Android Runtime
o Platform Libraries
o Linux Kernel
1. Applications
An application is the top layer of the android architecture. The pre-installed applications like camera, gallery, home, contacts, etc., and third-
party applications downloaded from the play store like games, chat applications, etc., will be installed on this layer.
It runs within the Android run time with the help of the classes and services provided by the application framework.
2. Application framework
Application Framework provides several important classes used to create an Android application. It provides a generic abstraction for
hardware access and helps in managing the user interface with application resources. Generally, it provides the services with the help of
which we can create a particular class and make that class helpful for the Applications creation.
It includes different types of services, such as activity manager, notification manager, view system, package manager etc., which are helpful
for the development of our application according to the prerequisite.
The Application Framework layer provides many higher-level services to applications in the form of Java classes. Application developers are
allowed to make use of these services in their applications. The Android framework includes the following key services:
o Activity Manager: Controls all aspects of the application lifecycle and activity stack.
o Content Providers: Allows applications to publish and share data with other applications.
o Resource Manager: Provides access to non-code embedded resources such as strings, colour settings and user interface layouts.
o Notifications Manager: Allows applications to display alerts and notifications to the user.
o View System: An extensible set of views used to create application user interfaces.
3. Application runtime
Android Runtime environment contains components like core libraries and the Dalvik virtual machine (DVM). It
provides the base for the application framework and powers our application with the help of the core libraries.
Unlike the Java Virtual Machine (JVM), which is stack-based, the Dalvik Virtual Machine (DVM) is a register-based virtual machine
designed and optimized for Android to ensure that a device can run multiple instances efficiently.
It depends on the layer Linux kernel for threading and low-level memory management. The core libraries enable us to
implement android applications using the standard JAVA or Kotlin programming languages.
4. Platform libraries
The Platform Libraries include various C/C++ core libraries and Java-based libraries such as Media, Graphics,
Surface Manager, OpenGL, etc., to support Android development.
o app: Provides access to the application model and is the cornerstone of all Android applications.
o content: Facilitates content access, publishing and messaging between applications and application components.
o database: Used to access data published by content providers and includes SQLite database management classes.
o OpenGL: A Java interface to the OpenGL ES 3D graphics rendering API.
o os: Provides applications with access to standard operating system services, including messages, system services and
inter-process communication.
o text: Used to render and manipulate text on a device display.
o view: The fundamental building blocks of application user interfaces.
o widget: A rich collection of pre-built user interface components such as buttons, labels, list views, layout managers,
radio buttons etc.
o WebKit: A set of classes intended to allow web-browsing capabilities to be built into applications.
o media: The Media library provides support to play and record audio and video formats.
o surface manager: It is responsible for managing access to the display subsystem.
o SQLite: It provides database support, and FreeType provides font support.
o SSL: Secure Sockets Layer is a security technology to establish an encrypted link between a web server and a web
browser.
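Since the list above includes SQLite, a brief aside may help: an embedded database means the entire database lives in a single file (or in memory) inside the application process, with no separate server. Android exposes SQLite through its own Java API (android.database.sqlite); the sketch below uses Python's standard sqlite3 module purely to illustrate the same engine at work, with a hypothetical contacts table.

```python
import sqlite3

# Open an in-memory SQLite database: no server process, no setup;
# the engine runs entirely inside the application, as it does on Android.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, phone TEXT)")
conn.execute("INSERT INTO contacts VALUES (?, ?)", ("Alice", "555-0100"))

# Query the data back as a list of row tuples.
rows = conn.execute("SELECT name, phone FROM contacts").fetchall()
print(rows)  # [('Alice', '555-0100')]
conn.close()
```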
5. Linux Kernel
Linux Kernel is the heart of the android architecture. It manages all the available drivers such as display, camera, Bluetooth, audio, memory,
etc., required during the runtime.
The Linux Kernel will provide an abstraction layer between the device hardware and the other android architecture components. It is
responsible for the management of memory, power, devices etc. The features of the Linux kernel are:
o Security: The Linux kernel handles the security between the application and the system.
o Memory Management: It efficiently handles memory management, thereby providing the freedom to develop our apps.
o Process Management: It manages the process well, allocates resources to processes whenever they need them.
o Network Stack: It effectively handles network communication.
o Driver Model: It ensures that the application works properly on the device, and hardware manufacturers are responsible for building
their drivers into the Linux build.
Android Applications
Android applications are usually developed in the Java language using the Android Software Development Kit. Once developed, Android
applications can be packaged easily and sold either through a store such as Google Play, SlideME, Opera Mobile Store, Mobango, F-
droid or the Amazon Appstore.
Android powers hundreds of millions of mobile devices in more than 190 countries around the world. It's the largest installed base of any
mobile platform and growing fast. Every day more than 1 million new Android devices are activated worldwide.
Android Emulator
The Android Emulator is a virtual mobile device that runs on your computer. It is used to develop and test android
applications without using any physical device.
The android emulator has almost all of the hardware and software features of a mobile device, except the ability to make actual phone calls. It provides a variety of navigation
and control keys. It also provides a screen to display your application. The emulators utilize the android virtual device configurations. Once
your application is running on it, it can use services of the android platform to help other applications, access the network, play audio, video,
store, and retrieve the data.
Let's be honest, passwords are not disappearing any time soon, and most of us find them
hard to remember. We're also asked to change them frequently, which makes the whole process even
more painful.
Enter the password manager, which you can think of as a "book of passwords" locked by a master key
that only you know.
Not only do they store passwords, but they also generate strong, unique passwords that save you from
using your cat's name or child's birthday...over and over.
Although Microsoft has enabled password removal on their Microsoft 365 accounts, we're still far
from being rid of them forever! As long as we have sensitive data and corporate data to protect,
passwords will be a critical security measure.
3. Update Your Operating Systems (OS) Regularly
If you're using outdated software, your risk of getting hacked skyrockets. Vendors such as Apple (iOS),
Google, and Microsoft constantly provide security updates to stay ahead of security vulnerabilities.
Don't ignore those alerts to upgrade your laptop, tablet, or smartphone. To help with this, ensure you have
automatic software updates turned on by default on your mobile devices. Regularly updating your
operating system ensures you have the latest security configurations available!
When it comes to your laptop, your IT department or your IT services provider should be pushing you
appropriate software updates on a regular basis.
4. Avoid Public Wi-Fi
Although it's very tempting to use that free Wi-Fi at the coffee shop, airport or hotel lobby - don't do it.
Anytime you connect to another organization's network, you're increasing your risk of exposure to
malware and hackers. There are so many online videos and easily accessible tools that even a novice
hacker can intercept traffic flowing over Wi-Fi, accessing valuable information such as credit card
numbers, bank account numbers, passwords, and other private data.
Interesting but disturbing fact: although public Wi-Fi and Bluetooth are a considerable security gap and
most of us (91%) know it, 89% of us ignore it. Choose to be in the minority here!
5. Enable Remote Lock and Data Wipe
Under this policy, whenever a mobile device is believed to be stolen or lost, the business can protect the lost
data by remotely wiping the device or, at minimum, locking access.
Where this gets a bit sticky is that you're essentially giving the business permission to delete
all personal data as well, as typically in a BYOD situation the employee is using the device for both work and play.
Most IT security experts view remote lock and data wipe as a basic and necessary security precaution, so
employees should be educated and made aware of any such policy in advance.
6. Back Up Your Cloud Data
Keep in mind that your public cloud-based apps and services are also being accessed by employee-
owned mobile devices, increasing your company's risk of data loss.
That's why, for starters, you should back up your cloud data! If your device is lost or stolen,
you'll still want to be able to access any data that might have been compromised as quickly as possible.
Select a cloud platform that maintains a version history of your files and allows you to roll back to those earlier
versions, at least for the past 30 days.
Once those 30 days have elapsed, deleted files or earlier versions are gone for good.
You can safeguard against this by investing in a cloud-to-cloud backup solution, which will back up your
data for a relatively nominal monthly fee.
7. Understand and Utilize Mobile Device Management (MDM) and Mobile Application
Management (MAM)
Mobile security has become the hottest topic in the IT world. How do we allow users to access the data they
need remotely, while keeping that data safe from whatever lurks around on these potentially unprotected
devices?
The solution is two-fold: Mobile Device Management (MDM) and Mobile Application Management
(MAM).
Mobile Device Management is the configuration, monitoring, and management of your employees'
personal devices, such as phones, tablets, and laptops.
Mobile Application Management is configuring, monitoring, and managing the applications on those
mobile devices. This includes things like Microsoft 365 and authenticator apps.
When combined, MDM and MAM can become powerful security solutions, preventing
unauthorized devices from accessing your company network of applications and data.
Note that both solutions should be sourced, implemented, and managed by IT experts - in-house or
outsourced - familiar with mobile security. For example, you can look at this short case study on how we
implemented Microsoft Intune MDM for a healthcare provider, including the details behind the
implementation.
Implementing these 7 best practices for your employees and end-users, and enforcing strong
mobile security policies, will go a long way to keeping your mobile device security in check.
1 mark
1. Handheld systems include?
A. PFAs
B. PDAs
C. PZAs
D. PUAs
Ans : B
2. Which of the following is an example of PDAs?
A. Palm-Pilots
B. Cellular Telephones
C. Both A and B
D. None of the above
Ans : C
3. Many handheld devices have between ___________ of memory
A. 256 KB and 8 MB
B. 512 KB and 2 MB
C. 256 KB and 4 MB
D. 512 KB and 8 MB
Ans : D
4. Handheld devices do not use virtual memory techniques.
A. TRUE
B. FALSE
C. Can be true or false
D. Can not say
Ans : A
A. very small
B. small
C. medium
D. larger
Ans : D
6. Some handheld devices may use wireless technology such as BlueTooth, allowing remote access to e-mail and web browsing.
A. Yes
B. No
C. Can be yes or no
D. Can not say
Ans : A
6) Android is –
a. an operating system
a. Servers
b. Desktops
c. Laptops
d. Mobile devices
9) Which of the following is the first mobile phone released that ran the Android OS?
a. HTC Hero
b. Google gPhone
c. T - Mobile G1
d. None of the above
10) Which of the following virtual machine is used by the Android operating system?
a. JVM
b. Dalvik virtual machine
c. Simple virtual machine
d. None of the above
a. Java
b. C++
c. C
d. None of the above
14) Which of the following converts Java byte code into Dalvik byte code?
a. Dalvik converter
b. Dex compiler
c. Mobile interpretive compiler (MIC)
d. None of the above
a. android class
b. android package
c. A single screen in an application with supporting java code
d. None of the above
18) On which of the following, developers can test the application, during developing the android applications?
a. Third-party emulators
b. Emulator included in Android SDK
c. Physical android phone
d. All of the above
a. MAC
b. Windows
c. Linux
d. Redhat
a. context
b. object
c. contextThemeWrapper
d. None of the above
UNIT-5
These distributions make the Linux Operating System ready for users to run their
applications and perform tasks on their computers securely and effectively. Linux
distributions come in different flavors, each tailored to suit the specific needs and
preferences of users.
Linux is a powerful and flexible family of operating systems that are free to use and
share. It was created by Linus Torvalds in 1991. What's cool is that
anyone can see how the system works because its source code is open for everyone
to explore and modify. This openness encourages people from all over the world to
work together and make Linux better and better.
Linux Distribution
A Linux distribution is an operating system made up of a collection of
software based on the Linux kernel; that is, a distribution contains the Linux
kernel and supporting libraries and software. You can get a Linux-based
operating system by downloading one of the Linux distributions, and these
distributions are available for different types of devices like embedded devices,
personal computers, etc.
Around 600+ Linux distributions are available, and some of the popular Linux
distributions are:
MX Linux
Manjaro
Linux Mint
elementary OS
Ubuntu
Debian
Architecture of Linux
Linux architecture has the following components:
1. Kernel: Kernel is the core of the Linux based operating system. It virtualizes the
common hardware resources of the computer to provide each process with its virtual
resources. This makes the process seem as if it is the sole process running on the
machine. The kernel is also responsible for preventing and mitigating conflicts
between different processes. Different types of the kernel are:
Monolithic kernels
Hybrid kernels
Exokernels
Microkernels
2. System Library: Linux uses system libraries, also known as shared libraries, to
implement various functionalities of the operating system. These libraries contain
pre-written code that applications can use to perform specific tasks. By using these
libraries, developers can save time and effort, as they don’t need to write the same
code repeatedly. System libraries act as an interface between applications and the
kernel, providing a standardized and efficient way for applications to interact with
the underlying system.
3. Shell: The shell is the user interface of the Linux Operating System. It allows users
to interact with the system by entering commands, which the shell interprets and
executes. The shell serves as a bridge between the user and the kernel, forwarding
the user’s requests to the kernel for processing. It provides a convenient way for
users to perform various tasks, such as running programs, managing files, and
configuring the system.
4. Hardware Layer: The hardware layer encompasses all the physical components of
the computer, such as RAM (Random Access Memory), HDD (Hard Disk Drive),
CPU (Central Processing Unit), and input/output devices. This layer is responsible
for interacting with the Linux Operating System and providing the necessary
resources for the system and applications to function properly. The Linux kernel and
system libraries enable communication and control over these hardware components,
ensuring that they work harmoniously together.
5. System Utility: System utilities are essential tools and programs provided by the
Linux Operating System to manage and configure various aspects of the system.
These utilities perform tasks such as installing software, configuring network
settings, monitoring system performance, managing users and permissions, and
much more. System utilities simplify system administration tasks, making it easier
for users to maintain their Linux systems efficiently.
Advantages of Linux
The main advantage of Linux is it is an open-source operating system. This means
the source code is easily available for everyone and you are allowed to contribute,
modify and distribute the code to anyone without any permissions.
In terms of security, Linux is more secure than any other operating system. It does
not mean that Linux is 100 percent secure, it has some malware for it but is less
vulnerable than any other operating system. So, it does not require any anti-virus
software.
The software updates in Linux are easy and frequent.
Various Linux distributions are available so that you can use them according to your
requirements or according to your taste.
Linux is freely available to use on the internet.
It has large community support.
It provides high stability. It rarely slows down or freezes and there is no need to
reboot it after a short time.
It maintains the privacy of the user.
The performance of the Linux system is much higher than other operating systems. It
allows a large number of people to work at the same time and it handles them
efficiently.
It is network friendly.
Disadvantages of Linux
It is not very user-friendly, so it may be confusing for beginners.
It has fewer peripheral hardware drivers compared to Windows.
Linux Memory Management
The Linux memory management subsystem handles memory allocation for user space
programs and kernel internal structures. It also includes mapping files into the address
space of processes and several other things.
Huge Pages
The translation of virtual addresses requires several memory accesses, which are very
slow compared to the speed of the CPU. To avoid spending precious processor cycles
on address translation, CPUs maintain a cache of such translations known as the
Translation Lookaside Buffer (TLB). Mapping memory in larger "huge pages" reduces
the number of translations needed and the pressure on the TLB.
Dealing with physical memory directly is quite difficult, and to hide this
complexity the mechanism of virtual memory was introduced.
Zones
Linux groups memory pages into zones according to their possible usage. For
example, ZONE_HIGHMEM includes memory that isn't permanently mapped into the
address space of the kernel, ZONE_DMA includes memory that can be used by
various devices for DMA, and ZONE_NORMAL includes normally addressed pages.
Page Cache
The common way to get data into memory is to read it from files, as physical
memory is volatile.
Whenever a file is read, the data is put in the page cache to avoid expensive disk
accesses on subsequent reads.
Similarly, whenever a file is written, the data is placed in the page cache and is
eventually written back to the backing storage device.
Nodes
Anonymous Memory
The anonymous mapping or anonymous memory specifies memory that isn't backed by
any file system. These types of mappings are implicitly developed for heap and stack of
the program or by explicitly calls to the mmap(2) system call.
The anonymous mappings usually only specify the areas of virtual memory that a
program is permitted to access.
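A rough user-space illustration of anonymous memory: Python's mmap module creates an anonymous mapping when it is given a file descriptor of -1 (on Linux this corresponds to an mmap(2) call with MAP_ANONYMOUS). This is only a sketch of the concept, not kernel code:

```python
import mmap

# Request 4 KiB of anonymous memory: a mapping backed by no file,
# just like the mappings the kernel creates for a program's heap and stack.
anon = mmap.mmap(-1, 4096)  # fileno = -1 means "anonymous"

anon[:5] = b"hello"   # write into the mapping
print(anon[:5])       # b'hello'

anon.close()          # unmap the region
```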
OOM killer
It is feasible that the kernel would not be able to reclaim sufficient memory and the
loaded machine memory would be exhausted to proceed to implement.
Compaction
As the system runs, various tasks allocate and free memory, and memory becomes
fragmented. Although virtual memory can present scattered physical pages as
contiguous, memory compaction addresses the fragmentation problem by moving
pages together so that larger contiguous free regions become available.
Reclaim
Linux Process Scheduling
In the Linux operating system, we have mainly two types of processes, namely Real-
time Processes and Normal Processes. Let us learn more about them in detail.
Real-time Process
Real-time processes are processes that cannot be delayed in any situation. Real-time
processes are referred to as urgent processes.
Two scheduling policies are used for real-time processes: SCHED_FIFO and SCHED_RR.
A real-time process will try to preempt all the other running processes having lesser priority.
For example, A migration process that is responsible for the distribution of the processes
across the CPU is a real-time process. Let us learn about different scheduling policies
used to deal with real-time processes briefly.
SCHED_FIFO
FIFO in SCHED_FIFO means First In First Out. Hence, the SCHED_FIFO policy
schedules the processes according to the arrival time of the process.
SCHED_RR
RR in SCHED_RR means Round Robin. The SCHED_RR policy schedules the
processes by giving them a fixed amount of time for execution. This fixed time is known
as time quantum.
Normal Process
Normal Processes are the opposite of real-time processes. Normal processes will execute
or stop according to the time assigned by the process scheduler. Hence, a normal process
can suffer some delay if the CPU is busy executing other high-priority processes. Let us
learn about different scheduling policies used to deal with the normal processes in detail.
Batch (SCHED_BATCH)
As the name suggests, the SCHED_BATCH policy is used for executing a batch of
processes. This policy is somewhat similar to the Normal policy. SCHED_BATCH
policy deals with the non-interactive processes that are useful in optimizing the CPU
throughput time. SCHED_BATCH scheduling policy is used for a group of processes
having priority: 0.
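These policy constants are visible from user space on Linux. A minimal sketch using Python's os module (Linux-specific; switching to a real-time policy such as SCHED_FIFO normally requires root privileges, so the attempt below is expected to fail for ordinary users):

```python
import os

# Map the policy constants to readable names.
names = {os.SCHED_OTHER: "SCHED_OTHER",   # default time-sharing policy
         os.SCHED_FIFO:  "SCHED_FIFO",    # real-time, first-in first-out
         os.SCHED_RR:    "SCHED_RR",      # real-time, round robin
         os.SCHED_BATCH: "SCHED_BATCH"}   # non-interactive batch work

# Query the scheduling policy of the current process (pid 0 = ourselves).
policy = os.sched_getscheduler(0)
print("current policy:", names.get(policy, policy))

# Becoming a real-time process requires privileges.
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(1))
    print("now scheduled under SCHED_FIFO")
except PermissionError:
    print("not privileged enough for a real-time policy")
```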
Linux file access permissions are used to control who is able to read,
write and execute a certain file.
This is an important consideration due to the multi-user nature of
Linux systems and serves as a security mechanism to protect critical
system files both from individual users and from malicious
software or viruses.
Access permissions are implemented at a file level with the
appropriate permission set based on the file owner, the group owner
of the file and worldwide access.
In Linux, directories are also files, and therefore the file permissions
apply at the directory level as well, although some permissions are
applied differently depending upon whether the file is a regular file
or a directory.
As devices are also represented as files, the same permission
commands can be applied to control access to certain resources, such
as external devices.
Permission Groups
Each file and directory has three user-based permission groups: the
owner, the group, and all other users.
Permission Types
Each file or directory has three basic permission types:
Read - The Read permission refers to a user's capability to read the
contents of the file.
Write - The Write permissions refer to a user's capability to write or
modify a file or directory.
Execute - The Execute permission affects a user's capability to
execute a file or view the contents of a directory.
Modifying the Permissions
When in the command line, the permissions are edited by using
the command chmod. You can assign the permissions explicitly or by
using a binary reference as described below.
Explicitly Defining Permissions
To explicitly define permissions we need to reference the Permission
Group and Permission Types.
The Permission Groups used are:
u - Owner
g - Group
o - Others
a - All users (owner, group and others)
The potential Assignment Operators are + (plus) and - (minus); these are
used to tell the system whether to add or remove the specific permissions.
The Permission Types that are used are:
r - Read
w - Write
x - Execute
For example, let's say we have a file named file1 that currently has its
permissions set to rw-rw-rw-, which means that the owner, group
and all other users have read and write permission. Now we want to
remove the read and write permissions from all users.
To make this modification we would invoke the command: chmod a-rw file1
To add the permissions back, we would invoke the command: chmod a+rw file1
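The same modification can also be made programmatically. A small sketch using Python's os.chmod with the stat module's permission bits (the file here is a temporary file created only for this illustration):

```python
import os
import stat
import tempfile

RW_ALL = (stat.S_IRUSR | stat.S_IWUSR |
          stat.S_IRGRP | stat.S_IWGRP |
          stat.S_IROTH | stat.S_IWOTH)  # rw for owner, group and others

# Create a scratch file with rw-rw-rw- permissions, as in the example above.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)

# Equivalent of "chmod a-rw file1": clear read/write for everyone.
mode = stat.S_IMODE(os.stat(path).st_mode)
os.chmod(path, mode & ~RW_ALL)
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o0

# Equivalent of "chmod a+rw file1": add read/write back for everyone.
os.chmod(path, stat.S_IMODE(os.stat(path).st_mode) | RW_ALL)
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o666

os.remove(path)
```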
IOS
Architecture of IOS Operating System
iOS is a mobile operating system that was developed by Apple Inc. for iPhones,
iPads, and other Apple mobile devices. iOS is the second most popular and most
used mobile operating system after Android.
The structure of the iOS operating system is layer-based. Communication does not
occur directly between applications and hardware; the layers between the
Application Layer and the Hardware Layer handle it. The lower layers give basic
services on which all applications rely, and the higher-level layers provide
graphics and interface-related services. Most of the system interfaces come in a
special package called a framework.
A framework is a directory that holds dynamic shared libraries,
header files, images, and helper apps that support the library. Each layer has a set
of frameworks that are helpful for developers.
CORE OS Layer:
All the IOS technologies are built under the lowest level layer i.e. Core OS layer.
These technologies include:
1. Core Bluetooth Framework
2. External Accessories Framework
3. Accelerate Framework
4. Security Services Framework
5. Local Authentication Framework, etc.
It supports 64 bit which enables the application to run faster.
CORE SERVICES Layer:
Some important frameworks present in the CORE SERVICES layer help
the iOS operating system provide core services and better functionality. It is the second-
lowest layer in the architecture shown above. Below are some important frameworks
present in this layer:
1. Address Book Framework-
The Address Book Framework provides access to the contact details of the user.
2. Cloud Kit Framework-
This framework provides a medium for moving data between your app and iCloud.
3. Core Data Framework-
This is the technology that is used for managing the data model of a Model View
Controller app.
4. Core Foundation Framework-
This framework provides data management and service features for iOS applications.
5. Core Location Framework-
This framework helps to provide the location and heading information to the
application.
6. Core Motion Framework-
All the motion-based data on the device is accessed with the help of the Core Motion
Framework.
7. Foundation Framework-
This framework provides Objective-C wrappers for many of the features found in the
Core Foundation framework.
8. HealthKit Framework-
This framework handles the health-related information of the user.
9. HomeKit Framework-
This framework is used for talking with and controlling connected devices with the
user’s home.
10. Social Framework-
It is simply an interface that will access users’ social media accounts.
11. StoreKit Framework-
This framework supports buying content and services from inside iOS apps.
MEDIA Layer:
With the help of the media layer, we enable all graphics, video, and audio
technology of the system. This is the second layer from the top of the architecture.
The different frameworks of the MEDIA layer are:
1. UIKit Graphics-
This framework provides support for designing images and animating the view
content.
2. Core Graphics Framework-
This framework supports 2D vector and image-based rendering and is the native
drawing engine for iOS.
3. Core Animation-
This framework helps in optimizing the animation experience of the apps in iOS.
4. Media Player Framework-
This framework provides support for playing the playlist and enables the user to use
their iTunes library.
5. AV Kit-
This framework provides various easy-to-use interfaces for video presentation,
recording, and playback of audio and video.
6. OpenAL-
This framework is an industry-standard technology for providing audio.
7. Core Image-
This framework provides advanced support for still images.
8. GL Kit-
This framework manages advanced 2D and 3D rendering by hardware-accelerated
interfaces.
COCOA TOUCH:
COCOA Touch is also known as the application layer which acts as an interface for the
user to work with the iOS Operating system. It supports touch and motion events and
many more features. The COCOA TOUCH layer provides the following frameworks :
1. EventKit Framework-
This framework shows a standard system interface using view controllers for
viewing and changing events.
2. GameKit Framework-
This framework provides support for users to share their game-related data online
using a Game Center.
3. MapKit Framework-
This framework gives a scrollable map that one can include in your user interface of
the app.
4. PushKit Framework-
This framework provides registration support for receiving certain kinds of push
notifications, such as VoIP pushes.
Features of iOS operating System:
Let us discuss some features of the iOS operating system-
1. Highly secure compared to other operating systems.
2. iOS provides multitasking features: while working in one application, we can
switch to another application easily.
3. iOS’s user interface includes multiple gestures like swipe, tap, pinch, Reverse pinch.
4. iBooks, iStore, iTunes, Game Center, and Email are user-friendly.
5. It provides Safari as a default Web Browser.
6. It has a powerful API and a Camera.
7. It has deep hardware and software integration
Applications of IOS Operating System:
Here are some applications of the iOS operating system-
1. iOS Operating System is the Commercial Operating system of Apple Inc. and is
popular for its security.
2. iOS operating system comes with pre-installed apps which were developed by Apple
like Mail, Map, TV, Music, Wallet, Health, and Many More.
3. Swift Programming language is used for Developing Apps that would run on IOS
Operating System.
4. In iOS Operating System we can perform Multitask like Chatting along with Surfing
on the Internet.
Advantages of IOS Operating System:
The iOS operating system has some advantages over other operating systems available
in the market especially the Android operating system. Here are some of them-
1. More secure than other operating systems.
2. Excellent UI and fluid responsiveness.
3. Suits best for Business and Professionals
4. Generate Less Heat as compared to Android.
Disadvantages of IOS Operating System:
Let us have a look at some disadvantages of the iOS operating system-
1. More Costly.
2. Less user-friendly compared to the Android operating system.
3. Not flexible, as it supports only iOS devices.
4. Battery Performance is poor.
FILE SYSTEM
Since iOS 10.3, the Apple File System (APFS) has been the default file system that
handles persistent storage of data files.
The iOS file system contains two volumes.
The system volume contains the operating system and for this reason cannot
be completely erased. Only iOS system data can be written to this location.
The user volume contains user data. Information stored on the user volume
is encrypted only when the device is protected with a passcode.
All third-party apps exist within an app sandbox to prevent them from accessing or
modifying the contents of other apps without permission from the user.
App data is information created and stored by an app during use.
App data might include the high score of a game or the contents of a
document.
App data is stored within the app directory and, potentially, backed up by
iCloud or another server.
Local app data is removed when an app is deleted. However, data may still
exist in iCloud or a third-party server.
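As a minimal sketch of how app data can be persisted, the snippet below saves a game's high score into the app's sandboxed Documents directory (the `HighScore` type and the file name `highscore.json` are hypothetical examples, not part of any Apple API):

```swift
import Foundation

// Hypothetical app data: a game's high score record.
struct HighScore: Codable {
    let player: String
    let score: Int
}

let fm = FileManager.default

// The app's Documents directory (inside the sandbox on iOS).
let documents = fm.urls(for: .documentDirectory, in: .userDomainMask)[0]
// Ensure the directory exists (always true on iOS; this guard helps when
// running the sketch on other platforms).
try fm.createDirectory(at: documents, withIntermediateDirectories: true)

// Write the app data; deleting the app would remove this file,
// though a copy may still exist in iCloud if the app syncs it there.
let url = documents.appendingPathComponent("highscore.json")
try JSONEncoder().encode(HighScore(player: "Alice", score: 9001)).write(to: url)

// Restore it on a later launch.
let restored = try JSONDecoder().decode(HighScore.self,
                                        from: Data(contentsOf: url))
```

Because the file lives in the app directory, it follows the lifecycle described above: it is backed up (if the app opts in) and removed when the app is deleted.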
Using Shared iPad divides the data differently on an iOS device.
When a student logs in, apps that support cloud storage sync their app data
in the background. Other app data is cached locally on the iPad and, if
necessary, continues to push to the cloud even after logout.
Shared iPad allows administrators to designate how many users can share an
iPad; local storage is then divided to provide space for each user.
Shared iPad is enabled via a mobile device management (MDM) solution. In
Jamf Pro, Shared iPad is enabled as part of a PreStage enrollment.
The sandbox directory
When it comes to reading and writing files, each iOS application has its own sandbox
directory.
For security reasons, every interaction of the iOS app with the file system is limited to
this sandbox directory. Exceptions are access requests to user data like photos, music,
contacts etc.
The Documents/ directory holds user-generated content: anything a user might create,
view or delete through our app, for example text files, drawings, videos, images and
audio files. We can add subdirectories to organise this content.
The system additionally creates the Documents/Inbox directory which we can use to
access files that our app was asked to open by other applications. We can read and delete
files in this directory but cannot edit or create new files.
The Library directory contains standard subdirectories we can use to store app support
files. The most used subdirectories are:
Library/Application Support/ - to store any files the app needs that should not be
exposed to the user, for example configuration files, templates etc.
Library/Caches/ - to cache data that can be recreated and needs to persist longer
than files in the tmp directory. The system may delete the directory on rare
occasions to free up disk space.
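The standard sandbox locations described above can be resolved at runtime through Foundation's `FileManager`; a minimal sketch (the printed paths differ per device, app and platform):

```swift
import Foundation

let fm = FileManager.default

// Documents/ - user-visible content created through the app.
let documents = fm.urls(for: .documentDirectory, in: .userDomainMask)[0]

// Library/Application Support/ - files the app needs that should not
// be exposed to the user (configuration files, templates, etc.).
let appSupport = fm.urls(for: .applicationSupportDirectory, in: .userDomainMask)[0]

// Library/Caches/ - re-creatable data; the system may purge it to free space.
let caches = fm.urls(for: .cachesDirectory, in: .userDomainMask)[0]

// tmp/ - short-lived scratch files.
let tmp = fm.temporaryDirectory

print(documents.path)
print(appSupport.path)
print(caches.path)
print(tmp.path)
```

Resolving directories through these constants, rather than hard-coding paths, is what keeps an app working inside whatever sandbox directory the system assigns it.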
1 MARK
1. A file can be recognized as an ordinary file or directory by ____ symbol.
a) $
b) -
c) *
d) /
Answer: b
2. Which command is used to display the operating system name
a) os
b) unix
c) kernel
d) uname
Answer: d
3. Which command is used to print a file
a) print
b) ptr
c) lpr
d) none of the mentioned
Answer: c
4. How many types of permissions a file has in UNIX?
a) 1
b) 2
c) 3
d) 4
Answer: c
5. Permissions of a file are represented by which of the following characters?
a) r,w,x
b) e,w,x
c) x,w,e
d) e,x,w
Answer: a
6. Which of the following symbol is used to indicate the absence of a permission of a file?
a) $
b) &
c) +
d) -
Answer: d
7. When we create a file, we are the owner of a file.
a) True
b) False
Answer: a
8. What is group ownership?
a) group of users who can access the file
b) group of users who can create the file
c) group of users who can edit the file
d) group of users who can delete the file
Answer: a
9. A file has permissions as rwx r-- ---. A user other than the owner cannot edit the file.
a) True
b) False
Answer: a
10. If a file is read protected, we can write to the file.
a) True
b) False
Answer: b
11. The write permission for a directory determines that ____________
a) we can write to a directory file
b) we can read the directory file
c) we can execute the directory file
d) we can add or remove files to it
Answer: d
12. If the file is write-protected and the directory has write permission, then we cannot delete the file.
a) True
b) False
Answer: b
13. What is execute permission?
a) permission to execute the file
b) permission to delete the file
c) permission to rename the file
d) permission to search or navigate through the directory
Answer: d
14. Which of the following is the default permission set for ordinary files?
a) rw-rw-rw-
b) rwxrwxrwx
c) r--r--r--
d) rw-rw-rwx
Answer: a
15. Which of the following is the default permission set for directories?
a) rw-rw-rw-
b) rwxrwxrwx
c) r--r--r--
d) rw-rw-rwx
Answer: b
16. Which application development environment is used to build apps that run on iOS?
A. Cocoa
B. Cocoa touch
C. Cocoa iOS
D. Cocoa begin
Ans : B
17. Is Cocoa Touch used to refer to application development using any programmatic interface?
A. TRUE
B. FALSE
C. Can be true or false
D. Can not say
Ans : A
18. Which JSON framework is supported by iOS?
A. UIKit
B. Django
C. SBJson
D. UCJson
Ans : C
19. ___________ is a two-part string used to identify one or more apps from a single development team.
A. bundle ID
B. app ID
C. team ID
D. All of the above
Ans : B
20. How many ways to achieve concurrency in iOS?
A. 1
B. 2
C. 3
D. 4
Ans : C
21. When is an app said to be in the not-running state?
A. When it is launched
B. When it is not launched
C. When it gets terminated by the system during running
D. Both B and C
Ans : D
22. In which state is the app running in the foreground and receiving events?
A. Not running
B. Inactive
C. Background
D. Active
Ans : D
23. UIKit classes should be used only from an application’s main thread.
A. TRUE
B. FALSE
C. Can be true or false
D. Can not say
Ans : A
24. ARC stands for?
A. Automatic Reference Counting
B. Automatic Retain Cycle
C. Automatic Release Counting
D. Automatic Resource Control
Ans : A
25. Which mechanism allows an object to be notified when a property of another object changes?
A. KVA
B. KVO
C. KVR
D. KVZ
Ans : B
27. Which company introduced iOS?
A) IBM
B) Intel
C) Apple
D) Google
Ans : C
28. In which country is Apple Inc. headquartered?
A)UK
B)USA
C)China
29. Which framework is used to construct an application's user interface for iOS?
A)CoreMotion Framework
B)Foundation Framework
C)UIKit Framework
D)AppKit Framework
30. Which of the following sets of technologies can be used to develop iOS applications?
A)HTML,CSS,Angular
B)React, Redux,Swift
C)C#,Node,Objective-C, Swift
31. In which version of iOS was multitasking introduced?
A)iOS 1.0
B)iOS 2.0
C)iOS 3.0
D)iOS 4.0
36. __________ is a two-part string used to identify one or more apps from a single development team.
A) app ID
B) team ID
C) bundle ID
Ans : A
37. Which is the default attribute of an Objective-C property?
A)atomic
B)assign
C)non-atomic
38. To create an emulator, you need an AVD. What does it stand for?
Ans : Android Virtual Device
39. The iPhone has a _________ that activates when you rotate the device from portrait to landscape.
A) Special Sensor
B) Accelerometer
C) Shadow detector
Ans : B
40. Which framework is used to play and create audiovisual media in iOS?
A)AVFoundation.framework
B)AFNetwork.framework
C)Audiotoolbox.framework
D)CFNetwork.framework