Kavitha’s College of Arts & Science, Vaiyappamalai
Department of Computer Science
I M.Sc.-CS
Subject Name: ADVANCED OPERATING SYSTEMS
Syllabus
Unit:1
BASICS OF OPERATING SYSTEMS: Basics of Operating Systems: What is an
Operating System? – Mainframe Systems – Desktop Systems – Multiprocessor
Systems – Distributed Systems – Clustered Systems –Real-Time Systems –
Handheld Systems – Feature Migration – Computing Environments -Process
Scheduling – Cooperating Processes – Inter Process Communication- Deadlocks –
Prevention – Avoidance – Detection – Recovery.
Unit:2
DISTRIBUTED OPERATING SYSTEMS: Distributed Operating Systems: Issues
– Communication Primitives – Lamport's Logical Clocks – Deadlock handling
strategies – Issues in deadlock detection and resolution-distributed file systems –
design issues – Case studies – The Sun Network File System-Coda.
Unit:3
REAL TIME OPERATING SYSTEM: Real-Time Operating Systems: Introduction
– Applications of Real Time Systems – Basic Model of Real Time System –
Characteristics – Safety and Reliability - Real Time Task Scheduling.
Unit:4
HANDHELD SYSTEM: Operating Systems for Handheld Systems: Requirements
– Technology Overview – Handheld Operating Systems – PalmOS – Symbian
Operating System – Android – Architecture of Android – Securing Handheld
Systems.
Unit:5
CASE STUDIES: Case Studies : Linux System: Introduction – Memory
Management – Process Scheduling – Scheduling Policy - Managing I/O devices –
Accessing Files- iOS : Architecture and SDK Framework - Media Layer - Services
Layer - Core OS Layer - File System.
Unit:6
Contemporary Issues : Expert lectures, online seminars – webinars
TEXT BOOKS
1. Abraham Silberschatz; Peter Baer Galvin; Greg Gagne, “Operating System
Concepts”, Seventh Edition, John Wiley & Sons, 2004.
2. Mukesh Singhal and Niranjan G. Shivaratri, “Advanced Concepts in
Operating Systems – Distributed, Database, and Multiprocessor Operating
Systems”, Tata McGraw-Hill, 2001.
REFERENCES
1. Rajib Mall, “Real-Time Systems: Theory and Practice”, Pearson Education India,
2006.
2. Pramod Chandra P. Bhatt, “An Introduction to Operating Systems: Concepts
and Practice”, PHI, Third Edition, 2010.
3. Daniel P. Bovet & Marco Cesati, “Understanding the Linux Kernel”, Third
Edition, O’Reilly, 2005.
4. Neil Smyth, “iPhone iOS 4 Development Essentials – Xcode”, Fourth Edition,
Payload media, 2011.
Related Online Contents [MOOC, SWAYAM, NPTEL, Websites etc.]
1. https://onlinecourses.nptel.ac.in/noc20_cs04/preview
2. https://www.udacity.com/course/advanced-operating-systems--ud189
3. https://minnie.tuhs.org/CompArch/Resources/os-notes.pdf
Unit I
An operating system is a program that acts as an interface between the computer
user and computer hardware, and controls the execution of programs.
An operating system acts as an intermediary between the user of a computer and
computer hardware.
Characteristics of Operating Systems
Let us now discuss some of the important characteristic features of operating
systems:
Device Management: The operating system keeps track of all the devices.
So, it is also called the Input/Output controller that decides which process
gets the device, when, and for how much time.
File Management: It keeps track of where information is stored, its status,
and who gets access to it; it also allocates and de-allocates files as needed.
Job Accounting: It keeps track of time and resources used by various jobs
or users.
Error-detecting Aids: These contain methods that include the production
of dumps, traces, error messages, and other debugging and error-detecting
methods.
Memory Management: It keeps track of the primary memory, like what
part of it is in use by whom, or what part is not in use, etc. and It also
allocates the memory when a process or program requests it.
Processor Management: It allocates the processor to a process and then de-
allocates the processor when it is no longer required or the job is done.
Control on System Performance: It records the delays between a request
for a service and the system's response.
Security: It prevents unauthorized access to programs and data using
passwords or some kind of protection technique.
Convenience: An OS makes a computer more convenient to use.
Efficiency: An OS allows the computer system resources to be used
efficiently.
Ability to Evolve: An OS should be constructed in such a way as to permit
the effective development, testing, and introduction of new system functions
without interfering with service.
Throughput: An OS should be constructed so that it can give maximum
throughput (number of tasks completed per unit time).
Functionalities of Operating System
Resource Management: When multiple users access the system in parallel,
the OS works as the resource manager; its responsibility is to share the
hardware among the users, which decreases the load on the system.
Process Management: It includes various tasks like scheduling and
termination of the process. It is done with the help of CPU Scheduling
algorithms.
Storage Management: The file system mechanism is used for the
management of secondary storage, chiefly hard disks. NTFS, CIFS, NFS, etc.
are some file systems. All the data is stored in various tracks of hard disks,
which are all managed by the storage manager.
Memory Management: Refers to the management of primary memory. The
operating system has to keep track of how much memory has been used and
by whom. It has to decide which process needs memory space and how much.
OS also has to allocate and deallocate the memory space.
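The allocate/deallocate bookkeeping described above can be sketched as a toy first-fit allocator (an illustrative model only, not any real kernel's allocator; the class and method names are invented for this example):

```python
class FirstFitAllocator:
    """Toy model of OS memory bookkeeping: tracks which byte ranges
    of a fixed-size primary memory are free, allocating first fit."""

    def __init__(self, size):
        self.free = [(0, size)]   # list of (start, length) free holes
        self.used = {}            # start address -> length, per allocation

    def allocate(self, length):
        for i, (start, hole) in enumerate(self.free):
            if hole >= length:    # the first hole big enough wins
                self.free[i] = (start + length, hole - length)
                if self.free[i][1] == 0:
                    del self.free[i]          # hole fully consumed
                self.used[start] = length
                return start
        return None               # no hole large enough

    def deallocate(self, start):
        length = self.used.pop(start)
        self.free.append((start, length))     # naive: no coalescing
```

First fit simply scans the hole list and takes the first hole that is large enough; real allocators also coalesce adjacent free holes on deallocation, which this sketch omits for brevity.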
Security/Privacy Management: Privacy is also provided by the Operating
system using passwords so that unauthorized applications can’t access
programs or data. For example, Windows uses Kerberos authentication to
prevent unauthorized access to data.
The operating system as a user interface involves four layers:
1. User
2. System and application programs
3. Operating system
4. Hardware
Every general-purpose computer consists of hardware, an operating system,
system programs, and application programs. The hardware consists of memory,
CPU, ALU, I/O devices, peripheral devices, and storage devices. The system
programs consist of compilers, loaders, editors, the OS, etc. The application
programs consist of business programs and database programs.
Conceptual View of Computer System
Purposes of an Operating System
It controls the allocation and use of the computing System’s resources
among the various user and tasks.
It provides an interface between the computer hardware and the
programmer that simplifies and makes it feasible for coding and debugging of
application programs.
Tasks of an Operating System
1. Provides the facilities to create and modify programs and data files using
an editor.
2. Access to the compiler for translating the user program from high-level
language to machine language.
3. Provide a loader program to move the compiled program code to the
computer’s memory for execution.
4. Provide routines that handle the details of I/O programming.
MAINFRAME SYSTEMS:
A mainframe operating system is the software infrastructure that allows
a mainframe computer to run programs, connect linked machines, and process
complex numerical and data-driven tasks.
All computers use some sort of basic operating system (OS), which is what
enables them to organize files and execute commands.
The biggest difference between a simple, one-computer OS and a mainframe
operating system is where each is located.
Mainframes are a type of computer built for maximum throughput, where
throughput is defined as "the rate at which data is processed". Mainframes are
also heavily used for transaction processing; a transaction can be defined as
"a set of operations including disk reads and writes, operating system calls,
transferring data from one subsystem to another, etc."
Mainframes have more processing power than servers and microcomputers
(laptops, PCs, etc.), but less processing power than a supercomputer.
Uses of mainframe computers:
Mainframes are used where reliability, redundancy, and availability matter most.
DESKTOP SYSTEMS
An operating system (OS) acts as an interface between the hardware and software
of a desktop system. It manages system resources, facilitates software execution,
and provides a user-friendly environment. Different operating systems offer
distinct features, compatibility, and performance, catering to the diverse needs
and preferences of users.
Importance of Desktop Systems
Desktop systems play a crucial role in various domains, including
education, business, entertainment, and personal productivity.
They provide individuals and organizations with powerful computing
capabilities, enabling complex tasks to be completed efficiently.
Desktop systems facilitate creativity, communication, data analysis, and
knowledge sharing, contributing to enhanced productivity and innovation.
Components of a Desktop System
Central Processing Unit (CPU):
Random Access Memory (RAM):
Storage Devices:
Graphics Processing Unit (GPU):
Input and Output Devices:
Evolution of Desktop Systems
Desktop systems have evolved significantly over the years. From the bulky and
limited-capability systems of the past to the sleek and powerful computers of
today, technological advancements have revolutionized the desktop computing
experience.
Smaller form factors, increased processing power, improved storage technologies,
and enhanced user interfaces are some of the notable advancements that have
shaped the evolution of desktop systems.
Popular Desktop Operating Systems
Windows: Windows, developed by Microsoft, is one of the most widely
used desktop operating systems globally.
macOS: macOS is the operating system designed specifically for Apple’s
Mac computers. Known for its sleek and intuitive interface, macOS offers
seamless integration with other Apple devices and services.
Linux: Linux is an open-source operating system that provides a high
degree of customization and flexibility. It is favored by developers, system
administrators, and tech enthusiasts due to its stability, security, and vast
array of software options.
Multiprocessing Operating System
Multiprocessing Operating System is the type of Operating System that uses
multiple processors to operate within a single system. Multiple CPUs are
connected to divide and execute a job more quickly. After the task is finished, the
output from all Processors is compiled to provide a final result. Jobs are required
to share main memory and they may often share other system resources.
The organization of a typical Multiprocessing Operating System is shown in the
image given below.
Advantages of Multiprocessing Operating System
Multiprocessing Operating Systems have the following advantages over the other
types of Operating Systems.
Multiprocessing Operating System uses multiple processors to execute the
tasks which results in faster execution and better performance of the system.
In case a processor fails to work, the other processors can continue to
execute the tasks. Thus, Multiprocessing Operating Systems ensure the high
availability of the system.
Multiprocessing Operating Systems are scalable, which means they can
handle an increased workload without affecting the performance of the
system.
Multiprocessing Operating Systems efficiently utilize the resources.
Types of Multiprocessing Operating Systems
Multiprocessing Operating Systems are of the following types.
1. Symmetrical Multiprocessing Operating System
2. Asymmetrical Multiprocessing Operating System
Both of these types are discussed below in detail.
Symmetrical Multiprocessing Operating System
In a Symmetrical multiprocessing operating system, each processor executes the
same copy of the operating system, makes its own decisions, and collaborates
with the other processors to ensure that the system runs smoothly.
Asymmetrical Multiprocessing Operating System
Asymmetrical Multiprocessing Operating System involves one processor acting as
a master and the others as slaves.
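The divide-execute-compile pattern described above can be sketched with Python's multiprocessing module (a user-level illustration of dividing one job across several processors, not a kernel-level SMP implementation; `parallel_sum` and `partial_sum` are invented names for this example):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # each worker process executes its share of the job
    return sum(chunk)

def parallel_sum(data, workers=4):
    # divide the job into one chunk per worker (strided slices)
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # compile the outputs from all processors into a final result
    return sum(partials)
```

Each chunk runs in a separate OS process, so the work can proceed on separate CPUs; the partial results are then combined, mirroring the "output from all processors is compiled" step described above.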
Distributed Operating System:
A distributed operating system is a type of operating system that manages a
network of independent computers and makes it appear as if they are a single
computer. It allows for the sharing of resources, such as storage, processing
power, and memory across multiple machines.
Features / Characteristics of Distributed Operating System
The features or characteristics of the distributed operating system are:
1. Concurrency:
2. Resource sharing:
3. Scalability:
4. Fault tolerance:
5. Transparency:
6. Heterogeneity:
7. Communication:
8. Security:
Types of Distributed Operating System
Let’s check out each type of distributed operating system in detail:
1. Peer-to-Peer Systems: P2P systems are often used in file sharing, instant
messaging, and gaming applications. They are also known as “Loosely Coupled
Systems”.
2. Client-Server Systems: In a client-server distributed operating system, the
server provides a specific set of services or resources to the client.
Middleware: In contrast to the other distributed operating systems, middleware is
a software layer that sits between the operating system and application software.
3. N-tier Systems: N-tier distributed operating systems are based on the
concept of dividing an application into different tiers, where each tier has a
specific responsibility.
Clustered Systems:
A clustered system refers to a group of interconnected computers, or nodes, that
collaborate closely to perform tasks in parallel. These nodes work together as a
single system, sharing resources and distributing workload effectively. By
leveraging the combined power of multiple machines, clustered systems achieve
high availability, scalability, and improved performance.
Types of Clustered Systems
Clustered systems come in various forms, each tailored to specific requirements.
Let’s delve into the three common types of clustered systems:
High-Availability Clusters: High-availability clusters focus on providing
continuous operation even in the face of hardware or software failures.
Load-Balancing Clusters: Load-balancing clusters distribute incoming workload
evenly across multiple nodes, optimizing resource utilization and preventing
bottlenecks.
Failover Clusters: Failover clusters are designed to provide seamless failover in
case of node failures. When a node becomes unavailable, another standby node
takes over its responsibilities to ensure uninterrupted operation.
Components of a Clustered System
To understand how clustered systems work, it is essential to familiarize
ourselves with their key components:
Cluster Nodes: Cluster nodes are the individual computers or servers that make
up the cluster. These nodes work in harmony, communicating and collaborating
to achieve common goals.
Interconnect: The interconnect is the communication infrastructure that allows
cluster nodes to exchange data and coordinate their activities. High-speed
networks, such as Ethernet or InfiniBand, are often used as interconnects.
Cluster Software: Cluster software provides the necessary tools and services to
manage and control the cluster. It facilitates resource sharing, load balancing,
failover, and other crucial functionalities.
Examples of Clustered Systems
Several well-known clustered systems are widely used across various
industries. Let’s explore a few notable examples:
Microsoft Windows Server Failover Clustering
Linux-HA
Apache Hadoop
How Clustered Systems Work:
Clustered systems employ specialized algorithms and protocols to distribute
workload, monitor node health, and ensure efficient task execution. These systems
leverage the power of parallel processing to handle complex tasks by dividing
them into smaller subtasks that can be executed concurrently across multiple
nodes.
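The load-balancing and failover behaviour described above can be sketched as a round-robin dispatcher that skips unhealthy nodes (an illustrative toy only; real cluster software such as the examples above also monitors node health with heartbeats, and the class and method names here are invented):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distributes incoming requests evenly across cluster nodes,
    skipping nodes marked unhealthy (a crude form of failover)."""

    def __init__(self, nodes):
        self.nodes = nodes
        self.healthy = set(nodes)
        self._ring = cycle(nodes)   # endless round-robin ordering

    def mark_down(self, node):
        # a health monitor would call this when a node stops responding
        self.healthy.discard(node)

    def dispatch(self):
        # advance around the ring until a healthy node is found
        for _ in range(len(self.nodes)):
            node = next(self._ring)
            if node in self.healthy:
                return node
        raise RuntimeError("no healthy nodes in cluster")
```

Round-robin gives even distribution when nodes are equally capable; weighted or least-connections policies are common refinements.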
What is Real Time Operating System?
A real time operating system is a type of operating system used in computing
systems that require strict completion deadlines for all tasks that need to be
performed. A real time OS is critical in applications that need immediate and
deterministic behavior, such as industrial control systems, aerospace and
defense, medical devices, and the automotive industry. Overall, a real time
operating system ensures that a system is reliable, safe, and efficient.
Types of Real Time Operating Systems
There are three types of real time OS:
1. Hard real time OS:
A hard real time operating system is a type of real-time system that guarantees
that all tasks will be completed within a certain deadline, without exception.
These systems are designed to provide deterministic behavior, ensuring that
critical tasks are completed on time, every time.
2. Soft real time OS:
A soft real time operating system is a type of real-time system that does not
guarantee that all tasks will be completed within a certain deadline. Instead, it
provides the best-effort service, attempting to complete tasks as quickly as
possible, but without making any guarantees about response time or deadline
completion.
3. Firm real time OS:
A firm real time operating system (RTOS) is a type of real time system that
guarantees that tasks will be completed within a certain deadline but with a
degree of flexibility. Unlike hard real time systems that have to meet hard
deadlines without exception, firm real time systems can tolerate occasional
deadline misses, but they should be infrequent and not affect the overall system
operation.
Terms used in Real Time Operating System
Some essential terms used in real time operating system are discussed below:
Task: A set of related jobs that jointly provide some system
functionality.
Job: A job is a small piece of work that can be assigned to a processor, and
that may or may not require resources.
Release time of a job: The time at which the job becomes ready for
execution.
Execution time of a job: It is the time taken by a job to finish its execution.
Deadline of a job: It’s the time by which a job should finish its execution.
Processors: They are also known as active resources. They are important for
the execution of a job.
Relative deadline: The maximum allowable response time of a job is called
its relative deadline.
Response time of a job: It is the length of time from the release time of a job
to the instant it finishes.
Absolute deadline: This is the release time of a job plus its relative
deadline.
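These timing terms fit together arithmetically: the absolute deadline is the release time plus the relative deadline, and the response time is the completion instant minus the release time. A minimal sketch (the class and field names are illustrative, not from any real RTOS API):

```python
from dataclasses import dataclass

@dataclass
class Job:
    release: float        # instant the job becomes ready for execution
    execution: float      # time the job needs to run to completion
    rel_deadline: float   # maximum allowable response time

    @property
    def abs_deadline(self):
        # absolute deadline = release time + relative deadline
        return self.release + self.rel_deadline

    def response_time(self, finish):
        # length of time from release to the instant the job finishes
        return finish - self.release

    def meets_deadline(self, finish):
        return finish <= self.abs_deadline
```

For example, a job released at t=2 with a relative deadline of 5 has an absolute deadline of t=7; if it finishes at t=6 its response time is 4 and the deadline is met.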
Factors for Selecting a Real Time Operating System
Below are some factors to consider when selecting a real time operating
system.
Performance: Performance is a crucial factor to consider when selecting a
real time OS; developers have a variety of performance characteristics to weigh.
Features: Every real time OS has different features; developers need to
evaluate which features are most important to the project's success and select
the real time OS that provides them.
Ecosystem: A software product's ecosystem is a critical piece of the selection
process, ensuring ease of integration, support, and product lifetime.
Middleware: Many real time OSes come with middleware components, or
have third parties who have developed components that integrate into the real
time OS.
Engineering Team: The aspect of real time OS selection that is probably
the most commonly overlooked is the engineering team itself.
RTOS Examples
Here are some RTOS examples:
1. FreeRTOS: FreeRTOS is a popular open-source Real time OS. It is designed
for microcontrollers and small embedded systems.
2. VxWorks: VxWorks is a real time operating system developed by Wind
River Systems. It is widely used in the aerospace, defense, and industrial
automation industries.
3. QNX: QNX is a commercial real time operating system developed by
BlackBerry. It is used in mission-critical applications such as automotive, medical
devices, and nuclear power plants.
4. ThreadX: ThreadX is a real time operating system developed by Express
Logic. It is widely used in consumer electronics, medical devices, and automotive
applications.
5. Nucleus RTOS: Nucleus RTOS is a real time operating system developed by
Mentor Graphics. It is used in a wide range of applications, including consumer
electronics, medical devices, and automotive systems.
These are just a few RTOS examples; there are many other commercial and
open-source RTOSes available in the market.
Characteristics of Real-time System
Correctness: Correctness is one of the essential properties of a real time
OS: it produces a correct result within the given time.
Safety and Reliability: Safety is necessary for any system, and a real time
operating system must be able to run for long periods without failure.
Time Constraints: In real time operating system, the tasks should be
completed within the given time period.
Embedded: Real time operating systems are usually embedded, meaning
the system is designed for a specific purpose through a combination of hardware
and software.
Features of Real Time Operating System
A real time OS occupies very little space.
The response time of a real time OS is predictable.
It consumes few resources.
In a real time OS, the kernel restores the state of a task and passes control of
the CPU to that task.
Advantages of Real Time Operating System
A real time OS is easy to develop and execute for real-time applications.
Real time operating system structures are compact.
Real time operating system structures require less memory space.
Memory allocation in these types of systems is managed easily.
These types of systems are designed to be highly reliable.
Applications of Real Time OS
Real time OSes are used in:
Airline reservation systems.
Air traffic control systems.
Systems that provide immediate updating.
Any system that provides up-to-the-minute information on
stock prices.
Handheld System
Operating systems that run on low-speed processors, have small memory
requirements, and are ideal for mobile devices and personal digital assistants
(PDAs) are called handheld operating systems. Such systems need fewer
resources and are intended to work with many different types of hardware. They
are also known as mobile operating systems.
Handheld devices are generally small enough to fit in the palm and can be
carried easily in a pocket. These systems support Bluetooth, e-mail, Wi-Fi, GPS
navigation, video cameras, music players, web browsing, and other wireless
technologies. The most basic handheld devices are designed for personal
information management (PIM) applications, allowing users to keep calendars,
task lists, and addresses handy.
Types of Handheld System
Handheld systems come in a variety of sizes and shapes for different
kinds of use. Handheld devices are generally categorized in two ways.
Appearance/form factor
There are two divisions in the physical appearance of handheld devices: tablet
and clamshell designs. Tablets are flat slates that typically rely on a touchscreen
rather than an integral keyboard. Clamshell devices fold open to reveal
miniature versions of a keyboard and screen. Form factors also differ in input
mechanisms, handwriting recognition, processors, memory, wireless
connectivity, battery types, displays, and cameras.
Advantages of Handheld operating systems
They are small and lightweight and easily portable.
They take seconds to boot up.
They have very long battery life.
They are significantly lower in cost as compared to desktops or portables.
Feature Migration in operating system
Feature migration refers to the process of transferring or adapting features or
functionalities from one system, application, or platform to another. It involves
taking specific capabilities or characteristics of an existing system and
implementing them in a different system or environment.
The process of feature migration typically involves the following steps:
1. Feature Identification: Analyzing the source system to identify the specific
features or functionalities that need to be migrated. This involves
understanding the purpose, requirements, and implementation details of the
features.
2. Compatibility Assessment: Assessing the compatibility of the features with
the target system or environment. This includes evaluating the technical
feasibility, dependencies, and potential challenges associated with migrating
the features.
3. Design and Adaptation: Designing the architecture and adapting the
feature implementation to suit the target system. This may involve
modifying the code, data structures, interfaces, or configurations to ensure
proper integration and functionality in the new environment.
4. Implementation: Developing and implementing the migrated features in
the target system. This includes coding, testing, and debugging the features
to ensure they work as intended in the new context.
5. Testing and Validation: Conducting thorough testing and validation of the
migrated features to ensure they meet the desired functionality, performance,
and quality standards. This involves various testing methodologies, such as
unit testing, integration testing, and system testing.
6. Deployment and Rollout: Deploying the migrated features in the
production environment and making them available for end-users. This may
involve coordinating with stakeholders, managing dependencies, and
ensuring a smooth transition from the old system to the new one.
Feature migration can bring several benefits, including:
1. Improved Functionality:
2. System Integration
3. Cost and Time Savings:
4. Legacy System Modernization:
5. Standardization and Consistency:
Computing Environments
Computing environments refer to the technology infrastructure and software
platforms that are used to develop, test, deploy, and run software applications.
There are several types of computing environments, including:
1. Mainframe: A large and powerful computer system used for critical
applications and large-scale data processing.
2. Client-Server: A computing environment in which client devices access
resources and services from a central server.
3. Cloud Computing: A computing environment in which resources and
services are provided over the Internet and accessed through a web browser or
client software.
4. Mobile Computing: A computing environment in which users access
information and applications using handheld devices such as smartphones and
tablets.
5. Grid Computing: A computing environment in which resources and
services are shared across multiple computers to perform large-scale
computations.
6. Embedded Systems: A computing environment in which software is
integrated into devices and products, often with limited processing power and
memory.
Computing Environments:
Types of Computing Environments: There are various types of computing
environments. They are:
1. Personal Computing Environment: In a personal computing environment
there is a stand-alone machine.
2. Time-Sharing Computing Environment: In a time-sharing computing
environment, multiple users share the system simultaneously. Different users
(different processes) are allotted different time slices, and the processor
switches rapidly among users accordingly.
3. Client Server Computing Environment: In a client-server computing
environment, two machines are involved, i.e., a client machine and a server
machine; sometimes the same machine serves as both client and server.
4. Distributed Computing Environment: In a distributed computing
environment, multiple nodes are connected together by a network but are
physically separated.
5. Grid Computing Environment: In a grid computing environment, multiple
computers from different locations work on a single problem.
6. Cloud Computing Environment: In a cloud computing environment,
computer system resources such as processing and storage are made
available on demand.
7. Cluster Computing Environment: In a cluster computing environment,
the work is performed by a cluster, a set of loosely or tightly connected
computers that work together. The cluster is viewed as a single system and
performs tasks in parallel, which makes it similar to a parallel computing
environment. Cluster-aware applications are especially used in cluster
computing environments.
Advantages of different computing environments:
1. Mainframe: High reliability, security, and scalability, making it suitable for
mission-critical applications.
2. Client-Server: Easy to deploy, manage and maintain, and provides a
centralized point of control.
3. Cloud Computing: Cost-effective and scalable, with easy access to a wide
range of resources and services.
4. Mobile Computing: Allows users to access information and applications
from anywhere, at any time.
5. Grid Computing: Provides a way to harness the power of multiple
computers for large-scale computations.
6. Embedded Systems: Enable the integration of software into devices and
products, making them smarter and more functional.
PROCESS SCHEDULING IN OPERATING SYSTEM
Process scheduling is an important part of multiprogramming operating systems.
It is the activity of removing the running task from the processor and selecting
another task for processing. It moves processes among states such as ready,
waiting, and running.
Categories of Scheduling in OS
There are two categories of scheduling:
1. Non-preemptive: In non-preemptive, the resource can’t be taken from a process
until the process completes execution.
2. Preemptive: In preemptive scheduling, the OS allocates the resources to a
process for a fixed amount of time.
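The preemptive case can be illustrated with a round-robin simulation in which the OS reclaims the CPU after a fixed time quantum (a minimal model with invented names; real schedulers also handle I/O, priorities, and staggered arrival times):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {pid: cpu_time_needed}. Returns {pid: completion_time}
    under preemptive round-robin scheduling with the given quantum."""
    remaining = dict(bursts)
    ready = deque(bursts)              # FIFO ready queue of pids
    clock = 0
    finished = {}
    while ready:
        pid = ready.popleft()
        run = min(quantum, remaining[pid])   # preempt after the quantum
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            finished[pid] = clock
        else:
            ready.append(pid)          # unfinished: back to the queue tail
    return finished
```

With bursts A=4 and B=2 and a quantum of 2, A runs first but is preempted, B then runs to completion, and A finishes last; under non-preemptive scheduling A would instead have held the CPU until it was done.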
Process Scheduling Queues
These process scheduling queues are:
1. Job queue: Holds all processes as they enter the system.
2. Ready queue: This stores a set of all processes in main memory, ready and
waiting for execution. The ready queue stores any new process.
3. Device queue: This queue consists of the processes blocked due to the
unavailability of an I/O device.
There are different policies that the OS uses to manage each queue and the OS
scheduler decides how to move processes between the ready and run queue
which allows only one entry per processor core on the system.
The stages a process goes through are:
A new process first goes in the Ready queue, where it waits for execution or
to be dispatched.
The CPU gets allocated to one of the processes for execution.
The process issues an I/O request, after which an OS places it in the I/O
queue.
The process then creates a new subprocess and waits for its termination.
If the process is removed forcefully, an interrupt occurs; once this
interrupt is handled, the process goes back to the ready queue.
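The queue movements above can be sketched as explicit transitions between a ready queue and a device queue (a toy model; the class and method names are invented for this illustration):

```python
from collections import deque

class MiniScheduler:
    def __init__(self):
        self.ready = deque()    # processes waiting for the CPU
        self.device = deque()   # processes blocked on an I/O device
        self.running = None

    def admit(self, pid):
        self.ready.append(pid)                # new process -> ready queue

    def dispatch(self):
        self.running = self.ready.popleft()   # CPU allocated to a process
        return self.running

    def request_io(self):
        self.device.append(self.running)      # running -> device queue
        self.running = None

    def io_complete(self):
        self.ready.append(self.device.popleft())  # I/O done -> ready queue
```

A process thus cycles ready → running → (device queue on an I/O request) → ready, exactly the movement the stages above describe.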
Objectives of Process Scheduling in OS
Following are the objectives of process scheduling:
1. It maximizes the number of interactive users within acceptable response times.
2. It achieves a balance between response and utilization.
3. It makes sure that no process is postponed for an unknown time, and it
enforces priorities.
4. It gives preference to the processes holding key resources.
Two-State Process Model
There are two states in the two-state process model, namely, running state and
non-running state.
1. Running: A new process enters the system in the running state after creation.
2. Not Running: The not running processes are stored in a queue until their turn
to get executed arrives and each entry in the queue points to a particular process.
The queue can be implemented using a linked list.
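As the text notes, the not-running queue can be implemented using a linked list; a minimal sketch (illustrative names, each entry pointing to a particular process):

```python
class _Node:
    def __init__(self, pid):
        self.pid = pid     # identifies the particular process
        self.next = None   # link to the next queue entry

class LinkedQueue:
    """FIFO queue of not-running processes built on a singly linked list."""

    def __init__(self):
        self.head = self.tail = None

    def enqueue(self, pid):
        node = _Node(pid)
        if self.tail:                  # append at the tail
            self.tail.next = node
        else:                          # queue was empty
            self.head = node
        self.tail = node

    def dequeue(self):
        node = self.head               # remove from the head (FIFO)
        self.head = node.next
        if self.head is None:
            self.tail = None
        return node.pid
```

Processes are dispatched from the head in the order they were enqueued, matching the "until their turn arrives" behaviour described above.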
Schedulers in OS
A scheduler is a special type of system software that handles process scheduling
in numerous ways. It mainly selects the jobs to be submitted into the
system and decides whether the currently running process should keep running;
if not, it decides which process should run next.
Types of Schedulers in Operating System
There are three types of schedulers: the long-term, short-term, and medium-term
schedulers.
Comparison of OS Schedulers

| S.No. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler |
| 1. | A job scheduler | A CPU scheduler | A process swapping scheduler |
| 2. | Slowest speed | Fastest speed | Speed is between the other two |
| 3. | Controls the degree of multiprogramming | Provides less control over the degree of multiprogramming | Reduces the degree of multiprogramming |
| 4. | Absent or minimal in the time-sharing OS | Minimal in the time-sharing OS | Part of the time-sharing OS |
| 5. | Selects processes from the pool and loads them into memory for execution | Selects a process that is ready for execution | Re-introduces processes into memory for continued execution |
Context Switch in OS
A context switch is an important feature of a multitasking OS: it stores and
restores the state, or context, of the CPU in a PCB, so that the execution of a
process can be resumed from that very point at a later time. The context saved
in the PCB includes:
Program counter
Scheduling information
Base and limit register values
Currently used registers
Process state
I/O state information
Accounting information
Cooperating Processes in Operating System
There may be several processes running in the system at the same time, which can
be either cooperating processes or independent processes.
An independent process cannot affect or be affected by other processes.
A cooperating process in an OS is a process that can affect or be affected by
any other process under execution.
What are Cooperating Processes in the Operating System?
Before learning about Cooperating processes in operating systems let's learn a bit
about Operating Systems and Processes.
There are two types of software first is the application software and the other is
the system software.
An application software performs tasks for the user while system
software operates and controls the computer system and works as an interface to
run the application software.
An operating system is system software that manages the resources of a computer
system, both hardware and software. It works as an interface between the
user and the hardware so that the user can interact with the hardware. It
provides a convenient environment in which a user can execute programs. An
operating system is a resource manager: it hides the internal complexity of the
hardware so that users can perform specific tasks without difficulty.
Some widely used operating systems are:
Single-process operating system
Batch-processing operating system
Multiprogramming operating system
Multitasking operating system
Multi-processing operating system
Distributed operating system
Real-Time operating system
There are two modes in which the processes can be executed. These two modes
are:
1. Serial mode
2. Parallel mode
In serial mode, processes are executed one after the other: the next process
cannot start until the previous process has terminated.
In parallel mode, on the contrary, several processes may be executing during
the same time quantum. Either way, processes fall into two types: cooperating
processes or independent processes.
Methods of Cooperating Process in OS
Cooperating processes in OS require a communication method that allows the
processes to exchange data and information.
There are two methods by which cooperating processes can communicate:
Cooperation by Sharing
Cooperation by Message Passing
Details about the methods are given below:
Cooperation by Sharing
Cooperating processes can communicate with each other using a shared resource,
which includes data, memory, variables, files, etc.
Consider two processes A and B communicating through a shared region of
memory. Process A writes information into the shared region, and Process B then
reads that information from the shared memory; that is how communication takes
place between cooperating processes by sharing.
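Cooperation by sharing can be put in runnable form. The following is an illustrative Python sketch, not from the text above: two threads stand in for processes A and B, a bytearray stands in for the shared memory region, and an event ensures B reads only after A has written (all names are invented for the example).

```python
import threading

# A bytearray stands in for a shared-memory region mapped into both
# "processes" (threads here); an Event signals that data is in place.
shared_region = bytearray(16)
data_ready = threading.Event()

def process_a():
    # Process A writes information into the shared region.
    message = b"hello"
    shared_region[:len(message)] = message
    data_ready.set()

def process_b(result):
    # Process B waits, then reads the information from the shared region.
    data_ready.wait()
    result.append(bytes(shared_region[:5]))

result = []
a = threading.Thread(target=process_a)
b = threading.Thread(target=process_b, args=(result,))
b.start(); a.start()
a.join(); b.join()
print(result[0])  # b'hello'
```

In a real system the sharing would be between separate address spaces (e.g. a shared-memory segment), and access would additionally need synchronization against concurrent writers.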
Cooperation by Message Passing
Cooperating processes can also communicate with each other with the help of
message passing: the producer process sends a message and the consumer process
receives it.
There is no shared memory; instead, the sending process first passes the
message to the kernel, and the kernel then delivers it to the receiving
process.
Consider again two processes A and B. Process A first sends the message to the
kernel; the kernel determines that the message is meant for Process B and
delivers it to B. That is how communication takes place between cooperating
processes by message passing.
Need of Cooperating Processes in OS
Consider, for example, one process that writes to a file while another process
reads from it; each process in the system can then be affected by the other.
The need for cooperating processes in OS can be divided into four types:
1. Information Sharing
2. Computation Speedup
3. Convenience
4. Modularity
Example of Cooperating Process in Operating System
Let's take the example of the producer-consumer problem, also known as the
bounded buffer problem, to understand cooperating processes in more detail:
Producer:
The process which generates the information that a consumer may consume is known
as the producer.
Consumer:
The process which consumes the information generated by the producer.
A producer produces a piece of information and stores it in a buffer (critical
section), and the consumer consumes that information.
Buffer memory can be of two types:
Unbounded buffer: It is a kind of buffer that has no practical limit on its
size. The producer can always produce new information, but the consumer
might have to wait for it.
Bounded buffer: It is a kind of buffer that has a fixed size. Here, the
consumer has to wait if the buffer is empty, while the producer has to wait if
the buffer is full. In the producer-consumer problem here, we use a bounded
buffer.
Producer and consumer processes execute simultaneously. The problem arises when
the consumer wants to consume information while the buffer is empty (there is
nothing to be consumed), or the producer produces a piece of information while
the buffer is full.
Producer Process:
while (true)
{
    /* produce an item in next_produced */
    while (counter == buffer_size)
        ;   /* do nothing: buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % buffer_size;
    counter++;
}
Consumer Process:
while (true)
{
    while (counter == 0)
        ;   /* do nothing: buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % buffer_size;
    counter--;
    /* consume the item in next_consumed */
}
Producer and consumer processes both share the following variables:
var n;                              /* the buffer size */
type item = .....;
var buffer : array [0..n-1] of item;
    in, out : 0..n-1;
    counter : 0..n;
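The pseudocode above busy-waits and, if producer and consumer increment and decrement `counter` concurrently, it races. The same bounded-buffer idea can be sketched in runnable form using a condition variable instead of busy waiting (an illustrative Python sketch: threads stand in for the producer and consumer processes, and all names are invented):

```python
import threading
from collections import deque

BUFFER_SIZE = 4
buffer = deque()                 # the bounded buffer
cond = threading.Condition()     # protects buffer and replaces busy waiting

produced_items = list(range(10))
consumed_items = []

def producer():
    for item in produced_items:
        with cond:
            while len(buffer) == BUFFER_SIZE:   # buffer full: wait
                cond.wait()
            buffer.append(item)
            cond.notify_all()

def consumer():
    for _ in produced_items:
        with cond:
            while len(buffer) == 0:             # buffer empty: wait
                cond.wait()
            consumed_items.append(buffer.popleft())
            cond.notify_all()

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed_items)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The condition variable plays the role of the two `while` busy-wait loops in the pseudocode, but without burning CPU time.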
Inter Process Communication in OS
A system can have two types of processes, independent or cooperating.
Cooperating processes affect each other and may share data and information
among themselves.
Inter-process communication is used for exchanging useful information between
numerous threads in one or more processes (or programs).
Role of Synchronization in Inter Process Communication
It is one of the essential parts of inter-process communication. Typically, it
is provided by the interprocess communication control mechanisms, but sometimes
it can also be controlled by the communicating processes themselves.
The following methods are used to provide synchronization:
1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock
Mutual Exclusion:-
It is generally required that only one process or thread can enter the critical
section at a time. This helps in synchronization and creates a stable state,
avoiding race conditions.
Semaphore:-
Semaphore is a type of variable that usually controls the access to the shared
resources by several processes. Semaphore is further divided into two types
which are as follows:
1. Binary Semaphore
2. Counting Semaphore
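The two semaphore kinds can be sketched with Python's `threading.Semaphore` (the worker function and counters below are invented for the example): a counting semaphore initialized to 2 admits at most two threads at a time, while a binary semaphore (initial value 1) behaves like a mutex.

```python
import threading

counting = threading.Semaphore(2)   # counting semaphore: at most 2 holders
binary = threading.Semaphore(1)     # binary semaphore: acts as a mutex

active = 0
max_active = 0

def worker():
    global active, max_active
    with counting:                  # at most 2 workers inside this region
        with binary:                # binary semaphore protects the counters
            active += 1
            max_active = max(max_active, active)
        # ... access the shared resource here ...
        with binary:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_active)  # never exceeds 2
```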
Barrier:-
A barrier does not allow an individual process to proceed until all
participating processes have reached it. Many parallel languages use barriers,
and collective routines impose them.
Spinlock:-
A spinlock is a type of lock, as its name implies. A process trying to acquire
a spinlock waits in a loop, repeatedly checking whether the lock is available.
This is known as busy waiting because, even though the process is active, it
does not perform any useful work.
Approaches to Interprocess Communication
We will now discuss some different approaches to inter-process communication
which are as follows:
1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO
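As a concrete illustration of the first approach, a pipe is a unidirectional, kernel-managed byte channel: bytes written at one end are read, in order, from the other end. A minimal sketch using Python's `os.pipe` (the writer thread and the message are invented for the example):

```python
import os
import threading

# os.pipe() returns a (read, write) pair of file descriptors backed by
# a kernel buffer.
read_fd, write_fd = os.pipe()

def writer():
    os.write(write_fd, b"message via pipe")
    os.close(write_fd)            # closing signals end-of-data to the reader

t = threading.Thread(target=writer)
t.start()

chunks = []
while True:
    data = os.read(read_fd, 1024)
    if not data:                  # empty read: writer closed its end
        break
    chunks.append(data)
os.close(read_fd)
t.join()

received = b"".join(chunks)
print(received)  # b'message via pipe'
```

Between real processes the pattern is the same, with the pipe created before a fork or handed to a child process.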
Reasons inter-process communication is needed:
There are numerous reasons to use inter-process communication for sharing data.
Some of the most important reasons are given below:
o It supports modularity
o Computational speedup
o Privilege separation
o Convenience
o It helps operating system processes to communicate with each other and
synchronize their actions.
Deadlock Prevention
Deadlock prevention algorithms ensure that at least one of the necessary
conditions (mutual exclusion, hold and wait, no preemption, and circular wait)
does not hold. However, most prevention algorithms have poor resource
utilization and hence result in reduced throughput.
Mutual Exclusion
It is not always possible to prevent deadlock by preventing mutual exclusion
(making all resources shareable), as certain resources cannot be shared safely.
Hold and Wait
To prevent hold and wait, a process can be required to request all of its
resources before it begins execution, or be allowed to request resources only
when it is holding none. Both approaches suffer from low resource utilization
and possible starvation.
No preemption
We will see two approaches here. In the first, if a process requests a resource
that is held by another waiting process, the resource may be preempted from
that waiting process. In the second approach, if a process requests resources
that are not readily available, all the resources it currently holds are
preempted.
The challenge here is that resources can be preempted only if the current state
of the process can be saved, so that the process can be restarted later from
the saved state.
Circular wait
To avoid circular wait, resources may be numbered, and we can ensure that each
process requests resources only in increasing order of these numbers. The
ordering itself may add complexity and may also lead to poor resource
utilization.
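The resource-ordering idea can be sketched as follows (an illustrative Python sketch: locks stand in for resources, the numbering and task names are invented, and a helper always acquires locks in increasing order of their assigned number, so a circular wait cannot form):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Assign a global order to the resources; every thread must acquire
# locks only in increasing order of this number.
lock_order = {id(lock_a): 1, id(lock_b): 2}

def acquire_in_order(*locks):
    for lock in sorted(locks, key=lambda l: lock_order[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

done = []

def task(name, first, second):
    # The two tasks name the locks in opposite orders, but
    # acquire_in_order sorts them, so no circular wait can arise.
    acquire_in_order(first, second)
    done.append(name)
    release_all(first, second)

t1 = threading.Thread(target=task, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=task, args=("t2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # ['t1', 't2']
```

Without the ordering helper, t1 taking lock_a while t2 takes lock_b could deadlock; with it, both always take lock_a first.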
Deadlock avoidance
A resource allocation graph can be used for deadlock avoidance: for example, if
R2 is allocated to P2 and P1 then requests R2, there will be a deadlock.
The resource allocation graph is not much use if there are multiple instances
of a resource. In such a case, we can use the Banker's algorithm. In this
algorithm, every process must state upfront the maximum number of resources of
each type it may need, subject to the maximum available instances of each type.
An allocation is made only if it leaves the system in a safe state; otherwise
the process must wait. The Banker's algorithm can be divided into two parts: a
safety algorithm, which checks whether the system is in a safe state, and a
resource-request algorithm, which decides whether a request can be granted
safely.
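The safety check at the heart of the Banker's algorithm can be sketched as follows. This is a generic implementation of the textbook safety algorithm; the matrices are the classic illustrative instance from the operating systems literature, not data from this text:

```python
def is_safe(available, max_claim, allocation):
    """Banker's safety algorithm: return a safe sequence of process
    indices if the state is safe, else None."""
    n = len(allocation)            # number of processes
    m = len(available)             # number of resource types
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[max_claim[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend process i runs to completion and releases
                # every resource instance it holds.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

# Classic textbook instance: five processes, three resource types.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_claim  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], max_claim, allocation))  # [1, 3, 4, 0, 2] -- a safe sequence
```

If no safe sequence exists (for example with nothing available), the function returns None and the request that led to this state would be refused.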
Deadlock Detection
If deadlock prevention and avoidance are not done, a deadlock may occur, and
the only thing left to do is to detect it and recover from it.
If every resource type has only a single instance, we can use a graph called
the wait-for graph, a variant of the resource allocation graph. Here, vertices
represent processes, and a directed edge from P1 to P2 indicates that P1 is
waiting for a resource held by P2. As in the resource allocation graph, a cycle
in a wait-for graph indicates a deadlock. So the system can maintain a wait-for
graph and check it for cycles periodically to detect any deadlocks.
The wait-for graph is not much use if there are multiple instances of a
resource, as a cycle may not imply a deadlock. In such a case, we can use an
algorithm similar to the Banker's algorithm to detect deadlock, checking
whether further allocations can be made based on the current allocations. You
can refer to any operating system textbook for details of these algorithms.
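Cycle detection on a wait-for graph can be sketched with a depth-first search (an illustrative Python sketch; representing the graph as a dictionary from each process to the processes it waits for is an assumption made for the example):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as
    {process: [processes it is waiting for]}; a cycle means deadlock."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GREY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GREY:        # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

# P1 waits for P2, P2 waits for P3, P3 waits for P1: a cycle, so deadlock.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
# Same graph without the closing edge: no cycle, no deadlock.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```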
Deadlock Recovery
Once a deadlock is detected, it must be broken. This can be done in different
ways, including aborting one or more processes to break the circular wait
condition causing the deadlock, and preempting resources from one or more of
the deadlocked processes.
UNIT 2
A Distributed System is a collection of autonomous computer systems that are
physically separated but connected by a computer network and equipped with
distributed system software. The following are some of the major design issues
of distributed systems:
Design issues of the distributed system –
1. Heterogeneity: Heterogeneity is applied to the network, computer
hardware, operating system, and implementation of different developers.
2. Openness: The openness of the distributed system is determined
primarily by the degree to which new resource-sharing services can be made
available to the users.
3. Scalability: The scalability of the system should remain efficient even with
a significant increase in the number of users and resources connected.
4. Security: The security of an information system has three components:
confidentiality, integrity, and availability. Encryption protects shared
resources and keeps sensitive information secret when transmitted.
5. Failure Handling: When faults occur in hardware or software, programs may
produce incorrect results or may stop before they have completed the intended
computation, so corrective measures should be implemented to handle such cases.
6. Concurrency: There is a possibility that several clients will attempt to
access a shared resource at the same time. Multiple users make requests on
the same resources, i.e. read, write, and update.
7. Transparency: Transparency ensures that the distributed system is perceived
as a single entity by the users or the application programmers, rather than as
a collection of cooperating autonomous systems.
COMMUNICATION PRIMITIVES IN DISTRIBUTED SYSTEMS
1. Communication System: Layered Implementation
2. Network Protocol
3. Request and Reply Primitives
4. RMI and RPC
5. RMI and RPC Semantics and Failures
6. Indirect Communication
7. Group Communication
8. Publish-Subscribe Systems
1. Communication Models and their Layered Implementation
In this chapter:
▪ Communication between distributed objects by means of two models:
▪ Remote Method Invocation (RMI)
▪ Remote Procedure Call (RPC)
▪ RMI, as well as RPC, is implemented on top of request and reply primitives.
▪ Request and reply are implemented on top of the network protocol (e.g.
TCP or UDP in case of the Internet).
2. NETWORK PROTOCOL
▪ Middleware and distributed applications are implemented on top of a network
protocol. Such a protocol is implemented as several layers.
▪ In case of the Internet:
A) TCP (Transmission Control Protocol) and
B) UDP (User Datagram Protocol) are both transport protocols implemented
on top of the Internet Protocol (IP).
A)TCP
TCP is a reliable protocol.
▪ TCP guarantees the delivery to the receiving process of all data delivered
by the sending process, in the same order.
▪ TCP implements mechanisms on top of IP to meet reliability guarantees.
▪ Sequencing: A sequence number is attached to each transmitted segment
(packet). At the receiver side, packets are delivered in order of this number.
▪ Flow control: The sender takes care not to overwhelm the receiver. This is
based on periodic acknowledgements received by the sender from the
receiver.
▪ Retransmission and duplicate handling: If a segment is not acknowledged
within a timeout, it is retransmitted. Using sequence numbers, the receiver
detects and rejects duplicates.
▪ Buffering: Buffering balances the flow. If the receiving buffer is full,
incoming segments are dropped. They will be retransmitted by the sender.
▪ Checksum: Each segment carries a checksum. If the received segment does
not match the checksum, it is dropped (and will be retransmitted).
B) UDP: UDP is a protocol that does not guarantee reliable transmission.
▪ UDP offers no guarantee of delivery.
▪ At the IP level, packets may be dropped because of congestion or network
error. UDP adds no reliability mechanism to this.
▪ UDP provides a means of transmitting messages with minimal additional cost
or transmission delay above those due to IP transmission.
▪ Its use is restricted to applications and services that do not require
reliable delivery of messages.
▪ If reliable delivery is required with UDP, reliability mechanisms have to
be implemented on top of the network protocol (in the middleware).
3. REQUEST AND REPLY PRIMITIVES
Communication between processes and objects in a distributed system is
performed by message passing.
▪ In a typical scenario (e.g. the client-server model), such communication is
through request and reply messages.
REQUEST-REPLY PRIMITIVES IN THE CLIENT-SERVER MODEL
The system is structured as a group of processes (objects), called servers,
that deliver services to clients.
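A minimal request-reply exchange can be sketched over UDP on the loopback interface (an illustrative Python sketch; the message contents and the use of a thread for the server are assumptions made to keep the example self-contained):

```python
import socket
import threading

# Server side: bind a datagram socket; port 0 lets the OS pick a free port.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_sock.bind(("127.0.0.1", 0))
server_addr = server_sock.getsockname()

def server():
    request, client_addr = server_sock.recvfrom(1024)  # request arrives
    reply = b"reply:" + request                        # execute, build reply
    server_sock.sendto(reply, client_addr)             # send the reply back
    server_sock.close()

t = threading.Thread(target=server)
t.start()

# Client side: send the request, then block waiting for the reply.
client_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client_sock.settimeout(5)              # guard against a lost reply
client_sock.sendto(b"getRequest", server_addr)
reply, _ = client_sock.recvfrom(1024)
client_sock.close()
t.join()
print(reply)  # b'reply:getRequest'
```

Because UDP is unreliable, a real request-reply protocol built on it would add the timeout/retransmission and duplicate-filtering machinery described in the failure-handling section below.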
4. Remote Method Invocation (RMI) and Remote Procedure Call (RPC)
The goal: to make distributed computing look, for the programmer, like
centralized computing.
The solution:
Asking for a service is solved by the client issuing a method invocation or
procedure call; this is a remote invocation (call).
▪ RMI (RPC) is transparent: the calling object (procedure) is not aware that
the called one is executing on a different machine, and vice versa.
Remote Method Invocation
Implementation of RMI
▪ Object A and Object B belong to the application
▪ Remote reference module and communication module belong to the RMI
middleware.
▪ The stub (client-side proxy object for B) and the skeleton (server-side
proxy object for remote callers to B) represent the so-called RMI software.
▪ Glue code, specific to the function/method being called
▪ Situated at the border between middleware and application
▪ Generated automatically with the help of tools that are delivered
together with the middleware software.
▪ Transparent to the application core code: neither the call in A nor
service B's code needs to be modified to enable RMI.
The life of an RMI communication
1. The calling code in the client object calls the method in the stub
(client-side proxy) corresponding to the invoked method in B.
2. The method in the stub packs the arguments into a message (marshalling)
and forwards it to the communication module.
3. Based on the remote reference obtained from the remote reference
module, the communication module initiates the request/reply protocol
over the network.
4. The communication module on the server’s machine receives the request.
Based on the local reference received from the remote reference module, it
calls the corresponding method in the skeleton for B.
5. The skeleton method extracts the arguments from the received message
(unmarshalling) and calls the corresponding method in the server object B.
6. After receiving the results from B, the method in the skeleton packs them
into the message to be sent back (marshalling) and forwards this message to
the communication module.
7. The communication module sends the reply, through the network, to the
client’s machine.
8. The communication module receives the reply and forwards it to the
corresponding method in the stub.
9. The stub method extracts the results from the received message
(unmarshalling) and forwards them to the client.
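The marshalling steps above can be sketched in miniature (an illustrative Python sketch: a direct function call stands in for the communication modules and the network, and the `Calculator` class and its method names are invented for the example):

```python
import json

# The remote object (B) on the server side.
class Calculator:
    def add(self, x, y):
        return x + y

# Skeleton: unmarshals the request and invokes the real method on B.
class Skeleton:
    def __init__(self, target):
        self.target = target
    def handle(self, message):
        request = json.loads(message)                     # unmarshalling
        result = getattr(self.target, request["method"])(*request["args"])
        return json.dumps({"result": result})             # marshal the reply

# Stub: client-side proxy. It marshals the call and "sends" it; here a
# plain function call replaces the request/reply protocol and the network.
class Stub:
    def __init__(self, skeleton):
        self.skeleton = skeleton
    def add(self, x, y):
        message = json.dumps({"method": "add", "args": [x, y]})  # marshalling
        reply = self.skeleton.handle(message)
        return json.loads(reply)["result"]                # unmarshal result

proxy = Stub(Skeleton(Calculator()))
print(proxy.add(2, 3))  # 5 -- looks like a local call to the client
```

In a real RMI middleware the stub and skeleton are generated automatically and the JSON bytes would travel through the communication modules over the network.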
Remote Procedure Call
RPC is the procedural counterpart of RMI: the client calls a procedure that
executes on a remote server machine, and client and server stubs perform
marshalling and unmarshalling in the same way as the stub and skeleton in RMI.
5. RMI Semantics and Failures
If everything works OK, RMI behaves exactly like a local invocation. What if
certain failures occur? Classes of failures that have to be handled by an RMI
protocol:
1. Lost request message
2. Lost reply message
3. Server crash
4. Client crash
We consider an omission failure model.
This means:
▪ Messages are either lost or received correctly.
▪ Client or server processes either crash or execute correctly. After a crash,
the server can possibly restart with or without loss of memory
Lost Request Messages
▪ The communication module starts a timer when sending the request
▪ If the timer expires before a reply or acknowledgment comes back, the
communication module sends the request message again.
Lost Reply Message
The client cannot distinguish the loss of a request from that of a reply; it
simply resends the request because no answer has been received.
▪ If the reply really got lost → when the duplicate request arrives at the
server, it already has executed the operation once!
▪ In order to resend the reply, the server may need to re-execute the
operation in order to get the result.
Conclusion with Lost Messages
▪ Exactly-once semantics can be implemented in the case of lost (request or
reply) messages if both duplicate filtering and history are provided and the
message is resent until an answer arrives:
▪ Eventually a reply arrives at the client and the call has been executed
correctly - exactly one time.
However, the situation is different if we assume that the server can crash…
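The duplicate filtering and history mentioned above can be sketched on the server side (an illustrative Python sketch; the request ids and the doubling operation are invented for the example):

```python
# Server-side mechanisms for exactly-once semantics under lost messages:
# a history of replies keyed by request id, used to filter duplicates so
# the operation itself is never re-executed.
executions = 0
history = {}          # request id -> saved reply

def execute(request_id, x):
    global executions
    if request_id in history:          # duplicate: resend the saved reply
        return history[request_id]
    executions += 1                    # the operation runs exactly once
    reply = x * 2
    history[request_id] = reply
    return reply

first = execute("req-1", 21)
resent = execute("req-1", 21)          # client resent after losing the reply
print(first, resent, executions)  # 42 42 1
```

The resent request gets the same reply from the history instead of triggering a second execution, which is exactly what the conclusion above requires.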
CLIENT CRASH
The client sends a request to a server and crashes before the server replies.
▪ The computation which is active in the server becomes an orphan - a
computation nobody is waiting for. Problems:
▪ waste of server CPU time
▪ locked resources (files, peripherals, etc.)
▪ if the client reboots and repeats the RMI, confusion can be created.
The solution is based on identifying and killing the orphans.
6. Direct vs. Indirect Communication
▪ The communication primitives studied so far are based on direct coupling
between sender and receiver: the sender has a reference/pointer to the
receiver and specifies it as an argument of the communication primitive.
▪ The sender writes something like:
…
send (request) to server_reference;
… → Very Rigid!
We look at two examples: 1. Group communication 2. Publish-subscribe
systems
7. Group Communication
The assumption with client-server communication and RMI (RPC) is that
two parties are involved: the client and the server.
▪ Sometimes communication involves multiple processes, not only two.
▪ A (simple) solution is to perform separate message passing operations or
RMIs to each receiver.
▪ With group communication, a message can be sent to a group and then it
is delivered to all members of the group → multiple receivers in one
operation.
8. Publish-Subscribe Systems
The general objective of publish-subscribe systems is to let information
propagate from publishers to interested subscribers, in an anonymous,
decoupled fashion.
▪ Publishers publish events.
▪ Subscribers subscribe to and receive the events they are interested in.
Subscribers are not directly targeted by publishers but indirectly via the
notification service.
▪ Subscribers express their interest by issuing subscriptions for specific
notifications, independently of the publishers that produce them;
▪ they are asynchronously notified of all notifications, submitted by any
publisher, that match their subscription.
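A minimal in-process notification service can be sketched as follows (an illustrative Python sketch; the topic names and callback-based delivery are assumptions made for the example):

```python
from collections import defaultdict

class Broker:
    """Minimal notification service: publishers and subscribers never
    reference each other directly, only the broker."""
    def __init__(self):
        self.subscriptions = defaultdict(list)   # topic -> callbacks
    def subscribe(self, topic, callback):
        self.subscriptions[topic].append(callback)
    def publish(self, topic, event):
        # Deliver the event to every matching subscription.
        for callback in self.subscriptions[topic]:
            callback(event)

broker = Broker()
received = []
broker.subscribe("sports", received.append)
broker.publish("sports", "match started")   # matches: delivered
broker.publish("news", "budget passed")     # no matching subscription
print(received)  # ['match started']
```

In a distributed publish-subscribe system, the broker would itself be a networked (often replicated) service and delivery would be asynchronous, but the decoupling shown here is the same.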
LAMPORT LOGICAL CLOCK
Lamport’s Logical Clock was created by Leslie Lamport. It is a procedure to
determine the order of events occurring. It provides a basis for the more
advanced Vector Clock Algorithm. Due to the absence of a Global Clock in
a Distributed Operating System Lamport Logical Clock is needed.
Algorithm:
Happened-before relation (->): a -> b means 'a' happened before 'b'.
Logical Clock: The criteria for the logical clocks are:
[C1]: Ci(a) < Ci(b), [ Ci -> logical clock; if 'a' happened before 'b'
within the same process, the time of 'a' will be less than that of 'b'. ]
[C2]: Ci(a) < Cj(b), [ if 'a' is the sending of a message by process Pi and
'b' is its receipt by process Pj, then Ci(a) < Cj(b). ]
Reference:
Process: Pi
Event: Eij, where i is the process number and j is the jth event in
the ith process.
tm: the timestamp carried by message m.
Ci: the logical clock associated with process Pi.
d: drift time; generally d is 1.
Implementation Rules [IR]:
[IR1]: If a -> b ['a' happened before 'b' within the same process],
then Ci(b) = Ci(a) + d
[IR2]: On receiving a message m with timestamp tm,
Cj = max(Cj + d, tm + d), i.e. the receiving process takes the maximum of
its own incremented clock value and tm + d.
For Example:
Take the starting value as 1, since it is the 1st event and there is no
incoming value at the starting point:
e11 = 1
e21 = 1
The value of the next point will go on increasing by d (d = 1), if there is no
incoming value i.e., to follow [IR1].
e12 = e11 + d = 1 + 1 = 2
e13 = e12 + d = 2 + 1 = 3
e14 = e13 + d = 3 + 1 = 4
e15 = e14 + d = 4 + 1 = 5
e16 = e15 + d = 5 + 1 = 6
e22 = e21 + d = 1 + 1 = 2
e24 = e23 + d = 3 + 1 = 4
e26 = e25 + d = 6 + 1 = 7
When there will be incoming value, then follow [IR2] i.e., take the
maximum value between Cj and Tm + d.
e17 = max(7, 5) = 7, [e16 + d = 6 + 1 = 7, e24 + d = 4 + 1 = 5,
maximum among 7 and 5 is 7]
e23 = max(3, 3) = 3, [e22 + d = 2 + 1 = 3, e12 + d = 2 + 1 = 3,
maximum among 3 and 3 is 3]
e25 = max(5, 6) = 6, [e24 + 1 = 4 + 1 = 5, e15 + d = 5 + 1 = 6,
maximum among 5 and 6 is 6]
Limitation:
In case of [IR1], if a -> b, then C(a) < C(b) -> true.
In case of [IR2], if a -> b, then C(a) < C(b) may or may not be true.
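The implementation rules can be sketched in code (an illustrative Python sketch with d = 1; the replayed events follow the worked example above, simplifying its message pattern so that P2's first four events are local before the send):

```python
class LamportClock:
    """Logical clock following [IR1] and [IR2], with drift d = 1."""
    def __init__(self, d=1):
        self.time = 0
        self.d = d
    def local_event(self):
        # [IR1]: the clock advances by d for each event in the process.
        self.time += self.d
        return self.time
    def send(self):
        # Sending is itself an event; its clock value is attached to
        # the message as the timestamp tm.
        return self.local_event()
    def receive(self, tm):
        # [IR2]: C = max(C + d, tm + d), written here as max(C, tm) + d.
        self.time = max(self.time, tm) + self.d
        return self.time

p1, p2 = LamportClock(), LamportClock()
for _ in range(6):
    p1.local_event()        # e11..e16 -> e16 = 6
for _ in range(3):
    p2.local_event()        # e21..e23 -> 3
tm = p2.send()              # e24 = 4; the message carries tm = 4
e17 = p1.receive(tm)        # e17 = max(6, 4) + 1 = 7
print(e17)  # 7
```

This reproduces e17 = 7 from the worked example: the incoming timestamp 4 loses to the receiver's own clock 6, and the event still advances the clock by d.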
LAMPORT'S ALGORITHM FOR MUTUAL EXCLUSION IN DISTRIBUTED SYSTEMS
Lamport's Distributed Mutual Exclusion Algorithm is a permission-based
algorithm proposed by Lamport as an illustration of his synchronization scheme
for distributed systems. In permission-based algorithms, timestamps are used to
order critical section requests and to resolve any conflict between requests.
In Lamport's algorithm, critical section requests are executed in increasing
order of timestamps, i.e. a request with a smaller timestamp is given
permission to execute the critical section before a request with a larger
timestamp. In this algorithm:
Three types of messages (REQUEST, REPLY and RELEASE) are used, and
communication channels are assumed to follow FIFO order.
A site sends a REQUEST message to all other sites to get their permission
to enter the critical section.
A site sends a REPLY message to the requesting site to give its permission
to enter the critical section.
A site sends a RELEASE message to all other sites upon exiting the critical
section.
Every site Si keeps a queue to store critical section requests ordered by
their timestamps; request_queue_i denotes the queue of site Si.
A timestamp is given to each critical section request using Lamport's
logical clock.
Timestamps are used to determine the priority of critical section requests:
a smaller timestamp has higher priority than a larger one. Critical section
requests are always executed in the order of their timestamps.
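The total order on requests can be illustrated directly: requests are compared as (timestamp, site id) pairs, so the site id breaks ties between requests carrying the same Lamport timestamp (the values below are invented):

```python
# Requests are totally ordered as (timestamp, site id) pairs; the site
# id breaks ties between requests with the same Lamport timestamp.
requests = [(3, 2), (1, 3), (3, 1), (2, 2)]   # (timestamp, site id)
request_queue = sorted(requests)
print(request_queue)  # [(1, 3), (2, 2), (3, 1), (3, 2)]
```

Every site sorts its request queue this way, so all sites agree on which request is at the top.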
Algorithm:
To enter the critical section:
When a site Si wants to enter the critical section, it sends a request
message REQUEST(tsi, i) to all other sites and places the request on
request_queue_i. Here, tsi denotes the timestamp of site Si.
When a site Sj receives the request message REQUEST(tsi, i) from site Si,
it returns a timestamped REPLY message to site Si and places the request
of site Si on request_queue_j.
To execute the critical section:
A site Si can enter the critical section if it has received a message with
timestamp larger than (tsi, i) from all other sites and its own request is
at the top of request_queue_i.
To release the critical section:
When a site Si exits the critical section, it removes its own request from
the top of its request queue and sends a timestamped RELEASE message to
all other sites.
When a site Sj receives the timestamped RELEASE message from site Si, it
removes the request of Si from its request queue.
Message Complexity: Lamport's Algorithm requires the invocation of 3(N – 1)
messages per critical section execution. These 3(N – 1) messages involve:
(N – 1) request messages
(N – 1) reply messages
(N – 1) release messages
Drawbacks of Lamport's Algorithm:
Unreliable approach: the failure of any one of the processes will halt the
progress of the entire system.
High message complexity: the algorithm requires 3(N – 1) messages per
critical section invocation.
Performance:
Synchronization delay is equal to maximum message transmission time
It requires 3(N – 1) messages per CS execution.
Algorithm can be optimized to 2(N – 1) messages by omitting
the REPLY message in some situations.
Advantages of Lamport’s Algorithm for Mutual Exclusion in Distributed
System:
1. Simplicity: Lamport’s algorithm is relatively easy to understand and
implement compared to other algorithms for mutual exclusion in distributed
systems.
2. Fairness: The algorithm guarantees fairness by providing a total order of
events that is used to determine the next process that can enter the critical
section.
3. Scalability: Lamport’s algorithm works for any number of processes,
although the number of messages per critical section entry grows linearly
with the number of sites in the system.
4. Compatibility: The algorithm is compatible with a wide range of
distributed systems and can be adapted to different network topologies and
communication protocols.
Deadlock Handling Strategies in Distributed System
The following are the strategies used for Deadlock Handling in Distributed
System:
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection and Recovery
Refer to class notes.
DESIGN ISSUES IN THE DISTRIBUTED FILE SYSTEM
Design Issues of Distributed File System
1. Naming and Name Resolution:
Name refers to an object such as a file or a directory. Name resolution refers
to the process of mapping a name to an object, that is, to physical storage. A
name space is a collection of names. Names can be assigned to files in a
distributed file system in three ways:
a) Concatenate the host name to the names of files that are stored on that host.
Advantages:
The file name is unique systemwide.
Name resolution is simple, as the file can be located easily.
b) Mount remote directories onto local directories. Mounting a remote directory
requires the host of the directory to be known only once. Once a remote
directory is mounted, its files can be referred to in a location-transparent
way. This approach resolves file names without consulting any host.
c) Maintain a single global directory where all the files in the system belong
to a single namespace. The main limitation of this scheme is that it is limited
to one computing facility or to a few cooperating computing facilities, so it
is not generally used.
2. Caches on Disk or Main Memory:
Caching refers to the storage of data, either in main memory or on disk, after
its first reference by the client machine.
Advantages of having the cache in main memory:
Diskless workstations can also take advantage of caching.
Accessing a cache in main memory is much faster than accessing a cache on a
local disk.
Since the server cache is in main memory at the server, a single caching
mechanism design can be used for both clients and servers.
Advantages of having the cache on a local disk:
Large files can be cached without affecting performance.
Virtual memory management is simple.
Portable workstations can be incorporated in a distributed system.
3. Writing Policy:
This policy decides when a modified cache block at a client should be
transferred to the server. The following policies are used:
a) Write-Through Policy: All writes made by client applications are also
carried out at the servers immediately. The main advantage is reliability:
when a client crashes, little information is lost. However, this scheme cannot
take advantage of caching for writes.
b) Delayed Writing Policy: Writing at the server is delayed, that is,
modifications due to writes are reflected at the server after some delay. This
scheme can take advantage of caching, but its main limitation is lower
reliability: when a client crashes, a large amount of information can be lost.
c) Write-on-Close Policy: This scheme delays updating the file at the server
until the file is closed at the client. When the average period for which files
are open is short, this policy is equivalent to write-through; when the period
is long, it is equivalent to the delayed writing policy.
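The write-through and delayed writing policies can be contrasted in a small sketch (an illustrative Python sketch; a dictionary stands in for the server's storage, and the class and block names are invented):

```python
class Cache:
    """Client cache sketch contrasting write-through and delayed
    writing; `server` stands in for the file server's storage."""
    def __init__(self, server, write_through):
        self.server = server
        self.write_through = write_through
        self.blocks = {}
        self.dirty = set()
    def write(self, block, data):
        self.blocks[block] = data
        if self.write_through:
            self.server[block] = data    # carried out at the server at once
        else:
            self.dirty.add(block)        # reflected at the server later
    def flush(self):
        # Delayed policy: push dirty blocks on delay expiry (or close).
        for block in self.dirty:
            self.server[block] = self.blocks[block]
        self.dirty.clear()

server1, server2 = {}, {}
wt = Cache(server1, write_through=True)
dw = Cache(server2, write_through=False)
wt.write("b1", "data")
dw.write("b1", "data")
before_flush = (server1.get("b1"), server2.get("b1"))
print(before_flush)  # ('data', None) -- delayed write not yet at server
dw.flush()
print(server2.get("b1"))  # data
```

The window between `write` and `flush` is exactly where the delayed policy's reliability risk lies: a client crash in that window loses the update.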
4. Cache Consistency:
When multiple clients want to modify or access the same data, the cache
consistency problem arises. Two schemes are used to guarantee that the data
returned to the client is valid.
a) Server-initiated approach: The server informs the cache managers whenever
the data in client caches becomes stale. The cache managers at the clients then
retrieve the new data or invalidate the blocks containing the old data in their
caches. The server has to maintain reliable records of which data blocks are
cached by which cache managers, so cooperation between the servers and the
cache managers is required.
b) Client-initiated approach: It is the responsibility of the cache manager at each
client to validate data with the server before returning it to the client. This
approach loses much of the benefit of caching, as the cache manager consults the
server to validate each cached block every time.
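The client-initiated approach can be sketched as follows. Version numbers stand in for whatever validity metadata a real DFS keeps; the class names are illustrative, not any real system's API:

```python
# Sketch of the client-initiated approach: before returning cached data,
# the cache manager asks the server whether its copy is still current.

class FileServer:
    def __init__(self):
        self.data = {}            # name -> (version, contents)

    def write(self, name, contents):
        version = self.data.get(name, (0, None))[0] + 1
        self.data[name] = (version, contents)

    def is_valid(self, name, version):
        return self.data.get(name, (0, None))[0] == version

class CacheManager:
    def __init__(self, server):
        self.server = server
        self.cache = {}           # name -> (version, contents)

    def read(self, name):
        if name in self.cache:
            version, contents = self.cache[name]
            if self.server.is_valid(name, version):   # validation round-trip
                return contents                        # cached copy is still good
        # Cache miss or stale copy: fetch fresh data from the server.
        version, contents = self.server.data[name]
        self.cache[name] = (version, contents)
        return contents
```

Note that even a cache hit costs an `is_valid` round-trip, which is exactly the drawback described above.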
5. Availability:
It is one of the important issues in the design of a distributed file system. Failure
of servers or of the communication network can affect the availability of files.
Replication: The primary mechanism used for enhancing availability of files is
replication. In this mechanism, many copies, or replicas, of a file are maintained at
different servers.
Unit of Replication: The main design issue in replication is the unit of replication.
The following units can be used for replication of data:
a) The most basic unit is the file: This unit allows the replication of only those
files that need higher availability, but it results in expensive replica
management. Protection rights associated with a directory have to be stored
individually with each replica. Replicas of a common directory may not reside on
common file servers, so extra name resolution is required to locate each replica
when a file or directory is modified.
b) A group of files, called a volume, can be used: A volume may represent the files
of a single user or the files on a server. This scheme is used in the Coda file
system. In this scheme, replica management is simpler: protection rights can be
associated with the volume instead of with each individual file replica. However,
volume replication is wasteful when a user needs higher availability for only a
few files in the volume.
c) A combination of volume and single-file replication can be used: All the files of
a user form a file group called the primary pack. A replica of the primary pack,
called a pack, is allowed to contain a subset of the files in the primary pack.
Corresponding to a primary pack, one or more packs can be created according to
requirements.
6. Scalability:
The design of a DFS should be such that new systems can be introduced easily
without affecting it. Generally, a client-server organisation is used to define the
DFS structure, with caching used to improve performance and server-initiated
cache invalidation used to maintain cache consistency. In this approach, the
server maintains record-based information about all the clients sharing a file
stored on it. This information represents the server state. As the system grows,
both the size of the server state and the load due to invalidations increase at the
server. The following schemes can be used to reduce server state and server load:
a) Exploit knowledge about the usage of files: it is found that the most commonly
used and shared files are accessed in read-only mode, so there is no need to check
the validity of these files or maintain a list of clients at the servers for validation
purposes.
b) Often the data required by a client can be found in another client's cache, so a
client can obtain the required data from another client rather than from the
server.
The structure of the server process also plays an important role. If the server is
designed as a single process, many clients have to wait for a long time whenever
a disk input/output is initiated. This can be avoided if a separate process is
assigned to each client.
7. Semantics:
The semantics of a file system describe the effects of accesses on files. The basic
semantics are that a read operation returns the data stored by the latest write
operation. These semantics can be guaranteed in two ways: either all reads and
writes from the various clients go through the server, or sharing is disallowed,
either by the server or through the use of locks by the application. In the first
way the server becomes a bottleneck, and in the second way the file is not
available to certain clients.
Sun’s Network File System (NFS)
The advent of distributed computing was marked by the introduction of
distributed file systems. Such systems involved multiple client machines and one
or a few servers. The server stores data on its disks and the clients may request
data through some protocol messages. Advantages of a distributed file system:
Allows easy sharing of data among clients.
Provides centralized administration.
Provides security, i.e. one must only secure the servers to secure data.
Distributed File System Architecture:
In NFS, the server identifies each file by a file handle, which consists of the
following components:
Volume Identifier – An NFS server may have multiple file systems or
partitions. The volume identifier tells the server which file system is being
referred to.
Inode Number – This number identifies the file within the partition.
Generation Number – This number is used while reusing an inode number.
File Attributes: "File attributes" is a term commonly used in NFS terminology.
It is a collective term for the tracked metadata of a file, including file creation
time, last modification time, size, ownership, permissions, etc. These can be
accessed by calling stat() on the file. NFSv2 Protocol: Some of the common
protocol messages are listed below.
Message – Description
NFSPROC_GETATTR – Given a file handle, returns the file attributes.
NFSPROC_SETATTR – Sets/updates the file attributes.
NFSPROC_LOOKUP – Given a directory file handle and the name of a file to look
up, returns the file handle.
NFSPROC_READ – Given a file handle, offset and count, reads the data (the file
attributes are returned as well).
NFSPROC_WRITE – Given a file handle, offset, count and data, writes the data
into the file.
NFSPROC_CREATE – Given the directory handle, name of the file and attributes,
creates a file.
NFSPROC_REMOVE – Given the directory handle and the name of the file,
deletes the file.
NFSPROC_MKDIR – Given the directory handle, name of the directory and
attributes, creates a new directory.
The LOOKUP protocol message is used to obtain the file handle needed for
further data accesses. The NFS mount protocol helps obtain the directory handle
for the root (/) directory of the file system. If a client application opens a file
/abc.txt, the client-side file system sends a LOOKUP request to the server
through the root (/) file handle, looking for a file named abc.txt. If the lookup is
successful, the file handle (along with the file attributes) is returned.
Client-Side Caching: To improve the performance of NFS, distributed file systems
cache both the data and the metadata read from the server on the clients. This is
known as client-side caching, and it reduces the time taken for subsequent client
accesses. The cache is also used as a temporary buffer for writing, which
improves efficiency further since writes can be batched and sent to the server
together.
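The component-by-component resolution described above can be sketched as follows. Handles here are plain strings and the directory tree is hypothetical; a real NFS handle packs the volume identifier, inode number and generation number:

```python
# Sketch of path resolution via repeated LOOKUPs: starting from the root
# handle (obtained via the mount protocol), each path component is looked
# up in the directory named by the previous handle.

def lookup(server_dirs, dir_handle, name):
    """Toy NFSPROC_LOOKUP: return the handle for `name` inside `dir_handle`."""
    entry = server_dirs[dir_handle].get(name)
    if entry is None:
        raise FileNotFoundError(name)
    return entry

def resolve(server_dirs, root_handle, path):
    handle = root_handle
    for component in path.strip("/").split("/"):
        handle = lookup(server_dirs, handle, component)   # one round-trip each
    return handle

# Hypothetical server-side directory tree: handle -> {name: handle}
dirs = {
    "h_root": {"abc": "h_abc"},
    "h_abc": {"def.txt": "h_def"},
}
```

Resolving "/abc/def.txt" thus costs one LOOKUP per component, which is one reason client-side caching of lookup results pays off.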
UNIT 3
Real Time Operating System (RTOS)
Real-time operating systems (RTOS) are used in environments where a large
number of events, mostly external to the computer system, must be accepted
and processed in a short time or within certain deadlines. Such applications
include industrial control, telephone switching equipment, flight control, and
real-time simulations. With an RTOS, processing time is measured in tenths of
seconds. The system is time-bound and has fixed deadlines: the processing in
this type of system must occur within the specified constraints, otherwise it
will lead to system failure.
Examples of real-time operating systems are airline traffic control systems,
Command Control Systems, airline reservation systems, Heart pacemakers,
Network Multimedia Systems, robots, etc.
Real-time operating systems can be of the following types –
1. Hard Real-Time Operating System: These operating systems guarantee
that critical tasks are completed within a range of time.
For example, a robot hired to weld a car body must weld neither too early
nor too late; otherwise the car cannot be sold. It is therefore a hard real-time
system that requires the welding to be completed exactly on time. Other
examples: scientific experiments, medical imaging systems, industrial control
systems, weapon systems, robots, air traffic control systems, etc.
2. Soft real-time operating system: This operating system provides some
relaxation in the time limit.
For example – multimedia systems, digital audio systems, etc. Explicit,
programmer-defined, and controlled processes are encountered in real-time
systems. A separate process is charged with handling a single external event.
The process is activated upon the occurrence of the related event, signaled by
an interrupt.
Multitasking operation is accomplished by scheduling processes for
execution independently of each other. Each process is assigned a certain
level of priority that corresponds to the relative importance of the event that
it services. The processor is allocated to the highest-priority process. This
type of schedule, called priority-based preemptive scheduling, is used by
real-time systems.
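Priority-based preemptive scheduling can be sketched as follows. The class and process names are illustrative, not part of any real RTOS API:

```python
# Sketch of priority-based preemptive scheduling: the ready process with
# the highest priority always holds the CPU, and a newly arrived
# higher-priority process preempts the running one.

import heapq

class Scheduler:
    def __init__(self):
        self.ready = []           # min-heap; priorities negated so highest wins
        self.running = None       # (priority, name) of the current process

    def arrive(self, priority, name):
        if self.running is not None and priority > self.running[0]:
            # Preempt: the running process goes back to the ready queue.
            heapq.heappush(self.ready, (-self.running[0], self.running[1]))
            self.running = (priority, name)
        else:
            heapq.heappush(self.ready, (-priority, name))
            if self.running is None:
                self.dispatch()

    def dispatch(self):
        neg, name = heapq.heappop(self.ready)
        self.running = (-neg, name)
```

A low-priority logger keeps the CPU only until a higher-priority alarm handler arrives; an arrival of lower priority than the running process just joins the ready queue.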
3. Firm Real-time Operating System: RTOS of this type also have to follow
deadlines. Missing a deadline may have only a small impact, but it can still
have unintended consequences, including a reduction in the quality of the
product. Example: multimedia applications.
4. Deterministic Real-time Operating System: Consistency is the main key
in this type of real-time operating system. It ensures that all tasks and
processes execute with predictable timing all the time, which makes it
suitable for applications in which timing accuracy is very
important. Examples: INTEGRITY, PikeOS.
The advantages of real-time operating systems are as follows-
1. Maximum Consumption: Maximum utilization of devices and systems,
and thus more output from all the resources.
2. Task Shifting: The time taken to shift from one task to another in these
systems is very small: about 10 microseconds in older systems, and about
3 microseconds in the latest systems.
3. Focus On Application: The focus is on running applications, with less
importance given to applications waiting in the queue.
4. Real-Time Operating System In Embedded Systems: Since the size of the
programs is small, an RTOS can also be used in embedded systems such as
those in transport and others.
5. Error Free: These types of systems are designed to be error-free.
6. Memory Allocation: Memory allocation is best managed in these types of
systems.
Comparison of Regular and Real-Time Operating Systems:
Regular OS – Real-Time OS (RTOS)
Complex – Simple
Best effort – Guaranteed response
Fairness – Strict timing constraints
Average bandwidth – Minimum and maximum limits
Unknown components – Components are known
Unpredictable behavior – Predictable behavior
Plug and play – RTOS is upgradeable
Applications of Real-Time Operating Systems (RTOS):
RTOS is used in real-time applications that must work within specific deadlines.
The following are common areas of application of real-time operating systems:
o Real-time operating systems are used in radar systems.
o Real-time operating systems are used in missile guidance.
o Real-time operating systems are used in online stock trading.
o Real-time operating systems are used in telephone switching systems.
o Real-time operating systems are used in air traffic control systems.
o Real-time operating systems are used in medical imaging systems.
o Real-time operating systems are used in fuel injection systems.
o Real-time operating systems are used in traffic control systems.
o Real-time operating systems are used in autopilots and flight simulators.
Basic Model of a Real-time System
A real-time system is a system that is used for performing specific tasks. These
tasks are related to time constraints and need to be completed within that time
interval.
Basic Model of a Real-time System: The basic model of a real-time system
presents an overview of all the components involved in a real-time system. A
real-time system includes various hardware and software embedded in such a
way that specific tasks can be performed within the allowed time constraints.
The accuracy and correctness involved in a real-time system make the model
complex. There are various models of real-time systems which are more
complex and hard to understand. Here we will discuss a basic model of a
real-time system which uses some commonly used terms and hardware. The
following diagram represents a basic model of a real-time system:
Sensor: A sensor is used for the conversion of physical events or
characteristics into electrical signals. Sensors are hardware devices that take
input from the environment and give it to the system after converting it. For
example, a thermometer takes the temperature as a physical characteristic and
then converts it into electrical signals for the system.
Actuator: An actuator is the reverse device of a sensor. Where a sensor converts
physical events into electrical signals, an actuator does the reverse: it converts
electrical signals into physical events or characteristics. It takes its input
from the output interface of the system. The output from the actuator may be in
any form of physical action. Some commonly used actuators are motors
and heaters.
Signal Conditioning Unit: When the sensor converts physical actions into
electrical signals, the computer can't use them directly. Hence, after the
conversion of physical actions into electrical signals, conditioning is needed.
Similarly, when electrical signals are sent out to the actuator, conditioning is
required again. Therefore, signal conditioning is of two types:
Input Conditioning Unit: It is used for conditioning the electrical signals
coming from the sensor.
Output Conditioning Unit: It is used for conditioning the electrical
signals coming from the system.
Interface Unit: Interface units are basically used for the conversion of digital
signals to analog and vice versa. Signals coming from the input conditioning
unit are analog while the system operates on digital signals only, so the
interface unit is used to change the analog signals into digital signals. Similarly,
while transmitting signals to the output conditioning unit, the signals are
changed from digital to analog. On this basis, the interface unit is also of two
types:
Input Interface: It is used for the conversion of analog signals to digital.
Output Interface: It is used for the conversion of digital signals to analog.
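One pass through this model (sensor reading, input interface, computation, output interface, actuator) can be sketched as follows. The 5 V full scale, 10-bit resolution, and on/off control law are illustrative assumptions, not real hardware values:

```python
# Sketch of one pass through the basic model: an analog sensor voltage is
# converted to digital by the input interface, the system computes a
# command, and the output interface converts it back for the actuator.

def input_interface(analog_volts, full_scale=5.0, bits=10):
    """Toy ADC: map 0..full_scale volts to a 10-bit integer."""
    return round(analog_volts / full_scale * (2 ** bits - 1))

def output_interface(digital, full_scale=5.0, bits=10):
    """Toy DAC: inverse mapping back to a voltage for the actuator."""
    return digital / (2 ** bits - 1) * full_scale

def control_step(sensor_volts, setpoint_volts):
    reading = input_interface(sensor_volts)
    target = input_interface(setpoint_volts)
    # Trivial on/off control law: full drive if below the setpoint, else off.
    command = (2 ** 10 - 1) if reading < target else 0
    return output_interface(command)
```

With a reading below the setpoint the actuator is driven at full scale; above it, the drive voltage is zero.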
Characteristics of Real-time System
Correctness: It is one of the most important characteristics of a real-time OS. A
real-time operating system produces a correct result within the given time.
Safety: Safety is necessary for any system, and a real-time operating system
must be able to perform for a long time without failures.
Time Constraints: In a real-time operating system, the tasks should be
completed within the given time period.
Embedded: Real-time operating systems are usually embedded. Embedded
means the system is designed for a specific purpose by a combination of
hardware and software.
Safety and Reliability:
A fail-safe state of a system is one which, if entered when the system fails,
causes no damage.
A safety-critical system is one whose failure can cause severe damage.
How to Achieve High Reliability?
Error Avoidance:
Error Detection and Removal:
Fault-Tolerance:
Fig. 28.11 Schematic Representation of TMR
(Legend: C1, C2, C3 are redundant copies of the same component)
It is relatively simple to design hardware equipment to be fault-tolerant. The
following are two methods that are popularly used to achieve hardware
fault-tolerance:
Error Detection and Removal: In spite of using the best available error
avoidance techniques, many errors still manage to creep into the code. These
errors need to be detected and removed. This can be achieved to a large extent
by conducting thorough reviews and testing. Once errors are detected, they can
be easily fixed.
Built-In Self Test (BIST): In BIST, the system periodically performs self-tests of
its components. Upon detection of a failure, the system automatically
reconfigures itself by switching out the faulty component and switching in one
of the redundant good components.
Triple Modular Redundancy (TMR): In TMR, as the name suggests, three
redundant copies of all critical components are made to run concurrently (see
Fig. 28.11). Observe that in Fig. 28.11, C1, C2, and C3 are the redundant copies of
the same critical component. The system performs voting on the results
produced by the redundant components to select the majority result. TMR can
help tolerate the occurrence of only a single failure at any time. (Can you answer
why a TMR scheme can effectively tolerate a single component failure only?) An
assumption that is implicit in the TMR technique is that at any time only one of
the three redundant components can produce erroneous results. The majority
result after voting would be erroneous if two or more components fail
simultaneously (more precisely, before a repair can be carried out). In situations
where two or more components are likely to fail (or produce erroneous results),
greater amounts of redundancy would be required. A little thinking shows that
at least 2n+1 redundant components are required to tolerate the simultaneous
failure of n components.
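The TMR voting step can be sketched as follows; the component functions are hypothetical stand-ins for redundant hardware copies:

```python
# Sketch of TMR voting: run three redundant copies and take the majority
# result. With a single faulty copy, the two good copies still outvote it,
# which is why TMR tolerates exactly one failure at a time.

from collections import Counter

def tmr_vote(c1, c2, c3, x):
    results = [c1(x), c2(x), c3(x)]
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one component failed")
    return value

good = lambda x: x * x          # a correctly functioning component
faulty = lambda x: x * x + 1    # hypothetical single faulty component
```

With one faulty copy the majority answer is still correct; with two faulty copies producing different wrong answers, no majority exists, matching the 2n+1 argument above.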
As compared to hardware, software fault-tolerance is much harder to achieve.
To investigate the reason behind this, let us first discuss the techniques
currently being used to achieve software fault-tolerance. We do this in the
following subsection.
Software Fault-Tolerance Techniques
Two methods are now popularly used to achieve software fault-tolerance:
N-version programming and recovery block techniques. These two techniques
are simple adaptations of the basic techniques used to provide hardware
fault-tolerance. We discuss these two techniques in the following.
N-Version Programming: This technique is an adaptation of the TMR technique
for hardware fault-tolerance. In the N-version programming technique,
independent teams develop N different versions (the value of N depends on the
degree of fault-tolerance required) of a software component (module).
Recovery Blocks: In the recovery block scheme, the redundant components are
called try blocks. Each try block computes the same end result as the others but
is intentionally written using a different algorithm compared to the other try
blocks.
Fig. 28.12 A Software Fault-Tolerance Scheme Using Recovery Blocks
As was the case with N-version programming, the recovery blocks approach also
does not achieve much success in providing effective fault-tolerance. The reason
behind this is again statistical correlation of failures: different try blocks fail for
identical reasons, as was explained in the case of the N-version programming
approach. Besides, this approach suffers from a further limitation: it can only be
used if the task deadlines are much larger than the task computation times (i.e.
tasks have large laxity), since the different try blocks are put to execution one
after the other when failures occur. The recovery block approach poses special
difficulty when used with real-time tasks with very short slack time (i.e. short
deadline and considerable execution time), as the try blocks are tried out one
after the other and deadlines may be missed. Therefore, in such cases the later
try-blocks usually contain only skeletal code.
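The recovery block scheme can be sketched as follows. The try blocks and the acceptance test are illustrative (two ways to compute an integer square root), not from the text:

```python
# Sketch of the recovery block scheme: try blocks (each a different
# algorithm for the same end result) are executed one after the other
# until the acceptance test passes.

def recovery_block(try_blocks, acceptance_test, x):
    for block in try_blocks:
        try:
            result = block(x)
        except Exception:
            continue               # block crashed: fall through to the next one
        if acceptance_test(x, result):
            return result
    raise RuntimeError("all try blocks failed the acceptance test")

# Two hypothetical try blocks computing an integer square root.
def primary(n):
    return int(n ** 0.5)

def alternate(n):                 # simpler, slower algorithm as backup
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

accept = lambda n, r: r * r <= n < (r + 1) * (r + 1)
```

The sequential structure also makes the deadline problem visible: if `primary` fails its acceptance test, the time spent running `alternate` comes on top of the time already consumed.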
Of course, it is possible that the later try blocks contain only skeletal code,
produce only approximate results and therefore take much less time for
computation than the first try block.
Fig. 28.13 Checkpointing and Rollback Recovery
Checkpointing and Rollback Recovery: Checkpointing and rollback recovery is
another popular technique to achieve fault-tolerance. In this technique, as the
computation proceeds, the system state is tested each time some meaningful
progress in the computation is made. Immediately after a state-check test
succeeds, the state of the system is backed up on stable storage (see Fig. 28.13).
In case the next test does not succeed, the system can be made to roll back to the
last checkpointed state. After a rollback, a fresh computation can be initiated
from the checkpointed state. This technique is especially useful if there is a
chance that the system state may be corrupted as the computation proceeds, for
example through data corruption or processor failure.
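The checkpoint-test-rollback cycle can be sketched as follows; the steps and the state-check test are illustrative:

```python
# Sketch of checkpointing and rollback recovery: after each state-check
# test succeeds, the state is saved as a checkpoint; when a test fails,
# the state rolls back to the last checkpoint and computation resumes.

import copy

def run_with_checkpoints(steps, state_ok, state):
    checkpoint = copy.deepcopy(state)       # initial checkpoint on "stable storage"
    for step in steps:
        step(state)
        if state_ok(state):
            checkpoint = copy.deepcopy(state)   # commit a new checkpoint
        else:
            state.clear()
            state.update(checkpoint)            # rollback to the last good state
    return state
```

A corrupting step is undone by the rollback, so later steps continue from the last state that passed the check rather than from the corrupted one.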
Types of Real-Time Tasks
We have already seen that a real-time task is one for which quantitative
expressions of time are needed to describe its behavior. This quantitative
expression of time usually appears in the form of a constraint on the time at
which the task produces results. The most frequently occurring timing
constraint is a deadline constraint, which is used to express that a task is
required to compute its results within some deadline. We therefore implicitly
assume only the deadline type of timing constraint on tasks in this section,
though other types of constraints (as explained in Sec.) may occur in practice.
Real-time tasks can be classified into the following three broad categories:
A real-time task can be classified as either a hard, soft, or firm real-time task
depending on the consequences of the task missing its deadline.
It is not necessary that all tasks of a real-time application belong to the same
category. It is possible that different tasks of a real-time system belong to
different categories. We now elaborate on these three types of real-time tasks.
Hard Real-Time Tasks
A hard real-time task is one that is constrained to produce its results within
certain predefined time bounds. The system is considered to have failed
whenever any of its hard real-time tasks does not produce its required results
before the specified time bound.
An example of a system having hard real-time tasks is a robot. The robot
cyclically carries out a number of activities, including communication with the
host system, logging all completed activities, sensing the environment to detect
any obstacles present, tracking the objects of interest, path planning, effecting
the next move, etc. Now consider that the robot suddenly encounters an
obstacle. The robot must detect it and as soon as possible try to escape colliding
with it. If it fails to respond quickly (i.e. the concerned tasks are not completed
before the required time bound) then it would collide with the obstacle and the
robot would be considered to have failed. Therefore, detecting obstacles and
reacting to them are hard real-time tasks.
Firm Real-Time Tasks
Every firm real-time task is associated with some predefined deadline before
which it is required to produce its results. However, unlike a hard real-time
task, even when a firm real-time task does not complete within its deadline, the
system does not fail. The late results are merely discarded. In other words, the
utility of the results computed by a firm real-time task becomes zero after the
deadline. Fig. 28.14 schematically shows the utility of the results produced by a
firm real-time task as a function of time. In Fig. 28.14 it can be seen that if the
response time of a task exceeds the specified deadline, then the utility of the
results becomes zero and the results are discarded.
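The utility curve of Fig. 28.14 can be written down directly as a small sketch (the function name is illustrative):

```python
# Sketch of the firm real-time utility curve (Fig. 28.14): the result has
# full utility up to the deadline and zero utility after it, at which
# point the late result is simply discarded.

def firm_task_result(result, response_time, deadline):
    if response_time <= deadline:
        return result          # delivered in time: full utility
    return None                # past the deadline: utility is zero, discard
```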
Firm real-time tasks typically abound in multimedia applications. The following
are two examples of firm real-time tasks:
Video conferencing: In a video conferencing application, video frames and the
accompanying audio are converted into packets and transmitted to the receiver
over a network. However, some frames may get delayed at different nodes
during transit on a packet-switched network due to congestion at different
nodes. This may result in varying queuing delays experienced by packets
traveling along different routes. Even when packets traverse the same route,
some packets can take much more time than other packets due to the specific
transmission strategy used at the nodes. When a certain frame is being played,
if some preceding frame arrives late at the receiver, then that frame is of no use
and is discarded. For this reason, when a frame is delayed by more than, say,
one second, it is simply discarded at the receiver end without carrying out any
processing on it.
Satellite-based tracking of enemy movements: Consider a satellite that takes
pictures of an enemy territory and beams them to a ground station computer
frame by frame. The ground computer processes each frame to find the
positional difference of different objects of interest with respect to their position
in the previous frame, to determine the movements of the enemy. When the
ground computer is overloaded, a new image may be received even before an
older image is taken up for processing. In this case, the older image is of not
much use. Hence the older images may be discarded and the recently received
image can be processed.
For firm real-time tasks, the associated time bounds typically range from a few
milliseconds to several hundreds of milliseconds.
Soft Real-Time Tasks
Soft real-time tasks also have time bounds associated with them. However,
unlike hard and firm real-time tasks, the timing constraints on soft real-time
tasks are not expressed as absolute values. Instead, the constraints are
expressed in terms of the average response times required.
Non-Real-Time Tasks
A non-real-time task is not associated with any time bounds. Can you think of
any example of a non-real-time task? Most of the interactive computations you
perform nowadays are handled by soft real-time tasks. However, about two or
three decades back, when computers were not interactive, almost all tasks were
non-real-time. A few examples of non-real-time tasks are: batch processing jobs,
e-mail, and background tasks such as event loggers. You may however argue
that even these tasks, in the strict sense of the term, do have certain time
bounds. For example, an e-mail is expected to reach its destination at least
within a couple of hours of being sent. Similar is the case with a batch
processing job such as pay-slip printing. What then really is the difference
between a non-real-time task and a soft real-time task? For non-real-time tasks,
the associated time bounds are typically of the order of a few minutes, hours or
even days. In contrast, the time bounds associated with soft real-time tasks are
at most of the order of a few seconds.
UNIT 4
Handheld Operating System:
Handheld operating systems are available in all handheld devices like
smartphones and tablets. Such a device is sometimes also known as a Personal
Digital Assistant. The popular handheld operating systems in today's world are
Android and iOS. These operating systems need a high-performance processor
and are also embedded with various types of sensors.
Some points related to Handheld operating systems are as follows:
1. Since the development of handheld computers in the 1990s, the demand
for software to operate and run on these devices has increased.
2. Three major competitors have emerged in the handheld PC world with
three different operating systems for these handheld PCs.
3. Out of the three companies, the first was the Palm Corporation with their
PalmOS.
4. Microsoft also released what was originally called Windows CE.
Microsoft’s recently released operating system for the handheld PC comes
under the name of Pocket PC.
5. More recently, some companies producing handheld PCs have also started
offering a handheld version of the Linux operating system on their machines.
Features of Handheld Operating System:
1. Its work is to provide real-time operations.
2. There is direct usage of interrupts.
3. Input/Output device flexibility.
4. Configurability.
Types of Handheld Operating Systems:
Types of Handheld Operating Systems are as follows:
1. Palm OS
2. Symbian OS
3. Linux OS
4. Windows
5. Android
Palm OS:
Since the Palm Pilot was introduced in 1996, the Palm OS platform has
provided various mobile devices with essential business tools, as well as the
capability to access the internet via a wireless connection.
These devices have mainly concentrated on providing basic personal-
information-management applications. The latest Palm products have
progressed a lot, packing in more storage, wireless internet, etc.
Symbian OS:
It was the most widely used smartphone operating system, running on the
ARM architecture, before it was discontinued in 2014. It was developed by
Symbian Ltd.
This operating system consists of two subsystems: the first is the
microkernel-based operating system with its associated libraries, and
the second is the interface of the operating system with which a user can
interact.
Since this operating system consumes very little power, it was developed
for smartphones and handheld devices.
It has good connectivity as well as stability.
It can run applications that are written in Python, Ruby, .NET, etc.
Linux OS:
Linux is an open-source, cross-platform operating system that was
developed based on UNIX.
It was originally developed by Linus Torvalds. It is system software that
allows applications and users to perform tasks on the PC.
Linux is free, can be easily downloaded from the internet, and is
considered to have the best community support.
Linux is portable, which means it can be installed on different types of
devices like mobiles, computers, and tablets.
It is a multi-user operating system.
The Linux command interpreter, called BASH, is used to execute
commands.
It provides user security using authentication features.
Windows OS:
Windows is an operating system developed by Microsoft. Its interface
which is called Graphical User Interface eliminates the need to memorize
commands for the command line by using a mouse to navigate through
menus, dialog boxes, and buttons.
It is named Windows because its programs are displayed in the form of
rectangular windows on the screen. It has been designed for beginners as
well as professionals.
It comes preloaded with many tools which help the users to complete all
types of tasks on their computer, mobiles, etc.
It has a large user base so there is a much larger selection of available
software programs.
One great feature of Windows is that it is backward compatible which
means that its old programs can run on newer versions as well.
Android OS:
It is a Linux-based operating system developed by Google that is mainly
designed for touchscreen devices such as phones and tablets.
Three hardware architectures are supported: ARM, Intel, and MIPS. The
touchscreen lets users manipulate the device intuitively, with finger
movements that mirror common motions such as swiping and tapping.
Android operating system can be used by anyone because it is an open-
source operating system and it is also free.
It offers 2D and 3D graphics, GSM connectivity, etc.
There is a huge list of applications for users since Play Store offers over
one million apps.
Professionals who want to develop applications for the Android OS can
download the Android Software Development Kit (SDK). By downloading it
they can easily develop apps for Android.
Advantages of Handheld Operating System:
Some advantages of a Handheld Operating System are as follows:
1. Less Cost.
2. Less weight and size.
3. Less heat generation.
4. More reliability.
REQUIREMENTS IN HANDHELD OS
Installations of handheld computers are progressing in a variety of fields such
as logistics and manufacturing with applications including inventory
management, data verification, process management, traceability, and
shipping mistake prevention. This section explains the environment that is
required in order to actually install handheld computers.
1. Requirements for Operating Handheld Computers
2. Determining the Hardware Configuration
3. Power Supply Environment
4. Printers and Other Peripheral Equipment
5. Developing Software
6. KEYENCE Enables Easy Software Development With No
Programming Required
1.Requirements for Operating Handheld Computers
The advantage of a handheld computer is its ability to perform multiple functions
as a standalone device, filling many roles such as reading various codes as well as
collecting, sending, and receiving data. However, before handheld computers can
be installed, it is necessary to organize the surrounding environment.
The necessity of preparing both hardware and software
For operation, a variety of equipment is necessary. Examples include the PC or
server to communicate with, the battery that supplies the power and the
dedicated battery charger, and the dedicated printer used to output the recorded
data. It is also necessary to develop software to provide system functions and
operability that match the usage environment and the purpose. Examples of
issues handled by software include how to aggregate and process the data and
how to display the aggregated data on a PC or similar device. Hence, it is
important to thoroughly consider the peripheral equipment and the environment,
including hardware and software, prior to installation.
2.Determining the Hardware Configuration
Communication environment
Handheld computers can read and accumulate data in a standalone manner, but
integrating these devices with PCs and servers is essential in aggregating data,
sharing data with different departments, and making use of data from other
departments. The problem is determining which method to use to communicate
between handheld computers and PCs/servers. The answer is determined by the
usage environment and generally is selected from one of two options: using a
communication unit and using a wireless LAN.
Use a communication unit when the usage location is limited
If the usage location is limited and is fixed, select the communication unit method.
Use a LAN cable or a USB cable to connect the communication unit to a PC.
Use a wireless LAN in large warehouses, stores, and factories
If you want to use handheld computers while moving throughout a large
warehouse, store, or factory, select the wireless type. It is necessary to build an on-
site LAN by installing dedicated access points. When building an environment
with a wireless LAN, it is often the case that the handheld computers are used
over a wide range, so measure the environment and determine the number and
installation locations of the access points required to match the operating
environment.
3.Power Supply Environment
For portability and ease of use, handheld computers are cordless and
battery powered. Various types of batteries are used, including dedicated
rechargeable batteries and general-purpose dry cell batteries. When handheld
computers are used only within a company or facility, it is sufficient to prepare
dedicated cradles that automatically charge the devices when they are docked,
such as at the end of work. Where technicians and sales personnel are expected
to take the handheld computers outside the company, it is most common to
select a model that runs on dry cell batteries or general-purpose rechargeable
batteries, which can be purchased immediately when outside the office to
replace dead batteries, rather than a model that requires a dedicated battery
charger.
4.Printers and Other Peripheral Equipment
A common sight is home delivery drivers printing data when
delivering packages. An advantage of handheld computers is that in addition to
being able to read and aggregate data, they can print off data by connecting a
printer. A separate printer is required in order to print labels with data read in a
warehouse or factory and affix these labels to cardboard boxes and to issue
barcodes for shipping. Compact, portable printers that can be used with handheld
computers are also available, so use these printers to match the application.
5.Developing Software
Difficult development of dedicated software
The development of dedicated software is more difficult than establishing the
hardware environment, which includes the handheld computers and peripheral
equipment such as communication equipment, batteries, and printers. This is
because system construction tasks, such as determining how to aggregate and
process the read data and how to implement on-screen operations, are essentially
the domain of system engineers. On top of the hardware installation costs, this
development can require a large investment. There is no shortage of cases in which
operators want to install handheld computers to make work more efficient but
run into the bottleneck of software development and are unable to reach their
expected efficiency.
General software development methods
The handheld computer installation conditions vary depending on the
specifications and on whether a corporate system is present, but the development
methods can generally be separated into the four listed below.
Embedded applications
With this pattern, the application to execute is embedded in the handheld
computer. This is the optimal method for corporations that want to accumulate
data, implement rich device control, and develop applications easily.
Web applications
With this pattern, the browser on the handheld computer accesses webpages on a
web server. This is the optimal method for corporations that want to use or are
already using web applications and want to manage applications in a centralized
manner.
Terminal services
With this pattern, the handheld computer emulates PC applications. This is the
optimal method for corporations that want to use PC applications as-is and
manage applications in a centralized manner.
Terminal emulators/middleware
This pattern uses third-party terminal emulators/middleware. This is the optimal
method for corporations that want to create a fast and rich web system, use
AS/400 and SAP emulators on handheld computers, and manage applications in
a centralized manner.
Selecting the development method according to needs
It is necessary to select the development method from the four listed above by
finding the method that matches the on-site needs, what operators want to do
with handheld computers. This section introduces the main functions that are
required of handheld computers and the development methods that can be used
to realize these needs.
6.KEYENCE Enables Easy Software Development With No Programming
Required
When using handheld computers, there are different software development
methods such as embedded applications, web applications, terminal services, and
terminal emulators/middleware. However, all methods incur development costs
and have their own delivery dates. KEYENCE's development tools solve this
problem and make it possible to more easily develop dedicated software on your
own.
The greatest characteristic of these tools is their simple visual development.
Anyone, even people with absolutely no knowledge of difficult computer
languages, can develop dedicated software just by selecting the required
functions, icons, and other such items from the rich templates and GUI (graphical
user interface) tools displayed on the PC screen. This eliminates waste by
reducing the hassle, cost, and time required to order development from dedicated
vendors and engineers. What's more, systems can be developed easily and quickly
on your own, which makes it possible to support low-cost, short-term system
projects without difficulty.
THE PALM OPERATING SYSTEM – TECHNOLOGY OVERVIEW
The Palm Computing Division of U.S. Robotics set out to create a compelling,
low-cost, small form-factor, hand-held computer that connects seamlessly to
Windows and Macintosh personal computers. The search for appropriate
operating system software for this device led to the discovery of a gap in
functionality between proprietary microcontroller-based system software on the
low-end and PDA operating systems on the high end. To fill this gap U.S.
Robotics created the Palm Operating System.
The Need for a New Operating System : Low-cost, hand-held devices, such as
electronic organizers, cell phones and pagers, have relied on proprietary,
microcontroller-based system software to achieve affordable price points and long
battery life (see Figure 1.) The resulting devices are limited in software
functionality, provide poor user interfaces and little or no connectivity to the
desktop. On the high-end are full multi-window, device-independent operating
systems for PDAs and Personal Communicators. While these operating systems
facilitate graphical user interfaces, applications development, and robust
functionality, they require system resources that prevent the creation of fast, low-
power, low-cost devices. Furthermore, even these high-end systems are not
designed from the ground up to provide seamless connectivity with desktop PCs.
The Need For Connectivity:
To make hand-held devices fast, power-efficient, and easy to operate, focused
functionality is essential. Data viewing, lookup, and limited data gathering are
better suited to a hand-held device. The PC is ideal for data management, batch
data entry, printing, backup, archive and configuration tasks. To leverage
resources appropriately, applications need to be partitioned between the hand-
held and the PC, thereby streamlining the hand-held device with essential
functionality and leveraging the PC to extend that functionality.
What is the Palm Operating System?
The Palm Operating System (Palm OS) is a hand-held computer operating system
that enables low-cost, low power, small form-factor devices to integrate
seamlessly with Windows or Macintosh personal computers.
With their integrated connectivity, these devices actually extend the usefulness of
the PC. The Palm OS consists of two parts:
· Highly efficient operating system software that runs on 68000-based hand-held
computing devices.
· Windows or Macintosh-based system software that manages synchronization of
the hand-held and the PC.
These interdependent components extend the traditional definition of an
operating system to reflect the integral nature of connectivity in system design.
The first device in the Palm OS product line is Pilot, The Connected Organizer. A
closer look at the Palm OS design goals will clarify what makes it work so well.
Design Goals
The Palm OS was designed with the following objectives in mind.
On the hand-held side, it is designed for:
· Speed and efficiency: Provide nearly instantaneous response to user input while
running on a Motorola 68000 type processor, requiring only 32K system memory.
· Low-cost and low-power: Provide months of battery life on 2 AAA batteries,
utilizing standard, low-cost memory and processing components.
· Small form-factor devices: Facilitate the creation of pocket-sized devices with
user interface objects designed specifically for small displays; enable fast,
efficient data entry without the use of a keyboard.
· Integrated PC connectivity: Use record IDs, status flags and common data
storage to facilitate communication and synchronization with desktop software.
· Standard application development: Facilitate application development in C or
C++ using Metrowerks compilers and other common development tools.
On the PC side, it is designed for:
· Efficient synchronization: Synchronize hand-held data with multiple PC data
sources without user intervention.
· Extendability: Enable Independent Software Vendors (ISVs) to develop
"conduits", links between a wide range of desktop and hand-held applications.
· Communication independence: Insulate conduit developers from the
communications protocols to facilitate synchronization via a variety of physical
links.
· Standard conduit development: Under Windows, conduits are DLLs and are
developed using standard C or Visual Basic tools.
To understand what makes the Palm OS work so well, a closer look at how it
achieves these design goals is in order, starting with the handheld side of the
Palm OS.
Speed and Efficiency
The Palm OS memory manager facilitates fast access to system software,
applications, and data, yet requires a minimum of nonvolatile, dynamic memory.
Low-Power Usage
The Palm OS minimizes power consumption with efficient power management.
It supports three modes of operation: sleep mode, idle mode, and running mode.
When there is no user activity for a number of minutes, or when the user hits the
off key, the device enters sleep mode.
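The three power modes can be sketched as a simple state machine. This is an illustrative sketch only, not Palm OS code; the two-minute timeout and all class and method names here are assumptions chosen for the example.

```java
public class PalmPowerSketch {
    enum Mode { RUNNING, IDLE, SLEEP }

    static final long SLEEP_TIMEOUT_MS = 2 * 60 * 1000; // hypothetical inactivity timeout

    private Mode mode = Mode.RUNNING;
    private long lastInputMs;

    PalmPowerSketch(long nowMs) { lastInputMs = nowMs; }

    // Any user input wakes the device and restarts the inactivity timer.
    void onUserInput(long nowMs) { lastInputMs = nowMs; mode = Mode.RUNNING; }

    // The off key puts the device to sleep immediately.
    void onOffKey() { mode = Mode.SLEEP; }

    // Called periodically by the system: drop to idle when no work is
    // pending, and to sleep once the inactivity timeout expires.
    void tick(long nowMs, boolean workPending) {
        if (mode == Mode.SLEEP) return;
        if (nowMs - lastInputMs >= SLEEP_TIMEOUT_MS) mode = Mode.SLEEP;
        else mode = workPending ? Mode.RUNNING : Mode.IDLE;
    }

    Mode mode() { return mode; }
}
```

The key point the sketch captures is that sleep is entered two ways: explicitly via the off key, or implicitly when the inactivity timer expires.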
Support for Small Form-Factor Devices
The Palm OS is designed specifically for small form-factor devices. In addition,
its feature set is targeted to small devices to eliminate unnecessary system
overhead. To facilitate data entry for small form-factor devices, the Palm OS
provides optional support for Graffiti power writing technology. On other
platforms, Graffiti is added to the operating system, requiring a Graffiti window
to float on the screen for text input, thereby obscuring the screen.
Integrated PC Connectivity
The Palm OS data manager provides built-in functionality to facilitate efficient
synchronization with the desktop. Each database on the hand-held has a header
that contains an attribute flag to indicate that at least one record in the database
has changed since the last synchronization. In the event that no records have
changed, the synchronization process can bypass the unchanged database
entirely. Other systems must use a less-reliable date and time stamp on files to
determine their status.
[Figure 2: hardware block diagram - a Motorola "Dragonball" (68000-based)
microprocessor with a serial port and a memory module whose RAM/ROM hold
the OS, libraries, applications, and 32K of system data.]
Standard Development Environment
In order to speed application development, the developer creates Palm OS
applications using standard development tools on a Macintosh computer. The
development environment uses Apple's Macintosh Programmers Workshop
(MPW) and Metrowerks' CodeWarrior compiler.
The developer writes the applications using the C language, although assembly
language and C++ are also available. The developer debugs applications on the
Macintosh using the source-level debugger provided with CodeWarrior. The
Palm OS includes libraries that reproduce the functionality of the Palm OS
system software under the Macintosh environment. In order to test applications
on the Macintosh, the developer simply links the application code with Palm OS
"simulator" libraries to generate a Macintosh executable. This executable runs
from within the Macintosh-based "simulator" that displays a window on the
screen representing the hand-held's display area.
Efficient Synchronization
A synchronization manager application runs in the background on the Windows
or Macintosh PC to enable one-button synchronization, or "HotSync," between
the hand-held and the PC. This application monitors the serial port of the PC for
a wake-up packet from the hand-held device to begin synchronization. Once it
receives a wake-up packet, it runs in turn each conduit installed in the system,
systematically synchronizing each hand-held database with the associated
database on the PC (see Figure 3). Conduits need not, however, create a mirror-
image of data on the PC and hand-held. For example, an install conduit can be
used to install applications from the PC to the hand-held. A finance conduit
might upload new transaction data from the hand-held's check register and, after
the transactions are integrated into the PC finance database, download a new
balance. Each conduit uses the synchronization manager API to make calls
during the synchronization process to do whatever is required during
synchronization. This can include opening databases on the hand-held,
retrieving and writing records, closing databases, and so on.
(Reprinted from the March/April 1996 issue of PDA Developers 4.2. ©1996-1997
by Creative Digital Publishing Inc. All rights reserved.)
Extendability
The Palm OS enables synchronization between a wide range of desktop and
hand-held applications in a single step. Other systems require developers to
create their own connectivity solutions; as a result, end users must use multiple
connectivity solutions for the various applications on their hand-held devices. In
order to add a type of synchronization to the HotSync process, a developer
simply creates a conduit, a PC-based application that manages data exchange
between a database on the hand-held and a file on the PC. Conduits can perform
a vast range of functions. One conduit might install applications from a diskette
on the PC to the RAM on the handheld device. Another conduit might
synchronize the Address Book database on the hand-held with a file on the PC
so that they are mirror images of each other. To add a conduit, the developer
registers it with the synchronization manager. The synchronization manager
runs all registered conduits in sequence to synchronize the data on the two
devices during HotSync. Conduits can synchronize any PC application's data
with any of the databases residing on the hand-held. For example, the address
book conduit retrieves all new, modified, deleted and archived records from the
hand-held's address book. It then updates the PC's address book with the new
records and any modified records that have not changed on the PC. It then
synchronizes any records changed on both the handheld and the PC and deletes
or archives any records deleted or archived on the hand-held. Next, it updates
the hand-held with the set of records changed during the process. Finally, it
readies both devices for the next synchronization by clearing the status flags for
each database and record on both sides.
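The flag-driven synchronization described above can be sketched in miniature. This is not Palm OS or Conduit SDK code; the record structure, the method names, and the hand-held-wins conflict rule are simplifying assumptions made for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;

public class ConduitSketch {
    // A record as the data manager stores it: a unique ID, a payload,
    // and the status flags that drive synchronization.
    static class Rec {
        final int id;
        String data;
        boolean dirty, deleted;
        Rec(int id, String data, boolean dirty, boolean deleted) {
            this.id = id; this.data = data; this.dirty = dirty; this.deleted = deleted;
        }
    }

    // Synchronize two databases keyed by record ID. Deletions on the
    // hand-held propagate to the PC; changed (dirty) records are copied
    // each way, with the hand-held winning on conflict; finally all status
    // flags are cleared so an unchanged database can be skipped entirely
    // on the next HotSync.
    static void sync(Map<Integer, Rec> handheld, Map<Integer, Rec> pc) {
        for (Rec r : new ArrayList<>(handheld.values())) {
            if (r.deleted) {
                handheld.remove(r.id);
                pc.remove(r.id);
            } else if (r.dirty) {
                pc.put(r.id, new Rec(r.id, r.data, false, false));
            }
        }
        for (Rec r : new ArrayList<>(pc.values())) {
            Rec h = handheld.get(r.id);
            if (r.dirty && (h == null || !h.dirty)) {
                handheld.put(r.id, new Rec(r.id, r.data, false, false));
            }
        }
        for (Rec r : handheld.values()) r.dirty = false;
        for (Rec r : pc.values()) r.dirty = false;
    }
}
```

Clearing the flags at the end is what lets the next synchronization bypass any database whose header shows no changes, instead of comparing timestamps.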
Communication Independence
Although other hand-held operating systems facilitate some level of connectivity
with the PC, the burden of integrating communications is frequently the job of
the developer. Palm OS conduits are communication independent: the conduit
developer need not worry about low-level communications protocols. The
synchronization manager manages communications and runs transparently
across a variety of communication media, whether a wired serial connection or a
wired or wireless modem. Therefore the conduit developer need only interface
to the synchronization manager to access whatever communication media are
supported by the device.
[Figure 3: sync architecture - conduits A, B, and C on the desktop exchange the
hand-held's databases A, B, and C with the PC's public and private data through
the Sync Manager and its communications-layer API.]
Standard Conduit Development
To facilitate easy development, conduits are developed using standard Windows
or Macintosh development tools, depending on the target platform. Under
Windows, developers create conduits using Visual C++ and Microsoft's MFC.
Under Macintosh, the developer creates conduits using Apple's MPW.
Implications for the Future
The connectivity focus of the Palm OS facilitates more targeted devices that can
harness the power of the PC to extend functionality. For example, smart phones
can download address book information entered using the full-sized PC
keyboard, rather than forcing users to program phone numbers with the limited
phone keypad. The operating system extends the functionality of the PC by
providing a means to link to and carry desktop data. It also extends the power of
the hand-held device by leveraging the functionality of the PC applications to
perform more powerful functions. Palm OS conduits will be developed for a wide
range of desktop applications, including group scheduling, e-mail, personal
finance and database. As the Palm OS becomes more widely accepted, a new
breed of devices will emerge. It will serve as the foundation for a wide range of
increasingly smaller devices, including smart phones and graphical pagers. The
Palm OS expands the opportunity for hand-held computing and extends the
usefulness of the desktop computer. It will also enable network devices that can
connect to the Internet for information access.
Palm OS Software Development Kit Developers create Palm OS applications
using standard development tools on a Macintosh computer. The development
environment uses Apple’s Macintosh Programmers Workshop (MPW) and
Metrowerks’ CodeWarrior. The developer writes the applications using the C
language, although assembly language and C++ are also available, then debugs
the application on the Macintosh using the source level debugger provided with
CodeWarrior.
The Palm OS SDK will be available in April, 1996. It consists of the following:
• System software libraries and headers: Code libraries and associated header
files for all the Palm OS system software.
• System resources: ResEdit template files, fonts and bitmaps.
• Pilot Simulator: A Macintosh application that “hosts” a Palm OS application for
execution on a Macintosh to simulate a Pilot device. The simulator also provides
test and debug capabilities.
• Debug and console application: A Macintosh-based debugger for debugging
remote applications on the Pilot device.
• Documentation: Printed and HTML-based on-line documentation, including a
tutorial and style guidelines (available on the Palm Web Site:
http://www.usr.com/palm)
• MPW and Code Warrior extensions: Scripts and utilities that aid in the
development process.
• Install Conduit: A conduit that installs applications from the Macintosh to the
RAM of the Pilot device during a synchronization.
• Other Utilities: Project files, application shell code and makefiles.
• Sample Code: Well-documented source code to illustrate how to develop an
application. Referenced in the Tutorial.
Palm Conduit Software Development Kit To facilitate easy development,
conduits are developed using standard Windows or Macintosh development
tools, depending on the target platform. Under Windows, developers create
conduits using Visual C++ and Microsoft’s MFC. Under Macintosh, the developer
creates conduits using Apple's MPW. The Palm Conduit SDK will be
available in April, 1996. It consists of the following:
• Libraries: Libraries that provide API calls to exchange data between the PC and
the handheld device via the Synchronization Manager, an application that runs in
the background on a Windows or Macintosh PC.
• Documentation: Printed and HTML-based on-line documentation (available on
the Palm Web Site: http://www.usr.com/palm).
• C and C++ Header Files: Header files with definitions of structures and
prototypes of functions.
• Sample Code: Well-documented sample conduits, referenced in the
documentation, makefile templates and a basic conduit template.
• Install Utilities: Programs to set up the Conduit SDK onto a host computer, and
a utility to register new conduits with the Synchronization Manager.
SYMBIAN OPERATING SYSTEM
The Origin of Symbian OS
In the 1990s, software company Psion was actively working on the
development of innovative mobile operating systems. Their earlier products were
16-bit systems, but in 1994, they began working on a 32-bit version programmed
in C++, and it was named EPOC32. Then in 1998, Psion formed Symbian Ltd.
in collaboration with popular mobile phone brands Nokia, Ericsson, and
Motorola.
Symbian Ltd. began upgrading EPOC32, and the new version was named
Symbian OS.
Symbian is a discontinued mobile operating system developed and sold by
Symbian Ltd. It was a closed-source mobile operating system designed in
1998 for smartphones. Symbian OS was designed to be used on higher-end
mobile phones. It was an operating system for mobile devices with
limited resources, multitasking needs, and soft real-time requirements.
The Symbian operating system evolved from the Psion EPOC, which
ran on ARM processors. In June 1998, Psion Software was renamed
Symbian Ltd. as the result of a joint venture between Psion and
phone manufacturers Ericsson, Motorola, and Nokia.
In 2008, Nokia announced the acquisition of Symbian Ltd., and a new
open-source, nonprofit organization called the Symbian Foundation was
established.
In May 2014, the development of Symbian OS was discontinued.
Features of Symbian OS
Symbian OS was equipped with the following features:
User Interface
Symbian offered an interactive graphical user interface for mobile phones
with the AVKON toolkit, also called S60. However, it was designed mainly
to be operated with a keyboard. As the demand for touchscreen phones
increased, Symbian shifted to the Qt framework to design a better user
interface for touchscreen phones.
Browser
Initially, Symbian phones came with Opera as the default browser. Later on,
a built-in browser based on WebKit was developed for Symbian OS. In
phones built on the S60 platform, this browser was simply named Web
Browser for S60. It boasted faster speed and a better interface.
App Development
The standard software development kit to build apps for Symbian OS was
Qt, with C++ programming language. UIQ and S60 also provided SDKs for
app development on Symbian, but Qt became the standard later on. As for
the programming language, even though C++ is preferred, it’s also possible
to build with Python, Java, and Adobe Flash Lite.
Multimedia
To fulfill consumer demand for entertainment, Symbian OS supported high-
quality recording and playback of audio and video, along with image
conversion features. It expanded the ability of mobile phones to handle
multimedia files.
Security
As security is one of the most important things to consider for an operating
system, Symbian offered strong protection against malware and came with
reliable security certificates. It proved to be a secure operating system for
phones and a safe platform for app development.
Open Source
After Nokia acquired Symbian Ltd., the Symbian Foundation was formed,
and Symbian OS was made open source. It opened doors of opportunity for
developers to contribute to this operating system's growth and develop
innovative mobile applications.
Advantages of Symbian OS
It has a greater range of applications.
Connectivity was a lot easier.
It includes a better built-in WAP browser.
It has an open platform based on C++.
It provides a power-saving feature.
It provides full multitasking.
Disadvantages of Symbian OS:
It was not available on the PC.
Symbian OS has less accuracy as compared to Android.
It has security issues, as it can easily be affected by viruses.
Android - Architecture
The Android operating system is a stack of software components which is
roughly divided into five sections and four main layers, as shown in the
architecture diagram below.
Linux kernel
At the bottom of the layers is the Linux kernel - Linux 3.6 with approximately 115
patches. This provides a level of abstraction between the device hardware and
the rest of the stack, and it contains
all the essential hardware drivers like camera, keypad, display etc. Also, the
kernel handles all the things that Linux is really good at such as networking and a
vast array of device drivers, which take the pain out of interfacing to peripheral
hardware.
Libraries
On top of Linux kernel there is a set of libraries including open-source Web
browser engine WebKit, well known library libc, SQLite database which is a
useful repository for storage and sharing of application data, libraries to play and
record audio and video, SSL libraries responsible for Internet security etc.
Android Libraries
This category encompasses those Java-based libraries that are specific to Android
development. Examples of libraries in this category include the application
framework libraries in addition to those that facilitate user interface building,
graphics drawing and database access. A summary of some key core Android
libraries available to the Android developer is as follows −
android.app − Provides access to the application model and is the
cornerstone of all Android applications.
android.content − Facilitates content access, publishing and messaging
between applications and application components.
android.database − Used to access data published by content providers and
includes SQLite database management classes.
android.opengl − A Java interface to the OpenGL ES 3D graphics rendering
API.
android.os − Provides applications with access to standard operating
system services including messages, system services and inter-process
communication.
android.text − Used to render and manipulate text on a device display.
android.view − The fundamental building blocks of application user
interfaces.
android.widget − A rich collection of pre-built user interface components
such as buttons, labels, list views, layout managers, radio buttons etc.
android.webkit − A set of classes intended to allow web-browsing
capabilities to be built into applications.
Having covered the Java-based core libraries in the Android runtime, it is now
time to turn our attention to the C/C++ based libraries contained in this layer of
the Android software stack.
Android Runtime
This is the third section of the architecture, available on the second layer from
the bottom. This section provides a key component called the Dalvik Virtual
Machine, which is a kind of Java Virtual Machine specially designed and
optimized for Android.
The Dalvik VM makes use of Linux core features like memory management and
multi-threading, which is intrinsic to the Java language. The Dalvik VM enables
every Android application to run in its own process, with its own instance of the
Dalvik virtual machine.
The Android runtime also provides a set of core libraries which enable Android
application developers to write Android applications using standard Java
programming language.
Application Framework
The Application Framework layer provides many higher-level services to
applications in the form of Java classes. Application developers are allowed to
make use of these services in their applications.
The Android framework includes the following key services −
Activity Manager − Controls all aspects of the application lifecycle and
activity stack.
Content Providers − Allows applications to publish and share data with
other applications.
Resource Manager − Provides access to non-code embedded resources such
as strings, color settings and user interface layouts.
Notifications Manager − Allows applications to display alerts and
notifications to the user.
View System − An extensible set of views used to create application user
interfaces.
Applications
You will find all the Android applications at the top layer. You will write your
application to be installed on this layer only. Examples of such applications are
Contacts, Browser, Games, etc.
The main components of android architecture are following:-
Applications
Application Framework
Android Runtime
Platform Libraries
Linux Kernel
[Figure: pictorial representation of the Android architecture with its main
components and their sub-components.]
SECURING HANDHELD SYSTEMS
1. Enable user authentication
2. Use a password manager
3. Always run updates
4. Avoid public wi-fi
5. Enable remote lock
6. Cloud backups
7. Use MDM/MAM
1. Enable User Authentication
It's so easy for company laptops, tablets, and smartphones to get lost or stolen as
we leave them in taxi cabs, restaurants, airplanes...the list goes on.
The first thing to do is to ensure that all your mobile user devices have the screen
lock turned on and that they require a password or PIN to gain entry. There is a
ton of valuable information on the device!
Most devices have biometric security options like Face ID and Touch ID, which
definitely makes the device more accessible, but not necessarily more secure.
That's why it is a good idea to take your mobile security practices a step further
and implement a Multi-Factor Authentication (MFA, also known as two-
factor authentication) policy for all end-users as an additional layer of security.
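As a concrete illustration of the second factor, most authenticator apps implement time-based one-time passwords (TOTP, RFC 6238), which derive a short code from a shared secret and the current 30-second time window. Below is a minimal sketch in Java; the class and method names are our own invention for the example, not part of any MFA product.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TotpSketch {
    // HOTP (RFC 4226): HMAC-SHA1 over the big-endian counter,
    // dynamically truncated to the requested number of decimal digits.
    static String hotp(byte[] secret, long counter, int digits) {
        try {
            byte[] msg = new byte[8];
            for (int i = 7; i >= 0; i--) {
                msg[i] = (byte) (counter & 0xff);
                counter >>>= 8;
            }
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secret, "RAW"));
            byte[] h = mac.doFinal(msg);
            int off = h[h.length - 1] & 0x0f;            // dynamic truncation offset
            int bin = ((h[off] & 0x7f) << 24) | ((h[off + 1] & 0xff) << 16)
                    | ((h[off + 2] & 0xff) << 8) | (h[off + 3] & 0xff);
            int code = bin % (int) Math.pow(10, digits);
            return String.format("%0" + digits + "d", code);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // TOTP (RFC 6238): the counter is the number of 30-second steps
    // since the Unix epoch, so the code changes every half minute.
    static String totp(byte[] secret, long unixSeconds, int digits) {
        return hotp(secret, unixSeconds / 30, digits);
    }
}
```

Because the server and the device share only the secret and a clock, even a stolen password is useless without the current code.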
2. Use A Password Manager
Let's be honest, passwords are not disappearing any time soon, and most of us
find them cumbersome and hard to remember. We're also asked to change them
frequently, which makes the whole process even more painful.
Enter the password manager, which you can think of as a "book of passwords"
locked by a master key that only you know.
Not only do they store passwords, but they also generate strong, unique
passwords that save you from using your cat's name or child's birthday...over and
over.
Although Microsoft has enabled password removal on their Microsoft 365
accounts, we're still far from being rid of them forever! As long as we
have sensitive data and corporate data to protect, passwords will be a
critical security measure.
3. Update Your Operating Systems (OS) Regularly
If you're using outdated software, your risk of getting hacked skyrockets. Vendors
such as Apple (iOS), Google, and Microsoft constantly provide security updates to
stay ahead of security vulnerabilities.
Don't ignore those alerts to upgrade your laptop, tablet, or smartphone. To help
with this, ensure you have automatic software updates turned on by default on
your mobile devices. Regularly updating your operating system ensures you have
the latest security configurations available!
When it comes to your laptop, your IT department or your IT services provider
should be pushing you appropriate software updates on a regular basis.
Be sure to take a moment to hit "restart"; otherwise, it won't do you much good!
4. Avoid Public Wi-Fi
Although it's very tempting to use that free Wi-Fi at the coffee shop, airport or
hotel lobby - don't do it.
Any time you connect to another organization’s network, you’re increasing your
risk of exposure to malware and hackers. There are so many online videos and
easily accessible tools that even a novice hacker can intercept traffic flowing
over Wi-Fi, accessing valuable information such as credit card number, bank
account numbers, passwords, and other private data.
The only caveat here is...if you absolutely must use a public Wi-Fi network, make
sure you are also using a VPN to encrypt your internet activity and make it
unreadable to cyber criminals. But remember, even this tactic may not offer
the cybersecurity protection you need to be truly secure when using public
internet access.
Interesting but disturbing fact: although public Wi-Fi and Bluetooth are a
considerable security gap and most of us (91%) know it, 89% of us ignore it.
Choose to be in the minority here!
5. Remote Lock and Data Wipe
Every business should have a Bring Your Own Device (BYOD) policy that
includes a strict remote lock and data wipe policy.
Under this policy, whenever a mobile device is believed to be stolen or lost, the
business can protect the lost data by remotely wiping the device or, at minimum,
locking access.
Where this gets a bit sticky is that you're essentially giving the
business permission to delete all personal data as well, as typically in
a BYOD situation the employee is using the device for both work and play.
Most IT security experts view remote lock and data wipe as a basic and necessary
security caution, so employees should be educated and made aware of any such
policy in advance.
6. Cloud Security and Data Backup
Keep in mind that your public cloud-based apps and services are also being
accessed by employee-owned mobile devices, increasing your company’s risk
of data loss.
That’s why, for starters, back up your cloud data! If your device is lost or stolen,
you'll still want to be able to access any data that might have been compromised
as quickly as possible.
Select a cloud platform that maintains a version history of your files and allows
you to roll back to those earlier versions, at least for the past 30 days.
Google’s G Suite, Microsoft Office 365, and Dropbox support this.
Once those 30 days have elapsed, deleted files or earlier versions are gone for
good.
You can safeguard against this by investing in a cloud-to-cloud backup solution,
which will back up your data for a relatively nominal monthly fee.
7. Understand and Utilize Mobile Device Management (MDM) and Mobile
Application Management (MAM)
Mobile security has become the hottest topic in the IT world. How do we allow
users to access the data they need remotely, while keeping that data safe from
whatever lurks around on these potentially unprotected devices?
The solution is two-fold: Mobile Device Management (MDM) and Mobile
Application Management (MAM).
Mobile Device Management is the configuration, monitoring, and management of
your employees' personal devices, such as phones, tablets, and laptops.
Mobile Application Management is configuring, monitoring, and managing
the applications on those mobile devices. This includes things like Microsoft 365
and authenticator apps.
When combined, MDM and MAM can become powerful security solutions,
preventing unauthorized devices from accessing your company network of
applications and data.
Note that both solutions should be sourced, implemented, and managed by IT
experts, in-house or outsourced, who are familiar with mobile security.
Implementing these 7 best practices for your employees and end-users, and
enforcing strong mobile security policies, will go a long way to keeping
your mobile device security in check.
UNIT 5
CASE STUDIES: Linux System: Introduction – Memory Management
The Linux memory management subsystem is responsible for managing the
memory in the system. It contains implementations of demand paging and
virtual memory.
It also handles memory allocation for user-space programs and for kernel
internal structures, as well as mapping files into the address space of processes,
among other things.
The Linux memory management subsystem is a complex system with many
configurable settings. Almost all of these settings are exposed through the /proc
filesystem and can be read and adjusted with sysctl. These APIs are described in
man 5 proc and in the kernel documentation for /proc/sys/vm/.
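The files under /proc can be read like ordinary text files. As a small
illustration, here is a Python sketch that parses /proc/meminfo-style output;
the "Key: value kB" line format is real, but the sample values below are
invented for illustration.

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style "Key:  value kB" lines into a dict of ints (kB)."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key.strip()] = int(fields[0])
    return info

# Illustrative sample; on a real system you would read open("/proc/meminfo").
sample = """MemTotal:       16315004 kB
MemFree:         2150196 kB
Cached:          6443584 kB"""
print(parse_meminfo(sample)["MemTotal"])  # 16315004
```

The same key/value style is used by many files under /proc, which is why
simple parsers like this are common in monitoring tools.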
Linux memory management has its own jargon. Below we discuss the main
mechanisms of Linux memory management in more detail.
Concepts Overview
Linux memory management is a complex system that supports a wide variety of
hardware, from MMU-less microcontrollers to supercomputers.
Memory management for systems without an MMU is called nommu and is
documented separately; however, many of the concepts are the same.
Here, we will assume that an MMU exists and that the CPU can translate any
virtual address into a physical address.
o Huge Pages
o Virtual Memory Primer
o Zones
o Page Cache
o Nodes
o Anonymous Memory
o OOM killer
o Compaction
o Reclaim
Huge Pages
Address translation requires several memory accesses, and memory accesses are
very slow compared to the speed of the CPU. To avoid spending precious
processor cycles on address translation, CPUs maintain a cache of recent
translations called the Translation Lookaside Buffer (TLB). Using larger (huge)
pages lets each TLB entry cover more memory, which reduces TLB misses.
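The benefit of huge pages can be seen with a little arithmetic: a TLB with a
fixed number of entries covers far more memory when each entry maps a larger
page. The sketch below assumes a hypothetical 64-entry TLB and the common
x86-64 page sizes of 4 KiB and 2 MiB.

```python
KiB = 1024
MiB = 1024 * KiB

def tlb_coverage(entries, page_size):
    """Total memory a TLB can map: number of entries times the page size."""
    return entries * page_size

# Hypothetical 64-entry TLB: base 4 KiB pages vs. 2 MiB huge pages.
print(tlb_coverage(64, 4 * KiB) // KiB)  # 256 -> only 256 KiB covered
print(tlb_coverage(64, 2 * MiB) // MiB)  # 128 -> 128 MiB covered
```

A 512-fold increase in coverage for the same number of TLB entries is why
databases and virtual machines often request huge pages explicitly.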
Virtual Memory Primer
In a computer system, physical memory is a limited resource. Physical memory
is not necessarily contiguous; it may be accessible as a set of distinct address
ranges. Besides, different CPU architectures, and even different
implementations of the same architecture, have different views of how these
ranges are defined.
This makes dealing with physical memory directly quite difficult, and to hide
this complexity the virtual memory mechanism was developed.
Virtual memory separates the details of physical memory from the application
software. It allows keeping only the required parts of a program in physical
memory, and it provides a mechanism for protection and controlled sharing of
data between processes.
Zones
Linux groups memory pages into zones according to their possible usage. For
example, ZONE_DMA contains memory that can be used by devices for DMA,
ZONE_HIGHMEM contains memory that is not permanently mapped into the
kernel's address space, and ZONE_NORMAL contains normally addressed pages.
Page Cache
Because physical memory is volatile, the common case for getting data into
memory is to read it from files.
Whenever a file is read, the data is put into the page cache to avoid expensive
disk accesses on subsequent reads.
Similarly, whenever a file is written, the data is placed in the page cache and is
eventually written back to the backing storage device.
Nodes
Many multi-processor machines are NUMA (Non-Uniform Memory Access)
systems. In such systems, memory is organized into banks that have different
access latency depending on their "distance" from the processor. Each bank is
called a node, and for each node Linux constructs an independent memory
management subsystem. Each node has its own set of zones, lists of free and
used pages, and various statistics counters.
Anonymous Memory
Anonymous memory, or an anonymous mapping, is memory that is not backed
by any file system. Such mappings are created implicitly for the program's heap
and stack, or explicitly by calls to the mmap(2) system call. Usually, anonymous
mappings only define the areas of virtual memory that the program is
permitted to access.
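Anonymous mappings can also be created from user space. The Python sketch
below uses the standard mmap module to request a small anonymous mapping;
passing -1 as the file descriptor means "not backed by any file".

```python
import mmap

# Create a 4096-byte anonymous mapping, i.e. memory not backed by any file,
# similar to what the kernel sets up for a program's heap and stack.
m = mmap.mmap(-1, 4096)   # fileno -1 requests anonymous memory
m[:5] = b"hello"          # pages are demand-allocated and zero-filled
print(m[:5])              # b'hello'
m.close()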
OOM killer
It is possible that the kernel cannot reclaim enough memory for the machine to
continue operating because its memory is exhausted. In that case, the OOM
(out-of-memory) killer selects a process and terminates it in order to free memory.
Compaction
As the system runs, tasks allocate and free memory, and memory becomes
fragmented. Although virtual memory makes it possible to present scattered
physical pages as contiguous, sometimes contiguous physical memory is needed.
Memory compaction addresses the fragmentation problem by moving occupied
pages together to create larger contiguous free areas.
Reclaim
Linux memory management treats pages differently according to their usage.
Pages that can be freed, either because they cache data that also exists
elsewhere on disk or because they can be swapped out to disk, are called
reclaimable.
Process Scheduling in Linux
Introduction
Scheduling of processes is one of the most important roles of any operating
system. In Linux, a Process Scheduler handles process scheduling. The Process
Scheduler uses scheduling algorithms to decide which process should be
executed.
The Linux Scheduling Algorithm
Process scheduling is an important activity performed by the process manager
of the operating system. Scheduling in Linux deals with removing the current
process from the CPU and selecting another process for execution.
Let us learn more about the scheduling strategies (scheduling algorithms) used
in the Linux operating system.
As in other UNIX-based operating systems, in Linux a Process Scheduler
handles process scheduling. The Process Scheduler chooses a process to be
executed and also decides how long the chosen process is to be executed.
Hence, a scheduling algorithm is simply the strategy that helps the process
scheduler decide which process to execute.
Scheduling Process Types in Linux
In the LINUX operating system, we have mainly two types of processes namely -
Real-time Process and Normal Process. Let us learn more about them in detail.
Realtime Process
Real-time processes are processes that cannot be delayed in any situation; they
are treated as urgent processes.
There are mainly two scheduling policies for real-time processes in Linux:
SCHED_FIFO
SCHED_RR
A real-time process will try to preempt all other running processes that have
lower priority.
For example, the migration process, which is responsible for distributing
processes across CPUs, is a real-time process. Let us briefly look at the
scheduling policies used for real-time processes.
SCHED_FIFO
FIFO in SCHED_FIFO means First In First Out. Hence, the SCHED_FIFO policy
schedules the processes according to the arrival time of the process.
SCHED_RR
RR in SCHED_RR means Round Robin. The SCHED_RR policy schedules the
processes by giving them a fixed amount of time for execution. This fixed time is
known as time quantum.
Note: Real-time processes have priority ranging between 1 and 99.
Hence, SCHED_FIFO, and SCHED_RR policies deal with processes having a
priority higher than 0.
Normal Process
Normal Processes are the opposite of real-time processes. Normal processes will
execute or stop according to the time assigned by the process scheduler. Hence, a
normal process can suffer some delay if the CPU is busy executing other high-
priority processes. Let us learn about different scheduling policies used to deal
with the normal processes in detail.
Normal (SCHED_NORMAL or SCHED_OTHER)
SCHED_NORMAL / SCHED_OTHER is the default or standard scheduling
policy used in the LINUX operating system. A time-sharing mechanism is used in
the normal policy. A time-sharing mechanism means assigning some specific
amount of time to a process for its execution. Normal policy deals with all the
threads of processes that do not need any real-time mechanism.
Batch (SCHED_BATCH)
As the name suggests, the SCHED_BATCH policy is used for executing a batch of
processes. This policy is somewhat similar to the Normal policy. SCHED_BATCH
policy deals with the non-interactive processes that are useful in optimizing the
CPU throughput time. SCHED_BATCH scheduling policy is used for a group of
processes having priority: 0.
Note: Throughput refers to the amount of work completed in a unit of time.
Idle (SCHED_IDLE)
The SCHED_IDLE policy deals with processes having extremely low priority;
these are tasks that are executed only when there is absolutely nothing else to
run. The SCHED_IDLE policy is designed for the lowest-priority tasks of the
operating system.
Note: Normal processes have a static priority that is 0.
Hence, SCHED_NORMAL, SCHED_BATCH, and SCHED_IDLE policies only
deal with 0-priority processes.
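These priority ranges can be queried through the scheduler API. The Python
sketch below (guarded, since the calls are Unix-specific) prints the static
priority range of each policy; on Linux the real-time policies report 1..99 and
SCHED_OTHER reports 0..0.

```python
import os

# On Linux, os exposes sched_get_priority_min/max for each policy.
# Real-time policies span 1..99; SCHED_OTHER only allows priority 0.
if hasattr(os, "SCHED_FIFO"):
    for name in ("SCHED_FIFO", "SCHED_RR", "SCHED_OTHER"):
        policy = getattr(os, name)
        print(name,
              os.sched_get_priority_min(policy),
              os.sched_get_priority_max(policy))
```

The same information is available from the C library via the
sched_get_priority_min(2) and sched_get_priority_max(2) calls.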
History of Linux scheduler
Before learning about different LINUX schedulers, let us discuss the history and
evolution of the different schedulers associated with the different LINUX kernels.
Initially, the LINUX kernel used the RR or Round-Robin approach to
schedule the processes. In the Round-Robin approach, a circular queue was
maintained that held the processes. The advantage of using a round-robin
was its fast behavior and easy implementation.
After the introduction of scheduling classes in LINUX Kernel 2.2, the
processes were divided into three process classes:
o Real-time process
o Non-preemptive process (a process does not leave the CPU until the
process makes its system call)
o Normal process.
After the introduction of the O(N) scheduler in LINUX Kernel 2.4, a queue was
used to store processes. At scheduling time, the best process in the queue was
selected according to the priority of the processes.
In the LINUX Kernel 2.6, the complexity of the scheduler was reduced
from O(N) to O(1).
After O(1) scheduler, CFS, or Completely Fair Scheduler was introduced.
Now, let us discuss the different schedulers.
A figure accompanying this section shows the locations of the different
schedulers across kernel versions.
O(n) Scheduler
The LINUX Kernel used the O(n) scheduler between version 2.4 and 2.6.
n is the number of runnable processes in the system.
The O(n) scheduler divides the processor's time into units called epochs.
Each task is allowed to use at most one epoch. If the task is not completed in
the specified epoch, then the scheduler adds half of the remaining time in
the next epoch.
The O(n) scheduler was better than the earlier circular-queue scheduler
because it considered all N runnable processes when selecting the next task.
O(1) Scheduler
O(1) Scheduler was introduced in LINUX Kernel 2.6. The O(1) scheduler is also
called the Big O(1) scheduler or constant-time scheduler. As the name suggests,
it can schedule processes within a constant amount of time, regardless of the
number of processes running on the system.
The O(1) scheduler uses two queues.
The active processes are placed in a queue that stores the priority value of
each process. This queue is termed the run queue (active queue).
The other queue is an array of expired processes called expired queue.
When the allotted time of a process expires, it is placed into the expired
queue.
The scheduler gives priority to interactive tasks and lowers the priorities of the
non-interactive tasks.
CFS Scheduler
CFS stands for Completely Fair Scheduler; it handles the CPU resource
allocation for executing processes. The main aim of CFS is to maximize overall
CPU utilization and performance. CFS uses a very simple algorithm for process
scheduling.
CFS uses a red-black tree instead of a queue for scheduling.
All processes are kept in a red-black tree, and whenever a new process
arrives, it is inserted into the tree.
CFS Usage of Red-Black Tree
As we know, a red-black tree is a self-balancing binary search tree having nodes
colored red or black. These red and black colors are used to ensure that the tree
maintains its balanced nature both during insertions and deletions.
Note: Insertion and deletion time complexity of RB Tree is O(log(n)).
The main reason for using an RB tree is its self-balancing nature. The RB tree is
used to represent runnable tasks and to find the task to be executed next.
Each task is stored in the RB tree keyed by its virtual runtime (vruntime); we
will learn about vruntime later in this article. When CFS needs to pick the next
task to be executed, it picks the leftmost node, because the leftmost node in the
tree is the node with the least virtual runtime.
Priority Scheduling with Dynamic Priority
Dynamic priority scheduling is a type of scheduling algorithm in which
priorities are calculated during the execution of the processes. The main aim of
dynamic priority scheduling in Linux is to adapt to dynamically changing
workloads and to produce an optimal schedule. Dynamic priority gives high
CPU utilization, which means higher utilization of resources and better
performance.
Priority scheduling algorithms are also known as online scheduling
algorithms.
Priority scheduling in Linux is easy to implement, as it often does not
require a separate priority queue.
Virtual Runtime
The Virtual Runtime of a process is the amount of time spent by the process in the
actual execution. Virtual runtime does not include any other time like response
time or waiting time.
The CFS uses virtual runtime (vruntime) to schedule processes. The CFS
maintains variables holding the maximum and minimum virtual runtime,
namely max_vruntime and min_vruntime. These variables are used when
inserting a process as a node into the red-black tree.
New processes, and processes returning to the ready state from the waiting
queue, are assigned min_vruntime. They are then inserted into the RB tree
with min_vruntime as their key.
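The pick-next-task decision can be sketched in a few lines. In the toy model
below, a binary heap stands in for the kernel's red-black tree, and the task
names and vruntime values are invented for illustration; the essential point is
that the runnable task with the smallest vruntime, the tree's leftmost node, is
always picked.

```python
import heapq

# Toy sketch of CFS's pick-next-task: tasks are kept ordered by vruntime and
# the task with the smallest vruntime runs next.  Names/values are invented.
runqueue = [(12.5, "editor"), (3.1, "compiler"), (7.8, "browser")]
heapq.heapify(runqueue)

vruntime, task = heapq.heappop(runqueue)  # "leftmost" = least vruntime
print(task)  # compiler (it has accumulated the least runtime so far)

# After running for a while, the task's vruntime grows and it is re-inserted,
# letting the other tasks catch up: this is the "completely fair" part.
heapq.heappush(runqueue, (vruntime + 2.0, task))
```

The real scheduler uses a red-black tree rather than a heap so that arbitrary
removals (e.g. when a task blocks) also stay O(log n).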
Scheduling Policy
In POSIX 1003.1c-compliant systems, support for scheduling policy is optional.
The Threads Module determines whether scheduling policy is supported by
testing for the definition of the
macro _POSIX_THREAD_PRIORITY_SCHEDULING.
The Linux POSIX implementation of the Threads Module supports all three
scheduling policies as defined by the standard:
SCHED_FIFO — Specifies FIFO Scheduling, where threads run until preempted by
a thread of higher priority, or until blocked. Thread priorities are set by the
application; the system does not dynamically change a thread’s priority.
SCHED_RR — Selects round-robin scheduling, where the highest-priority thread
runs until preempted by a thread of higher priority, until some time quantum
has elapsed, or until blocked. Threads possessing the same priority value are
time-sliced and scheduled in a round-robin fashion. Thread priorities are set by
the application; the system does not dynamically change a thread’s priority.
SCHED_OTHER — Selects the default scheduling policy for an implementation.
This policy typically uses time-slicing with dynamic adjustments to priority
and/or time-slice quantum. According to the Linux manpage sched_setscheduler(2),
under Linux, this policy is “based on the nice level … and increased for each time
quantum the process [i.e. thread] is unable to run.”
Note that Linux POSIX limits the scheduling
policies SCHED_RR and SCHED_FIFO to processes with superuser privileges.
Therefore,
the RWSchedulingPolicy values RW_THR_PREEMPTIVE and RW_THR_TIME
_SLICED_FIXED are limited to superusers as well.
Table 7 shows how the Threads Module Linux POSIX implementation
maps RWSchedulingPolicy values to the underlying POSIX 1003.1c policy
values.
Threads Module RWSchedulingPolicy Value        POSIX 1003.1c Scheduling Policy
RW_THR_PREEMPTIVE                              SCHED_FIFO
RW_THR_TIME_SLICED_FIXED                       SCHED_RR
RW_THR_TIME_SLICED_DYNAMIC                     SCHED_OTHER
  (RW_THR_OTHER may also be used to set this)
Table 7 – Linux: Mapping of RWSchedulingPolicy to POSIX 1003.1c values
Attempts to set any other policy values result in
an RWTHROperationNotAvailable exception. None of these policies may be
explicitly requested unless the process has superuser privileges.
Note that the Threads Module has mapped two policy values to the same
underlying policy, SCHED_OTHER. Calls
to getSchedulingPolicy() return RW_THR_TIME_SLICED_DYNAMIC, since
that value gives the most meaningful interpretation.
In order for a new thread’s scheduling policy to be inherited by default from the
creating thread, you must change the inheritance policy’s default value
from RW_THR_EXPLICIT to RW_THR_INHERIT. If you do not
set RW_THR_INHERIT, a new thread’s scheduling policy defaults
to RW_THR_TIME_SLICED_DYNAMIC.
Scheduling Policy: when to switch and what process to choose. Some scheduling
objectives:
– fast process response time
– avoidance of process starvation
– good throughput for background jobs
– support for soft real time processes
• Linux uses dynamically assigned process priorities for non-real-time processes.
Processes that have been running for a long time have their priorities decreased,
while processes that are waiting have their priorities increased dynamically.
Managing I/O devices:
Device files
Device files are also known as device special files. Device files are employed to
provide the operating system and users an interface to the devices that they
represent. All Linux device files are located in the /dev directory, which is an
integral part of the root (/) filesystem because these device files must be available
to the operating system during the boot process.
One of the most important things to remember about these device files is that they
are most definitely not device drivers. They are more accurately described as
portals to the device drivers. Data is passed from an application or the operating
system to the device file which then passes it to the device driver which then
sends it to the physical device. The reverse data path is also used, from the
physical device through the device driver, the device file, and then to an
application or another device.
Let's look at the data flow of a typical command to visualize this.
In Figure 1, above, a simplified data flow is shown for a common command.
Issuing the cat /etc/resolv.conf command from a GUI terminal emulator such as
Konsole or xterm causes the resolv.conf file to be read from the disk with the disk
device driver handling the device specific functions such as locating the file on the
hard drive and reading it. The data is passed through the device file to the
cat command, and then from the command to the device file and device driver
for pseudo-terminal 6, where it is displayed in the terminal session.
Of course, the output of the cat command could have been redirected to a file in
the following manner, cat /etc/resolv.conf > /etc/resolv.bak in order to create a
backup of the file. In that case, the data flow on the left side of Figure 1 would
remain the same while the data flow on the right would be through the
/dev/sda2 device file, the hard drive device driver and then onto the hard drive
itself.
These device files make it very easy to use standard streams (STD/IO) and
redirection to access any and every device on a Linux or Unix computer. Simply
directing a data stream to a device file sends the data to that device.
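Because device files behave like ordinary files, any program can write to them
with the normal file API. A minimal sketch using the classic /dev/null
character device, which simply discards its input (POSIX systems only):

```python
# Writing to a device file looks just like writing to a regular file.
# /dev/null is a character device that discards whatever it receives.
msg = "this text disappears\n"
with open("/dev/null", "w") as dev:
    written = dev.write(msg)
print(written == len(msg))  # True: the device accepted every character
```

The same code would work unchanged against /dev/tty (your terminal) or a
pseudo-terminal device, since the kernel routes the bytes to the appropriate
driver behind the device file.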
Classification
Device files can be classified in at least two ways. The first and most commonly
used classification is that of the data stream commonly associated with the device.
For example, tty (teletype) and serial devices are considered to be character based
because the data stream is transferred and handled one character or byte at a time.
Block type devices such as hard drives transfer data in blocks, typically a multiple
of 256 bytes.
If you have not already, go ahead and, as a non-root user in a terminal session,
change the present working directory (PWD) to /dev and display a long listing.
This shows a list of device files with their file permissions and their major and
minor identification numbers. For example, the following device files are just a
few of the ones in the /dev directory on my Fedora 24 workstation. They
represent disk and tty type devices. Notice the leftmost character of each line in
the output: the entries that begin with "b" are block devices and the ones that
begin with "c" are character devices.
brw-rw---- 1 root disk 8, 0 Nov 7 07:06 sda
brw-rw---- 1 root disk 8, 1 Nov 7 07:06 sda1
brw-rw---- 1 root disk 8, 16 Nov 7 07:06 sdb
brw-rw---- 1 root disk 8, 17 Nov 7 07:06 sdb1
brw-rw---- 1 root disk 8, 18 Nov 7 07:06 sdb2
crw--w---- 1 root tty 4, 0 Nov 7 07:06 tty0
crw--w---- 1 root tty 4, 1 Nov 7 07:07 tty1
crw--w---- 1 root tty 4, 10 Nov 7 07:06 tty10
crw--w---- 1 root tty 4, 11 Nov 7 07:06 tty11
The more detailed and explicit way to identify device files is using the device
major and minor numbers. The disk devices have a major number of 8 which
designates them as SCSI block devices. Note that all PATA and SATA hard
drives are now managed by the SCSI subsystem, because the old ATA subsystem
was deemed unmaintainable many years ago due to the poor quality of its code.
As a result, hard drives that would previously have been designated as "hd[a-z]"
are now referred to as "sd[a-z]".
You can probably infer the pattern of disk drive minor numbers in the small
sample shown above. Minor numbers 0, 16, 32 and so on up through 240 are the
whole disk numbers. So major/minor 8/16 represents the whole disk /dev/sdb
and 8/17 is the device file for the first partition, /dev/sdb1. Numbers 8/34 would
be /dev/sdc2.
The tty device files in the list above are numbered a bit more simply from tty0
through tty63.
The Linux Allocated Devices file at Kernel.org is the official registry of device
types and major and minor number allocations. It can help you understand the
major/minor numbers for all currently defined devices.
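The major/minor arithmetic above can be checked with Python's os module,
which exposes the same makedev/major/minor helpers used by the C library:

```python
import os

# A device number packs the major number (which driver) and the minor
# number (which device instance).  8/17 from the listing above is the SCSI
# disk driver, first partition of the second disk: /dev/sdb1.
dev = os.makedev(8, 17)
print(os.major(dev), os.minor(dev))  # 8 17
```

For example, os.makedev(8, 34) gives the device number that ls would show as
major/minor 8, 34, i.e. /dev/sdc2.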
ACCESSING LINUX FILES
Linux file access permissions are used to control who is able to read, write and
execute a certain file. This is an important consideration due to the multi-user
nature of Linux systems and as a security mechanism to protect the critical system
files both from the individual user and from any malicious software or viruses.
Access permissions are implemented at the file level, with the appropriate
permissions set based on the file owner, the group owner of the file, and
world-wide access. In Linux, directories are also files, and therefore the file
permissions apply at the directory level as well, although some permissions are
applied differently depending on whether the file is a regular file or a directory.
As devices are also represented as files, the same permission commands can be
applied to control access to certain resources or external devices.
Basic File Permissions
1. Permission Groups
Each file and directory has three user-based permission groups:
owner - The Owner permissions apply only to the owner of the file or
directory; they will not impact the actions of other users.
group - The Group permissions apply only to the group that has been
assigned to the file or directory; they will not affect the actions of other
users.
all users - The All Users permissions apply to all other users on the system;
this is the permission group that you want to watch the most.
Permission Types
Each file or directory has three basic permission types:
read - The Read permission refers to a user's capability to read the contents
of the file.
write - The Write permission refers to a user's capability to write to or
modify a file or directory.
execute - The Execute permission affects a user's capability to execute a file
or view the contents of a directory.
Viewing the Permissions
You can view the permissions by checking the file or directory permissions
in your favorite GUI file manager (not covered here) or by reviewing the
output of the "ls -l" command in a terminal while working in the directory
that contains the file or folder.
The permission in the command line is displayed as: _rwxrwxrwx 1
owner:group
User rights/Permissions
o The first character, marked here with an underscore, is the special
permission flag, which can vary.
o The first set of three characters (rwx) is for the Owner permissions.
o The second set of three characters (rwx) is for the Group permissions.
o The third set of three characters (rwx) is for the All Users permissions.
Following that grouping, the integer displays the number of hard links to
the file.
The last piece is the Owner and Group assignment, formatted as
Owner:Group.
1. Modifying the Permissions
When in the command line, the permissions are edited by using the
command chmod. You can assign the permissions explicitly or by using a
binary reference as described below.
2. Explicitly Defining Permissions
To explicitly define permissions we need to reference the Permission Groups
and Permission Types.
The Permission Groups used are:
u - Owner
g - Group
o - Others (all other users)
a - All (owner, group, and others)
The potential Assignment Operators are + (plus) and - (minus); these are
used to tell the system whether to add or remove the specific permissions.
The Permission Types that are used are:
r - Read
w - Write
x - Execute
So for an example, let’s say we have a file named file1 that currently has the
permissions set to _rw_rw_rw, which means that the owner, group, and all
other users have read and write permission. Now we want to remove the read
and write permissions from the others group.
To make this modification we would invoke the command: chmod o-rw file1
To add the permissions back, we would invoke the command: chmod o+rw
file1
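The same change can be made through the chmod(2) system call rather than
the chmod command. A minimal Python sketch on a throwaway temporary
file; the 0o666 starting mode mirrors the rw_rw_rw example above:

```python
import os
import stat
import tempfile

# Reproduce "chmod o-rw file1": clear the read/write bits for "others"
# while leaving owner and group permissions untouched.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)                      # rw-rw-rw-, like file1 above
mode = os.stat(path).st_mode
os.chmod(path, mode & ~(stat.S_IROTH | stat.S_IWOTH))
print(oct(os.stat(path).st_mode & 0o777))  # 0o660 -> rw-rw----
os.remove(path)
```

The octal form 0o660 is the "binary reference" mentioned earlier: each digit
encodes the rwx bits for owner, group, and others respectively.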
ARCHITECTURE OF iOS: Contents
1 iPhone OS becomes iOS
2 An Overview of the iOS 5 Architecture
3 The Cocoa Touch Layer
3.1 UIKit Framework (UIKit.framework)
3.2 Map Kit Framework (MapKit.framework)
3.3 Push Notification Service
3.4 Message UI Framework (MessageUI.framework)
3.5 Address Book UI Framework (AddressUI.framework)
3.6 Game Kit Framework (GameKit.framework)
3.7 iAd Framework (iAd.framework)
3.8 Event Kit UI Framework
3.9 Accounts Framework (Accounts.framework)
3.10 Twitter Framework (Twitter.framework)
4 The iOS Media Layer
4.1 Core Video Framework (CoreVideo.framework)
4.2 Core Text Framework (CoreText.framework)
4.3 Image I/O Framework (ImageIO.framework)
4.4 Assets Library Framework (AssetsLibrary.framework)
4.5 Core Graphics Framework (CoreGraphics.framework)
4.6 Core Image Framework (CoreImage.framework)
4.7 Quartz Core Framework (QuartzCore.framework)
4.8 OpenGL ES framework (OpenGLES.framework)
4.9 GLKit Framework (GLKit.framework)
4.10 NewsstandKit Framework (NewsstandKit.framework)
5 iOS Audio Support
5.1 AV Foundation framework (AVFoundation.framework)
5.2 Core Audio Frameworks (CoreAudio.framework, AudioToolbox.framework and AudioUnit.framework)
5.3 Open Audio Library (OpenAL)
5.4 Media Player Framework (MediaPlayer.framework)
5.5 Core MIDI Framework (CoreMIDI.framework)
6 The iOS Core Services Layer
6.1 Address Book Framework (AddressBook.framework)
6.2 CFNetwork Framework (CFNetwork.framework)
6.3 Core Data Framework (CoreData.framework)
6.4 Core Foundation Framework (CoreFoundation.framework)
6.5 Core Media Framework (CoreMedia.framework)
6.6 Core Telephony Framework (CoreTelephony.framework)
6.7 EventKit Framework (EventKit.framework)
6.8 Foundation Framework (Foundation.framework)
6.9 Core Location Framework (CoreLocation.framework)
6.10 Mobile Core Services Framework (MobileCoreServices.framework)
6.11 Store Kit Framework (StoreKit.framework)
6.12 SQLite library
6.13 System Configuration Framework (SystemConfiguration.framework)
6.14 Quick Look Framework (QuickLook.framework)
7 The iOS Core OS Layer
7.1 Accelerate Framework (Accelerate.framework)
7.2 External Accessory Framework (ExternalAccessory.framework)
7.3 Security Framework (Security.framework)
7.4 System (LibSystem)
An Overview of the iOS 5 Architecture
As previously mentioned, iOS consists of a number of different software layers,
each of which provides programming frameworks for the development of
applications that run on top of the underlying hardware.
These operating system layers can be presented diagrammatically as illustrated in
Figure 4-1:
Some diagrams designed to graphically depict the iOS software stack show an
additional box positioned above the Cocoa Touch layer to indicate the
applications running on the device. In the above diagram we have not done so
since this would suggest that the only interface available to the app is Cocoa
Touch. In practice, an app can directly call down any of the layers of the stack to
perform tasks on the physical device.
That said, however, each operating system layer provides an increasing level
of abstraction away from the complexity of working with the hardware. As an
iOS developer you should, therefore, always look for solutions to your
programming goals in the frameworks located in the higher level iOS layers
before resorting to writing code that reaches down to the lower level layers.
In general, the higher the layer you program to, the less effort and fewer
lines of code you will have to write to achieve your objective. And as any
veteran programmer will tell you, the less code you have to write, the less
opportunity you have to introduce bugs.
Now that we have identified the various layers that comprise iOS 5, we can
look in more detail at the services provided by each layer and the
corresponding frameworks that make those services available to us as
application developers.
The Cocoa Touch Layer
The Cocoa Touch layer sits at the top of the iOS stack and contains the
frameworks that are most commonly used by iPad application developers. Cocoa
Touch is primarily written in Objective-C, is based on the standard Mac OS X
Cocoa API (as found on Apple desktop and laptop computers) and has been
extended and modified to meet the needs of the iPad hardware.
The Cocoa Touch layer provides the following frameworks for iPad app
development:
UIKit Framework (UIKit.framework)
The UIKit framework is a vast and feature rich Objective-C based programming
interface. It is, without question, the framework with which you will spend most
of your time working. Entire books could, and probably will, be written about the
UIKit framework alone. Some of the key features of UIKit are as follows:
o User interface creation and management (text fields, buttons, labels,
colors, fonts, etc.)
o Application lifecycle management
o Application event handling (e.g. touch screen user interaction)
o Multitasking
o Wireless printing
o Data protection via encryption
o Cut, copy, and paste functionality
o Web and text content presentation and management
o Data handling
o Inter-application integration
o Push notification in conjunction with Push Notification Service
o Local notifications (a mechanism whereby an application running in the
background can gain the user's attention)
o Accessibility
o Accelerometer, battery, proximity sensor, camera and photo library
interaction
o Touch screen gesture recognition
o File sharing (the ability to make application files stored on the device
available via iTunes)
o Bluetooth based peer-to-peer connectivity between devices
o Connection to external displays
Map Kit Framework (MapKit.framework)
If you have spent any appreciable time with an iPad then the chances are
you have needed to use the Maps application more than once, either to get a
map of a specific area or to generate driving directions to get you to your
intended destination. The Map Kit framework provides a programming
interface which enables you to build map based capabilities into your own
applications. This allows you to, amongst other things, display scrollable
maps for any location, display the map corresponding to the current
geographical location of the device and annotate the map in a variety of
ways.
Push Notification Service
The Push Notification Service allows applications to notify users of an event
even when the application is not currently running on the device. Since the
introduction of this service it has most commonly been used by news based
applications. Typically when there is breaking news the service will
generate a message on the device with the news headline and provide the
user the option to load the corresponding news app to read more details.
This alert is typically accompanied by an audio alert and vibration of the
device. This feature should be used sparingly to avoid annoying the user
with frequent interruptions.
Message UI Framework (MessageUI.framework)
The Message UI framework provides everything you need to allow users to
compose and send email messages from within your application. In fact, the
framework even provides the user interface elements through which the user
enters the email addressing information and message content. Alternatively, this
information may be pre-defined within your application and then displayed for
the user to edit and approve prior to sending.
Address Book UI Framework (AddressBookUI.framework)
Given that a key function of the iPad is as a communications device and digital
assistant it should not come as too much of a surprise that an entire framework is
dedicated to the integration of the address book data into your own applications.
The primary purpose of the framework is to enable you to access, display, edit
and enter contact information from the iPad address book from within your own
application.
Game Kit Framework (GameKit.framework)
The Game Kit framework provides peer-to-peer connectivity and voice
communication between multiple devices and users allowing those running the
same app to interact. When this feature was first introduced it was anticipated by
Apple that it would primarily be used in multi-player games (hence the choice of
name) but the possible applications for this feature clearly extend far beyond
games development.
iAd Framework (iAd.framework)
The purpose of the iAd Framework is to allow developers to include banner
advertising within their applications. All advertisements are served by Apple’s
own ad service.
Event Kit UI Framework
The Event Kit UI framework was introduced in iOS 4 and is provided to allow the
calendar events to be accessed and edited from within an application.
Accounts Framework (Accounts.framework)
iOS 5 introduces the concept of system accounts. These essentially allow the
account information for other services to be stored on the iOS device and accessed
from within application code. Currently system accounts are limited to Twitter
accounts, though other services such as Facebook will likely appear in future iOS
releases. The purpose of the Accounts Framework is to provide an API allowing
applications to access and manage these system accounts.
Twitter Framework (Twitter.framework)
The Twitter Framework allows Twitter integration to be added to applications.
The framework operates in conjunction with the Accounts Framework to gain
access to the user's Twitter account information.
The iOS Media Layer
The role of the Media layer is to provide iOS with audio, video, animation and
graphics capabilities. As with the other layers comprising the iOS stack, the Media
layer comprises a number of frameworks which may be utilized when developing
iPad apps. In this section we will look at each one in turn.
Core Video Framework (CoreVideo.framework)
The Core Video Framework provides buffering support for the Core Media
framework. Whilst this may be utilized by application developers it is typically
not necessary to use this framework.
Core Text Framework (CoreText.framework)
The iOS Core Text framework is a C-based API designed to ease the handling of
advanced text layout and font rendering requirements.
Image I/O Framework (ImageIO.framework)
The Image I/O framework, the purpose of which is to facilitate the importing and
exporting of image data and image metadata, was introduced in iOS 4. The
framework supports a wide range of image formats including PNG, JPEG, TIFF
and GIF.
Assets Library Framework (AssetsLibrary.framework)
The Assets Library provides a mechanism for locating and retrieving video and
photo files located on the iPad device. In addition to accessing existing images
and videos, this framework also allows new photos and videos to be saved to the
standard device photo album.
Core Graphics Framework (CoreGraphics.framework)
The iOS Core Graphics Framework (otherwise known as the Quartz 2D API)
provides a lightweight two dimensional rendering engine. Features of this
framework include PDF document creation and presentation, vector based
drawing, transparent layers, path based drawing, anti-aliased rendering, color
manipulation and management, image rendering and gradients. Those familiar
with the Quartz 2D API running on Mac OS X will be pleased to learn that the
implementation of this API is the same on iOS.
Core Image Framework (CoreImage.framework)
A new framework introduced with iOS 5 providing a set of video and image
filtering and manipulation capabilities for application developers.
Quartz Core Framework (QuartzCore.framework)
The purpose of the Quartz Core framework is to provide animation capabilities on
the iPad. It provides the foundation for the majority of the visual effects and
animation used by the UIKit framework and provides an Objective-C based
programming interface for creation of specialized animation within iPad apps.
OpenGL ES framework (OpenGLES.framework)
For many years the industry standard for high performance 2D and 3D graphics
drawing has been OpenGL. Originally developed by the now defunct Silicon
Graphics, Inc (SGI) during the 1990s in the form of GL, the open version of this
technology (OpenGL) is now under the care of a non-profit consortium
comprising a number of major companies including Apple, Inc., Intel, Motorola
and ARM Holdings.
OpenGL for Embedded Systems (ES) is a lightweight version of the full OpenGL
specification designed specifically for smaller devices such as the iPad.
iOS 3 or later supports both OpenGL ES 1.1 and 2.0 on certain iPhone models
(such as the iPhone 3GS and iPhone 4). Earlier versions of iOS and older device
models support only OpenGL ES version 1.1.
GLKit Framework (GLKit.framework)
The GLKit framework is an Objective-C based API designed to ease the task of
creating OpenGL ES based applications.
NewsstandKit Framework (NewsstandKit.framework)
The Newsstand application is a new feature of iOS 5 and is intended as a central
location for users to gain access to newspapers and magazines. The NewsstandKit
framework allows for the development of applications that utilize this new
service.
iOS Audio Support
iOS is capable of supporting audio in AAC, Apple Lossless (ALAC), A-law,
IMA/ADPCM, Linear PCM, µ-law, DVI/Intel IMA ADPCM, Microsoft GSM 6.10
and AES3-2003 formats through the support provided by the following
frameworks.
AV Foundation framework (AVFoundation.framework)
An Objective-C based framework designed to allow the playback, recording and
management of audio content.
Core Audio Frameworks (CoreAudio.framework, AudioToolbox.framework and AudioUnit.framework)
The frameworks that comprise Core Audio for iOS define supported audio types,
playback and recording of audio files and streams and also provide access to the
device’s built-in audio processing units.
Open Audio Library (OpenAL)
OpenAL is a cross platform technology used to provide high-quality, 3D audio
effects (also referred to as positional audio). Positional audio may be used in a
variety of applications though is typically used to provide sound effects in games.
Media Player Framework (MediaPlayer.framework)
The iOS Media Player framework is able to play video in .mov, .mp4, .m4v, and
.3gp formats at a variety of compression standards, resolutions and frame rates.
Core MIDI Framework (CoreMIDI.framework)
Introduced in iOS 4, the Core MIDI framework provides an API for applications
to interact with MIDI compliant devices such as synthesizers and keyboards via
the iPad’s dock connector.
The iOS Core Services Layer
The iOS Core Services layer provides much of the foundation on which the
previously referenced layers are built and consists of the following frameworks.
Address Book Framework (AddressBook.framework)
The Address Book framework provides programmatic access to the iPad Address
Book contact database allowing applications to retrieve and modify contact
entries.
CFNetwork Framework (CFNetwork.framework)
The CFNetwork framework provides a C-based interface to the TCP/IP
networking protocol stack and low level access to BSD sockets. This enables
application code to be written that works with HTTP, FTP and Domain Name
servers and to establish secure and encrypted connections using Secure Sockets
Layer (SSL) or Transport Layer Security (TLS).
Core Data Framework (CoreData.framework)
This framework is provided to ease the creation of data modeling and storage in
Model-View-Controller (MVC) based applications. Use of the Core Data
framework significantly reduces the amount of code that needs to be written to
perform common tasks when working with structured data within an application.
Core Foundation Framework (CoreFoundation.framework)
The Core Foundation framework is a C-based Framework which provides basic
functionality such as data types, string manipulation, raw block data
management, URL manipulation, threads and run loops, date and times, basic
XML manipulation and port and socket communication. Additional XML
capabilities beyond those included with this framework are provided via the
libXML2 library. Though this is a C-based interface, most of the capabilities of the
Core Foundation framework are also available with Objective-C wrappers via the
Foundation Framework.
Core Media Framework (CoreMedia.framework)
The Core Media framework is the lower level foundation upon which the AV
Foundation layer is built. Whilst most audio and video tasks can, and indeed
should, be performed using the higher level AV Foundation framework, access is
also provided for situations where lower level control is required by the iOS
application developer.
Core Telephony Framework (CoreTelephony.framework)
The iOS Core Telephony framework is provided to allow applications to
interrogate the device for information about the current cell phone service
provider and to receive notification of telephony related events.
EventKit Framework (EventKit.framework)
An API designed to provide applications with access to the calendar and alarms
on the device.
Foundation Framework (Foundation.framework)
The Foundation framework is the standard Objective-C framework that will be
familiar to those who have programmed in Objective-C on other platforms (most
likely Mac OS X). Essentially, this consists of Objective-C wrappers around much
of the C-based Core Foundation Framework.
Core Location Framework (CoreLocation.framework)
The Core Location framework allows you to obtain the current geographical
location of the device (latitude, longitude and altitude) and compass readings
from within your own applications. The method used by the device to provide
coordinates will depend on the data available at the time the information is
requested and the hardware support provided by the particular iPad model on
which the app is running (GPS and compass are only featured on recent models).
This will either be based on GPS readings, Wi-Fi network data or cell tower
triangulation (or some combination of the three).
Mobile Core Services Framework (MobileCoreServices.framework)
The iOS Mobile Core Services framework provides the foundation for Apple’s
Uniform Type Identifiers (UTI) mechanism, a system for specifying and
identifying data types. A vast range of predefined identifiers have been defined
by Apple including such diverse data types as text, RTF, HTML, JavaScript,
PowerPoint .ppt files, PhotoShop images and MP3 files.
Store Kit Framework (StoreKit.framework)
The purpose of the Store Kit framework is to facilitate commerce transactions
between your application and the Apple App Store. Prior to version 3.0 of iOS, it
was only possible to charge a customer for an app at the point that they
purchased it from the App Store. iOS 3.0 introduced the concept of the “in app
purchase” whereby the user can be given the option to make additional payments
from within the application. This might, for example, involve implementing a
subscription model for an application, purchasing additional functionality or even
buying a faster car for you to drive in a racing game.
SQLite library
Allows for a lightweight, SQL based database to be created and manipulated from
within your iPad application.
System Configuration Framework (SystemConfiguration.framework)
The System Configuration framework allows applications to access the network
configuration settings of the device to establish information about the
“reachability” of the device (for example whether Wi-Fi or cell connectivity is
active and whether and how traffic can be routed to a server).
Quick Look Framework (QuickLook.framework)
The Quick Look framework provides a useful mechanism for displaying previews
of the contents of files types loaded onto the device (typically via an internet or
network connection) for which the application does not already provide support.
File format types supported by this framework include iWork, Microsoft Office
documents, Rich Text Format, Adobe PDF, image files, public.text files and
comma separated values (CSV) files.
The iOS Core OS Layer
The Core OS Layer occupies the bottom position of the iOS stack and, as such, sits
directly on top of the device hardware. The layer provides a variety of services
including low level networking, access to external accessories and the usual
fundamental operating system services such as memory management, file system
handling and threads.
Accelerate Framework (Accelerate.framework)
The Accelerate Framework provides a hardware optimized C-based API for
performing complex and large number math, vector, digital signal processing
(DSP) and image processing tasks and calculations.
External Accessory Framework (ExternalAccessory.framework)
Provides the ability to interrogate and communicate with external accessories
connected physically to the iPad via the 30-pin dock connector or wirelessly via
Bluetooth.
Security Framework (Security.framework)
The iOS Security framework provides all the security interfaces you would expect
to find on a device that can connect to external networks including certificates,
public and private keys, trust policies, keychains, encryption, digests and Hash-
based Message Authentication Code (HMAC).
System (LibSystem)
As we have previously mentioned, iOS is built upon a UNIX-like foundation. The
System component of the Core OS Layer provides much the same functionality as
any other UNIX like operating system. This layer includes the operating system
kernel (based on the Mach kernel developed by Carnegie Mellon University) and
device drivers. The kernel is the foundation on which the entire iOS platform is
built and provides the low level interface to the underlying hardware. Amongst
other things, the kernel is responsible for memory allocation, process lifecycle
management, input/output, inter-process communication, thread management,
low level networking and file system access.