Unit-2
DISTRIBUTED OPERATING SYSTEMS
1. Client-Server Systems
2. Peer-to-Peer Systems
3. Middleware
4. Three-tier
5. N-tier
Client-Server System
In this system, the server provides an interface through which the client sends its
requests to be executed as actions. After completing the requested activity, the
server sends back a response and transfers the result to the client.
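A minimal in-process sketch of this request/response flow. It is not real network code: the direct function call stands in for sending the request over the network, and handleRequest and the "TIME" request are purely illustrative.
C++
#include <iostream>
#include <string>

// Hypothetical server-side handler: executes the requested action and
// produces a response. In a real system this runs on the server machine
// and the request arrives over the network.
std::string handleRequest(const std::string& request)
{
    if (request == "TIME")
        return "12:00";              // pretend the server computed something
    return "UNKNOWN REQUEST";
}

int main()
{
    // Client side: build a request, "send" it, and wait for the reply.
    std::string request = "TIME";
    std::string response = handleRequest(request);  // stands in for send/receive
    std::cout << "client received: " << response << "\n";
    return 0;
}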
Peer-to-Peer System
Middleware
Three-tier
N-tier
Openness
Scalability
It refers to the fact that the system's efficiency should not degrade as
new nodes are added to the system. The performance of a system with
100 nodes should be roughly the same as that of a system with 1000
nodes.
Resource Sharing
Flexibility
Transparency
Heterogeneity
Fault Tolerance
✓ Solaris:
✓ OSF/1:
✓ Micros:
The MICROS operating system ensures a balanced
data load while allocating jobs to all nodes in the system.
✓ DYNIX:
✓ Locus:
✓ Mach:
➢ Network Applications:
➢ Telecommunication Networks:
➢ Parallel Computation:
Disadvantages
ISSUES in DS:
There are mainly two reasons behind operating system failures. These are as follows:
1. Software Problems
2. Hardware Problems
Software Problems:
1. Improper Drivers
2. Thrashing
3. Corrupt Registry
❖ The registry is a small database that stores all of the details
about the kernel, drivers, and programs. The OS searches its
registry before launching any application.
4. Virus
5. Trojan Horse
❖ You can check whether you have installed the latest version
of Windows on the system. Security fixes must also be kept
up to date. After that, the system will resume normal
operation.
7. Failure to Boot
8. Compatibility Error
Hardware Problems:
1. Power Problem
2. Overheating
3. Motherboard Failure
4. RAM
Communication Primitives:
1. Message Passing
2. Remote procedure call.
5. In the unbuffered option, data is copied directly from one user buffer to
another user buffer.
6. With blocking primitives, the send primitive does not return control to the
user program until the message has been sent (an unreliable blocking
primitive) or until an acknowledgment has been received (a reliable blocking
primitive). A small sketch of blocking vs non-blocking sends follows.
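The difference in calling semantics can be sketched with standard C++ threads. This is only an illustration, not a real transport: transmit, the sleep that models network delay, and the boolean acknowledgment are all assumptions.
C++
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>

// transmit() stands in for handing the message to the network layer;
// the returned future stands in for the acknowledgment.
std::future<bool> transmit(const std::string& msg)
{
    return std::async(std::launch::async, [msg] {
        std::this_thread::sleep_for(std::chrono::milliseconds(100)); // "network delay"
        std::cout << "delivered: " << msg << "\n";
        return true;                                                 // "acknowledgment"
    });
}

// Reliable blocking send: control returns only after the acknowledgment arrives.
void blockingSend(const std::string& msg)
{
    std::future<bool> ack = transmit(msg);
    ack.get();                         // the caller is blocked here
}

// Non-blocking send: control returns immediately; the ack is checked later.
std::future<bool> nonBlockingSend(const std::string& msg)
{
    return transmit(msg);              // the caller keeps running
}

int main()
{
    blockingSend("m1");                            // waits until "m1" is acknowledged
    std::future<bool> ack = nonBlockingSend("m2"); // returns at once
    std::cout << "doing other work...\n";
    ack.get();                                     // collect the acknowledgment later
    return 0;
}
With the non-blocking variant the caller can overlap useful work with the transmission and collect the acknowledgment later.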
Synchronous vs Asynchronous Primitives:
Binding:
RPC Problems:
1.Procedures reside on different machines
• This means we cannot simply jump to the start of the
procedure
• We need to use network communication techniques to
interact with the remote machine (a rough stub sketch follows).
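The sketch below shows what such stub code does for a hypothetical remote add(a, b) procedure. The names (Request, Response, remoteAdd, serverStub) are illustrative, and the network transport is replaced by a direct function call.
C++
#include <iostream>

// Wire format for a hypothetical remote add(a, b) call.
struct Request  { int a; int b; };
struct Response { int result; };

// Server-side stub (skeleton): unmarshals the request, calls the real
// procedure, and marshals the reply. In reality this runs on the remote machine.
Response serverStub(const Request& req)
{
    Response resp;
    resp.result = req.a + req.b;       // the actual remote procedure
    return resp;
}

// Client-side stub: looks like a local call, but packs the parameters into a
// message and hands it to the transport (modelled here as a direct call).
int remoteAdd(int a, int b)
{
    Request req{a, b};                 // marshal the parameters
    Response resp = serverStub(req);   // stands in for send + wait for reply
    return resp.result;                // unmarshal the result
}

int main()
{
    std::cout << remoteAdd(2, 3) << "\n";   // prints 5, as if computed remotely
    return 0;
}
The point is that remoteAdd looks like an ordinary local call to the client, while all the marshalling and communication detail is hidden in the stubs.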
RPC Tools:
• Fortunately, we don't have to write the client and server
stub (skeleton) code ourselves. Instead, RPC tools generate it for us.
• Application Oriented
1. Begin with the application.
2. Build and test a working version that operates on a single
machine.
3. Divide the program into two or more pieces and add
communication protocols to allow each piece to execute
on a separate machine.
MSRPC:
CORBA:
MSRPC2:
COM/DCOM:
Reference:
• Process: Pi
• Event: eij, where i is the process number and j is the jth event
in the ith process.
• tm: (vector) timestamp for message m.
• Ci: vector clock associated with process Pi; the jth element is
Ci[j] and contains Pi's latest value for the current time in
process Pj (a small update sketch follows this list).
• d: drift time; generally d is 1.
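The notation above describes a vector clock Ci. A small sketch of the standard vector clock update rules, assuming two processes: increment your own component by d on every local or send event; on a receive, first take the component-wise maximum with the message timestamp tm, then count the receive as an event. The function names are illustrative.
C++
#include <algorithm>
#include <iostream>
#include <vector>

const int d = 1;  // drift/increment, as in the notes

// Local or send event of process i: increment the own component.
void localEvent(std::vector<int>& C, int i)
{
    C[i] += d;
}

// Receive event of process i with message timestamp tm:
// component-wise maximum first, then count the receive as an event.
void receiveEvent(std::vector<int>& C, int i, const std::vector<int>& tm)
{
    for (size_t k = 0; k < C.size(); ++k)
        C[k] = std::max(C[k], tm[k]);
    C[i] += d;
}

int main()
{
    std::vector<int> C1{0, 0}, C2{0, 0};   // clocks of P1 and P2

    localEvent(C1, 0);                     // e11: C1 = (1,0)
    localEvent(C1, 0);                     // e12 (send): C1 = (2,0), tm = (2,0)
    localEvent(C2, 1);                     // e21: C2 = (0,1)
    receiveEvent(C2, 1, C1);               // e22 (receive): C2 = (2,2)

    std::cout << "C2 = (" << C2[0] << "," << C2[1] << ")\n";
    return 0;
}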
Implementation Rules [IR]:
For Example:
• Take the starting value as 1, since it is the 1st event and there
is no incoming value at the starting point:
e11 = 1
e21 = 1
e12 = e11 + d = 1 + 1 = 2
e13 = e12 + d = 2 + 1 = 3
e14 = e13 + d = 3 + 1 = 4
e15 = e14 + d = 4 + 1 = 5
e16 = e15 + d = 5 + 1 = 6
e22 = e21 + d = 1 + 1 = 2
e24 = e23 + d = 3 + 1 = 4
e26 = e25 + d = 6 + 1 = 7
• When there is an incoming message, follow [IR2], i.e., take the
maximum of Cj and tm + d; this is how the receive events e23 (= 3)
and e25 (= 6) above get their values.
Limitation:
C++
#include <bits/stdc++.h>
using namespace std;

// Returns the larger of two timestamps.
int maxTs(int a, int b)
{
    if (a > b)
        return a;
    else
        return b;
}

// Prints the Lamport timestamps of the events in P1 and P2.
void display(int e1, int e2, int p1[], int p2[])
{
    int i;
    cout << "\nThe timestamps of events in P1:\n";
    for (i = 0; i < e1; i++)
        cout << p1[i] << " ";
    cout << "\nThe timestamps of events in P2:\n";
    for (i = 0; i < e2; i++)
        cout << p2[i] << " ";
    cout << "\n";
}

// Computes Lamport logical clocks for P1 (e1 events) and P2 (e2 events).
// m[i][j] == 1  : P1's event i sends a message received at P2's event j.
// m[i][j] == -1 : P2's event j sends a message received at P1's event i.
void lamportLogicalClock(int e1, int e2, int m[5][3])
{
    int i, j, k, p1[5], p2[3];

    // [IR1]: every local event increments the clock by d = 1.
    for (i = 0; i < e1; i++)
        p1[i] = i + 1;
    for (i = 0; i < e2; i++)
        p2[i] = i + 1;

    // [IR2]: a receive event takes max(own clock, sender's timestamp + d);
    // all later events of that process are then renumbered.
    for (i = 0; i < e1; i++) {
        for (j = 0; j < e2; j++) {
            if (m[i][j] == 1) {
                p2[j] = maxTs(p2[j], p1[i] + 1);
                for (k = j + 1; k < e2; k++)
                    p2[k] = p2[k - 1] + 1;
            }
            if (m[i][j] == -1) {
                p1[i] = maxTs(p1[i], p2[j] + 1);
                for (k = i + 1; k < e1; k++)
                    p1[k] = p1[k - 1] + 1;
            }
        }
    }
    display(e1, e2, p1, p2);
}

int main()
{
    int e1 = 5, e2 = 3, m[5][3];

    // Message pattern: m[1][2] = 1 (P1's 2nd event sends to P2's 3rd event),
    // m[4][1] = -1 (P2's 2nd event sends to P1's 5th event).
    m[0][0] = 0; m[0][1] = 0; m[0][2] = 0;
    m[1][0] = 0; m[1][1] = 0; m[1][2] = 1;
    m[2][0] = 0; m[2][1] = 0; m[2][2] = 0;
    m[3][0] = 0; m[3][1] = 0; m[3][2] = 0;
    m[4][0] = 0; m[4][1] = -1; m[4][2] = 0;

    lamportLogicalClock(e1, e2, m);
    return 0;
}
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery
Deadlock Prevention
However, if we break one of the legs of a table, the table will definitely
fall. The same happens with deadlock: if we can violate one of the four
necessary conditions and prevent them from holding simultaneously,
then we can prevent the deadlock.
1. Mutual Exclusion
• Mutual exclusion, from the resource point of view, means that
a resource can never be used by more than one process
simultaneously. That is fair enough, but it is also the main
reason behind deadlock.
Spooling
!(Hold and wait) = !hold or !wait (negation of hold and wait is,
either you don't hold or you don't wait)
3. No Preemption
4. Circular Wait
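A common way to violate the circular wait condition is to number all resources globally and require every process to acquire them in increasing order, so no cycle of waiting processes can form. A minimal sketch with two mutexes standing in for two resources (the names r1, r2 and worker are illustrative):
C++
#include <iostream>
#include <mutex>
#include <thread>

// Resources are numbered globally: r1 comes before r2. Every process
// (thread here) must lock the lower-numbered resource first.
std::mutex r1, r2;

void worker(const char* name)
{
    std::lock_guard<std::mutex> lock1(r1);  // always r1 first ...
    std::lock_guard<std::mutex> lock2(r2);  // ... then r2
    std::cout << name << " holds both resources\n";
}

int main()
{
    std::thread t1(worker, "P1");
    std::thread t2(worker, "P2");
    t1.join();
    t2.join();
    return 0;
}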
Deadlock avoidance
• In deadlock avoidance, the request for any resource will be
granted if the resulting state of the system doesn't cause
deadlock in the system. The state of the system will
continuously be checked for safe and unsafe states.
• In order to avoid deadlocks, each process must tell the OS the
maximum number of resources it may request to complete its
execution.
Resources Assigned
Process   Type 1   Type 2   Type 3   Type 4
A            3        0        2        2
B            0        0        1        1
C            1        1        1        0
D            2        1        4        0

Resources Still Needed
Process   Type 1   Type 2   Type 3   Type 4
A            1        1        0        0
B            0        1        1        2
C            1        2        1        0
D            2        1        1        2
1. E = (7 6 8 4): total (existing) resources of each type
2. P = (6 2 8 3): resources currently assigned (the column sums of the first table)
3. A = (1 4 0 1): resources available, A = E - P
A safety-check sketch using these numbers follows.
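A sketch of the safety check for this state, in the style of the Banker's algorithm. The matrices are copied from the tables above; the names alloc, need and avail are local to this example. A process can finish if its remaining need fits in what is currently available; when it finishes, it returns its assigned resources.
C++
#include <iostream>
#include <vector>

int main()
{
    const int n = 4, m = 4;   // processes A..D, resource types 1..4
    int alloc[n][m] = { {3,0,2,2}, {0,0,1,1}, {1,1,1,0}, {2,1,4,0} };  // assigned
    int need [n][m] = { {1,1,0,0}, {0,1,1,2}, {1,2,1,0}, {2,1,1,2} };  // still needed
    int avail[m]    = { 1, 4, 0, 1 };                                   // A = E - P
    const char* name[n] = { "A", "B", "C", "D" };

    std::vector<bool> finished(n, false);
    std::vector<int> order;

    // Repeatedly pick a process whose remaining need fits in the available
    // vector; when it finishes it releases everything it was holding.
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < n; ++i) {
            if (finished[i]) continue;
            bool canRun = true;
            for (int j = 0; j < m; ++j)
                if (need[i][j] > avail[j]) { canRun = false; break; }
            if (canRun) {
                for (int j = 0; j < m; ++j)
                    avail[j] += alloc[i][j];
                finished[i] = true;
                order.push_back(i);
                progress = true;
            }
        }
    }

    if ((int)order.size() == n) {
        std::cout << "Safe sequence: ";
        for (int i : order) std::cout << name[i] << " ";
        std::cout << "\n";
    } else {
        std::cout << "Unsafe state / possible deadlock\n";
    }
    return 0;
}
Running this on the numbers above prints the safe sequence A B C D, so the state is safe.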
For Resource
We can snatch one of the resources from its owner (a process) and give
it to another process, with the expectation that it will complete its
execution and release the resource sooner. However, choosing which
resource to snatch is going to be a bit difficult.
Kill a process
Killing a process can solve our problem, but the bigger concern is
deciding which process to kill. Generally, the operating system kills the
process that has done the least amount of work so far.
Deadlock
• Deadlock is a fundamental problem in distributed systems.
• A process may request resources in any order, which may
not be known a priori, and a process can request a resource
while holding others.
• If the sequence of resource allocations to the processes is
not controlled, deadlocks can occur.
• A deadlock is a state where a set of processes request
resources that are held by other processes in the set.
DEADLOCK DETECTION:
1. Deadlock Prevention:
DFS has two components in its services, and these are as follows:
1. Location Transparency
2. Redundancy
Location Transparency
Redundancy
Features
There are various features of the DFS. Some of them are as follows:
Transparency
1. Structure Transparency
2. Naming Transparency
3. Access Transparency
Local and remote files must be accessible in the same manner. The
file system must automatically locate the accessed file and deliver
it to the client.
4. Replication Transparency
When a file is copied across multiple nodes, the copies of the file and
their locations must be hidden from one node to the next.
Scalability
Data Integrity
Many users usually share a file system. The file system needs to
secure the integrity of data saved in a shared file. A
concurrency control method must correctly synchronize
concurrent access requests from several users who are competing
for access to the same file. A file system commonly provides users
with atomic transactions, which are high-level concurrency
management mechanisms, for data integrity.
High Reliability
Ease of Use
Performance
It does not use Active Directory and only permits DFS roots that
exist on the local system. A standalone DFS can only be accessed
on the system on which it was created. It offers no fault tolerance and
cannot be linked to any other DFS.
DFS namespace
Traditional file shares that are linked to a single server use SMB
paths of the form:
\\<SERVER>\<path>\<subpath>
A DFS namespace replaces this with a domain-based path of the form:
\\<DOMAIN.NAME>\<dfsroot>\<path>
Hadoop
2.Openness:
• The openness of a computer system is the characteristic
that determines whether the system can be extended and
reimplemented in various ways
• The challenge to designers is to tackle the complexity of
distributed systems consisting of many components
engineered by different people
• Open systems are characterized by the fact that their key
interfaces are published
• Open distributed systems are based on the provision of a
uniform communication mechanism and published
interfaces for access to shared resources
• Open distributed systems can be constructed from
heterogeneous hardware and software, possibly from
different vendors
3.Security
• Shared data must be protected
❖ Privacy - avoid unintentional disclosure of private data
❖ Security – data is not revealed to unauthorized parties
❖ Integrity – protect data and system state from corruption
• Denial of service attacks – put significant load on the
system, prevent users from accessing it
Security in detail concerned in the following areas:
❖ Authentication, Authorization/Access control: are the
means to identify the right user and user right.
Example:
Bank account, starting balance = $100
Client at bank machine A makes a deposit of $150
Client at bank machine B makes a withdrawal of $100
Which event happened first?
Should the bank charge the overdraft fee?
Partial Failures:
• Detection of failures - may be impossible
• Has a component crashed? Or is it just slow?
• Is the network down? Or is it just slow?
• If it’s slow – how long should we wait?
• Handling of failures
• Re-transmission
• Tolerance for failures
• Roll back partially completed task
• Redundancy against failures
• Duplicate network routes
• Replicated databases
Scalability
• Does the system remain effective as it grows?
• As you add more components:
• More synchronization
• More communication → the system runs slowly.
• Avoiding performance bottlenecks:
• Everyone is waiting for a single shared resource
• In a centrally coordinated system, everyone waits for the
coordinator
Transparency:
Transparency categories:
Introduction to Amoeba
• The microkernel
Server Basics:
• To use the object in the future, the client must present the
correct capability
Object protection:
• When an object is created, the server generates a random check
field, which it stores both in the capability and in its own
tables (sketched below)
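A toy sketch of this check-field idea. Real Amoeba capabilities also carry the server port and a rights field and use one-way functions to protect restricted rights; none of that is modelled here, and the class and member names are illustrative.
C++
#include <cstdint>
#include <iostream>
#include <map>
#include <random>

// Simplified capability: which object it names plus the random check field.
struct Capability {
    uint64_t objectId;
    uint64_t check;
};

class Server {
    std::map<uint64_t, uint64_t> checkTable;   // objectId -> stored check field
    std::mt19937_64 rng{std::random_device{}()};
    uint64_t nextId = 1;
public:
    // On object creation, generate a random check field and remember it.
    Capability createObject()
    {
        Capability cap{nextId++, rng()};
        checkTable[cap.objectId] = cap.check;
        return cap;
    }
    // A request is honoured only if the presented capability matches.
    bool validate(const Capability& cap) const
    {
        auto it = checkTable.find(cap.objectId);
        return it != checkTable.end() && it->second == cap.check;
    }
};

int main()
{
    Server server;
    Capability cap = server.createObject();

    std::cout << std::boolalpha;
    std::cout << server.validate(cap) << "\n";     // true: genuine capability

    Capability forged = cap;
    forged.check ^= 1;                             // a wrong check field is rejected
    std::cout << server.validate(forged) << "\n";  // false
    return 0;
}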
Process Management:
Contains:
• platform description, the process owner's capability, etc.
Memory Management:
Communication:
• Point-to-point (RPC) and Group
The Amoeba Servers:
The File System
AFS caching/sharing :
Introduction:
Coda on a client:
• Files on Coda servers are not stored in traditional file systems; instead,
partitions on the Coda servers contain files that are grouped into volumes. Each
volume has a directory structure like a file system: i.e. a root directory for the
volume and a tree below it.
• Coda holds volume and directory information, access control lists and file
attribute information in raw partitions. These are accessed through a log
based recoverable virtual memory package (RVM) for speed and
consistency.
• Only file data resides in the files in server partitions. RVM has built in
support for transactions - this means that in case of a server crash the
system can be restored to a consistent state without much effort.
• The advantage of this is higher availability of data: if one server fails others
take over without a client noticing the failure. Volumes can be stored on a
group of servers called the VSG (Volume Storage Group).
Coda in action:
Coda is in constant active use at CMU. Several dozen clients use it for
development work (of Coda), as a general-purpose file system and for specific
disconnected applications. The following two scenarios have exploited the
features of Coda very successfully.
There are a number of compelling future applications where Coda could provide
significant benefits.
• WWW replication servers should be Coda clients. Many ISPs are struggling
with a few WWW replication servers: they have too much access traffic to be
served by just a single HTTP server. Using NFS to share the documents to be
served has proven problematic due to performance problems, so manual copying
of files to the individual servers is frequently done.
Getting Coda:
Coda is available for ftp from ftp.coda.cs.cmu.edu. You will find RPM packages for
Linux as well as tarballs with source. Kernel support for Coda will come with the
Linux 2.2 kernels. On the WWW site www.coda.cs.cmu.edu you will find
additional resources such as mailing lists, manuals and research papers.