Distributed Operating Systems (DOS) - Unit 1


Distributed Operating Systems

[CS0445]

Dr Vidya Raj C
Professor CSE
Course Structure
Code: CS0445
Credits: 4
Unit 1 – Introduction to distributed systems
Unit 2 – Synchronization in distributed systems: synchronization algorithms
Unit 3 – Synchronization in distributed systems: election algorithms
Unit 4 – Threads and processor allocation
Unit 5 – Distributed file systems
Unit 6 – Distributed shared memory
Text Book: Distributed Operating Systems – Andrew S. Tanenbaum
Ref Book: Distributed Systems: Concepts and Design – Dollimore
Course relevance
• Distributed applications are all around us and use data that are inherently distributed
• Distributed systems are designed and built so that these applications can be used effectively
What is a distributed system?
• Is a collection of autonomous computers that
appear to the users of the system as a single
computer
– linked by a computer network
– equipped with distributed system software
• Made possible by
– the development of powerful microprocessors
– the invention of high-speed networks
Why distributed systems?
• Enables resource sharing, making usage efficient and cost effective

Resources
– Software: databases, files, objects
– Hardware: printers, network clocks, memory
– Local: RAM, local interface cards
– Remote: NFS, disk drives

What does a DS look like?
Applications of DS
• Banking system - ATMs
• Automated Supply chain management
• Telephone networks and Cellular networks
• Industrial automation system
• Telemedicine systems
• Airlines/ bus/ railway reservation system
• Computer supported cooperative games
Goals of DS

• To create a single system illusion


• To help in efficient resource sharing
• To support distributed applications
Distributed OS
An OS that supports distributed applications running on multiple computers linked by a communication network

Examples: Mach OS, Amoeba, SUN Solaris, Cloud, Arjuna
Advantages of DS over Centralized systems

1. Economics – microprocessors offer a better price/performance ratio than mainframes
2. Speed – a DS may offer more total computing power than a mainframe or centralized system
3. Inherent distribution – many applications are inherently distributed in nature, so a DS is a natural fit
4. Reliability – a DS offers higher reliability than a centralized system
5. Incremental growth – computing power can be expanded gradually as the workload grows
Advantages of DS over independent PC

1. Data sharing – online transactions, railway reservations, etc.
2. Device sharing – printers, fax machines, storage devices, etc.
3. Communication – email, fax, video conferencing, etc.
4. Flexibility – spreading the workload over the available machines in the most cost-effective way
Disadvantages of DS
1. Software
– Development is complex
– Choosing software suited to a specific application is a challenge

2. Communication network
– Load variation on the network
• Tendency to lose messages
• Network saturation

3. Security
– Easy access to data / data sharing
– Need for special security mechanisms
Key characteristics of DS

1. Resource sharing
2. Openness
3. Concurrency
4. Scalability
5. Fault tolerance
6. Transparency
Resource sharing
• Resource types
• Advantage of resource sharing
– Convenience of usage
– Reduces cost on resources
– Data consistency
– Exchange of information
• 2 Models
– Client - Server
– Object - based
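
A minimal sketch of the client-server model of resource sharing, assuming a loopback TCP connection in Python; the port number 5050 and the request text are illustrative only, not part of any standard.

import socket, threading, time

def server():
    # The server owns a shared resource and serves requests from clients.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 5050))           # hypothetical port
    s.listen(1)
    conn, _ = s.accept()
    request = conn.recv(1024).decode()    # receive a request for the resource
    conn.sendall(("served: " + request).encode())
    conn.close()
    s.close()

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)                           # give the server time to start listening

c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(("127.0.0.1", 5050))
c.sendall(b"read shared_file.txt")        # client asks for a shared file
print(c.recv(1024).decode())              # -> served: read shared_file.txt
c.close()
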
Openness
• How can a system be extended?
• A system can be open or closed with respect to hardware or software
• Openness is achieved by
– publishing key interfaces and specifications
– standardizing interfaces
– providing uniform IPC and standard communication protocols for accessing and extending services and resources
• Helps solve system integration problems
Concurrency
• Implies co-existence
• Multi-programming and Multi-processing
• Parallel executions in distributed systems
– Many computers, each with one or more CPUs; every CPU is able to run processes in parallel
– When several clients access the same resource concurrently, the server process must synchronize their actions to ensure they do not conflict (see the sketch below)
– Synchronization must be planned well to derive the benefits of concurrency
• Makes good throughput possible
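
A minimal sketch of that synchronization, assuming Python threads stand in for concurrent clients and a shared counter stands in for the shared resource; all names and amounts are illustrative.

import threading

balance = 0                      # the shared resource held by the server
lock = threading.Lock()

def client_request(amount, repeats):
    global balance
    for _ in range(repeats):
        with lock:               # without this, the read-modify-write steps can interleave and lose updates
            balance += amount

threads = [threading.Thread(target=client_request, args=(1, 100000)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)                   # always 400000 with the lock; may be less without it
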
Scalability
• How does the system handle growth?
• When the system is scaled up, the system and application software should not have to change
• To support scalability
– avoid centralization
– choose naming or numbering schemes carefully
– handle timing problems with caching, data replication and load distribution (a load-distribution sketch follows)
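
A minimal sketch of one load-distribution technique, round-robin dispatch across replicas; the replica names are hypothetical.

from itertools import cycle

replicas = cycle(["server-A", "server-B", "server-C"])   # hypothetical replica names

def dispatch(request):
    target = next(replicas)       # each new request goes to the next replica in turn
    return request + " -> " + target

for i in range(5):
    print(dispatch("req" + str(i)))    # req0 -> server-A, req1 -> server-B, ...
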
Fault tolerance
• A DS must be designed so that it has few failures or a high tolerance to faults
• Two approaches to handling failure:
– hardware redundancy – duplicate h/w components
– software recovery – automatic recovery mechanisms, rollback
• Increases the availability of services
• The network itself is not normally redundant
Transparency
• Hiding the fact that the components of a DS are separated
• The system is seen as a whole rather than as a collection of independent components – the illusion of a single system
• ISO has defined 8 forms of transparency

Forms of transparency
1. Access transparency
2. Location transparency
3. Concurrency transparency
4. Replication transparency
5. Failure transparency
6. Migration transparency
7. Performance transparency
8. Scaling transparency
Access transparency
enables local and remote resources to be accessed
using identical operations
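
A sketch of access transparency, assuming a file path and a URL as the local and remote resources; the class names are illustrative, not a standard API.

from urllib.request import urlopen

class LocalResource:
    def __init__(self, path):
        self.path = path
    def read(self):
        with open(self.path, "rb") as f:   # local access
            return f.read()

class RemoteResource:
    def __init__(self, url):
        self.url = url
    def read(self):
        with urlopen(self.url) as r:       # remote access over the network
            return r.read()

def show(resource):
    # The caller issues the identical read() operation either way;
    # it does not need to know whether the resource is local or remote.
    print(resource.read()[:40])
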

Location transparency
all resources are accessed without the knowledge
of their location

Concurrency transparency
enables hiding the fact that multiple users could
be accessing and sharing the same resource at the
same time
Replication transparency
enables hiding the fact that more than one copy
or instance of a resource may exist

Failure transparency
enables hiding the fact that parts of the system
may fail and still allow the users to access the
resources

Migration transparency
hiding the fact that a resource or info object might
move from one place to another
Performance transparency
as load in the system varies, the system might be
reconfigured to improve performance and it is done
transparently

Scaling transparency
the system and applications are allowed to expand
without changing the system structure or application
algorithm
DS Hardware Concepts
• Flynn’s taxonomy
– Number of instruction streams
– Number of data streams
1. SISD
2. SIMD
3. MISD
4. MIMD
Taxonomy Of Parallel And Distributed Systems
Classification of DS

• Multiprocessor
– Consists of two or more CPUs that share a
common memory
– Communication is through shared memory

[Figure: CPUs and a memory module connected by a common bus]
Classification of DS
• Multicomputer
– Consists of two or more CPUs, each CPU has its
private memory
– Communication is through message passing

[Figure: CPUs, each with its own private memory, connected by a network]
Multiprocessors
1. Bus-based multiprocessors
2. Switched multiprocessors
Bus-based multiprocessors
• Consists of a limited number of CPUs (typically up to 32 or 64), all connected to a common bus along with a memory module
Bus-based multiprocessors
• Presence of shared memory → the memory is coherent (a value written by one CPU is immediately visible to every other CPU)
Problem:
A single shared bus → bus contention
– bus overloading and performance degradation

Solution - ?
Bus-based multiprocessors

• Add a high-speed cache memory between each CPU and the bus
– Cache: high-speed memory that holds the most recently accessed words
– All memory requests go through the cache
Bus-based multiprocessors
• Introducing caches creates a serious problem:
– the memory can become incoherent, and the system is difficult to program
Solution: one possible design
– introduce a write-through cache
– a word written to the cache is also written to memory

In this design,
cache hits for READs cause no bus traffic, while
cache misses for READs and all WRITEs cause bus traffic
Bus-based multiprocessors
• All caches constantly monitor (snoop on) the bus
– Whenever a cache sees a WRITE to a memory address it holds, it either removes that entry from its cache or updates the entry with the new value
– Such a cache is known as a snoopy cache

• A design consisting of snoopy write-through caches is coherent and is invisible to the programmer

• Almost all bus-based multiprocessors use this architecture
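
A toy simulation of the snoopy write-through protocol described above, assuming Python dictionaries stand in for memory and cache lines; this is a sketch of the idea, not a hardware model.

memory = {}
caches = []

class SnoopyCache:
    def __init__(self):
        self.lines = {}
        caches.append(self)

    def read(self, addr):
        if addr in self.lines:            # cache hit: no bus traffic
            return self.lines[addr]
        value = memory.get(addr)          # cache miss: fetch over the bus
        self.lines[addr] = value
        return value

    def write(self, addr, value):
        self.lines[addr] = value
        memory[addr] = value              # write-through: memory is always up to date
        for other in caches:              # snooping: other caches drop their stale copies
            if other is not self:
                other.lines.pop(addr, None)

c1, c2 = SnoopyCache(), SnoopyCache()
c1.write("x", 1)
print(c2.read("x"))    # 1, fetched from memory
c1.write("x", 2)       # c2's copy of "x" is invalidated by snooping
print(c2.read("x"))    # 2, re-fetched, so the caches stay coherent
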
Switched multiprocessors
To build a multiprocessor with more than 64 CPUs, a different interconnection scheme is needed
• Types:
1. Crossbar switch
2. Omega switching network
3. NUMA (Non-Uniform Memory Access)
A crossbar switch
• Memory is divided into modules
• CPUs are connected to memory modules with a
crossbar switch
• When a CPU wants to access a particular memory module, the crosspoint switch connecting them is closed momentarily to allow the access to take place
A crossbar switch
• Advantage
– Many CPUs can access memory at the same time; only CPUs contending for the same memory module at the same moment must wait
• Disadvantage
– With n CPUs and n memories, n x n = n² crosspoint switches are needed; for large n this becomes expensive
Omega Switching Network
• The network is built from 2x2 switches, each with two input and two output ports; each switch can route either input to either output

• Every CPU can access every memory

• Switches can be set in nano seconds or less

• With n CPUs and n memories, the omega network requires log2(n) switching stages, each containing n/2 switches, for a total of (n·log2(n))/2 switches
Omega Switching Network
• Advantage
– For large n, (n·log2(n))/2 switches is far fewer than the n² crosspoints of a crossbar
• Disadvantage
– Large omega networks are both expensive and
slow
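
A quick check of the two switch counts, n x n crosspoints for a crossbar versus (n·log2(n))/2 switches for an omega network, for a few example sizes.

from math import log2

# Crossbar needs n*n crosspoints; omega needs (n * log2(n)) / 2 switches.
for n in (8, 64, 1024):
    crossbar = n * n
    omega = int(n * log2(n) / 2)
    print("n=%5d  crossbar=%8d  omega=%6d" % (n, crossbar, omega))
# For n=1024: the crossbar needs 1,048,576 crosspoints, the omega network only 5,120 switches.
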
Non Uniform Memory Access
• Each CPU can have some memory associated with it, and it can access its own local memory more quickly than the memory of any other CPU
• Better performance in terms of average access time than omega networks
• Requires complex algorithms for good software placement (deciding which code and data should live near which CPU)
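
A small worked example of why placement matters on a NUMA machine: the average access time depends on the fraction of references served from local memory. The latencies below are illustrative, not measured values.

# Average access time on NUMA = f_local * t_local + (1 - f_local) * t_remote
t_local, t_remote = 10, 100        # illustrative latencies in nanoseconds

for f_local in (0.5, 0.9, 0.99):
    avg = f_local * t_local + (1 - f_local) * t_remote
    print("local fraction %.2f -> average %.1f ns" % (f_local, avg))
# Good software placement raises the local fraction and so lowers the average access time.
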
Multiprocessors
So, building a large, tightly-coupled, shared-memory multiprocessor is possible, but it is difficult and expensive
Multicomputer
• Types
– Bus-based multicomputer
– Switched multicomputer
Bus-based multicomputer

[Figure: CPUs with private memories connected by a network/bus]

• Each CPU has its own local memory
• No shared memory
• Communication between CPUs is through message passing over the bus
• Traffic on the bus is low, hence a low-speed LAN can be used for interconnection
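
A minimal sketch of message passing between two nodes of a multicomputer, assuming Python queues stand in for the interconnect; the node roles and the message format are illustrative.

import threading, queue

link = queue.Queue()               # stands in for the bus / LAN between the nodes

def node_a():
    link.put(("add", 2, 3))        # send a request message; no memory is shared

def node_b():
    op, x, y = link.get()          # receive the message and act on private data only
    print("node B computed %d + %d = %d" % (x, y, x + y))

ta, tb = threading.Thread(target=node_a), threading.Thread(target=node_b)
ta.start(); tb.start(); ta.join(); tb.join()
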
Switched Multicomputer

• Each CPU has direct and exclusive access to its own private memory
• Two popular topologies – grid and hypercube

[Figure: a grid of nodes, each containing a CPU and its private memory]
Multicomputer

• Building a multicomputer is easy compared to building a multiprocessor
Software for multiple CPU systems

1. Network OS
2. Distributed OS
3. Multiprocessor OS
Network OS
• Loosely-coupled S/W on Loosely-coupled H/W
• Each machine can run its own OS with its own
application
• Designed primarily to support workstations and PCs connected to a LAN
• Client- Server and Peer to Peer networking models
• Possible to access and manage remote resources
• Communication through shared file systems, with
agreed upon communication protocol
• e.g., Windows NT, Novell NetWare
Distributed OS
• Tightly-coupled S/W on Loosely-coupled H/W
• Goal is to create a single system image in a
distributed environment.
– A single, global IPC mechanism using agreed-upon protocols
– A global protection scheme is a must
– Process management must be the same everywhere
– A global file system with well-defined semantics
– No shared memory

• Communication is through messages
• All machines run the same OS
• e.g., SUN Solaris, Mach OS
Multiprocessor OS
• Tightly-coupled S/W on a Tightly-coupled H/W
• Goal is to offer a single system image by centralizing everything
• All machines run the same OS
• Communication is through shared memory; no agreed-upon communication protocol is needed
• Presence of a single run queue
• e.g., UNIX timesharing systems
Multiprocessor timesharing system
Comparison of NOS, DOS, MOS
Characteristic: NOS / DOS / MOS
1. Does it look like a virtual uniprocessor? No / Yes / Yes
2. Do all machines have to run the same OS? No / Yes / Yes
3. How many copies of the OS are there? N / N / 1
4. Are agreed-upon network protocols required? Yes / Yes / No
5. How is communication achieved? Shared files / Messages / Shared memory
6. Is there a single run queue? No / No / Yes
7. Does file sharing have well-defined semantics? Usually no / Yes / Yes
8. S/W and H/W coupling: Loosely-coupled s/w on loosely-coupled h/w / Tightly-coupled s/w on loosely-coupled h/w / Tightly-coupled s/w on tightly-coupled h/w
