DISTRIBUTED COMPUTING - SYLLABUS

COURSE CODE: CS3551


REGULATION: 2021
INTRODUCTION
Introduction: Definition – Relation to Computer System Components – Motivation – Message-Passing Systems
versus Shared Memory Systems – Primitives for Distributed Communication – Synchronous versus
Asynchronous Executions – Design Issues and Challenges; A Model of Distributed Computations: A Distributed
Program – A Model of Distributed Executions – Models of Communication Networks – Global State of a
Distributed System.
LOGICAL TIME AND GLOBAL STATE
Logical Time: Physical Clock Synchronization: NTP – A Framework for a System of Logical Clocks – Scalar
Time – Vector Time; Message Ordering and Group Communication: Message Ordering Paradigms –
Asynchronous Execution with Synchronous Communication – Synchronous Program Order on Asynchronous
System – Group Communication – Causal Order – Total Order; Global State and Snapshot Recording
Algorithms: Introduction – System Model and Definitions – Snapshot Algorithms for FIFO Channels.
DISTRIBUTED MUTEX AND DEADLOCK
Distributed Mutual Exclusion Algorithms: Introduction – Preliminaries – Lamport's Algorithm – Ricart-Agrawala
Algorithm – Token-Based Algorithms – Suzuki-Kasami's Broadcast Algorithm; Deadlock Detection in
Distributed Systems: Introduction – System Model – Preliminaries – Models of Deadlocks – Chandy-Misra-Haas
Algorithm for the AND Model and OR Model.
CONSENSUS AND RECOVERY
Consensus and Agreement Algorithms: Problem Definition – Overview of Results – Agreement in a Failure-
Free System (Synchronous and Asynchronous) – Agreement in Synchronous Systems with Failures;
Checkpointing and Rollback Recovery: Introduction – Background and Definitions – Issues in Failure Recovery –
Checkpoint-based Recovery – Coordinated Checkpointing Algorithm – Algorithm for Asynchronous Checkpointing
and Recovery.
CLOUD COMPUTING
Definition of Cloud Computing – Characteristics of Cloud – Cloud Deployment Models – Cloud Service Models
– Driving Factors and Challenges of Cloud – Virtualization – Load Balancing – Scalability and Elasticity –
Replication – Monitoring – Cloud Services and Platforms: Compute Services – Storage Services – Application
Services.
UNIT 1: INTRODUCTION

 Definition of distributed system:


 Components located at networked computers communicate and coordinate their actions only by passing messages
 A collection of independent entities that cooperate to solve a problem
 Tanenbaum's definition: a collection of independent computers that appears to its users as a single coherent system
 A collection of processors communicating over a network
Features and Consequences:
 Concurrency: system capacity can be increased by adding more resources to the network
 No global clock: the only communication is by sending messages through the network, so no process can be
aware of a single, global state
 Independent failures: a failed process may go undetected because processes run in isolation
 Autonomy and heterogeneity: processors are loosely coupled; they may run at different speeds and under
different operating systems
Disadvantages of Distributed Systems:
1) Software: it is difficult to develop software for distributed systems
2) Network: saturation (the network becomes overloaded, consuming resources and increasing latency) and
lossy transmission (information may be lost while data is handled or in transit)
3) Security: easy access to shared resources also applies to secret data
4) Absence of a global clock
Differences between Parallel Computing and Distributed Computing
Parallel Computing
1. Provides performance in terms of processor power or memory
2. Interaction between processors is frequent
3. Fine-grained, with low overhead (the additional resources required to complete a task)
4. Assumed to be reliable
5. Short execution time
Distributed Computing
1. Provides convenience in terms of availability, reliability and physical distribution
2. Interaction is infrequent
3. Coarse-grained, with heavier-weight interactions
4. Assumed to be unreliable
5. Long uptime
 Relation to Computer System Components

 Each node consists of a processor (CPU), local memory and a network interface
 Communication between nodes is by message passing, because there is no common memory (a minimal
send/receive sketch in Python is given after this unit's topic list)
 Distributed software is also called middleware
 A layered architecture is used to break down the complexity of system design
 Each computer has its own memory and processing unit
 Computers can communicate with each other through a LAN or a WAN
 A distributed system is an information-processing system that contains a number of independent computers
 These computers cooperate with one another to achieve a specific objective
 Differences between the various computers and the ways they communicate are hidden from users
 The computers do not use a common clock and do not share memory
 A distributed system consists of multiple software components that run on multiple computers
 Middleware (software) enables these components to coordinate their activities
 Users and applications can interact with a distributed system in a consistent way because they operate through
the middleware
 Computers in a DS can be physically close together (on a local network) or distant (on a wide area network)
 A distributed system can consist of any number of possible configurations (different types of computers), such as
mainframes, personal computers, workstations, minicomputers and so on
 Motivation
 Message-Passing Systems versus Shared Memory Systems
 Primitives for Distributed Communication
 Synchronous versus Asynchronous Executions
 Design Issues and Challenges
 A Model of Distributed Computations:
 A Distributed Program
 A Model of Distributed Executions
 Models of Communication Networks
 Global State of a Distributed System
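
Since nodes share no memory, all coordination in the notes above reduces to send and receive primitives over a
channel. Below is a minimal Python sketch of two processes exchanging messages through queues; the process
roles, message fields and payloads are illustrative only, not part of the course material.

    # Minimal message-passing sketch: two processes with no shared memory,
    # communicating only by sending messages over channels (queues).
    from multiprocessing import Process, Queue

    def server(inbox: Queue, outbox: Queue) -> None:
        request = inbox.get()                        # blocking (synchronous) receive
        outbox.put({"type": "reply",
                    "payload": request["payload"].upper()})   # send the reply

    def client(outbox: Queue, inbox: Queue) -> None:
        outbox.put({"type": "request", "payload": "hello"})   # send, then continue
        reply = inbox.get()                          # wait for the reply
        print("client received:", reply)

    if __name__ == "__main__":
        to_server, to_client = Queue(), Queue()
        s = Process(target=server, args=(to_server, to_client))
        c = Process(target=client, args=(to_server, to_client))
        s.start(); c.start()
        s.join(); c.join()

The two processes run in separate address spaces; the only way state moves between them is through the explicit
messages, which is exactly the constraint the notes contrast with shared-memory systems.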

UNIT 2: LOGICAL TIME AND GLOBAL STATE


 Logical Time:
 Physical Clock Synchronization: NTP
 A Framework for a System of Logical Clocks
 Scalar Time – Vector Time (a scalar-clock sketch follows this outline)
 Message Ordering and Group Communication
 Message Ordering Paradigms
 Asynchronous Execution with Synchronous Communication
 Synchronous Program Order on Asynchronous System
 Group Communication
 Causal Order
 Total Order
 Global State and Snapshot Recording Algorithms:
 Introduction
 System Model and Definitions
 Snapshot Algorithms for FIFO Channels
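
A minimal Python sketch of the scalar (Lamport) clock named under "Scalar Time" above, assuming the standard
update rules: increment before each local or send event, and on receive take the maximum of the local clock and
the piggybacked timestamp, then increment. The class and method names are illustrative, not from the course text.

    # Scalar (Lamport) logical clock sketch -- illustrative only.
    class ScalarClock:
        def __init__(self) -> None:
            self.time = 0                       # local scalar clock

        def local_event(self) -> int:
            self.time += 1                      # R1: increment before the event
            return self.time

        def send_event(self) -> int:
            self.time += 1                      # increment, then piggyback on the message
            return self.time                    # timestamp attached to the outgoing message

        def receive_event(self, msg_time: int) -> int:
            # R2: take the max of local and piggybacked time, then increment
            self.time = max(self.time, msg_time) + 1
            return self.time

    # Usage: p and q are the clocks of two processes
    p, q = ScalarClock(), ScalarClock()
    ts = p.send_event()          # p sends a message timestamped ts = 1
    q.local_event()              # q does some internal work (q.time = 1)
    q.receive_event(ts)          # q receives p's message; q.time = max(1, 1) + 1 = 2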

UNIT 3: DISTRIBUTED MUTEX AND DEADLOCK


 Distributed Mutual Exclusion Algorithms
 Introduction
 Preliminaries
 Lamport’s algorithm
 Ricart-Agrawala Algorithm (a reply/defer sketch follows this outline)
 Token-Based Algorithms
 Suzuki-Kasami’s Broadcast Algorithm
 Deadlock Detection in Distributed Systems
 Introduction
 System Model
 Preliminaries
 Models of Deadlocks
 Chandy-Misra-Haas Algorithm for the AND model and OR Model
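
The Ricart-Agrawala algorithm listed above hinges on a reply/defer rule: a site receiving a REQUEST replies
immediately unless it is itself requesting with a smaller (timestamp, site-id) pair, in which case it defers the reply
until it leaves its critical section. Below is a minimal Python sketch of just that rule; message transport, clock
maintenance and the deferred-reply set are omitted, and the names are illustrative.

    # Sketch of the reply/defer rule used by a site in Ricart-Agrawala.
    def should_defer(requesting: bool, my_ts: int, my_id: int,
                     req_ts: int, req_id: int) -> bool:
        """Return True if the incoming REQUEST(req_ts, req_id) must be deferred."""
        if not requesting:
            return False                      # not competing: reply immediately
        # Both sites are requesting: the lower (timestamp, id) pair has priority.
        return (my_ts, my_id) < (req_ts, req_id)

    # Example: site 1 requested at time 4, site 2 requested at time 6.
    assert should_defer(True, 4, 1, 6, 2) is True    # site 1 defers its reply to site 2
    assert should_defer(True, 6, 2, 4, 1) is False   # site 2 replies to site 1 at once
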
UNIT 4: CONSENSUS AND RECOVERY
 Consensus and Agreement Algorithms:
 Problem Definition
 Overview of Results
 Agreement in a Failure-Free System (Synchronous and Asynchronous)
 Agreement in Synchronous Systems with Failures
 Checkpointing and Rollback Recovery:
 Introduction
 Background and Definitions
 Issues in Failure Recovery
 Checkpoint-based Recovery (a checkpoint/rollback sketch follows this outline)
 Coordinated Checkpointing Algorithm
 Algorithm for Asynchronous Checkpointing and Recovery
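
A minimal Python sketch of the checkpoint-and-rollback idea behind the checkpoint-based recovery item above:
a process periodically saves its state to stable storage and, after a failure, restores the most recent saved state
instead of restarting from scratch. The in-memory list standing in for stable storage and the state fields are
illustrative.

    # Checkpoint / rollback sketch -- illustrative only.
    import copy

    stable_storage = []                      # stands in for stable storage

    def take_checkpoint(state: dict) -> None:
        stable_storage.append(copy.deepcopy(state))   # save a snapshot of the state

    def rollback() -> dict:
        return copy.deepcopy(stable_storage[-1])      # restore the latest checkpoint

    state = {"step": 0, "balance": 100}
    take_checkpoint(state)                     # checkpoint C0
    state = {"step": 5, "balance": 80}         # more computation...
    take_checkpoint(state)                     # checkpoint C1
    state = {"step": 9, "balance": -999}       # a fault corrupts the state
    state = rollback()                         # recover: back to C1, not to the start
    print(state)                               # {'step': 5, 'balance': 80}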

UNIT 5: CLOUD COMPUTING


 Definition of Cloud Computing
The three main types of cloud computing are
 public cloud,
 private cloud,
 hybrid cloud.
Within these deployment models, there are four main service models:
 infrastructure as a service (IaaS),
 platform as a service (PaaS),
 software as a service (SaaS),
 serverless computing.

 Characteristics of Cloud
1. On-demand self-service
 Resources are made available to users at the click of a button.
 Examples: AWS, Microsoft Azure, Google Cloud and other public cloud platforms.
2. Resource pooling
 The provider's architecture pools resources to serve many users at the same time.
 Pooling improves security and speeds users' access to resources.
3. Scalability and rapid elasticity
 Resource pooling enables scalability for cloud providers
 Users can add or remove compute, storage, networking and other assets as needed.
4. Pay-per-use pricing
 Customers pay only for what they use (a small metering example is given after this list).
 VMs should be right-sized, turned off while not in use, or scaled down as conditions dictate.
5. Measured service
 The provider and the customer monitor and report on the use of resources and services.
 The measurements quantify the customer's consumption of cloud resources and feed into the pay-per-use model.
6. Resiliency and availability
 Cloud providers use several techniques to guard against downtime,
 for example, automatically distributing workloads across availability zones.
7. Security
 Cloud vendors employ some of the best security experts in the world.
 Some of the biggest financial firms in the world say the cloud is a security asset.
8. Broad network access
 Data can be uploaded and accessed from anywhere with an internet connection.
 Users can work from any location, using a mix of operating systems, platforms and devices.
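
The pay-per-use and measured-service characteristics above amount to a simple metering calculation: the provider
records usage per resource and bills only for what was consumed. A small Python sketch follows; the rates and
usage figures are made up for illustration, not taken from any provider's price list.

    # Measured service / pay-per-use sketch -- rates and usage are hypothetical.
    hourly_rates = {"vm.small": 0.05, "vm.large": 0.20}   # $ per hour (made up)

    usage_hours = [("vm.small", 120), ("vm.large", 30)]   # metered usage this month

    bill = sum(hours * hourly_rates[vm] for vm, hours in usage_hours)
    print(f"monthly charge: ${bill:.2f}")    # 120*0.05 + 30*0.20 = $12.00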

 Cloud Deployment Models


Factors                     | Public Cloud   | Private Cloud                          | Community Cloud                        | Hybrid Cloud
Initial Setup               | Easy           | Complex, requires a professional team  | Complex, requires a professional team  | Complex, requires a professional team
Scalability and Flexibility | High           | High                                   | Fixed                                  | High
Cost Comparison             | Cost-effective | Costly                                 | Cost distributed among members         | Between public and private cloud
Reliability                 | Low            | Low                                    | High                                   | High
Data Security               | Low            | High                                   | High                                   | High
Data Privacy                | Low            | High                                   | High                                   | High

 Cloud Service Models


 Driving Factors and Challenges of Cloud
 Virtualization
 Load Balancing
 Scalability and Elasticity
 Replication
 Monitoring
 Cloud Services and Platforms:
 Compute Services
 Storage Services
 Application Services
