5CS022 Lecture 1
We will cover topics such as
• Serial implementation
• Parallel implementation
Evolution of Distributed Systems
The Birth of Networking
In the early days, computers operated in isolation. The birth of networking marked a transformative era as machines started connecting for basic communication. This allowed data exchange and paved the way for collaborative computing.
The Web Unleashed
The emergence of the Internet connected machines in ways previously unimaginable. This global network laid the foundation for distributed computing on a larger scale.
Peer-to-peer networks gained prominence, promoting decentralized sharing of resources directly between machines.
There are several ways to achieve distributed computing, each with its
own advantages and use cases.
2. Peer-to-Peer Networks
Description: In a peer-to-peer network, all nodes (computers or devices) are considered equal, and they can share resources directly with one another without relying on a central server.
Use Case: Popular for file sharing (e.g., BitTorrent) and decentralized applications.
5. Distributed Databases
Description: Data is distributed across multiple nodes, allowing for improved scalability and fault tolerance. Different types include sharded databases and NoSQL databases.
Use Case: Large-scale applications with high read and write demands, such as social media platforms.
6. Cloud Computing
Description: Resources are provided as a service over the internet. Users can access computing power, storage, and other services on a pay-as-you-go basis.
Use Case: General-purpose computing, scalable web applications, and data storage.
Distributed Computing Models
● Message-Passing Model
● Actor Model
The Message-Passing Model
MPI is not…
● a language or compiler specification
● a specific implementation or product
Reasons for Using MPI
MPI Basic Send Message
MPI_SEND(buf,count,datatype,dest,tag,comm)
● The message buffer is described by buf, count, datatype.
● The target process is specified by dest and comm.
○ dest is the rank of the target process in the communicator specified by
comm.
● tag is a user-defined “type” for the message.
● When this function returns, the data has been delivered to the
system and the buffer can be reused.
○ Thus, this function is "blocking".
○ However, the message might not yet have been received by the target process.
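A minimal sketch of a blocking send with a matching receive, assuming two processes in MPI_COMM_WORLD; the buffer contents, count, tag, and ranks are illustrative choices, not taken from the lecture:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int data[4] = {10, 20, 30, 40};
        /* buf=data, count=4, datatype=MPI_INT, dest=1, tag=0 */
        MPI_Send(data, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
        /* MPI_Send has returned: data[] can safely be reused,
           although rank 1 might not have received the message yet. */
    } else if (rank == 1) {
        int recvbuf[4];
        MPI_Recv(recvbuf, 4, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d %d %d %d\n",
               recvbuf[0], recvbuf[1], recvbuf[2], recvbuf[3]);
    }

    MPI_Finalize();
    return 0;
}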
MPI Basic Receive Message
MPI_RECV(buf,count,datatype,source,tag,comm,status)
● Waits until a message matching source, tag, and comm is received from the system; afterwards, the buf buffer can be read.
● source is a rank in communicator comm, or MPI_ANY_SOURCE.
● Receiving fewer than count occurrences of datatype is OK, but
receiving more is an error.
● status is a structure containing further information:
○ Who sent the message, which is useful if you used MPI_ANY_SOURCE
○ How much data was actually received
○ What tag was used with the message, which is useful if you used
MPI_ANY_TAG
○ MPI_STATUS_IGNORE can be used if we don’t need any additional information.
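A sketch of a receive that inspects the status structure, assuming rank 0 gathers one message from every other rank; the use of MPI_ANY_SOURCE, MPI_ANY_TAG, and MPI_Get_count mirrors the points above, while the payload values are made up for illustration:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        int buf[8];
        MPI_Status status;
        for (int i = 1; i < size; i++) {
            /* Accept a message from any sender with any tag.
               Receiving fewer than 8 ints is fine; more would be an error. */
            MPI_Recv(buf, 8, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            int received;
            MPI_Get_count(&status, MPI_INT, &received);
            printf("Got %d int(s) from rank %d, tag %d\n",
                   received, status.MPI_SOURCE, status.MPI_TAG);
        }
    } else {
        int msg = rank * 100;  /* illustrative payload */
        MPI_Send(&msg, 1, MPI_INT, 0, rank, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}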
Running MPI Programs
● MPI programs can either run on a single computer or be distributed across other computers (nodes) to share the workload.
● In order to run MPI programs on other nodes, the program has to be copied to all the nodes (see the example commands below).
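For example, a typical compile-and-launch sequence might look like the following; mpiexec -n is standard MPI, but the --hostfile option is launcher-specific (shown here in Open MPI style), and the file names are illustrative:

mpicc hello.c -o hello                    # compile with the MPI wrapper compiler
mpiexec -n 4 ./hello                      # run 4 processes on the local machine
mpiexec -n 4 --hostfile hosts ./hello     # spread across the nodes listed in 'hosts'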
Assessment
This assessment is a Portfolio for 5CS022 Distributed and Cloud Systems Programming,
which accounts for 100% of the module marks.
Part 1 – Workshop Tasks
The workshop tasks will contribute 20% of the marks to the Portfolio.
Part 2 – Quizzes
Part 3 – Coursework
The coursework will consist of a number of questions that you will have to answer by writing a short research-based report, and a number of tasks which you will have to carry out by creating a number of specified programs. The coursework will contribute 50% of the marks to the Portfolio.