Unit-1: Peer-to-Peer (P2P) Systems

Peer-to-peer (P2P) systems and cloud computing are two different computing paradigms, but they can be combined to create a hybrid architecture that leverages the advantages of both. P2P systems can be used within a cloud computing environment to improve resource utilization and reduce costs. Here are some ways that P2P systems can be used in cloud computing:

1. Content delivery: P2P systems can be used for content delivery within a cloud computing environment. By distributing content across multiple nodes, P2P systems can reduce the load on individual servers and improve the speed and reliability of content delivery.
2. Load balancing: P2P systems can also be used for load balancing within a cloud computing environment. Nodes can share processing power and network bandwidth with each other, allowing for more efficient use of resources and improved performance.
3. Data storage: P2P systems can be used for distributed data storage within a cloud computing environment. Nodes can store data on each other's devices, reducing the need for centralized storage and improving resilience.
4. Task scheduling: P2P systems can be used for task scheduling within a cloud computing environment. Nodes can share processing power with each other to perform computational tasks, allowing for more efficient use of resources.
5. Fault tolerance: P2P systems can improve fault tolerance within a cloud computing environment. If one node fails, other nodes can pick up the slack and continue to perform the required tasks.

Overall, combining P2P systems with cloud computing can lead to more efficient and cost-effective use of resources, improved performance and reliability, and greater resilience. However, implementing P2P systems within a cloud computing environment requires careful planning and design to ensure security, privacy, and governance of the system.

Cloud computing delivery models and services

Cloud computing delivery models refer to the different ways in which cloud services are delivered to customers. There are three main cloud delivery models:

1. Infrastructure as a Service (IaaS): IaaS provides customers with virtualized computing resources such as servers, storage, and networking. Customers can deploy and run their own software, applications, and operating systems on these virtualized resources. Examples of IaaS providers include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
2. Platform as a Service (PaaS): PaaS provides customers with a platform for developing, deploying, and running applications without the need to manage the underlying infrastructure. PaaS providers typically offer development tools, middleware, and operating systems that allow customers to build and deploy their applications. Examples of PaaS providers include Heroku, Google App Engine, and Microsoft Azure.
3. Software as a Service (SaaS): SaaS provides customers with access to software applications that are hosted and managed by the cloud provider. Customers do not need to install or manage the software themselves, as it is delivered over the internet. Examples of SaaS providers include Salesforce, Dropbox, and Microsoft Office 365.
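The P2P content-delivery and fault-tolerance ideas above rest on deterministically mapping each content item to a responsible peer. A minimal sketch using rendezvous (highest-random-weight) hashing — the peer names and file names below are made up for illustration:

```python
import hashlib

def node_for(key: str, nodes: list[str]) -> str:
    """Pick the peer responsible for a key via rendezvous hashing:
    every (node, key) pair gets a score, the highest score wins."""
    def score(node: str) -> int:
        return int(hashlib.sha256(f"{node}:{key}".encode()).hexdigest(), 16)
    return max(nodes, key=score)

nodes = ["peer-a", "peer-b", "peer-c"]
# Each content item maps deterministically to exactly one peer.
placement = {k: node_for(k, nodes) for k in ["video.mp4", "doc.pdf", "img.png"]}

# If peer-c fails, only the keys it owned need to move; every other
# key keeps its original owner -- which is why node churn in a P2P
# system disturbs so little of the overall placement.
survivors = [n for n in nodes if n != "peer-c"]
for k, owner in placement.items():
    if owner != "peer-c":
        assert node_for(k, survivors) == owner
```

The same scheme covers the distributed-storage use case: a node joining or leaving reassigns only the keys it gains or loses, not the whole data set.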
Cloud computing services refer to the various types of services that cloud providers offer to customers within each delivery model. Here are some common cloud computing services:

1. Compute: Compute services provide customers with virtualized computing resources such as servers, virtual machines, and containers. Customers can deploy and run their own applications and software on these resources.
2. Storage: Storage services provide customers with scalable, reliable, and secure storage for their data. Customers can store and retrieve data from these services using a variety of interfaces, such as file systems, block storage, or object storage.
3. Networking: Networking services provide customers with the ability to connect and manage their cloud resources. These services may include virtual private networks (VPNs), load balancers, firewalls, and domain name system (DNS) management.
4. Database: Database services provide customers with scalable, reliable, and secure database solutions. Customers can use these services to store, manage, and analyze their data.
5. Analytics: Analytics services provide customers with tools for analyzing and visualizing their data. These services may include business intelligence (BI) tools, data warehousing, and machine learning.

Overall, cloud computing delivery models and services provide customers with a flexible and scalable way to access and use computing resources and software applications.

Ethical issues, Vulnerabilities

Ethical issues and vulnerabilities are important considerations in cloud computing, as they can impact the security, privacy, and reliability of cloud services. Here are some examples of ethical issues and vulnerabilities in cloud computing:

1. Data privacy: Cloud computing involves storing and processing sensitive data on remote servers, which can raise concerns about data privacy. Cloud providers must ensure that customer data is secure and protected from unauthorized access, theft, and misuse.
2. Vendor lock-in: Cloud providers may use proprietary technologies and standards, which can make it difficult for customers to migrate to other cloud providers or to use their data outside of the cloud environment. This can lead to vendor lock-in and limit customer choice and flexibility.
3. Transparency and accountability: Cloud providers must be transparent about their policies and practices regarding data security, privacy, and compliance. They should provide clear and concise terms of service and data protection agreements and be accountable for any breaches or failures to meet these standards.
4. Cybersecurity threats: Cloud computing is vulnerable to cybersecurity threats such as hacking, malware, and phishing attacks. Cloud providers must implement strong security measures and regularly update their systems to protect against these threats.
5. Service outages: Cloud computing services can experience service outages and disruptions, which can impact the availability and reliability of critical applications and data. Cloud providers must have contingency plans in place to mitigate these risks and ensure business continuity.

Communication protocols

Communication protocols are a set of rules and procedures that govern the exchange of data between two or more entities in a networked environment. They specify the format and meaning of messages, as well as the procedures for transmitting, receiving, and processing those messages.

There are several types of communication protocols used in networking, including:

1. Transmission Control Protocol/Internet Protocol (TCP/IP): This is the primary protocol suite used on the internet and is responsible for transmitting data packets between devices. TCP provides reliable transmission by ensuring that all data is received and in order, while IP is responsible for routing the data packets to their destination.
2. Hypertext Transfer Protocol (HTTP): This protocol is used to transfer data over the World Wide Web, and it specifies the format of messages and responses between a client and server.
3. Simple Mail Transfer Protocol (SMTP): This protocol is used for sending and receiving email messages over the internet.
4. File Transfer Protocol (FTP): This protocol is used for transferring files between a client and server over a network.
5. Transmission Control Protocol/User Datagram Protocol (TCP/UDP): These are two different transport layer protocols used for sending data packets over a network. TCP provides reliable transmission, while UDP is faster but less reliable.
6. Secure Sockets Layer (SSL) and Transport Layer Security (TLS): These are security protocols used to encrypt data transmitted over the network, providing confidentiality and integrity.

Communication protocols are essential for ensuring that devices in a networked environment can communicate effectively and efficiently with each other. They allow for standardized communication and enable devices from different manufacturers to interoperate seamlessly. By following a set of established rules and procedures, communication protocols help to ensure the security, reliability, and performance of networked systems.

Message delivery rules

Message delivery rules are a set of guidelines and protocols that govern how messages are transmitted, received, and processed in a distributed system. These rules ensure that messages are delivered reliably, in the correct order, and without loss or duplication.

There are several message delivery rules that are commonly used in distributed systems:

1. At-most-once delivery: This rule ensures that a message is delivered at most once. If a failure occurs during the transmission of the message, the sender will not resend the message.
2. At-least-once delivery: This rule ensures that a message is delivered at least once. If a failure occurs during the transmission of the message, the sender will resend the message until it is successfully delivered.
3. Exactly-once delivery: This rule ensures that a message is delivered exactly once. This is the most difficult rule to implement, as it requires coordination between the sender and receiver to ensure that the message is not lost or duplicated.
4. Ordering guarantee: This rule ensures that messages are delivered in the order in which they were sent. This is important for maintaining consistency in distributed systems.
5. Timeliness guarantee: This rule ensures that messages are delivered within a certain time frame. This is important for real-time systems, where delays in message delivery can cause problems.
6. Atomicity guarantee: This rule ensures that a group of related messages are either all delivered or none are delivered. This is important for maintaining consistency in distributed transactions.

By following these message delivery rules, distributed systems can ensure that messages are delivered reliably and consistently, even in the face of failures and network disruptions. This helps to ensure the correctness and reliability of the system, which is essential for mission-critical applications.

Concurrency

Concurrency refers to the ability of a system or program to perform multiple tasks or processes simultaneously. This is important for achieving efficient use of resources and improving the overall performance of a system.

 Concurrency can be achieved through various techniques, such as parallelism, multitasking, and multithreading. Parallelism involves breaking down a large task into smaller subtasks and running them concurrently on multiple processors or cores. Multitasking involves switching between multiple tasks in rapid succession, allowing each task to make progress while waiting for resources or input/output operations. Multithreading involves dividing a single process into multiple threads, each of which can run independently and concurrently.
 Concurrency can provide several benefits, such as increased efficiency, faster execution times, and improved responsiveness. However, it can also introduce several challenges, such as race conditions, deadlocks, and resource contention. These issues can lead to incorrect results or system failures if not properly managed.
 To address these challenges, concurrency control techniques such as locks, semaphores, and monitors can be used to ensure that shared resources are accessed in a safe and consistent manner. These techniques help to prevent race conditions, deadlocks, and other concurrency-related problems.
 Overall, concurrency is a crucial concept in modern computing and is essential for achieving high performance and efficient use of resources. However, it also requires careful design and management to ensure that it is used effectively and safely in complex systems.
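The race conditions and locks discussed above can be demonstrated with a minimal sketch: several threads increment a shared counter, and a `threading.Lock` makes each read-modify-write step atomic so no update is lost.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        # Without the lock, "counter += 1" is a read-modify-write race:
        # two threads could read the same value and one update would be lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- every increment survives because the lock serializes them
```

Semaphores and monitors generalize the same idea: access to shared state is serialized so that concurrent tasks cannot interleave destructively.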
Model concurrency with Petri Nets

Petri Nets are a mathematical modeling technique used to describe and analyze concurrent systems. They are particularly useful for modeling systems where there are multiple processes or resources that can be active at the same time.

Petri Nets consist of two main components: places and transitions. Places represent the states or conditions of the system, while transitions represent the actions or events that can occur in the system. Tokens are used to represent the flow of resources or processes between places and transitions.

 To model concurrency with Petri Nets, we can use the concept of mutual exclusion. Mutual exclusion means that only one process can access a shared resource at a time. We can model this by using a Petri Net with two places: one for the resource and one for the process. The transition between the two places represents the process acquiring the resource, while the reverse transition represents the process releasing the resource.
 Another way to model concurrency with Petri Nets is by using the concept of synchronization. Synchronization means that two or more processes must work together in a coordinated way to accomplish a task. We can model this by using a Petri Net with multiple places and transitions, with tokens representing the state of each process. The transitions between the places represent the events or actions that cause the processes to move forward or backward in their states.
 Petri Nets are a powerful tool for modeling concurrency, as they can help us to identify potential problems and optimize the performance of complex systems. By analyzing Petri Nets, we can gain insights into the behavior of concurrent systems and make improvements to their design and implementation.

The architecture of a distributed system

Distributed systems are computer systems that consist of multiple interconnected computers that work together to perform a common task. These systems are designed to enable the sharing of resources, data, and processing power across multiple nodes, allowing for more efficient and scalable computing.

The architecture of a distributed system can vary depending on its intended use and the specific requirements of the application. However, there are several common architectural patterns that are often used in distributed systems, including:

1. Client-server architecture: In this architecture, the system is divided into two parts - the client and the server. The client sends requests to the server, which then processes the requests and sends back a response.
2. Peer-to-peer architecture: In this architecture, all nodes in the system have equal responsibilities and can communicate with each other directly, without the need for a centralized server.
3. Message-oriented architecture: In this architecture, nodes in the system communicate with each other by exchanging messages, rather than by accessing shared resources or data.
4. Service-oriented architecture: In this architecture, the system is composed of loosely-coupled, independent services that communicate with each other to perform a larger task.
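The mutual-exclusion Petri Net described in the section above can be simulated with a small sketch. This is an illustrative toy, assuming a marking stored as a place-to-token-count dict and transitions that fire only when every input place holds enough tokens; the place and transition names are made up:

```python
# Marking: two processes idle, one resource token, nobody in the critical section.
marking = {"idle": 2, "resource": 1, "critical": 0}

# Each transition maps a name to (input places, output places) with token counts.
transitions = {
    # acquire: a process leaves "idle" and consumes the resource token
    "acquire": ({"idle": 1, "resource": 1}, {"critical": 1}),
    # release: the process returns the resource token and goes back to idle
    "release": ({"critical": 1}, {"idle": 1, "resource": 1}),
}

def enabled(name: str) -> bool:
    """A transition is enabled only if all input places hold enough tokens."""
    inputs, _ = transitions[name]
    return all(marking[p] >= n for p, n in inputs.items())

def fire(name: str) -> None:
    """Consume input tokens and produce output tokens."""
    assert enabled(name), f"{name} is not enabled"
    inputs, outputs = transitions[name]
    for p, n in inputs.items():
        marking[p] -= n
    for p, n in outputs.items():
        marking[p] += n

fire("acquire")                # one process enters the critical section
assert not enabled("acquire")  # mutual exclusion: the resource token is gone
fire("release")                # the process leaves; the resource is free again
assert enabled("acquire")
```

The key property falls directly out of the token rules: while one process holds the single resource token, no second "acquire" can fire, which is exactly the mutual exclusion the net is meant to model.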
Distributed systems can provide several benefits, such as increased scalability, fault tolerance, and performance. However, they also introduce several challenges, such as network latency, data consistency, and security. These challenges must be addressed through careful system design, robust communication protocols, and effective concurrency control mechanisms.

Overall, the architecture of a distributed system plays a critical role in its performance, scalability, and resilience. By selecting the appropriate architecture and designing the system with the specific requirements in mind, it is possible to build robust and efficient distributed systems that meet the needs of a wide range of applications.

The architecture of a parallel system

The architecture of a parallel system refers to the design and organization of the hardware and software components that enable the system to perform multiple tasks simultaneously. Parallel systems are designed to improve computational performance and efficiency by dividing tasks into smaller parts that can be executed concurrently on multiple processors or cores.

The architecture of a parallel system can be characterized by several key components, including:

1. Processing Elements (PEs): PEs are the individual computing units in a parallel system, each of which is capable of executing a portion of the overall computation. PEs can take many forms, including CPUs, GPUs, FPGAs, and custom-designed accelerators.
2. Interconnects: Interconnects are the communication channels that connect the PEs together and allow them to exchange data and synchronize their computations. Interconnects can take many forms, including buses, networks, and high-speed serial links.
3. Memory Hierarchy: The memory hierarchy in a parallel system refers to the different levels of memory that are used to store data and instructions. This hierarchy typically includes several levels of cache memory, as well as main memory and disk storage.
4. Control Unit: The control unit is responsible for coordinating the activities of the PEs and ensuring that they work together effectively. This includes managing the flow of data between the PEs, scheduling tasks, and coordinating memory access.

Parallel systems can be further classified based on their organization and topologies. Some common parallel system architectures include:

1. Shared-memory systems: In a shared-memory system, all PEs have access to a common pool of memory, allowing them to communicate and share data quickly and efficiently.
2. Distributed-memory systems: In a distributed-memory system, each PE has its own local memory, and communication between PEs is accomplished through message passing over a network.
3. Hybrid systems: Hybrid systems combine elements of both shared-memory and distributed-memory architectures, using a combination of local and shared memory to balance performance and scalability.

Overall, the architecture of a parallel system plays a critical role in determining its performance, scalability, and efficiency. By selecting the appropriate architecture and designing the system with the specific requirements in mind, it is possible to build powerful and efficient parallel systems that can tackle complex computational tasks with ease.
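The distributed-memory style described above — each PE owning private state and communicating only by message passing — can be sketched with worker threads standing in for PEs and queues standing in for the interconnect. This is a toy setup on one machine, not a real parallel computer:

```python
import queue
import threading

def pe_worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """A stand-in processing element: it touches no shared data and
    communicates only via messages (distributed-memory style)."""
    chunk = inbox.get()        # receive a subtask over the "interconnect"
    outbox.put(sum(chunk))     # send a partial result back

inbox, outbox = queue.Queue(), queue.Queue()
data = list(range(1, 101))                            # task: sum 1..100
chunks = [data[i:i + 25] for i in range(0, 100, 25)]  # split into 4 subtasks

threads = [threading.Thread(target=pe_worker, args=(inbox, outbox)) for _ in chunks]
for t in threads:
    t.start()
for chunk in chunks:
    inbox.put(chunk)                         # scatter subtasks to the PEs
partials = [outbox.get() for _ in chunks]    # gather partial results
for t in threads:
    t.join()

print(sum(partials))  # 5050
```

Swapping the queues for direct reads of a shared list would turn the same sketch into the shared-memory style — faster to communicate, but then the locking concerns from the Concurrency section apply.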