Cloud Computing 1st-Unit
Cloud services are accessible over the internet from various devices,
providing ubiquitous access to computing resources.
Resource Pooling:
Users can control and manage their own resources through web-based
interfaces or portals, facilitating direct interaction with the cloud
environment.
Managed Service:
Cloud Computing for Healthcare
[Figure: Healthcare cloud architecture connecting a public/private cloud provider with the patient portal/application, specialist's portal/application, pharmacies, and a diagnostic lab.]
Cloud Provider: Company offering tailored cloud services for healthcare.
Healthcare Applications: Specialized software like EHR, telemedicine, and
billing systems running on cloud infrastructure.
Cloud Storage & Data Centers: Scalable storage for secure management of
patient records, medical images, and lab results.
Security & Compliance Services: Cloud provider ensures HIPAA (Health
Insurance Portability and Accountability Act) compliance with encryption,
access controls, audits, and monitoring.
Patient Portal/Application: Allows patients to view medical records, schedule
appointments, and communicate with healthcare providers.
Doctor's Portal/Application: Designed for healthcare professionals to access
patient records, review diagnostic reports, and communicate with patients.
Specialist's Portal/Application: Tailored for specialists to access specific
patient data and diagnostic tools related to their expertise.
Cloud Computing for Energy System
•Energy systems utilize thousands of sensors for real-time maintenance data collection to monitor conditions and predict failures.
•Critical components within energy systems, such as bearings in wind turbines, require careful monitoring to prevent failures.
•Phasor Measurement Units (PMUs) collect real-time data in power grids for system state estimation and failure prediction.
•Maintenance and repair of complex energy systems are costly and time-consuming, with failures leading to significant losses and supply outages for consumers.
Cloud Computing for Education
•Cloud computing facilitates broader
access to quality online education
through collaboration tools and
information management systems.
•Universities, colleges, and schools
utilize cloud-based applications for
admissions, administrative tasks,
online/distance education, exams,
student progress tracking, and feedback
collection.
•Cloud-based online learning systems
offer high-quality educational resources
to students while reducing IT
infrastructure costs for educational
institutions.
Cloud Computing for Transportation Systems
• Modern Intelligent Transportation Systems (ITS) rely on data from
various sources for providing advanced services like route guidance and
dynamic vehicle routing.
• Challenges in data collection and organization for ITS arise due to the
large size of databases and the lack of real-time analysis tools.
• Recent advancements in massive-scale data processing systems offer a
promising solution for storage and analysis of large volumes of data in
ITS.
Cloud computing for manufacturing industry:
• Industrial Control Systems (ICS) such as Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS), and Programmable Logic Controllers (PLCs) generate continuous monitoring and control data.
• Cloud-based real-time collection, management, and analysis of ICS data
enhance system state estimation, plant and personnel safety, and prevent
catastrophic failures.
VIRTUALIZATION
Guest OS
• A guest OS is installed within a virtual
machine alongside the host OS in
virtualization setups.
• Multiple guest OS instances can run on a single host in virtualization environments, and the guest operating systems may differ from one another and from the host OS.
Full Virtualization:
In full virtualization, the hypervisor completely emulates the underlying hardware, so unmodified guest operating systems run as if on physical machines, unaware that they are virtualized.
d. Least Connections:
In least connections load balancing, the incoming requests are routed to the server with the least
number of connections.
e. Priority:
In priority load balancing, each server is assigned a
priority. The incoming traffic is routed to the highest
priority server as long as the server is available. When
the highest priority server fails, the incoming traffic is
routed to a server with a lower priority.
f. Overflow:
Overflow load balancing is similar to priority load
balancing. When the incoming requests to highest
priority server overflow, the requests are routed to a
lower priority server.
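As a rough illustration of the last two methods, the sketch below (assuming hypothetical server objects with a connection count, a priority, and an availability flag) selects a target server using least-connections and priority rules:

# Minimal sketch of least-connections and priority server selection.
# The server names, fields, and availability flag are hypothetical.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    priority: int             # lower number = higher priority
    active_connections: int
    available: bool = True

def least_connections(servers):
    """Route to the available server with the fewest active connections."""
    candidates = [s for s in servers if s.available]
    return min(candidates, key=lambda s: s.active_connections)

def priority_route(servers):
    """Route to the highest-priority available server; when it fails
    (becomes unavailable), traffic overflows to lower-priority servers."""
    candidates = [s for s in servers if s.available]
    return min(candidates, key=lambda s: s.priority)

servers = [Server("web-1", priority=1, active_connections=12),
           Server("web-2", priority=2, active_connections=3)]
print(least_connections(servers).name)   # web-2 (fewest connections)
print(priority_route(servers).name)      # web-1 (highest priority)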
Scalability and Elasticity
1.Scalability:
Definition: Scalability is the ability of a system to handle
an increasing amount of work or a growing number of
users by adding resources (e.g., servers, storage, or
network capacity) to the existing infrastructure.
Types of Scalability:
• Vertical Scalability: This involves increasing the capacity of existing resources (e.g., upgrading CPU, RAM) within a single server or virtual machine.
• Horizontal Scalability: Also known as "scaling out," it involves adding more machines or instances to a system, typically in a distributed or clustered fashion.
Benefits of Scalability:
• Improved performance and responsiveness during high traffic
periods.
• Cost optimization by matching resource allocation to actual
demand.
• Enhanced fault tolerance and reliability through redundancy.
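As a back-of-the-envelope illustration of horizontal scaling (the request rates below are assumed numbers, not measurements), the required number of identical instances for a target load can be estimated as follows:

# Estimate how many identical instances are needed for a target request rate.
# Both rates are illustrative assumptions.
import math

target_requests_per_second = 1200
requests_per_second_per_instance = 250    # assumed capacity of one instance

instances_needed = math.ceil(target_requests_per_second / requests_per_second_per_instance)
print(instances_needed)   # 5 -- scale out by adding instances rather than upgrading one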
Elasticity:
Definition: Elasticity is a subset of scalability and refers to the automatic or
dynamic allocation and deallocation of resources in response to changing
workload demands. It ensures that resources are available when needed and
released when no longer necessary.
Characteristics of Elasticity:
Automatic Resource Management: Cloud platforms use auto-scaling policies to add
or remove resources based on predefined conditions (e.g., CPU utilization, network
traffic).
Real-time Responsiveness: Elastic systems react quickly to changes in demand,
often within seconds.
Pay-as-You-Go: Elasticity aligns with the pay-as-you-go pricing model, allowing
organizations to minimize costs during periods of lower demand.
Benefits of Elasticity:
• Efficient resource utilization, reducing operational
costs.
• Maintains optimal performance under varying
workloads.
• Simplifies resource management and minimizes
manual intervention.
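A minimal sketch of the auto-scaling behaviour described above, assuming illustrative CPU thresholds and instance limits (real cloud platforms expose this through their own auto-scaling policies):

# Hypothetical auto-scaling decision: scale out above an upper CPU
# threshold, scale in below a lower one, and stay put otherwise.
def autoscale(current_instances, avg_cpu_percent,
              scale_out_at=70.0, scale_in_at=30.0,
              min_instances=2, max_instances=10):
    """Return the desired instance count for the next interval."""
    if avg_cpu_percent > scale_out_at and current_instances < max_instances:
        return current_instances + 1      # add capacity under load
    if avg_cpu_percent < scale_in_at and current_instances > min_instances:
        return current_instances - 1      # release idle capacity
    return current_instances              # demand is within the target band

print(autoscale(current_instances=3, avg_cpu_percent=85.0))   # 4
print(autoscale(current_instances=3, avg_cpu_percent=20.0))   # 2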
Deployment
• Monitoring includes assessing CPU, memory, disk, and I/O usage across application tiers
to ensure performance requirements are met.
Deployment Refinement:
• Deployment refinements are made based on
performance evaluation, utilizing methods like
vertical scaling, horizontal scaling, and alternative
load balancing strategies.
• Various alternatives such as adjusting server
interconnections and replication strategies are
explored to ensure the application meets
performance requirements.
Replication
Replication is used to create and maintain multiple copies of
the data in the cloud. Replication of data is important for
practical reasons such as business continuity and disaster
recovery. In the event of data loss at the primary location, organizations can continue to operate from secondary data sources. With real-time replication of data, organizations can achieve faster recovery from failures.
Array-based Replication:
Array-based replication uses compatible storage arrays to automatically copy data from a local storage array to a remote storage array.
Because replication is performed by the storage arrays themselves, independent of the hosts, array-based replication can work in heterogeneous environments with different operating systems.
Array-based replication uses Network Attached Storage (NAS) or a Storage Area Network (SAN) to replicate the data.
A drawback of array-based replication is that it requires similar arrays at the local and remote locations.
Thus the costs of setting up array-based replication are higher than for the other approaches.
Network-based Replication
Network-based replication uses an appliance that sits on the network and intercepts packets
that are sent from hosts and storage arrays.
The intercepted packets are replicated to a secondary location.
The benefits of this approach are that it supports heterogeneous environments and requires only a single point of management.
However, this approach involves higher initial costs due to replication hardware and
software.
Host-based Replication
Host-based replication runs on standard servers and uses software to transfer data from a
local to remote location.
The host acts as the replication control mechanism.
An agent is installed on each host that communicates with the agents on the other hosts.
Host-based replication can be either block-based or file-based.
Block-based replication typically requires dedicated volumes of the same size on both the local and remote servers.
File-based replication requires less storage as compared to block-based replication.
File-based replication additionally allows administrators to choose the specific files or folders to be replicated.
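A minimal sketch of file-based, host-based replication, assuming hypothetical local and replica directories and an illustrative choice of file patterns:

# An agent-like helper that copies selected files to a replica directory.
# Directory names and file patterns are illustrative assumptions.
import shutil
from pathlib import Path

def replicate_files(source_dir, replica_dir, patterns=("*.db", "*.log")):
    """Copy files matching the chosen patterns to the replica location."""
    source, replica = Path(source_dir), Path(replica_dir)
    replica.mkdir(parents=True, exist_ok=True)
    for pattern in patterns:
        for file in source.glob(pattern):
            shutil.copy2(file, replica / file.name)   # copy preserves metadata

replicate_files("primary_data", "replica_data")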
Monitoring
Cloud resources can be monitored using monitoring services offered by cloud service providers. Monitoring services allow cloud users to collect and analyze data for various monitoring metrics.
Network Function Virtualization (NFV)
Network Function Virtualization (NFV) replaces dedicated network hardware appliances with virtualized network functions (VNFs) that run on standard servers. Key benefits include:
•Cost Reduction: NFV eliminates the need for dedicated hardware appliances, reducing capital and operational expenses.
•Scalability: VNFs can be dynamically scaled up or down based on network
traffic and service demand.
•Flexibility: NFV allows network operators to deploy and manage network
services more flexibly, adapting to changing requirements and market conditions.
•Faster Service Deployment: Virtualized network functions can be provisioned
and deployed more quickly than traditional hardware-based solutions.
•Vendor Neutrality: NFV encourages interoperability and vendor-neutral
solutions, making it easier for network operators to choose best-of-breed
components.
MapReduce
The MapReduce model simplifies the process of parallelizing and distributing data
processing tasks, allowing developers and data engineers to work with large datasets
without worrying about the complexities of distributed computing. The model consists
of two primary phases:
1.Map Phase:
1. Input data is divided into smaller chunks or splits.
2. A "Map" function is applied to each split independently in parallel.
3. The Map function processes the input data and produces intermediate key-value pairs.
4. These intermediate key-value pairs are typically sorted and grouped by key.
2.Reduce Phase:
1. A "Reduce" function is applied to each unique key in parallel.
2. The Reduce function takes a set of intermediate key-value pairs with the same key and
performs any desired aggregation or computation.
3. The final output is generated by the Reduce function and typically consists of key-
value pairs.
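As a concrete illustration of the two phases, here is a minimal word-count sketch in plain Python; the map, shuffle, and reduce helpers are illustrative and stand in for what a real framework such as Hadoop distributes across a cluster:

# Word count expressed in the MapReduce style on a single machine.
from collections import defaultdict

def map_phase(split):
    """Map: emit an intermediate (word, 1) pair for every word in a split."""
    return [(word, 1) for word in split.split()]

def shuffle(intermediate_pairs):
    """Group intermediate key-value pairs by key (sort/group step)."""
    grouped = defaultdict(list)
    for key, value in intermediate_pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Reduce: aggregate all counts for one key."""
    return key, sum(values)

splits = ["cloud computing scales out", "cloud services scale on demand"]
intermediate = [pair for split in splits for pair in map_phase(split)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(counts)   # e.g. {'cloud': 2, 'computing': 1, 'scales': 1, ...}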
MapReduce provides several key benefits:
• Scalability: jobs are split across large clusters of commodity machines and processed in parallel.
• Fault tolerance: failed map or reduce tasks can be re-executed on other nodes without restarting the whole job.
• Simplicity: developers write only the Map and Reduce functions, while the framework handles partitioning, scheduling, and data movement.
IDENTITY AND ACCESS MANAGEMENT (IAM)
• Permissions: Permissions represent the specific actions or operations that users or roles can perform on resources. They define what users are allowed or denied to do.
• Policies: Policies are sets of rules and permissions that determine who can access specific resources and what actions they can perform. Policies are often attached to roles or users to control access.
IAM services in cloud computing provide the framework for defining and managing user identities, roles, permissions, and policies within a cloud environment:
1.User Authentication: IAM services enable organizations to authenticate users who want
to access cloud resources. This typically involves username/password authentication or the
use of multi-factor authentication (MFA) for added security.
2.User Identity Management: Cloud IAM allows administrators to create, manage, and
delete user accounts. It also provides capabilities for defining user attributes and storing
user information.
3.Role-Based Access Control (RBAC): IAM services enable organizations to define roles
and assign permissions to those roles. Users or groups can then be associated with roles,
making it easier to manage access control at scale. RBAC simplifies permission
management and reduces the risk of unauthorized access.
4.Fine-Grained Access Control: Cloud IAM often provides fine-grained control over
permissions. This means you can specify exactly which actions users or roles can perform
on specific resources, such as read, write, or delete access.
5.Policy Management: Policies define what actions are allowed or denied for specific resources, and IAM services allow administrators to create, manage, and attach these policies to users, groups, or roles.
6. Auditing and Monitoring: IAM services often include auditing and monitoring capabilities.
You can track who accessed resources, what actions were performed, and when they occurred.
This is crucial for security and compliance purposes.
7.Temporary Access: Cloud IAM services often support the concept of temporary or time-
bound access. This is useful for granting temporary permissions to users or services when
needed.
8.Integration with Other Services: IAM services are tightly integrated with other cloud
services. They can work in conjunction with services like Virtual Private Cloud (VPC),
databases, storage, and more to ensure secure access to resources.
9.Identity Federation: Many cloud providers support identity federation, allowing you to
integrate your on-premises identity systems with cloud IAM. This enables single sign-on (SSO)
and centralized identity management.
10.Compliance and Security: IAM services help organizations meet compliance requirements
by ensuring that access control policies align with regulatory standards. They also enhance
security by reducing the risk of unauthorized access and data breaches.
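A hedged sketch of the role-based access control idea from point 3: roles map to permissions and users map to roles. The role names, actions, and users below are hypothetical and do not correspond to any specific provider's IAM API:

# Allow an action only if one of the user's roles grants the permission.
ROLE_PERMISSIONS = {
    "viewer": {"storage:read"},
    "editor": {"storage:read", "storage:write"},
    "admin":  {"storage:read", "storage:write", "storage:delete"},
}

USER_ROLES = {
    "alice": {"editor"},
    "bob":   {"viewer"},
}

def is_allowed(user, action):
    """Check whether any role assigned to the user includes the action."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_allowed("alice", "storage:write"))   # True
print(is_allowed("bob", "storage:delete"))    # False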
SERVICE LEVEL AGREEMENTS
Service Level Agreements (SLAs) in cloud computing are formal agreements between a
cloud service provider (CSP) and its customers, specifying the level of service,
performance, and availability that the customer can expect from the cloud services.
SLAs are essential for establishing clear expectations, responsibilities, and consequences in
case the agreed-upon service levels are not met.
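As an illustration of how availability targets in an SLA translate into allowed downtime (assuming a 30-day billing period; actual SLAs define their own measurement windows):

# Allowed downtime per month for a given availability percentage.
def allowed_downtime_minutes(availability_percent, period_minutes=30 * 24 * 60):
    """Minutes of downtime permitted over the period at this availability."""
    return period_minutes * (1 - availability_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% availability -> {allowed_downtime_minutes(sla):.1f} minutes/month")
# 99.0% -> 432.0, 99.9% -> 43.2, 99.99% -> 4.3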
BILLING
1.Pay-As-You-Go (PAYG): Elastic pricing
1. Description: With PAYG, customers are charged based on their actual usage of cloud
resources. They pay for the computing power, storage, data transfer, and other resources they
consume, typically on an hourly or per-minute basis.
2. Use Case: PAYG is suitable for businesses with fluctuating workloads or for testing and
development environments where resource needs vary.
2.Reserved Instances (RIs): Fixed pricing
1. Description: RIs allow customers to reserve cloud resources (e.g., virtual machines) for a fixed
period, usually one to three years, at a significantly reduced cost compared to PAYG rates.
2. Use Case: RIs are cost-effective for stable, predictable workloads with long-term resource
requirements.
3.Spot Instances: Spot pricing
1. Description: Spot Instances enable customers to bid on unused cloud resources, and they are
granted access to those resources when their bid exceeds the current spot price. Prices for spot
instances are typically lower than PAYG rates.
2. Use Case: Spot Instances are suitable for workloads that can tolerate interruptions and are
cost-sensitive.
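A simple cost comparison of the first two models, using made-up hourly rates and usage figures rather than any real provider's prices:

# Compare pay-as-you-go against a reserved instance for one month.
payg_rate_per_hour = 0.10        # assumed on-demand rate ($/hour)
reserved_rate_per_hour = 0.06    # assumed effective reserved rate ($/hour)
hours_used = 500                 # assumed actual usage this month

payg_cost = payg_rate_per_hour * hours_used
reserved_cost = reserved_rate_per_hour * 730   # reserved capacity is paid for the whole month

print(f"Pay-as-you-go: ${payg_cost:.2f}")      # $50.00 -- pays only for hours used
print(f"Reserved:      ${reserved_cost:.2f}")  # $43.80 -- cheaper once usage is steady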