Cloud Computing
Essential Characteristics:
o On-demand self-service
o Broad network access
o Resource pooling
o Rapid elasticity
o Measured service
Deployment Models:
o Public Cloud
o Private Cloud
o Community Cloud
o Hybrid Cloud
1. On-Demand Self-Service
Definition:
"A consumer can unilaterally provision computing capabilities, such
as server time and network storage, as needed automatically
without requiring human interaction with each service provider."
Analysis:
This characteristic emphasizes the autonomy provided to consumers in
managing and provisioning resources. Users can request and allocate
computing resources such as virtual machines or storage without needing
to interact with the service provider, thereby reducing delays and
improving efficiency.
2. Broad Network Access
Definition:
"Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick
client platforms (e.g., mobile phones, tablets, laptops, and
workstations)."
Analysis:
This ensures cloud services are accessible from diverse devices and
platforms via standard protocols. The emphasis on heterogeneity supports
a wide range of devices and operating systems, enhancing accessibility
and usability for end-users.
3. Resource Pooling
Definition:
"The provider's computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and
virtual resources dynamically assigned and reassigned according to
consumer demand. There is a sense of location independence in
that the customer generally has no control or knowledge over the
exact location of the provided resources but may be able to specify
location at a higher level of abstraction (e.g., country, state, or
datacenter). Examples of resources include storage, processing,
memory, and network bandwidth."
Analysis:
Resource pooling leverages a multi-tenant architecture to optimize
resource utilization. Consumers benefit from location-independent
services where resources are abstracted, ensuring cost efficiency and
scalability. At the same time, providers can dynamically allocate resources
based on demand.
4. Rapid Elasticity
Definition:
"Capabilities can be elastically provisioned and released, in some
cases automatically, to scale rapidly outward and inward
commensurate with demand. To the consumer, the capabilities
available for provisioning often appear to be unlimited and can be
appropriated in any quantity at any time."
Analysis:
This characteristic highlights the flexibility of cloud services to handle
varying workloads. The ability to scale up or down dynamically ensures
that consumers pay only for the resources they use, providing cost
savings while maintaining service performance during peak and off-peak
periods.
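To make the scale-out/scale-in idea concrete, here is a minimal sketch of a threshold-based elasticity rule; the utilization thresholds and instance limits are illustrative assumptions, not any provider's defaults.

```python
# Threshold-based elasticity sketch: grow or shrink the instance count with demand.
# Thresholds and limits below are illustrative assumptions.
def desired_instances(current: int, cpu_utilization: float,
                      scale_out_at: float = 0.75, scale_in_at: float = 0.25,
                      minimum: int = 1, maximum: int = 20) -> int:
    """Return the new instance count for the observed average CPU utilization."""
    if cpu_utilization > scale_out_at:
        current += 1          # scale outward under load
    elif cpu_utilization < scale_in_at:
        current -= 1          # scale inward when idle, so unused capacity is released
    return max(minimum, min(maximum, current))

print(desired_instances(4, 0.82))  # 5 -> scale out
print(desired_instances(4, 0.10))  # 3 -> scale in
```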
5. Measured Service
Definition:
"Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction
appropriate to the type of service (e.g., storage, processing,
bandwidth, and active user accounts). Resource usage can be
monitored, controlled, and reported, providing transparency for both
the provider and consumer of the utilized service."
Analysis:
The metering and monitoring capabilities of cloud systems ensure
transparency and accountability in resource usage. Both providers and
consumers gain insights into consumption patterns, enabling cost
management and resource optimization. This characteristic underpins the
pay-as-you-go model, which is a core benefit of cloud computing.
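As a small illustration of how metering underpins pay-as-you-go billing, the sketch below totals a bill from metered usage records; the resource names and unit prices are illustrative assumptions, not real provider rates.

```python
# Pay-as-you-go billing sketch: total charge = sum of metered usage * unit price.
# Resource names and unit prices are illustrative assumptions.
UNIT_PRICES = {
    "vm_hours": 0.05,           # per VM-hour of processing
    "storage_gb_month": 0.02,   # per GB-month of storage
    "egress_gb": 0.09,          # per GB of outbound bandwidth
}

def bill(usage: dict) -> float:
    """Return the total charge for a dict of metered usage quantities."""
    return sum(UNIT_PRICES[resource] * quantity for resource, quantity in usage.items())

print(bill({"vm_hours": 720, "storage_gb_month": 100, "egress_gb": 50}))  # 42.5
```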
The service models are depicted using "L-shaped" horizontal and vertical
bars, instead of the traditional "three-layer cake" stack. This
representation highlights the flexibility and modularity of cloud service
models. The reason for this depiction is that while cloud services can be
dependent on one another within the service stack, it is also possible for
these services to function independently and interact directly with the
resource abstraction and control layer. This flexibility allows for different
architectures and configurations based on the requirements of each
service and the specific layer of the cloud infrastructure.
Silos:
Information Silo: An information silo in cloud computing occurs
when user clouds are isolated, and their management system
cannot interoperate with other private clouds. These silos can often
be seen in Platform-as-a-Service (PaaS) offerings like
Force.com or QuickBase, which create isolated ecosystems within
the cloud infrastructure.
Private Virtual Networks and Silos: When private virtual
networks are set up within an IaaS framework (such as creating
private subnets), it often results in silos. These silos limit
interoperability between different clouds, leading to a more isolated
environment. This isolation can enhance security and protection but
at the cost of flexibility.
Vendor Lock-in: Silos often lead to vendor lock-in, where
organizations become dependent on a particular cloud provider's
ecosystem and tools. This can limit the ability to switch to a
different provider or integrate with external systems.
Kubernetes Pods:
A Pod is the smallest execution unit in Kubernetes. It encapsulates
one or more containers (like Docker containers) that share the same
network namespace, storage volumes, and configuration data. Pods
are ephemeral by nature, meaning they are short-lived and can be
automatically recreated if they fail or if the node they run on fails.
Pods are the fundamental building blocks for deploying and
managing applications in a Kubernetes cluster.
Benefits of Pods:
1. Simplified Communication: When pods contain multiple
containers, communication and data sharing are simplified as all
containers in a pod share the same network namespace and can
communicate via localhost.
2. Scalability: Pods make it easy to scale applications. Kubernetes can
automatically replicate pods and scale them up or down based on
demand.
3. Efficient Resource Sharing: Containers within a pod share the
same resources, making it efficient for tightly coupled applications
to run together.
Pod Communication:
Internal Communication: Containers within a pod can
communicate with each other using localhost, as they share the
same network namespace.
External Communication: Pods can communicate with each other
across the cluster using their unique IP addresses. Kubernetes
automatically assigns a cluster-private IP address to each pod,
which allows it to interact with other pods without needing to map
ports or explicitly link them.
Exposing Ports: Pods expose their internal ports to communicate
with the outside world or other services within the cluster.
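A minimal sketch of a two-container Pod, written as a Python dictionary in the shape of a Pod manifest; the names, images, and the optional use of the official Kubernetes Python client at the end are illustrative assumptions and require a reachable cluster.

```python
# Sketch of a Pod manifest with two containers sharing one network namespace.
# Names and images are illustrative assumptions.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-pod", "labels": {"app": "web"}},
    "spec": {
        "containers": [
            {
                "name": "web",                     # main application container
                "image": "nginx:1.25",             # assumed image/tag
                "ports": [{"containerPort": 80}],  # port exposed inside the cluster
            },
            {
                "name": "sidecar",                 # helper container in the same pod
                "image": "busybox:1.36",
                # reaches the web container over localhost because both containers
                # share the pod's network namespace
                "command": ["sh", "-c",
                            "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"],
            },
        ]
    },
}

# Submitting the manifest with the official Kubernetes Python client
# (pip install kubernetes; needs a kubeconfig pointing at a cluster).
if __name__ == "__main__":
    from kubernetes import client, config
    config.load_kube_config()
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod_manifest)
```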
Self-Service Provisioning:
End users can independently provision compute resources, such as
server time and network storage, for almost any workload. This
eliminates the traditional need for IT administrators to manage and
allocate these resources.
Elasticity:
Cloud computing allows organizations to scale resources up or down
according to demand. This flexibility removes the need for
significant upfront investments in local infrastructure, ensuring cost
efficiency.
Pay-Per-Use:
Cloud resources are metered at a granular level, enabling users to
pay only for the resources they consume.
Workload Resilience:
Cloud service providers (CSPs) implement redundancy across
multiple regions to ensure resilient storage and consistent
availability of workloads, minimizing downtime.
Migration Flexibility:
Organizations can migrate workloads to and from the cloud or
between different cloud platforms. This capability enables cost
savings and provides access to emerging services and technologies.
Broad Network Access:
Cloud computing allows users to access data and applications from
anywhere using any device with an internet connection, enhancing
flexibility and collaboration.
Multi-Tenancy and Resource Pooling:
Multi-tenancy enables multiple customers to share the same
physical infrastructure while maintaining privacy and security.
Resource pooling allows providers to service numerous customers
simultaneously, with large and flexible resource pools to meet
diverse demands.
Advantages of Cloud Computing
Cost Management:
Cloud infrastructure reduces capital expenditures by eliminating the
need to purchase and maintain hardware, facilities, utilities, and
large data centers. Companies also save on staffing costs, as cloud
providers manage data center operations. Additionally, the high
reliability of cloud services minimizes downtime, reducing
associated costs.
Data and Workload Mobility:
Cloud storage enables users to access data from anywhere using
any internet-connected device. This eliminates the need for physical
storage devices like USB drives or external hard drives. Remote
employees can stay connected and productive, while vendors
ensure automatic updates and upgrades, saving time and effort.
Business Continuity and Disaster Recovery (BCDR):
Storing data in the cloud ensures accessibility even in emergencies
such as natural disasters or power outages. Cloud services facilitate
quick data recovery, enhancing BCDR strategies and ensuring
workload and data availability despite disruptions.
1. Cloud Security:
Security is often cited as the most critical challenge in cloud
computing. Organizations face risks such as data breaches, hacking
of APIs and interfaces, compromised credentials, and authentication
issues. Moreover, there is often a lack of transparency about how
and where sensitive data is managed by cloud providers. Effective
security requires meticulous attention to cloud configurations,
business policies, and best practices.
2. Cost Unpredictability:
Pay-as-you-go subscription models, coupled with the need to scale
resources for fluctuating workloads, can make it difficult to predict
final costs. Cloud services are often interdependent, with one service
utilizing others, leading to complex and sometimes unexpected
billing structures. This unpredictability can result in unforeseen
expenses.
3. Lack of Capability and Expertise:
The rapid evolution of cloud technologies has created a skills gap.
Organizations often struggle to find employees with the expertise
required to design, deploy, and manage cloud-based workloads
effectively. This lack of capability can hinder cloud adoption and
innovation.
4. IT Governance:
Cloud computing's emphasis on self-service capabilities can
complicate IT governance. Without centralized control over
provisioning, deprovisioning, and infrastructure management,
organizations may struggle to manage risks, ensure compliance,
and maintain data quality.
5. Compliance with Industry Regulations:
Moving data to the cloud can create challenges in adhering to
industry-specific regulations. Organizations must know where their
data is stored to maintain compliance and proper governance, which
can be difficult when relying on third-party cloud providers.
6. Management of Multiple Clouds:
Multi-cloud deployments, while advantageous in some cases, often
exacerbate the challenges of managing diverse cloud environments.
Each cloud platform has unique features, interfaces, and
requirements, complicating unified management efforts.
7. Cloud Performance:
Performance issues, such as latency, are largely beyond an
organization's control when relying on cloud services. Network
outages and provider downtimes can disrupt business operations if
contingency plans are not in place.
8. Building a Private Cloud:
Architecting, implementing, and managing private cloud
infrastructures can be a complex and resource-intensive process.
This challenge is magnified when private clouds are integrated into
hybrid cloud environments.
9. Cloud Migration:
Migrating applications and data to the cloud is often more
complicated and costly than initially anticipated. Migration projects
frequently exceed budget and timelines. Additionally, the process of
repatriating workloads and data back to on-premises infrastructure
can create unforeseen challenges related to cost and performance.
10. Vendor Lock-In:
Switching between cloud providers can result in significant
difficulties, including technical incompatibilities, legal and regulatory
constraints, and substantial costs for data migration. This vendor
lock-in can limit flexibility and increase long-term dependency on
specific providers.
Cloud Computing Examples and Use Cases
Cloud computing and traditional web hosting are often confused, but they
differ in several key aspects.
Cloud services offer three characteristics that set them apart from
traditional web hosting:
1. On-demand provisioning: Users receive cloud services on demand and
typically pay only for what they consume, metered by the minute or hour.
2. Elasticity: Cloud services are elastic, meaning users can scale their
services up or down as required, giving them more control over their
resources.
3. Provider-managed service: The provider fully manages the underlying
infrastructure, so the consumer needs little more than a device and an
internet connection.
The cloud service market is diverse, with many providers offering a range
of services. The three largest public cloud service providers (CSPs)
dominating the industry are:
Amazon Web Services (AWS)
Microsoft Azure
Google Cloud Platform (GCP)
Other notable providers include:
Apple
Citrix
IBM
Salesforce
Alibaba
Oracle
VMware
SAP
Joyent
Rackspace
The Cloud Security Alliance (CSA) Stack Model defines the security
responsibilities between the cloud service provider and the customer
across different service models. This model helps clarify where the
provider's responsibilities end and where the customer's responsibilities
begin.
Key Points of the CSA Model:
Service Models:
Security Inheritance:
Security Boundaries:
Since data stored in the cloud can be accessed from anywhere, there must
be a mechanism to isolate data and protect it from direct client access.
Brokered Cloud Storage Access is an approach for isolating storage in the
cloud. In this approach, two services are created:
A broker, which has full access to the storage but no access to the client.
A proxy, which has no access to the storage but can communicate with
both the client and the broker.
Encryption:
Encryption protects data from unauthorized access, but it does not
prevent data loss.
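Below is a hedged sketch of client-side encryption before data is handed to cloud storage, using the third-party `cryptography` package (an assumption, not something these notes prescribe); it also illustrates why encryption protects confidentiality but does not by itself prevent data loss.

```python
# Client-side encryption sketch using the `cryptography` package
# (assumed installed: pip install cryptography). Encryption keeps stored bytes
# unreadable without the key, but losing the ciphertext or the key still means
# losing the data, so backups are a separate concern.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep the key outside the cloud provider
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"customer record")   # what would actually be uploaded
plaintext = fernet.decrypt(ciphertext)            # only the key holder can recover it
assert plaintext == b"customer record"
```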
The frontend represents the user's interaction with the cloud. It includes:
Client Infrastructure:
Infrastructure (IaaS)
Cloud infrastructure relies on virtual machine (VM) technology, allowing a
single physical machine to run multiple VMs. The software responsible for
managing these VMs is known as the Virtual Machine Monitor (VMM) or
hypervisor.
Platforms (PaaS)
A cloud platform provides the necessary hardware and software to build
custom web apps or services that leverage the platform’s capabilities. It
encompasses the full software stack, except for the presentation layer,
enabling developers to focus on building applications without managing
underlying infrastructure.
Virtual Appliances
A virtual appliance is a software solution that integrates an application
with its operating system, packaged for use in a virtualized environment.
Unlike a complete virtual machine platform, a virtual appliance contains
only the software stack required to run specific applications, including the
application and a minimal operating system. Virtual appliances are easier
to manage and update as they are bundled as a single unit, making it
simpler to deploy and maintain them in virtualized environments.
Examples include Linux-based solutions like Ubuntu JeOS.
Clients can connect to a cloud service using various devices and methods.
Below are the most common methods and security techniques to ensure
safe connections.
1. Web Browser:
2. Proprietary Applications:
1. Secure Protocols:
o Examples:
Microsoft RDP (Remote Desktop Protocol): Enables
secure remote desktop access.
3. Data Encryption:
1. Full Virtualization
Advantages:
This type uses binary translation to virtualize instruction sets and emulate
hardware using software instruction sets. Examples include:
Virtual PC
VMware Server
o Microsoft Hyper-V
o Oracle VM
o Citrix Hypervisor
o Oracle VM VirtualBox
o Windows Virtual PC
o Parallels Desktop
Advantages:
3. Para-Virtualization
IBM LPAR
Advantages:
S.No | Full Virtualization | Paravirtualization
4 | Full virtualization is slower than paravirtualization in operation. | Paravirtualization is faster in operation as compared to full virtualization.
Oracle Solaris
Linux LXC
AIX WPAR
Advantages:
The hypervisor creates an abstraction layer between the host and guest
components of the VM, using virtual processors. For efficient hardware
virtualization, the virtual machine interacts directly with hardware
components without relying on an intermediary host OS. Multiple VMs can
run simultaneously, each isolated from the others to prevent cyber threats
or system crashes, improving overall system efficiency.
Types of Virtualization
Benefits of Virtualization
Disadvantages of Virtualization
When you add the Hyper-V role on Windows Server, Hyper-V takes control
of the hardware, and the host OS becomes a virtual machine running within
Hyper-V (i.e., it runs in the parent partition, "Partition 0").
However, Hyper-V itself remains a Type 1 hypervisor because it operates
directly on the hardware, not relying on an underlying OS to manage
hardware resources.
Goal
The main goal of load balancing is to distribute workloads evenly across
multiple resources (e.g., servers or virtual machines) to prevent any single
resource from becoming a bottleneck.
These processes are carried out by three units: the load balancer,
resource discovery, and task migration units.
1. Physical Machine Level: The first level load balancer balances the
given workload on individual Physical Machines by distributing the
workload among its respective associated Virtual Machines.
3. Task Scheduling
After identifying the resource details, tasks are scheduled to
appropriate VMs using a scheduling algorithm. This phase ensures
that tasks are assigned based on available resources (a minimal
scheduling sketch follows this list).
4. Resource Allocation
Resources are allocated to scheduled tasks for execution. A resource
allocation policy governs this process, aiming to improve resource
management and performance. The strength of the load balancing
algorithm depends on the efficiency of both scheduling and
allocation policies.
5. Migration
Migration ensures that load balancing remains effective when VMs
are overloaded. There are two types of migration:
o VM Migration: Moves VMs from one physical host to another
to alleviate overloading. This can be live migration (without
downtime) or non-live migration (with downtime).
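The scheduling and allocation phases above can be pictured with a toy policy; the sketch below greedily assigns each task to the least-loaded VM. The task/VM shapes and the greedy rule are illustrative assumptions, not a specific algorithm from these notes.

```python
# Greedy "least-loaded VM" scheduling sketch for the scheduling/allocation phases.
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    capacity: float                       # total compute units the VM can host
    load: float = 0.0                     # compute units already allocated
    tasks: list = field(default_factory=list)

def schedule(tasks: dict, vms: list) -> None:
    """Assign each task (name -> demand) to the VM with the most free capacity."""
    for name, demand in sorted(tasks.items(), key=lambda t: t[1], reverse=True):
        target = max(vms, key=lambda vm: vm.capacity - vm.load)
        target.load += demand
        target.tasks.append(name)

vms = [VirtualMachine("vm-1", capacity=8.0), VirtualMachine("vm-2", capacity=8.0)]
schedule({"t1": 4.0, "t2": 3.0, "t3": 2.0}, vms)
for vm in vms:
    print(vm.name, vm.load, vm.tasks)     # load spread across both VMs
```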
1. Scheduling Algorithms
Scheduling algorithms are decomposed into three key activities:
2. Allocation Algorithms
Key Concepts:
Benefits of Containers:
History:
VMs:
o Full OS: VMs require a complete operating system to run,
including a guest OS, which increases resource consumption
and startup time.
o Isolation: VMs provide strong isolation since each VM is a self-
contained environment with its own OS.
o Flexibility: VMs can run different operating systems (e.g.,
running Linux on a Windows host) and are used for various
types of workloads requiring full OS functionality.
o Resource-Intensive: VMs are heavier, require more resources
(CPU, memory), and are slower to start compared to
containers.
Containers:
o OS-Level Virtualization: Containers virtualize the OS, sharing
the host OS kernel, making them lightweight and faster to
start.
o Smaller Footprint: Containers package only the application
and necessary dependencies, making them more portable and
easier to deploy across environments.
o Isolation: While containers offer some isolation, they share the
host OS kernel, which can introduce security concerns, unlike
VMs that have better isolation.
o Faster Start and Deployment: Containers can start and stop
quickly, making them ideal for cloud-native applications that
need fast scaling (see the sketch after this list).
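As a quick contrast with booting a full VM, the sketch below starts a container with the Docker SDK for Python; the image, container name, and port mapping are illustrative assumptions, and a local Docker daemon is required.

```python
# Starting a container with the Docker SDK for Python (pip install docker).
# The container shares the host OS kernel, so it starts in seconds rather than
# booting a guest OS the way a VM would.
import docker

client = docker.from_env()                      # connect to the local Docker daemon
container = client.containers.run(
    "nginx:1.25",                               # assumed image/tag
    name="demo-web",
    ports={"80/tcp": 8080},                     # publish container port 80 on host port 8080
    detach=True,
)
print(container.status)                         # e.g. "created" / "running"
container.stop()
container.remove()
```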
1. Microservices:
2. DevOps:
1. Migration Challenges:
2. Container Security:
o Security Concerns:
Vulnerabilities in Container Images: Containers may
have insecure components or malware.
o Best Practices:
3. Container Networking:
Docker Overview:
Kubernetes:
1. Istio:
Both Istio and Knative are part of the growing container ecosystem,
extending the capabilities of container orchestration and enabling the
efficient management of microservices and serverless applications.
Open SaaS:
3. Capabilities Required:
OAuth | OpenID
OAuth authorizes the user with the resource. | OpenID authenticates the user into the service provider.
3. SSO Flow:
1. Standard Scopes:
Single Login for Multiple Sites: Users can log in once and access
multiple applications without repeated sign-ins.
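To ground the authorization side of this flow, here is a hedged sketch of the OAuth 2.0 authorization-code exchange using the `requests` library; the endpoint URL, client credentials, and redirect URI are placeholders, not any real provider's values.

```python
# OAuth 2.0 authorization-code exchange sketch (pip install requests).
# All URLs and credentials below are hypothetical placeholders.
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # hypothetical authorization server

def exchange_code_for_token(code: str) -> dict:
    """Trade the authorization code (returned to the redirect URI) for tokens."""
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": "https://app.example.com/callback",
            "client_id": "my-client-id",
            "client_secret": "my-client-secret",
        },
        timeout=10,
    )
    response.raise_for_status()
    # Typically contains an access_token (authorization) and, with OpenID Connect,
    # an id_token (authentication).
    return response.json()

# tokens = exchange_code_for_token("received-authorization-code")
```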
What is a Service?
Roles in SOA:
1. Service Provider:
2. Service Consumer:
Characteristics of SOA:
3. Abstraction:
Services are abstracted, with their implementation hidden. They are
only defined by their service contracts and description documents,
making them easier to use without knowledge of the internal
workings.
4. Interoperability:
Services in SOA can interact across different platforms and
technologies, ensuring that they can work together seamlessly.
5. Reusability:
Services are designed as reusable components, reducing
development time and cost by reusing services across multiple
applications.
6. Autonomy:
Services have control over their own logic and implementation, and
consumers do not need to understand how they work internally.
7. Discoverability:
Services are described by metadata and service contracts, making
them easily discoverable for integration and reuse.
8. Composability:
Services can be combined to form more complex business
processes, enabling businesses to achieve more sophisticated
operations through service orchestration and choreography.
I. Functional Aspects:
1. Transport:
Responsible for transporting service requests from the consumer to
the provider and responses from the provider to the consumer.
3. Service Description:
Provides metadata that describes the service and its requirements.
4. Service:
The core unit of SOA that provides specific functionality.
5. Business Process:
Represents a sequence of services with associated business rules
designed to meet business requirements.
6. Service Registry:
A centralized directory that contains the descriptions of available
services, allowing service consumers to locate them (a toy registry
sketch follows this list).
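To illustrate discoverability through a registry, here is a toy in-memory sketch; the service name, endpoint, and contract fields are illustrative assumptions rather than a real UDDI or WSDL setup.

```python
# Toy service-registry sketch: providers publish a description, consumers discover it.
registry = {}

def publish(name: str, endpoint: str, contract: str) -> None:
    """Service provider registers its description so consumers can locate it."""
    registry[name] = {"endpoint": endpoint, "contract": contract}

def discover(name: str) -> dict:
    """Service consumer looks a service up by name."""
    return registry[name]

publish("OrderService", "https://services.example.com/orders", "orders-v1.wsdl")
print(discover("OrderService")["endpoint"])
```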
II. Quality of Service Aspects:
1. Policy:
Specifies the protocols for how services should be provided to
consumers.
2. Security:
Defines protocols for service authentication, authorization, and data
protection.
3. Transaction:
Ensures consistency in service execution, ensuring that either all or
none of the tasks within a service group are completed successfully.
4. Management:
Defines attributes and processes for managing services in the
architecture.
Advantages of SOA:
Service Reusability:
Applications are made from existing services, leading to reduced
development time as services can be reused.
Easy Maintenance:
Since services are independent, they can be modified or updated
without affecting other services.
Platform Independence:
SOA allows the integration of services from different platforms,
making the application platform-agnostic.
Availability:
Services are accessible upon request, providing easy access to
necessary resources.
Reliability:
Smaller, independent services are easier to debug and maintain,
enhancing the reliability of SOA applications.
Scalability:
Services can be distributed across multiple servers, increasing the
ability to scale as demand grows.
Disadvantages of SOA:
High Overhead:
The need for input validation during service interaction can reduce
performance, increasing load and response time.
High Investment:
Implementing SOA often requires a significant initial investment in
infrastructure and development.
2. Healthcare:
SOA is applied to improve healthcare delivery by integrating
disparate systems and services, making data more accessible and
manageable.
3. Mobile Applications:
SOA is commonly used in mobile applications, such as integrating
GPS functionality within apps by accessing built-in device services.
Feature Comparison: Google Web Services vs. AWS (Amazon Web Services) vs. Microsoft Cloud Services

Main Services:
Google Web Services: Google Search, Google Analytics, Google Ads, Google Translate, Google App Engine, Google APIs, Google Cloud Storage, Google Compute Engine, Google Kubernetes Engine, Google BigQuery.
- Google Translate: Multilingual translation. - Google App Engine: Platform-as-a-service (PaaS) for scalable web app development. - Google Cloud Storage: Object storage service for large-scale data. - Google Compute Engine: Virtual machines for running applications. - Google Kubernetes Engine: Container orchestration and management. - Google BigQuery: Scalable data warehouse for analytics.
AWS: EC2, S3, EBS, SimpleDB, RDS, Lambda, Elastic Load Balancing, Amazon VPC, AWS Batch, AWS Glue, AWS Lightsail.
- Relational Database Service (RDS): Managed relational databases supporting multiple engines (MySQL, PostgreSQL, SQL Server, etc.). - Lambda: Serverless computing for running code in response to events. - Elastic Load Balancing (ELB): Distributes incoming traffic across multiple instances for better availability. - Amazon VPC: Virtual network service for isolating and controlling network resources. - AWS Glue: Data integration service for ETL (extract, transform, load). - AWS Batch: Managed batch processing for large-scale computations.
Microsoft Cloud Services: Azure, Azure AppFabric, SQL Azure, CDN, Windows Live Services, Azure Virtual Machines, Azure Functions, Azure Blob Storage, Azure Kubernetes Service, Power BI, Office 365.
- Azure Virtual Machines (VMs): Scalable virtual machines for running apps and services. - Azure Kubernetes Service (AKS): Managed Kubernetes cluster for container orchestration. - Azure Content Delivery Network (CDN): Caching service to speed up the delivery of content globally. - Azure Power BI: Business analytics and data visualization tool. - Windows Live Services: Integration with Microsoft-based services like Outlook, OneDrive, Office 365.

Security:
Google: Identity and Access Management (IAM) to manage access. - Data encryption at rest and in transit. - Multi-Factor Authentication (MFA). - Advanced threat detection through Google Security Command Center.
AWS: IAM with granular control over user access to resources. - Data encryption and secure networking with VPC. - Regular security audits and compliance certifications. - AWS Shield: Managed DDoS protection. - AWS WAF: Web Application Firewall to protect against common web exploits.
Microsoft: Identity and access management. - Azure Security Center: Unified security management system. - Encryption for data at rest and in transit. - Regular security certifications and compliance with industry standards. - Azure Sentinel: Security information and event management (SIEM).

Scalability:
Google: Google Cloud auto-scaling adjusts computing resources automatically. - Managed Kubernetes Engine for scaling containerized applications. - Global content delivery with low latency.
AWS: EC2 instances can be dynamically scaled up or down. - Auto Scaling groups for EC2. - S3 and RDS services are designed to scale automatically to meet increasing demands.
Microsoft: Auto-scaling for both web applications (App Service) and virtual machines. - Azure Functions supports automatic scaling based on demand. - Scalable storage solutions like Azure Blob and Azure SQL.

Integration:
Google: Google Cloud Pub/Sub for messaging and event-driven systems.
AWS: Integration between different applications. - AWS Marketplace for third-party application integration.
Microsoft: Azure Logic Apps for connecting and automating workflows between services.

Performance Optimization:
Google: Google Cloud's global infrastructure delivers high performance across geographies. - Compute Engine offers virtual machines with fast CPUs, GPUs, and SSDs. - BigQuery for fast data analysis with distributed computing.
AWS: Elastic Load Balancer (ELB) distributes traffic efficiently. - S3 storage optimized for performance with different storage classes. - Use of multiple availability zones for redundancy and performance optimization.
Microsoft: Azure Content Delivery Network (CDN) improves global content delivery speed. - Azure Traffic Manager for traffic routing and improving performance. - Performance optimization tools like Azure Monitor and Azure Advisor.
Cloud Management
1. Service Definition
4. Service Optimization
5. Operational Management
6. Service Retirement
Initially, SLAs were custom-negotiated for each client. Now, large utility-
style providers often offer standardized SLAs, customized only for large
consumers of services.
Not all SLAs are legally enforceable. Some function more as Operating
Level Agreements (OLAs), lacking legal weight.
o Details the roles and duties of the provider and the client for
maintaining the service.
5. Warranties:
Cloud Transactions
Examples include:
Cloud Bursting
Cloud APIs
Cloud-based Storage
Cloud-based storage refers to the storage of data in an online
environment that is hosted on remote servers, often provided by third-
party cloud service providers. This storage is accessible via the internet
and can scale according to the user’s needs. There are two types of cloud
storage setups:
Examples:
Unmanaged cloud storage refers to cloud storage solutions that are often
fully automated, requiring minimal intervention from the provider's
administrators. Users are responsible for their own data management, while
the service offers automated storage provisioning, scaling, and backups. In
some cases, there may be little direct human oversight.
Examples:
Webmail Services
Webmail services refer to email services that allow users to access their
emails via a web browser, eliminating the need for email client software or
desktop applications. These services are cloud-based, meaning users can
access their emails from anywhere with an internet connection. Webmail
services often offer additional features such as calendars, file sharing, and
integration with other web services.
Common Webmail Services:
1. Google Gmail:
2. Mail2Web:
4. Yahoo Mail:
Syndication Service
RSS is structured in XML format, which allows for easy parsing and
integration with different tools and platforms.
Atom:
JSON Feed:
JSON Feed aims to offer the simplicity of RSS while leveraging the
advantages of JSON, such as being more lightweight and easier to
handle in JavaScript-based environments.