Module 2: Cloud Enabling Technology
Module 2
Cloud Enabling Technology: Broadband Networks and Internet Architecture ‐
Data Center Technology ‐ Virtualization Technology ‐ Web Technology ‐
Multitenant Technology ‐ Service Technology (T1: Chapter 5).
Virtual Machines and Containers: Virtualization - Basics of Virtualization,
Types of Virtualization, Implementation Levels of Virtualization; Serverless
Computing; Using and Managing Containers: Container Basics, Docker, Docker
Components, Docker Hub; Microservices and Container Resource Managers:
Docker Swarm, Amazon EC2 Container Service, Google's Kubernetes (T2: Chapter
4,6,7).
Cloud Middleware Services: OpenStack: Conceptual architecture, Nova,
Neutron, Glance, Keystone, Horizon, Cinder, Swift, Ceilometer; OpenStack
Networking - OpenStack with Legacy Networking, OpenStack with Neutron
Networking.
Enabling Technologies
Cloud-enabling technologies are technologies that
allow applications and services to be deployed and
run on cloud computing platforms. These
technologies provide the necessary infrastructure,
tools, and frameworks to build, deploy, and manage
cloud-based applications and services.
1. Broadband Networks and Internet Architecture
Cloud consumers have the option of accessing the cloud using only private and
dedicated network links in LANs, although most clouds are Internet-enabled.
The potential of cloud platforms in general therefore grows in parallel with
advancements in Internet connectivity and service quality.
Figure: Messages travel over dynamic network routes in this ISP internetworking configuration
The concept of the Internet was based on a decentralized provisioning and management
model. ISPs can freely deploy, operate, and manage their networks in addition to selecting
partner ISPs for interconnection. No centralized entity comprehensively governs the Internet,
although bodies like the Internet Corporation for Assigned Names and Numbers (ICANN)
supervise and coordinate Internet communications.
The Internet’s topology has become a dynamic and complex aggregate of ISPs that are
highly interconnected via its core protocols.
Router-Based Interconnectivity
Routers manage network traffic and gauge the most efficient hop for packet delivery, since
they are privy to both the packet source and packet destination.
The basic mechanics of internetworking are illustrated in Figure 1, in which a message is
coalesced from an incoming group of disordered packets. The depicted router receives and
forwards packets from multiple data flows.
The communication path that connects a cloud consumer with its cloud provider may
involve multiple ISP networks. The Internet’s mesh structure connects Internet hosts
(endpoint systems) using multiple alternative network routes that are determined at runtime.
Communication can therefore be sustained even during simultaneous network failures,
although using multiple network paths can cause routing fluctuations and latency.
Internet Reference Model
This applies to ISPs that implement the Internet’s internetworking layer and interact with other
network technologies, as follows:
Physical Network
IP packets are transmitted through underlying physical networks that connect adjacent nodes, such as
Ethernet, ATM network, and the 3G mobile HSDPA. Physical networks comprise a data link layer that
controls data transfer between neighboring nodes, and a physical layer that transmits data bits through
both wired and wireless media.
Transport Layer Protocol
Transport layer protocols, such as the Transmission Control Protocol (TCP) and User Datagram
Protocol (UDP), use the IP to provide standardized, end-to-end communication support that facilitates
the navigation of data packets across the Internet.
Application Layer Protocol
Protocols such as HTTP, SMTP for e-mail, BitTorrent for P2P, and SIP for IP telephony use transport
layer protocols to standardize and enable specific data packet transferring methods over the Internet.
Many other protocols also fulfill application-centric requirements and use either TCP/IP or UDP as
their primary method of data transferring across the Internet and LANs.
Figure 1 presents the Internet Reference Model and the protocol stack.
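To make the layering concrete, the following minimal Python sketch issues an application-layer HTTP request over a transport-layer TCP connection (which itself rides on IP); the host name example.com is only an illustrative target.

import socket

HOST = "example.com"  # illustrative host; any public web server would do

# Transport layer: open a TCP connection (SOCK_STREAM) to port 80.
with socket.create_connection((HOST, 80), timeout=5) as sock:
    # Application layer: send a plain HTTP/1.1 GET request over that connection.
    request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))

    # Read the response until the server closes the connection.
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"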
Technical and Business Considerations
Connectivity Issues
TCP/IP facilitates both Internet access and on-premise data exchange over
LANs. Although not commonly referred to as a cloud model, this configuration
has been implemented numerous times for medium and large on-premise
networks.
Organizations using this deployment model can directly access the network
traffic to and from the Internet and usually have complete control over and can
safeguard their corporate networks using firewalls and monitoring software.
Figure 1 – The internetworking architecture of a private cloud. The physical IT
resources that constitute the cloud are located and managed within the
organization.
End-user devices that are connected to the network through the Internet can be granted
continuous access to centralized servers and applications in the cloud (Figure 1).
A salient cloud feature that applies to end-user functionality is how centralized IT resources
can be accessed using the same network protocols regardless of whether they reside inside
or outside of a corporate network. Whether IT resources are on-premise or Internet-based
dictates how internal versus external end-users access services, even if the end-users
themselves are not concerned with the physical location of cloud-based IT resources.
Figure – The internetworking architecture of an Internet-based cloud
deployment model. The Internet is the connecting agent between non-proximate
cloud consumers, roaming end-users, and the cloud provider’s own network.
ON-PREMISE IT RESOURCES | CLOUD-BASED IT RESOURCES
Internal end-user devices access corporate IT services through the corporate network | Internal end-user devices access corporate IT services through an Internet connection
Internal users access corporate IT services through the corporate Internet connection while roaming in external networks | Internal users access corporate IT services while roaming in external networks through the cloud provider’s Internet connection
External users access corporate IT services through the corporate Internet connection | External users access corporate IT services through the cloud provider’s Internet connection
Network Bandwidth and Latency Issues
In addition to being affected by the bandwidth of the data link that connects networks to
ISPs, end-to-end bandwidth is determined by the transmission capacity of the shared
data links that connect intermediary nodes.
•End-to-end bandwidth is affected by ISP data link bandwidth and shared transmission
capacity.
•ISPs use broadband technology to ensure connectivity.
•Bandwidth is increasing due to web acceleration technologies (dynamic caching, compression, CDNs).
•Latency refers to the time a packet takes to travel between nodes (see the measurement sketch after this list).
•Latency increases with each intermediary node.
•Heavy loads and queues can raise latency.
•Internet latency is variable and unpredictable.
•Packet networks operate on a "best effort" QoS model.
•Congestion leads to bandwidth reduction, increased latency, and packet loss.
•Dynamic packet switching impacts end-to-end QoS.
•IT solutions should align with business needs related to bandwidth and latency.
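As a rough, hedged illustration of latency measurement (not a substitute for proper ICMP-based tools), the Python sketch below times how long it takes to establish a TCP connection to a few example hosts; the host names are placeholders.

import socket
import time

def tcp_connect_latency_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Return the time, in milliseconds, needed to open a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the connection is established; only the elapsed time matters here
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    for host in ("example.com", "example.org"):
        print(f"{host}: ~{tcp_connect_latency_ms(host):.1f} ms to establish a TCP connection")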
Cloud Carrier and Cloud Provider Selection
•Service levels between cloud consumers and providers depend on their
respective ISPs, which are often different.
•Multiple ISP networks are involved in the connection paths, complicating
Quality of Service (QoS) management.
•Achieving consistent end-to-end QoS requires collaboration between ISPs on
both sides.
•Cloud consumers and providers may need multiple cloud carriers to meet
connectivity and reliability needs.
•Using multiple cloud carriers can lead to increased costs.
•Cloud adoption is easier for applications with less stringent latency and
bandwidth requirements.
2. Data Center Technology
Grouping IT resources in close proximity with one another, rather
than having them geographically dispersed, allows for power sharing,
higher efficiency in shared IT resource usage, and improved
accessibility for IT personnel.
• A data center is a facility used to house computer systems and
associated components, such as telecommunications and storage
systems.
• Modern data centers exist as specialized IT infrastructure used to
house centralized IT resources, such as servers, databases,
networking and telecommunication devices, and software systems.
Virtualization
Standardization and Modularity
Automation
Remote Operation and Management
Virtualization
•Data centers consist of physical and virtualized IT resources.
•The physical IT resource layer includes facility infrastructure,
computing/networking systems, and hardware with operating systems.
•The virtualization layer provides resource abstraction and control via
management tools.
•Virtualization platforms abstract physical resources (computing and
networking) into virtualized components.
•Virtualized resources are easier to allocate, operate, release, monitor, and
control.
Hardware Independence
Standardization and Modularity
Automation
Facilities
•Data center facilities are custom-designed locations equipped with
specialized computing, storage, and network equipment.
•These facilities feature multiple functional layout areas tailored for
specific operations.
•Various power supplies, cabling, and environmental control
stations are installed to manage:
•Heating, ventilation, and air conditioning (HVAC)
•Fire protection
•Other related subsystems.
•The site and layout of a data center are often organized into
segregated spaces, enhancing operational efficiency and security.
3. Virtualization Technology
• Virtualization is a process of converting a physical IT resource into
a virtual IT resource. Commonly virtualized IT resources include:
• Servers (a virtual server is also referred to as a virtual machine)
• Storage
• Network
• Power
A virtual machine is just the software image of a complete machine that can
be loaded onto the server and run like any other program. The server in the data
center runs a piece of software called a hypervisor that allocates and manages
the server’s resources that are granted to its “guest” virtual machines.
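As a hedged sketch of how a hypervisor's guest virtual machines can be inspected programmatically, the snippet below uses the libvirt Python bindings against a local KVM/QEMU hypervisor; it assumes libvirt-python is installed and only lists existing guests.

import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state, max_mem_kb, mem_kb, vcpus, cpu_time_ns = dom.info()
        print(f"guest={dom.name()} vCPUs={vcpus} memory={mem_kb // 1024} MiB")
finally:
    conn.close()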
Hardware-Based Virtualization
4. Basic Web Technology
• Uniform resource locator (URL)
Commonly informally referred to as a web address
a reference to a web resource that specifies its location
on a computer network and a mechanism for retrieving
it
Example: http://www.example.com/index.html
• Hypertext transfer protocol (HTTP)
Primary communication protocol used to exchange
content
• Markup languages (HTML, XML)
Express Web‐centric data and metadata
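A small Python sketch can tie these pieces together: a URL identifies a web resource, HTTP retrieves it, and the response body is markup; the URL used below is the example given above.

from urllib.parse import urlparse
from urllib.request import urlopen

url = "http://www.example.com/index.html"

# The URL names the resource and how to reach it.
parts = urlparse(url)
print(parts.scheme, parts.netloc, parts.path)  # http www.example.com /index.html

# HTTP retrieves it; the body returned is HTML markup.
with urlopen(url, timeout=5) as response:
    html = response.read().decode("utf-8", errors="replace")
print(html[:80])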
Web Applications
• Applications running in a web browser
Rely on web browsers for the presentation of user interfaces
5. Multitenant Technology
• Enable multiple users (tenants) to access the same application simultaneously
• Multitenant applications ensure that tenants do not have access to data and
configuration information that is not their own
•Tenants can individually customize features of the application, such as:
• User Interface – Tenants can define a specialized “look and feel” for their
application interface.
• Business Process – Tenants can customize the rules, logic, and workflows
of the business processes that are implemented in the application.
• Data Model – Tenants can extend the data schema of the application to
include, exclude, or rename fields in the application data structures.
• Access Control – Tenants can independently control the access rights for
users and groups.
Common characteristics of multitenant applications include:
Usage Isolation – The usage behavior of one tenant does not affect the application
availability and performance of other tenants.
Data Security – Tenants cannot access data that belongs to other tenants.
Recovery – Backup and restore procedures are separately executed for the data of
each tenant.
Application Upgrade – Tenants are not negatively affected by the synchronous
upgrading of shared software artifacts.
Scalability – The application can scale to accommodate increases in usage by
existing tenants and/or increases in the number of tenants.
Metered Usage – Tenants are charged only for the application processing and
features that are actually consumed.
Data Tier Isolation – Tenants can have individual databases, tables, and/or schemas
isolated from other tenants. Alternatively, databases, tables, and/or schemas can be
designed to be intentionally shared by tenants.
A simple example
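The following hypothetical Python sketch illustrates data-tier isolation in a multitenant application: every row carries a tenant_id and every query filters on it, so one tenant never sees another tenant's rows. The table, columns, and helper function are assumptions made purely for illustration.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, number TEXT, amount REAL)")
db.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [("tenant_a", "INV-1", 100.0), ("tenant_b", "INV-7", 250.0)],
)

def invoices_for(tenant_id: str):
    """Return only the rows that belong to the given tenant."""
    return db.execute(
        "SELECT number, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(invoices_for("tenant_a"))  # [('INV-1', 100.0)] -- tenant_b's data stays invisible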
6. Service Technology
Several prominent service technologies are used to realize and build upon cloud-based
environments:
• Web Service Description Language (WSDL) – This markup language is used to create a
WSDL definition that defines the application programming interface (API) of a Web
service
• XML Schema Definition Language (XML Schema) – Messages exchanged by Web
services must be expressed using XML. XML schemas are created to define the data
structure of the XML-based input and output messages exchanged by Web
services.
• SOAP – Formerly known as the Simple Object Access Protocol, this standard defines a
common messaging format used for request and response messages
exchanged by Web services.
• Universal Description, Discovery, and Integration (UDDI) – This standard regulates
service registries in which WSDL definitions can be published as part of a service catalog
for discovery purposes.
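As a hedged sketch of how these pieces fit together in practice, the snippet below uses the third-party zeep library to load a WSDL definition and invoke an operation it describes; the WSDL URL and the GetQuote operation are hypothetical.

from zeep import Client  # third-party SOAP client (pip install zeep)

# The WSDL definition describes the service's API and message schemas.
client = Client("http://example.com/stockquote?wsdl")

# SOAP request/response messages are exchanged behind this call.
result = client.service.GetQuote(symbol="ACME")
print(result)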
Serverless Computing
● Serverless Computing (or simply serverless) is emerging as a new and compelling model.
● Conventionally, applications were written and run on servers that were allocated fixed
resources. Problems soon arose with sudden spikes of traffic as demand increased and
the servers were not able to handle the enormous number of requests. To address these
problems, Platform as a Service (PaaS) emerged, in which providers offered scaling, but it
has its own drawbacks.
• Servers are still needed, but developers need not concern themselves
with managing those servers.
• Decisions such as the number of servers and their capacity are taken care
of by the serverless platform, with server capacity automatically
provisioned as needed by the workload.
• The core capability of a serverless platform is that of an event
processing system.
• The service must manage a set of user-defined functions, and take an
event sent over HTTP or received from an event source.
Architecture
• Independent, server-side logical functions: small, separate units of logic that take input
arguments, process them in some manner, then return the result.
• Cost: typically pay-as-you-go.
• Simple deployment: deployments are simple and quick.
• Ephemeral: designed to spin up quickly, do their work, and then shut down again.
• Programming languages: serverless services support a wide variety of programming
languages, such as Node.js and Python.
• Stateless: FaaS functions are stateless and do not store state, since the containers
running the code are automatically destroyed.
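A minimal, hypothetical FaaS handler in the style of AWS Lambda's Python runtime is sketched below: a stateless function that receives an event (assumed here to come from an HTTP trigger) and returns a response; the event shape is illustrative.

import json

def lambda_handler(event, context):
    # Pull a query parameter out of the (assumed) HTTP event, if present.
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    # No state is kept between invocations; the platform may destroy the
    # container as soon as the function returns.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }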
Serverless: Characteristics
• Scalable by Default
• Event Triggered: although functions can be invoked directly, they are typically
triggered by events from other cloud services, such as incoming HTTP requests.
• Simple Deployment Model
• Small Deployable Units and More Focus on Business Value
• Managed by a third party
• No more “works on my machine”
Commercial platforms
• OpenLambda
• Amazon’s AWS Lambda
• The serverless paradigm has advantages for both consumers and providers.
• The stateless programming model gives the provider more control over the software
stack, allowing them to, among other things, more transparently deliver security
patches and optimize the platform.
Conclusion
• VMs, because they are complete OS instances, can take a few minutes to
start up.