
Cloud-Enabling Technology

Module 2
Cloud Enabling Technology: Broadband Networks and Internet Architecture ‐
Data Center Technology ‐ Virtualization Technology ‐ Web Technology ‐
Multitenant Technology ‐ Service Technology (T1: Chapter 5).
Virtual Machines and Containers: Virtualization - Basics of Virtualization,
Types of Virtualizations, Implementation Levels of Virtualization; Serverless
Computing; Using and Managing Containers: Container Basics, Docker, Docker
Components, Docker Hub; Microservices and Container Resource Managers:
Docker Swarm, Amazon EC2 Container Service, Google's Kubernetes (T2: Chapter
4,6,7).
Cloud Middleware Services: OpenStack: Conceptual architecture, Nova,
Neutron, Glance, Keystone, Horizon, Cinder, Swift, Ceilometer; OpenStack
Networking - OpenStack with Legacy Networking, OpenStack with Neutron
Networking.
Enabling Technologies
Cloud-enabling technologies are technologies that allow applications and services to be deployed and run on cloud computing platforms. These technologies provide the necessary infrastructure, tools, and frameworks to build, deploy, and manage cloud-based applications and services.

1. Broadband networks and internet architecture
2. Data center technology
3. Virtualization technology
4. Web technology
5. Multitenant technology
Broadband Networks and Internet Architecture
All clouds must be connected to a network. Internetworks, or the Internet, allow for the remote provisioning of IT resources and are directly supportive of ubiquitous network access.

Cloud consumers have the option of accessing the cloud using only private and dedicated network links in LANs, although most clouds are Internet-enabled. The potential of cloud platforms in general therefore grows in parallel with advancements in Internet connectivity and service quality.

• Internet Service Providers (ISPs)
• Connectionless Packet Switching (Datagram Networks)
• Router-Based Interconnectivity
• Technical and Business Considerations
1. Broadband Networks & Internet Architecture
• All clouds must be connected to a network.
• The Internet's largest backbone networks, established and deployed by ISPs (Internet Service Providers), are interconnected by core routers.

Internet Service Providers (ISPs)
Established and deployed by ISPs, the Internet's largest backbone networks are strategically interconnected by core routers that connect the world's multinational networks. As shown in Figure 1, an ISP network interconnects the Internet-connecting provider and consumer.

Figure: Messages travel over dynamic network routes in this ISP internetworking configuration
The concept of the Internet was based on a decentralized provisioning and management
model. ISPs can freely deploy, operate, and manage their networks in addition to selecting
partner ISPs for interconnection. No centralized entity comprehensively governs the Internet,
although bodies like the Internet Corporation for Assigned Names and Numbers (ICANN)
supervise and coordinate Internet communications.

The Internet's topology has become a dynamic and complex aggregate of ISPs that are highly interconnected via its core protocols.

Worldwide connectivity is enabled through a hierarchical topology composed of Tiers 1, 2, and 3 (Figure 2). The core Tier 1 is made of large-scale, international providers that oversee massive interconnected global networks, which are connected to Tier 2's large regional providers. The interconnected ISPs of Tier 2 connect with Tier 1 providers, as well as the local ISPs of Tier 3. Cloud consumers and cloud providers do not need to connect directly using a Tier 1 provider, since any operational ISP can enable Internet connectivity.
Figure 2 – An abstraction of the internetworking structure of
the Internet.
Two Fundamental Components
The communication links and routers of the Internet and ISP networks are IT resources that are distributed among countless traffic generation paths. Two fundamental components used to construct the internetworking architecture are connectionless packet switching (datagram networks) and router-based interconnectivity.
• Connectionless packet switching
 End-to-end (sender-receiver pair) data flows are divided into packets of a limited size.
 Packets are processed through network switches and routers, then queued and forwarded from one intermediary node to the next.
• Router-based interconnectivity
 A router is a device that is connected to multiple networks through which it forwards packets.
 Each packet is individually processed.
 Multiple alternative network routes are used.
Connectionless Packet Switching (Datagram Networks)
End-to-end (sender-receiver pair) data flows are divided into packets of a
limited size that are received and processed through network switches and
routers, then queued and forwarded from one intermediary node to the next.
Each packet carries the necessary location information, such as the Internet
Protocol (IP) or Media Access Control (MAC) address, to be processed and
routed at every source, intermediary, and destination node.
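The packetization described above can be sketched in Python. This is a toy model only: the addresses, packet size, and field names are illustrative, not a real protocol format.

```python
# A toy illustration of connectionless packet switching: a message is
# split into fixed-size packets, each carrying its own addressing and
# sequencing information, and can be reassembled in any arrival order.

PACKET_SIZE = 8  # bytes of payload per packet (illustrative value)

def packetize(message: bytes, src: str, dst: str):
    """Split a message into independently routable packets."""
    return [
        {"src": src, "dst": dst, "seq": i, "payload": message[i:i + PACKET_SIZE]}
        for i in range(0, len(message), PACKET_SIZE)
    ]

def reassemble(packets):
    """Rebuild the message even if packets arrive out of order."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

packets = packetize(b"cloud-enabling technology", "10.0.0.1", "10.0.0.2")
shuffled = list(reversed(packets))  # simulate out-of-order delivery
assert reassemble(shuffled) == b"cloud-enabling technology"
```

Because each packet carries its own addressing, intermediary nodes need no per-flow state, which is the essence of the datagram model.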
Packets Travelling Through the Internet
Router-Based Interconnectivity
A router is a device that is connected to multiple networks through which it forwards packets. Even when successive packets are part of the same data flow, routers process and forward each packet individually while maintaining the network topology information that locates the next node on the communication path between the source and destination nodes.

Routers manage network traffic and gauge the most efficient hop for packet delivery, since they are privy to both the packet source and packet destination.

The basic mechanics of internetworking are illustrated in Figure 1, in which a message is coalesced from an incoming group of disordered packets. The depicted router receives and forwards packets from multiple data flows.

The communication path that connects a cloud consumer with its cloud provider may
involve multiple ISP networks. The Internet’s mesh structure connects Internet hosts
(endpoint systems) using multiple alternative network routes that are determined at runtime.
Communication can therefore be sustained even during simultaneous network failures,
although using multiple network paths can cause routing fluctuations and latency.
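A minimal sketch of per-packet, next-hop forwarding as described above. The router names, network names, and table entries are hypothetical, chosen only to show how each packet is looked up and forwarded individually.

```python
# Each router keeps a forwarding table mapping destination networks to
# the next hop; every packet is looked up hop by hop until a router can
# deliver it locally. All names here are illustrative.

FORWARDING_TABLES = {
    "R1": {"net-A": "deliver", "net-B": "R2", "net-C": "R2"},
    "R2": {"net-A": "R1", "net-B": "deliver", "net-C": "R3"},
    "R3": {"net-B": "R2", "net-C": "deliver"},
}

def route(dst_net: str, start_router: str):
    """Follow next-hop entries until a router can deliver locally."""
    path, router = [start_router], start_router
    while FORWARDING_TABLES[router][dst_net] != "deliver":
        router = FORWARDING_TABLES[router][dst_net]
        path.append(router)
    return path

print(route("net-C", "R1"))  # ['R1', 'R2', 'R3']
```

In the real Internet these tables are populated dynamically by routing protocols, which is why alternative routes can be determined at runtime.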
Internet Reference Model
This applies to ISPs that implement the Internet's internetworking layer and interact with other network technologies, as follows:
Physical Network
IP packets are transmitted through underlying physical networks that connect adjacent nodes, such as
Ethernet, ATM network, and the 3G mobile HSDPA. Physical networks comprise a data link layer that
controls data transfer between neighboring nodes, and a physical layer that transmits data bits through
both wired and wireless media.
Transport Layer Protocol
Transport layer protocols, such as the Transmission Control Protocol (TCP) and User Datagram
Protocol (UDP), use the IP to provide standardized, end-to-end communication support that facilitates
the navigation of data packets across the Internet.
Application Layer Protocol
Protocols such as HTTP, SMTP for e-mail, BitTorrent for P2P, and SIP for IP telephony use transport
layer protocols to standardize and enable specific data packet transferring methods over the Internet.
Many other protocols also fulfill application-centric requirements and use either TCP/IP or UDP as
their primary method of data transferring across the Internet and LANs.
Figure 1 presents the Internet Reference Model and the protocol stack.
Technical and Business Considerations
Connectivity Issues

In traditional, on-premise deployment models, enterprise applications and various IT solutions are commonly hosted on centralized servers and storage devices residing in the organization's own data center. End-user devices, such as smartphones and laptops, access the data center through the corporate network, which provides uninterrupted Internet connectivity.

TCP/IP facilitates both Internet access and on-premise data exchange over LANs. Although not commonly referred to as a cloud model, this configuration has been implemented numerous times for medium and large on-premise networks.

Organizations using this deployment model can directly access the network traffic to and from the Internet and usually have complete control over, and can safeguard, their corporate networks using firewalls and monitoring software.
Figure 1 – The internetworking architecture of a private cloud. The physical IT
resources that constitute the cloud are located and managed within the
organization.
End-user devices that are connected to the network through the Internet can be granted
continuous access to centralized servers and applications in the cloud (Figure 1).
A salient cloud feature that applies to end-user functionality is how centralized IT resources can be accessed using the same network protocols, regardless of whether they reside inside or outside of a corporate network. Whether IT resources are on-premise or Internet-based dictates how internal versus external end-users access services, even if the end-users themselves are not concerned with the physical location of cloud-based IT resources.
Figure – The internetworking architecture of an Internet-based cloud
deployment model. The Internet is the connecting agent between non-
proximate cloud consumers, roaming end-users, and the cloud provider’s
own network.
ON-PREMISE IT RESOURCES:
• Internal end-user devices access corporate IT services through the corporate network.
• Internal users access corporate IT services through the corporate Internet connection while roaming in external networks.
• External users access corporate IT services through the corporate Internet connection.

CLOUD-BASED IT RESOURCES:
• Internal end-user devices access corporate IT services through an Internet connection.
• Internal users access corporate IT services while roaming in external networks through the cloud provider's Internet connection.
• External users access corporate IT services through the cloud provider's Internet connection.
Network Bandwidth and Latency Issues
In addition to being affected by the bandwidth of the data link that connects networks to
ISPs, end-to-end bandwidth is determined by the transmission capacity of the shared
data links that connect intermediary nodes.

•End-to-end bandwidth is affected by ISP data link bandwidth and shared transmission
capacity.
•ISPs use broadband technology to ensure connectivity.
•Bandwidth is increasing due to web acceleration technologies (dynamic caching, compression, CDNs).
•Latency refers to the time a packet takes to travel between nodes.
•Latency increases with each intermediary node.
•Heavy loads and queues can raise latency.
•Internet latency is variable and unpredictable.
•Packet networks operate on a "best effort" QoS model.
•Congestion leads to bandwidth reduction, increased latency, and packet loss.
•Dynamic packet switching impacts end-to-end QoS.
•IT solutions should align with business needs related to bandwidth and latency.
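The bandwidth and latency points above can be illustrated with a toy model (the link figures below are invented for illustration): end-to-end bandwidth is bounded by the slowest shared link, while latency accumulates at every intermediary node.

```python
# Toy model of end-to-end QoS across a multi-hop path: bandwidth is
# limited by the bottleneck link, latency is additive per hop.
# All figures are illustrative, not measured values.

hops = [
    {"bandwidth_mbps": 1000, "latency_ms": 2},   # consumer -> local ISP
    {"bandwidth_mbps": 400,  "latency_ms": 15},  # local ISP -> regional ISP
    {"bandwidth_mbps": 100,  "latency_ms": 8},   # regional ISP -> cloud provider
]

end_to_end_bandwidth = min(h["bandwidth_mbps"] for h in hops)  # bottleneck link
end_to_end_latency = sum(h["latency_ms"] for h in hops)        # accumulates per hop

print(end_to_end_bandwidth, end_to_end_latency)  # 100 25
```

Under heavy load, the queuing delay at each hop would add to the per-hop latency terms, which is why Internet latency is variable and hard to predict.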
Cloud Carrier and Cloud Provider Selection
•Service levels between cloud consumers and providers depend on their
respective ISPs, which are often different.
•Multiple ISP networks are involved in the connection paths, complicating
Quality of Service (QoS) management.
•Achieving consistent end-to-end QoS requires collaboration between ISPs on
both sides.
•Cloud consumers and providers may need multiple cloud carriers to meet
connectivity and reliability needs.
•Using multiple cloud carriers can lead to increased costs.
•Cloud adoption is easier for applications with less stringent latency and
bandwidth requirements.
2. Data Center Technology
Grouping IT resources in close proximity with one another, rather than having them geographically dispersed, allows for power sharing, higher efficiency in shared IT resource usage, and improved accessibility for IT personnel.
• A data center is a facility used to house computer systems and associated components, such as telecommunications and storage systems.
• Modern data centers exist as specialized IT infrastructure used to house centralized IT resources, such as servers, databases, networking and telecommunication devices, and software systems.
 Virtualization
 Standardization and Modularity
 Automation
 Remote Operation and Management
Virtualization
•Data centers consist of physical and virtualized IT resources.
•The physical IT resource layer includes facility infrastructure,
computing/networking systems, and hardware with operating systems.
•The virtualization layer provides resource abstraction and control via
management tools.
•Virtualization platforms abstract physical resources (computing and
networking) into virtualized components.
•Virtualized resources are easier to allocate, operate, release, monitor, and
control.
Hardware Independence
•Installing an operating system on specific hardware creates software-hardware dependencies.
•In a non-virtualized environment, the OS is configured for particular hardware models and requires reconfiguration if hardware changes.
•Virtualization converts unique IT hardware into standardized, software-based copies, providing hardware independence.
•Virtual servers can be easily moved to different hosts, automatically resolving hardware-software incompatibility issues.
•Cloning and manipulating virtual IT resources is easier compared to duplicating physical hardware.
Server Consolidation
•Virtualization software enables the creation of multiple virtual servers
on a single physical server.
•This process, known as server consolidation, allows different virtual
servers to share one physical server.
•Server consolidation helps improve hardware utilization, load
balancing, and resource optimization.
•Virtual servers can run different guest operating systems on the same
host, offering flexibility.
•These features support key cloud computing characteristics such as:
•On-demand usage
•Resource pooling
•Elasticity
•Scalability
•Resiliency
Virtualization
Standardization and Modularity
•Data centers use standardized commodity hardware and modular architectures to support scalability and fast hardware replacements.
•Modularity and standardization reduce investment and operational costs by enabling economies of scale for procurement, deployment, operation, and maintenance.
•Virtualization strategies and the increasing capacity and performance of physical devices promote IT resource consolidation.
•Fewer physical components are needed to support complex configurations, reducing the overall hardware footprint.
•Consolidated IT resources can serve multiple systems and be shared among different cloud consumers, enhancing efficiency.
Figure 1 – The common components of a data center working together to provide
virtualized IT resources supported by physical IT resources.
Automation
Data centers have specialized platforms that automate tasks like provisioning, configuration, patching, and monitoring without supervision. Advances in data center management platforms and tools leverage autonomic computing technologies to enable self-configuration and self-recovery.
Remote Operation and Management
Most of the operational and administrative tasks of IT resources in data centers are commanded through the network's remote consoles and management systems. Technical personnel are not required to visit the dedicated rooms that house servers, except to perform highly specific tasks, such as equipment handling and cabling or hardware-level installation and maintenance.
High Availability
Data center outages severely impact business continuity for organizations relying on their services. To ensure high availability, data centers are built with redundant systems (backup components such as power supplies, network connections, and storage).
•Redundancy includes uninterruptible power supplies (UPS), cabling, and environmental control subsystems to handle system failures.
•Redundant communication links and clustered hardware are used for load balancing and maintaining operations during failures.
•This high level of redundancy helps sustain continuous operations and minimize downtime.
Security-Aware Design, Operation, and Management
•Security requirements for data centers must be thorough and comprehensive, including:
•Physical access controls (to protect the facility from unauthorized entry).
•Logical access controls (to safeguard data and systems from unauthorized digital access).
•Data recovery strategies (to ensure data can be restored after a loss or breach).
•Data centers serve as centralized structures for storing and processing business data, making security critical.
•Building and operating on-premise data centers can be prohibitively expensive, leading many organizations to outsource data center-based IT resources.
•Outsourcing models often involve long-term commitments from consumers and typically lack the elasticity needed to adapt to changing business demands.
•Cloud computing addresses these issues with features such as:
•Ubiquitous access (data and services available from anywhere).
•On-demand provisioning (resources can be allocated as needed).
•Rapid elasticity (ability to scale resources up or down quickly).
•Pay-per-use (customers only pay for the resources they consume).

Facilities
•Data center facilities are custom-designed locations equipped with
specialized computing, storage, and network equipment.
•These facilities feature multiple functional layout areas tailored for
specific operations.
•Various power supplies, cabling, and environmental control
stations are installed to manage:
•Heating and ventilation
•Air conditioning (HVAC)
•Fire protection
•Other related subsystems.
•The site and layout of a data center are often organized into segregated spaces, enhancing operational efficiency and security.
3. Virtualization Technology
• Virtualization is a process of converting a physical IT resource into a virtual IT resource:
 Server (virtual server ↔ virtual machine)
 Storage
 Network
 Power

A virtual machine is just the software image of a complete machine that can be loaded onto the server and run like any other program. The server in the data center runs a piece of software called a hypervisor that allocates and manages the server's resources that are granted to its "guest" virtual machines.

A hypervisor allows one machine to run multiple virtual machines (e.g., VMware ESXi, Microsoft Hyper-V) and manages the resources allocated to the virtual environment from the physical server.
Creating a New Virtual Server
• Allocation of physical IT resources
• Installation of an operating system, i.e., guest operating system
Hardware-Based Virtualization
• Reduces overhead
• May introduce compatibility issues
4. Web Technology
• Cloud computing relies on the Internet.
• Web technology is generally used as both the implementation medium and the management interface for cloud services.
Basic Web Technology
• Uniform resource locator (URL)
 Commonly informally referred to as a web address
 a reference to a web resource that specifies its location
on a computer network and a mechanism for retrieving
it
 Example: http://www.example.com/index.html
• Hypertext transfer protocol (HTTP)
 Primary communication protocol used to exchange
content
• Markup languages (HTML, XML)
 Express Web‐centric data and metadata

Web Applications
• Applications running in a web browser
 Rely on web browsers for the presentation of user interfaces
5. Multitenant Technology
• Enable multiple users (tenants) to access the same application simultaneously
• Multitenant applications ensure that tenants do not have access to data and
configuration information that is not their own
•Tenants can individually customize features of the application, such as:
• User Interface – Tenants can define a specialized “look and feel” for their
application interface.
• Business Process – Tenants can customize the rules, logic, and workflows
of the business processes that are implemented in the application.
• Data Model – Tenants can extend the data schema of the application to
include, exclude, or rename fields in the application data structures.
• Access Control – Tenants can independently control the access rights for
users and groups.
Common characteristics of multitenant applications include:
 Usage Isolation – The usage behavior of one tenant does not affect the application
availability and performance of other tenants.
 Data Security – Tenants cannot access data that belongs to other tenants.
 Recovery – Backup and restore procedures are separately executed for the data of
each tenant.
 Application Upgrade – Tenants are not negatively affected by the synchronous
upgrading of shared software artifacts.
 Scalability – The application can scale to accommodate increases in usage by
existing tenants and/or increases in the number of tenants.
 Metered Usage – Tenants are charged only for the application processing and
features that are actually consumed.
 Data Tier Isolation – Tenants can have individual databases, tables, and/or schemas
isolated from other tenants. Alternatively, databases, tables, and/or schemas can be
designed to be intentionally shared by tenants.
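The data security and data tier isolation characteristics can be sketched as a toy tenant-scoped store. The class, method, and tenant names below are illustrative, assumed only for this sketch; a real multitenant data tier would enforce the same scoping at the database level.

```python
# A toy multitenant store: every read and write is scoped to a tenant,
# so one tenant can never see another tenant's data.

class MultitenantStore:
    def __init__(self):
        self._data = {}  # tenant_id -> that tenant's private records

    def write(self, tenant_id: str, key: str, value):
        self._data.setdefault(tenant_id, {})[key] = value

    def read(self, tenant_id: str, key: str):
        # Lookups are always scoped to the calling tenant.
        return self._data.get(tenant_id, {}).get(key)

store = MultitenantStore()
store.write("tenant-a", "theme", "dark")    # tenant-a customizes its UI
store.write("tenant-b", "theme", "light")   # tenant-b's customization is separate
assert store.read("tenant-a", "theme") == "dark"
assert store.read("tenant-b", "theme") == "light"
```

The same tenant-scoping idea underlies per-tenant backup/restore and per-tenant metered usage: every operation carries a tenant identity.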
A Simple Example
6. Service Technology
Several prominent service technologies are used to realize and build upon cloud-based environments:

• Web Service Description Language (WSDL) – This markup language is used to create a
WSDL definition that defines the application programming interface (API) of a Web
service
• XML Schema Definition Language (XML Schema) – Messages exchanged by Web
services must be expressed using XML. XML schemas are created to define the data
structure of the XML-based input and output messages exchanged by Web
services.
• SOAP – Formerly known as the Simple Object Access Protocol, this standard defines a
common messaging format used for request and response messages
exchanged by Web services.
• Universal Description, Discovery, and Integration (UDDI) – This standard regulates
service registries in which WSDL definitions can be published as part of a service catalog
for discovery purposes.
Serverless Computing

● Serverless Computing (or simply serverless) is emerging as a new and compelling model

for the deployment of cloud applications

● Conventionally, applications were written and run on servers that were allocated fixed resources. Problems soon arose with sudden spikes of traffic as demand increased, and the servers were not able to handle the enormous number of requests. To address these problems came Platform as a Service (PaaS), in which providers offered scaling, but it has its drawbacks.

● It is a platform for rapidly deploying small pieces of cloud-native code


Introduction
There are many immediate benefits to not managing your own servers:
● You don't have to worry about them
randomly rebooting or going down.
● You don't end up with snowflake servers,
where you don't know quite what's installed
on them but they are mission-critical to your
organisation.
What is Serverless Computing?

● Serverless Computing is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources, and bills based on the actual amount of resources consumed by an application, rather than billing based on pre-purchased units of capacity.

● The version of serverless that explicitly uses functions as the

deployment unit is also called Function-as-a-Service (FaaS).


What is Serverless Computing?

• The Infrastructure-as-a-Service (IaaS) model is where the developer has the most control over both the application code and operating infrastructure in the cloud.
• The developer is responsible for provisioning the hardware or virtual
machines.
• Can customize every aspect of how an application gets deployed and
executed.
• On the opposite extreme are the PaaS and SaaS models, where the developer is unaware of any infrastructure.
• The developer has access to prepackaged components or full applications. The developer is allowed to host code here, though that code may be tightly coupled to the platform.
Serverless?

• Serverless can be explained by the varying levels of developer control over the cloud infrastructure.

Figure: The serverless control spectrum. At one end, IaaS (hardware/VM deployment) gives the developer custom infrastructure and custom application code, with the most developer control. Serverless sits in the middle: customer application code running on shared infrastructure. At the other end, SaaS (full-stack services) shares both infrastructure and service code, with the least developer control.
Architecture
• Servers are still needed, but developers need not concern themselves with managing those servers.
• Decisions such as the number of servers and their capacity are taken care of by the serverless platform, with server capacity automatically provisioned as needed by the workload.
• The core capability of a serverless platform is that of an event processing system.
• The service must manage a set of user-defined functions and take an event sent over HTTP or received from an event source.
Architecture
• The challenge is to implement such functionality while considering metrics such as cost, scalability, and fault tolerance.
• The platform must quickly and efficiently start a function and process its input.
• The platform also needs to queue events.
Serverless : Characteristics

• Independent, server-side, logical functions : small, separate, units of logic that take input
arguments, process them in some manner, then return the result.
• Cost : Typically its Pay As You Go
• Simple Deployment : deployments are simple and quick.
• Ephemeral : designed to spin up quickly, do their work and then shut down again.
• Programming languages : Serverless services support a wide variety of programming
languages - Node, Python.

• Stateless : FaaS are stateless, not storing state, as the containers running the code are automatically destroyed.
Serverless : Characteristics

• Scalable by Default
• Event Triggered : Although functions can be invoked directly, they are typically triggered by events from other cloud services, such as incoming HTTP requests.
• Simple Deployment Model.
• Small Deployable Units and More focus on
Business Value.
• Managed by third party .
• No more “Works on my Machine”
Commercial platforms

• Amazon’s AWS Lambda

• Google’s Cloud Functions

• Microsoft Azure Functions

• IBM Cloud Functions

• OpenLambda
Amazon's AWS Lambda
• Amazon's AWS Lambda was the first serverless platform. It is "a compute service that lets you run code without provisioning or managing servers."
• AWS Lambda executes code only when needed and scales automatically, from a few requests per day to thousands per second.
• Pay only for the compute time consumed.
• Can run code for virtually any type of application or backend service.
Amazon's AWS Lambda
• Currently AWS Lambda supports Node.js, Java, C#, Go, Python and PowerShell.
• AWS Lambda automatically scales the application by running code in response to each trigger.
• With AWS Lambda, we are charged for every 100ms of execution time.
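A minimal Lambda-style function might look like the following sketch. The handler signature (event, context) follows Lambda's Python runtime convention; the event contents and handler name here are a hypothetical example, and deployment/trigger configuration is omitted.

```python
# A minimal serverless function in the style of AWS Lambda's Python
# runtime. The platform invokes the handler once per event; the
# developer never provisions or manages a server.

def lambda_handler(event, context):
    """Handle one event; 'event' shape here is an invented example."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, the handler can be exercised directly for testing:
print(lambda_handler({"name": "cloud"}, None))
# {'statusCode': 200, 'body': 'Hello, cloud!'}
```

Note how the function is stateless: everything it needs arrives in the event, which is what lets the platform scale it by simply running more copies.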


Google's Cloud Functions
• Google Cloud Functions provides basic FaaS functionality to run serverless functions written in Node.js, Go, Python and Java.
• Automatically scales, highly available and fault tolerant.
• No servers to provision, manage, or upgrade
• Pay only while your code runs.
Microsoft Azure Functions
• Microsoft Azure Functions provides HTTP webhooks and integration with Azure services to run user-provided functions.
• The platform supports C#, F#, Node.js, Python, Java and PowerShell.
• Pay only for the time spent running your code with
Consumption plan.
• The runtime code is open-source and available on
GitHub under an MIT License.
IBM Cloud Functions (OpenWhisk)

• IBM OpenWhisk provides event-based serverless programming with the ability to


chain serverless functions to create composite functions.

• It supports Node.js, Java, Swift, Python, as well as arbitrary binaries embedded in a


Docker container.

• OpenWhisk is available on GitHub under an Apache open source license.


OpenLambda
• OpenLambda is an open-source serverless computing platform. The source code is available on GitHub under an Apache License.
• The Lambda model allows developers to specify functions that run in response to various events.
• OpenLambda will consist of a number of subsystems that will coordinate to run Lambda handlers.
Benefits
• Compared to IaaS platforms, serverless architectures offer different tradeoffs in terms of control, cost, and flexibility.
• The serverless paradigm has advantages for both consumers and providers.
• From the consumer perspective, a cloud developer no longer needs to provision and manage servers, VMs, or containers as the basic computational building block for offering distributed services.
• The stateless programming model gives the provider more control over the software stack, allowing them to, among other things, more transparently deliver security patches and optimize the platform.
Conclusion
• Serverless is an evolution of the trend towards higher levels of abstraction in cloud programming models, currently exemplified by Function-as-a-Service (FaaS).
• There are also some drawbacks to serverless computing, like vendor lock-in and vendor control.
• The developers are dependent on vendors for debugging and monitoring
tools. Debugging distributed systems is difficult and usually requires access
to a significant amount of relevant metrics to identify the root cause.
Containers
Containers and virtual machines are technologies that make your applications independent from your IT infrastructure resources. A container is a software code package containing an application's code, its libraries, and other dependencies.

Containerization makes your applications portable so that the same code can run on any device. A virtual machine is a digital copy of a physical machine. You can have multiple virtual machines with their own individual operating systems running on the same host operating system. In addition, you can create a virtual machine that contains everything required to run your application.
Rather than run a full OS, a container is layered on top of the host OS and uses that OS's resources. For example, you can run a web server in one container and a database server in another; these two containers can discover each other and communicate as needed.
Containers
• Containers have the advantage of being extremely lightweight. Once you have downloaded a container to a host, you can start it and the application(s) that it contains quasi-instantly.

• VMs, because they are complete OS instances, can take a few minutes to start up.
• Building a container to run a single application is simple compared with the task of customizing a VM to run a single application. All you need to do is create a script that identifies the needed libraries, source files, and data.
• Containers also have downsides. The most serious issue is security. Because containers share the same host OS instance, two containers running on the same host are less isolated from each other than two VMs are.
Docker
• Docker is a software platform that allows you to build, test, and deploy applications quickly.
• Docker packages software into standardized units called containers that have everything the software needs to run, including libraries, system tools, code, and runtime.
• Using Docker, you can quickly deploy and scale applications into any environment and know your code will run.
• Docker is an operating system for containers.
• Docker is installed on each server and provides simple commands you can use to build, start, or stop containers.
