
Specialist Officer

Exam 2024

GENERAL
IT KNOWLEDGE
Cloud Computing
For Notes & Test
Series
www.piyushwairale.com
Piyush Wairale
MTech, IIT Madras
Course Instructor at IIT Madras BS Degree

SBI SO Test Series 2024


General IT Knowledge Tests

No. of Tests: 5
Price: Rs.300

Get at Rs.200, use code SBI100 to get Rs.100 Off


(Offer Valid for Limited Seats)

Click here to register for Test Series


Preparing for GATE DA 2025???

www.piyushwairale.com
Cloud Computing Notes
by Piyush Wairale

Instructions:
• Kindly go through the lectures/videos on our website www.piyushwairale.com
• Read this study material carefully and make your own handwritten short notes. (Short notes must not be
more than 5-6 pages)

• Attempt the mock tests available on the portal.


• Revise this material at least 5 times and once you have prepared your short notes, then revise your short
notes twice a week
• If you are not able to understand any topic, require a detailed explanation, or find any typos or mistakes in the study material, mail me at [email protected]

Contents
1 Introduction to Cloud Computing

2 Characteristics of Cloud Computing

3 Types of Cloud Services (Service Models)
3.1 Infrastructure as a Service (IaaS)
3.2 Platform as a Service (PaaS)
3.3 Software as a Service (SaaS)

4 Types of Cloud Deployment Models
4.1 Public Cloud
4.2 Private Cloud
4.3 Hybrid Cloud
4.4 Multi-cloud
4.5 Community Cloud

5 Public vs. Private Cloud: A Comparative Overview
5.1 Public Cloud
5.1.1 Key Characteristics
5.1.2 Advantages
5.1.3 Challenges
5.2 Private Cloud
5.2.1 Key Characteristics
5.2.2 Advantages
5.2.3 Challenges
5.3 Key Differences: Public Cloud vs. Private Cloud
5.4 Which to Choose: Public or Private?
5.5 Hybrid Cloud: A Middle Ground

6 Comparison of Distributed Computing, Parallel Computing, and Cloud Computing

7 Virtualization

8 What are hypervisors?
8.1 What's the difference between Type 1 and Type 2 Hypervisors?
8.2 Server-based vs Hypervisor-based Virtualization
8.3 Full vs Para Virtualization

9 Containers vs. Virtual Machines
9.1 What is a container?
9.2 What is a virtual machine?
9.3 Comparison of Containers vs. Virtual Machines

10 Containerization

11 Continuous Integration and Continuous Delivery (CI/CD)

12 References

LinkedIn

Youtube Channel

Instagram

Telegram Group

Facebook

Download Android App


1 Introduction to Cloud Computing
• Cloud computing refers to the delivery of computing services—servers, storage, databases, networking, soft-
ware, analytics, and intelligence—over the internet (often called “the cloud”). Instead of owning physical data
centers and servers, users can rent computing resources on demand, which allows them to store, manage, and
process data more efficiently and cost-effectively.
• Cloud computing is the on-demand availability of computing resources (such as storage and infrastructure),
as services over the internet. It eliminates the need for individuals and businesses to manage physical
resources themselves, and they pay only for what they use.
• Cloud computing service models are based on the concept of sharing on-demand computing resources, soft-
ware, and information over the internet. Companies or individuals pay to access a virtual pool of shared
resources, including compute, storage, and networking services, which are located on remote servers that are
owned and managed by service providers.
• One of the many advantages of cloud computing is that you only pay for what you use. This allows orga-
nizations to scale faster and more efficiently without the burden of having to buy and maintain their own
physical data centers and servers.
• In simpler terms, cloud computing uses a network (most often, the internet) to connect users to a cloud
platform where they request and access rented computing services. A central server handles all the commu-
nication between client devices and servers to facilitate the exchange of data. Security and privacy features
are common components to keep this information secure and safe.
• When adopting cloud computing architecture, there is no one-size-fits-all. What works for another company
may not suit you and your business needs. In fact, this flexibility and versatility is one of the hallmarks of
cloud, allowing enterprises to quickly adapt to changing markets or metrics.
• The most common cloud computing deployment models are public cloud, private cloud, and hybrid cloud; multi-cloud and community cloud are also covered later in these notes.

2 Characteristics of Cloud Computing


Cloud computing is defined by several key characteristics that differentiate it from traditional IT infrastructure:

• On-Demand Self-Service: Users can access computing resources (such as server time and network storage)
as needed, without requiring human interaction with each service provider.
• Broad Network Access: Cloud services are available over the network and can be accessed through standard
mechanisms by a variety of devices (e.g., laptops, mobile phones, tablets, etc.).
• Resource Pooling: The cloud provider pools computing resources to serve multiple consumers, using a
multi-tenant model. Resources such as storage, processing, memory, and network bandwidth are shared
among users, ensuring high availability.

• Rapid Elasticity: Cloud services can be rapidly and elastically provisioned to scale up or down based on
demand. For end users, this gives the impression of unlimited resources.
• Measured Service: Cloud computing systems automatically control and optimize resource use by leveraging
a metering capability. Resources such as storage, processing, bandwidth, and user accounts are tracked,
enabling a pay-per-use model (a small illustrative metering sketch follows this list).

• Cost Efficiency: The pay-as-you-go model allows organizations to avoid upfront infrastructure costs and
pay only for what they consume.
• Security: Cloud providers offer enhanced security, with features such as data encryption, secure access
controls, and compliance with industry standards like GDPR and HIPAA.
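
As a rough illustration of how a metered, pay-per-use bill comes together, the following Python sketch multiplies metered usage by unit rates and sums the result. All rates and usage figures are invented for demonstration and do not reflect any real provider's pricing.

    # Illustrative pay-per-use metering: bill = sum of (metered usage x unit rate).
    # All rates and usage numbers are made up for demonstration purposes only.
    rates = {
        "vm_hours": 0.046,          # price per VM-hour
        "storage_gb_month": 0.023,  # price per GB-month of object storage
        "egress_gb": 0.09,          # price per GB of outbound traffic
    }
    usage = {"vm_hours": 720, "storage_gb_month": 50, "egress_gb": 12}

    bill = sum(usage[resource] * rates[resource] for resource in usage)
    print(f"Monthly pay-per-use bill: ${bill:.2f}")
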
3 Types of Cloud Services (Service Models)
Cloud computing is commonly divided into three main service models, each offering different levels of control,
flexibility, and management:

3.1 Infrastructure as a Service (IaaS)


Description: IaaS provides basic infrastructure services such as virtual machines, networking, and storage. It
allows businesses to rent IT infrastructure from a cloud provider, replacing the need for on-premises servers.
Examples: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP).
Use Case: Organizations seeking flexibility and scalability while controlling the operating system and applications,
such as hosting web applications or running development and testing environments.
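
To make the idea of renting infrastructure programmatically concrete, here is a hedged sketch using boto3, the AWS SDK for Python, to launch a single virtual machine on EC2. It assumes AWS credentials are already configured; the region, AMI ID, instance type, and key pair name are placeholders to be replaced with values valid in your own account.

    # Hedged sketch: provisioning an IaaS virtual machine with boto3 (AWS SDK for Python).
    # The AMI ID and key pair name below are placeholders, not real values.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image ID
        InstanceType="t3.micro",          # small instance type, for illustration
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",            # placeholder SSH key pair name
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched instance:", instance_id)

The same pattern applies to other IaaS providers through their own SDKs; only the client library and parameters change.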

3.2 Platform as a Service (PaaS)


Description: PaaS provides a platform allowing customers to develop, run, and manage applications without
dealing with the underlying infrastructure.
Examples: Microsoft Azure App Services, Google App Engine, Heroku.
Use Case: Developers who want to focus on coding and app management without managing infrastructure. It is
commonly used for web and mobile application development.

3.3 Software as a Service (SaaS)


Description: SaaS delivers software applications over the internet, on a subscription basis. It is fully managed
by the provider, meaning users only need to worry about using the software without installation, maintenance, or
updates.
Examples: Google Workspace, Microsoft 365, Salesforce, Dropbox.
Use Case: End-users or organizations who want to access software through a web browser or app without the
need for installation or maintenance.

Figure 1: credit: https://www.clairvoyant.ai/blog/cloud-computing-architecture-an-overview

Shared Responsibility Model in Cloud Computing


The shared responsibility model is a crucial concept in cloud computing that outlines the division of responsibilities
between the cloud service provider and the consumer. Understanding this model helps define who is accountable
for various aspects of cloud security and maintenance.
Traditional Datacenter Responsibilities
In a traditional on-premises datacenter setup, the organization is fully responsible for all aspects of the datacenter
management, including:
• Physical Space and Security: Managing and securing the physical premises.
• Power, Cooling, and Hardware: Ensuring power supply, cooling systems, and maintaining server hard-
ware.
• Software and Infrastructure: Installing, updating, and patching software systems, along with ensuring
network and data security.

The Shift to the Shared Responsibility Model


With cloud computing, the shared responsibility model changes the dynamics of managing infrastructure and data
security. Here, responsibilities are divided between the cloud provider and the consumer based on the service
type:

Cloud Provider’s Responsibilities


• Physical Security: The cloud provider is responsible for securing the physical infrastructure, including the
datacenter premises, power management, and network connectivity.
• Physical Hardware Maintenance: Maintaining, replacing, and upgrading physical hardware like servers,
network devices, and storage units.

Consumer’s Responsibilities
• Data and Information: Consumers are responsible for managing and securing their data stored in the
cloud. This includes data classification, encryption, and access controls.
• Access Security: Ensuring that only authorized personnel, services, or devices have access to the cloud
resources.
• Endpoint Management: Securing the devices (e.g., laptops, smartphones) that connect to the cloud
resources.

Responsibility Based on Service Models


The shared responsibility model also varies based on the cloud service type—Infrastructure as a Service (IaaS),
Platform as a Service (PaaS), and Software as a Service (SaaS).
• Infrastructure as a Service (IaaS): The cloud provider manages the physical infrastructure, networking,
and virtualization layer. The consumer is responsible for managing the operating system, applications, and
data.
• Platform as a Service (PaaS): The cloud provider manages the physical infrastructure, networking, op-
erating system, and runtime environment. The consumer is responsible for managing applications and data.
• Software as a Service (SaaS): The cloud provider manages everything from the physical infrastructure to
applications. The consumer is responsible only for managing the data and access controls.
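
The division described above can be summarized as a simple lookup table. The Python sketch below is purely illustrative: the layer names are simplified and do not correspond to any provider's official responsibility matrix.

    # Illustrative summary of the shared responsibility model per service type.
    # Layer names are simplified for demonstration only.
    MODELS = ("IaaS", "PaaS", "SaaS")

    RESPONSIBILITY = {
        #  layer               IaaS        PaaS        SaaS
        "physical hardware": ("provider", "provider", "provider"),
        "virtualization":    ("provider", "provider", "provider"),
        "operating system":  ("consumer", "provider", "provider"),
        "runtime":           ("consumer", "provider", "provider"),
        "applications":      ("consumer", "consumer", "provider"),
        "data and access":   ("consumer", "consumer", "consumer"),
    }

    def who_manages(layer: str, model: str) -> str:
        """Return 'provider' or 'consumer' for a given layer and service model."""
        return RESPONSIBILITY[layer][MODELS.index(model)]

    print(who_manages("operating system", "PaaS"))  # provider
    print(who_manages("data and access", "SaaS"))   # consumer
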

Example Scenarios of Shared Responsibility


1. Cloud SQL Database: If you’re using a managed SQL database service provided by the cloud provider,
they are responsible for maintaining the underlying database infrastructure, including updates and patches.
However, the consumer is responsible for managing and securing the data stored in the database.
2. Virtual Machine (VM) Deployment: If you deploy a VM and install your own SQL database, you
become responsible for maintaining the VM, applying patches and updates, and managing the data within
it.
Here, Microsoft is considered as the cloud provider.

Figure 2: credit: Microsoft Learn

4 Types of Cloud Deployment Models


Cloud services can be deployed in different ways, depending on organizational needs:

4.1 Public Cloud


Description: In the public cloud, services are offered to the general public over the internet. Cloud resources such
as storage and servers are owned and operated by third-party providers, and multiple customers share the same
infrastructure.
Advantages: Lower costs, no maintenance, high reliability, and flexibility in scaling.
Examples: AWS, Google Cloud, Microsoft Azure.

4.2 Private Cloud


Description: Private clouds are dedicated exclusively to a single organization. They can be hosted on-premises
or by a third-party provider but offer more control and security compared to the public cloud.
Advantages: Greater control, enhanced security, and compliance with data privacy laws.
Examples: VMware Private Cloud, OpenStack, Microsoft Azure Stack.

4.3 Hybrid Cloud


Description: Hybrid clouds combine public and private clouds, allowing data and applications to be shared
between them. This model enables businesses to leverage the benefits of both, such as keeping sensitive data in the
private cloud while using the public cloud for scalable workloads.
Advantages: Flexibility, scalability, and optimized infrastructure.
Examples: AWS Outposts, Microsoft Azure Arc, Google Anthos.

4.4 Multi-cloud
A fourth, and increasingly common, scenario is multi-cloud. In a multi-cloud scenario, you use multiple
public cloud providers. Maybe you use different features from different cloud providers. Or maybe you started
your cloud journey with one provider and are in the process of migrating to a different provider. Regardless, in a
multi-cloud environment you deal with two (or more) public cloud providers and manage resources and security in
both environments.
4.5 Community Cloud
Description: A community cloud is shared by several organizations with common concerns, such as industry
standards, security, compliance requirements, or shared mission objectives. It can be managed internally or by a
third-party vendor.
Advantages: Shared costs and resources between organizations with similar needs.
Examples: Government agencies, research institutions, universities.

Benefits of Cloud Computing


• Cost Reduction: Cloud eliminates the capital expense of buying hardware and software, and setting up
and running on-site data centers.

• Scalability: Cloud services can be scaled based on business needs, making them highly adaptable to changing
workloads.
• Disaster Recovery: Cloud computing offers robust disaster recovery solutions due to its distributed nature.
• Accessibility: Cloud-based applications and data are accessible from any location with an internet connec-
tion.

• Collaboration: Cloud platforms allow for easier collaboration as users can access shared resources from
anywhere.

Challenges of Cloud Computing


• Downtime: Cloud services can experience outages, impacting business operations.
• Security Risks: Data security and privacy concerns are significant as organizations store sensitive informa-
tion on third-party servers.
• Compliance: Not all cloud providers adhere to industry-specific regulatory standards, which can complicate
data management for companies with strict compliance requirements.
• Vendor Lock-In: Switching cloud providers can be challenging, leading to dependency on specific vendors.
5 Public vs. Private Cloud: A Comparative Overview
Introduction
Cloud computing is generally categorized into different deployment models, with Public Cloud and Private
Cloud being the most common. Both offer unique advantages and challenges based on the needs of the organization.

5.1 Public Cloud


Definition: The public cloud refers to computing services offered by third-party providers over the public internet.
These resources are shared among multiple users (tenants), who can access services like storage, servers, and
networking on a pay-per-use model.

5.1.1 Key Characteristics


• Multi-tenancy: Multiple customers share the same hardware, storage, and network devices.

• Cost Efficiency: Typically cheaper, as the costs of hardware and maintenance are shared across multiple
customers.
• Scalability: Highly scalable, offering near-instant provisioning of additional resources to accommodate vari-
able demand.

• Maintenance: Managed by the cloud provider, which handles infrastructure, security updates, and compli-
ance.
• Accessibility: Services are accessible from anywhere via the internet, supporting geographically distributed
teams.

5.1.2 Advantages
• Lower Costs: No need for large upfront investments in hardware or infrastructure.

• High Availability: Public cloud providers offer extensive redundancy and failover mechanisms, ensuring
high availability.
• Flexibility: Resources can be scaled up or down depending on business needs.
• No Maintenance: The cloud provider takes care of all infrastructure management, freeing the client from
these responsibilities.
• Global Reach: Providers like AWS, Azure, and Google Cloud offer data centers worldwide, allowing busi-
nesses to serve global customers with low latency.

5.1.3 Challenges
• Security Concerns: Since resources are shared, there can be concerns about data security and compliance,
especially for sensitive data.
• Limited Control: Organizations have limited control over infrastructure, making it harder to customize
environments.
• Compliance Issues: Some industries (e.g., healthcare, finance) have strict compliance requirements, which
may limit the use of public clouds for certain data or applications.

Examples of Public Cloud Providers:

• Amazon Web Services (AWS)


• Microsoft Azure
• Google Cloud Platform (GCP)
5.2 Private Cloud
Definition: A private cloud is dedicated to a single organization, either hosted on-premises or by a third-party
provider. Unlike public cloud services, the infrastructure is not shared with other users, offering more control and
security.

5.2.1 Key Characteristics


• Single-Tenancy: The infrastructure is used exclusively by one organization, providing greater control over
data and applications.
• Customization: The organization can tailor its cloud environment to meet specific needs, such as perfor-
mance, security, and compliance requirements.
• Higher Security: With dedicated hardware, there is no sharing of resources, which reduces the risk of data
breaches.
• Maintenance: While the organization has more control, they are also responsible for maintaining, updating,
and securing the infrastructure.
• Accessibility: Access can be restricted to a private network, enhancing security, but limiting remote acces-
sibility unless explicitly configured.

5.2.2 Advantages
• Enhanced Security: Provides better control over data privacy and compliance, particularly important for
highly regulated industries.

• Full Customization: The organization has control over all aspects of the cloud environment, including
software and hardware configurations.
• Compliance: Easier to comply with strict regulatory requirements, such as HIPAA, GDPR, or PCI-DSS.
• High Performance: Dedicated resources can result in higher performance for specific workloads, without
the latency concerns of a shared environment.

5.2.3 Challenges
• Higher Costs: Requires higher upfront investments in infrastructure, and ongoing maintenance costs are
generally higher than public cloud.
• Limited Scalability: Scaling resources requires investing in new hardware, which may not be as rapid or
cost-effective as the public cloud.

• Complex Management: The organization needs IT staff to manage and maintain the private cloud, which
can add complexity and overhead.

Examples of Private Cloud Solutions:


• VMware Private Cloud
• OpenStack

• Microsoft Azure Stack


5.3 Key Differences: Public Cloud vs. Private Cloud

Aspect | Public Cloud | Private Cloud
Ownership | Third-party provider | Owned or exclusively used by a single organization
Cost | Lower costs, pay-as-you-go pricing model | Higher upfront costs, ongoing maintenance expenses
Scalability | Highly scalable, near-instant provisioning | Limited by hardware capacity, slower to scale
Security | Standard security, shared infrastructure | Higher security, dedicated infrastructure
Control | Limited control over infrastructure | Full control over environment and configurations
Customization | Standardized offerings, limited customization | Highly customizable to meet specific needs
Maintenance | Managed by the provider | Managed internally or through a third party
Compliance | May face challenges with strict regulations | Easier to meet regulatory compliance
Accessibility | Accessible over the internet globally | Typically restricted to the organization's network


5.4 Which to Choose: Public or Private?
The decision between public and private cloud depends on several factors:

• Cost Considerations: Public cloud is more cost-effective for organizations with less sensitive workloads or
when rapid scaling is required. Private cloud, on the other hand, is suitable for organizations that require
greater control and security, despite the higher cost.

• Security and Compliance: For industries like healthcare or finance, private cloud often makes more sense
due to its higher security levels and easier compliance management.
• Workload Nature: Applications that require significant customization or high performance may benefit
more from private cloud environments, while public cloud is ideal for general-purpose workloads with fluctu-
ating demand.

5.5 Hybrid Cloud: A Middle Ground


Hybrid Cloud is a combination of both public and private cloud environments, allowing data and applications
to be shared between them. This model offers greater flexibility, combining the security of private clouds with the
scalability of public clouds.
6 Comparison of Distributed Computing, Parallel Computing, and
Cloud Computing
Introduction
Distributed computing, parallel computing, and cloud computing are three different computational paradigms that
help organizations and researchers solve complex problems more efficiently. This document highlights their key
characteristics, similarities, and differences.

Definitions
• Distributed Computing: A computational approach where multiple independent computers (nodes) work
together to solve a problem. Each node in the system operates independently, and they communicate over a
network to share tasks and results.

• Parallel Computing: A computational technique where multiple processors work simultaneously on differ-
ent parts of a single problem. It involves breaking down a problem into smaller sub-problems that can be
solved concurrently to achieve faster results.
• Cloud Computing: A model of delivering computing services over the internet. It provides on-demand
access to computing resources (servers, storage, databases, etc.) and services, allowing users to leverage
distributed resources without owning the underlying infrastructure.

The following image illustrates the architectural differences between Distributed Computing and Parallel Com-
puting:

Distributed Computing
• Description: Distributed computing involves multiple independent systems (nodes), each with its own
processor and memory, connected over a network.

• Structure: Each node operates independently with its own resources and communicates with other nodes
over the network to solve complex problems collaboratively.
• Use Case: Distributed computing is used for scenarios that require high availability, scalability, and fault
tolerance, such as large-scale scientific simulations, cloud-based services, and distributed databases.

Parallel Computing
• Description: Parallel computing involves multiple processors working simultaneously within a single system
that shares the same memory.
• Structure: All processors are tightly coupled and communicate through shared memory, allowing them to
process multiple tasks in parallel.
• Use Case: Parallel computing is ideal for scenarios requiring high computational power and fast processing
speeds, such as image processing, scientific simulations, and large-scale data analysis.
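
To make the parallel model concrete, the sketch below splits one task (summing squares) across several worker processes on a single machine using Python's multiprocessing module. Worker processes do not literally share memory the way tightly coupled processors do, but the sketch illustrates the core idea: one problem, divided into chunks, computed simultaneously on one machine's cores.

    # Minimal parallel-computing sketch: several workers on one machine compute
    # different chunks of the same problem at the same time.
    from multiprocessing import Pool

    def partial_sum(chunk):
        """Sum of squares for one slice of the overall problem."""
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]   # split the problem four ways
        with Pool(processes=4) as pool:           # four workers run concurrently
            total = sum(pool.map(partial_sum, chunks))
        print("Sum of squares:", total)
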

Key Differences
• Distributed Computing: The system is composed of multiple independent nodes. Each node has its own
processor and memory and communicates with other nodes over a network.
• Parallel Computing: The system uses multiple processors within a single system, sharing the same memory.
The processors work simultaneously to execute different parts of a task in parallel.
Comparison Table

Criteria | Distributed Computing | Parallel Computing | Cloud Computing
Definition | Multiple autonomous systems working together, communicating over a network, to solve complex tasks. | Multiple processors or cores working together on a single machine to perform simultaneous computations. | Provision of on-demand computing services (servers, storage, applications) over the internet, with a distributed infrastructure managed by a third-party provider.
Architecture | Comprised of multiple independent nodes, often geographically distributed, working as a cohesive unit. | Tightly coupled processors that share memory and perform operations simultaneously. | Virtualized and distributed infrastructure across multiple data centers, managed and maintained by a cloud provider.
Communication | Communication happens over a network (e.g., LAN, WAN) between nodes using message passing or distributed shared memory. | Communication occurs through shared memory, making synchronization and data consistency crucial. | Internet-based communication between client and server. Consumers access resources via APIs, SDKs, or management portals.
Scalability | Highly scalable, as new nodes can be added to the system with minimal disruption. | Limited by the number of processors or cores in a single system. | Extremely scalable, as resources can be dynamically allocated or deallocated based on demand.
Fault Tolerance | High fault tolerance, as failures in individual nodes do not impact the overall system. Tasks can be reallocated to other nodes. | Lower fault tolerance. Failure in one processor can halt or affect the entire system's operation. | High fault tolerance through redundancy and replication. Cloud providers ensure data and service availability even during outages.
Resource Utilization | Resources are spread across multiple systems, and efficiency depends on network latency and load balancing. | Maximizes the use of CPU cores and memory within a single machine, leading to high resource utilization. | Optimized resource allocation through virtualization, ensuring optimal use and cost efficiency.
Cost | High setup and maintenance cost due to the need for multiple machines and networking equipment. | Lower cost for small setups but can be expensive for large systems with many processors. | Pay-as-you-go model. Cost is based on resource consumption, with no upfront capital expenses.
Examples | Apache Hadoop, distributed databases, blockchain networks. | Multi-core processors, GPUs, supercomputers like Cray, MPI (Message Passing Interface). | AWS, Microsoft Azure, Google Cloud Platform (GCP).

Table 1: Comparison of Distributed Computing, Parallel Computing, and Cloud Computing

Use Cases
• Distributed Computing: Used in applications that require high availability and fault tolerance, such as
distributed databases, scientific research, and complex simulations.

• Parallel Computing: Used in applications that require intensive computations, such as image processing,
scientific simulations, and machine learning model training.
• Cloud Computing: Used for on-demand access to scalable resources and services, including web hosting,
data analytics, machine learning, and serverless computing.
7 Virtualization
Virtualization extends beyond server virtualization to include various other components of IT infrastructure, pro-
viding significant benefits to IT managers and enterprises. This document outlines different types of virtualization.

Types of Virtualization
1. Desktop Virtualization: Allows multiple desktop operating systems to run on the same computer using
Virtual Machines (VMs).

• Virtual Desktop Infrastructure (VDI): Runs multiple desktops in VMs on a central server and
streams them to users who log in from any device.
• Local Desktop Virtualization: Runs a hypervisor on a local computer, enabling the user to run one
or more additional operating systems without altering the primary OS.
2. Network Virtualization: Uses software to create a virtual view of the network, abstracting hardware
components such as switches and routers.
• Software-Defined Networking (SDN): Virtualizes hardware that controls network traffic routing.
• Network Function Virtualization (NFV): Virtualizes network hardware appliances such as firewalls
and load balancers, simplifying their configuration and management.

3. Storage Virtualization: Aggregates all storage devices into a single shared pool that can be accessed and
managed as a unified storage entity.
4. Data Virtualization: Creates a software layer between applications and data sources, allowing applications
to access data irrespective of source, format, or location.
5. Application Virtualization: Allows applications to run without being installed on the user’s operating
system.
• Local Application Virtualization: Runs the entire application on the endpoint device in a runtime
environment.
• Application Streaming: Streams parts of the application to the client device as needed.
• Server-based Application Virtualization: Runs applications entirely on a server, sending only the
interface to the client.
6. Data Center Virtualization: Abstracts the data center’s hardware to create multiple virtual data centers,
enabling multiple clients to access their own infrastructure.
7. CPU Virtualization: Divides a single CPU into multiple virtual CPUs, enabling multiple VMs to share
processing power.
8. GPU Virtualization: Allows multiple VMs to utilize the processing power of a single GPU for tasks such
as video rendering and AI computations.
• Pass-through GPU: Allocates the entire GPU to a single guest OS.
• Shared vGPU: Divides the GPU into multiple virtual GPUs for server-based VMs.
9. Linux Virtualization: Utilizes the kernel-based virtual machine (KVM) on Linux to create x86-based VMs.
It is highly customizable and supports security-hardened workloads.
10. Cloud Virtualization: Virtualizes resources such as servers, storage, and networking to provide:

• Infrastructure as a Service (IaaS): Virtualized server, storage, and network resources.


• Platform as a Service (PaaS): Development tools, databases, and services for building applications.
• Software as a Service (SaaS): Cloud-based software applications accessible from the web.
Benefits of Virtualization
• Cost Efficiency: Reduces hardware costs by running multiple VMs on a single physical machine.
• Scalability and Flexibility: Easily scalable infrastructure with dynamic allocation and deallocation of
resources.
• Simplified Management: Centralized management of virtual environments and resources.

• High Availability: Supports disaster recovery and high availability through VM backups and migrations.
• Resource Optimization: Maximizes hardware resource utilization and reduces power consumption.

Virtualization is a powerful technology that optimizes IT infrastructure, reduces costs, and provides flexibility
and scalability. Understanding the different types of virtualization can help organizations implement the best
solutions to meet their business needs.

Virtual machines
Virtual machines are virtual environments that simulate a physical computer in software form. They normally
comprise several files containing the VM’s configuration, the storage for the virtual hard drive, and some snapshots
of the VM that preserve its state at a particular point in time.

8 What are hypervisors?


• Virtualization requires the use of a hypervisor, which was originally called a virtual machine monitor or
VMM. A hypervisor abstracts operating systems and applications from their underlying hardware. The
physical hardware that a hypervisor runs on is typically referred to as a host machine, whereas the VMs that
the hypervisor creates and supports are collectively called guest machines, guest VMs or simply VMs.
• A hypervisor lets the host hardware operate multiple VMs independent of each other and share abstracted
resources among those VMs. Virtualization with a hypervisor increases a data center’s efficiency compared
to physical workload hosting.

• A hypervisor is the software layer that coordinates VMs. It serves as an interface between the VM and the
underlying physical hardware, ensuring that each has access to the physical resources it needs to execute.
It also ensures that the VMs don’t interfere with each other by impinging on each other’s memory space or
compute cycles.

• Type 1 hypervisor A type 1 hypervisor, or a bare metal hypervisor, interacts directly with the underlying
machine hardware. A bare metal hypervisor is installed directly on the host machine’s physical hardware,
not through an operating system. In some cases, a type 1 hypervisor is embedded in the machine’s firmware.
The type 1 hypervisor negotiates directly with server hardware to allocate dedicated resources to VMs. It
can also flexibly share resources, depending on various VM requests.
• Type 2 hypervisor A type 2 hypervisor, or hosted hypervisor, interacts with the underlying host machine
hardware through the host machine’s operating system. You install it on the machine, where it runs as an
application.
The type 2 hypervisor negotiates with the operating system to obtain underlying system resources. However,
the host operating system prioritizes its own functions and applications over the virtual workloads.
8.1 What’s the difference between Type 1 and Type 2 Hypervisors?
Type 1 and type 2 hypervisors are software you use to run one or more virtual machines (VMs) on a single physical
machine. A virtual machine is a digital replica of a physical machine. It’s an isolated computing environment that
your users experience as completely independent of the underlying hardware. The hypervisor is the technology that
makes this possible. It manages and allocates physical resources to VMs and communicates with the underlying
hardware in the background.

The type 1 hypervisor sits on top of the bare metal server and has direct access to the hardware resources.
Because of this, the type 1 hypervisor is also known as a bare metal hypervisor. In contrast, the type 2 hypervisor
is an application installed on the host operating system. It’s also known as a hosted or embedded hypervisor.

Why are type 1 and type 2 hypervisors important?


A hypervisor, sometimes called a virtual machine monitor (VMM), creates and coordinates virtual machines (VMs),
an essential technology in modern computing infrastructure. A hypervisor is what makes the virtualization of com-
puters and servers possible.

Virtualization is technology that you use to create virtual representations of hardware components like server
or network resources. The software representation uses the underlying physical resource to operate as if it were
a physical component. Similarly, a VM is a software-based instance of a computer, with elements like memory,
processing power, storage, and an operating system.

8.2 Server-based vs Hypervisor-based Virtualization


• Server-based Virtualization: In server-based virtualization, multiple virtual machines (VMs) run on a
single physical server. Each VM operates independently with its own operating system and applications.
This type of virtualization helps consolidate server resources and reduces hardware costs.
• Hypervisor-based Virtualization: Hypervisor-based virtualization uses a hypervisor layer to manage and
monitor multiple VMs on a host system. The hypervisor enables efficient allocation of resources and ensures
that VMs are isolated from one another. This approach provides better flexibility and performance.

8.3 Full vs Para Virtualization


• Full Virtualization:
– In full virtualization, the hypervisor creates a complete virtual replica of the underlying hardware.
– The guest operating system is not aware that it is running in a virtual environment and does not need
to be modified.
– This approach is suitable for running unmodified operating systems and applications.
• Para Virtualization:
– Para virtualization involves modifying the guest operating system to be aware of the virtual environment.
– It provides better performance by reducing the overhead caused by full virtualization.
– The guest OS interacts directly with the hypervisor, allowing more efficient resource utilization.
– Examples include Xen’s para virtualization mode.
9 Containers vs. Virtual Machines
9.1 What is a container?
Containers are lightweight software packages that contain all the dependencies required to execute the contained
software application. These dependencies include things like system libraries, external third-party code packages,
and other operating system level applications. The dependencies included in a container exist in stack levels that
are higher than the operating system.

Pros
• Iteration speed: Because containers are lightweight and only include high-level software, they are very fast to modify and iterate on.

• Robust ecosystem: Most container runtime systems offer a hosted public repository of pre-made containers. These container repositories contain many popular software applications, like databases or messaging systems, that can be instantly downloaded and executed, saving time for development teams.

Cons
• Shared host exploits: Because containers all share the same underlying hardware system below the operating system layer, it is possible that an exploit in one container could break out of the container and affect the shared hardware.
• Insecure public images: Most popular container runtimes have public repositories of pre-built containers. There is a security risk in using one of these public images, as they may contain exploits or may be vulnerable to being hijacked by nefarious actors.

Popular container providers


• Docker: Docker is the most popular and widely used container runtime. Docker Hub is a giant public repository of popular containerized software applications. Containers on Docker Hub can be instantly downloaded and deployed to a local Docker runtime (see the sketch after this list).

• RKT: Pronounced “Rocket”, RKT is a security-first focused container system. RKT containers do not allow insecure container functionality unless the user explicitly enables insecure features. RKT containers aim to address the underlying cross-contamination security issues that other container runtime systems suffer from.
• Linux Containers (LXC): The Linux Containers project is an open-source Linux container runtime system. LXC is used to isolate operating-system-level processes from each other. Early versions of Docker used LXC behind the scenes. Linux Containers aims to offer a vendor-neutral, open-source container runtime.
• CRI-O: CRI-O is an implementation of the Kubernetes Container Runtime Interface (CRI) that allows the use of Open Container Initiative (OCI) compatible runtimes. It is a lightweight alternative to using Docker as the runtime for Kubernetes.
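
As a small, hedged illustration of pulling a pre-built image from a public repository and running it, the sketch below uses the Docker SDK for Python (the docker package). It assumes the package is installed and a local Docker daemon is running; the redis:7 image is just an example.

    # Hedged sketch: pull a public image and run it as a container via the
    # Docker SDK for Python. Requires a running local Docker daemon.
    import docker

    client = docker.from_env()              # connect to the local Docker daemon
    client.images.pull("redis:7")           # download a popular pre-built image
    container = client.containers.run(
        "redis:7",
        name="demo-redis",
        detach=True,                        # run in the background
        ports={"6379/tcp": 6379},           # map the Redis port to the host
    )
    print("Started container:", container.short_id)

    container.stop()                        # clean up after the demonstration
    container.remove()
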

9.2 What is a virtual machine?


Virtual machines are heavy software packages that provide complete emulation of low level hardware devices like
CPU, Disk and Networking devices. Virtual machines may also include a complementary software stack to run on
the emulated hardware. These hardware and software packages combined produce a fully functional snapshot of a
computational system.

Pros
• Full isolation security Virtual machines run in isolation as a fully standalone system. This means that
virtual machines are immune to any exploits or interference from other virtual machines on a shared host.
An individual virtual machine can still be hijacked by an exploit but the exploited virtual machine will be
isolated and unable to contaminate any other neighboring virtual machines.
• Interactive development Containers are usually static definitions of the expected dependencies and config-
uration needed to run the container. Virtual machines are more dynamic and can be interactively developed.
Once the basic hardware definition is specified for a virtual machine the virtual machine can then be treated
as a bare bones computer. Software can manually be installed to the virtual machine and the virtual ma-
chine can be snapshotted to capture the current configuration state. The virtual machine snapshots can be
used to restore the virtual machine to that point in time or spin up additional virtual machines with that
configuration.

Cons
• Iteration speed: Virtual machines are time consuming to build and regenerate because they encompass a
full stack system. Any modifications to a virtual machine snapshot can take significant time to regenerate
and validate they behave as expected.
• Storage size cost: Virtual machines can take up a lot of storage space. They can quickly grow to several gigabytes in size. This can lead to disk space shortages on the virtual machine's host machine.

Popular virtual machine providers


• Virtualbox: Virtualbox is a free and open source x86 architecture emulation system owned by Oracle.
Virtualbox is one of the most popular and established virtual machine platforms with an ecosystem of sup-
plementary tools to help develop and distribute virtual machine images.
• VMware: VMware is a publicly traded company that built its business on one of the first x86 hardware virtualization technologies. VMware includes a hypervisor, a utility that deploys and manages multiple virtual machines, and has a robust UI for managing them. VMware is a strong enterprise virtual machine option with commercial support.
• QEMU: QEMU is the most robust hardware emulation virtual machine option. It has support for any generic
hardware architecture. QEMU is a command line only utility and does not offer a graphical user interface
for configuration or execution. This trade-off makes QEMU one of the fastest virtual machine options.

9.3 Comparison of Containers vs. Virtual Machines


Characteristics | Container | Virtual Machine
Definition | Software code package containing an application's code, its libraries, and dependencies, forming the application running environment. | Digital replica of a physical machine. Partitions the physical hardware into multiple environments.
Virtualization | Virtualizes the operating system. | Virtualizes the underlying physical infrastructure.
Encapsulation | Software layer above the operating system required for running the application or its components. | Operating system and all software layers above it, along with multiple applications.
Technology | Container engine coordinates with the underlying operating system for resources. | Hypervisor coordinates with the underlying operating system or hardware.
Size | Lightweight (measured in MB). | Larger (measured in GB).
Control | Less control over the environment outside the container. | More control over the entire environment.
Flexibility | Highly flexible. Allows quick migration between on-premises and cloud environments. | Less flexible. Migration can be challenging.
Scalability | Highly scalable with granular scalability possible using microservices. | Scaling can be costly. Requires switching from on-premises to cloud instances for cost-effective scaling.

Table 2: Summary of differences between Containers and Virtual Machines

10 Containerization
Containerization is a software deployment process that bundles an application’s code with all the files and libraries
it needs to run on any infrastructure. Traditionally, to run any application on your computer, you had to install the
version that matched your machine’s operating system. For example, you needed to install the Windows version
of a software package on a Windows machine. However, with containerization, you can create a single software
package, or container, that runs on all types of devices and operating systems.

What are the benefits of containerization?


Developers use containerization to build and deploy modern applications because of the following advantages.

• Portability Software developers use containerization to deploy applications in multiple environments without
rewriting the program code. They build an application once and deploy it on multiple operating systems. For
example, they run the same containers on Linux and Windows operating systems. Developers also upgrade
legacy application code to modern versions using containers for deployment.

• Scalability: Containers are lightweight software components that run efficiently. For example, a containerized application launches faster than a full virtual machine because it doesn't need to boot an operating system. Therefore, software developers can easily add multiple containers for different applications on a single machine. The container cluster uses computing resources from the same shared operating system, but one container doesn't interfere with the operation of other containers.

• Fault tolerance Software development teams use containers to build fault-tolerant applications. They use
multiple containers to run microservices on the cloud. Because containerized microservices operate in isolated
user spaces, a single faulty container doesn’t affect the other containers. This increases the resilience and
availability of the application.
• Agility Containerized applications run in isolated computing environments. Software developers can trou-
bleshoot and change the application code without interfering with the operating system, hardware, or other
application services. They can shorten software release cycles and work on updates quickly with the container
model.

What are containerization use cases?


The following are some use cases of containerization.

• Cloud migration: Cloud migration, or the lift-and-shift approach, is a software strategy that involves
encapsulating legacy applications in containers and deploying them in a cloud computing environment. Or-
ganizations can modernize their applications without rewriting the entire software code.
• Adoption of microservice architecture: Organizations seeking to build cloud applications with microser-
vices require containerization technology. The microservice architecture is a software development approach
that uses multiple, interdependent software components to deliver a functional application. Each microservice
has a unique and specific function. A modern cloud application consists of multiple microservices. For exam-
ple, a video streaming application might have microservices for data processing, user tracking, billing, and
personalization. Containerization provides the software tool to pack microservices as deployable programs on
different platforms.

• IoT devices: Internet of Things (IoT) devices contain limited computing resources, making manual software
updating a complex process. Containerization allows developers to deploy and update applications across
IoT devices easily.

How does containerization work?


Containerization involves building self-sufficient software packages that perform consistently, regardless of the ma-
chines they run on. Software developers create and deploy container images—that is, files that contain the necessary
information to run a containerized application. Developers use containerization tools to build container images
based on the Open Container Initiative (OCI) image specification. OCI is an open-source group that provides a
standardized format for creating container images. Container images are read-only and cannot be altered by the
computer system.
Container images are the top layer in a containerized system that consists of the following layers; a small image-build sketch follows the list.
• Infrastructure: Infrastructure is the hardware layer of the container model. It refers to the physical computer or bare-metal server that runs the containerized application.
• Operating system: The second layer of the containerization architecture is the operating system. Linux is a popular operating system for containerization with on-premises computers. In cloud computing, developers use cloud services such as AWS EC2 to run containerized applications.
• Container engine: The container engine, or container runtime, is a software program that creates containers
based on the container images. It acts as an intermediary agent between the containers and the operating
system, providing and managing resources that the application needs. For example, container engines can
manage multiple containers on the same operating system by keeping them independent of the underlying
infrastructure and each other.
• Application and dependencies: The topmost layer of the containerization architecture is the application
code and the other files it needs to run, such as library dependencies and related configuration files. This
layer might also contain a light guest operating system that gets installed over the host operating system.
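
As a hedged illustration of turning application code and its dependencies into a container image, the sketch below again uses the Docker SDK for Python. The directory path and tag are placeholders, and a Dockerfile is assumed to exist in that directory.

    # Hedged sketch: build a container image from a directory containing a
    # Dockerfile, then print its tags. Path and tag are placeholders.
    import docker

    client = docker.from_env()
    image, build_logs = client.images.build(
        path="./my-app",        # placeholder: directory with a Dockerfile
        tag="my-app:1.0",       # placeholder image name and tag
    )
    for entry in build_logs:    # build output is a stream of dictionaries
        line = entry.get("stream", "").rstrip()
        if line:
            print(line)
    print("Built image:", image.tags)
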

What is container orchestration?


Container orchestration is a software technology that allows the automatic management of containers. This is nec-
essary for modern cloud application development because an application might contain thousands of microservices
in their respective containers. The large number of containerized microservices makes it impossible for software
developers to manage them manually.
Benefits of container orchestration
Developers use container orchestration tools to automatically start, stop, and manage containers. Container or-
chestrators allow developers to scale cloud applications precisely and avoid human errors. For example, you can
verify that containers are deployed with adequate resources from the host platform.

What are the types of container technology?


The following are some examples of popular technologies that developers use for containerization.

• Docker Docker, or Docker Engine, is a popular open-source container runtime that allows software developers
to build, deploy, and test containerized applications on various platforms. Docker containers are self-contained
packages of applications and related files that are created with the Docker framework.
• Linux: Linux is an open-source operating system with built-in container technology. Linux containers are
self-contained environments that allow multiple Linux-based applications to run on a single host machine.
Software developers use Linux containers to deploy applications that write or read large amounts of data.
Linux containers do not copy the entire operating system to their virtualized environment. Instead, the
containers consist of necessary functionalities allocated in the Linux namespace.
• Kubernetes: Kubernetes is a popular open-source container orchestrator that software developers use to
deploy, scale, and manage a vast number of microservices. It has a declarative model that makes automating
containers easier. The declarative model ensures that Kubernetes takes the appropriate action to fulfil the
requirements based on the configuration files.
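
To give a feel for the declarative model mentioned above, here is a toy reconciliation loop in plain Python. It is not the Kubernetes API, only an illustration of the underlying idea: compare the declared (desired) state with the observed state and act on the difference.

    # Toy illustration of a declarative reconcile loop (not the Kubernetes API).
    desired = {"web": 3, "worker": 2}    # declared replica counts per service
    observed = {"web": 1, "worker": 4}   # replicas currently running

    def reconcile(desired, observed):
        """Return the actions needed to make observed state match desired state."""
        actions = []
        for name, want in desired.items():
            have = observed.get(name, 0)
            if have < want:
                actions.append(f"start {want - have} x {name}")
            elif have > want:
                actions.append(f"stop {have - want} x {name}")
        return actions

    for action in reconcile(desired, observed):
        print(action)   # -> start 2 x web, stop 2 x worker
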

What is a virtual machine?


A virtual machine (VM) is a digital copy of the host machine’s physical hardware and operating system. A host
machine might have several VMs sharing its CPU, storage, and memory. A hypervisor, which is software that
monitors VMs, allocates computing resources to all the VMs regardless of whether the applications use them.
Containerization compared to virtual machines
Containerization is similar to, but lighter-weight than, a VM. Instead of copying the hardware layer, containeriza-
tion removes the operating system layer from the self-contained environment. This allows the application to run
independently from the host operating system. Containerization prevents resource waste because applications are
provided with the exact resources they need.

What is serverless computing?


Serverless computing refers to a cloud computing technology where the cloud vendor fully manages the server
infrastructure powering an application. This means that developers and organizations do not need to configure,
maintain, or provision resources on the cloud server. Serverless computing allows organizations to automatically
scale computing resources according to the workload.
Containerization compared to serverless computing
Serverless computing allows instant deployment of applications because there are no dependencies such as libraries
or configuration files involved. The cloud vendor doesn’t charge for computing resources when the serverless
application is idle. Containers, on the other hand, are more portable, giving developers complete control of the
application’s environment.

What is cloud native?


Cloud native is a software development method that builds, tests, and deploys an application in the cloud. The
term cloud native means that the application is born and resides in a cloud computing environment. Organizations
build cloud-native applications because they are highly scalable, resilient, and flexible.
Containerization compared to cloud native
Cloud-native application development requires different tech-
nologies and approaches than conventional monolithic applications. Containerization is one of the technologies that
allows developers to build cloud-native applications. It works with other cloud-native technologies, such as service
mesh and APIs, to allow microservices to work cohesively in a cloud-native application.
11 Continuous Integration and Continuous Delivery (CI/CD)
• CI/CD, which stands for continuous integration and continuous delivery/deployment, aims to streamline and
accelerate the software development lifecycle.
• Continuous integration (CI) refers to the practice of automatically and frequently integrating code changes
into a shared source code repository. Continuous delivery and/or deployment (CD) is a two-part process that
refers to the integration, testing, and delivery of code changes. Continuous delivery stops short of automatic
production deployment, while continuous deployment automatically releases the updates into the production
environment.
• Taken together, these connected practices are often referred to as a “CI/CD pipeline” and are supported by
development and operations teams working together in an agile way with either a DevOps or site reliability
engineering (SRE) approach.

Figure 3: credit: https://www.redhat.com/en/topics/devops/what-is-ci-cd

What is Continuous Integration (CI)?


• Continuous Integration (CI) is a development practice where developers frequently integrate code into a shared
repository, multiple times a day. Each integration is automatically verified by a build process, including
automated testing, to detect and address issues early in the development cycle.
• The “CI” in CI/CD always refers to continuous integration, an automation process for developers that fa-
cilitates more frequent merging of code changes back to a shared branch, or “trunk.” As these updates are
made, automated testing steps are triggered to ensure the reliability of merged code changes.
• In modern application development, the goal is to have multiple developers working simultaneously on different
features of the same app. However, if an organization is set up to merge all branching source code together
on one day (known as “merge day”), the resulting work can be tedious, manual, and time-intensive.
That’s because when a developer working in isolation makes a change to an application, there’s a chance
it will conflict with different changes being simultaneously made by other developers. This problem can be
further compounded if each developer has customized their own local integrated development environment
(IDE), rather than the team agreeing on one cloud-based IDE.
• CI can be thought of as a solution to the problem of having too many branches of an app in development at
once that might conflict with each other.

Key Principles of CI
• Frequent Code Integration: Developers integrate code into the repository multiple times a day, ensuring
that the latest code is available for testing.
• Automated Builds: CI involves automated builds to compile and package the code for deployment.
• Automated Testing: Automated tests run with each integration to verify the correctness and quality of the code (a minimal sketch follows this list).
• Immediate Feedback: Quick feedback is provided to developers if there are any integration or testing
issues, allowing them to resolve problems quickly.
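The automated build and testing principles above usually take the form of a small test suite that the CI server runs on every push. Below is a minimal sketch written in the pytest convention (an assumed choice; any test runner works), with the application code and tests kept in one file purely for brevity.

# Single-file example: application code plus tests in the pytest convention.
# In a real project the tests would live in a separate test_*.py file
# and import the application code.
def add(a, b):
    """Application code under continuous integration."""
    return a + b

def test_add_positive_numbers():
    # The CI server runs these on every integration and fails the build
    # if any assertion does not hold.
    assert add(2, 3) == 5

def test_add_handles_negatives():
    assert add(-2, 3) == 1

Running python -m pytest in the build step executes every test_* function and fails the build on the first broken assertion, which is what gives developers the immediate feedback described above.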
Benefits of CI
• Early Detection of Bugs: CI helps identify bugs early in the development process, reducing the complexity
of fixes.
• Reduced Integration Issues: Frequent integrations ensure that integration problems are addressed early
and not delayed until the end of the development cycle.
• Improved Collaboration: CI promotes collaboration between team members by integrating and testing
code frequently.

What is Continuous Delivery (CD)?


• The “CD” in CI/CD refers to continuous delivery and/or continuous deployment, which are related concepts that sometimes get used interchangeably. Both are about automating further stages of the pipeline, but they are sometimes used separately to illustrate just how much automation is happening. The choice between continuous delivery and continuous deployment depends on the risk tolerance and specific needs of the development and operations teams.
• Continuous Delivery (CD) is a software development practice where code changes are automatically built,
tested, and prepared for a release to production. It extends CI by ensuring that every code change is deployable
and that deployments can be performed on demand with minimal effort.

• Continuous delivery usually means a developer’s changes to an application are automatically tested for bugs and uploaded to a repository (like GitHub or a container registry), where they can then be deployed to a live production environment by the operations team. It’s an answer to the problem of poor visibility and communication between development and business teams. To that end, the purpose of continuous delivery is to keep a codebase that is always ready for deployment to a production environment, and to ensure that deploying new code takes minimal effort.

What is continuous deployment?


• The final stage of a mature CI/CD pipeline is continuous deployment. Continuous deployment is an extension
of continuous delivery, and can refer to automating the release of a developer’s changes from the repository
to production, where it is usable by customers.

• Continuous deployment addresses the problem of overloading operations teams with manual processes that slow down app delivery. It builds on the benefits of continuous delivery by automating the next stage in the pipeline.
• In practice, continuous deployment means that a developer’s change to a cloud application could go live within
minutes of writing it (assuming it passes automated testing). This makes it much easier to continuously receive
and incorporate user feedback. Taken together, all of these connected CI/CD practices make the deployment
process less risky, whereby it’s easier to release changes to apps in small pieces, rather than all at once.

Key Principles of CD
• Automated Deployment Pipelines: CD involves setting up automated pipelines that build, test, and
package the application.

• Continuous Testing: Continuous testing ensures that the application is thoroughly tested at every stage
of the pipeline.
• Deployable Code: Every code change is kept in a deployable state, making it easy to release the latest
version of the application.

• Manual Approval: CD pipelines can include manual approval steps before deploying to production, allowing
for better control over releases.
Benefits of CD
• Faster Time-to-Market: CD enables rapid and reliable delivery of new features and bug fixes to customers.
• Reduced Deployment Risk: Automated testing and validation at each stage of the pipeline reduce the
risk of issues in production.
• Improved Quality and Reliability: Continuous testing ensures that the application is thoroughly validated
before release, leading to higher quality and reliability.

CI/CD Pipeline
The CI/CD pipeline is a series of automated processes that take code from version control through build, testing, and deployment stages. It typically consists of the following stages (a simplified driver-script sketch follows the list):
1. Code Commit: Developers commit code changes to the version control system (e.g., Git).
2. Build Stage: The code is automatically built and compiled into an executable format.
3. Test Stage: Automated tests are run to validate the functionality, performance, and security of the appli-
cation.
4. Deploy Stage: The application is packaged and deployed to different environments (e.g., development,
staging, production) based on the pipeline configuration.
5. Monitoring and Feedback: The deployed application is monitored for performance and errors, and feed-
back is provided to developers for continuous improvement.
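The sketch below strings these stages together as a single driver script, with fail-fast behaviour between stages. The shell commands are illustrative assumptions (pytest is assumed to be installed); real pipelines are normally defined in the CI tool’s own configuration format rather than hand-written like this.

# Minimal sketch of a build -> test -> deploy pipeline as a driver script.
import subprocess
import sys

STAGES = [
    ("build",  ["python", "-m", "compileall", "."]),  # compile/package step
    ("test",   ["python", "-m", "pytest", "-q"]),      # automated tests (pytest assumed installed)
    ("deploy", ["python", "-c", "print('deploying build artifact (placeholder)')"]),
]

def run_pipeline():
    for name, command in STAGES:
        print(f"--- stage: {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast: later stages never run on a broken build.
            print(f"stage '{name}' failed; aborting pipeline")
            sys.exit(result.returncode)
    print("pipeline finished: artifact ready for release")

if __name__ == "__main__":
    run_pipeline()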

CI/CD Tools and Platforms


Several tools and platforms are available to implement CI/CD pipelines. Some of the popular ones include:
• Jenkins: An open-source automation server that supports building, testing, and deploying code.
• GitLab CI/CD: An integrated CI/CD tool within GitLab that automates code building, testing, and
deployment.
• CircleCI: A cloud-based CI/CD platform that automates the entire pipeline from code commit to deploy-
ment.
• Travis CI: A CI/CD service for building and testing projects hosted on GitHub.
• Azure DevOps: A Microsoft service that provides development collaboration tools, including CI/CD
pipelines.

CI/CD Best Practices


• Commit Frequently: Commit code frequently to the repository to ensure that integrations are smooth and
issues are detected early.
• Automate Everything: Automate as many processes as possible in the pipeline to reduce manual errors
and save time.
• Keep Builds Fast: Optimize the pipeline to keep build and test times to a minimum, ensuring quick
feedback.
• Implement Rollback Strategies: Have rollback strategies in place to quickly revert to a previous version in case of deployment issues (a minimal sketch follows this list).
• Monitor and Measure: Continuously monitor the pipeline and application to measure performance, detect
issues, and gather feedback.
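As an illustration of the rollback practice above, the sketch below deploys a new version, runs a health check, and reverts on failure. The deploy step and the health-check URL are assumptions standing in for a real deployment mechanism.

# Minimal sketch of deploy-then-verify with automatic rollback.
import urllib.request

def deploy(version: str) -> None:
    # Stand-in for a real deployment step (e.g., pushing a new container image).
    print(f"deploying version {version} ...")

def healthy(url: str = "http://localhost:8080/health") -> bool:
    # Probe an assumed health endpoint of the newly deployed service.
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def deploy_with_rollback(new_version: str, previous_version: str) -> None:
    deploy(new_version)
    if not healthy():
        # Roll back quickly instead of debugging in production.
        print(f"health check failed; rolling back to {previous_version}")
        deploy(previous_version)

if __name__ == "__main__":
    deploy_with_rollback("v1.3.0", "v1.2.9")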
12 References
• redhat.com
• aws.amazon.com
• learn.microsoft.com
• ibm.com

• atlassian.com

ALL THE BEST FOR THE EXAM
