
WEEK-1: Introduction to Cloud Computing


Session-1(Lecture)
Building Blocks of Cloud Computing:
Basic Architecture of Computer:
A computer is an electronic machine that makes performing tasks easier. The CPU executes each
instruction provided to it in a series of steps; this series of steps is called the machine cycle
and is repeated for each instruction. One machine cycle involves fetching the instruction,
decoding it, transferring the data, and executing the instruction.

A computer system has five basic units that help it perform operations, which are given
below:

1. Input Unit

2. Output Unit

3. Storage Unit

4. Arithmetic Logic Unit

5. Control Unit

Input Unit:

The input unit connects the external environment with the internal computer system. It provides data and
instructions to the computer system. Commonly used input devices are the keyboard, mouse, magnetic
tape, etc.

The input unit performs the following tasks:

• Accepts data and instructions from the outside environment.

• Converts them into machine language.


• Supplies the converted data to the computer system.

Output Unit:
It connects the internal system of the computer to the external environment and provides the
results of computations to the outside world. Common output devices are printers, monitors, etc.
Storage Unit:
This unit holds the data and instructions. It also stores the intermediate results before these are
sent to the output devices. It also stores the data for later use.
The storage unit of a computer system can be divided into two categories:
• Primary Storage: This memory stores the data that is currently being processed. It is
used for temporary storage of data; the data is lost when the computer is switched off.
RAM is used as primary storage memory.
• Secondary Storage: The secondary memory is slower and cheaper than primary memory. It is
used for permanent storage of data. Commonly used secondary memory devices are hard disk,
CD etc.
Arithmetic Logical Unit:
All the calculations are performed in ALU of the computer system. The ALU can perform basic
operations such as addition, subtraction, division, multiplication etc. Whenever calculations are
required, the control unit transfers the data from storage unit to ALU. When the operations are done,
the result is transferred back to the storage unit.
Control Unit:
It controls all other units of the computer and directs the flow of data and instructions
between the storage unit and the ALU. It is therefore also known as the central nervous system of the computer.
CPU:
It is Central Processing Unit of the computer. The control unit and ALU are together known as
CPU. CPU is the brain of computer system. It performs following tasks:
• It performs all operations.
• It takes all decisions.
• It controls all the units of computer.

Servers vs Desktop and laptops:


What is the difference between a Server and a Desktop or Laptop?

 A desktop is a personal computer intended for individual use, while a server is a dedicated
computer that runs a software service that can be accessed by other computers on the network.


 Servers are normally built from more powerful components, such as faster CPUs, higher-performing
RAM, and larger hard disks than desktop computers, since they need to satisfy a large number of
requests at a given time.

 Furthermore, servers run special server-oriented operating systems that are capable of maintaining backups
and providing improved security, while desktop operating systems normally do not offer these
services, or offer only simplified versions of them.

Client-Server Computing:

In client-server computing, the client requests a resource and the server provides that resource.
A server may serve multiple clients at the same time, while a client is in contact with only one server.
The client and server usually communicate over a computer network, but they may also reside on the
same system.

Characteristics of Client Server Computing:


The salient points of client-server computing are as follows:
 Client-server computing works on a request-and-response model. The client sends a
request to the server and the server responds with the desired information.
 The client and server should follow a common communication protocol so that they can easily
interact with each other. These communication protocols operate at the application layer.
 A server can only accommodate a limited number of client requests at a time, so it uses a
priority-based system to respond to the requests.
 Denial of Service (DoS) attacks hinder a server's ability to respond to authentic client requests by
inundating it with false requests.
 An example of a client server computing system is a web server. It returns the web pages to the
clients that requested them.
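
The request/response pattern described above can be sketched in a few lines of Python using the standard socket module. This is only an illustrative example; the port number and messages are arbitrary and not part of any real system.

import socket

def run_server(host="127.0.0.1", port=5000):
    # Server: wait for one client request and send back a response.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen()
        conn, addr = srv.accept()          # block until a client connects
        with conn:
            request = conn.recv(1024)      # read the client's request
            conn.sendall(b"response for: " + request)

def run_client(host="127.0.0.1", port=5000):
    # Client: send a request and print the server's response.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        cli.sendall(b"GET /index.html")    # the request
        print(cli.recv(1024).decode())     # the response

Running run_server() in one process and run_client() in another serves a single request; a real server would loop so it could accept many clients.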

Hard Drives - HDDs and SSDs:


What is a Solid State Drive (SSD)?

A Solid State Drive (SSD) is a non-volatile storage device that stores and retrieves data on
solid-state flash memory. The data is stored on interconnected flash memory chips rather than spinning
platters, which makes SSDs faster than HDDs and gives them better overall performance.

What is a Hard Disk Drive (HDD)?


An HDD uses magnetism to store data on a rotating platter. It has a read/write head
that floats above the spinning platter for reading and writing data. The faster the platter spins,
the quicker the HDD can perform. An HDD also contains an I/O controller and firmware, which tell the
hardware what to do and communicate with the rest of the system. The full form of HDD is Hard
Disk Drive.

Key Difference between SSD and HDD:

 SSD is faster at reading and writing data, whereas HDD reads and writes data more slowly.
 SSD has lower latency, whereas HDD has higher latency.
 SSD supports more I/O operations per second (IOPS), while HDD supports fewer I/O
operations per second (IOPS).
 SSD does not produce noise; HDD, on the other hand, can produce noise due to its
mechanical movements.
 The moving parts of HDDs make them vulnerable to crashes and damage, but SSDs can
tolerate vibration up to 2000 Hz.
 SSD stands for Solid State Drive, whereas HDD stands for Hard Disk Drive.

Storage - block vs file vs object:


Block storage, object storage, and file storage are the three primary architectures used to build custom
data storage solutions; they determine how data is processed, stored, organized, and retrieved. Each
storage type has unique capabilities and limitations, which means enterprise data storage systems are
not "one size fits all" solutions.

What is Block Storage?


Block storage, also known as block-level storage or elastic block storage, is a sequence of
data bytes that contains a number of whole records with a maximum length (a block size). The
process of storing data into blocks is called blocking, and the process of retrieving data from blocks is
called deblocking. Blocked data is generally stored in a data buffer and read or written one block at a
time, which reduces overhead and speeds up handling of the data stream. One of the most notable
advantages of block storage is the ability to efficiently access and retrieve structured data from a database.
What is Object Storage?
Object storage, also called object-based storage, is an architecture that manages data as
objects, a key difference when compared with a storage architecture like a file system. Object storage
can work well for unstructured data in which data is written once and read once (or many times).
Static online content, data backups, image archives, videos, pictures, and music files can be stored as
objects.
What is File Storage?
File storage, also referred to as file-based storage (FBS) or a file system, is a format or
platform used to store and manage data in a hierarchical tree structure (a file hierarchy), where
files are identified within a directory structure.
File systems store data as a set of individual file paths, which are strings of characters used to
uniquely identify the file in a directory structure. These unique identifiers include the file name,
extension, and its path and are how a file system controls the storage, retrieval, and graphical display
of the data for a user.
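
A short, hedged sketch of how file storage and object storage are addressed differently, written in Python. The local path and the S3 bucket name "example-bucket" are hypothetical placeholders, and the object-storage half assumes the boto3 SDK with valid AWS credentials.

import boto3

# File storage: data is addressed by its path in a directory hierarchy.
with open("summary.txt", "w") as f:
    f.write("quarterly summary")

# Object storage: data is addressed by a flat key inside a bucket,
# typically through an HTTP API such as Amazon S3.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket",
              Key="reports/2023/summary.txt",
              Body=b"quarterly summary")
obj = s3.get_object(Bucket="example-bucket", Key="reports/2023/summary.txt")
print(obj["Body"].read())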

WEEK-1: Introduction to Cloud Computing


Session-2(Lecture)
Building Blocks of Cloud Computing:
IP addressing:
What is an IP Address?
 An IP address represents a unique address that distinguishes any device on the internet or any
network from another.
 An IP address is a 32-bit number, such as 11000000101010000000000100000001 in binary or
3232235777 in decimal. It is usually written in four parts (octets), like


11000000.10101000.00000001.00000001 in binary form, or 192.168.1.1 in dotted-decimal form, which is
easier to read.
 All the computers in the world on the Internet communicate with each other through
underground or undersea cables, or wirelessly.
 If I want to download a file from the internet or load a web page or literally do anything related to
the internet, my computer must have an address so that other computers can find and locate
mine in order to deliver that particular file or webpage that I am requesting.
 In technical terms, that address is called IP Address or Internet Protocol Address.
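
The 192.168.1.1 example above can be reproduced with Python's standard ipaddress module; this is only an illustrative sketch of the binary/decimal relationship.

import ipaddress

ip = ipaddress.IPv4Address("192.168.1.1")
as_int = int(ip)                               # 3232235777
as_bin = format(as_int, "032b")                # 32-bit binary string
dotted_bin = ".".join(as_bin[i:i + 8] for i in range(0, 32, 8))

print(as_int)        # 3232235777
print(dotted_bin)    # 11000000.10101000.00000001.00000001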

Networking - Routers and Switches:


Both routers and switches are network connecting devices. Routers work at the network layer and
are responsible for finding the shortest path for a packet across the network, whereas switches connect
various devices within a network. Routers connect devices across multiple networks.

What is a Switch?
 A switch is a networking device, which provides the facility to share the information & resources by
connecting different network devices, such as computers, printers, and servers, within a small
business network.

What is a Router?
 A router is a networking device used to connect multiple switches and their corresponding networks
to build a large network. These switches and their corresponding networks may be in a single
location or in different locations.
 It works on the network layer and routes data packets along the shortest path across the
network.

Difference between Switch and Router:

 A switch connects multiple networked devices within a network, whereas a router connects multiple switches and their corresponding networks.
 A switch works on the data link layer of the OSI model, whereas a router works on the network layer of the OSI model.
 A switch is used within a LAN, whereas a router can be used in a LAN or MAN.
 A switch cannot perform Network Address Translation (NAT), whereas a router can.
 A switch takes more time while making complicated routing decisions, whereas a router can take a routing decision much faster than a switch.
 A switch provides only port security, whereas a router provides security measures to protect the network from security threats.
 A switch comes in the category of semi-intelligent devices, whereas a router is known as an intelligent network device.
 A switch works in either half- or full-duplex transmission mode, whereas a router works in full-duplex mode (it can be changed manually to half-duplex).
 A switch sends information from one device to another in the form of frames (for an L2 switch) or packets (for an L3 switch), whereas a router sends information from one network to another in the form of data packets.
 Switches can only work with wired networks, whereas routers can work with both wired and wireless networks.
 Switches are available with different numbers of ports, such as 8, 16, 24, 48, and 64, whereas a router contains two ports by default (such as Fast Ethernet ports), though serial ports can be added explicitly.

Networking – Firewalls:
 Firewalls prevent unauthorized access to networks through software or firmware. Using a set
of rules, the firewall examines incoming and outgoing traffic and blocks traffic that violates those rules.
 Fencing your property protects your house and keeps trespassers at bay; similarly, firewalls are
used to secure a computer network.


 Firewalls are network security systems that prevent unauthorized access to a network.
 It can be a hardware or software unit that filters the incoming and outgoing traffic within a private
network, according to a set of rules to spot and prevent cyberattacks.
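
A minimal sketch of the rule-based filtering idea described above, written in Python. The rules and packet fields are purely illustrative and do not correspond to any real firewall product or API.

RULES = [
    {"action": "allow", "port": 443, "protocol": "tcp"},    # HTTPS
    {"action": "allow", "port": 22,  "protocol": "tcp"},    # SSH
    {"action": "deny",  "port": None, "protocol": None},    # default: block everything else
]

def filter_packet(packet):
    # Return 'allow' or 'deny' for a packet, using first-match semantics.
    for rule in RULES:
        port_ok = rule["port"] is None or rule["port"] == packet["port"]
        proto_ok = rule["protocol"] is None or rule["protocol"] == packet["protocol"]
        if port_ok and proto_ok:
            return rule["action"]
    return "deny"

print(filter_packet({"port": 443, "protocol": "tcp"}))   # allow
print(filter_packet({"port": 23,  "protocol": "tcp"}))   # deny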

Databases:
 A cloud database is an organized and managed collection of data in an IT system that resides on a
public, private or hybrid cloud computing platform.
 From an overall design and functionality perspective, a cloud database is no different than an on-
premises one that runs on an organization's own data center systems.
 The biggest difference between them lies in how the database is deployed and managed.

How cloud databases work


 In businesses, databases are used to collect, organize and deliver data to executives and workers
for operational and analytics applications.
 In general, cloud databases provide the same data processing, management and access
capabilities as on-premises ones.
 Existing on-premises databases usually can be migrated to the cloud, along with the applications
they support.
 Instead of traditional software licenses, pricing is based on the use of system resources, which can
be provisioned on demand as needed to meet processing workloads.
 Alternatively, users can reserve database instances -- typically for at least a year -- to get
discounted pricing on regular workloads with consistent capacity requirements.

Server virtualization:
What is Server Virtualization in Cloud Computing?
Server virtualization is the process of dividing a physical server into several individual, isolated
virtual servers by means of a software application. Every virtual server can run its own operating system
independently.
Why Server Virtualization?

 Server Virtualization is one of the most cost-effective methods to offer Web hosting services and
uses the existing resources effectively in IT Infrastructure.
 Without server virtualization, servers use only a small portion of their processing
power. This results in idle servers because the workload is concentrated on one portion of the network's
servers.
 Data centers have become overcrowded with underutilized servers, resulting in wasted resources and
heavy power consumption.


 By dividing every physical server into multiple virtual servers, server virtualization allows
each virtual server to behave as a unique device.
 Every virtual server is capable of running its own applications and operating system.
 This process increases resource utilization by making each virtual server behave like a
physical server, and it increases the capacity of every physical device.
Key Benefits of Server Virtualization:
 Server virtualization provides higher server capability.
 Organizations experience lower operational costs.
 It reduces the complexity of the server.
 It helps improve application performance.

Application Programming Interfaces (API):


 A Cloud API is a software interface that allows developers to link cloud computing services
together. Application programming interfaces (APIs) allow one computer program to make its data
and functionality available for other programs to use.
 Developers use APIs to connect software components across a network.
 Cloud APIs are often categorized as being vendor-specific or cross-platform.
 Vendor-specific cloud APIs are written to support the cloud services of one specific provider, while
cross-platform APIs allow developers to connect functionalities from two or more cloud providers.
Cloud APIs are often categorized by type:
 PaaS APIs: Platform as a Service APIs provide access to back-end services such as databases.
 SaaS APIs: Software as a Service APIs facilitate connections between cloud services at the
application layer.
 IaaS APIs: Infrastructure as a Service APIs enable cloud-based compute and storage resources to
be provisioned and de-provisioned as quickly as possible.
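
As a concrete illustration of a vendor-specific cloud API, the sketch below uses the AWS SDK for Python (boto3) to make an IaaS-style compute call and a storage call. It assumes boto3 is installed and AWS credentials are already configured; the region is an arbitrary example and the output depends on the account.

import boto3

# IaaS-style API call: list the EC2 virtual machines in one region.
ec2 = boto3.client("ec2", region_name="us-east-1")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])

# Storage API call: list the S3 buckets owned by the account.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])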

WEEK-1: Introduction to Cloud Computing


Session-4(Lecture)
Cloud Deployment Models:

Deployment Models
The cloud deployment model identifies the specific type of cloud environment based on ownership, scale, and
access, as well as the cloud’s nature and purpose. The location of the servers you’re utilizing and who controls
them are defined by a cloud deployment model.

Different types of cloud computing deployment models are:


1. Public cloud
2. Private cloud
3. Hybrid cloud
4. Community cloud

Public cloud :
 It is accessible to the public. Public deployment models in the cloud are perfect for organizations with
growing and fluctuating demands.
 It also makes a great choice for companies with low security concerns. You pay a cloud service
provider for networking, compute virtualization, and storage services available on the public internet.
 It is also a great delivery model for teams doing development and testing.
 Its configuration and deployment are quick and easy, making it an ideal choice for test environments.
Advantages of Public Cloud Model:
 Minimal Investment: Because it is a pay-per-use service, there is no substantial upfront fee, making it excellent for enterprises that require immediate access to resources.
 No setup cost: The entire infrastructure is fully provided by the cloud service providers, so there is no need to set up any hardware.
 Infrastructure management is not required: Using the public cloud does not necessitate infrastructure management.
 No maintenance: The maintenance work is done by the service provider (not the users).
 Dynamic Scalability: On-demand resources are accessible to fulfil your company's needs.

Disadvantages of Public Cloud Model:
 Less secure: The public cloud is less secure because its resources are shared publicly, so there is no guarantee of high-level security.
 Low customization: It is accessed by many users, so it cannot be customized according to individual requirements.

Private Cloud:
 The private cloud deployment model is the exact opposite of the public cloud deployment model. It’s a
one-on-one environment for a single user (customer).
 There is no need to share your hardware with anyone else. The distinction between private and public
clouds is in how you handle all of the hardware.


 It is also called the "internal cloud"; it refers to the ability to access systems and services within a
given boundary or organization.
 The cloud platform is implemented in a secure cloud-based environment that is protected by powerful
firewalls and supervised by an organization's IT department.
 The private cloud gives greater flexibility and control over cloud resources.
Advantages of Private Cloud Model:
 Better Control: You are the sole owner of the infrastructure. You gain complete command over service integration, IT operations, policies, and user behavior.
 Data Security and Privacy: It is suitable for storing corporate information to which only authorized staff have access. By segmenting resources within the same infrastructure, improved access control and security can be achieved.
 Supports Legacy Systems: This approach is designed to work with legacy systems that are unable to access the public cloud.
 Customization: Unlike a public cloud deployment, a private cloud allows a company to tailor its solution to meet its specific needs.

Disadvantages of Private Cloud Model:
 Less scalable: Private clouds can be scaled only within a certain range, as there are fewer clients.
 Costly: Private clouds are more costly as they provide personalized facilities.

Hybrid Cloud:
 By bridging the public and private worlds with a layer of proprietary software, hybrid cloud computing
gives the best of both worlds.
 With a hybrid solution, you may host the app in a safe environment while taking advantage of the
public cloud’s cost savings.
 Organizations can move data and applications between different clouds using a combination of two or
more cloud deployment methods, depending on their needs.
Advantages of Hybrid Cloud Model:
 Flexibility and control: Businesses have more flexibility to design personalized solutions that meet their particular needs.
 Cost: Because public clouds provide scalability, you only pay for extra capacity when you require it.
 Security: Because data is properly separated, the chances of data theft by attackers are considerably reduced.

Disadvantages of Hybrid Cloud Model:
 Difficult to manage: A hybrid cloud is difficult to manage because it combines public and private clouds, so the setup is complex.
 Slow data transmission: Data transmission in the hybrid cloud takes place through the public cloud, so latency can occur.

Community Cloud
 It allows systems and services to be accessible by a group of organizations.
 It is a distributed system that is created by integrating the services of different clouds to address the
specific needs of a community, industry, or business.
 The infrastructure of the community cloud may be shared between organizations that have shared
concerns or tasks.
 It is generally managed by a third party or by the combination of one or more organizations in the
community.
Advantages of Community Cloud Model:
 Cost Effective: It is cost-effective because the cloud is shared by multiple organizations or communities.
 Security: A community cloud provides better security.
 Shared resources: It allows you to share resources, infrastructure, etc. with multiple organizations.
 Collaboration and data sharing: It is suitable for both collaboration and data sharing.

Disadvantages of Community Cloud Model:
 Limited Scalability: A community cloud is relatively less scalable, as many organizations share the same resources according to their collaborative interests.
 Rigid in customization: Because data and resources are shared among different organizations according to their mutual interests, if one organization wants changes according to its needs it cannot make them, because this would have an impact on the other organizations.

Cloud Computing Service Models:


There are the following three types of cloud service models -
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)


Infrastructure as a Service (IaaS):


 IaaS is also known as Hardware as a Service (HaaS).
 It is a computing infrastructure managed over the internet.
 The main advantage of using IaaS is that it helps users to avoid the cost and complexity of purchasing
and managing the physical servers.
Characteristics of IaaS:
There are the following characteristics of IaaS -
 Resources are available as a service

 Services are highly scalable

 Dynamic and flexible

 GUI and API-based access

 Automated administrative tasks

Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google Compute Engine (GCE),
Rackspace, and Cisco Metacloud.

Platform as a Service (PaaS):


 PaaS cloud computing platform is created for the programmer to develop, test, run, and manage the
applications.
Characteristics of PaaS:
There are the following characteristics of PaaS -
 Accessible to various users via the same development application.

 Integrates with web services and databases.


 Builds on virtualization technology, so resources can easily be scaled up or down as per the
organization's need.
 Supports multiple languages and frameworks.

 Provides an ability to "Auto-scale".

Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine, Apache Stratos,
Magento Commerce Cloud, and OpenShift.

Software as a Service (SaaS):


 SaaS is also known as "on-demand software".
 It is software in which the applications are hosted by a cloud service provider.
 Users can access these applications with the help of an internet connection and a web browser.
Characteristics of SaaS:
There are the following characteristics of SaaS -
 Managed from a central location

 Hosted on a remote server

 Accessible over the internet

 Users are not responsible for hardware and software updates. Updates are applied automatically.
 The services are purchased on the pay-as-per-use basis

Example: BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco WebEx, Slack, and
GoToMeeting.

WEEK-1: Introduction to Cloud Computing


Session-5(Lecture)
Cloud Architecture:
Introduction:
Cloud computing architecture is a combination of service-oriented architecture and event-driven
architecture.
Cloud computing architecture is divided into the following two parts -
 Front End
 Back End


Front End
The front end is used by the client. It contains the client-side interfaces and applications that are required to access
the cloud computing platforms. The front end includes web browsers (such as Chrome, Firefox, Internet
Explorer, etc.), thin and fat clients, tablets, and mobile devices.
Back End
The back end is used by the service provider. It manages all the resources that are required to provide cloud
computing services. It includes a huge amount of data storage, security mechanism, virtual machines,
deploying models, servers, traffic control mechanisms, etc.
Components of Cloud Computing Architecture
There are the following components of cloud computing architecture -
1. Client Infrastructure
Client Infrastructure is a Front end component. It provides GUI (Graphical User Interface) to interact with the
cloud.
2. Application
The application may be any software or platform that a client wants to access.
3. Service
A cloud service manages which type of service you access, according to the client's requirement.
Cloud computing offers the following three types of services:
i. Software as a Service (SaaS) Example: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.
ii. Platform as a Service (PaaS) Example: Windows Azure, Force.com, Magento Commerce Cloud,
OpenShift.


iii. Infrastructure as a Service (IaaS) Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE),
Cisco Metapod.
4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual machines.
5. Storage
Storage is one of the most important components of cloud computing. It provides a huge amount of storage
capacity in the cloud to store and manage data.
6. Infrastructure
It provides services on the host level, application level, and network level. Cloud infrastructure includes
hardware and software components such as servers, storage, network devices, virtualization software, and
other storage resources that are needed to support the cloud computing model.
7. Management
Management is used to manage components such as application, service, runtime cloud, storage,
infrastructure, and other security issues in the backend and establish coordination between them.

Stateful Vs Stateless Service:


 Stateful Service:
In a stateful service, if the client sends a request to the server it expects some kind of response; if it
does not get a response, it resends the request.

 FTP (File Transfer Protocol) and Telnet are examples of stateful protocols.

Salient features of Stateful Service:

 A stateful service provides better performance to the client by keeping track of connection
information.
 Stateful applications require backing storage.
 Stateful requests are always dependent on server-side state.
 A TCP session follows a stateful protocol because both systems maintain information about the session
itself during its life.

 Stateless Service:
Stateless services are network protocols in which the client sends a request to the server and the
server responds based only on the current request.

 They do not require the server to retain session information or status about each communicating
partner across multiple requests.


 HTTP (Hypertext Transfer Protocol), UDP (User Datagram Protocol), and DNS (Domain Name System) are
examples of stateless protocols.
Salient features of Stateless Service:
 Stateless services simplify the design of the server.
 A stateless service requires fewer resources because the system does not need to keep track of multiple
link communications and session details.
 In a stateless service, each information packet travels on its own without reference to any other packet.
 Each communication in a stateless protocol is discrete and unrelated to those that precede or follow it.
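
The contrast can be sketched in a few lines of Python. The request format and the in-memory session store are illustrative only; they stand in for whatever protocol and backing storage a real service would use.

# Stateless: every request carries all the information the server needs,
# so any server instance can answer it without remembering past requests.
def handle_stateless(request):
    return "hello, " + request["user"]

# Stateful: the server keeps per-client session data between requests,
# so later requests depend on earlier ones (as in an FTP or TCP session).
sessions = {}

def handle_stateful(client_id, request):
    session = sessions.setdefault(client_id, {"count": 0})
    session["count"] += 1                  # server-side state grows over time
    return "request number %d from %s" % (session["count"], request["user"])

print(handle_stateless({"user": "asha"}))
print(handle_stateful("client-1", {"user": "asha"}))
print(handle_stateful("client-1", {"user": "asha"}))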

Scaling up Vs Scaling out:


 Scaling up (vertical scaling) and scaling out (horizontal scaling) are key methods organizations use to
add capacity to their infrastructure.
 To an end user, these two concepts may seem to perform the same function. However, they each
handle specific needs and solve specific capacity issues for the system’s infrastructure in different ways.
 Scaling up is adding further resources, like hard drives and memory, to increase the computing capacity
of physical servers.
 Whereas scaling out is adding more servers to your architecture to spread the workload across more
machines.
Scaling Up:
 Scaling up storage infrastructure aims to add resources supporting an application to improve or
maintain ample performance.
 Both virtual and hardware resources can be scaled up. In the context of hardware, it may be as
straightforward as using a larger hard drive to greatly increase storage capacity.
 Note, though, scaling up does not necessarily require changes to your system architecture.
 Scaling up infrastructure is viable only until individual components can no longer be scaled,
making this a rather short-term solution.
When to Scale Up Infrastructure
 When there's a performance impact: A good indicator of when to scale up is when your workloads start reaching performance limits, resulting in increased latency and performance bottlenecks caused by I/O and CPU capacity.
 When storage optimization does not work: Whenever the effectiveness of optimization solutions for performance and capacity diminishes, it may be time to scale up.


Scaling Out:
 Scale-out infrastructure replaces hardware to scale functionality, performance, and capacity.


 Scaling out addresses some of the limitations of scale-up infrastructure, as it is generally more efficient
and effective.
 Furthermore, scaling out using the cloud ensures you do not have to buy new hardware whenever you
want to upgrade your system.
 While scaling out allows you to replicate resources or services, one of its key differentiators is fluid
resource scaling. This allows you to respond to varying demand quickly and effectively.
When to Scale Out Infrastructure
 When you need a long-term scaling strategy: The incremental nature of scaling out allows you to scale your infrastructure for expected, long-term data growth. Components can be added or removed depending on your goals.
 When upgrades need to be flexible: Scaling out avoids the limitations of depreciating technology, as well as vendor lock-in for specific hardware technologies.
 When storage workloads need to be distributed: Scaling out is perfect for use cases that require workloads to be distributed across several storage nodes.


Scale Up or Scale Out?
Notably, both scale-up and scale-out approaches have different purposes in data center infrastructures.
However, the right approach for your business depends on factors such as current performance, cost-
effectiveness, and your challenges, goals, and use case.

Load Balancing:
 Load balancing is the method of distributing work evenly across different devices or pieces of
hardware. Typically, the load is balanced between different servers, or between the CPUs and hard
drives of a single cloud server.
 Load balancing was introduced for various reasons. One of them is to improve the speed and
performance of each individual device, and the other is to protect individual devices from hitting their
limits, which would degrade their performance.
 Cloud load balancing is defined as dividing workload and computing properties in cloud computing. It
enables enterprises to manage workload demands or application demands by distributing resources
among multiple computers, networks or servers. Cloud load balancing involves managing the
movement of workload traffic and demands over the Internet.
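
A minimal sketch of the round-robin strategy that many load balancers use to spread requests across servers, written in Python. The server names are placeholders; a real cloud load balancer would also perform health checks and weighting.

import itertools

servers = ["server-a", "server-b", "server-c"]
rotation = itertools.cycle(servers)        # endless round-robin iterator

def route(request_id):
    # Assign the next request to the next server in the rotation.
    target = next(rotation)
    return "request %s -> %s" % (request_id, target)

for i in range(6):
    print(route(i))    # requests 0..5 are spread evenly over the three servers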

WEEK-1: Introduction to Cloud Computing


Session-6(Lecture)


Monolithic and Microservices architecture:

 Monolithic architecture is built as one large system, usually with one code-base. A monolithic
application is tightly coupled and becomes entangled as the application evolves, making it difficult to isolate
services for purposes such as independent scaling or code maintainability.
 It is extremely difficult to change the technology, language, or framework because everything is tightly
coupled and dependent on everything else.
 Microservices architecture is built as small, independent modules based on business functionality. In
a microservices application, each project and service is independent of the others at the code
level. Therefore it is easy to configure and deploy, and also easy to scale based on demand.

Comparison of Monolithic and Microservices architecture:

1. Basic: Monolithic architecture is built as one large system, usually with one code-base. Microservices architecture is built as small independent modules based on business functionality.

2. Scale: A monolith is not easy to scale based on demand. Microservices are easy to scale based on demand.

3. Database: A monolith has a single shared database. In microservices, each project and module has its own database.

4. Deployment: In a monolith, the large code base makes the IDE slow and increases build time. In microservices, each project is independent and small in size, so overall build and development time decreases.

5. Tight vs loose coupling: In a monolith it is extremely difficult to change the technology, language, or framework because everything is tightly coupled and interdependent. In microservices it is easy to change the technology or framework because every module and project is independent.

Cloud Service providers:


 Cloud Service Providers (CSPs) offer various services such as Software as a Service, Platform as a
Service, Infrastructure as a Service, network services, business applications, mobile applications,
and infrastructure in the cloud.
 The cloud service providers host these services in a data center, and users can access these services
through cloud provider companies using an Internet connection.
There are the following Cloud Service Providers Companies -
1. Amazon Web Services (AWS):
AWS (Amazon Web Services) is a secure cloud service platform provided by Amazon. It offers various services
such as database storage, computing power, content delivery, Relational Database, Simple Email, Simple
Queue, and other functionality to increase the organization's growth.

Features of AWS
AWS provides various powerful features for building scalable, cost-effective enterprise applications. Some
important features of AWS are given below-

 AWS is scalable because it has an ability to scale the computing resources up or down according to the
organization's demand.
 AWS is cost-effective as it works on a pay-as-you-go pricing model.

 It provides various flexible storage options.

 It offers various security services such as infrastructure security, data encryption, monitoring &
logging, identity & access control, penetration testing, and DDoS attacks.
 It can efficiently manage and secure Windows workloads.
2. Microsoft Azure


Microsoft Azure is also known as Windows Azure. It supports various operating systems, databases,
programming languages, frameworks that allow IT professionals to easily build, deploy, and manage
applications through a worldwide network. It also allows users to create different groups for related utilities.

Features of Microsoft Azure


 Microsoft Azure provides scalable, flexible, and cost-effective services.
 It allows developers to quickly manage applications and websites.
 It manages each resource individually.

 Its IaaS infrastructure allows us to launch a general-purpose virtual machine in different platforms such
as Windows and Linux.
 It offers a Content Delivery System (CDS) for delivering images, videos, audio, and applications.

3. Google Cloud Platform


Google cloud platform is a product of Google. It consists of a set of physical devices, such as computers, hard
disk drives, and virtual machines. It also helps organizations to simplify the migration process.

Features of Google Cloud


 Google cloud includes various big data services such as Google BigQuery, Google CloudDataproc,
Google CloudDatalab, and Google Cloud Pub/Sub.
 It provides various services related to networking, including Google Virtual Private Cloud (VPC),
Content Delivery Network, Google Cloud Load Balancing, Google Cloud Interconnect, and Google Cloud
DNS.


 It offers various scalable and high-performance services.
 GCP provides various serverless services such as Messaging, Data Warehouse, Database,
Compute, Storage, Data Processing, and Machine Learning (ML).
 It provides a free cloud shell environment with Boost Mode.

WEEK-1: Introduction to Cloud Computing


Session-7(Lecture)
AWS Cloud Overview:
What is AWS?
 The Amazon Web Services (AWS) platform provides more than 200 fully featured services from data
centers located all over the world, and is the world's most comprehensive cloud platform.
 Amazon web service is an online platform that provides scalable and cost-effective cloud computing
solutions.
 AWS is a broadly adopted cloud platform that offers several on-demand operations like compute
power, database storage, content delivery, etc., to help corporates scale and grow.
Advantages of AWS:
 AWS provides a user-friendly programming model, architecture, database, and operating system
that users are already familiar with.
 AWS is a very cost-effective service. There is no such thing as long-term commitments for anything you
would like to purchase.
 It offers billing and management for the centralized sector, hybrid computing, and fast installation or
removal of your application in any location with few clicks.
 With AWS, there is no need to spend extra money on running your own data servers.
 AWS offers a total cost of ownership at very reasonable rates in comparison to other private cloud
servers.
Disadvantages of AWS:
 AWS offers paid support packages for intensive or immediate response, so users might need to pay
extra money for support.
 There might be some cloud computing problems in AWS especially when you move to a cloud Server
such as backup protection, downtime, and some limited control.
 From region to region, AWS sets some default limitations on resources such as volumes, images, or
snapshots.


 If there is a sudden change in your hardware system, the application on the cloud might not offer great
performance.

Applications of AWS:
The most common applications of AWS are storage and backup, websites, gaming, mobile, web, and social
media applications. Some of the most crucial applications in detail are as follows:
1. Storage and Backup
One of the reasons why many businesses use AWS is because it offers multiple types of storage to choose
from and is easily accessible as well. It can be used for storage and file indexing as well as to run critical
business applications.
2. Websites
Businesses can host their websites on the AWS cloud, similar to other web applications.

3. Gaming
There is a lot of computing power needed to run gaming applications. AWS makes it easier to provide the
best online gaming experience to gamers across the world.
4. Mobile, Web and Social Applications
A feature that separates AWS from other cloud services is its capability to launch and scale mobile, e-
commerce, and SaaS applications. API-driven code on AWS can enable companies to build uncompromisingly
scalable applications without requiring any OS and other systems.
5. Big Data Management and Analytics (Application)
 Amazon Elastic MapReduce (EMR) to process large amounts of data via the Hadoop framework.

 Amazon Kinesis to analyze and process the streaming data.

 AWS Glue to handle extract, transform, and load (ETL) jobs.


AWS Services
Amazon has many services for cloud applications. Let us list down a few key services of the AWS ecosystem
and a brief description of how developers use them in their business.
Amazon has a list of services:

 Compute service

 Storage

 Database

 Networking and delivery of content

 Security tools

 Developer tools

 Management tools


AWS Cloud Shell:


 AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS
Management Console. You can run AWS CLI commands against AWS services using your preferred shell
(Bash, PowerShell, or Z shell). And you can do this without needing to download or install command line
tools.
 When you launch AWS CloudShell, a compute environment based on Amazon Linux 2 is created.
Within this environment, you have access to an extensive range of pre-installed development tools, options
for uploading and downloading files, and file storage that persists between sessions.
 When you launch AWS CloudShell, a compute environment is created that has the following AWS
command line tools already installed:
 AWS CLI
 AWS Elastic Beanstalk CLI
 Amazon ECS CLI
 AWS SAM

AWS CloudShell features:


 AWS Command Line Interface:
 You launch AWS CloudShell from the AWS Management Console, and the AWS credentials you used to
sign in to the console are automatically available in a new shell session.
 Shells and development tools:
 With the shell that's created for AWS CloudShell sessions, you can switch seamlessly between your
preferred command-line shells. More specifically, you can switch between Bash, PowerShell, and Z
shell. You also have access to pre-installed tools and utilities such as git, make, pip, sudo, tar, tmux,
vim, wget, and zip.
 Persistent storage:
 When using AWS CloudShell you have persistent storage of 1 GB for each AWS Region at no additional
cost. The persistent storage is located in your home directory ($HOME) and is private to you.
 Security:
 The AWS CloudShell environment and its users are protected by specific security features such as IAM
permissions management, shell session restrictions, and Safe Paste for text input.
Creating a key pair:
 Access keys consist of an access key ID and secret access key, which are used to sign programmatic
requests that you make to AWS.
 If you don't have access keys, you can create them from the AWS
Management Console. As a best practice, do not use the AWS account root user access keys for any task
where it's not required. Instead, create a new administrator IAM user with access keys for yourself.
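
As a hedged sketch, access keys can also be created programmatically with the boto3 SDK. The user name "admin-user" below is a hypothetical placeholder, and the caller's own credentials must already be allowed to manage IAM.

import boto3

iam = boto3.client("iam")
response = iam.create_access_key(UserName="admin-user")

access_key = response["AccessKey"]
print(access_key["AccessKeyId"])           # the public identifier
print(access_key["SecretAccessKey"])       # shown only once; store it securely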

SESSION-3
AWS IAM Multi-factor authentication (MFA) Overview
 AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on
top of your user name and password.
 MFA needs to be enabled on the Root user and IAM user separately as they are distinct entities.
Why is MFA Important?


 The main benefit of MFA is it will enhance your organization's security by requiring your users to
identify themselves by more than a username and password. While important, usernames and
passwords are vulnerable to brute force attacks and can be stolen by third parties.
 Enforcing the use of an MFA factor like a thumbprint or physical hardware key means increased
confidence that your organization will stay safe from cyber criminals.

Available MFA methods for IAM


You can manage your MFA devices in the IAM console. The following options are the MFA methods that IAM
supports.
FIDO security key
A device that you plug into a USB port on your computer. FIDO2 is an open authentication standard hosted by
the FIDO Alliance. When you enable a FIDO2 security key, you sign in by entering your credentials and then
tapping the device instead of manually entering a code.
Virtual MFA devices
A software app that runs on a phone or other device and emulates a physical device. The device generates a
six-digit numeric code based upon a time-synchronized one-time password algorithm. The user must type a
valid code from the device on a second webpage during sign-in. A user cannot type a code from another user's
virtual MFA device to authenticate.
Hardware MFA device
A hardware device that generates a six-digit numeric code based upon a time-synchronized one-time password
algorithm. The user must type a valid code from the device on a second webpage during sign-in.
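
The six-digit codes mentioned above come from the time-synchronized one-time password (TOTP) scheme standardized in RFC 6238. The sketch below shows the idea in Python using only the standard library; the base32 secret is an illustrative placeholder, not a real MFA seed.

import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval           # time step shared by device and server
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # both sides derive the same code from the shared secret and the time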

SESSION-4
IAM Roles
 "An IAM role is an IAM identity that you can create in your account that has specific permissions." It is not
uniquely associated with a single person; it can be used by anyone who needs it.
 Roles and users are both AWS identities with permissions policies that determine what the identity can and
cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be
assumable by anyone who needs it. Also, a role does not have standard long-term credentials such as a
password or access keys associated with it. For example, you can create S3Admin role and assign it to your
EC2 instance. This will enable that EC2 instance to manage S3 resources.
 An IAM User can use a role in the same AWS account or a different account.
 You can use the roles to delegate access to users, applications or services that generally do not have access
to your AWS resources.
 IAM roles are of 4 types, primarily differentiated by who or what can assume the role:
 AWS service role
 AWS service role for an EC2 instance
 AWS service role for Cross-Account Access
 AWS service-linked role
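
A hedged sketch of how an application assumes a role to obtain temporary credentials, using the boto3 SDK. The account ID and the S3Admin role ARN are hypothetical placeholders, and the caller must already be permitted to assume that role.

import boto3

sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/S3Admin",   # hypothetical role ARN
    RoleSessionName="example-session",
)

creds = assumed["Credentials"]              # temporary, short-lived credentials
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])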

Virtualization in Cloud Computing


What is Virtualization?


 Virtualization can be defined as a process that enables the creation of a virtual version of a desktop,
operating system, network resources, or server. Virtualization plays a key and dominant role in cloud
computing.
 This ensures that the physical delivery of the resource or an application is separated from the actual
resource itself. It helps reduce the space or cost involved with the resource. This technique enables the
end-user to run multiple desktop operating systems and applications simultaneously on the same
hardware and software.

Types of Virtualizations
Application Virtualization
 This can be defined as the type of virtualization that enables the end user of an application to have remote
access. This is achieved through a server, which holds all the personal information and other applicable
characteristics required to use the application.
 The server is accessible through the internet, and it runs on a local workstation. With Application
virtualization, an end-user can run two different versions of the same software or the same application.
 Application virtualization is offered through packaged software or a hosted application.
Network Virtualization
 This kind of virtualization can execute many virtual networks, and each has a separate control and data
plan. It co-occurs on the top of a physical network, and it can be run by parties who are not aware of one
another.
 Network virtualization creates virtual networks, and it also maintains a provision of virtual networks.
 Through network virtualization, logical switches, firewalls, routers, load balancers, and workload security
management systems can be created.
Desktop Virtualization
 This can be defined as the type of Virtualization that enables the operating system of end-users to be
remotely stored on a server or data center. It enables the users to access their desktops remotely and do
so by sitting in any geographical location. They can also use different machines to virtually access their
desktops.
 With desktop virtualization, an end user can work on more than one operating system, based on the business
needs of that individual.
 It delivers portability, user mobility, easy software management with patches and updates.
Storage Virtualization
 This type of Virtualization provides virtual storage systems that facilitate storage management.
 It facilitates effective management of storage from multiple sources, which is accessed as a single
repository. Storage virtualization ensures consistent and smooth performance.
 It also offers continuous updates and patches on advanced functions. It also helps cope with the changes
that come up in the underlying storage equipment.
Server Virtualization
 This kind of Virtualization ensures masking of servers. The main or the intended server is divided into many
virtual servers. Such servers keep changing their identity numbers and processors to facilitate the masking
process. This ensures that each server can run its own operating systems in complete isolation.
Data Virtualization
 This can be defined as the type of virtualization wherein data is sourced and collected from several
sources and managed from a single location. No technical knowledge is required of where such data is
sourced and collected, or how it is stored and formatted.
 The data is arranged logically, and the interested parties and stakeholders then access the virtual view of
such data. These reports can also be accessed by end users remotely.


Amazon Elastic Compute Cloud (Amazon EC2) Instance

SESSION-5
EC2 instance types basics:
 Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web
Services (AWS) Cloud.
 Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity,
reducing your need to forecast traffic.
 EC2 Instance types offer different compute, memory & storage capabilities and are grouped in instance
families based on these capabilities
 EC2 provides each instance with a consistent and predictable amount of CPU capacity, regardless of its
underlying hardware.

Features of Amazon EC2:


Amazon EC2 provides the following features:

 Virtual computing environments, known as instances


 Preconfigured templates for your instances, known as Amazon Machine Images (AMIs)
 Various configurations of CPU, memory, storage, and networking capacity for your instances, known as
instance types
 Secure login information for your instances using key pairs
 Static IPv4 addresses for dynamic cloud computing, known as Elastic IP addresses
 Metadata, known as tags, that you can create and assign to your Amazon EC2 resources
 Virtual networks you can create that are logically isolated from the rest of the AWS Cloud, and that you can
optionally connect to your own network, known as virtual private clouds (VPCs)
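
The features above (AMIs, instance types, key pairs) come together when an instance is launched. The sketch below uses the boto3 SDK; the AMI ID and key pair name are placeholders for values from your own account, and the region is an arbitrary example.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical Amazon Machine Image
    InstanceType="t2.micro",           # a general purpose instance type
    KeyName="my-key-pair",             # key pair used for secure login
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])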

EC2 Instances types:


General Purpose Instances
 General purpose instances provide a balance of compute, memory and networking resources, and can
be used for a variety of diverse workloads.
 General purpose instances are optimized to have a high number of CPU cores, on-demand storage and
memory.
 These instances are ideal for applications that use these resources in equal proportions such as web
servers and software development and testing.
Compute Optimized Instances
 Compute Optimized instances are ideal for compute bound applications that benefit from high
performance processors.
 Instances belonging to this family are well suited for batch processing workloads, media transcoding,
high performance web servers, high performance computing (HPC), scientific modeling, dedicated
gaming servers and ad server engines, machine learning inference and other compute intensive
applications.
Memory-Optimized Instances
 Memory optimized instances are designed to deliver fast performance for workloads that process large
data sets in memory.


 Examples include high performance databases, distributed caches, and in-memory analytics.
Accelerated Computing Instances
 Accelerated computing instances use hardware accelerators, or coprocessors, to perform functions,
such as floating point number calculations, graphics processing, or data pattern matching, more
efficiently than is possible in software running on CPUs.

Storage Optimized Instances


 Storage optimized instances are designed for workloads that require high, sequential read and write
access to very large data sets on local storage.
 They are optimized to deliver tens of thousands of low-latency, random I/O operations per second
(IOPS) to applications.
HPC Optimized
 High performance computing (HPC) instances are built to offer the best price performance for running
HPC workloads at scale on AWS.
 HPC instances are ideal for applications that benefit from high-performance processors, such as large,
complex simulations and deep learning workloads.

Course: Cloud Computing Code: 20CS53I


Week-11
Session -5

DNS Services and CDN in Azure


The Domain Name System, or DNS, is responsible for translating (or resolving) a service name to an IP address.
Azure DNS is a hosting service for domains and provides naming resolution using the Microsoft Azure
infrastructure. Azure DNS not only supports internet-facing DNS domains, but it also supports private DNS
zones.
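
A tiny illustration of what DNS resolution does, using Python's standard library and the operating system's resolver. The host name is just an example and the returned address will vary.

import socket

print(socket.gethostbyname("www.example.com"))   # e.g. 93.184.216.34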

Private DNS
Azure Private DNS provides a reliable and secure DNS service for your virtual network. Azure Private DNS
manages and resolves domain names in the virtual network without the need to configure a custom DNS
solution. By using private DNS zones, you can use your own custom domain name instead of the Azure-provided
names during deployment. Using a custom domain name helps you tailor your virtual network architecture to
best suit your organization's needs. It provides name resolution for virtual machines (VMs) within a virtual
network and connected virtual networks. Additionally, you can configure zone names with a split-horizon view,
which allows a private and a public DNS zone to share the same name.

Benefits
Azure Private DNS provides the following benefits:

• Removes the need for custom DNS solutions. Previously, many customers created custom DNS
solutions to manage DNS zones in their virtual network. You can now manage DNS zones using the native
Azure infrastructure, which removes the burden of creating and managing custom DNS solutions.
• Use all common DNS record types. Azure DNS supports A, AAAA, CNAME, MX, PTR, SOA, SRV, and TXT records.


• Automatic hostname record management. Along with hosting your custom DNS records, Azure
automatically maintains hostname records for the VMs in the specified virtual networks. In this scenario,
you can optimize the domain names you use without needing to create custom DNS solutions or modify
applications.
• Hostname resolution between virtual networks. Unlike Azure-provided host names, private DNS zones can be shared between virtual networks. This capability simplifies cross-network and service-discovery scenarios, such as virtual network peering.
• Familiar tools and user experience. To reduce the learning curve, this service uses well-established
Azure DNS tools (Azure portal, Azure PowerShell, Azure CLI, Azure Resource Manager templates, and the
REST API).
• Split-horizon DNS support. With Azure DNS, you can create zones with the same name that resolve to different answers from within a virtual network and from the public internet. A typical scenario for split-horizon DNS is to provide a dedicated version of a service for use inside your virtual network (a small illustration follows this list).
• Available in all Azure regions. The Azure DNS private zones feature is available in all Azure regions in
the Azure public cloud.
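The split-horizon behaviour mentioned above can be pictured with a toy resolver: the same name returns a private answer inside the virtual network and a public answer everywhere else. This is only an illustration of the concept, not of the Azure API; the names and addresses are made up.

# Toy illustration of split-horizon DNS: the answer depends on where the query comes from.
ZONES = {
    "contoso.com": {
        "private": {"app.contoso.com": "10.0.1.4"},      # answer inside the virtual network
        "public":  {"app.contoso.com": "203.0.113.10"},  # answer from the public internet
    }
}

def resolve(name: str, zone: str, inside_vnet: bool) -> str:
    view = "private" if inside_vnet else "public"
    return ZONES[zone][view][name]

print(resolve("app.contoso.com", "contoso.com", inside_vnet=True))   # 10.0.1.4
print(resolve("app.contoso.com", "contoso.com", inside_vnet=False))  # 203.0.113.10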

CDN - Content Delivery Network

A content delivery network (CDN) is a distributed network of servers that can efficiently deliver web content to
users. CDNs store cached content on edge servers that are close to end users to minimize latency.

Overview of Azure CDN

Azure CDN provides developers with a global solution for delivering high-bandwidth content to users by caching it at strategically positioned physical nodes throughout the world. By applying a range of network optimizations through its CDN points of presence (POPs), it can also accelerate dynamic content that cannot be cached.

Azure CDN Features

• Fast content delivery: Caching static content at locations close to the user base improves the speed with which user requests can be completed (a small header-inspection sketch follows this list).
• Dynamic site acceleration: With the growing need to deliver personalized content, CDNs must also accelerate dynamic content, even though such content cannot be cached.

• High availability and reliable uptime: Because requests are served from a distributed network of edge servers, content remains available even when individual nodes fail, and traffic is delivered securely.
• Significant improvement in load times: Microsoft Azure's vast network of edge servers significantly improves load times for applications that serve global audiences.
• Easy to set up and manage: Azure CDN leverages Microsoft's global presence to deliver content quickly while remaining easy to set up, with low maintenance requirements.
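One practical way to observe CDN caching is to request the same static asset twice and inspect cache-related response headers. The sketch below uses only the Python standard library; the URL is a placeholder, and the exact header names (Cache-Control, Age, X-Cache) vary between CDN providers and configurations, so treat them as examples rather than guaranteed fields.

import urllib.request

URL = "https://example.azureedge.net/assets/logo.png"  # placeholder CDN endpoint URL

def show_cache_headers(url: str) -> None:
    with urllib.request.urlopen(url) as response:
        # Headers such as Cache-Control, Age, or X-Cache (names vary by provider)
        # hint at whether the edge served the object from its cache.
        for header in ("Cache-Control", "Age", "X-Cache"):
            print(header, "=", response.headers.get(header))

# The second request is more likely than the first to be served from an edge cache.
show_cache_headers(URL)
show_cache_headers(URL)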

WEEK-12: Industry Class: Kubernetes in Azure


Session No. 11


Azure Kubernetes Service (AKS) offers the quickest way to start developing and deploying cloud-native apps in Azure, in datacenters, or at the edge, with built-in code-to-cloud pipelines and guardrails. It provides unified management and governance for on-premises, edge, and multi-cloud Kubernetes clusters.

Azure Virtual Machines

An Azure virtual machine is an on-demand, scalable computing resource available in Azure. Virtual machines are generally used to host applications when the customer requires more control over the computing environment than other compute resources offer.
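Assuming the Azure SDK for Python packages azure-identity and azure-mgmt-compute are installed and credentials are available to DefaultAzureCredential, a minimal sketch of enumerating the virtual machines in a subscription might look like the following. The subscription ID is a placeholder read from the environment.

import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder: expects AZURE_SUBSCRIPTION_ID to be set in the environment.
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

compute_client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# List every VM in the subscription along with the region it was deployed to.
for vm in compute_client.virtual_machines.list_all():
    print(vm.name, vm.location)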

WEEK-10: Azure Overview
Session-2

Azure is a cloud computing platform and an online portal that allows you to access and manage cloud services and resources provided by Microsoft. These services and resources include storing your data and transforming it, depending on your requirements.

Important components of Microsoft Azure include Compute, Storage, Database, Monitoring & Management services, Content Delivery Network, Azure Networking, Web & Mobile services, etc.

What is Azure and how does it work?

Azure is a huge collection of servers and networking hardware, which runs a complex set of distributed
applications. These applications orchestrate the configuration and operation of virtualized hardware and
software on those servers. The orchestration of these servers is what makes Azure so powerful.

The front end hosts the services that handle customer requests, such as requests to allocate Azure resources and services like virtual machines. The front end first verifies that the user is authorized to allocate the requested resources. If so, it checks a database to locate a server rack with sufficient capacity and then instructs the fabric controller on that rack to allocate the resource.

Azure terminology:
Resource: An entity that's managed by Azure. Examples include Azure Virtual Machines, virtual networks, and
storage accounts.

Azure Active Directory (Azure AD): The Microsoft cloud-based identity and access management service. Azure
AD lets your employees sign in and access resources.


Azure AD tenant: A dedicated and trusted instance of Azure AD. When your organization signs up for a Microsoft cloud service subscription (for example, Microsoft Azure, Intune, or Microsoft 365), an Azure AD tenant is created automatically. An Azure AD tenant represents a single organization.

Azure AD directory: Each Azure AD tenant has a single, dedicated, and trusted directory. The directory includes
the tenant's users, groups, and applications. Use the directory to manage identity and access management
functions for tenant resources.

Resource groups: Logical containers that you use to group related resources in a subscription. Each resource
can exist in only one resource group. Resource groups allow for more granular grouping within a subscription.
They're commonly used to represent a collection of assets that are required to support a workload,
application, or specific function within a subscription.
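A resource group is usually the first thing created when deploying a workload. Under the same assumptions as before (azure-identity and azure-mgmt-resource installed, credentials available to DefaultAzureCredential), a hedged sketch of creating one; the group name and region are examples only.

import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # placeholder
resource_client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Create (or update) a resource group in a chosen region; the name and location are examples.
rg = resource_client.resource_groups.create_or_update(
    "rg-cloud-notes-demo",
    {"location": "eastus"},
)
print(rg.name, rg.location)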

Management groups: Logical containers that you use for one or more subscriptions. You can define a hierarchy
of management groups, subscriptions, resource groups, and resources to efficiently manage access, policies,
and compliance through inheritance.

Azure regions and region pairs


What is an Azure Region?
An Azure region is a set of datacenters connected through a dedicated low-latency network. There is no fixed number of datacenters per region: a region may consist of a single datacenter or of several, so regions vary in size. In other words, an Azure region is a group of one or more Azure datacenters. As of this course recording, Azure has 58 regions worldwide.
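The regions available to a subscription can also be listed programmatically. The sketch below assumes azure-mgmt-resource's SubscriptionClient and the same credential setup as earlier; the count it prints reflects the current catalogue rather than the figure quoted above.

import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import SubscriptionClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # placeholder
subscription_client = SubscriptionClient(DefaultAzureCredential())

# Each location returned is an Azure region the subscription can deploy into.
locations = list(subscription_client.subscriptions.list_locations(subscription_id))
for loc in locations:
    print(loc.name, "-", loc.display_name)

print("Total regions available to this subscription:", len(locations))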

