Cloud Notes1
A computer system has five basic units that help it perform operations, which are given
below:
1. Input Unit
2. Output Unit
3. Storage Unit
4. Arithmetic Logic Unit
5. Control Unit
Input Unit:
The input unit connects the external environment with the internal computer system. It provides data and
instructions to the computer system. Commonly used input devices are the keyboard, mouse, magnetic
tape, etc.
Output Unit:
It connects the internal system of a computer to the external environment. It provides the
results of any computation, or instructions, to the outside world. Some output devices are printers,
monitors, etc.
Storage Unit:
This unit holds the data and instructions. It also stores intermediate results before they are
sent to the output devices, and retains data for later use.
The storage unit of a computer system can be divided into two categories:
• Primary Storage: This memory stores the data that is currently being processed. It
is used for temporary storage of data; the data is lost when the computer is switched off.
RAM is used as primary storage memory.
• Secondary Storage: The secondary memory is slower and cheaper than primary memory. It is
used for permanent storage of data. Commonly used secondary memory devices are hard disk,
CD etc.
Arithmetic Logical Unit:
All the calculations are performed in ALU of the computer system. The ALU can perform basic
operations such as addition, subtraction, division, multiplication etc. Whenever calculations are
required, the control unit transfers the data from storage unit to ALU. When the operations are done,
the result is transferred back to the storage unit.
Control Unit:
It controls all other units of the computer, as well as the flow of data and instructions to and
from the storage unit to the ALU. Thus it is also known as the central nervous system of the computer.
CPU:
It is Central Processing Unit of the computer. The control unit and ALU are together known as
CPU. The CPU is the brain of the computer system. It performs the following tasks:
• It performs all operations.
• It takes all decisions.
• It controls all the units of computer.
A desktop is a personal computer intended for individual use, while a server is a dedicated
computer that runs a software service that can be accessed by other computers in the network.
Servers are normally built from more powerful components, such as faster CPUs, higher-performing
RAM, and larger hard disks than desktop computers, since they need to satisfy a large number of
requests at a given time.
Furthermore, servers run special server-oriented operating systems capable of maintaining backups
and providing improved security, while desktop operating systems normally do not offer these
services, or offer only simple versions of them.
Client-Server Computing:
In client-server computing, clients request resources and the server provides those resources.
A server may serve multiple clients at the same time, while a client typically contacts one server at a time.
Both the client and server usually communicate via a computer network but sometimes they may
reside in the same system.
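The request-response flow described above can be sketched with Python's standard socket module. This is an illustrative toy, not a production server: the port number and the "index.html" resource name are made-up values.

```python
import socket
import threading
import time

def run_server(host="127.0.0.1", port=56789):
    # Server: accept one client, read its request, send back a response.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    conn, _addr = srv.accept()                 # block until a client connects
    with conn:
        request = conn.recv(1024)              # the client's request
        conn.sendall(b"contents of " + request)  # the server's response
    srv.close()

def run_client(host="127.0.0.1", port=56789):
    # Client: connect (retrying until the server is listening), request a resource.
    for _ in range(100):
        try:
            cli = socket.create_connection((host, port))
            break
        except ConnectionRefusedError:
            time.sleep(0.05)
    with cli:
        cli.sendall(b"index.html")
        return cli.recv(1024)

t = threading.Thread(target=run_server)
t.start()
reply = run_client()
t.join()
print(reply.decode())  # contents of index.html
```

Note how one server function could serve many such clients in turn, while each client speaks to a single server, matching the description above.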
A Solid State Drive (SSD) is a non-volatile storage device that stores and retrieves data on
solid-state flash memory. The data is stored on interconnected flash memory chips instead of
spinning platters, which makes SSDs faster than HDDs and gives them better overall performance.
An HDD uses magnetism to store data on a rotating platter. It has a read/write head
that floats above the spinning platter to read and write the data. The faster the platter spins,
the quicker the HDD can perform. An HDD also contains an I/O controller and firmware, which tell the
hardware what to do and communicate with the rest of the system. The full form of HDD is Hard
Disk Drive.
• SSD is faster at reading and writing data, whereas HDD reads and writes more slowly.
• SSD has lower latency, whereas HDD has higher latency.
• SSD supports more I/O operations per second (IOPS), while HDD supports fewer.
• SSD produces no noise, whereas HDD produces noise due to mechanical movement.
• The moving parts of HDDs make them vulnerable to crashes and damage, while SSDs can
tolerate vibration up to 2000 Hz.
• SSD stands for Solid State Drive, whereas HDD stands for Hard Disk Drive.
Block storage, also known as block-level storage or elastic block storage, is a sequence of
data bytes that contains a number of whole records with a maximum length (the block size). The
process of storing data into blocks is called blocking, and the process of retrieving data from blocks is
called deblocking. Blocked data is generally stored in a data buffer and read or written one block at a
time, which reduces overhead and speeds up the handling of the data stream. One of the most notable
pros of block storage is the ability to efficiently access and retrieve structured data from a database.
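The blocking and deblocking described above can be sketched over an in-memory byte stream. The 4-byte block size here is purely illustrative; real block devices use sizes like 512 B or 4 KiB.

```python
import io

BLOCK_SIZE = 4  # illustrative block size only

record_stream = b"abcdefghij"

# Blocking: split the data stream into fixed-size blocks.
buf = io.BytesIO(record_stream)
blocks = []
while True:
    block = buf.read(BLOCK_SIZE)   # read one block at a time
    if not block:
        break
    blocks.append(block)

# Deblocking: reassemble the original data from the blocks.
restored = b"".join(blocks)

print(blocks)                      # [b'abcd', b'efgh', b'ij']
print(restored == record_stream)   # True
```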
What is Object Storage?
Object storage, also called object-based storage, is an architecture that manages data as
objects, a key difference when compared with a storage architecture like a file system. Object storage
can work well for unstructured data in which data is written once and read once (or many times).
Static online content, data backups, image archives, videos, pictures, and music files can be stored as
objects.
What is File Storage?
File storage, also referred to as file-based storage (FBS) or a file system, is a format or
platform used to store and manage data in a hierarchical tree structure (a file hierarchy), where
files are identifiable in a directory structure.
File systems store data as a set of individual file paths, which are strings of characters used to
uniquely identify the file in a directory structure. These unique identifiers include the file name,
extension, and its path and are how a file system controls the storage, retrieval, and graphical display
of the data for a user.
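Those unique identifiers — file name, extension, and path — can be inspected with Python's pathlib; the path used below is a hypothetical example:

```python
from pathlib import PurePosixPath

# A file path uniquely identifies a file within the directory hierarchy.
p = PurePosixPath("/home/user/reports/q3-summary.pdf")

print(p.name)    # q3-summary.pdf     (file name with extension)
print(p.stem)    # q3-summary         (file name without extension)
print(p.suffix)  # .pdf               (extension)
print(p.parent)  # /home/user/reports (directory portion of the path)
```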
What is a Switch?
A switch is a networking device, which provides the facility to share the information & resources by
connecting different network devices, such as computers, printers, and servers, within a small
business network.
What is a Router?
A router is a networking device used to connect multiple switches and their corresponding networks
to build a large network. These switches and their corresponding networks may be in a single
location or different locations.
It works on the network layer and routes data packets along the shortest path across the
network.
Switch vs. Router:
• A switch connects multiple networked devices within a network; a router connects multiple switches and their corresponding networks.
• A switch works on the data link layer of the OSI model; a router works on the network layer of the OSI model.
• A switch cannot perform NAT (Network Address Translation); a router can.
• A switch takes more time when making complicated routing decisions; a router can make routing decisions much faster than a switch.
• A switch provides only port security; a router provides security measures to protect the network from security threats.
• A switch works in either half- or full-duplex transmission mode; a router works in full-duplex mode, though it can be changed manually to half-duplex.
• A switch sends information from one device to another in the form of frames (for an L2 switch) or packets (for an L3 switch); a router sends information from one network to another in the form of data packets.
• Switches are available with different port counts, such as 8, 16, 24, 48, and 64; a router contains two ports by default (such as Fast Ethernet ports), but serial ports can be added explicitly.
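The layer difference above can be sketched with two lookup tables: a switch forwards by destination MAC address, a router by destination network. All addresses and port names below are made-up illustration values.

```python
# Switch: forwards frames using a MAC address table (data link layer).
mac_table = {
    "aa:bb:cc:00:00:01": "port1",
    "aa:bb:cc:00:00:02": "port2",
}

def switch_forward(dst_mac: str) -> str:
    # A frame for an unknown destination MAC is flooded out of all ports.
    return mac_table.get(dst_mac, "flood")

# Router: forwards packets using a routing table (network layer).
routing_table = {
    "10.0.1.0/24": "eth0",
    "10.0.2.0/24": "eth1",
}

def router_forward(dst_network: str) -> str:
    # A packet with no matching route is dropped (no default route here).
    return routing_table.get(dst_network, "drop")

print(switch_forward("aa:bb:cc:00:00:02"))  # port2
print(router_forward("10.0.1.0/24"))        # eth0
```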
Networking – Firewalls:
Firewalls prevent unauthorized access to networks through software or firmware. By utilizing a set
of rules, the firewall examines and blocks incoming and outgoing traffic.
Fencing your property protects your house and keeps trespassers at bay; similarly, firewalls are
used to secure a computer network.
Firewalls are network security systems that prevent unauthorized access to a network.
It can be a hardware or software unit that filters the incoming and outgoing traffic within a private
network, according to a set of rules to spot and prevent cyberattacks.
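The rule-based filtering described above can be sketched as a first-match rule list; the ports and actions below are illustrative, not a real firewall configuration:

```python
# Ordered rule list: the first matching rule decides; unmatched traffic
# falls through to a default-deny policy.
RULES = [
    {"port": 443, "action": "allow"},   # allow HTTPS
    {"port": 80,  "action": "allow"},   # allow HTTP
    {"port": 23,  "action": "deny"},    # block Telnet
]
DEFAULT_ACTION = "deny"

def filter_packet(packet: dict) -> str:
    for rule in RULES:
        if packet["port"] == rule["port"]:
            return rule["action"]
    return DEFAULT_ACTION               # anything not explicitly allowed is blocked

print(filter_packet({"port": 443}))  # allow
print(filter_packet({"port": 22}))   # deny (no matching rule)
```

Real firewalls match on much more than the destination port (source/destination address, protocol, connection state), but the first-match-wins evaluation is the same idea.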
Databases:
A cloud database is an organized and managed collection of data in an IT system that resides on a
public, private or hybrid cloud computing platform.
From an overall design and functionality perspective, a cloud database is no different than an on-
premises one that runs on an organization's own data center systems.
The biggest difference between them lies in how the database is deployed and managed.
Server virtualization:
What is Server Virtualization in Cloud Computing?
Server virtualization is the process of dividing a physical server into several individual, isolated
virtual servers using a software application. Every virtual server can run its own operating system
independently.
Why Server Virtualization?
Server Virtualization is one of the most cost-effective methods to offer Web hosting services and
uses the existing resources effectively in IT Infrastructure.
Without server virtualization, servers use only a small fraction of their processing
power. This results in idle servers, because workloads are distributed to only a portion of the
network's servers.
Data centers become overcrowded with underutilized servers, wasting resources and
consuming excessive power.
By dividing each physical server into multiple virtual servers, server virtualization
allows each virtual server to act as a unique device.
Every virtual server is capable of running its own applications and operating system.
This increases resource utilization by making each virtual server behave as a
physical server, multiplying the capacity of every physical device.
Key Benefits of Server Virtualization:
• Higher server capability
• Cheaper operational costs
• Reduced server complexity
• Improved application performance
Deployment Models
The cloud deployment model identifies the specific type of cloud environment based on ownership, scale, and
access, as well as the cloud’s nature and purpose. The location of the servers you’re utilizing and who controls
them are defined by a cloud deployment model.
1. Public cloud
2. Private cloud
3. Hybrid cloud
4. Community cloud
Public cloud:
It is accessible to the public. Public deployment models in the cloud are perfect for organizations with
growing and fluctuating demands.
It also makes a great choice for companies with low-security concerns. Thus, you pay a cloud service
provider for networking services, compute virtualization & storage available on the public internet.
It is also a great delivery model for teams with development and testing needs.
Its configuration and deployment are quick and easy, making it an ideal choice for test environments.
Advantages of Public Cloud Model:
Minimal Investment: Because it is a pay-per-use service, there is no substantial upfront fee, and
infrastructure management is handled by the provider.
No maintenance: The maintenance work is done by the service provider (not users).
Dynamic Scalability: On-demand resources are available to fulfill your company's needs.
Disadvantages of Public Cloud Model:
Less secure: Because resources are shared publicly, there is no guarantee of high-level security.
Low customization: Because it is accessed by many users, it cannot be customized to individual
requirements.
Private Cloud:
The private cloud deployment model is the exact opposite of the public cloud deployment model. It’s a
one-on-one environment for a single user (customer).
There is no need to share your hardware with anyone else. The distinction between private and public
clouds is in how you handle all of the hardware.
It is also called the “internal cloud” & it refers to the ability to access systems and services within a
given border or organization.
The cloud platform is implemented in a cloud-based secure environment that is protected by powerful
firewalls and under the supervision of an organization’s IT department. The private cloud gives
greater flexibility of control over cloud resources.
Advantages of Private Cloud Model:
Better Control: You are the sole owner of the environment, so you gain complete command over
services and users.
Improved Security: Only authorized users have access. By segmenting resources within the same
infrastructure, improved access control and security can be achieved.
Supports Legacy Systems: This approach is designed to work with legacy systems that are unable to
access the public cloud.
Disadvantages of Private Cloud Model:
Costly: Private clouds are more costly because they provide personalized facilities.
Hybrid Cloud:
By bridging the public and private worlds with a layer of proprietary software, hybrid cloud computing
gives the best of both worlds.
With a hybrid solution, you may host the app in a safe environment while taking advantage of the
public cloud’s cost savings.
Organizations can move data and applications between different clouds using a combination of two or
more cloud deployment methods, depending on their needs.
Advantages of Hybrid Cloud Model:
Flexibility and control: Businesses gain more flexibility to design personalized solutions that meet
their particular needs.
Cost: Because public clouds provide scalability, you pay only for extra capacity when it is
needed, so overall cost is reduced.
Disadvantages of Hybrid Cloud Model:
Difficult to manage: Hybrid clouds are difficult to manage because they combine both public and
private clouds.
Slow data transmission: Because data travels through the public cloud, latency can occur.
Community Cloud:
It allows systems and services to be accessible by a group of organizations.
It is a distributed system that is created by integrating the services of different clouds to address the
specific needs of a community, industry, or business.
The infrastructure of the community could be shared between the organization which has shared
concerns or tasks.
It is generally managed by a third party or by the combination of one or more organizations in the
community.
Advantages of Community Cloud Model:
Cost Effective: It is cost-effective because the cloud is shared by multiple organizations or communities.
Collaboration and data sharing: It is suitable for both collaboration and data sharing.
Disadvantages of Community Cloud Model:
Rigid: Shared resources are limited to the community's mutual interests; if an organization wants
changes according to its own needs, they cannot easily be made.
Infrastructure as a Service (IaaS):
Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google Compute Engine (GCE),
Rackspace, and Cisco Metacloud.
Platform as a Service (PaaS):
Builds on virtualization technology, so resources can easily be scaled up or down as per the
organization's need.
Supports multiple languages and frameworks.
Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine, Apache Stratos,
Magento Commerce Cloud, and OpenShift.
Software as a Service (SaaS):
Users are not responsible for hardware and software updates; updates are applied automatically.
The services are purchased on a pay-as-per-use basis.
Example: BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco WebEx, Slack, and
GoToMeeting.
Front End
The front end is used by the client. It contains client-side interfaces and applications that are required to access
the cloud computing platforms. The front end includes web browsers (such as Chrome, Firefox, and
Internet Explorer), thin and fat clients, tablets, and mobile devices.
Back End
The back end is used by the service provider. It manages all the resources that are required to provide cloud
computing services. It includes a huge amount of data storage, security mechanism, virtual machines,
deploying models, servers, traffic control mechanisms, etc.
Components of Cloud Computing Architecture
Cloud computing architecture has the following components -
1. Client Infrastructure
Client Infrastructure is a Front end component. It provides GUI (Graphical User Interface) to interact with the
cloud.
2. Application
The application may be any software or platform that a client wants to access.
3. Service
A cloud service manages which type of service you access according to the client's requirement.
Cloud computing offers the following three types of services:
i. Software as a Service (SaaS). Example: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.
ii. Platform as a Service (PaaS). Example: Windows Azure, Force.com, Magento Commerce Cloud,
OpenShift.
iii. Infrastructure as a Service (IaaS). Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE),
Cisco Metapod.
4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual machines.
5. Storage
Storage is one of the most important components of cloud computing. It provides a huge amount of storage
capacity in the cloud to store and manage data.
6. Infrastructure
It provides services on the host level, application level, and network level. Cloud infrastructure includes
hardware and software components such as servers, storage, network devices, virtualization software, and
other storage resources that are needed to support the cloud computing model.
7. Management
Management is used to manage components such as application, service, runtime cloud, storage,
infrastructure, and other security issues in the backend and establish coordination between them.
Stateful Service:
In stateful protocols, the server retains session information about each client across requests.
FTP (File Transfer Protocol) and Telnet are examples of stateful protocols.
Stateless Service:
Stateless services are network protocols in which the client sends a request and the
server responds based only on the information in that request.
The server is not required to retain session information or status about each communicating
partner across multiple requests.
HTTP (Hypertext Transfer Protocol), UDP (User Datagram Protocol), and DNS (Domain Name System) are
examples of stateless protocols.
Salient features of Stateless Service:
Stateless services simplify the design of the server.
A stateless service requires fewer resources because the system does not need to keep track of
multiple sessions.
Each communication in a stateless protocol is discrete and unrelated to those that precede or follow it.
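The difference can be sketched with two toy request handlers; note that only the stateful one must keep per-client session data on the server side (the request fields used here are invented for the example):

```python
# Stateless: the response is computed from the request alone.
def stateless_handler(request: dict) -> dict:
    return {"status": 200, "echo": request["path"]}

# Stateful: the server keeps session state between requests.
sessions: dict = {}

def stateful_handler(request: dict) -> dict:
    sid = request["session_id"]
    sessions[sid] = sessions.get(sid, 0) + 1   # remembered across requests
    return {"status": 200, "visits": sessions[sid]}

print(stateless_handler({"path": "/index"}))   # {'status': 200, 'echo': '/index'}
print(stateful_handler({"session_id": "abc"}))  # {'status': 200, 'visits': 1}
print(stateful_handler({"session_id": "abc"}))  # {'status': 200, 'visits': 2}
```

If the stateful server crashes, the `sessions` dictionary is lost and clients must start over; the stateless handler has nothing to lose, which is why stateless designs are simpler to scale.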
When servers start reaching performance limits, the result is increased latency and performance
bottlenecks caused by I/O and CPU capacity.
When storage optimization does not work: Whenever the effectiveness of storage optimization
solutions begins to decline, it is a sign that scaling is needed.
Scaling out addresses some of the limitations of scale-up infrastructure, as it is generally more efficient
and effective.
Furthermore, scaling out using the cloud ensures you do not have to buy new hardware whenever you
want to upgrade your system.
While scaling out allows you to replicate resources or services, one of its key differentiators is fluid
resource scaling. This allows you to respond to varying demand quickly and effectively.
When to Scale Out Infrastructure
When you need a long-term scaling strategy: The incremental nature of scaling out allows you to
scale your infrastructure for expected, long-term data growth. Components can be added or removed
depending on your goals.
When upgrades need to be flexible: Scaling out avoids the limitations of aging technology, as
components can be upgraded or replaced individually.
Load Balancing:
Load balancing is a method of distributing work evenly across different devices or pieces of
hardware. Typically, the load is balanced between different servers, or between the CPU and hard
drives in a single cloud server.
Load balancing was introduced for various reasons. One is to improve the speed and
performance of each individual device; another is to protect individual devices from hitting their
limits, which would reduce their performance.
Cloud load balancing is the practice of distributing workloads and computing resources in a cloud
environment. It enables enterprises to manage workload or application demands by distributing
resources among multiple computers, networks, or servers, and it involves managing the movement
of workload traffic and demands over the internet.
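A minimal sketch of one common balancing strategy, round-robin, distributing requests across a hypothetical pool of backend servers (the IP addresses are made up, and a real balancer would also track server health):

```python
import itertools

# Hypothetical backend pool.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
next_server = itertools.cycle(servers)

def route_request() -> str:
    # Round-robin: each incoming request goes to the next server in turn.
    return next(next_server)

assigned = [route_request() for _ in range(6)]
print(assigned)
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2', '10.0.0.3']
```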
Monolithic architecture is built as one large system, usually with a single code base. A monolithic
application becomes tightly coupled and entangled as it evolves, making it difficult to isolate
services for purposes such as independent scaling or code maintainability.
It is extremely difficult to change the technology, language, or framework, because everything is
tightly coupled and depends on everything else.
Microservices architecture is built as small, independent modules based on business functionality. In a
microservices application, each project and service is independent of the others at the code
level. Therefore, it is easy to configure and deploy completely, and also easy to scale based on demand.
Sr. No. | Key | Monolithic architecture | Microservices architecture
4 | Deployment | A large code base makes the IDE slow, and build time increases. | Each project is independent and small in size, so overall build and development time decreases.
Features of AWS
AWS provides various powerful features for building scalable, cost-effective enterprise applications. Some
important features of AWS are given below:
AWS is scalable because it has an ability to scale the computing resources up or down according to the
organization's demand.
AWS is cost-effective as it works on a pay-as-you-go pricing model.
It offers various security services such as infrastructure security, data encryption, monitoring &
logging, identity & access control, penetration testing, and DDoS mitigation.
It can efficiently manage and secure Windows workloads.
2. Microsoft Azure
Microsoft Azure is also known as Windows Azure. It supports various operating systems, databases,
programming languages, frameworks that allow IT professionals to easily build, deploy, and manage
applications through a worldwide network. It also allows users to create different groups for related utilities.
Its IaaS infrastructure allows us to launch a general-purpose virtual machine on different platforms such
as Windows and Linux.
It offers a Content Delivery Network (CDN) for delivering images, videos, audio, and applications.
GCP provides various serverless services, such as messaging, data warehousing, and database services.
If there is a sudden change in your hardware system, the application on the cloud might not offer great
performance.
Applications of AWS:
The most common applications of AWS are storage and backup, websites, gaming, mobile, web, and social
media applications. Some of the most crucial applications in detail are as follows:
1. Storage and Backup
One of the reasons why many businesses use AWS is because it offers multiple types of storage to choose
from and is easily accessible as well. It can be used for storage and file indexing as well as to run critical
business applications.
2. Websites
Businesses can host their websites on the AWS cloud, similar to other web applications.
3. Gaming
There is a lot of computing power needed to run gaming applications. AWS makes it easier to provide the
best online gaming experience to gamers across the world.
4. Mobile, Web and Social Applications
A feature that separates AWS from other cloud services is its capability to launch and scale mobile,
e-commerce, and SaaS applications. API-driven code on AWS enables companies to build highly
scalable applications without having to manage the underlying OS or other systems.
5. Big Data Management and Analytics (Application)
Amazon Elastic MapReduce (EMR) is used to process large amounts of data via the Hadoop framework.
The major AWS service categories are:
Compute services
Storage
Databases
Security tools
Developer tools
Management tools
SESSION-3
AWS IAM Multi-factor authentication (MFA) Overview
AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on
top of your user name and password.
MFA needs to be enabled on the Root user and IAM user separately as they are distinct entities.
Why is MFA Important?
The main benefit of MFA is that it enhances your organization's security by requiring users to
identify themselves with more than a username and password. While important, usernames and
passwords are vulnerable to brute-force attacks and can be stolen by third parties.
Enforcing the use of an MFA factor like a thumbprint or physical hardware key means increased
confidence that your organization will stay safe from cyber criminals.
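Virtual MFA devices typically generate codes with the TOTP algorithm (RFC 6238): an HMAC over a 30-second time counter, truncated to a short numeric code. A stdlib-only sketch, checked against the RFC's published SHA-1 test vector:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at_time=None, step=30, digits=6) -> str:
    # Time-based one-time password (RFC 6238): HMAC over the time counter.
    counter = int(time.time() if at_time is None else at_time) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: SHA-1, time = 59 s, 8 digits.
print(totp(b"12345678901234567890", at_time=59, digits=8))  # 94287082
```

Because the code depends on the current time and a shared secret, a stolen password alone is not enough to sign in, which is exactly the extra layer MFA adds.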
SESSION-4
IAM Roles
"An IAM role is an IAM identity that you can create in your account that has specific permissions." It is not
uniquely associated with a single person; it can be used by anyone who needs it.
Roles and users are both AWS identities with permissions policies that determine what the identity can and
cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be
assumable by anyone who needs it. Also, a role does not have standard long-term credentials such as a
password or access keys associated with it. For example, you can create S3Admin role and assign it to your
EC2 instance. This will enable that EC2 instance to manage S3 resources.
An IAM User can use a role in the same AWS account or a different account.
You can use roles to delegate access to users, applications, or services that do not normally have
access to your AWS resources.
IAM roles are of 4 types, primarily differentiated by who or what can assume the role:
AWS service role
AWS service role for an EC2 instance
AWS service role for Cross-Account Access
AWS service-linked role
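Which principal may assume a role is defined by its trust policy. Below is a sketch of the trust policy for the EC2 service-role case mentioned above (the S3Admin role name from the example is illustrative; the policy document format itself is AWS's standard one):

```python
import json

# Trust policy: lets the EC2 service assume the role (e.g. an S3Admin
# role attached to an instance, as in the example above).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The permissions policy attached to the role (e.g. allowing S3 actions) is a separate document; the trust policy only answers "who can assume this role".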
Virtualization can be defined as a process that enables the creation of a virtual version of a desktop,
operating system, network resources, or server. Virtualization plays a key and dominant role in cloud
computing.
This ensures that the physical delivery of the resource or an application is separated from the actual
resource itself. It helps reduce the space or cost involved with the resource. This technique enables the
end-user to run multiple desktop operating systems and applications simultaneously on the same
hardware and software.
Types of Virtualizations
Application Virtualization
This is the type of virtualization that enables the end user of an application to have remote
access. It is achieved through a server that holds all personal information and other
characteristics required to use the application.
The server is accessible through the internet and behaves like a local workstation. With application
virtualization, an end user can run two different versions of the same software or application.
Application virtualization is offered through packaged software or a hosted application.
Network Virtualization
This kind of virtualization can run many virtual networks, each with a separate control and data
plane. They coexist on top of one physical network and can be operated by parties who are not aware
of one another.
Network virtualization creates and provisions virtual networks. Through network virtualization,
logical switches, firewalls, routers, load balancers, and workload security management systems can
be created.
Desktop Virtualization
This is the type of virtualization that enables end users' operating systems to be stored remotely
on a server or in a data center. It allows users to access their desktops remotely from any
geographical location, and they can use different machines to access them.
With desktop virtualization, an end user can work on more than one operating system, based on the
business needs of that individual.
It delivers portability, user mobility, and easy software management with patches and updates.
Storage Virtualization
This type of virtualization provides virtual storage systems that facilitate storage management,
allowing storage from multiple sources to be managed effectively and accessed from a single
repository. Storage virtualization ensures consistent and smooth performance.
It also offers continuous updates and patches for advanced functions, and helps cope with changes
in the underlying storage equipment.
Server Virtualization
This kind of virtualization masks servers: the main (intended) server is divided into many
virtual servers. Such servers change their identity numbers and processors to facilitate the masking
process, ensuring that each virtual server can run its own operating system in complete isolation.
Data Virtualization
This is the type of virtualization in which data is sourced and collected from several
places and managed from a single location, without stakeholders needing technical knowledge of
where the data is sourced, collected, stored, or formatted.
The data is arranged logically, and interested parties and stakeholders then access a virtual view of
that data. These reports can also be accessed by end users remotely.
Examples include high-performance databases, distributed caches, and in-memory analytics.
Accelerated Computing Instances
Accelerated computing instances use hardware accelerators, or coprocessors, to perform functions,
such as floating point number calculations, graphics processing, or data pattern matching, more
efficiently than is possible in software running on CPUs.
Private DNS
Azure Private DNS provides a reliable and secure DNS service for your virtual network. Azure Private DNS
manages and resolves domain names in the virtual network without the need to configure a custom DNS
solution. By using private DNS zones, you can use your own custom domain name instead of the
Azure-provided names during deployment. Using a custom domain name helps you tailor your virtual network
architecture to best suit your organization's needs. It provides name resolution for virtual machines (VMs)
within a virtual network and across connected virtual networks. Additionally, you can configure zone names
with a split-horizon view, which allows a private and a public DNS zone to share the same name.
Benefits
Azure Private DNS provides the following benefits:
• Removes the need for custom DNS solutions. Previously, many customers created custom DNS
solutions to manage DNS zones in their virtual network. You can now manage DNS zones using the native
Azure infrastructure, which removes the burden of creating and managing custom DNS solutions.
• Use all common DNS records types. Azure DNS supports A, AAAA, CNAME, MX, PTR, SOA, SRV, and TXT
records.
• Automatic hostname record management. Along with hosting your custom DNS records, Azure
automatically maintains hostname records for the VMs in the specified virtual networks. In this scenario,
you can optimize the domain names you use without needing to create custom DNS solutions or modify
applications.
• Hostname resolution between virtual networks. Unlike Azure-provided host names, private DNS zones
can be shared between virtual networks. This capability simplifies cross-network and service-discovery
scenarios, such as virtual network peering.
• Familiar tools and user experience. To reduce the learning curve, this service uses well-established
Azure DNS tools (Azure portal, Azure PowerShell, Azure CLI, Azure Resource Manager templates, and the
REST API).
• Split-horizon DNS support. With Azure DNS, you can create zones with the same name that resolve to
different answers from within a virtual network and from the public internet. A typical scenario for split-
horizon DNS is to provide a dedicated version of a service for use inside your virtual network.
• Available in all Azure regions. The Azure DNS private zones feature is available in all Azure regions in
the Azure public cloud.
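The split-horizon behaviour described above — the same name resolving differently inside and outside the network — can be sketched with two record sets. The domain name and addresses below are illustrative values, not real records.

```python
# Two zones share the name "app.contoso.com": a private zone answers
# queries from inside the virtual network, a public zone answers the rest.
ZONES = {
    "private": {"app.contoso.com": "10.0.0.4"},      # VNet-internal address
    "public":  {"app.contoso.com": "203.0.113.10"},  # internet-facing address
}

def resolve(name: str, origin: str) -> str:
    view = "private" if origin == "inside-vnet" else "public"
    return ZONES[view][name]

print(resolve("app.contoso.com", "inside-vnet"))  # 10.0.0.4
print(resolve("app.contoso.com", "internet"))     # 203.0.113.10
```

This is the typical split-horizon scenario: clients inside the virtual network get a dedicated (often private) version of the service, while public clients get the internet-facing one.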
A content delivery network (CDN) is a distributed network of servers that can efficiently deliver web content to
users. CDNs store cached content on edge servers that are close to end users to minimize latency.
Azure CDN provides developers with a global solution for delivering high-bandwidth content to users by caching
it at strategically positioned physical nodes throughout the world. By exploiting multiple network
improvements through CDN POPs, it can also speed up dynamic content that cannot be cached.
• Fast content delivery: Caching static content at locations near to the user base improves the speed
with which user requests can be completed.
• Dynamic site acceleration: With the increasing requirement for delivering personalized content to
users, CDNs also need a way to deliver dynamic content quickly, since it cannot be cached.
• High availability and reliable uptime: Because content is replicated across many edge servers,
delivery continues even when individual nodes fail, while best-in-class security is maintained.
• Significantly faster load times: The vast network of edge servers from Microsoft Azure significantly
reduces load times for applications that serve global audiences.
• Easy to set up and manage: Azure CDN leverages Microsoft’s global presence to deliver content at
astonishing speeds all the while remaining very easy to set up with low maintenance requirements.
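As a sketch of how simple that setup can be, the Azure CLI commands below create a CDN profile and an endpoint. The resource group, profile, endpoint, and origin names are hypothetical, and an authenticated Azure CLI session is assumed:

```shell
# Create a CDN profile; the SKU selects the provider network
az cdn profile create \
  --resource-group MyResourceGroup \
  --name MyCdnProfile \
  --sku Standard_Microsoft

# Create an endpoint that caches content from the origin server
az cdn endpoint create \
  --resource-group MyResourceGroup \
  --profile-name MyCdnProfile \
  --name my-cdn-endpoint \
  --origin www.contoso.com
```

Once the endpoint is provisioned, users are served cached copies of the origin's static content from the nearest edge location.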
Azure Kubernetes Service (AKS) offers the quickest way to start developing and deploying
cloud-native apps in Azure, datacentres, or at the edge with built-in code-to-cloud pipelines and
guardrails. Get unified management and governance for on-premises, edge, and multi-cloud
Kubernetes clusters.
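A minimal sketch of creating and connecting to an AKS cluster with the Azure CLI, assuming a hypothetical resource group and cluster name and an authenticated session:

```shell
# Create a managed Kubernetes cluster with two worker nodes
az aks create \
  --resource-group MyResourceGroup \
  --name MyAksCluster \
  --node-count 2 \
  --generate-ssh-keys

# Fetch credentials so kubectl can talk to the cluster
az aks get-credentials \
  --resource-group MyResourceGroup \
  --name MyAksCluster

# Verify the nodes are up
kubectl get nodes
```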
An Azure virtual machine is an on-demand, scalable computer resource that is available in Azure.
Virtual machines are generally used to host applications when the customer requires more control
over the computing environment than what is offered by other compute resources.
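For illustration, a single Linux VM might be provisioned as follows; the resource group, VM name, and image alias are placeholder assumptions, and an authenticated Azure CLI session is required:

```shell
# Create an Ubuntu VM with SSH key authentication
az vm create \
  --resource-group MyResourceGroup \
  --name MyVm \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```

The command returns the VM's public IP address, which can then be used to connect over SSH.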
WEEK 10
Azure Overview
Session 2
Azure is a cloud computing platform and an online portal that allows you to access and manage cloud services
and resources provided by Microsoft. These services and resources include storing our data and transforming
it, depending on our requirements.
Important components of Microsoft Azure are Compute, Storage, Database, Monitoring & management
services, Content Delivery Network, Azure Networking, Web & Mobile services, etc.
What is Azure and how does it work?
Azure is a huge collection of servers and networking hardware, which runs a complex set of distributed
applications. These applications orchestrate the configuration and operation of virtualized hardware and
software on those servers. The orchestration of these servers is what makes Azure so powerful.
The front end hosts the services that handle customer requests, such as requests to allocate Azure resources
and services like virtual machines. First, the front end validates and verifies that the user is authorized to
allocate the requested resources. If so, the front end checks a database to locate a server rack with sufficient
capacity and then instructs the fabric controller to allocate the resource.
Azure terminology:
Resource: An entity that's managed by Azure. Examples include Azure Virtual Machines, virtual networks, and
storage accounts.
Azure Active Directory (Azure AD): The Microsoft cloud-based identity and access management service. Azure
AD lets your employees sign in and access resources.
Azure AD tenant: A dedicated and trusted instance of Azure AD. When your organization signs up for a
Microsoft cloud service subscription (for example, Microsoft Azure, Intune, or Microsoft 365), an Azure AD
tenant is automatically created. An Azure tenant represents a single organization.
Azure AD directory: Each Azure AD tenant has a single, dedicated, and trusted directory. The directory includes
the tenant's users, groups, and applications. Use the directory to manage identity and access management
functions for tenant resources.
Resource groups: Logical containers that you use to group related resources in a subscription. Each resource
can exist in only one resource group. Resource groups allow for more granular grouping within a subscription.
They're commonly used to represent a collection of assets that are required to support a workload,
application, or specific function within a subscription.
Management groups: Logical containers that you use for one or more subscriptions. You can define a hierarchy
of management groups, subscriptions, resource groups, and resources to efficiently manage access, policies,
and compliance through inheritance.
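The hierarchy described above can be sketched with the Azure CLI; the management group, subscription, resource group names, and location below are hypothetical, and the commands assume an authenticated session with sufficient permissions:

```shell
# Create a management group to sit above one or more subscriptions
az account management-group create \
  --name mg-production \
  --display-name "Production"

# Place an existing subscription under that management group
az account management-group subscription add \
  --name mg-production \
  --subscription "MySubscription"

# Within the subscription, create a resource group for one workload
az group create \
  --name rg-myapp \
  --location eastus
```

Policies and role assignments applied at the management-group level are then inherited by the subscription, its resource groups, and every resource inside them.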