AZ-900
Covers assessed skills: Identify the benefits of cloud computing, such as High Availability, Scalability, Elasticity, Agility, and Disaster Recovery
• Cloud computing is fundamentally about capacity, just like on-premises infrastructure, but with a huge amount of capacity available through cloud services
• That capacity is housed in data centers; the cloud's building blocks include physical data centers, clusters of servers, and racks of nodes that run workloads
Key Takeaways
• Cloud computing offers a huge amount of capacity, with a diverse range of services, and a cadence of innovation
• The cloud is multi-tenant, with a pay-per-use model, and accessible over the internet or through private connectivity
• The benefits of cloud computing include agility, high availability, disaster recovery, scalability, and elasticity
Covers assessed skills: Identify the differences between Capital Expenditure (CapEx) and Operational Expenditure (OpEx)
Identifying the differences between capital expenditure (CapEx) and operational expenditure (OpEx) is essential in understanding the consumption-based model.
CapEx involves purchasing an asset upfront, such as servers, storage, or networking equipment, which provides capacity for running services.
This model is typically used for on-premises infrastructure, with the asset depreciated over time.
On the other hand, OpEx is based on purchasing resources and services as needed, without any upfront costs. This consumption-based model is common in cloud
services, where users pay for what they use. OpEx offers better cost management, flexibility, and scalability compared to CapEx.
CapEx requires a significant upfront expenditure and a clear understanding of future needs, which can be challenging due to the rapid pace of innovation. OpEx,
however, allows businesses to pay for resources as they grow and become more successful. This model is particularly beneficial for startups, as it eliminates the
need for a huge initial investment.
Consumption-based models enable organizations to pay for the services they use, without any upfront costs or long-term commitments. This flexibility is crucial for
managing variability in workloads and resource requirements. The cloud's consumption-based nature opens up unique scenarios that would be challenging to
replicate on-premises.
In summary, CapEx and OpEx have distinct advantages and disadvantages. CapEx is suitable for on-premises infrastructure with predictable workloads, while OpEx
is ideal for cloud services and variable workloads. The consumption-based model offers better cost management, flexibility, and scalability, making it a popular
choice for modern businesses.
In this lesson, we will explore the differences between various categories of cloud services. We will focus on the shared responsibility model and how it applies to Infrastructure as a Service
(IaaS), Platform as a Service (PaaS), Serverless Computing, and Software as a Service (SaaS).
The shared responsibility model varies depending on the service being used. The customer is responsible for certain things, and the provider, such as Microsoft Azure, is responsible for others.
The best way to understand this is to think in layers: physical storage, networking, compute, and the hypervisor sit at the bottom, with the operating system, runtime,
application, and data above them. The cloud provider always owns the physical layers; how far up the stack its responsibility extends depends on the service model.
When it comes to IaaS, the customer is responsible for everything inside the operating system and above it. This includes picking the operating system, installing runtimes, and managing the
application and data. The cloud provider is responsible for the physical fabric, including storage, network, compute, and the hypervisor.
PaaS is different from IaaS in that the cloud provider is now responsible for the operating system, runtime, and middleware systems. The customer is only responsible for their application and
data. This shifts the line of responsibility up to the application layer.
Serverless Computing takes this a step further. The cloud provider is responsible for everything up through the runtime, including scaling the underlying compute. The customer is only responsible for their code and data.
This allows the customer to focus solely on their application and not have to worry about the underlying infrastructure.
SaaS is the most complete category of cloud service. The cloud provider is responsible for everything up to and including the application; the customer simply consumes the service, while remaining responsible for their own data, accounts, and access.
In summary, the customer's responsibility varies depending on the category of cloud service they are using. IaaS requires the customer to manage the operating system and above, while PaaS
and Serverless Computing shift the responsibility up to the application layer. SaaS shifts nearly all responsibility to the cloud provider. It is essential to understand the shared responsibility
model and how it applies to each category of cloud service to ensure a secure and efficient cloud environment.
In this lesson, we will focus on identifying the right service type based on a particular use case. When selecting a service, we want the business function delivered to us,
so software as a service (SaaS) is often the best choice. For example, if someone needs a messaging solution, such as email or collaboration, we would look for a SaaS
solution like Microsoft 365.
If we need to move on-premises infrastructure to the cloud, such as domain controllers or file servers, and we still require access to the operating system, we will need
to use a virtual machine (IaaS). For web services, such as Apache Tomcat or IIS on Windows, where we want to minimize responsibility, we can use Azure App Services.
For single container workloads, we can use Azure Container Instances. However, if we have a microservice-based architecture using containers and require auto-scale
capabilities, rich networking integration, and larger scale deployments, we should use Azure Kubernetes Service (AKS).
For core container-based workloads in a full Azure environment, AKS is typically the best choice. For serverless workloads that need to run anytime a certain file is
written to a storage account or a message is written to a queue, Azure Functions is the best option.
Finally, for workloads that require graphical design of a series of steps based on specific triggers, such as a tweet or file upload, Logic Apps is the best choice as it allows
for drag-and-drop components without the need for coding. Each service type fits specific workloads best, and it's essential to understand the needs of the use case to
make the right choice.
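To make the single-container (ACI) case above concrete, here is a minimal Bicep sketch of an Azure Container Instances deployment. It is illustrative only: the resource group, names, and the public sample image are assumptions, and the API version should be checked against current documentation.

// Minimal sketch (not an official sample): a single-container workload on Azure Container
// Instances, the kind of simple containerized case described above. Names are assumptions.
param location string = resourceGroup().location

resource demoContainer 'Microsoft.ContainerInstance/containerGroups@2023-05-01' = {
  name: 'demo-aci'
  location: location
  properties: {
    osType: 'Linux'
    restartPolicy: 'Always'
    containers: [
      {
        name: 'web'
        properties: {
          image: 'mcr.microsoft.com/azuredocs/aci-helloworld' // public sample image
          resources: {
            requests: {
              cpu: 1
              memoryInGB: 2
            }
          }
          ports: [
            {
              port: 80
            }
          ]
        }
      }
    ]
    ipAddress: {
      type: 'Public'
      ports: [
        {
          protocol: 'TCP'
          port: 80
        }
      ]
    }
  }
}

Deploying this with az deployment group create against a resource group gives a publicly reachable container with no VM or cluster to manage, which is why ACI suits single-container workloads.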
Covers assessed skills: Describe the differences between types of cloud computing
This lesson describes the differences between types of cloud computing, focusing on five skills: defining cloud computing; describing the public, private, and hybrid cloud models; and
comparing and contrasting the three types.
Cloud Principles
Key cloud principles include pooling resources, self-service, and on-demand resources. Pooling eliminates isolated islands of resources, creating one large pool of potential capacity. Self-
service allows users to provision resources as needed, with controls like quotas and policies. On-demand resources enable users to access what they need when they need it.
Public Cloud
Public clouds, like Microsoft Azure, offer the most complete set of cloud capabilities. They are true OpEx, with users paying for what they use, primarily accessed over the
internet. Public clouds offer limitless resources, many regions, and a wide range of services. They also provide strong governance policies and role-based access control.
Private Cloud
Private clouds exist on-premises, utilizing physical servers running hypervisors. A management infrastructure exposes cloud capabilities, enabling the creation of service
offerings for different business groups. Private clouds are CapEx, with companies buying servers, licenses, and other necessary components upfront. They offer full flexibility
within the capabilities of the management stack.
Hybrid Cloud
Hybrid cloud refers to using both private and public clouds, often seamlessly. Users might burst from private to public clouds during busy times or use global load balancers to
distribute workloads. Hybrid cloud allows for flexibility, with users not needing to worry about where services are running. Data and regulatory requirements may necessitate
private cloud use, even with a preference for public cloud features.
In summary, cloud computing includes public, private, and hybrid types, each with its own benefits and considerations. Understanding these differences is crucial for making
informed decisions about cloud computing strategies.
Covers assessed skill: Describe the Benefits of Reliability and Predictability in the Cloud
This lesson discusses the differences between reliability and predictability in cloud platforms. Reliability refers to the ability of a cloud platform to automatically respond to problems, such as node
and rack failures. Storage services automatically keep replicas of data, with three copies spread over different racks or availability zones. Reliability of services is also crucial, with features like auto-
scale. A service level agreement (SLA) is the financially backed commitment from Azure for a service, and the composite SLA for a solution can be higher or lower depending on how many services are combined.
Designing for failure is essential, taking advantage of the resilience features built into Azure at a data center level while also considering regional factors. This may involve multi-region deployment,
active-active over multiple regions, or active with a backup in another region. Monitoring is also essential, as Azure may be fine but an application may have problems. Azure Monitor and
Application Insights can be used to create alerts and action groups to automatically address problems.
Predictability is the other half of the story. Azure SKUs have defined, predictable performance characteristics, such as processor performance, memory, IOPS, and network throughput.
Interactions with the platform should be predictable too: deployments should use templates such as ARM JSON, Bicep, or Terraform files, and maintenance activities should be automated,
for example through DevOps pipelines, to ensure consistency. Eliminating manual, ad hoc intervention is what drives predictability in both the platform and application behavior.
Reliability and predictability are related concepts with different considerations. Reliability in Azure involves automatic healing in case of node or rack failure, and redundancy through replicas of data. For example, a VM can be redeployed to another
node or rack, and storage services like Azure Storage Account have at least three copies of data.
Reliability also includes the ability to respond to changes in load through features like auto-scale, ensuring capacity to meet load without wasting resources. Azure services have a Service Level Agreement (SLA), a financially-backed commitment for
reliability.
Designing for failure is also a key aspect of reliability, considering that something could still happen at a regional level. This involves designing services to be active-active, active-passive, or with a backup in another region, depending on the Recovery
Point Objective (RPO) and Recovery Time Objective (RTO).
Monitoring is also crucial, as Azure may be fine, but the application could have a problem. Azure Monitor and Application Insights can be used to create alerts and action groups to automatically respond to issues.
Predictability in Azure involves defined SKUs with specific performance characteristics, behavior, and pricing. Predictability also means ensuring consistent interactions through templates, automation, and DevOps practices.
In summary, reliability and predictability in Azure involve automatic healing, redundancy, scaling, SLA, designing for failure, monitoring, and predictable interactions through templates, automation, and DevOps practices.
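As a sketch of the "alerts and action groups" idea, the Bicep below defines an action group that emails an operator and a metric alert that fires when a hypothetical App Service returns too many HTTP 5xx responses. The scope, metric choice, threshold, email address, and API versions are assumptions for illustration, not a prescribed configuration.

// Hedged sketch: an action group plus a metric alert on an assumed, pre-existing web app.
param webAppResourceId string // resource ID of the app to monitor (assumed to exist)

resource opsActionGroup 'Microsoft.Insights/actionGroups@2023-01-01' = {
  name: 'ops-email'
  location: 'global'
  properties: {
    groupShortName: 'ops'
    enabled: true
    emailReceivers: [
      {
        name: 'on-call'
        emailAddress: 'oncall@example.com' // placeholder address
      }
    ]
  }
}

resource http5xxAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
  name: 'web-5xx-alert'
  location: 'global'
  properties: {
    severity: 2
    enabled: true
    scopes: [
      webAppResourceId
    ]
    evaluationFrequency: 'PT1M'
    windowSize: 'PT5M'
    criteria: {
      'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
      allOf: [
        {
          criterionType: 'StaticThresholdCriterion'
          name: 'Http5xx'
          metricName: 'Http5xx'
          operator: 'GreaterThan'
          threshold: 10
          timeAggregation: 'Total'
        }
      ]
    }
    actions: [
      {
        actionGroupId: opsActionGroup.id // notify the operators defined above
      }
    ]
  }
}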
In this lesson, we will explore the benefits and usage of regions and region pairs in Azure. Regions are groups of data
centers within a two-millisecond round-trip latency envelope. They are distributed around the world and include sovereign
environments such as US Government, Germany, and China, which have logical and physical isolation requirements.
Regions provide low latency and high performance for users, as well as regulatory compliance for data sovereignty. They
also offer resiliency in the event of natural disasters or regional-level failures. Azure services can be deployed to multiple
regions for performance, regulatory, and resiliency purposes.
Region pairs are used by Azure for built-in replication and disaster recovery. Microsoft has documented the regional
pairings, which are generally within the same geopolitical boundary and hundreds of miles apart. Azure uses these pairings
for prioritizing service restoration in the event of large-scale outages and for staggering updates to its fabric.
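A small illustration of how region choice and region pairs surface in practice: a geo-redundant (GRS) storage account is deployed to one region, and the platform replicates its data to that region's documented pair. The name, region, and API version below are assumptions.

// Minimal sketch: pick a region close to your users; Standard_GRS asynchronously
// replicates the data to that region's documented pair.
param location string = 'eastus2'

resource pairedStorage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'stregionpairdemo001' // storage account names must be globally unique
  location: location
  sku: {
    name: 'Standard_GRS' // geo-redundant: copies land in the paired region
  }
  kind: 'StorageV2'
}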
In this lesson, we will explore the benefits and usage of availability zones. Previously, I discussed the concept of regions, which are comprised of data centers located
within a certain distance based on a two-millisecond latency round-trip window. It is important to note that for a particular region, there are multiple data centers,
each requiring power, cooling, and networking. These data centers are protected by physical firewalls and air gaps to prevent disasters such as fires.
Availability zones are exposed because each data center (or group of data centers) has independent power, cooling, and networking. This setup allows for resiliency from a facility-level
failure. Every subscription will see three availability zones per region, if the region supports it. When deploying resources, users can select an availability zone to
physically separate their resources within the region.
It is important to note that the mapping between availability zone numbers and physical buildings is not consistent across subscriptions. The purpose of availability zones is to
provide resiliency from a building level problem. When deploying services, users should ensure that they deploy instances in multiple availability zones to protect
themselves from building or data center level problems.
Availability zones provide resiliency from data center level failures and are used as part of certain types of update rollouts. Depending on the service, users may have
a choice between zonal and zone-redundant deployments. Zone-redundant deployments span multiple availability zones automatically, while zonal
deployments place resources in a specific availability zone.
In summary, availability zones provide resiliency within a region from data center level problems. Users should ensure that they deploy resources in multiple
availability zones to protect themselves from building or data center level failures.
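The zonal versus zone-redundant distinction also shows up in templates. The hedged Bicep sketch below deploys one Standard public IP across all three zones (zone-redundant) and another pinned to zone 1 (zonal); the names and API version are assumptions.

// Hedged sketch of zonal vs zone-redundant using Standard public IP addresses.
param location string = resourceGroup().location

// Zone-redundant: the resource is spread across all three availability zones.
resource zoneRedundantIp 'Microsoft.Network/publicIPAddresses@2023-05-01' = {
  name: 'pip-zone-redundant'
  location: location
  sku: {
    name: 'Standard'
  }
  zones: [
    '1'
    '2'
    '3'
  ]
  properties: {
    publicIPAllocationMethod: 'Static'
  }
}

// Zonal: the resource is pinned to one specific availability zone.
resource zonalIp 'Microsoft.Network/publicIPAddresses@2023-05-01' = {
  name: 'pip-zonal'
  location: location
  sku: {
    name: 'Standard'
  }
  zones: [
    '1'
  ]
  properties: {
    publicIPAllocationMethod: 'Static'
  }
}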
Resource groups are a fundamental part of Azure's resource management. When you create a resource, such as a virtual machine, storage account, or app service, it must live
in one and only one resource group. A resource group has its own metadata, and while it lives in a specific region, the resources within it can be from multiple regions.
Resource groups are not nested, and you can't put a resource group inside another resource group. However, you can move resources between resource groups. When you
think about your resource group, you're creating resources inside it, and those resources may have certain relationships. A resource group is not a boundary of usage, and
resources in different resource groups can still interact with each other.
Resource groups are primarily used for organizational lifecycle management and access control. When you have multiple resources that operate together, such as a virtual
machine, disk, network interface, and public IP address, they share a life cycle. Putting them in the same resource group ensures they get created, run, and deleted together.
Role-based access control is another key purpose of using resource groups. You can define roles, such as owner or contributor, and grant them to users or groups at a
resource group level. This allows you to set consistent permissions across all resources within that resource group.
Additionally, you can apply policies at a resource group level, such as limiting resource creation to specific regions or enabling logging. You can also apply budgets to resource
groups, setting spend limits for the resources created within them.
Metadata is a significant part of resource groups, and tags are a crucial aspect of this metadata. Tags are key-value pairs that allow you to categorize resources and apply
budgets, policies, and access control. However, tags are not inherited by resources within a resource group by default.
In summary, resource groups are used to contain resources that get provisioned, run, and de-provisioned together. They offer capabilities around budget, role-based access
control, and policy, and their primary use cases include organizational lifecycle management, access control, and metadata tagging. How you use resource groups will vary
based on your organization's needs.
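As a minimal sketch of the lifecycle-and-metadata idea, the Bicep below creates a resource group at subscription scope with tags applied directly to it; the group name, region, and tag values are assumptions.

// Minimal sketch: resource groups are created at subscription scope, with tags as
// key-value metadata on the group itself. Values below are illustrative only.
targetScope = 'subscription'

resource appResourceGroup 'Microsoft.Resources/resourceGroups@2022-09-01' = {
  name: 'rg-payroll-prod'
  location: 'eastus2' // the group's metadata lives in this region
  tags: {
    environment: 'production'
    costCenter: 'finance'
    owner: 'payroll-team'
  }
}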
• A subscription is the base unit of an agreement between a customer and Microsoft, with a specific billing model (e.g. pay-as-you-go, enterprise agreement) and trust boundary for security and
permissions.
• Azure Active Directory (AAD) is separate from subscriptions and is where user, group, and device accounts are managed; each subscription trusts one AAD tenant.
• Roles, role assignments, budgets, and policies can be applied and inherited at the subscription level, and then further applied and inherited by resource groups and resources within the
subscription.
• Subscriptions have limits, some of which can be increased with a request, and these limits can influence the number of subscriptions needed for a company's resources and environments.
• Common reasons for having multiple subscriptions include separating environments (prod, test, etc.), different permissions and policies, billing purposes, and reaching resource limits.
• Tagging can be used for billing and resource management across subscriptions and resource groups.
• Resource groups, which contain resources, live within subscriptions and can be moved between subscriptions with certain limitations and tools.
• A subscription functions as a billing boundary with inherent security isolation, which can be extended with additional configuration; a minimal budget sketch at subscription scope follows below.
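A minimal sketch of the billing-boundary point above: a Cost Management budget deployed at subscription scope that sends a notification at 80% of a monthly amount. The amount, start date, email address, and API version are placeholder assumptions.

// Hedged sketch of a per-subscription spend limit using a Cost Management budget.
targetScope = 'subscription'

resource monthlyBudget 'Microsoft.Consumption/budgets@2021-10-01' = {
  name: 'monthly-subscription-budget'
  properties: {
    category: 'Cost'
    amount: 5000 // currency follows the billing account
    timeGrain: 'Monthly'
    timePeriod: {
      startDate: '2025-06-01' // placeholder; must be the first day of a month
    }
    notifications: {
      actual80Percent: {
        enabled: true
        operator: 'GreaterThan'
        threshold: 80
        contactEmails: [
          'finops@example.com' // placeholder address
        ]
      }
    }
  }
}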
• Management groups in Azure are used to manage multiple subscriptions and their associated resources.
• The benefits of using management groups include the ability to apply role-based access control, policy, and budgets at a higher level, rather than on a per-subscription basis.
• By default, every Azure AD tenant has a root management group, and you can manually add up to six levels of management groups (not including the root or subscriptions).
• The hierarchy of management groups can be organized based on factors such as business units, environments (e.g., dev, production), or geography.
• Role-based access control, policy, and budgets can be set at the management group level and will be inherited down to the subscriptions, resource groups, and resources within that group.
• More general policies and role-based access control can be set at higher levels in the hierarchy, while more specific settings can be applied closer to the resources.
• Inherited permissions, policies, and budgets can be viewed and managed at the subscription and resource group levels.
• Management groups are designed to simplify the management of large numbers of subscriptions and resources, so it's important to create them in a way that makes sense for your
organization and will bring the most benefit; a minimal sketch of a nested management group follows below.
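A hedged sketch of building the hierarchy: management groups are tenant-scope resources, and a child group references its parent so that role-based access control, policy, and budgets assigned higher up are inherited downward. Group names and the API version are assumptions, and a tenant-scope deployment requires elevated permissions.

// Hedged sketch: a parent management group and a nested child group at tenant scope.
targetScope = 'tenant'

resource corpMg 'Microsoft.Management/managementGroups@2021-04-01' = {
  name: 'corp'
  properties: {
    displayName: 'Corp'
  }
}

resource prodMg 'Microsoft.Management/managementGroups@2021-04-01' = {
  name: 'corp-production'
  properties: {
    displayName: 'Production'
    details: {
      parent: {
        id: corpMg.id // nest this group under Corp; assignments on Corp are inherited here
      }
    }
  }
}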
• The video covers the benefits and usage of Azure Resource Manager (ARM).
• ARM is the second-generation management layer for Azure, replacing the original Azure Service Manager (classic) model.
• ARM is built around the idea of resource providers, which define all the different types of resources
available in Azure.
• Examples of resource providers include support, features, cost management, commerce, and billing.
• The Microsoft.Compute namespace includes resources like virtual machines, extensions, locations,
VM sizes, run commands, disks, and VM images.
• Every single entity in Azure is its own resource, such as a virtual machine, network interface, public
IP, virtual network, disk, and network security group.
• The Azure Resource Manager (ARM) is the management layer and deployment layer for Azure, with
all interactions going through it.
• Features like policy and authorization are enforced at the ARM level.
• The portal is user-friendly but not scalable for creating resources in an efficient manner.
• ARM enables provisioning resources through JSON templates, which are declarative and describe
the desired state of resources.
• ARM templates can be exported for existing resources, and the templates can be used to create,
update, or delete resources.
• Bicep is a new, more human-friendly language for defining resources, which gets transpiled into
ARM JSON templates.
• ARM is the management and deployment construct for Azure, enforcing policy and authorization for
all interactions and tools; a minimal Bicep example of the declarative model follows below.
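To illustrate the declarative model, the Bicep below describes the desired state of a single Microsoft.Compute resource (an empty managed disk); the name, size, and API version are assumptions. Running az bicep build --file main.bicep transpiles it to ARM JSON, and az deployment group create submits it to Azure Resource Manager, where policy and authorization are enforced.

// Minimal sketch of the declarative model: one resource from the Microsoft.Compute
// provider, described as desired state rather than as imperative steps.
param location string = resourceGroup().location

resource dataDisk 'Microsoft.Compute/disks@2022-07-02' = {
  name: 'disk-demo-data'
  location: location
  sku: {
    name: 'Premium_LRS'
  }
  properties: {
    diskSizeGB: 128
    creationData: {
      createOption: 'Empty' // a blank data disk; ARM reconciles actual state to this
    }
  }
}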
• Azure is a platform that provides a vast amount of capacity, exposed across numerous regions worldwide, offering various resources such as virtual machines, AKS clusters,
managed database services, app services, and machine learning.
• Azure also provides governance and management features, including Azure Policy, role-based access control, tagging, and various Defender for solutions for enhanced
security.
• Azure Resource Manager serves as the control plane for interacting with Azure resources.
• Azure Arc extends the Azure control plane and governance capabilities to services outside of Azure, including on-premises and other cloud platforms like AWS and GCP.
• Arc-enabled servers allow for the extension of Azure capabilities to operating systems, including Windows, Linux, VMs, and bare metal servers, through an agent installed on the OS.
• Arc-enabled Kubernetes brings Azure capabilities to CNCF-conformant Kubernetes clusters, regardless of their location (on-premises or other clouds), enabling features like tagging, policy,
Defender for Kubernetes, and GitOps.
• Arc-enabled data services, app services, and machine learning services can be deployed on top of Arc-enabled Kubernetes, providing a consistent hybrid cloud experience.
• Azure Arc enables a consistent hybrid cloud by extending the Azure control plane to capacity, be it an OS or Kubernetes, wherever that capacity may reside, and bringing
Azure governance, security features, and services to that capacity.
Covers assessed skill: Describe the resources required for virtual machines
• The video discusses the resources required for a virtual machine (VM) in Azure.
• A VM is created within a subscription and a specific resource group, in a certain region.
• A VM requires an operating system, which can be hosted on a managed disk or an ephemeral
disk.
• An ephemeral OS disk is not a managed disk; it uses the host's local cache or temporary disk capacity instead.
• A VM may also have one or more optional data disks, which can have different caching options.
• A VM requires connectivity, which is provided through a virtual network and a virtual NIC.
• The virtual network and NIC are their own resources, and there are costs associated with egress
traffic, Azure to Azure communications, traffic over peered connections, and private endpoints.
• A VM may have public IP addresses, which are also their own resources and have associated
costs.
• Network security groups can be created to lock down communications and are also their own
resources.
• Best practice is to use resource groups to organize all the resources that make up a VM.
• Other services, such as extensions, a Log Analytics workspace, and Azure Bastion, may be used
with a VM; a Bicep sketch of a VM and its dependent resources follows below.
• VM SKUs - https://docs.microsoft.com/azure/virtual-machines/sizes
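Pulling the pieces above together, here is a hedged Bicep sketch of a VM plus the separate resources it depends on (virtual network with a subnet, NIC, public IP, NSG) in one resource group so they share a lifecycle. The names, image, size, API versions, and password handling are illustrative assumptions rather than a production pattern.

// Hedged sketch: a VM and its dependent resources, each its own Azure resource.
param location string = resourceGroup().location

@secure()
param adminPassword string // supplied at deployment time; not a production secret pattern

resource nsg 'Microsoft.Network/networkSecurityGroups@2023-05-01' = {
  name: 'nsg-demo-vm'
  location: location
}

resource vnet 'Microsoft.Network/virtualNetworks@2023-05-01' = {
  name: 'vnet-demo'
  location: location
  properties: {
    addressSpace: { addressPrefixes: [ '10.0.0.0/16' ] }
  }

  // Subnet as a child resource so the NIC below can reference its ID directly.
  resource defaultSubnet 'subnets' = {
    name: 'default'
    properties: {
      addressPrefix: '10.0.0.0/24'
      networkSecurityGroup: { id: nsg.id }
    }
  }
}

resource pip 'Microsoft.Network/publicIPAddresses@2023-05-01' = {
  name: 'pip-demo-vm'
  location: location
  sku: { name: 'Standard' }
  properties: { publicIPAllocationMethod: 'Static' }
}

resource nic 'Microsoft.Network/networkInterfaces@2023-05-01' = {
  name: 'nic-demo-vm'
  location: location
  properties: {
    ipConfigurations: [
      {
        name: 'ipconfig1'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          subnet: { id: vnet::defaultSubnet.id }
          publicIPAddress: { id: pip.id }
        }
      }
    ]
  }
}

resource vm 'Microsoft.Compute/virtualMachines@2023-03-01' = {
  name: 'vm-demo'
  location: location
  properties: {
    hardwareProfile: { vmSize: 'Standard_D2s_v5' }
    osProfile: {
      computerName: 'vm-demo'
      adminUsername: 'azureadmin'
      adminPassword: adminPassword
    }
    storageProfile: {
      imageReference: {
        publisher: 'Canonical'
        offer: '0001-com-ubuntu-server-jammy'
        sku: '22_04-lts-gen2'
        version: 'latest'
      }
      osDisk: {
        createOption: 'FromImage'
        managedDisk: { storageAccountType: 'Premium_LRS' } // managed OS disk
      }
    }
    networkProfile: {
      networkInterfaces: [
        {
          id: nic.id
        }
      ]
    }
  }
}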
Containers
• Virtualize the OS, providing isolated namespaces, resource controls, and networking
• Faster to create and start than VMs
• Can be used for microservices and serverless applications

Azure Container Instances (ACI)
• Provide a managed container service
• Run containers without managing the underlying infrastructure
• Specify a container image, size, and networking settings
• Pay only for the time the container is running
• Great for simple containerized applications

Azure Kubernetes Service (AKS)
• Provides a managed Kubernetes service for container orchestration
• Specify a deployment YAML file and AKS will create and manage the nodes
• Provides rich networking, storage, and identity integration
• Great for complex containerized applications

Azure App Service
• Provides a managed platform for web-based applications
• Supports multiple runtime stacks (e.g. .NET, Node.js, Python)
• Can run containers, but also provides a fully managed OS
• Great for web applications, APIs, and mobile apps

Serverless
• Provides a consumption-based pricing model, where you only pay for the work done
• Azure Functions and Logic Apps are serverless services
• Azure Functions are event-driven, code-based functions
• Logic Apps are visual, codeless workflows with built-in connectors

Azure Virtual Desktop
• Provides a managed desktop as a service offering
• Can provide a full desktop experience or publish individual applications
• Integrates with Azure Active Directory and virtual networks
• Great for remote desktop and application publishing scenarios

Summary
The video script explores the benefits and usage of core compute resources in Azure, including virtual machines, app services, container services, and Azure Virtual Desktop. It also highlights the differences between infrastructure as a service (IaaS) and platform as a service (PaaS), and the options for scaling and managing these resources.

Highlights
• 00:00:38 Virtual machines (VMs) are a building block in Azure, providing virtualized hardware for running applications.
• 00:02:22 VM scale sets allow for easy scaling and management of multiple VM instances.
• 00:05:12 Containers virtualize the operating system, enabling faster startup and efficient resource usage.
• 00:07:10 Azure Container Instances are ideal for running individual containers, while Azure Kubernetes Service provides advanced container orchestration.
• 00:08:57 App Services in Azure offer a managed environment for running web applications, APIs, and mobile apps.
• 00:11:20 Serverless options like Azure Functions and Logic Apps allow for pay-as-you-go execution of code without managing underlying infrastructure.
• 00:13:38 Azure Virtual Desktop provides a managed solution for delivering desktops and applications as a service.

Key Insights
• Virtual machines (VMs) in Azure provide a familiar infrastructure for running applications, with full access and control over the operating system. They are suitable for various workloads and can be easily scaled using VM scale sets.
• Containers offer a lightweight and efficient way to package and run applications, virtualizing the operating system rather than the hardware. Azure provides options like Azure Container Instances and Azure Kubernetes Service for running and orchestrating containers at scale.
• App Services in Azure are a managed environment for hosting web applications, APIs, and mobile apps. They offer built-in features like auto scaling, deployment slots, and integration with Azure services, making it easy to deploy and manage applications.
• Serverless options like Azure Functions and Logic Apps allow developers to focus on code without worrying about infrastructure management. Functions are event-driven and can be triggered by various events, while Logic Apps provide a visual designer for building workflows and integrations.
• Azure Virtual Desktop provides a complete managed solution for delivering desktops and applications as a service. It eliminates the need for managing underlying infrastructure and offers flexibility in terms of deployment options, including personal or pooled desktops.
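As a sketch of the App Service model summarized above, the Bicep below creates a Linux App Service plan (the managed compute) and a web app running a managed Node.js stack on it; the names, SKU, runtime version, and API version are assumptions.

// Hedged sketch: an App Service plan plus a web app on a managed runtime stack.
param location string = resourceGroup().location

resource plan 'Microsoft.Web/serverfarms@2023-01-01' = {
  name: 'plan-demo-web'
  location: location
  kind: 'linux'
  sku: {
    name: 'P1v3' // illustrative SKU; pick per workload
  }
  properties: {
    reserved: true // required for Linux plans
  }
}

resource webApp 'Microsoft.Web/sites@2023-01-01' = {
  name: 'app-demo-web-001' // must be globally unique (<name>.azurewebsites.net)
  location: location
  properties: {
    serverFarmId: plan.id
    siteConfig: {
      linuxFxVersion: 'NODE|20-lts' // managed runtime stack; containers are also supported
    }
  }
}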