Azure For Your Linux and Open Source Components
Migrate, develop, deploy and operate your applications on an
open, flexible, secure and trusted platform
For the latest information about open source on Azure, please see
https://azure.microsoft.com/en-us/overview/choose-azure-opensource/
For the latest information about Cloud-native applications on Azure, please see
https://azure.microsoft.com/en-us/overview/cloudnative/
Modernizing applications
Gartner predicts that every dollar invested in digital transformation and innovation through to the end of 2020 will
require organizations to spend at least three times that to continuously modernize the legacy application portfolio.
Modernization strategies for moving on-premises applications to the public cloud have been well theorized, notably
by Gartner, and widely implemented given the massive adoption of the cloud by organizations, even though the
“lift-and-shift” migration approach, which reproduces the application and its environment identically, remains the
easy way.
There is one journey, but each application can take a radically different path to get to the cloud. “Application
modernization is not one ‘thing,’” says Stefan van der Zijden, research director at Gartner. “If you’re faced with a
legacy challenge, the best approach depends on the problem you’re trying to solve”, the workload itself, and its
architecture.
At a very high level, applications consist of three layers. The first layer is the application code – functionality and
business logic. Then, there’s the data that the application consumes and generates – every application works with
data, and that data can come from many different sources. Finally, there’s the physical or virtualized infrastructure
the application runs on – servers or virtual machines, networking and so on.
When you are looking to modernize an application, you will need to look at all these layers individually.
According to Gartner, seven different modernization approaches can be considered depending on your goals
and on the problem to solve: “The key is to understand if your problem is caused by technology, architecture or
functionality of the application, and how each modernization approach improves those aspects […]”
1. Encapsulate. To leverage and extend an application’s features and value, encapsulate data and functions
in the application and make them available as services via an application programming interface (API).
Implementation specifics and knowledge are hidden behind the interface.
2. Rehost. Redeploy an application component to another physical, virtual or cloud infrastructure without
recompiling, altering the application code, or modifying features and functions.
3. Replatform. Migrate an application component to a new runtime platform. Make minimal changes to code
to adapt to the new platform, but don’t change the code structure or the features and functions it provides.
Note The Azure Database Migration Service reduces the complexity of your cloud migration by using a single
comprehensive service instead of multiple tools. This is a fully managed service designed to enable seamless migrations
from multiple database sources to Azure data platforms with minimal downtime (online migrations). This service allows you
to migrate on-premises databases such as MySQL, MongoDB, and PostgreSQL to an Azure managed database in the cloud
or to your own database running in an Azure VM.
The service uses the Data Migration Assistant (DMA) to generate assessment reports that provide recommendations to
guide you through the changes required prior to performing a migration.
4. Rearchitect. The application architecture is completely redesigned to take advantage of cloud
capabilities and the portfolio of provided services. For example, some parts of the application may be
redistributed as services to match business functions, others will disappear, replaced by cloud services,
and others will be reworked or decoupled for better availability or scalability. The new architecture will be
based on up-to-date models implementing recent technologies such as microservices.
The last three approaches are progressively more complex. Rehost (lift-and-shift) allows you to migrate more
quickly and with less effort. This constitutes a viable path to the cloud for many applications. Some cloud benefits
are quickly unlocked, and you can gradually take advantage of advanced cloud capabilities such as autoscaling or
improved resiliency by modernizing afterwards.
Modernizing an application involves some change to application design, but it does not necessarily require
wholesale changes to application code. Refactor consists of adapting major functionalities without
reconsidering the complete architecture of the application, while limiting the modernization effort. Rearchitect is
however more complex and more expensive, as it requires changes to the application source code and/or
architecture, but it is the approach that offers maximum benefits in the new, cloudified version, if we leave aside
Rebuild (see below). With modernization, the application takes advantage of IaaS and potentially PaaS capabilities
from a cloud service provider (CSP) while maintaining the existing code strategic to the application's use case.
Note For more information about the different modernization strategies, see article Contoso migration:
Overview/Migration strategies.
Rebuild leads to a complete rethink of the application to develop a new version almost “from scratch”. This is
outside the scope of this paper, since it aims at developing a brand-new application, one that will bring at least the
features of the previous version.
However, if you are looking to get the most from the cloud and tap into advanced capabilities like improved
resiliency, global scale or maximum agility, this route allows you to embrace new and more relevant standards, e.g.
those of the Cloud Native Computing Foundation (CNCF), to have applications built from the ground up and
optimized for cloud scale and performance.
Note Microsoft joined the CNCF as a Platinum member in 2017. CNCF is a part of the Linux Foundation, which
helps govern a wide range of cloud-oriented open source projects, such as Kubernetes, Prometheus, OpenTracing,
Fluentd, Linkerd, containerd, Helm, gRPC, and many others.
In other words, options range from migration (moving the application, its infrastructure and data as-is to the cloud)
through modernization (where an application is modified to better take advantage of the cloud) to re-building
where the application is recreated using a cloud-native approach.
In addition, and in this context, we cannot stress enough that the success of the digital transformation of
organizations of all sizes resides notably in their ability to make the most of their data.
You will have to cope with all of the above as part of your journey to Microsoft Azure.
Note Azure updates allows customers to stay up to date on what product features are planned and what’s coming
next. Customers can learn about important Azure product updates, roadmap and announcements, and subscribe to
notifications to stay informed.
Azure serves as a development, service hosting, and service management environment, providing customers with
on-demand IaaS and PaaS resources, and content delivery capabilities to host, scale, and manage applications on
the Internet.
Microsoft Azure has established itself in recent years as a leader in the cloud market. In fact, Azure is
considered by Gartner1 in 2019 to be a Leader in a number of Magic Quadrants (MQ) in the cloud: Cloud
Infrastructure as a Service, Disaster Recovery as a Service, Access Management, Public Cloud Storage
Services, etc.
As such, Microsoft Azure provides:
• An open platform to support your choices and preferences for your applications,
• A globally available infrastructure for your applications,
• Clearly stated cloud principles of trust for your applications and data.
Let’s consider the above in the next sections.
1
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select
only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research
organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this
research, including any warranties of merchantability or fitness for a particular purpose.
We are investing in and improving these new products, building new muscle
as a service provider, and embracing Linux and other open-source efforts, all
while keeping focus on our customers.
Microsoft had long held that the open-source software from Linux was the
enemy. We couldn’t afford to cling to that attitude any longer. We had to
meet the customers where they are and, more importantly, we needed to
ensure that we viewed our opportunity not through a rearview mirror, but
with a more future-oriented perspective.
Today, an increasing number of customers are choosing to build open source solutions on top of Azure.
You can seamlessly and easily develop and test your Linux and open source components in Azure. Microsoft Azure
enables every developer and organization to more easily adopt open source in the cloud, without having to be an
expert.
Note For more information about open source software on Azure, see page Open source on Azure.
You can bring the tools you love and skills you already have, and run virtually any application, using your data
source, with your OS. You can even install and run Microsoft SQL Server on Linux.
As such, you can use Microsoft Azure to deploy a variety of existing and new (business-critical) workloads and
benefit from rapid feature growth, resiliency, and the cost-effective operation of the hyperscale public cloud while
still obtaining the levels of isolation, security, compliance, and confidence required to handle your workloads.
In addition, you can complement what you’ve already built by using Azure, augment your open source workloads,
and add value to them with technologies and fully managed services that work well with each other.
For that purpose, you can tap into a growing ecosystem of solutions, including open source, available from the
Azure Marketplace that enable rapid deployment in the cloud.
Note Azure Marketplace is a service on Azure that helps connect you with offerings, virtual (network) appliances and
services, which are optimized to run on Azure. Azure Marketplace allows you to find, try, purchase, and provision applications
and services from hundreds of leading service providers, all certified to run on Azure.
For testing purposes, you can leverage Azure Test Drives. Azure Test Drives are ready-to-go environments that
allow you to experience a product or a technology for free, without needing an Azure subscription at all. One can,
for example, test drive Ansible Tower by Red Hat, which helps organizations scale IT automation and manage
complex deployments across physical, virtual, and cloud infrastructures.
An additional benefit with a Test Drive is that it is pre-provisioned - you don’t have to download, set up or configure
the product or the technology and can instead spend your time on evaluating the user experience, key features,
and benefits of the product or the technology.
As illustrated in the above Figure 2, Microsoft joined the Linux Foundation in 2016 as a Platinum member,
confirming its steadily increasing interest and engagement in open source development.
Microsoft is also a member of the Cloud Native Computing Foundation (CNCF) as already outlined, the Cloud
Foundry Foundation, the Apache Software Foundation, the .NET Foundation, and, more recently, the MariaDB
Foundation.
Judge us by the actions we have taken in the recent past, our actions today and in the
future.
Microsoft is working together with open source projects and vendors and is also a major contributor of code to
many open source projects. Microsoft has the most open source contributors on GitHub, and the second-most
open source projects.
For example, Microsoft is also a key contributor to the Kubernetes project. To make Kubernetes easier for
organizations to adopt - and easier for developers to use - Microsoft has tripled the number of employees who
participate in the open source project in just three years. Now the third-leading corporate contributor, Microsoft
works to make Kubernetes more enterprise-friendly and accessible by bringing the latest learnings and best
practices from working with diverse customers to the Kubernetes community. As an illustration, we are delivering a
simplified end-to-end experience for Kubernetes and adding new container capabilities with Docker and serverless
Kubernetes integration. (See section § “Scaling and orchestrating containers” below).
All these investments that pertain to Kubernetes are led by the following people:
• Brendan Burns, Kubernetes cofounder, Director of Engineering at Microsoft leading the Azure Container
Service and Azure Resource Manager teams, and now Microsoft Distinguished Engineer for containers and
DevOps.
Note For more information about the above announcements, see blog posts Announcing Distributed Application
Runtime (Dapr), an open source project to make it easier for every developer to build microservice applications and
Announcing the Open Application Model (OAM), an open standard for developing and operating applications on Kubernetes
and other platforms.
Plus, we're committed to sharing our cloud learnings with you and for your datacenters, thanks to Linux and open
source support in Azure Resource Manager (ARM) and Azure Stack.
Note For more information about open source trends in Microsoft Azure, see the Open Source Blog.
Due to the success of this partnership, it was extended to other products. Microsoft SQL Server on Red Hat Enterprise
Linux was for example formally added to the partnership in July 2017 and the Generally Available (GA) version made
available to the public in October 2017.
Note Data residency refers to the physical or geographic location of an organization's data or information. It defines
the legal or regulatory requirements imposed on data based on the country or region in which it resides. See the
eponymous section § “Data residency” below.
Note China regions are operated through a partner called 21Vianet. Microsoft Azure operated by 21Vianet (Azure
China 21Vianet) is a physically separated instance of cloud services located in mainland China, independently operated and
transacted by Shanghai Blue Cloud Technology Co., Ltd. ("21Vianet"), a wholly owned subsidiary of Beijing 21Vianet
Broadband Data Center Co., Ltd.
Microsoft has one of the world's three largest networks (bandwidth, latency, etc.) to ensure interconnection
between all regions. A region consists of a set of datacenters deployed within a latency-defined perimeter and
connected through a dedicated regional low-latency network.
To build, develop and lead this global network, we rely on three guiding principles:
1. Being as close as possible to our customers for optimal latency.
Note Latency is therefore a function of the distance (in the sense of the network path) between the client and the
data center. Microsoft uses innovative software to optimize network routing and build and deploy as direct network paths
as possible between customers and their data and services. This reduces latency to the limits imposed by the speed of light.
You can measure this latency between your current location and our data centers with the Azure Speed online tool.
2. Stay in control of capacity and resilience to ensure the network can survive multiple failures.
3. Proactively manage network-wide traffic through a software-defined network (SDN) approach.
Customer traffic enters our global network via strategically placed Microsoft Edge nodes, our points of presence.
These Edge nodes are directly interconnected to more than 2,500 unique Internet partners through thousands of
connections in more than 150 locations. Our rich interconnect strategy optimizes the paths taken by data traveling
on our global network. With all of that, you get a better network experience with less latency, packet loss and more
throughput. Direct interconnections give customers better quality of service over transit links because there are
fewer transitions, fewer intermediaries, and better network paths.
Note For more information, see Azure Stack Hub user documentation, and more specifically article Using Services
or Building Apps for Azure Stack Hub.
Azure Stack Hub is not dependent on connectivity to Azure to run deployed applications and enable operations via
local connectivity.
Azure Stack Hub can help deploy an application or a full solution differently depending on the country or region.
You can develop and deploy applications in Azure, with full flexibility to deploy on-premises with Azure Stack Hub
based on the need to meet data sovereignty or custom compliance requirements. You can leverage the Azure
Stack Hub architecture for data sovereignty, e.g., transmit data from an Azure Virtual Network (VNET, see section §
“Leveraging network virtualization capabilities” below) to an Azure Stack Hub VNET (or vice versa) over a private
connection, and thus make technology placement decisions based on business needs, simplifying the task of
meeting custom compliance, sovereignty, and data gravity requirements. You can use Azure Stack Hub to
accommodate even more restrictive requirements, such as the need to deploy solutions in a completely
disconnected environment managed by security-cleared, in-country personnel.
Azure Stack Edge is a cloud-managed, AI-enabled edge appliance that brings the compute power and intelligence
of Azure right to where you need it, whether that’s your corporate datacenter, your branch office, or your remote
field asset.
Azure Stack Edge runs containers to analyze, transform, and filter data at edge locations or datacenters. Alongside
the Azure IoT Edge container platform that is currently used to provision and manage containers, Azure Stack Edge
will also soon support VMs and Kubernetes clusters, so that you have a single platform to run most of your edge
compute workloads, be they net-new container-based applications or existing virtual machine (VM) based
applications. This capability is part of the recently announced Azure Arc.
For customers who want to simplify complex and distributed environments across on-premises, edge and
multicloud, Azure Arc, currently in preview, enables deployment of Azure services anywhere and extends
Azure management to any infrastructure.
People will not use technology they do not trust. And they cannot trust technology they
do not understand.
We are guided by a set of “Trusted Cloud Principles” that articulate our vision of what enterprise
organizations are entitled to expect from their cloud service provider (CSP).
Such a vision and its day-to-day translation in our investments, practices and operations, etc. allow our Azure
services to deliver enterprise-grade security at every layer, helping ensure your data is safe. We operate them with
high ethical standards that provide transparency on how we design our solutions and protect your data. We
also collaborate with global security experts and proactively invest in technology, policy and regulations that
enhance the public security ecosystem.
The Microsoft Trust Center lists the four underlying foundational principles that guide the way Microsoft Azure is
built and operated for a Trusted Cloud:
1. Security,
2. Privacy,
3. Transparency,
4. Compliance.
Note For more information, see whitepaper Trusted Cloud: Microsoft Azure Security, Privacy, Compliance,
Reliability/Resiliency, and Intellectual Property.
Note You can review the Azure ISO/IEC 27001 certificate, assessment report, and statement of applicability on the
Service Trust Portal. For more information, see article ISO/IEC 27001:2013 Information Security Management Standards on
the Microsoft Trust Center.
Note The Service Trust Portal provides independent, third-party audit reports and other related documentation. You
can use the portal to download and review this documentation for assistance with your own regulatory requirements. For
more information on how to use the Service Trust Portal, see article Get started with the Microsoft Service Trust Portal on
the Microsoft Trust Center.
• The ISO/IEC 27017:2015 “Information technology - Security techniques - Code of practice for information
security controls based on ISO/IEC 27002 for cloud services”.
Note You can review the Azure ISO/IEC 27017 certificate, assessment report, and statement of applicability on the
Service Trust Portal. For more information, see article ISO/IEC 27017:2015 Code of Practice for Information Security Controls
on the Microsoft Trust Center.
The Microsoft Security Policy for Microsoft Azure is written according to this international standard, and
provides additional controls to address cloud-specific information security threats and risks referring to
clauses 5 to 18 in ISO/IEC 27002:2013 “Information technology — Security techniques — Code of practice
for information security controls” for controls, implementation guidance, and other information. Specifically,
this standard provides guidance on 37 controls in ISO/IEC 27002, and it also features 7 new controls that
are not duplicated in ISO/IEC 27002. These new controls address the following important areas:
o Shared roles and responsibilities within a cloud computing environment.
o Removal and return of cloud service customer assets upon contract termination.
o Protection and separation of a customer’s virtual environment from that of other customers.
o Virtual machine (VM) hardening requirements to meet business needs.
o Procedures for administrative operations of a cloud computing environment.
Note You can review the Azure standard response for request for information, CAIQ, and responses to the CSA CAIQ
v3.0.1 on the Service Trust Portal. For more information, see article Cloud Security Alliance (CSA) STAR Self-Assessment on
the Microsoft Trust Center.
Microsoft establishes and institutionalizes contact with selected groups and associations within the security
community to facilitate ongoing security education and training for organizational personnel.
Microsoft Azure partners with the Microsoft Trustworthy Computing (TwC) Group to maintain contact with external
parties such as regulatory bodies, service providers, and industry forums to ensure appropriate action can be quickly
taken and advice obtained whenever necessary.
Privacy
You must be able to trust that the privacy of your data will be protected and that it will be used only in ways that
are consistent with your expectations. The Microsoft Privacy Statement describes the specific privacy policy and
practices that pertain to customer data in Microsoft Azure. Microsoft was also the first major CSP to adopt the first
international code of practice for cloud privacy, ISO/IEC 27018:2014 “Information technology -- Security techniques
-- Code of practice for protection of personally identifiable information (PII) in public clouds acting as PII
processors”.
Note You can review the Azure ISO/IEC 27018 certificate, assessment report, and statement of applicability on the
Service Trust Portal. For more information, see article ISO/IEC 27018 Code of Practice for Protecting Personal Data in
the Cloud on the Microsoft Trust Center.
Furthermore, on May 25, 2018, a European privacy law, the General Data Protection Regulation (GDPR), took
effect. The GDPR imposes new rules on companies, government agencies, non-profits, and other organizations that
offer goods and services to people in the European Union (EU), or that collect and analyze data tied to EU residents.
The GDPR applies no matter where you are located.
Microsoft believes the GDPR represents an important step forward for individual privacy rights. It gives EU residents
more control over their “personal data” (which is precisely defined by the GDPR). The goals of the GDPR are
consistent with Microsoft’s long-standing commitment to principles we are discussing here.
You can find Microsoft’s contractual commitments with regard to the GDPR in the Online Services Terms (OST). The
GDPR Terms commit Microsoft to the requirements on processors in GDPR Article 28 and other Articles of GDPR.
Note The GDPR Terms are in Attachment 4 to the Online Services Terms, at the end of the document.
The GDPR accountability documentation provides information about the capabilities of Microsoft Azure you can
use to address specific requirements of the GDPR and to support your GDPR accountability for Data Subject Requests.
Note For more information, see article The General Data Protection Regulation (GDPR) on the Microsoft Trust
Center.
Transparency
As a hyperscale cloud, Microsoft Azure provides a global infrastructure, as already introduced. Most Azure services
enable you to specify the region where your data will be stored. This has key implications for data residency
and data sovereignty, as well as for the fundamental principles guiding Microsoft’s handling of worldwide law
enforcement requests for customer data, including CLOUD Act provisions.
Microsoft defines customer data as all data, including text, sound, video, or image files and software that customers
provide to Microsoft to manage on customer’s behalf through customer’s use of Microsoft Azure.
Data residency
Microsoft may replicate customer data to other regions for data resiliency, but Microsoft will not replicate or move customer data
outside the Geo. Customers and their end users may move, copy, or access their customer data from any location
globally.
Microsoft provides strong customer commitments regarding cloud services data residency and transfer policies:
• Data storage for regional services. Most Azure services are deployed regionally and enable the customer
to specify the region into which the service will be deployed, e.g., Europe. Microsoft will not store customer
data outside the customer-specified Geo except for Azure Databricks (managed Spark), Cloud Services,
Cognitive Services, and Preview services. This commitment helps ensure that customer data stored in a
given region will remain in the corresponding Geo and will not be moved to another Geo for the majority
of regional services, including virtual machines (VMs), storage, etc.
• Data storage for non-regional services. Certain Azure services do not enable the customer to specify the
region where the services will be deployed.
Resources:
• Microsoft Azure - Where is my customer data? for details about how Microsoft treats customer data.
• Services by region for a complete list of non-regional services.
Data sovereignty
Data sovereignty implies data residency. However, it also introduces rules and requirements that define who has
control over and access to customer data stored in the cloud. In many cases, data sovereignty mandates that
customer data be subject to the laws and legal jurisdiction of the country in which data resides. These laws can
have direct implications on data access even for service troubleshooting or customer-initiated support requests.
You can use Azure public multi-tenant cloud in combination with Azure Stack or other solutions for on-
premises and edge solutions to meet your data sovereignty requirements. These additional products can
be deployed to put you solely in control of your data, including storage, processing, transmission, and
remote access. This is fully aligned with our so-called ”Intelligent Cloud, Intelligent Edge” strategy.
Government requests for customer data follow a strict procedure. Microsoft takes strong measures to help protect
customer data from inappropriate access or use by unauthorized persons. This includes restricting access by
Microsoft personnel and subcontractors and carefully defining requirements for responding to government
requests for customer data. Microsoft ensures that there are no back-door channels and no direct or unfettered
government access to customer data. Microsoft imposes special requirements for government and law
enforcement requests for customer data.
As stated in the Online Services Terms (OST), Microsoft will not disclose customer data to law enforcement unless
required by law. If law enforcement contacts Microsoft with a demand for customer data, Microsoft will attempt to
redirect the law enforcement agency to request that data directly from the customer. If compelled to disclose
customer data to law enforcement, Microsoft will promptly notify the customer and provide a copy of the demand
unless legally prohibited from doing so.
Government requests for customer data must comply with applicable laws. A subpoena or its local equivalent is
required to request non-content data and a warrant, court order, or its local equivalent is required for content data.
Every year, Microsoft rejects a number of law enforcement requests for customer data. Challenges to government
requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that
it is unable to disclose the requested information and explains the reason for rejecting the request. Where
appropriate, Microsoft challenges requests in court.
To verify that Microsoft meets the standards it sets for itself, the Law Enforcement Requests Report that Microsoft
publishes twice a year provides extensive information and statistics about how Microsoft has responded to law
enforcement requests, US national security orders, and content removal requests.
The Clarifying Lawful Overseas Use of Data Act or CLOUD Act (H.R. 4943) is a United States law that was enacted in
March 2018. You should refer to the following blog for more information, as well as the follow-up posting that
describes Microsoft’s call for principle-based international agreements governing law enforcement access to data.
Key points of interest to customers procuring Azure services are captured below.
• The CLOUD Act enables governments to negotiate new government-to-government agreements that will
result in greater transparency and certainty for how information is disclosed to law enforcement agencies
across international borders.
• The CLOUD Act is not a mechanism for greater government surveillance; it is a mechanism toward ensuring
that customer data is ultimately protected by the laws of each customer’s home country while continuing
to facilitate lawful access to evidence for legitimate criminal investigations. Law enforcement in the U.S. still
needs to obtain a warrant demonstrating probable cause of a crime from an independent court before
seeking the contents of communications. The CLOUD Act requires similar protections for other countries
seeking bilateral agreements.
• While the CLOUD Act creates new rights under new international agreements, it also preserves the common
law right of cloud service providers to go to court to challenge search warrants when there is a conflict of
laws – even without these new treaties in place.
• Microsoft retains the legal right to object to a law enforcement order in the United States where the order
clearly conflicts with the laws of the country where customer data is hosted. Microsoft will continue to
carefully evaluate every law enforcement request and exercise its rights to protect customers where
appropriate.
Microsoft does not disclose additional data as a result of the CLOUD Act. This law does not practically change any
of the legal and privacy protections that previously applied to law enforcement requests for data – and those
protections continue to apply. Microsoft adheres to the same principles and customer commitments related to
government demands for user data.
Azure offers an unmatched variety of public, private, and hybrid cloud deployment models to address each
customer’s concerns regarding the control of their data. Customers worldwide expect to be fully in control of
protecting their data in the cloud. Azure enables customers to protect their data through its entire lifecycle whether
in transit, at rest, or in use (see section § “Data security” below).
Compliance
Microsoft includes in its Online Services Terms (OST) a specific section § “Compliance with Laws”:
“Microsoft will comply with all laws and regulations applicable to its provision of the Online Services, including security
breach notification law. However, Microsoft is not responsible for compliance with any laws or regulations applicable
to Customer or Customer’s industry that are not generally applicable to information technology service providers.
Microsoft does not determine whether Customer Data includes information subject to any specific law or regulation.
All Security Incidents are subject to the Security Incident Notification terms below.
Customer must comply with all laws and regulations applicable to its use of Online Services, including laws related to
privacy, Personal Data, biometric data, data protection and confidentiality of communications. Customer is responsible
for determining whether the Online Services are appropriate for storage and processing of information subject to any
specific law or regulation and for using the Online Services in a manner consistent with Customer’s legal and regulatory
obligations. Customer is responsible for responding to any request from a third party regarding Customer’s use of an
Online Service, such as a request to take down content under the U.S. Digital Millennium Copyright Act or other
applicable laws.”
As a hyperscale CSP, Microsoft must be able to comply with many regulatory and industry obligations, as Microsoft
Azure is adopted by many industries around the world. Microsoft’s compliance programs, as well as its ability to
share third-party reviews of its capabilities, are key to meeting this challenge. Microsoft is committed to respecting
and accommodating regional regulatory standards.
To address the needs of customers across regulated markets worldwide, Azure maintains a comprehensive
compliance portfolio based on formal third-party certifications and other types of assurance documents to help
you meet your own compliance obligations. As of this writing, this portfolio includes 90+ compliance offerings
spanning globally applicable certifications, US Government specific programs, industry assurances, and regional /
country specific offerings to help you.
Microsoft offers the most comprehensive set of certifications and attestations of any CSP for a wide range of
international, industry and local standards, regulations, legislation and policies. When deploying applications to
Azure that are subject to regulatory compliance obligations, you seek assurances that all cloud services comprising
the solution be included in cloud service provider’s audit scope. Azure offers industry leading depth of compliance
coverage judged by the number of cloud services in audit scope for each Azure certification.
A new International Data Corporation (IDC) white paper based on original research by IDC (and sponsored by
Microsoft) on Azure customers who are using Azure as a platform to meet regulatory compliance needs stresses
that:
Study participants reported use of Azure as a compliance platform helped them carry out
their day-to-day compliance responsibilities more effectively. Azure helped them better
manage spikes in the workload, enabled faster access to (and analysis of) data during
audits, and reduced exposure to risk based on the strong internal controls of Azure.
Note Read more about the IDC findings by visiting the article.
You can build and deploy realistic applications and benefit from extensive compliance coverage provided by Azure
independent third-party audits. Azure compliance and certification resources are intended to help customers
address their own compliance obligations with various regulations.
Note Azure Stack also provides compliance documentation to help you integrate Azure Stack into solutions that
address regulated workloads. As an illustration, the Azure Stack - Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM)
v. 3.0.1 Assessment Report includes Azure Stack control mapping to CCM domains and controls.
Compliance Manager is an online tool available from the Microsoft Service Trust Portal that you can use to get
insight into how Microsoft implements controls specified in leading standards such as ISO/IEC 27001:2013, ISO/IEC
27018:2014, NIST SP 800-53 Rev 4, and others. For example, Compliance Manager provides insight into Microsoft
control implementation and test details for controls that are part of Microsoft responsibility. Moreover, you can use
this interactive tool to track progress for control implementations that you own.
Azure Compliance and Security Blueprints are a set of reference architectures with industry-specific overviews and
guidance, deployment automation support, security control mappings, and customer responsibility matrices to
assist you with deploying applications to Azure that meet established compliance standards. Customer
responsibility matrices outline which controls are part of the customer’s responsibility (see section § “Understanding
the shared responsibilities’ model for your applications” below).
To offer such an industry leading depth of compliance coverage, Microsoft has implemented a common controls
framework, which maps and aligns control domains and activities across services, requirements, and operations for
each audit, certification, and accreditation.
This mechanism provides a backbone of 1,000+ controls that is regularly maintained and updated with new
controls when new services or standards are incorporated into Microsoft's continuous cloud compliance program.
Resources:
• Overview of Microsoft Azure compliance.
• 2-minute video to introduce key Compliance Manager features.
• Azure Compliance and Security Blueprints.
While security compliance shares many activities with managing security risk, the measure of success for
compliance is quite different as illustrated hereafter.
Security compliance is focused on satisfying regulators and auditors and is typically focused on meeting a very
specific set of standards that don’t change frequently. While many controls in security standards are relevant to
current threats at any point, the standards may also include many requirements that have no effect on current
threats and techniques.
In contrast, managing security risk requires mitigating actual and anticipated risks to a specific organization. This is
frequently a very dynamic endeavor, as there may be frequent changes in the adversaries of concern and the
techniques they use.
Note For a full list of compute services available with Azure and the context on when to use them, see page Compute.
Virtual machines (VMs) are software emulations of physical computers. They include a virtual processor, memory,
storage, and networking resources. They host an operating system (OS), and you are able to install and run software
just like on a physical computer. The use and control of a VM is typically done through a remote connection client,
such as an SSH client for Linux-based VMs.
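As a concrete illustration, the short sketch below uses the Azure SDK for Python (assuming the azure-identity and azure-mgmt-compute packages; the subscription ID and region are placeholders) to enumerate the VM sizes offered in a region, a useful first step before picking a size from the families summarized below.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder subscription ID; DefaultAzureCredential picks up ambient
# credentials (environment variables, managed identity, Azure CLI, ...).
compute_client = ComputeManagementClient(DefaultAzureCredential(),
                                         "<subscription-id>")

# Enumerate the VM sizes offered in a given region before deploying.
for size in compute_client.virtual_machine_sizes.list(location="westeurope"):
    print(size.name, size.number_of_cores, size.memory_in_mb)
```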
Virtual machine (VM) support in Azure comprises the following services and capabilities:
• Azure Virtual Machines.
• Azure Virtual Machine Scale Set.
Azure offers several families of VM sizes, optimized for different workload profiles:
• General purpose (B, Dsv3, Dv3, Dasv3, Dav3, DSv2, Dv2, Av2, DC): Balanced CPU-to-memory ratio. Ideal for
testing and development, small to medium databases, and low to medium traffic web servers.
• Compute optimized (Fsv2): High CPU-to-memory ratio. Good for medium traffic web servers, network
appliances, batch processes, and application servers.
• Memory optimized (Esv3, Ev3, Easv3, Eav3, Mv2, M, DSv2, Dv2): High memory-to-CPU ratio. Great for
relational database servers, medium to large caches, and in-memory analytics.
• Storage optimized (Lsv2): High disk throughput and IO. Ideal for Big Data, SQL, and NoSQL databases, data
warehousing and large transactional databases.
• GPU (NC, NCv2, NCv3, ND, NDv2 (Preview), NV, NVv3): Specialized virtual machines targeted at heavy graphic
rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available
with single or multiple GPUs.
• High performance compute (HB, HC, H): Our fastest and most powerful CPU virtual machines, with optional
high-throughput network interfaces (RDMA).
Note Support for generation 2 virtual machines (VMs) is now available in preview in Azure. Generation 2 VMs support
key features that aren't supported on generation 1 VMs. These features include increased memory, Intel Software
Guard Extensions (Intel SGX), and virtualized persistent memory (vPMEM). Generation 2 VMs use the new UEFI-based boot
architecture rather than the BIOS-based architecture used by generation 1 VMs. Compared to generation 1 VMs, generation
2 VMs might have improved boot and installation times.
Resources:
• Sizes for Linux virtual machines in Azure for information about available sizes and options for the various VM types.
• Virtual Machines Pricing for information about pricing of the various sizes.
• Products available by region for availability of VM sizes in Azure regions.
• Azure subscription and service limits, quotas, and constraints to see general limits on Azure VMs.
• Azure compute units (ACU) to compare compute performance across Azure SKUs.
• Support for generation 2 VMs (preview) on Azure.
Shared Image Gallery provides an Azure-based solution to make the management of custom VM managed images
easier in Azure (a managed image is a copy of either a full VM, including any attached data disks, or just the OS
disk, depending on how the image has been created). As such, Shared Image Gallery provides a simple way to
share applications with others in your organization, within or across regions, enabling you to expedite regional
expansion.
Azure Batch
If you need to run large-scale batch or high-performance computing (HPC) applications on Azure, you can use Azure
Batch.
Azure Batch creates and manages a collection of thousands of VMs, installs the applications you want to run, and
schedules jobs on the VMs. You don’t need to deploy and manage individual VMs or server clusters; Batch
schedules, manages, and auto-scales your jobs so that you use only the VMs you need.
Batch is well suited to running parallel workloads at scale.
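As a minimal sketch, assuming the azure-batch Python package and a pre-existing Batch account (the account name, key, URL, and the image/agent SKU values are placeholders that must match what your account offers), creating a small Ubuntu pool and scheduling one task could look like this:

```python
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

# Placeholder account name, key, and URL.
credentials = SharedKeyCredentials("<account-name>", "<account-key>")
client = BatchServiceClient(
    credentials, batch_url="https://<account>.<region>.batch.azure.com")

# A small pool of two Ubuntu VMs.
client.pool.add(batchmodels.PoolAddParameter(
    id="demo-pool",
    vm_size="STANDARD_D2_V3",
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="canonical", offer="ubuntuserver", sku="18.04-lts"),
        node_agent_sku_id="batch.node.ubuntu 18.04"),
    target_dedicated_nodes=2))

# A job bound to the pool, and one task scheduled onto it.
client.job.add(batchmodels.JobAddParameter(
    id="demo-job",
    pool_info=batchmodels.PoolInformation(pool_id="demo-pool")))
client.task.add("demo-job", batchmodels.TaskAddParameter(
    id="task-1", command_line="/bin/bash -c 'echo hello from Batch'"))
```

Batch then takes care of scheduling the task onto an available node, and the pool can be grown or shrunk on demand.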
Resource:
• What is Azure Batch?.
Note For a full list of networking services available with Azure, and context on when to use them, see page
Networking.
VNET peering enables you to seamlessly connect VNETs. Once peered, the VNETs appear as one for connectivity
purposes. The traffic between VMs in the peered VNETs is routed through the Microsoft backbone infrastructure,
much like traffic is routed between VMs in the same VNET, through private IP addresses only.
Azure supports:
• VNet peering. Connecting VNets within the same Azure region.
• Global VNet peering. Connecting VNets across Azure regions.
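As a sketch of what this looks like with the track 2 azure-mgmt-network Python SDK (the resource group, VNET names, and IDs below are hypothetical), note that a peering link must be created in each direction for traffic to flow:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(),
                                         "<subscription-id>")

# Resource ID of the remote VNET (placeholder).
vnet_b_id = ("/subscriptions/<subscription-id>/resourceGroups/my-rg"
             "/providers/Microsoft.Network/virtualNetworks/vnet-b")

# Create the peering from vnet-a to vnet-b; a symmetric peering from
# vnet-b back to vnet-a is required as well.
poller = network_client.virtual_network_peerings.begin_create_or_update(
    "my-rg", "vnet-a", "vnet-a-to-vnet-b",
    {
        "remote_virtual_network": {"id": vnet_b_id},
        "allow_virtual_network_access": True,
    })
print(poller.result().peering_state)
```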
Resource:
• What is Azure Virtual Network?.
• Virtual network peering.
Azure DNS
Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure
infrastructure. As such, Azure DNS uses a global network of name servers to provide fast responses to DNS queries.
Microsoft uses Anycast networking, so DNS queries automatically route to the closest name servers to give you the
best possible performance.
In addition, by hosting your domains in Azure, you can manage their DNS records by using the same credentials,
APIs, tools, and billing as your other Azure services. Azure DNS seamlessly integrates Azure-based services with
corresponding DNS updates and streamlines your end-to-end deployment process.
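For instance, a minimal sketch with the azure-mgmt-dns Python package (the zone is assumed to already exist, and the names are placeholders) creates or updates an A record as follows:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient

dns_client = DnsManagementClient(DefaultAzureCredential(),
                                 "<subscription-id>")

# Create (or update) the A record "www" in an existing zone,
# with a 300-second TTL.
record_set = dns_client.record_sets.create_or_update(
    "my-rg", "contoso.example", "www", "A",
    {
        "ttl": 300,
        "arecords": [{"ipv4_address": "203.0.113.10"}],
    })
print(record_set.fqdn)
```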
Resource:
• What is Azure DNS?.
Note For more information about connecting an on-premises network to Azure, see article Choose a solution for
connecting an on-premises network to Azure.
ExpressRoute is a streamlined solution for establishing a secure private connection facilitated by a connectivity
provider between customer infrastructure and Azure datacenters.
Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-
connection through a connectivity provider at a co-location facility. ExpressRoute connections do not go over the
public Internet. This allows ExpressRoute connections to offer more reliability, faster speeds, consistent latencies,
and higher security than typical connections over the Internet.
The termination point of ExpressRoute private peering can affect firewall capacity, scalability, reliability, and network
traffic visibility:
Note Managed disks provide better reliability for availability sets (see section § “What can Azure do for high-
availability?” below) by ensuring that the disks of VMs in an availability set are sufficiently isolated from each other to avoid
single points of failure. It does this by automatically placing the disks in different storage fault domains (storage clusters)
and aligning them with the VM fault domain. If a storage fault domain fails due to hardware or software failure, only the VM
instance with disks on the storage fault domain fails.
Typical scenarios for using disk storage are when you want to “lift-and-shift” applications that read and
write data to persistent disks, or when you are storing data that is not required to be accessed from outside
the VM to which the disk is attached.
Disks come in many different sizes and performance levels, from solid-state drives (SSDs) to traditional spinning
hard disk drives (HDDs), with varying performance abilities.
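As a hypothetical sketch using the track 2 azure-mgmt-compute Python SDK (resource names and region are placeholders), creating an empty 128 GiB Premium SSD managed disk looks like this:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute_client = ComputeManagementClient(DefaultAzureCredential(),
                                         "<subscription-id>")

# An empty 128 GiB Premium SSD; once created it can be attached to a VM.
poller = compute_client.disks.begin_create_or_update(
    "my-rg", "data-disk-1",
    {
        "location": "westeurope",
        "disk_size_gb": 128,
        "sku": {"name": "Premium_LRS"},
        "creation_data": {"create_option": "Empty"},
    })
disk = poller.result()
print(disk.provisioning_state)
```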
Resources:
• What disk types are available in Azure?.
• Introduction to Azure managed disks.
Azure Files
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message
Block (SMB) protocol. Azure file shares can be mounted concurrently by cloud or on-premises deployments of Linux.
Applications running in Azure Virtual Machines (see the eponymous section above) or cloud services in Azure can
mount a file storage share to access file data, just as a desktop application would mount a typical SMB share. Any
number of VMs or roles in Azure can mount and access the file storage share simultaneously. Typical usage
scenarios include sharing files anywhere in the world, diagnostic data, and application data.
Resources:
• What is Azure Files?.
• Use Azure Files with Linux.
Leveraging containerization
“Containerization” is one of those technology buzzwords flying around in the news. But containers are more than
just buzz - they’re actually very useful for running your applications.
Containers are a virtualization environment. However, unlike VMs, they do not include an operating system (OS).
Instead, they reference the OS of the host environment that runs the container. Containers are meant to be
lightweight and are designed to be created in a few seconds, scaled out, and stopped dynamically. This allows you
to respond to changes on demand and quickly restart in case of a crash or hardware interruption.
Containers also offer tremendous portability, which makes them ideal for developing an application locally on your
machine and then hosting it in the cloud, in test, and later in production.
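To make this tangible, the hypothetical sketch below uses the azure-mgmt-containerinstance Python SDK to run a public sample image on Azure Container Instances (ACI, discussed later in this section) without managing any VM; the resource group, group name, and region are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ResourceRequests, ResourceRequirements)

aci_client = ContainerInstanceManagementClient(DefaultAzureCredential(),
                                               "<subscription-id>")

# One container with 1 vCPU and 1.5 GiB of memory.
container = Container(
    name="hello",
    image="mcr.microsoft.com/azuredocs/aci-helloworld",
    resources=ResourceRequirements(
        requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)))

# The container group starts in seconds; no VM or cluster to manage.
group = ContainerGroup(location="westeurope", os_type="Linux",
                       containers=[container])
aci_client.container_groups.begin_create_or_update(
    "my-rg", "hello-group", group).result()
```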
Note For more information on Kubernetes support in Azure, see page Kubernetes.
Note For additional information, see the article Integrating Azure CNI and Calico: A technical deep dive.
Note Already available on Azure, AKS is expected later this calendar year 2019 on Azure Stack. As of this writing, you
can install Kubernetes using ARM templates (see section § “Are you saying “Infrastructure as Code”?” below) generated by
the Azure Container Service (ACS)-Engine on Azure Stack, but any certified Kubernetes distribution will work. (Kubeadm for
example is a simple tool to deploy a Kubernetes cluster and is supported by Kubernetes). For more information, see article
Azure Kubernetes Service (AKS) on Azure Stack.
Resource:
• Introduction to Azure Kubernetes Service.
Kubernetes applications’ connection to Azure Services with Open Service Broker API
You need an easy way to connect your containers to Azure services. Microsoft open sourced the Open Service Broker
for Azure (OSBA) built using the Open Service Broker API.
Note The Open Service Broker API is an industry-wide effort to meet that demand, simply and securely. The Open
Service Broker API provides a standard way for service providers to expose backing services to applications running in cloud
native platforms like Kubernetes.
In a multi-cloud, multi-platform world, developers want a standard way to connect their applications to the wealth
of services available in the marketplace.
Developing a Kubernetes application can be challenging. You need Docker and Kubernetes configuration files. You
need to figure out how to test your application locally and interact with other dependent services. You might need
to handle developing and testing on multiple services at once and with a team of developers.
Azure Dev Spaces provides a rapid, iterative Kubernetes development experience for teams in AKS clusters. You can
collaborate with your team in a shared AKS cluster. Azure Dev Spaces also allows you to test all the components of
your application in AKS without replicating or mocking up dependencies. You can iteratively run and debug
containers directly in AKS with minimal development machine setup.
Azure Dev Spaces helps teams to focus on the development and rapid iteration of their microservice application by
allowing teams to work directly with their entire microservices architecture or application running in AKS. Azure Dev
Spaces also provides a way to independently update portions of your microservices architecture in isolation without
affecting the rest of the AKS cluster or other developers. Azure Dev Spaces is for development and testing in lower-
level development and testing environments and is not intended to run on production AKS clusters.
Azure Dev Spaces provides tooling to generate Docker and Kubernetes assets for your projects. This tooling allows
you to easily add new and existing applications to both a dev space and other AKS clusters.
Resource:
• What are Azure Dev Spaces?.
• How Azure Dev Spaces works and is configured.
Back in 2017, Microsoft released the Azure Container Instances (ACI) Connector for Kubernetes, an experimental
open-source project to extend Kubernetes with ACI, a serverless container runtime that provides per-second billing
and no VM management, see section § “Leveraging containerization” above.
A new version of the above Kubernetes connector, i.e. the Virtual Kubelet, is available and can be used by customers
to target ACI or any equivalent runtime.
The Virtual Kubelet open source project from the Cloud Native Computing Foundation (CNCF) is an open source
Kubernetes kubelet implementation that masquerades as a kubelet for the purposes of connecting Kubernetes to
other APIs, such as serverless container runtimes like ACI.
One of the critical problems solved by containers is the hermetic packaging of a binary into a package that is easy
to share and deploy around the world. But a Cloud-native application is more than a binary, and this is what led to
the co-development, with HashiCorp and others, of the Cloud Native Application Bundle (CNAB) specification to
reduce the complexity of running multiple services packaged together.
Note Cloud-native applications are based on a microservices architecture, use managed services, and take
advantage of continuous delivery to achieve reliability and faster time to market. They offer you greater agility, resilience,
and portability across clouds. Cloud-native is a way of approaching the development and deployment of applications in a
way that takes full account of the characteristics and nature of the cloud – resulting in applications and workflows that unlock
all cloud benefits.
CNAB can greatly simplify container-style deployments for distributed applications.
“Today if you’re using just container-based applications maybe you’re building Helm artifacts or for Azure you’re
targeting an ARM artifact or something like Terraform,” says Gabe Monroy. “The problem comes when the app
you’re building is a mix of these things, so it’s got, say, Terraform and containers and functions, because we’re
starting to see that diversity in different runtimes and cloud APIs emerge today. How do you wrap your hands
around that and turn it into something you can manage like a simple application? Can we offer that familiar
experience around repeatability, immutability and cryptographic assurances that the workload hasn’t been modified
in a world that’s containers plus… or that doesn’t even include containers at all?”
CNABs allow you to package images alongside configuration tools like Terraform and other artifacts, allowing an
application to be deployed seamlessly from a single package (see section § “Are you saying “Infrastructure as Code”?”
below).
Resource:
• Cloud Native application bundles (CNAB).
Note For more information about Consul, see the official What is Consul? documentation.
HashiCorp Consul can be fully installed into a Kubernetes cluster on AKS. Furthermore, as of this writing, you can
take advantage of HashiCorp Consul Services on Azure powered by the Azure Managed Applications platform. With
this managed offering, you can focus on the value of Consul while being confident that the experts at HashiCorp
are taking care of the management of the service, de facto reducing complexity for you and enabling you to focus
on cloud native innovation.
Istio is an open-source service mesh that provides a key set of functionalities across the microservices in a
Kubernetes cluster. These features include traffic management, service identity and security, policy enforcement,
and observability.
Note For more information about Istio, see article What is Istio?.
Istio can be fully installed into a Kubernetes cluster on AKS. One should also note the availability of an Application
Insights adapter for Istio Mixer on the project's GitHub.
Resources:
• HashiCorp announcement regarding the HashiCorp Consul Service offering on Azure.
• Install and use Consul Connect in Azure Kubernetes Service (AKS).
• Install and use Istio in Azure Kubernetes Service (AKS).
In-memory caching
Every modern application works with data. When you retrieve data from a data store like a database, this typically
involves scanning multiple tables or documents in some distant server, weaving the results together, and then
sending the result to the requestor.
To eliminate some of these “roundtrips,” you can cache data that doesn’t change often. This way, instead of querying
the database every time, you could retrieve some of the data from a cache, like Azure Cache for Redis, a fully
managed, open source compatible in-memory data store to power fast, scalable applications.
As its name indicates, Azure Cache for Redis is based on the popular software Redis and is backed by industry-
leading SLAs (see section § “What can Azure do for high-availability?” below).
It is typically used as a cache to improve the performance and scalability of systems that rely heavily on backend
data-stores. The benefit of the cache is that it stores data in a simple format, such as key-value. You don’t need to
run a complex query to get this data. Instead, you just need to know the key to retrieve the value.
Performance is improved by temporarily copying frequently accessed data to fast storage located close to the
application. With Azure Cache for Redis, this fast storage is in-memory, instead of data being loaded from disk by
a database.
Azure Cache for Redis can also be used as an in-memory data structure store, a distributed non-relational database,
and a message broker. Application performance is improved by taking advantage of the low-latency, high-
throughput performance of the Redis engine. Azure Cache for Redis has advanced options like clustering and geo-
replication.
Azure Cache for Redis provides you access to a secure, dedicated Redis cache that is managed by Microsoft, hosted within Azure, and accessible to any application within or outside of Azure. In other words, Azure provides cache-as-a-service with Redis.
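To make the cache-aside pattern concrete, here is a minimal sketch using the open source redis-py client. The host name, access key, and the load_product_from_database helper are placeholders for illustration only; real values appear on your cache's Access keys blade in the Azure portal.

import redis

# Host name and access key are placeholders; Azure Cache for Redis exposes TLS on port 6380.
cache = redis.StrictRedis(
    host="contoso.redis.cache.windows.net",
    port=6380,
    password="<primary-access-key>",
    ssl=True,
)

def load_product_from_database(product_id: str) -> bytes:
    # Hypothetical stand-in for a query against the backend data store.
    return f"product-{product_id}".encode()

def get_product(product_id: str) -> bytes:
    # Cache-aside: return the cached value when present, otherwise load and cache it.
    key = f"product:{product_id}"
    value = cache.get(key)
    if value is None:
        value = load_product_from_database(product_id)
        cache.setex(key, 3600, value)  # keep the copy for one hour
    return value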
Resource:
• Azure Cache for Redis description.
Message bus
Modern, globally distributed applications often must deal with large amounts of incoming messages, so they need to be designed with decoupling and scaling in mind. As such, a message bus offers a platform for services and applications to exchange messages. This enables asynchronous consumption of services and reduces their mutual dependence, hence bringing decoupling.
Note There are numerous debates on when sending a message is more appropriate than directly consuming a REST API; it really depends on the type of application architecture you want to implement. Even though a message bus offers a buffer in-between two services, some REST implementations may offer something similar through specific implementations.
Note A message is raw data produced by a service to be consumed or stored elsewhere. The message contains the
data that triggered the message pipeline. A message is in binary format, which can contain JSON, XML, or just text. The
publisher of the message has an expectation about how the consumer handles the message. A contract exists between the
two sides. For example, the publisher sends a message with the raw data, and expects the consumer to create a file from
that data and send a response when the work is done.
Note For a full list of serverless services available with Azure, see page Azure serverless.
Azure Functions
When you are concerned only about the code running your service and not the underlying platform or infrastructure,
Azure Functions are ideal.
They're commonly used when you need to perform work in response to an event (often via a REST request), timer,
or message from another Azure service, and when that work can be completed quickly, within seconds or less.
Azure Functions scale automatically, and charges accrue only when a function is triggered, so they're a solid choice
when demand is variable. They can scale out to accommodate these busier times.
Furthermore, Azure Functions are stateless; they behave as if they're restarted every time they respond to an event.
This is ideal for processing incoming data. And if state is required, they can be connected to an Azure storage
service.
Durable Functions is an extension of Azure Functions that lets customers write stateful functions in a serverless
compute environment. The extension lets them define stateful workflows by writing orchestrator functions and
stateful entities by writing entity functions using the Azure Functions programming model. Behind the scenes, the
extension manages state, checkpoints, and restarts for you, allowing you to focus on your business logic.
With Azure Functions, it’s possible to pay only for functions that run, rather than having to keep compute instances
running all month. This is also called serverless because it only requires you to create your application - you don’t
have to deal with any servers or even scaling of servers.
The Functions runtime is open-source and available on GitHub.
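As a minimal sketch of the programming model, the following HTTP-triggered function follows the documented Azure Functions Python signature; the greeting logic is illustrative only, and the function would be deployed alongside its function.json binding configuration (not shown).

import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Runs only when an HTTP request (the trigger event) arrives; billed per execution.
    logging.info("Python HTTP trigger function processed a request.")
    name = req.params.get("name")
    if not name:
        return func.HttpResponse("Pass a ?name= query string parameter.", status_code=400)
    return func.HttpResponse(f"Hello, {name}!")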
Resources:
• An introduction to Azure Functions.
• What are Durable Functions?.
Azure Event Grid is a fully managed, intelligent event routing service that uses a publish-subscribe model for uniform
event consumption. Azure Event Grid hooks into almost every service in Azure as well as into custom publishers and
subscribers, and automatically pushes messages to subscribers, making it a real-time, reactive event service.
In addition to its default event schema, Azure Event Grid natively supports events in the CloudEvents JSON schema.
CloudEvents is an open specification for describing event data. As such, CloudEvents simplifies interoperability by providing a common event schema for publishing and consuming cloud-based events.
Ingesting/streaming data
Data streaming consists of ingesting data from various sources in real time, with the objective of making it available
to various consumers for further processing or storage. When streaming data, one usually refers to very large quantities of data, produced at a high rate and in some sort of steady and continuous way.
This usually shapes big data architecture. A big data architecture is designed to handle the ingestion/streaming,
processing, and analysis of data that is too large or complex for traditional database systems.
Note The threshold at which organizations enter into the big data realm differs, depending on the capabilities of the
users and their tools. For some, it can mean hundreds of gigabytes of data, while for others it means hundreds of terabytes.
As tools for working with big data sets advance, so does the meaning of big data. More and more, this term relates to the
value customers can extract from their data sets through advanced analytics, rather than strictly the size of the data, although
in these cases they tend to be quite large.
A typical big data pipeline has four stages, i.e. ingest/stream, process, store, and analyze/report. To perform real-time data ingestion/streaming, you are provided on Azure with a number of technical options; amongst these technical possibilities, this paper focuses on Apache Kafka.
Apache Kafka is featured in the Azure IoT Reference Architecture Guide. This guide aims to accelerate customers
building IoT solutions on Azure by providing a (Cloud-native) proven production ready architecture, with links to
Solution Accelerator reference architecture implementations such as Remote Monitoring or Connected Factory.
Note As such, technical content covers topics such as microservices, containers, orchestrators (e.g. Kubernetes), serverless usage, etc. with proven technology implementation recommendations per subsystem and options in terms of technology. Each of these options provides different levels of control, customizability/extensibility, and simplicity. These attributes have different levels of importance for different customers; e.g. some customers need a solution that is highly customized while others might be able to use what is “in the box” and nothing more.
Consequently, primary options customers choose from range from a Microsoft SaaS offering, i.e. Azure IoT Central, abstracting all technical choices, to the use of open source components (e.g. Apache Kafka, Apache Spark, etc.) to bootstrap their system and host it on IaaS VMs/VM scale sets, containers and/or run it on top of fully managed service(s), e.g. Azure HDInsight or Azure Event Hubs.
Apache Kafka
Apache Kafka is an open source distributed streaming platform that can be used to build real-time streaming data
pipelines and applications. Microsoft Azure offers a number of options to benefit from Apache Kafka starting from
running Apache Kafka on top of Azure IaaS capabilities (see section § “Leveraging compute virtualization
capabilities” above).
Beyond these core capabilities, you can leverage fully managed PaaS services in Azure, most notably Azure HDInsight and Azure Event Hubs, as depicted hereafter.
Azure HDInsight is a platform within Azure that you can use to run open source data analytics services. You can use
it to run specialized clusters of your favorite open source data analytics tools and frameworks, such as Apache Kafka,
Apache Hadoop, Apache Spark, Apache Hive (and Live Long And Process (LLAP)), Apache Storm, etc.
Note To see available Hadoop technology stack components on HDInsight, see Components and versions available
with HDInsight.
The advantage of running these tools and frameworks in Azure HDInsight is that they’re managed, which
means you don’t have to maintain VMs or patch operating systems. Plus, they can scale and easily connect
to one another, other Azure services, and on-premises data sources and services.
Some of Azure HDInsight's benefits include:
• It is a fully managed service that provides a simplified configuration process. The result is a configuration
that is tested and supported by Microsoft.
• Microsoft provides a 99.9% Service Level Agreement (SLA) on Kafka uptime.
• HDInsight allows you to change the number of worker nodes (which host the Kafka brokers) after cluster creation. Scaling can be performed from the Azure portal, Azure PowerShell, and other Azure management interfaces. For Kafka, you should rebalance partition replicas after scaling operations; rebalancing partitions allows Kafka to take advantage of the new number of worker nodes (see the sketch after this list).
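As a hedged illustration of administering Kafka on HDInsight, the sketch below creates a topic with kafka-python's admin client. The broker host names and topic name are placeholders (on HDInsight, broker host names can be retrieved from Ambari); a replication factor of 3 lets HDInsight spread replicas across fault and update domains.

from kafka.admin import KafkaAdminClient, NewTopic

# Broker host names are placeholders for the HDInsight Kafka worker nodes.
admin = KafkaAdminClient(bootstrap_servers="wn0-kafka:9092,wn1-kafka:9092")

# Eight partitions spread load across worker nodes; replication factor 3
# keeps replicas distributed across fault/update domains.
admin.create_topics([NewTopic(name="clickstream", num_partitions=8, replication_factor=3)])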
In addition, Azure HDInsight integrates seamlessly with other Azure services, including Azure Data Factory (see
section § “Building data pipelines for data movement, transformation, and analytics” below) and Azure Data Lake
Storage (see section § “Using long term storage for your data” below) for building comprehensive analytics pipelines.
Lastly, with the Enterprise Security Package (ESP), you can enable role-based access control (RBAC) by integrating
HDInsight clusters with your own Azure AD Domain Services. Azure AD Domain Services (Azure AD DS) is a feature
of Azure AD (see section § “Identity and access management” below) that provides managed domain services, such
as domain join, group policy, LDAP, and Kerberos / NTLM authentication that is fully compatible with Windows
Server Active Directory; Azure AD DS replicates identity information from Azure AD.
Resources:
• What is Azure HDInsight?.
• What is Apache Kafka in Azure HDInsight?.
Azure Event Hubs is a fully managed and massively scalable distributed streaming platform designed for a plethora of use cases and for ingesting millions of event messages per second. (As an internal illustration, Azure Event Hubs is used by the Xbox One Halo team and powers both the Microsoft Teams and Microsoft Office client application telemetry pipelines.) The captured event data can be processed by multiple consumers in parallel.
Azure Event Hubs has been immensely popular with Azure’s largest customers and even more so with the release
of Event Hubs for Apache Kafka.
Along with the native support of the Advanced Message Queuing Protocol (AMQP) 1.0, Azure Event Hubs also
provides a binary compatibility layer that allows existing applications, including Apache Kafka MirrorMaker, using
Apache Kafka protocol 1.0 and later to process events using Azure Event Hubs with no application changes.
With this powerful new capability, you can stream events from Kafka applications seamlessly into Azure Event Hubs without having to run ZooKeeper or manage Kafka clusters, all while benefitting from a fully managed PaaS service with features like auto-inflate and geo-disaster recovery.
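To illustrate the protocol compatibility, the following sketch points a standard kafka-python producer at an Event Hubs namespace instead of a Kafka cluster. The namespace, connection string, and event hub name ("telemetry", which plays the role of a Kafka topic) are placeholders; the literal user name $ConnectionString is how the Kafka endpoint expects SASL/PLAIN authentication.

from kafka import KafkaProducer

# Namespace and connection string are placeholders; port 9093 is the Kafka endpoint.
producer = KafkaProducer(
    bootstrap_servers="contoso.servicebus.windows.net:9093",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="$ConnectionString",
    sasl_plain_password="Endpoint=sb://contoso.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...",
)

# The topic name maps to an event hub within the namespace.
producer.send("telemetry", b'{"deviceId": "sensor-1", "temperature": 21.5}')
producer.flush()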
Resources:
• What is Azure Event Hubs?.
• Data streaming with Event Hubs using the Kafka protocol.
• Use Azure Event Hubs from Apache Kafka applications.
• Integrate Apache Kafka Connect support on Azure Event Hubs (Preview).
• Connect your Apache Spark application with Kafka-enabled Azure Event Hubs.
• Use Kafka MirrorMaker with Event Hubs for Apache Kafka.
Note Azure IoT Hub allows devices to use the following protocols for device-side communications: Message
Queuing Telemetry Transport (MQTT), MQTT over WebSockets, Advanced Message Queuing Protocol (AMQP) 1.0, AMQP
over WebSockets, and HTTPS.
Azure Cosmos DB represents a new kind of fully managed database service made for the cloud. Its key features
include:
• A 99.99 percent SLA (99.999% for read operations) that includes low latencies (less than 10 milliseconds on
reads and less than 15 milliseconds on writes).
• Turnkey global distribution and transparent multi-master geo-replication, which replicates data to other geographical regions in real time.
• Tunable data consistency levels so you can enable a truly globally distributed data system. You can choose
from a spectrum of data consistency models, including strong consistency, session consistency, and
eventual consistency.
• Traffic Manager, which sends users to the service endpoint to which they are closest.
• Limitless global scale, so you pay only for the throughput and storage that you need.
• Automatic indexing of data, which removes the need to maintain or tune the database.
In addition to all these features, Azure Cosmos DB offers different APIs with which you can store and retrieve data. Aside from SQL, it implements wire protocols for common Not only SQL (NoSQL) APIs:
• MongoDB API compatible with version 3.2 of the MongoDB's wire protocol.
Features or query operators added in version 3.4 of the wire protocol are currently available as a preview
feature. Any MongoDB client driver that understands these protocol versions should be able to natively
connect to Cosmos DB.
• Apache Cassandra API, allowing the use of Cassandra client drivers compliant with the Cassandra Query Language (CQL) v4, and Cassandra-based tools.
• Gremlin API based on the Apache TinkerPop graph database standard, and uses the Gremlin query
language.
• Etcd API, allowing to scale Kubernetes state management on a fully managed cloud-native PaaS service.
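As an example of this wire-protocol compatibility, a stock MongoDB driver such as pymongo can talk to Azure Cosmos DB unchanged. The account name and key below are placeholders; the connection string follows the pattern shown on the account's Connection String blade (TLS on port 10255, replica set globaldb).

from pymongo import MongoClient

# Account name and key are placeholders copied from the Azure portal.
client = MongoClient(
    "mongodb://contoso:<account-key>@contoso.documents.azure.com:10255/"
    "?ssl=true&replicaSet=globaldb"
)
db = client["appdb"]
db.devices.insert_one({"deviceId": "sensor-1", "status": "online"})
print(db.devices.find_one({"deviceId": "sensor-1"}))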
Note Azure Cosmos DB was named a Leader in The Forrester Wave™: Big Data NoSQL, Q1 2019 report. Azure Cosmos DB received the highest scores in the Strategy and Roadmap criteria and among the highest scores in the Ability to Execute, Install Base, Market Awareness, Partnerships, Reach, Professional Services and Technical Support criteria.
The Forrester report notably stresses that “Microsoft starts to get strong traction with Azure Cosmos DB […] Customer
references like its resilience, low maintenance, cost effectiveness, high scalability, multi-model support, and faster time-to-
value. They use Cosmos DB for operational apps, real-time analytics, streaming analytics, and internet-of-things (IoT)
analytics.”
Resources:
• What is Azure Cosmos DB?.
• Azure Cosmos DB's API for MongoDB.
• Global data distribution with Azure Cosmos DB - overview.
Azure SQL Database is a general-purpose relational database, provided as a fully managed service.
With it, you can create a highly available and high-performance data storage layer for your applications and
solutions in Azure. SQL Database can be the right choice for a variety of modern applications because it enables
you to process both relational data and non-relational structures, such as graphs, JSON, spatial, and XML.
As such, it provides the broadest SQL Server engine compatibility and up to a 212% return on investment.
SQL Database offers several service tiers that are geared toward specific scenarios.
• General purpose/standard. This tier offers budget-oriented, balanced, and scalable compute and storage
options. This tier is the best option for most business workloads.
• Business Critical/Premium. This tier offers the highest resilience to failures using several isolated replicas.
With consistently high IO, it includes a built-in availability group for high availability. This is the best option
for your critical Online Transactional Processing (OLTP) (normal CRUD operations) business applications
with consistently high IO requirements.
• Hyperscale. This tier offers very large database (VLDB) support without the headaches. With a built-for-
the-cloud architecture of highly scalable storage and a multilayer cache optimized for very large and
demanding workloads, it provides low latency and high throughput regardless of the size of data
operations. This is the best tier for your very large and demanding workloads with highly scalable storage
and read-scale requirements.
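Since SQL Database also processes non-relational structures such as JSON, a short sketch may help. The following assumes the open source pyodbc driver and a hypothetical Orders table with an OrderInfo JSON column; the server, database, and credentials are placeholders, and the ODBC Driver 17 for SQL Server must be installed locally.

import pyodbc

# Server, database, and credentials are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=contoso.database.windows.net;DATABASE=appdb;"
    "UID=appuser;PWD=<password>;Encrypt=yes;"
)
cursor = conn.cursor()

# JSON_VALUE extracts a scalar from the JSON stored in a regular column,
# illustrating relational and non-relational data side by side.
cursor.execute("SELECT OrderId, JSON_VALUE(OrderInfo, '$.customer.name') FROM Orders")
for row in cursor.fetchall():
    print(row)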
Resource:
• What is the Azure SQL Database service?.
For your low-latency scenarios, Azure provides community-based Azure Database for MySQL, Azure Database for
PostgreSQL, and Azure Database for MariaDB databases as Enterprise-ready, managed databases, which means that
you just spin them up and don't have to worry about any of the underlying infrastructure. Just like Azure Cosmos DB and Azure SQL Database above, these databases are universally available, scalable, highly secure, and fully managed (DBaaS).
Note Forrester has named Microsoft as a Leader in The Forrester Wave™: Database-as-a-Service, Q2 2019. This decision is based on their evaluation of Azure relational and non-relational databases. According to the Forrester report, customers “like Microsoft’s automation, ease of provisioning, high availability, security, and technical support.” We believe Microsoft’s position as a Leader is further underscored by its standing in the recent Forrester Wave™: Big Data NoSQL, Q1 2019 report.
Each of these databases is suited for slightly different use cases, but in general their functionality overlaps a lot. You
would typically use Azure databases for MySQL, PostgreSQL, and MariaDB when you’ve already been using one of
their on-premises community versions and want the advantage of having it run fully managed in the cloud.
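Because these services speak the community wire protocols, existing drivers work unchanged. A minimal sketch for Azure Database for PostgreSQL with the open source psycopg2 driver follows; the host, database, and credentials are placeholders, and the user@servername login format plus enforced TLS reflect the service's documented defaults at the time of writing.

import psycopg2

# Host and credentials are placeholders; TLS is required by default.
conn = psycopg2.connect(
    host="contoso.postgres.database.azure.com",
    dbname="appdb",
    user="appuser@contoso",   # user@servername format
    password="<password>",
    sslmode="require",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())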
Resources:
• What is Azure Database for MySQL?.
• What is Azure Database for PostgreSQL?.
• What is Azure Database for MariaDB?.
Note The serving layer deals with processed data from both the hot path and cold path. In the well-known lambda
architecture, the serving layer is subdivided into a speed serving layer, which stores data that has been processed
incrementally, and a batch serving layer, which contains the batch-processed output. The serving layer requires strong
support for random reads with low latency. Data storage for the speed layer should also support random writes, because
batch loading data into this store would introduce undesired delays. On the other hand, data storage for the batch layer
does not need to support random writes, but batch writes instead.
There is no single best data management choice for all data storage tasks. Different data management solutions
are optimized for different tasks. Most real-world cloud apps and big data processes have a variety of data storage
requirements and often use a combination of data storage solutions. Unsurprisingly, you are provided on Azure
with a number of technical options to prepare data for analysis and then serve the processed data in a structured
format that can be queried using analytical tools. However, most if not all of them rely, or will rely, on Azure Data Lake Storage Gen 2.
Important note Blob Storage APIs are disabled to prevent feature operability issues that could arise because Blob
Storage APIs aren't yet interoperable with Azure Data Lake Gen2 API. With the public preview of multi-protocol access on
Data Lake Storage, blob APIs and Data Lake Storage Gen2 APIs can operate on the same data. For more information, see
article Multi-protocol access on Azure Data Lake Storage (preview).
Note With multi-protocol access on Data Lake Storage, you can work with all of your data by using the entire
ecosystem of tools, applications, and services. This includes Azure services such as Azure HDInsight, Azure Event Hubs, Azure
IoT Hub, Azure Data Factory, Azure Stream Analytics, Power BI, and many others. For a complete list, see article Integrate
Azure Data Lake Storage with Azure services.
Note Azure Archive Storage provides an extremely cost-effective alternative to on-premises storage for cold data as highlighted in the Forrester Total Economic Impact (TEI) study, a study commissioned by Microsoft to evaluate the value customers achieved by moving both on-premises data and existing data in the cloud to Archive Storage. Customers can significantly reduce operational and hardware expenses to realize an ROI of up to 112 percent over three years by moving their data to the Archive tier.
Resources:
• Azure Blob storage: hot, cool, and archive access tiers.
• Rehydrate blob data from the archive tier.
Note This feature set is available to accounts that have a hierarchical namespace only if you enroll in the public preview of multi-protocol access on Data Lake Storage (see section § “Azure Data Lake Storage” above). To review limitations, see article Known issues with Azure Data Lake Storage Gen2.
You can use the policy to transition your data to the appropriate access tiers or to expire data at the end of its lifecycle.
The lifecycle management policy lets you:
• Transition blobs to a cooler storage tier (hot to cool, hot to archive, or cool to archive) to optimize for
performance and cost.
• Delete blobs at the end of their lifecycles.
• Define rules to be run once per day at the storage account level.
• Apply rules to containers or a subset of blobs (using prefixes as filters).
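For illustration, here is what such a policy can look like, expressed as the JSON document (shown here as a Python dictionary) that the Azure Storage management API accepts; the rule name and the logs/ prefix are placeholders.

# A minimal lifecycle management policy: tier blobs down as they age, then delete them.
lifecycle_policy = {
    "rules": [
        {
            "name": "age-out-logs",      # placeholder rule name
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["logs/"]},
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 90},
                        "delete": {"daysAfterModificationGreaterThan": 365},
                    }
                },
            },
        }
    ]
}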
Apache Storm is an open source framework for stream processing that uses a topology of spouts and bolts to
consume, process, and output the results from real-time streaming data sources. You can use Apache Storm to
process streams of data in real time with Apache Hadoop. Storm solutions can also provide guaranteed processing
of data, with the ability to replay data that was not successfully processed the first time.
Customers can provision Apache Storm in an Azure HDInsight cluster, and implement a topology in Java or C#. As already introduced, Azure HDInsight is a managed, full-spectrum, open source analytics service in the public cloud. Azure HDInsight allows you to easily run popular open source frameworks, including Apache Storm, Apache Spark, etc., to effortlessly process massive amounts of data, and to ultimately get all the benefits of the broad open source ecosystem with the global scale of Azure.
Apache Storm on Azure HDInsight comes with full enterprise-level continuous support. Apache Storm on Azure
HDInsight also provides an SLA of 99.9 percent. In other words, Microsoft guarantees that an Apache Storm cluster
has external connectivity at least 99.9 percent of the time.
Resources:
• What is Azure HDInsight?.
• What is Apache Storm on Azure HDInsight?.
• Process events from Event Hubs with Apache Storm on HDInsight.
• Create and monitor an Apache Storm topology in Azure HDInsight.
Apache Spark is an open source distributed platform for general data processing. Apache Spark provides the Apache
Spark Streaming API, in which you can write code in any supported Spark language, including Java, Scala, and
Python. Apache Spark 2.0 introduced the Spark Structured Streaming API, which provides a simpler and more
consistent programming model.
Interestingly enough, Spark 2.0 is available in an Azure HDInsight cluster (Azure HDInsight being, as noted, a managed, full-spectrum, open source analytics service in the public cloud).
Apache Spark Streaming provides data stream processing on HDInsight Spark clusters, with a guarantee that any
input event is processed exactly once, even if a node failure occurs.
A Spark Stream is a long-running job that receives input data from a wide variety of sources.
While Apache Spark already has connectors to ingest data from many sources like Apache Kafka, Apache Flume, ZeroMQ, TCP sockets, etc., Apache Spark in Azure HDInsight adds first-class support for ingesting data from Azure Event Hubs (see section § “Ingesting/streaming data” above), the most widely used queuing service on Azure. Having out-of-the-box support for Azure Event Hubs may make Spark clusters in Azure HDInsight an ideal platform for building a real-time analytics pipeline.
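A minimal Structured Streaming sketch follows, reading from a Kafka topic and echoing records to the console. The broker address and topic are placeholders (the same code can target a Kafka-enabled Event Hubs namespace by switching to port 9093 and SASL options), and it assumes the spark-sql-kafka connector package is available on the cluster.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("clickstream-demo").getOrCreate()

# Broker address and topic are placeholders.
stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "wn0-kafka:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka records arrive as binary key/value columns; cast the payload to text.
events = stream.selectExpr("CAST(value AS STRING) AS body")

# A long-running job: write each micro-batch to the console until stopped.
query = events.writeStream.format("console").start()
query.awaitTermination()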
Resources:
• What is Azure HDInsight?.
• What is Apache Spark in Azure HDInsight?.
Azure Databricks is an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud services
platform. Designed with the founders of Apache Spark, Databricks is integrated with Azure to provide one-click
setup, streamlined workflows, and an interactive workspace that enables collaboration between data scientists, data
engineers, and business analysts.
Azure Databricks is a fast, easy, and collaborative Apache Spark-based analytics service, and is fully integrated with
Azure AD, which gives you the ability to implement granular security (see section § “Identity and access
management” below).
For a big data pipeline, the data (raw or structured) is ingested into Azure through Azure Data Factory (see section § “Building data pipelines for data movement, transformation, and analytics” above) in batches, or streamed near real-time using Apache Kafka (on Azure HDInsight), Azure Event Hubs, or Azure IoT Hub (see section § “Ingesting/streaming data” above).
Azure Databricks comprises the complete open source Apache Spark cluster technologies and capabilities. Within Databricks, you can run optimized versions of Apache Spark to do advanced data analytics, notably Spark Streaming for real-time data processing and analysis for analytical and interactive applications. This integrates with HDFS, Apache Flume, and Apache Kafka.
Apache Spark in Azure Databricks also includes the following components:
• Spark SQL and DataFrames. Spark SQL is the Spark module for working with structured data. A DataFrame
is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in
a relational database or a data frame in R/Python.
Azure Stream Analytics is a real-time analytics and complex event-processing engine that is designed to analyze
and process high volumes of fast streaming data from multiple sources simultaneously.
Azure Stream Analytics is a fully managed serverless (PaaS) offering on Azure. You don’t have to provision any
hardware or manage clusters to run your jobs. Azure Stream Analytics fully manages your job by setting up complex
compute clusters in the cloud and taking care of the performance tuning necessary to run the job.
Integration with Azure Event Hubs and Azure IoT Hub (see section § “Ingesting/streaming data” above) allows your
job to ingest millions of events per second coming from a number of sources, to include connected devices,
clickstreams, and log files.
An Azure Stream Analytics job consists of an input, query, and an output. Stream Analytics can connect to Azure
Event Hubs and Azure IoT Hub for streaming data ingestion, as well as Azure Blob storage (see section § “Using
(short term) storage for your data” above) to ingest historical data.
Job output can be routed to many storage systems such as Azure Blob storage, Azure Cosmos DB (see section § “Using (short term) storage for your data” above), and Azure Data Lake Store (see section § “Using long term storage for your data” above).
The query, which is based on SQL query language, can be used to easily filter, sort, aggregate, and join streaming
data over a period of time. You can also extend this SQL language with JavaScript user defined functions (UDFs).
You can easily adjust the event ordering options and the duration of time windows when performing aggregation operations through simple language constructs and/or configurations.
Resource:
• What is Azure Stream Analytics?.
Note The on-premises data gateway acts as a bridge to provide quick and secure data transfer between on-premises
data (data that isn't in the cloud) and several cloud services in Azure, Power BI being one of them. By using a gateway,
organizations can keep databases and other data sources on their on-premises networks, yet securely use that on-premises
data in cloud services.
Power BI lets you easily connect to your (cloud-based and on-premises) data sources, visualize and discover what’s
important, and share that with anyone or everyone you want. For example, Azure Databricks integrates with Power BI to allow you to discover and share your impactful insights quickly and easily. (You can
also use Tableau Software via JDBC/ODBC cluster endpoints.)
The Microsoft Power BI service (app.powerbi.com), sometimes referred to as Power BI online, is the SaaS part of
Power BI. In the Power BI service, dashboards help you keep a finger on the pulse of your business. Dashboards
display tiles, which you can select to open reports for exploring further. Dashboards and reports connect to datasets
that bring all of the relevant data together in one place.
Note ONNX comes with the ONNX Runtime, an optimized ML inference engine for ONNX models. For more
information on ONNX with Azure Machine Learning, see article Create and accelerate ML models.
Note At the time of this writing, ONNX is joining the LF AI Foundation, an umbrella foundation of the Linux Foundation supporting open source innovation in Artificial Intelligence (AI), Machine Learning, and Deep Learning.
• Monitor ML applications for operational and ML related issues, including comparing model inputs
between training and inference, exploring model-specific metrics and providing monitoring and alerts on
your ML infrastructure in production.
• Capture the data required for establishing an end to end audit trail of the ML lifecycle, including who
is publishing models, why changes are being made, and when models were deployed or used in production.
• Automate the end to end ML lifecycle with Azure Machine Learning and Azure DevOps to frequently
update models, test new models, and continuously roll out new ML models alongside your other
applications and services.
After an Information Security Management System (ISMS) foundation is set and best practices are adopted, there
are additional areas to evaluate and understand to determine an enterprise organization’s risk posture and keys for
mitigating its risks. To do this, organizations need to understand which areas are the cloud service provider’s
responsibility and which are the organization’s responsibility.
Above Figure 8 makes it clear that responsibilities are driven by the cloud service model: SaaS, PaaS, IaaS, or on-
premises.
Note See the Shared responsibility in cloud computing white paper to learn more about the responsibility for each
cloud based solution whether it’s an IaaS, a PaaS, or a SaaS solution.
One should mention that the aforementioned ISO/IEC 27017:2015 standard is unique in providing guidance for
both CSPs and cloud service customers. It also provides cloud service customers with practical information on what
they should expect from CSPs. Customers can benefit directly from ISO/IEC 27017:2015 by ensuring they understand
the shared responsibilities in the cloud.
Infrastructure protection
Infrastructure protection ensures customers’ assets exposed externally (e.g. Internet or through any partner network
connection) are secured. This encompasses any security control that secures network flows between customers’
assets and external networks, as well as any security control that identifies attacks on the traversed networks. Typically this encompasses the following security controls: network firewalls, intrusion detection systems (IDS)/intrusion prevention systems (IPS), etc.
When a customer chooses Microsoft Azure, Microsoft Cloud Infrastructure and Operations (MCIO) takes responsibility for and delivers the Azure infrastructure and production network that support the environment where customer application instances and customer data reside, as per section § “Understanding the shared responsibilities’ model for your applications”.
The Microsoft Azure production network is structured such that publicly accessible system components are
segregated from internal resources. Physical and logical boundaries exist between web servers providing access to
the public-facing Microsoft Azure management portal and the underlying Azure virtual infrastructure where
customer application instances and customer data reside.
You do not have access to the Azure physical infrastructure; you only see the aforementioned Azure virtual infrastructure through software defined network (SDN) capabilities. The main logical construct is the Azure Virtual Network (VNET) (see section § “Leveraging network virtualization capabilities” above) where customers can define subnets.
Multiple techniques are used to control information flows, including but not limited to:
• Physical separation. Network segments are physically separated by routers that are configured to prevent
specific communication patterns.
• Logical separation. Virtual LAN (VLAN) technology is used to further separate communications (see below).
• Firewalls. Firewalls and other network security enforcement points are used to limit data exchanges with
systems that are exposed to the Internet, and to isolate systems from back-end systems managed by
Microsoft.
• IDS/IPS detect and identify suspicious or undesirable activities that indicate intrusion, proactively drop
packets that are determined to be undesirable, and disconnect unauthorized connections.
• Protocol restrictions.
• Encrypted connections. All traffic to and from customers is transmitted over encrypted connections.
Microsoft Azure implements boundary protection through the use of controlled devices at the network boundary and at key points within the network. The overarching principle of network security is to allow only the connections and communication that are necessary for systems to operate, blocking all other ports, protocols and connections by default.
Access Control Lists (ACLs) are the preferred mechanism through which to restrict network communications by
source and destination networks, protocols, and port numbers. Approved mechanisms to implement network-based ACLs include:
• Tiered ACLs on routers managed by MCIO,
• IPSec policies applied to hosts to restrict communications (when used in conjunction with tiered ACLs),
• Firewall rules,
• Host-based firewall rules.
In addition, the guiding principle of our security strategy is to “assume breach”, see section § “Vulnerability risk
assessment” in the Appendix. The Microsoft global incident response team works around the clock to mitigate the
effects of any attack against Azure, see section § Microsoft Cyber Defense Operations Center in the Appendix.
Resources:
• Inside Azure datacenter architecture with Mark Russinovich video.
• Azure information system components and boundaries.
• Isolation in the Azure Public Cloud.
• Azure network architecture.
• The Azure production network.
• Microsoft Azure Network Security.
One of the primary benefits of cloud computing is the concept of a shared, common infrastructure across numerous
customers simultaneously, leading to economies of scale. This concept is called multi-tenancy.
Microsoft Azure is a multi-tenant cloud services platform that you can use to deploy a vast variety of
solutions. A multi-tenant cloud platform implies that multiple customer applications and data are stored on
the same physical hardware.
Microsoft Azure was designed to help identify and counter the risks inherent in a multi-tenant environment, and Microsoft works continuously to ensure that the multi-tenant architecture of Microsoft Azure supports enterprise-level security, confidentiality, privacy, integrity, and availability standards, to ultimately provide a secure, hardened infrastructure.
Azure is designed with the assumption that all tenants are potentially hostile to all other tenants, and we have
implemented security measures to prevent the actions of one tenant from affecting the security or service of another
tenant, or accessing the content of another tenant.
The two primary goals of maintaining tenant isolation in a multi-tenant environment are:
1. Preventing leakage of, or unauthorized access to, customer content across tenants.
2. Preventing the actions of one tenant from adversely affecting the service for another tenant.
To accommodate (highly) sensitive data in the Azure public multi-tenant cloud, you can deploy additional
technologies and services on top of those used for confidential data and limit provisioned services to those
that provide sufficient isolation. These services offer isolation options at run time and support data
encryption at rest using customer managed keys in dedicated single tenant Hardware Security Modules
(HSMs) that are solely under your control.
Azure uses logical isolation to segregate each customer's applications and data from those of others.
This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously preventing
customers from accessing one another's data or applications.
In the exposed software defined network (SDN), you have the ability to create virtual networks (see section
§ “Leveraging network virtualization capabilities” above). A virtual network (VNET) is a logical construct
built on top of this SDN logical infrastructure. This helps ensure that network traffic in one customer's deployments is not accessible to other Azure customers.
Fundamental to any shared cloud architecture is indeed the isolation provided for each customer to prevent one
malicious or compromised customer from affecting the service or data of another. In Azure, one customer’s
subscription can include multiple deployments, and each deployment can contain multiple VMs. Azure provides
network isolation at several points:
• Each deployment is isolated from other deployments. Multiple VMs within a deployment are allowed to
communicate with each other through private IP addresses.
• Multiple deployments (inside the same subscription) can be assigned to the same VNET, and then allowed
to communicate with each other through private IP addresses. Each VNET is isolated from other virtual
networks.
• Traffic between VMs always traverses through trusted packet filters.
• Protocols such as Address Resolution Protocol (ARP), Dynamic Host Configuration Protocol (DHCP), and
other OSI Layer-2 traffic from a VM are controlled using rate-limiting and anti-spoofing protection.
Azure provides a number of sophisticated customer-facing controls, either native Azure controls or third-party network virtual appliances (NVAs), for Internet Edge security (North-South). (Nothing prevents you from opting for a hybrid configuration where some VNETs use advanced third-party controls and others use native controls.)
Following are some of the native Azure services to consider in this space:
• Azure Firewall, a managed, cloud-based network security service that provides centralized outbound and
inbound (non-HTTP/S) network and application (L3-L7) filtering. It's a fully stateful firewall as a service with
built-in high availability and unrestricted cloud scalability.
• Azure DDoS Protection, a managed, cloud-based network security service that, combined with application
design best practices, provides extensive Distributed Denial of Service (DDoS) protection to help you protect
your Azure resources from attacks. It provides the following service tiers:
o Basic. Automatically enabled as part of the Azure platform. Always-on traffic monitoring, and real-
time mitigation of common network-level attacks, provide the same defenses utilized by Microsoft
Azure. The entire scale of Azure’s global network can be used to distribute and mitigate attack
traffic across regions. Protection is provided for IPv4 and IPv6 Azure public IP addresses.
o Standard. Provides additional mitigation capabilities over the Basic service tier that are tuned specifically to Azure Virtual Network (VNET) resources (see below).
The Standard service tier is simple to enable and requires no application changes. Protection
policies are tuned through dedicated traffic monitoring and Machine Learning algorithms. Policies
are applied to public IP addresses associated to resources deployed in VNETs, such as Azure Load
Balancer and Azure Application Gateway. Application layer protection can be added through the
Azure Application Gateway Web Application Firewall (WAF) (see below), or by installing a third-
party firewall from Azure Marketplace. Protection is provided for IPv4 and IPv6 Azure public IP
addresses.
• Azure Application Gateway Web Application Firewall (WAF), a web application firewall (WAF) that provides
centralized inbound web application protection from common exploits
and vulnerabilities. It’s based on Core Rule Set (CRS) 3.0 or 2.2.9 from the Open Web Application Security
Project (OWASP). The WAF automatically updates to include protection against new vulnerabilities, with no
additional configuration needed.
Figure 11 Core services segment using native Azure controls (depicted within a single subscription)
A Network Security Group (NSG) allows you to express security rules for distributed inbound and outbound network (L3-L4) traffic filtering on a VM, container, or subnet; see section § “Security controls for East-West traffic” below.
Resources:
• What is Azure Firewall?.
• Azure DDoS Protection Standard overview.
• Azure DDoS Protection: Best practices and reference architectures.
• Web application firewall for Azure Application Gateway.
• What is Azure Front Door Service?.
The above services are notably in-scope services for the certification against the Service Organization Control (SOC)
1, SOC 2, and SOC 3 standards. Azure has been audited against the Service Organization Control (SOC) reporting
framework and has achieved SOC 1 Type 2, SOC 2 Type 2, and SOC 3 reports:
• The SOC 1 Type 2 audit report attests to the design and operating effectiveness of Azure controls.
• The SOC 2 Type 2 audit included a further examination of Azure controls related to security, availability, and
confidentiality.
• SOC 3 report is an abbreviated version of the SOC 2 Type 2 audit report.
Azure is audited annually against the SOC reporting framework by independent third-party auditors to ensure that
security controls are maintained.
Third-party security virtual appliances bring security controls such as Network Intrusion Detection/Prevention Systems (NIDS/NIPS), Next Generation Firewalls (NGFWs), encryption, and more, letting you reuse your existing skillsets, processes, and licenses. These technologies are available as network virtual appliances (NVAs); see section § “Azure Virtual Network” above.
The Azure platform filters malformed packets, and most classic NIDS/NIPS solutions are typically based on outdated signature-based approaches that are easily evaded by attackers and typically produce a high rate of false positives. Thus, you may want to deprecate and then discontinue some legacy security approaches as you move to Microsoft Azure. You can continue to use these technologies in Azure if you see value in them, but many organizations are not migrating these solutions to Azure.
Following Figure 12 is a depiction of a core services segment using a NGFW with built in Web Application Firewall
(WAF) capabilities. Customers frequently choose this configuration to utilize their existing licensing and skillsets in
Azure.
Note These are instantiated as VMs running the appliance (not a service like the native capabilities), so you need to configure the appropriate subnets, routing, network virtual appliances (NVAs), and load balancers for a resilient architecture.
A public Azure Load Balancer (see the eponymous section above) enables scalability and availability, while Azure DDoS Protection Standard can be applied to public IP addresses.
Resource:
• Hub-spoke network topology with shared services in Azure.
Azure Network security in the exposed virtual infrastructure is very similar to physical network security.
In this software defined network (SDN), subnets can only be created within a virtual network (VNET), see
section § “Leveraging network virtualization capabilities” above. Azure requires VMs to be connected to a
VNET.
Network security groups (NSGs) are used to protect against unsolicited traffic into subnets (replacing/supplementing East-West traffic controls). An NSG holds a list of security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources connected to VNETs. NSGs can be associated to subnets or to individual network interface controllers (NICs) attached to VMs. When an NSG is associated to a subnet, the rules apply to all resources connected to the subnet. You can restrict traffic even further by also associating an NSG to a VM or NIC.
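For illustration, the sketch below adds a single inbound rule to an existing NSG with the azure-mgmt-network SDK; the subscription, resource group, NSG name, and source address range are placeholders, and the call shape reflects the SDK's documented begin_create_or_update long-running operation.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Subscription, resource group, and NSG names are placeholders.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow SSH only from a narrow, known address range; everything else inbound
# on that port remains blocked by the NSG's default deny rules.
client.security_rules.begin_create_or_update(
    "rg-app",
    "nsg-web",
    "allow-ssh-admins",
    {
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Allow",
        "priority": 100,
        "source_address_prefix": "140.23.30.8/29",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "22",
    },
).result()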
Note While their use is not required, defining application security groups (ASGs) allows you to simplify the setup and maintenance of NSG rules. An ASG can be defined for lists of IP addresses that may change in the future and can be used across many NSGs.
Following Figure 13 is a reference enterprise network design that depicts a core services segment and several example segments aligned with it. As already presented (see section § “Security controls for Internet Edge security (North-South)” above), the network edge security in the core services segment can use either native Azure controls or third-party NVAs.
The illustrated shared services in the core services segment may be hosted in a single VNET or can span across
multiple VNETs (e.g. for intranet vs. extranet resources):
• The core services segment includes examples of groupings we typically see in most enterprises.
• Each of the segments is connected to each other by VNET peering configurations.
• A public IP address may be mapped to an application within a segment that may not route through the
network edge (depicted if you zoom into example applications). This activity can be restricted with
permissions and/or routing.
Azure Security Center provides unified security management and advanced threat protection across (hybrid) cloud workloads. It notably provides the ability to continuously monitor the security status of your network. In this context, one should mention the Adaptive Network Hardening feature.
As such, Adaptive Network Hardening provides recommendations to further harden the NSG rules. It uses a
Machine Learning algorithm that factors in actual traffic, known trusted configuration, threat intelligence, and other
indicators of compromise, and then provides recommendations to allow traffic only from specific IP/port tuples.
For example, let’s say the existing NSG rule is to allow traffic from 140.23.30.10/24 on port 22 (SSH). The Adaptive Network Hardening recommendation, based on the analysis, would be to narrow the range and allow traffic from 140.23.30.10/29, which is a narrower IP range, and deny all other traffic to that port.
Resources:
• Azure network security overview.
• Azure Network Security Groups.
• Adaptive Network Hardening in Azure Security Center.
• Azure best practices for network security.
Azure provides several means to get visibility into VNET network activity.
This includes:
• Getting verbose information during an investigation such as NSG flow logs, i.e. a subset of Azure Network
Watcher. Azure Network Watcher is a regional service that enables you to monitor and diagnose conditions
at the network level in, to, and from Azure. Its many diagnostic and visualization tools can help you
understand and gain deeper insights into your network in Azure.
Azure NSG flow logs allow you to view information about ingress and egress IP traffic through an NSG. Flow logs can be analyzed to gain information and insights into network traffic and security, as well as performance.
Azure services are published to the Azure network via public endpoints by default. While these services are reachable from the internet, traffic from Azure services accessing them only traverses the Azure networks. Some customers may have a preference or requirement that traffic to/from Azure services doesn't traverse public endpoints. Because of this, Azure provides the following options for accessing Azure services.
• (VNET) Service endpoints (Service tunnel). Extend a VNET private address space and the identity of this
VNET to the Azure PaaS services, over a direct connection. Endpoints allow you to secure your critical Azure
service resources to only your VNETs. Traffic from your VNET to the Azure service always remains on the
Microsoft Azure backbone network. This option is currently only available for a subset of Azure services.
VNET service endpoint policies allow you to filter virtual network traffic to Azure services, allowing only
specific Azure service resources, over service endpoints. Endpoint policies provide granular access control for
VNET traffic to Azure services. This feature is currently available for a subset of Azure services and regions.
• (VNET) Integration for Azure services. Enables private access to the service from VMs or compute resources in the VNET. You can integrate Azure services in your VNET with the following options:
• Deploying dedicated instances of the service into a VNET. The services can then be privately accessed
within the VNET and from on-premises networks. This option is currently only available for a subset of
Azure services.
• Using Azure Private Link to privately access a specific instance of the service from a VNET and from on-premises networks.
Azure Private Link enables you to access Azure PaaS Services and Azure hosted customer/partner
services over a private endpoint in a VNET. Traffic between the VNET network and the service traverses
over the Microsoft backbone network, eliminating exposure from the public Internet. You can also
create your own Private Link Service in your VNET and deliver it privately to your customers.
The setup and consumption experience using Azure Private Link is consistent across Azure PaaS,
customer-owned, and shared partner services.
Resources:
• Virtual Network Service Endpoints.
• Virtual network service endpoint policies (Preview).
• Virtual network integration for Azure services.
• Announcing Azure Private Link.
• What is Azure Private Link? (Preview).
Server/workload security
Server security considerations help ensure compute resources used in the customers’ solutions implement the
necessary level of controls to be secure. Typically, this comprises the following security controls: antimalware,
operating system (OS) hardening, vulnerability management, etc.
As per section § “Understanding the shared responsibilities’ model for your applications” above (see Figure 9 Security responsibilities transfer to the cloud), the following security responsibilities are transferred to the CSP:
• For IaaS and PaaS workloads: Microsoft Azure Fabric Controller (FC)/virtualization patching.
• For PaaS workloads:
• Security patches.
• VMs/Containers security: Operating System (OS) and middleware installation, hardening, etc.
Note Container technology is causing a structural change in the cloud-computing world. Containers make it possible
to run multiple instances of an application on a single instance of an operating system, thereby using resources more
efficiently. Because container technology is relatively new, many IT professionals have security concerns about the lack of
visibility and usage in a production environment.
The whitepaper Container Security in Microsoft Azure describes containers, container deployment and management, and
native platform services. It also describes runtime security issues that arise with the use of containers on the Azure platform.
In figures and examples, this paper focuses on Docker as the container model and Kubernetes as the container orchestrator.
Microsoft takes strong measures to protect customer data from inappropriate access or use by unauthorized
persons. Microsoft engineers do not have default access to customer data in the cloud. Instead, they are granted
access, under management oversight, only when necessary. Using the restricted access workflow, access to
customer data is carefully controlled, logged, and revoked when it is no longer needed. For example, access to
customer data may be required to resolve customer-initiated troubleshooting requests.
The access control requirements are established by the following policy:
• No access to customer data, by default.
• No user or administrator accounts on customer virtual machines (VMs).
• Grant the least privilege that is required to complete the task.
• Audit and log access requests.
Note For more information, see page Who can access your data and on what terms?.
Microsoft engineers can be granted access to customer data using temporary credentials via “just-in-time” (JIT)
access. There must be an incident logged in the Azure Incident Management system that describes the reason for
access, approval record, what data was accessed, etc. This approach ensures that there is appropriate oversight for
all access to customer data and that all JIT actions (consent and access) are logged for audit.
Note Evidence that procedures have been established for granting temporary access for Azure personnel to
customer data and applications upon appropriate approval for customer support or incident handling purposes is available
from the Azure SOC 2 Type 2 attestation report produced by an independent third-party auditing firm and available on the Service Trust Portal.
Azure Customer Lockbox is a service that provides you with the capability to control how a Microsoft
Engineer accesses your data. As part of the Support workflow, a Microsoft engineer may require elevated access
to customer data. Azure Customer Lockbox puts the customer in charge of that decision by enabling the customer
to Approve/Deny such elevated requests. Azure Customer Lockbox is an extension of the JIT workflow and comes
with full audit logging enabled. It is important to note that Azure Customer Lockbox capability is not required for
support cases that do not involve access to customer data. For the majority of support scenarios, access to customer
data is not needed and the workflow should not require Azure Customer Lockbox. Azure Customer Lockbox is
available to customers from all Azure public regions.
Resources:
• Azure customer data protection.
• Approve, audit support access requests to VMs using Customer Lockbox for Azure.
• Customer Lockbox for Microsoft Azure.
When you delete data or leave Microsoft Azure, Microsoft follows strict standards for overwriting storage resources
before reuse. As part of agreements for cloud services such as Azure Storage (see section § “Leveraging storage
virtualization capabilities” above), Azure Virtual Machines (see section § “Leveraging compute virtualization capabilities” above), etc., Microsoft contractually commits to timely deletion of data.
Data destruction techniques vary depending on the type of data object being destroyed, whether it be whole
subscriptions themselves, storage, virtual machines, or databases. In a multi-tenant environment such as Microsoft
Azure, careful attention is taken to ensure that when a customer deletes data, no other customer (including, in most
cases, the customer who once owned the data) can gain access to that deleted data.
(Likewise, in terms of equipment disposal, upon a system’s end-of-life, Microsoft operational personnel follow
rigorous data handling procedures and hardware disposal processes to help assure that no hardware that may
contain customer data is made available to untrusted parties.)
Resource:
• Microsoft Azure Data Security (Data Cleansing and Leakage)
Data encryption in the cloud is an important risk mitigation requirement expected by customers worldwide.
Microsoft Azure helps you protect your data through its entire lifecycle whether at rest, in transit, or even in use.
Azure has extensive support to safeguard customer data using data encryption in transit and at rest, as well as data
encryption while in use.
Encryption in transit
Furthermore, Azure offers its customers a range of options for securing their own data and traffic. For example, you can enable encryption for traffic between your own virtual machines (VMs) and end users. The certificate
management features built into Azure give administrators flexibility for configuring certificates and encryption keys
for management systems, individual services, secure shell (SSH) sessions, VPN connections, remote desktop (RDP)
connections, and other functions.
Note You should review Azure best practices for the protection of data in transit and properly configure HTTPS
endpoints for your resources provisioned in Azure to help ensure that all traffic going to and from your VMs is encrypted.
For key Azure services, data encryption in transit is enforced by default.
Optional Azure services help you safeguard cryptographic keys and secrets by storing them in hardware security modules (HSMs); see sections § “Encryption management” and § “Secret management” below.
Encryption at rest
In most situations, not to say all of them, data is stored encrypted (i.e. at rest) in Azure and decrypted on the fly when used or processed by a program. This is both a usual and a suitable way to proceed for the most common data.
Microsoft Azure provides extensive options for data encryption at rest to help you safeguard your data and meet
your compliance needs using both Microsoft managed encryption keys, as well as customer managed encryption
keys, giving you the flexibility to choose the solution that best meets your needs.
To take one example, Azure Cosmos DB requires no action from you: data stored in Azure Cosmos DB in nonvolatile storage (solid-state drives) is encrypted by default (and there are no controls to turn it on or off).
Encryption in use
Sometimes, protecting data in transit and data at rest is not enough. Data may indeed be too sensitive to appear in the clear in memory (i.e. in use), even if the (virtual) machine and the workload processing the data can be considered hardened and secured, respectively. (In some cases, your sensitive content is the code rather than the data. To secure sensitive IP, you may need to protect the confidentiality and integrity of your code while it's in use.)
The increasing popularity of use cases such as privacy-preserving multi-party machine learning has led to securing compute workloads within the confines of Trusted Execution Environments (TEEs).
This concept, called Confidential Computing, is an ongoing effort to protect data and/or code throughout their lifecycle: at rest, in transit, and now in use. This means that data can be processed in the cloud with the assurance that it is always under customer control. Confidential Computing ensures that when data is in the clear, which is needed for efficient processing in memory, the data is protected inside a TEE (a.k.a. an enclave). A TEE ensures that there is no way to view the data or the operations from outside the enclave, even with a debugger. Moreover, a TEE ensures that only authorized code is permitted to access the data. If the code is altered or tampered with, the operations are denied, and the environment is disabled.
With the introduction of Azure Confidential Computing (ACC), starting with the DC-series of virtual machines (VMs) (see section § “Azure Virtual Machines” above) that feature the latest generation of Intel Xeon processors with Intel Software Guard Extensions (SGX) technology (the Intel SGX instruction extension was introduced with 7th Generation Intel processors), you can protect data while it is being processed in the cloud.
Note For more information on Azure Confidential Computing, see blog post Introducing Azure confidential
computing and the webcast Azure Confidential Computing updates with Mark Russinovich | Best of Microsoft Ignite 2018.
Note Intel SGX isolates a portion of physical memory to create an enclave where select code and data are protected
from viewing or modification. The protection offered by Intel SGX, when used appropriately by application developers, can
prevent compromise due to attacks from privileged software and many hardware-based attacks.
An application leveraging Intel SGX needs to be re-factored into trusted and untrusted components. The untrusted part of
the application sets up the enclave, which then allows the trusted part to run inside the enclave. No other code, irrespective
of the privilege level, has access to the code executing within the enclave or the data associated with enclave code. Design
best practices call for the trusted partition to contain just the minimum amount of content required to protect customer’s
secrets.
All of this is still low-level work, however, and developing applications on top of it is difficult, requiring both advanced security expertise and specific skills. In this context, Microsoft Research, together with partners, has invested in simplifying TEE-based application development for all audiences, from hardcore hardware security experts to edge and cloud software application developers, regardless of the underlying enclaving technology. This effort resulted in the Microsoft Open Enclave SDK, an open source framework that has been available on GitHub for over a year, which developers can use to build C/C++ enclave applications targeting Intel SGX technology as well as ARM TrustZone (TZ) and embedded Secure Elements using Linux OSs.
The Open Enclave SDK is intended to be portable across enclave technologies, cross-platform (cloud, hybrid, edge, or on-premises), and designed with architectural flexibility in mind. As such, the Open Enclave SDK aims at creating a single unified enclaving abstraction for developers to build an application once and run it across multiple TEE architectures, and thus was designed to:
• Make it easy to write and debug code that runs inside TEEs.
• Allow the development of code that’s portable between TEEs.
• Provide a flexible plugin model to support different runtimes and cryptographic libraries.
• Have a high degree of compatibility with existing code.
This SDK is natively leveraged by ACC.
Note Microsoft has recently joined partners and the Linux Foundation to create the Confidential Computing Consortium, which will be dedicated to defining and accelerating the adoption of confidential computing.
Microsoft will be contributing the Open Enclave SDK to the Confidential Computing Consortium to develop a broader
industry collaboration and ensure a truly open development approach. “The Open Enclave SDK is already a popular tool for
developers working on Trusted Execution Environments, one of the most promising areas for protecting data in use,” said
Mark Russinovich, chief technical officer, Microsoft. “We hope this contribution to the Consortium can put the tools in even
more developers’ hands and accelerate the development and adoption of applications that will improve trust and security
across cloud and edge computing.” (see blog post New Cross-Industry Effort to Advance Computational Trust and Security
for Next-Generation Cloud and Edge Computing).
You can now build, deploy, and run applications that protect data confidentiality and integrity through the entire
data lifecycle whether at rest, in transit, or in use. To get started, you can deploy a DC-series VM through the
custom deployment flow in Azure Marketplace. The custom deployment flow deploys and configures the VM and
installs the Open Enclave SDK for Linux VMs if selected.
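If you prefer the command line to the portal flow, a DC-series VM can also be created with the Azure CLI. The following is a sketch only: the resource names are hypothetical, DC-series sizes are available in a limited set of regions, and the Open Enclave SDK is then installed per its GitHub instructions:

    # Create an SGX-capable DC-series VM (size/region availability may vary)
    az vm create \
      --resource-group my-rg \
      --name my-enclave-vm \
      --size Standard_DC2s \
      --image Canonical:UbuntuServer:16.04-LTS:latest \
      --admin-username azureuser \
      --generate-ssh-keys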
Encryption management
Azure offers comprehensive encryption (key) management to help you control your keys in the cloud, including key
rotation, key deletion, permissions, etc. End-to-end data encryption using advanced ciphers is fundamental to
ensuring confidentiality and integrity of customer data in the cloud.
Azure Key Vault is a multi-tenant secrets management service that enables Azure services, applications and users to
store and use several types of secret/key data:
• Secrets. Provides secure storage of secrets, such as passwords and database connection strings.
• Cryptographic keys. Supports multiple key types and algorithms. As of this writing, Azure Key Vault
supports RSA and Elliptic Curve keys.
• Certificates. Supports X.509 certificates, which are built on top of keys and secrets and add an automated
renewal feature.
• Keys of an Azure Storage. Can manage keys of an Azure Storage account. Internally, Key Vault can list
(sync) keys with an Azure Storage account, and regenerate (rotate) the keys periodically.
Azure Key Vault enables the use of Hardware Security Modules (HSM) for high value keys: you can import or
generate keys in HSMs that never leave the HSM boundary to support Bring Your Own Key (BYOK) scenarios. Azure
Key Vault HSMs are FIPS 140-2 Level 2 validated, which includes requirements for physical tamper evidence and
role-based authentication.
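A minimal Azure CLI sketch of these capabilities (all names are hypothetical; HSM-protected keys require the premium vault SKU):

    # Create a premium vault, store and read back a secret, then create an HSM-protected key
    az keyvault create --name my-vault --resource-group my-rg \
      --location westeurope --sku premium
    az keyvault secret set --vault-name my-vault --name db-password --value 'S3cr3tP@ss'
    az keyvault secret show --vault-name my-vault --name db-password --query value -o tsv
    az keyvault key create --vault-name my-vault --name app-key \
      --kty RSA --protection hsm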
Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents are precluded from
accessing, using or extracting any data stored in the service, including cryptographic keys.
Note With Azure Stack, you can store and manage your secrets including cryptographic keys on an external Hardware Security Module (HSM) by using Thales CipherTrust Cloud Key Manager (available via the Azure marketplace), which allows customers to integrate an HSM with the Key Vault service running on Azure Stack.
If you expect to use back-end services integrated with Azure Key Vault (e.g., Azure Storage, Azure Disk Encryption,
Azure Data Lake Storage, etc.) for customer managed key support, then you need to use Microsoft provided
cryptography and encryption hardware, e.g. HSMs.
For customers who require single-tenant HSMs, Azure provides dedicated HSMs that have FIPS 140-2 Level 3 validation, as well as Common Criteria EAL4+ certification and conformance with eIDAS requirements. Azure Dedicated HSM is most suitable for “lift-and-shift” scenarios where you require full administrative control and sole access to your HSM device for administrative purposes and retain your own crypto algorithms. After a Dedicated HSM device is provisioned, you have sole administrative and cryptographic control over it.
Secret management
As stated in the above section, Azure Key Vault is a multi-tenant secrets management service that stores and controls
access to secrets.
In addition, for customers that already rely on HashiCorp Vault to store and manage their secrets, Vault can also
help them manage and eliminate secrets sprawl in Azure.
HashiCorp Vault is a highly scalable, highly available, environment-agnostic way to generate, manage, and store secrets. It encrypts data using 256-bit Advanced Encryption Standard (AES). (Once data is encrypted, it is stored on a variety of storage backends such as HashiCorp Consul.)
Working with Microsoft, HashiCorp launched Vault with a number of features to make secret management easier
to automate in Azure cloud. (HashiCorp and Microsoft are longstanding partners in the cloud infrastructure
community.)
As a result, you can leverage all of these Vault features to automate your secrets management and retrieval through Azure-specific integrations. First and foremost, Vault can be automatically unsealed using keys from Azure Key Vault. Next, as already outlined, Azure managed identities can be used to authenticate systems and applications, removing the need to distribute initial access credentials.
Lastly, HashiCorp Vault can dynamically generate Azure AD service principals for apps using its Azure secrets engine
feature. “The Azure secrets engine dynamically generates Azure service principals and role assignments. Vault roles
can be mapped to one or more Azure roles, providing a simple, flexible way to manage the permissions granted to
generated service principals. Each service principal is associated with a Vault lease. When the lease expires (either
during normal revocation or through early revocation), the service principal is automatically deleted.
If an existing service principal is specified as part of the role configuration, a new password will be dynamically
generated instead of a new service principal. The password will be deleted when the lease is revoked.”
This gives users and applications outside the cloud an easy method for generating flexible, time- and permission-bound access to Azure APIs.
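A sketch of this flow with the Vault CLI, adapted from HashiCorp’s documented workflow (all identifiers and scopes below are placeholders):

    # Enable the Azure secrets engine and give it credentials to manage service principals
    vault secrets enable azure
    vault write azure/config \
        subscription_id=$AZURE_SUBSCRIPTION_ID \
        tenant_id=$AZURE_TENANT_ID \
        client_id=$AZURE_CLIENT_ID \
        client_secret=$AZURE_CLIENT_SECRET
    # Map a Vault role to an Azure role assignment at resource-group scope
    vault write azure/roles/my-app ttl=1h azure_roles=-<<EOF
        [{"role_name": "Contributor",
          "scope": "/subscriptions/<subscription_id>/resourceGroups/my-rg"}]
    EOF
    # Each read returns a short-lived service principal tied to a Vault lease
    vault read azure/creds/my-app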
Note For customers that would like a quick way of testing out Vault in Azure, the hc-demos repo on GitHub contains
all the code to create a Vault environment in Azure including all instructions on how to obtain Terraform, run it, connect to
their Azure instance and run the Vault commands. This is a great way to learn the concepts covered here with a low barrier
to entry.
Note For more information on HashiCorp Vault and Azure integrations, see page Hashicorp/Azure Integrations.
Resources:
• What is Azure Key Vault?.
• How to identify and eliminate secrets sprawl on Azure with HashiCorp Vault.
• Azure Friday: Azure Key Vault Auto Unseal & Dynamic Secrets with HashiCorp Vault on HashiCorp documentation.
• HashiCorp auto-unseal using Azure Key Vault on HashiCorp documentation.
Note Azure Stack uses either Azure AD or Active Directory Federation Services (AD FS) as an identity provider.
Azure AD is trusted by millions of organizations, serving more than a billion identities for access to Azure, as well as hundreds of thousands of other partner applications.
KuppingerCole has named Microsoft as the top overall leader in their Leadership Compass for Identity-as-a-
Service (IDaaS) Access Management 2019. Microsoft was identified as the “leading IDaaS AM vendor” for
functional strength and completeness of product, with a “focus on constant innovation”. The report analyzed
15 vendors across three categories of leadership and Microsoft earned the highest scores across product,
innovation, and market leadership.
Note KuppingerCole recognized Microsoft’s strength in our “support for popular SaaS app integrations,” which is
possible through the open support of our many partners. KuppingerCole highlighted the security capabilities of Azure Active
Directory including “strong adaptive authentication” and “strong threat analytics capabilities offering real-time threat
detection and remediation.”
KuppingerCole also acknowledged our accelerating app development support, saying we are “increasingly DevOps friendly with strong developer community support.” Through open standards, a secure authentication platform, and APIs, we’re committed to helping customers create the next generation of apps and experiences.
For more information, see blog post KuppingerCole names Microsoft the top overall IDaaS leader and complimentary copy
of the report.
Recently, Gartner2 named Microsoft a Leader in the Magic Quadrant for Access Management, Worldwide
2019 for the third year in a row. Additionally, Forrester Research named Microsoft a Strong Performer in its
report The Forrester Wave: Identity-As-A-Service (IDaaS) For Enterprise, Q2 2019, with the largest market
presence across vendors.
The document Azure Active Directory Data Security Considerations explains the technical aspects of Azure AD that are relevant to data security.
2
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select
only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research
organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this
research, including any warranties of merchantability or fitness for a particular purpose.
Note For more information, see whitepaper Implementing a Zero Trust approach with Azure Active Directory.
The state of cyberattacks drives organizations to take the “assume breach” mindset, but this approach shouldn’t be
limiting. Zero Trust networks protect corporate data and resources while ensuring that organizations can build a
modern workplace by using technologies that empower employees to be productive anytime, anywhere, in any way.
An optional capability, Azure AD Identity Protection, enables organizations to configure automated responses to detected suspicious actions related to user identities. The vast majority of security breaches take place when attackers gain access to an environment by stealing a user’s identity. Discovering compromised identities is no easy task. Azure AD uses adaptive Machine Learning algorithms and heuristics to detect anomalies and suspicious incidents that indicate potentially compromised identities.
Unsurprisingly, Azure AD Identity Protection fully integrates with Azure AD Conditional Access policies.
Resources:
• Azure identity management security overview.
• What is Azure Active Directory?.
• A world without passwords with Azure Active Directory.
• What is passwordless?.
• What is role-based access control (RBAC) for Azure resources?.
• What is Conditional Access?.
• Manage access to Azure management with Conditional Access.
• What is Azure Active Directory Identity Protection?.
Azure built its Role-Based Access Control (RBAC) capabilities on top of the above to enforce the separation of privileged and non-privileged roles for Azure resources. Using RBAC, users, groups, managed identities, and applications (a.k.a. service principals) from that directory can be granted access to resources in the Azure subscription at the subscription, resource group, or individual resource level. For example, a storage account (see section § “Leveraging storage virtualization capabilities” above) can be placed in a resource group to control access to that specific storage account using Azure AD (resource groups in Azure provide a way to monitor, control access to, provision, and manage billing for collections of Azure assets/resources that are required to run an application). In this manner, only specific users can be given the ability to access the Storage Account Key, which controls access to storage.
All access to Azure resources is based on prior authentication against Azure AD and, in turn, uses the RBAC mechanism. Azure RBAC comes with built-in roles that can be assigned to users, groups, and services. You can therefore use predefined roles to give the necessary permissions to users in charge of managing backups or, if necessary, create your own custom roles to fit your needs, for example if you use custom or third-party solutions to manage log events.
Note With Azure Stack, you can similarly use RBAC to grant system access to authorized users, groups, and services
by assigning them roles at a subscription, resource group, or individual resource level. Each role defines the access level a
user, group, or application has over Azure Stack resources.
The built-in Azure RBAC capabilities allow you to list all the roles that are assigned to a specified user under the customer’s responsibility, as well as the roles that are assigned to the groups to which the user belongs. Conversely, these capabilities also allow you to see all the access assignments for a specified subscription, resource group, or resource. All of this can be achieved notably via the Get-AzureRmRoleAssignment cmdlet.
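The Azure CLI offers equivalent commands. A small sketch (the user and resource names are hypothetical):

    # List a user's role assignments across scopes, then grant a built-in backup role
    az role assignment list --assignee jane@contoso.com --all --output table
    az role assignment create \
      --assignee jane@contoso.com \
      --role "Backup Operator" \
      --resource-group my-rg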
Resources:
• What is role-based access control (RBAC) for Azure resources?.
• Built-in roles for Azure resources.
• Custom roles for Azure resources.
A common challenge when building cloud applications is how to manage the credentials in code for authenticating
to cloud services. Keeping the credentials secure is an important task. Ideally, the credentials never appear on
developer workstations and aren't checked into source control. Azure Key Vault (see section § “Encryption
management” above) provides a way to securely store credentials, secrets, and other keys, but your code has to
authenticate to Key Vault to retrieve them.
The Azure managed identities for Azure resources feature in Azure AD solves this problem. The feature provides
Azure services with an automatically managed identity in Azure AD. You can use the identity to authenticate to any
service that supports Azure AD authentication, including Azure Key Vault, Azure Container Instances (ACI), Azure
Kubernetes Service (AKS) (see section § “Leveraging containerization” above), etc. without any credentials in your
code.
Managed identities can also be used for Infrastructure as Code (IaC) (see section § “Are you saying “Infrastructure as Code”?” below). For example, for customers that rely on HashiCorp Terraform, the Azure Resource Manager module for Terraform can be authenticated using managed identities for Azure resources. The same applies to HashiCorp Vault.
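As an illustrative sketch (names are hypothetical), you can enable a system-assigned identity on a VM and authorize it against Key Vault without ever handling a credential yourself:

    # Give the VM an identity in Azure AD and capture its principal id
    principal_id=$(az vm identity assign \
      --resource-group my-rg --name my-vm \
      --query systemAssignedIdentity -o tsv)
    # Allow that identity to read secrets; code on the VM can now call Key Vault
    az keyvault set-policy --name my-vault \
      --object-id "$principal_id" \
      --secret-permissions get list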
Resources:
• What is managed identities for Azure resources?.
• How to use Azure managed identities with Azure Container Instances.
• How to use Azure managed identities with Azure Kubernetes Services (AKS).
• Azure Provider: Authenticating using managed identities for Azure resources on Terraform documentation.
• Azure Authentication with HashiCorp Vault on vault documentation.
• Using Azure Active Directory Authentication with HashiCorp Vault – Part 1 & Part 2.
Azure AD provides a number of optional capabilities for privileged access management and identity governance.
Azure AD Privileged Identity Management (PIM) provides a mechanism for managing and monitoring administrators
in Azure AD and granting on-demand, temporary, “just-in-time” (JIT) and “just-enough” administrative access to
users for ad hoc tasks for short predetermined periods for Microsoft Azure.
Azure AD Access Reviews enable organizations to efficiently manage group memberships, role assignments, and
access to enterprise applications. User's access can be reviewed on a regular basis to make sure only the right people
have continued access.
Resources:
• What is Azure AD Privileged Identity Management?.
• What is Azure AD Identity Governance?.
• What are Azure AD access reviews?.
Security management
Log management
From a security perspective, log management allows you to identify alerts from all monitored applications, systems, and infrastructure. This includes log analysis, initial diagnosis and, where required, dispatching.
You can use Azure Security Center notably to analyze the security state of your compute resources and the configurations of the security controls in place to protect them. Azure Security Center provides capabilities in the following main areas:
1. Cloud security posture management. Azure Security Center provides you with a bird’s eye security posture
view across your Azure environment, enabling you to continuously monitor and improve your security
posture using the Azure secure score. Azure Security Center helps you identify and perform the hardening
tasks recommended as security best practices and implement them across notably your compute resources
(see section § “Azure compute solutions” above). This includes in particular managing
and enforcing your security policies and making sure your Azure Virtual Machine instances (VMs), Azure
VM Scale Sets, and non-Azure servers - Azure Security Center integrates with Azure Stack - are compliant.
Note With newly added IoT capabilities, customers can now also reduce the attack surface for their IoT devices integrated with the Azure IoT platform and remediate issues before they can be exploited.
In addition to providing full visibility into the security posture of your environment, Azure Security Center
also provides visibility into the compliance state of your Azure environment against common regulatory
standards.
2. Cloud workload protection. Azure Security Center’s threat protection enables customers to detect and prevent threats at the IaaS layer (as well as in platform-as-a-service (PaaS) resources like Azure IoT Hub) and on on-premises servers and VMs. Key features of Azure Security Center threat protection include configuration monitoring, server endpoint detection and response (EDR), application control, and network segmentation, and extend to container and serverless workloads for Cloud-native applications.
Azure Security Center helps protect Linux servers with behavioral analytics. For every attack attempted or
carried out, you receive a detailed report and recommendations for remediation.
In terms of advanced controls, one can mention “just-in-time” (JIT) virtual machine (VM) access, which can be used to lock down inbound traffic to Azure VMs, thus reducing your surface area exposed to attack (SSH brute-force is one of the most common threats, with more than 100,000 attack attempts on Azure VMs per month), while providing easy access to connect to VMs when needed. When JIT is enabled, Azure Security Center locks down inbound traffic to your Azure VMs by creating a Network Security Group (NSG) rule. You select the ports, for example 22 (SSH), on the VM to which inbound traffic will be locked down. These ports are controlled by the JIT solution.
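JIT itself is driven from Azure Security Center, but its effect is an ordinary NSG rule. A sketch of the kind of default-deny rule involved (names are hypothetical):

    # Deny inbound SSH by default; JIT opens the port only for approved requests
    az network nsg rule create \
      --resource-group my-rg --nsg-name my-vm-nsg \
      --name deny-ssh-inbound --priority 1000 \
      --direction Inbound --access Deny --protocol Tcp \
      --destination-port-ranges 22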
As you add applications to VMs in Azure, you can block malicious apps, including those not mitigated by
antimalware solutions, by using adaptive application controls. Machine learning automatically applies new
application whitelisting policies across your VMs.
SIEM
As introduced above, Azure Security Center provides unified security management by identifying and fixing
misconfigurations and providing visibility into threats to quickly remediate them. Security Center has grown rapidly
in usage and capabilities including a security information and event management (SIEM)-like functionality called
investigations.
When it comes to cloud workload protection, the goal is to present the information to users within Security Center
in an easy-to-consume manner so that you can address individual threats. Azure Security Center is not intended for
advanced security operations (SecOps) hunting scenarios or to be a SIEM tool.
SIEM and security orchestration and automated response (SOAR) capabilities are delivered in Azure Sentinel.
Azure Security Center and Azure Sentinel are different capabilities with complementary purposes.
Azure Sentinel delivers intelligent security analytics and threat intelligence across the organization, providing a
single solution for alert detection, threat visibility, proactive hunting, and threat response.
Azure Sentinel is your security operations center (SOC) view across the organization, alleviating the stress of
increasingly sophisticated attacks, increasing volumes of alerts, and long resolution timeframes. With Azure Sentinel,
you can:
• Collect data at cloud scale across all users, devices, applications, and infrastructure, both on-premises and
in multiple clouds.
• Integrate curated alerts from Microsoft’s security products like Azure Security Center, and from your non-
Microsoft security solutions.
• Detect previously undetected threats and minimize false positives using Microsoft Intelligent Security
Graph, which uses trillions of signals from Microsoft services and systems around the globe to identify new
and evolving threats. Investigate threats with artificial intelligence and hunt for suspicious activities at scale,
tapping into years of cyber security experience at Microsoft, see section § “Microsoft Cyber Defense
Operations Center” in the Appendix.
• Respond to incidents rapidly with built-in orchestration and automation of common tasks.
SIEMs typically integrate with a broad range of applications, including threat intelligence applications for specific workloads, and the same is true for Azure Sentinel. SecOps teams have the full power of querying against the raw data, using Machine Learning models, and even building their own models.
As such, Azure Security Center is one of the many sources of threat protection information that Azure Sentinel
collects data from, to create a view for the entire organization. Microsoft recommends that customers using Azure
use Azure Security Center for threat protection of workloads such as VMs, containers, storage, and IoT as already
covered.
In just a few clicks, you can connect Azure Security Center to Azure Sentinel. Once the Security Center data is in
Azure Sentinel, you can combine that data with other sources like firewalls, users, and devices, for proactive hunting
and threat mitigation with advanced querying and the power of Machine Learning.
Azure Sentinel is designed to simplify the application of advanced technologies like Machine Learning, User and
Entity Behavior Analytics (UEBA), to the variety of datasets you monitor and is complemented by other Microsoft
Threat Protection solutions that provide specialized investigation of hosts, email, identity attacks, and more.
Resource:
• What is Azure Sentinel?.
Note For information on Azure Resource Manager (ARM), please see the article Azure Resource Manager.
Note For a primer on ARM template, see article Understand the structure and syntax of Azure Resource Manager
templates and whitepaper Getting started with Azure Resource Manager. More in-depth information can be found in the
whitepaper World Class ARM Templates Considerations and Proven Practices.
It’s easy to create Azure Resource Manager templates in Visual Studio Code, an open source integrated development environment (IDE) available on Linux, using Azure Resource Group project templates. You can also generate Azure Resource Manager templates from the Azure portal by clicking the Automation Script button, which is available on the menu bar of every resource in the Azure portal. This creates the Azure Resource Manager template for the given resource and even generates code for building the resource using the Azure CLI, PowerShell, .NET, and others.
After you have an Azure Resource Manager template, you can deploy it to Azure by using PowerShell, the Azure CLI, or Visual Studio Code. Or you can automate its integration, delivery, and/or deployment in adequate DevOps pipelines (see section § “Implementing (secure) DevOps practices for your applications” below). A great example of this infrastructure-as-code approach beyond ARM templates themselves is HashiCorp Terraform.
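A brief sketch of both approaches (file and resource names are hypothetical; az group deployment create is the classic form of the command, since renamed az deployment group create):

    # Deploy an ARM template to a resource group with the Azure CLI
    az group create --name my-rg --location westeurope
    az group deployment create \
      --resource-group my-rg \
      --template-file azuredeploy.json \
      --parameters @azuredeploy.parameters.json
    # The equivalent Terraform workflow using the azurerm provider
    terraform init
    terraform plan -out tfplan
    terraform apply tfplan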
Note The momentum is fantastic on the contribution front as well, with nearly 180 unique contributors to the Terraform provider for Azure Resource Manager. The involvement from the community, combined with our increased 3-week cadence of releases, ensures more coverage of Azure services by Terraform.
Additionally, after customer and community feedback regarding the need for additional Terraform modules for Azure, Microsoft has been working hard at adding high-quality modules and has now doubled the number of Azure modules in the Terraform Registry, bringing it to over 120 modules. We believe all these additional integrations enable you to manage infrastructure as code more easily and simplify managing your cloud environments.
Microsoft and HashiCorp are working together to provide integrated support for Terraform on Azure. Customers
using Terraform on Microsoft's Azure cloud are mutual customers, and both companies are united to provide
troubleshooting and support services. This joint entitlement process provides collaborative support across
companies and platforms while delivering a seamless customer experience. Customers using Terraform Provider for
Azure can file support tickets to Microsoft support. Customers using Terraform on Azure support can file support
tickets to Microsoft or HashiCorp.
Note HashiCorp and Microsoft are longstanding partners in the cloud infrastructure community. In 2017, Microsoft
committed to a multi-year partnership aimed at further integrating Azure services with HashiCorp products. As a result of
this collaboration, organizations can rely on tools like Terraform to create and manage Azure infrastructure.
For more information on HashiCorp Vault and Azure integrations, see page Hashicorp/Azure Integrations.
Resources:
• Azure Resource Manager overview.
• Azure Resource Manager templates.
• Azure Quickstart Templates.
• Use infrastructure automation tools with virtual machines in Azure.
• Terraform with Azure.
Doing automation
The idea behind automation is to transform error-prone, repetitive tasks into automation workflows or series of steps. Automation addresses the need for the business to implement changes faster and with greater reliability. As such, it serves the needs of event management, capacity management, availability management, and so on.
Azure Automation delivers a cloud-based automation and configuration service that provides consistent
management across your Azure and non-Azure environments. It consists of process automation, update
management, and configuration features. Azure Automation provides complete control during deployment,
operations, and decommissioning of workloads and resources.
Azure Automation provides you with the ability to automate frequent, time-consuming, and error-prone cloud
management tasks. This automation helps you focus on work that adds business value. By reducing errors and
boosting efficiency, it also helps to lower your operational costs.
You can integrate Azure services and other public systems that are required in deploying, configuring, and managing your end-to-end processes. The service allows you to author runbooks graphically, in PowerShell, or in Python. By using a Hybrid Runbook Worker, you can unify management by orchestrating across on-premises environments. Webhooks provide a way to fulfill requests and ensure continuous delivery and operations by triggering automation from ITSM, DevOps, and monitoring systems.
Azure Automation has the ability to integrate with source control, which promotes configuration as code: runbooks or configurations can be checked into a source control system.
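A hedged Azure CLI sketch of getting started (the automation commands ship as a CLI extension; names are hypothetical):

    # Create an Automation account and an empty PowerShell runbook
    az extension add --name automation
    az automation account create --name my-automation \
      --resource-group my-rg --location westeurope
    az automation runbook create --automation-account-name my-automation \
      --resource-group my-rg --name nightly-cleanup --type PowerShell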
Resource:
• An introduction to Azure Automation.
This encompasses all the templates for the deployment of the Azure services on top of which these various microservices/components run, or on which they rely. This may also comprise all the configuration files for the above microservices/components if one follows the so-called Twelve-Factor App approach (Configuration-as-Code): store config in the environment.
In this context, a preproduction environment is intended to best reflect the production environment in which the application will ultimately be deployed. A public cloud platform like Azure greatly helps with the ability to reproduce environments with a configuration identical to that of the target environment(s).
If this quality gate is passed, the workload can be pushed to the production environment(s).
Note The Tutorial: Deploy apps to Azure and Azure Stack illustrates how to deploy an application to Azure and Azure Stack using hybrid continuous integration/continuous delivery (CI/CD) pipelines.
Resources:
• What is Azure DevOps?.
• What is Azure Boards?.
• What is Azure Repos?.
• What is Azure Pipelines?.
• What is Azure Test Plans?.
• What is Azure Artifacts?.
All data collected by Azure Monitor fits into one of two fundamental types, metrics and logs:
• Metrics are numerical values that describe some aspect of a system at a particular point in time. They are
lightweight and capable of supporting near real-time scenarios.
• Logs contain different kinds of data organized into records with different sets of properties for each type.
Telemetry such as events and traces are stored as logs in addition to performance data so that it can all be
combined for analysis.
Azure Monitor enables monitoring for workloads and Azure services by collecting metrics, activity logs, and
diagnostic logs.
The metrics collected provide performance statistics for different resources, i.e. how a resource is performing and
the resources that it's consuming.
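Metrics can be queried directly. A sketch with the Azure CLI (the resource ID below is a placeholder):

    # Retrieve per-minute CPU metrics for a virtual machine
    az monitor metrics list \
      --resource "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm" \
      --metric "Percentage CPU" \
      --interval PT1M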
This said, and aside from the above SLAs, to achieve resilience, your applications built on top of the above resiliency foundation also have to take advantage of the resilient services built on and provided by that foundation, as per the shared responsibilities that apply in the public cloud; see section § “Understanding the shared responsibilities’ model for your applications” above.
Note “Information back-up” is covered under the ISO/IEC 27001:2013 standard, specifically addressed in Annex A, domain 10.5.1. For more information, a review of the publicly available ISO standards we are certified against is suggested.
Data protection services (DPS) backs up data for properties based on a defined schedule and upon request of the
properties. Data is retained according to data type identified by property. DPS investigates backup errors and
skipped files and follows up appropriately. See SOC 2 A1.2 Environmental protections, software, data backup
processes.
Beyond these capabilities, Azure Backup is the Azure-based service you can use to back up and restore your data in Azure. It can back up on-premises machines and workloads as well as Azure Virtual Machines (VMs). Azure Backup replaces existing on-premises or off-site backup solutions with a cloud-based solution that is reliable, secure, and cost-competitive, simplifying data protection from ransomware and human error.
Azure Backup offers multiple components that are downloaded and deployed on the appropriate computer, server,
or in the cloud. All Azure Backup components (no matter whether data to be protected are located on-premises or
in the cloud) can be used to back up data to a Backup vault in Azure.
Customers have the responsibility to manage backups of their data. As already introduced, Azure Role-Based Access
Control (RBAC) enables fine-grained access management for Azure. Using RBAC, you can segregate duties within
your team and grant only the amount of access to users that they need to perform their jobs. Azure Backup provides three built-in roles to control backup management operations: Backup Contributor, Backup Operator, and Backup Reader.
Every activity is logged with information about the operations taken on the resources in the subscription, who initiated the operation, when the operation occurred, and what the status of the operation was. This information can be retrieved from the activity logs through the portal, PowerShell, the Azure CLI, or the Insights REST API.
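A minimal Azure CLI sketch of these operations (names are hypothetical):

    # Create a Recovery Services vault, protect a VM, then audit the operations
    az backup vault create --name my-backup-vault \
      --resource-group my-rg --location westeurope
    az backup protection enable-for-vm \
      --vault-name my-backup-vault --resource-group my-rg \
      --vm my-vm --policy-name DefaultPolicy
    az monitor activity-log list --resource-group my-rg --output table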
Resource:
• What is the Azure Backup service?.
Disaster recovery
Aside Azure Backup, Azure Site Recovery also contributes to a business continuity and disaster recovery (BCDR)
strategy.
Note For more information, see page What are the Microsoft OSA practices?.
Note For more information, see page What are the Microsoft SDL practices?.
Microsoft SDL is at the core of Microsoft’s defense-in-depth strategy and is fully aligned with the principles of SD3: Secure by Design, Secure by Default, and Secure in Deployment.
Microsoft SDL is a software development model that includes specific security considerations. As such, Microsoft
SDL conforms to ISO/IEC 27034-1:2011 “Information technology -- Security techniques -- Application security --
Part 1: Overview and concepts”.
In a nutshell, a security requirements analysis must be completed for all system development projects. This analysis
document acts as a framework and includes the identification of possible risks to the finished development project
as well as mitigation strategies which can be implemented and tested during the development phases.
Critical security review and approval checkpoints are included during the system development lifecycle. All members
of software development teams receive appropriate training to stay informed about security basics and recent
trends in security and privacy. Individuals who develop software programs are required to attend at least one
security training class each year.
As shortly described above, a formal review process is implemented to ensure that new or modified source code
authored by Microsoft Azure staff is developed in a secure fashion, no malicious code has been introduced into the
system, and that proper coding practices are followed. The reviewers’ names, review dates, and review results are
documented and maintained for audit purposes.
Similarly, a formal security quality assurance process is implemented to test for vulnerabilities to known security
exposures and exploits. The process includes the use of automated security testing tools and requires that all high
vulnerabilities get remediated before the system will be released to production. Microsoft Azure have implemented
information validation through checking of data inputs as part of the SDL process. Thorough code reviews and
testing are completed during the above Verification phase of the SDL prior to software being put into a production
environment. The code reviews and testing check for cases of SQL injection, format string vulnerabilities, cross-site
scripting (XSS), integer arithmetic, command injection, and buffer overflow vulnerabilities. This satisfies the corresponding lifecycle requirements.
Securely managing the use of open source software components throughout this lifecycle comprises:
• Inventorying open source. Properly managing the use of open source software components first consists in understanding which components are in use. This obviously requires automation. Fortunately, modern agile development practices (e.g. the above-mentioned SDL for Agile) already rely heavily on automated tooling, and so are easily adapted to include capabilities in this area. (Without endorsing here a specific tool or service, there are many tools available in this space, including open source tools like OWASP Dependency Check and NPM Audit, and commercial services like WhiteSource Bolt, among many others; a brief usage sketch follows this list.)
Note Inventory generation takes place at a natural point in the development lifecycle, such as during pull-
request validation or branch merging, with the inventory results being stored centrally and accessible to
appropriate personnel (including the Microsoft Security Incident Response Program).
• Performing security analysis. All identified components must be validated to ensure they are free of security vulnerabilities, to the level of fidelity required by the policy in place.
Note This can be a security benefit because security vulnerabilities are often fixed without explicit public disclosure,
and while the engineering cost of doing this isn’t free, benefits extend beyond security (such as engineering agility, taking
advantage of new features and bug fixes).
• Aligning security response processes. When a vulnerability is found or reported in an open source component, a strategy for managing the process is needed, and it should align directly with the organization’s overall security response plan. At Microsoft, we use the Microsoft Security Response Center (MSRC) to coordinate response activities related to vulnerabilities in open source components (see next section below).
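As promised above, a brief usage sketch of two of the open source tools mentioned (paths and project names are hypothetical):

    # Flag known-vulnerable npm dependencies, reporting high severity and above
    npm audit --audit-level=high
    # Run OWASP Dependency-Check against a source tree and emit an HTML report
    dependency-check.sh --project my-app --scan ./src --format HTML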
Microsoft Azure identifies, reports, and corrects information system flaws through vulnerability management,
incident response management, and patch / configuration management processes.
The Microsoft Azure Security Incident Response Program assists with identifying and reporting of information
system flaws through a global, 24x7 incident response service that works to mitigate the effects of attacks and
malicious activity. The incident response team follows established procedures for incident management,
communication, and recovery, and uses discoverable and predictable interfaces with internal and external partners
alike.
Vulnerability-related data are received from multiple sources of information, which include: the Microsoft Security Response Center (MSRC), vendor Web sites, other third-party services (e.g., Internet Security Systems), and internal / external vulnerability scanning of services. Microsoft Azure Security Service Engineering determines which updates are applicable within the Azure environment. Potential changes are tested in advance.
Patching schedules are defined by Microsoft Azure Security Service Engineering as follows:
• 30 days for high vulnerabilities.
• 90 days for medium/moderate vulnerabilities.
In addition, Microsoft works with a variety of different industry bodies and security experts to understand new
threats and evolving trends. We constantly scan our systems for vulnerabilities, and we contract with external
penetration testers who also constantly scan the systems.
Resource:
• Operational Security for Online Services Overview. Provides insight into how Microsoft applies its resources to online
services in ways that extend beyond traditional standards and methodology to deliver industry-leading capabilities.
Microsoft Azure performs risk assessments of its environment to review the effectiveness of information security
controls and safeguards, as well as to identify new risks. The risks are assessed annually, and the results of the risk
assessment are presented to management through a formal risk assessment report.
The Online Services Security and Compliance (OSSC) team within MCIO manages the Information Security
Management System (ISMS) (and was created to ensure that Microsoft Azure is secure, meets the privacy
requirements of our customers, and complies with complex global regulatory requirements and industry standards).
The OSSC team monitors ongoing effectiveness and improvement of the ISMS control environment by reviewing
security issues, audit results, and monitoring status, and by planning and tracking necessary corrective actions.
Identification, assessment, and prioritization of risks are performed as part of Azure's risk management program
and verified as part of the ISO/IEC 27001:2013 audit (see section § “Clearly stated cloud principles of trust for your
applications and data” above).
In addition, Microsoft employs a method named “Red Teaming” to improve Microsoft Azure security controls and
processes through regular penetration testing. The Red Team is a group of full-time staff within Microsoft that
focuses on performing targeted and persistent attacks against Microsoft infrastructure, platforms, and applications,
but not end-customers’ applications or data.
The job of the Red Team is to simulate the kinds of sophisticated, well-funded targeted attack groups that can pose
a significant risk to cloud services and computing infrastructures. To accomplish this simulation, the team researches
and models known persistent adversaries, in addition to developing their own custom penetration tools and attack
methods.
Because of the sensitive and critical nature of the work, Red Team members at Microsoft are held to very high
standards of security and compliance. They go through extra validation, background screening, and training before
they are allowed to engage in any attack scenarios. Although no end-customer data is deliberately targeted by the
Red Team, they maintain the same Access To Customer Data (ATCD) requirements as service operations personnel
that deploy, maintain, and administer Microsoft Azure (see section § “Access to customer data by Microsoft
personnel” above). The Red Team abides by a strict code of conduct that prohibits intentional access or destruction
of customer data, or disruptions to customer Service Level Agreements (SLAs).
A different group, the Blue Team, is tasked with defending Microsoft Azure and related infrastructure from attack,
not only from the Red Team but from any other source as well. The Blue Team is composed of dedicated security responders as well as representatives from Microsoft Azure Engineering and Operations. The Blue Team follows
established security processes and uses the latest tools and technologies to detect and respond to attacks and
penetration. The Blue Team does not know when or how the Red Team’s attacks will occur or what methods may
be used - in fact, when a breach attempt is detected, the team does not know if it is a Red Team attack or an actual
attack from a real-world adversary. For this reason, the Blue Team is on-call 24x7, 365 days a year, and must react
to Red Team breaches the same way it would for any other adversary.
Microsoft understands that security assessment is also an important part of customer application development and
deployment. Therefore, Microsoft has established a policy for customers to carry out authorized penetration testing
on their applications hosted in Microsoft Azure. Because such testing can be indistinguishable from a real attack, it
is critical that customers conduct penetration testing only after notifying Microsoft. Penetration testing must be
conducted in accordance with Microsoft terms and conditions.
Resources:
• Microsoft Cloud Red Teaming. Explores the Red Teaming method, how attacks are conducted and defended against,
and the history and rationale behind the practice.
• Red vs. Blue - Internal security penetration testing of Microsoft Azure. A brief video explaining the Azure penetration
testing approach and discussing the roles of the Red and Blue teams.
• Penetration testing. Explains the process by which customers who wish to formally document upcoming penetration
testing engagements against Microsoft Azure are encouraged to fill out the Azure Service Penetration Testing
Notification form.
Important note While notifying Microsoft of pen testing activities is no longer required, customers must still comply with the Microsoft Cloud Unified Penetration Testing Rules of Engagement.
Important note If, during a penetration testing, you believe you have discovered a potential security flaw related to
the Microsoft Cloud or any other Microsoft service, please report it to Microsoft within 24 hours by following the instructions
on the Report a Computer Security Vulnerability page. Once submitted, you agree that you will not disclose this vulnerability
information publicly or to any third party until you hear back from Microsoft that the vulnerability has been fixed. All
vulnerabilities reported must follow the Coordinated Vulnerability Disclosure principle.
Microsoft follows a 5-step incident response process when managing both security and availability incidents for the
Azure services.
Important note The form Microsoft Online Services Security Incident and Abuse Reporting is available to report
suspected security issues or abuse of Microsoft Azure. This includes malicious network activity originating from a Microsoft
IP address. It also includes distribution of malicious content or other illicit or illegal material through Microsoft Azure.
The goal for both types is to restore normal service security and operations as quickly as possible after an issue is detected and an investigation is started. The response is implemented using a five-stage process: detect, assess, diagnose, stabilize and recover, and close.
If during the investigation of a security event, Microsoft becomes aware that customer data has been accessed by
an unlawful or unauthorized party, the security incident manager will immediately begin execution of the Customer
Security Incident Notification Process. This can occur at any point of the incident lifecycle, but usually begins during
the Assess or Diagnose phases. The security incident manager only needs reasonable suspicion that a reportable
event has occurred to begin execution of this process. The investigation and mitigation need not be completed
before this process begins in parallel.
The goal of the customer security incident notification process is to provide impacted customers with accurate,
actionable, and timely notice when their customer data has been breached. Such notices may also be required to
meet specific legal requirements.
Microsoft France
39 Quai du Président Roosevelt
92130 Issy-Les-Moulineaux
The reproduction in part or in full of this document, and of the associated trademarks and logos, without
the written permission of Microsoft France, is forbidden under French and international law applicable to
intellectual property.
MICROSOFT EXCLUDES ANY EXPRESS, IMPLICIT OR LEGAL GUARANTEE RELATING TO THE INFORMATION
IN THIS DOCUMENT.
Microsoft, Azure, Office 365, Microsoft 365, Dynamics 365 and other names of products and services are, or
may be, registered trademarks and/or commercial brands in the United States and/or in other countries.