SAP HANA Workload On Azure

Contents

Overview
Get started
Certifications
SAP HANA on Azure (Large Instances)
Overview
What is SAP HANA on Azure (Large Instances)?
Know the terms
Certification
Available SKUs for HLI
Sizing
Onboarding requirements
SAP HANA data tiering and extension nodes
Operations model and responsibilities
Compatible Operating Systems
Architecture
General architecture
Network architecture
Storage architecture
HLI supported scenarios
Infrastructure and connectivity
HLI deployment
Connecting Azure VMs to HANA Large Instances
Connecting a VNet to HANA Large Instance ExpressRoute
Additional network requirements
Install SAP HANA
Validate the configuration
Sample HANA Installation
High availability and disaster recovery
Options and considerations
Backup and restore
Principles and preparation
Disaster recovery failover procedure
Troubleshoot and monitor
Monitoring HLI
Monitoring and troubleshooting from HANA side
How to
Azure HANA Large Instances control through Azure portal
Manage BareMetal Instances through the Azure portal
HA Setup with STONITH
OS Backup for Type II SKUs
Enable Kdump for HANA Large Instances
OS Upgrade for HANA Large Instances
Setting up SMT server for SUSE Linux
HLI to Azure VM migration
Buy an SAP HANA Large Instances reservation
SAP HANA on Azure Virtual Machines
Installation of SAP HANA on Azure VMs
S/4 HANA or BW/4 HANA SAP CAL deployment guide
SAP HANA infrastructure configurations and operations on Azure
SAP HANA Azure virtual machine storage configurations
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
SAP HANA Availability in Azure Virtual Machines
SAP HANA on Azure Availability overview
SAP HANA on Azure Availability within one Azure region
SAP HANA on Azure Availability across Azure regions
Set up SAP HANA System Replication on SLES
Set up SAP HANA System Replication on RHEL
Set up SAP HANA System Replication with ANF on RHEL
Troubleshoot SAP HANA scale-out and Pacemaker on SLES
SAP HANA scale-out HSR with Pacemaker on RHEL
SAP HANA scale-out with standby node with Azure NetApp Files on SLES
SAP HANA scale-out with standby node with Azure NetApp Files on RHEL
SAP HANA backup overview
SAP HANA file level backup
SAP NetWeaver and Business One on Azure Virtual Machines
SAP workload planning and deployment checklist
Plan and implement SAP NetWeaver on Azure
Azure Storage types for SAP workload
SAP workload on Azure virtual machine supported scenarios
What SAP software is supported for Azure deployments
SAP NetWeaver Deployment guide
DBMS deployment guides for SAP workload
General Azure Virtual Machines DBMS deployment for SAP workload
SQL Server Azure Virtual Machines DBMS deployment for SAP workload
Oracle Azure Virtual Machines DBMS deployment for SAP workload
IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload
High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server
High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server
SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
SAP MaxDB, liveCache and Content Server deployment on Azure
SAP HANA Availability in Azure Virtual Machines
SAP HANA on Azure Availability overview
SAP HANA on Azure Availability within one Azure region
SAP HANA on Azure Availability across Azure regions
SAP Business One on Azure Virtual Machines
SAP IDES on Windows/SQL Server SAP CAL deployment guide
SAP LaMa connector for Azure
High Availability (HA) on Windows and Linux
Overview
High Availability Architecture
HA Architecture and Scenarios
Higher Availability Architecture and Scenarios
SAP workload configurations with Azure Availability Zones
HA on Windows with Shared Disk for (A)SCS Instance
HA on Windows with SOFS File Share for (A)SCS Instance
HA for SAP NetWeaver on Windows with Azure NetApp Files (SMB)
HA on SUSE Linux for (A)SCS Instance
HA on SUSE Linux for (A)SCS Instance with Azure NetApp Files
HA on Red Hat Enterprise Linux for (A)SCS Instance
HA on Red Hat Enterprise Linux for (A)SCS Instance with Azure NetApp Files
Azure Infrastructure Preparation
Windows with Shared Disk for (A)SCS Instance
Windows with SOFS File Share for (A)SCS Instance
High availability for NFS on Azure VMs on SLES
GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver
Pacemaker on SLES
Pacemaker on RHEL
Public endpoint connectivity for VMs using Azure Standard Load Balancer in SAP
high-availability scenarios
SAP Installation
Windows with Shared Disk for (A)SCS Instance
Windows with SOFS File Share for (A)SCS Instance
HA for SAP NetWeaver on Windows with Azure NetApp Files (SMB)
SUSE Linux with NFS for (A)SCS Instance
SUSE Linux with NFS for (A)SCS Instance with Azure NetApp Files
High availability for SAP NetWeaver on Red Hat Enterprise Linux
Red Hat Enterprise Linux with NFS for (A)SCS Instance with Azure NetApp Files
SAP Multi-SID
Windows with Azure Shared Disk for (A)SCS Instance
Windows with Shared Disk for (A)SCS Instance
Windows with SOFS File Share for (A)SCS Instance
SLES with Pacemaker multi-SID for A(SCS) Instance
RHEL with Pacemaker multi-SID for A(SCS) Instance
Azure Site Recovery for SAP Disaster Recovery
Azure Proximity Placement Groups for optimal network latency with SAP applications
SAP BusinessObjects Business Intelligence platform on Azure
SAP BusinessObjects BI platform planning and implementation guide on Azure
SAP BusinessObjects BI platform deployment guide for linux on Azure
Integrate Azure AD with SAP applications
Provision users from SAP SuccessFactors to Active Directory
Provision users from SAP SuccessFactors to Azure AD
Write-back users from Azure AD to SAP SuccessFactors
Provision users to SAP Cloud Platform Identity Authentication Service
Configure SSO with SAP Cloud Platform Identity Authentication Service
Configure SSO with SAP SuccessFactors
Configure SSO with SAP Analytics Cloud
Configure SSO with SAP Fiori
Configure SSO with SAP Qualtrics
Configure SSO with SAP Ariba
Configure SSO with SAP Concur Travel and Expense
Configure SSO with SAP Cloud Platform
Configure SSO with SAP NetWeaver
Configure SSO with SAP Business ByDesign
Configure SSO with SAP HANA
Configure SSO with SAP Cloud for Customer
Configure SSO with SAP Fiori Launchpad
Azure Services Integration into SAP
Use SAP HANA in Power BI Desktop
DirectQuery and SAP HANA
Use the SAP BW Connector in Power BI Desktop
Azure Data Factory offers SAP HANA and Business Warehouse data integration
Azure Monitor for SAP Solutions
Azure Monitor for SAP Solutions Overview
Azure Monitor for SAP Solutions Providers
Configure Azure Monitor for SAP Solutions - Portal
Configure Azure Monitor for SAP Solutions - Azure PowerShell
Azure Monitor for SAP Solutions FAQ
Reference
Azure CLI
Azure CLI
Azure PowerShell
Resources
Azure Roadmap
Use Azure to host and run SAP workload scenarios
12/22/2020 • 20 minutes to read

When you use Microsoft Azure, you can reliably run your mission-critical SAP workloads and scenarios on a
scalable, compliant, and enterprise-proven platform. You get the scalability, flexibility, and cost savings of Azure.
With the expanded partnership between Microsoft and SAP, you can run SAP applications across development,
test, and production scenarios in Azure and be fully supported. From SAP NetWeaver to SAP S/4HANA, SAP
BI on Linux to Windows, and SAP HANA to SQL, we've got you covered.
Besides hosting SAP NetWeaver scenarios with the different DBMS on Azure, you can host other SAP workload
scenarios, like SAP BI on Azure.
What sets Azure apart for SAP HANA is a unique offer: to host SAP HANA scenarios that demand more memory
and CPU resources, Azure offers the use of customer-dedicated bare-metal hardware. Use this solution to run
SAP HANA deployments that require up to 24 TB (120 TB scale-out) of memory for S/4HANA or other SAP HANA
workloads.
Hosting SAP workload scenarios in Azure can also create requirements for identity integration and single sign-on.
This situation can occur when you use Azure Active Directory (Azure AD) to connect different SAP components
and SAP software-as-a-service (SaaS) or platform-as-a-service (PaaS) offers. Such integration and single
sign-on scenarios with Azure AD and SAP entities are described and documented in the section "Azure AD SAP
identity integration and single sign-on."

Changes to the SAP workload section


Changes to documents in the SAP on Azure workload section are listed at the end of this article. The entries in the
change log are kept for around 180 days.

You want to know


If you have specific questions, this section points you to the specific documents or flows that answer them. You
want to know:
Which Azure VMs and HANA Large Instance units are supported for which SAP software releases and which
operating system versions? Read the document What SAP software is supported for Azure deployment for
answers and the process to find the information.
Which SAP deployment scenarios are supported with Azure VMs and HANA Large Instances? Information about
the supported scenarios can be found in the documents:
SAP workload on Azure virtual machine supported scenarios
Supported scenarios for HANA Large Instance
Which Azure services, Azure VM types, and Azure storage services are available in the different Azure regions?
Check the site Products available by region.
Are third-party HA frameworks, besides Windows and Pacemaker, supported? Check the bottom part of SAP
support note #1928533.
Which Azure storage is best for my scenario? Read Azure Storage types for SAP workload.
Is the Red Hat kernel in Oracle Enterprise Linux supported by SAP? Read SAP support note #1565179.
Why are the Azure Da(s)v4/Ea(s) VM families not certified for SAP HANA? The Azure Das/Eas VM families are
based on AMD processor driven hardware, and SAP HANA does not support AMD processors, not even in
virtualized scenarios.
Why am I still getting the message 'The cpu flags for the RDTSCP instruction or the cpu flags for constant_tsc
or nonstop_tsc are not set or current_clocksource and available_clocksource are not correctly configured' with
SAP HANA, even though I am running the most recent Linux kernels? For the answer, check SAP support note
#2791572; one possible way to inspect these settings is sketched after this list.
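The following is a minimal diagnostic sketch, assuming a Linux host with the standard procfs/sysfs layout, that
surfaces the CPU flags and clocksource values the HANA message refers to. It is a hypothetical helper for a quick
check only; SAP support note #2791572 remains the authoritative guidance.

```python
#!/usr/bin/env python3
# Minimal sketch: report the CPU flags and clocksource settings that the
# SAP HANA message refers to. Assumes a Linux host (procfs/sysfs paths).

REQUIRED_FLAGS = {"rdtscp", "constant_tsc", "nonstop_tsc"}

def cpu_flags():
    # Read the flags of the first CPU entry in /proc/cpuinfo.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

def read_sysfs(path):
    with open(path) as f:
        return f.read().strip()

missing = REQUIRED_FLAGS - cpu_flags()
print("Missing CPU flags:", ", ".join(sorted(missing)) or "none")
print("current_clocksource:",
      read_sysfs("/sys/devices/system/clocksource/clocksource0/current_clocksource"))
print("available_clocksource:",
      read_sysfs("/sys/devices/system/clocksource/clocksource0/available_clocksource"))
```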

SAP HANA on Azure (Large Instances)


A series of documents leads you through SAP HANA on Azure (Large Instances), or HANA Large Instances for
short. For information on HANA Large Instances, start with the document Overview and architecture of SAP
HANA on Azure (Large Instances) and go through the related documentation in the HANA Large Instance section.

SAP HANA on Azure virtual machines


This section of the documentation covers different aspects of SAP HANA. As a prerequisite, you should be familiar
with the principal services of Azure that provide elementary services of Azure IaaS. So, you need knowledge of
Azure compute, storage, and networking. Many of these subjects are handled in the SAP NetWeaver-related Azure
planning guide.

SAP NetWeaver deployed on Azure virtual machines


This section lists planning and deployment documentation for SAP NetWeaver, SAP LaMa, and Business One on
Azure. The documentation focuses on the basics and the use of non-HANA databases with an SAP workload on
Azure. The documents and articles for high availability are also the foundation for SAP HANA high availability in
Azure.

SAP NetWeaver and S/4HANA high availability


High availability of the SAP application layer and DBMS is documented in detail, starting with the document
Azure Virtual Machines high availability for SAP NetWeaver.

Integrate Azure AD with SAP Services


In this section, you can find information on how to configure SSO with most of the SAP SaaS and PaaS services,
NetWeaver, and Fiori.

Documentation on integration of Azure services into SAP components


In this section, you find documents about Microsoft Power BI integration into SAP data sources as well as Azure
Data Factory integration into SAP BW.

Change Log
12/21/2020: Add new certifications to SKUs of HANA Large Instances in Available SKUs for HLI
12/12/2020: Added pointer to SAP note clarifying details on Oracle Enterprise Linux support by SAP to What
SAP software is supported for Azure deployments
11/26/2020: Adapt SAP HANA Azure virtual machine storage configurations and Azure Storage types for SAP
workload to changed single VM SLAs
11/05/2020: Changing link to new SAP note about HANA supported file system types in SAP HANA Azure
virtual machine storage configurations
10/26/2020: Changing some tables for Azure premium storage configuration to clarify provisioned versus
burst throughput in SAP HANA Azure virtual machine storage configurations
10/22/2020: Change in HA for SAP NW on Azure VMs on SLES for SAP applications, HA for SAP NW on Azure
VMs on SLES with ANF, HA for SAP NW on Azure VMs on RHEL for SAP applications and HA for SAP NW on
Azure VMs on RHEL with ANF to adjust the recommendation for net.ipv4.tcp_keepalive_time
10/16/2020: Change in HA of IBM Db2 LUW on Azure VMs on SLES with Pacemaker, HA for SAP NW on Azure
VMs on RHEL for SAP applications, HA of IBM Db2 LUW on Azure VMs on RHEL, HA for SAP NW on Azure
VMs on RHEL multi-SID guide, HA for SAP NW on Azure VMs on RHEL with ANF, HA for SAP NW on Azure
VMs on SLES for SAP applications, HA for SAP NW on Azure VMs on SLES multi-SID guide, HA for SAP NW
on Azure VMs on SLES with ANF for SAP applications, HA for NFS on Azure VMs on SLES, HA of SAP HANA on
Azure VMs on SLES, HA for SAP HANA scale-up with ANF on RHEL, HA of SAP HANA on Azure VMs on RHEL,
SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL, Prepare Azure infrastructure for SAP
ASCS/SCS with WSFC and shared disk, multi-SID HA guide for SAP ASCS/SCS with WSFC and Azure shared
disk and multi-SID HA guide for SAP ASCS/SCS with WSFC and shared disk to add a statement that floating IP
is not supported in load-balancing scenarios on secondary IPs
10/16/2020: Adding documentation to control storage snapshots of HANA Large Instances in Backup and
restore of SAP HANA on HANA Large Instances
10/15/2020: Release of SAP BusinessObjects BI Platform on Azure documentation, SAP BusinessObjects BI
platform planning and implementation guide on Azure and SAP BusinessObjects BI platform deployment
guide for linux on Azure
10/05/2020: Release of SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL configuration guide
09/30/2020: Change in High availability of SAP HANA on Azure VMs on RHEL, HA for SAP HANA scale-up
with ANF on RHEL and Setting up Pacemaker on RHEL in Azure to adapt the instructions for RHEL 8.1
09/29/2020: Making restrictions and recommendations around usage of PPG more obvious in the article
Azure proximity placement groups for optimal network latency with SAP applications
09/28/2020: Adding a new storage operation guide for SAP HANA using Azure NetApp Files with the
document NFS v4.1 volumes on Azure NetApp Files for SAP HANA
09/23/2020: Add new certified SKUs for HLI in Available SKUs for HLI
09/20/2020: Changes in documents Considerations for Azure Virtual Machines DBMS deployment for SAP
workload, SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver, Azure Virtual Machines
Oracle DBMS deployment for SAP workload, IBM Db2 Azure Virtual Machines DBMS deployment for SAP
workload to adapt to new configuration suggestion that recommend separation of DBMS binaries and SAP
binaries into different Azure disks. Also adding Ultra disk recommendations to the different guides.
09/08/2020: Change in High availability of SAP HANA on Azure VMs on SLES to clarify stonith definitions
09/03/2020: Change in SAP HANA Azure virtual machine storage configurations to adapt to minimal 2 IOPS
per 1 GB capacity with Ultra disk
09/02/2020: Change in Available SKUs for HLI to get more transparent in what SKUs are HANA certified
August 25, 2020: Change in HA for SAP NW on Azure VMs on SLES with ANF to fix typo
August 25, 2020: Change in HA guide for SAP ASCS/SCS with WSFC and shared disk, Prepare Azure
infrastructure for SAP ASCS/SCS with WSFC and shared disk and Install SAP NW HA with WSFC and shared
disk to introduce the option of using Azure shared disk and document SAP ERS2 architecture
August 25, 2020: Release of multi-SID HA guide for SAP ASCS/SCS with WSFC and Azure shared disk
August 25, 2020: Change in HA guide for SAP ASCS/SCS with WSFC and Azure NetApp Files(SMB), Prepare
Azure infrastructure for SAP ASCS/SCS with WSFC and file share, multi-SID HA guide for SAP ASCS/SCS with
WSFC and shared disk and multi-SID HA guide for SAP ASCS/SCS with WSFC and SOFS file share as a result
of the content updates and restructuring in the HA guides for SAP ASCS/SCS with WSFC and shared disk
August 21, 2020: Adding new OS release into Compatible Operating Systems for HANA Large Instances as
available operating system for HLI units of type I and II
August 18, 2020: Release of HA for SAP HANA scale-up with ANF on RHEL
August 17, 2020: Add information about using Azure Site Recovery for moving SAP NetWeaver systems from
on-premises to Azure in article Azure Virtual Machines planning and implementation for SAP NetWeaver
08/14/2020: Adding disk configuration advice for Db2 in article IBM Db2 Azure Virtual Machines DBMS
deployment for SAP workload
August 11, 2020: Adding RHEL 7.6 into Compatible Operating Systems for HANA Large Instances as available
operating system for HLI units of type I
August 10, 2020: Introducing cost conscious SAP HANA storage configuration in SAP HANA Azure virtual
machine storage configurations and making some updates to SAP workloads on Azure: planning and
deployment checklist
August 04, 2020: Change in Setting up Pacemaker on SLES in Azure and Setting up Pacemaker on RHEL in
Azure to emphasize the importance of reliable name resolution for Pacemaker clusters
August 04, 2020: Change in SAP NW HA on WSFC with file share, SAP NW HA on WSFC with shared disk, HA
for SAP NW on Azure VMs, HA for SAP NW on Azure VMs on SLES, HA for SAP NW on Azure VMs on SLES
with ANF, HA for SAP NW on Azure VMs on SLES multi-SID guide, High availability for SAP NetWeaver on
Azure VMs on RHEL, HA for SAP NW on Azure VMs on RHEL with ANF and HA for SAP NW on Azure VMs on
RHEL multi-SID guide to clarify the use of parameter enque/encni/set_so_keepalive
July 23, 2020: Added the Save on SAP HANA Large Instances with an Azure reservation article explaining what
you need to know before you buy an SAP HANA Large Instances reservation and how to make the purchase
July 16, 2020: Describe how to use Azure PowerShell to install new VM Extension for SAP in the Deployment
Guide
July 04, 2020: Release of Azure Monitor for SAP solutions (preview)
July 01, 2020: Suggesting less expensive storage configuration based on Azure premium storage burst
functionality in document SAP HANA Azure virtual machine storage configurations
June 24, 2020: Change in Setting up Pacemaker on SLES in Azure to release new improved Azure Fence Agent
and more resilient STONITH configuration for devices, based on Azure Fence Agent
June 24, 2020: Change in Setting up Pacemaker on RHEL in Azure to release more resilient STONITH
configuration
June 23, 2020: Changes to Azure Virtual Machines planning and implementation for SAP NetWeaver guide
and introduction of Azure Storage types for SAP workload guide
06/22/2020: Add installation steps for new VM Extension for SAP to the Deployment Guide
June 16, 2020: Change in Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios
to add a link to SUSE Public Cloud Infrastructure 101 documentation
June 10, 2020: Adding new HLI SKUs into Available SKUs for HLI and SAP HANA (Large Instances) storage
architecture
May 21, 2020: Change in Setting up Pacemaker on SLES in Azure and Setting up Pacemaker on RHEL in Azure
to add a link to Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios
May 19, 2020: Add important message not to use root volume group when using LVM for HANA related
volumes in SAP HANA Azure virtual machine storage configurations
May 19, 2020: Add new supported OS for HANA Large Instance Type II in Compatible Operating Systems for
HANA Large Instances
May 12, 2020: Change in Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios
to update links and add information for 3rd party firewall configuration
May 11, 2020: Change in High availability of SAP HANA on Azure VMs on SLES to set resource stickiness to 0
for the netcat resource, as that leads to more streamlined failover
May 05, 2020: Changes in Azure Virtual Machines planning and implementation for SAP NetWeaver to
express that Gen2 deployments are available for Mv1 VM family
April 24, 2020: Changes in SAP HANA scale-out with standby node on Azure VMs with ANF on SLES, in SAP
HANA scale-out with standby node on Azure VMs with ANF on RHEL, High availability for SAP NetWeaver on
Azure VMs on SLES with ANF and High availability for SAP NetWeaver on Azure VMs on RHEL with ANF to
add clarification that the IP addresses for ANF volumes are automatically assigned
April 22, 2020: Change in High availability of SAP HANA on Azure VMs on SLES to remove meta attribute
is-managed from the instructions, as it conflicts with placing the cluster in or out of maintenance mode
April 21, 2020: Added SQL Azure DB as supported DBMS for SAP (Hybris) Commerce Platform 1811 and later
in articles What SAP software is supported for Azure deployments and SAP certifications and configurations
running on Microsoft Azure
April 16, 2020: Added SAP HANA as supported DBMS for SAP (Hybris) Commerce Platform in articles What
SAP software is supported for Azure deployments and SAP certifications and configurations running on
Microsoft Azure
April 13, 2020: Correct to exact SAP ASE release numbers in SAP ASE Azure Virtual Machines DBMS
deployment for SAP workload
April 07, 2020: Change in Setting up Pacemaker on SLES in Azure to clarify cloud-netconfig-azure instructions
April 06, 2020: Changes in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on
SLES and in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on RHEL to
remove references to NetApp TR-4435 (replaced by TR-4746)
March 31, 2020: Change in High availability of SAP HANA on Azure VMs on SLES and High availability of SAP
HANA on Azure VMs on RHEL to add instructions how to specify stripe size when creating striped volumes
March 27, 2020: Change in High availability for SAP NW on Azure VMs on SLES with ANF for SAP applications
to align the file system mount options to NetApp TR-4746 (remove the sync mount option)
March 26, 2020: Change in High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide to add
reference to NetApp TR-4746
March 26, 2020: Change in High availability for SAP NetWeaver on Azure VMs on SLES for SAP applications,
High availability for SAP NetWeaver on Azure VMs on SLES with Azure NetApp Files for SAP applications, High
availability for NFS on Azure VMs on SLES, High availability for SAP NetWeaver on Azure VMs on RHEL multi-
SID guide, High availability for SAP NetWeaver on Azure VMs on RHEL for SAP applications and High
availability for SAP NetWeaver on Azure VMs on RHEL with Azure NetApp Files for SAP applications to update
diagrams and clarify instructions for Azure Load Balancer backend pool creation
March 19, 2020: Major revision of document Quickstart: Manual installation of single-instance SAP HANA on
Azure Virtual Machines to Installation of SAP HANA on Azure Virtual Machines
March 17, 2020: Change in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to remove SBD
configuration setting that is no longer necessary
March 16 2020: Clarification of column certification scenario in SAP HANA IaaS certified platform in What SAP
software is supported for Azure deployments
03/11/2020: Change in SAP workload on Azure virtual machine supported scenarios to clarify multiple
databases per DBMS instance support
March 11, 2020: Change in Azure Virtual Machines planning and implementation for SAP NetWeaver
explaining Generation 1 and Generation 2 VMs
March 10, 2020: Change in SAP HANA Azure virtual machine storage configurations to clarify real existing
throughput limits of ANF
March 09, 2020: Change in High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server for SAP applications, High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server with Azure NetApp Files for SAP applications, High availability for NFS on Azure VMs on SUSE Linux
Enterprise Server, Setting up Pacemaker on SUSE Linux Enterprise Server in Azure, High availability of IBM
Db2 LUW on Azure VMs on SUSE Linux Enterprise Server with Pacemaker, High availability of SAP HANA on
Azure VMs on SUSE Linux Enterprise Server and High availability for SAP NetWeaver on Azure VMs on SLES
multi-SID guide to update cluster resources with resource agent azure-lb
March 05, 2020: Structure changes and content changes for Azure Regions and Azure Virtual machines in
Azure Virtual Machines planning and implementation for SAP NetWeaver
03/03/2020: Change in High availability for SAP NW on Azure VMs on SLES with ANF for SAP applications to
change to more efficient ANF volume layout
March 01, 2020: Reworked Backup guide for SAP HANA on Azure Virtual Machines to include Azure Backup
service. Reduced and condensed content in SAP HANA Azure Backup on file level and deleted a third
document dealing with backup through disk snapshot. Content gets handled in Backup guide for SAP HANA
on Azure Virtual Machines
February 27, 2020: Change in High availability for SAP NW on Azure VMs on SLES for SAP applications, High
availability for SAP NW on Azure VMs on SLES with ANF for SAP applications and High availability for SAP
NetWeaver on Azure VMs on SLES multi-SID guide to adjust "on fail" cluster parameter
February 26, 2020: Change in SAP HANA Azure virtual machine storage configurations to clarify file system
choice for HANA on Azure
February 26, 2020: Change in High availability architecture and scenarios for SAP to include the link to the HA
for SAP NetWeaver on Azure VMs on RHEL multi-SID guide
February 26, 2020: Change in High availability for SAP NW on Azure VMs on SLES for SAP applications, High
availability for SAP NW on Azure VMs on SLES with ANF for SAP applications, Azure VMs high availability for
SAP NetWeaver on RHEL and Azure VMs high availability for SAP NetWeaver on RHEL with Azure NetApp
Files to remove the statement that multi-SID ASCS/ERS cluster is not supported
February 26, 2020: Release of High availability for SAP NetWeaver on Azure VMs on RHEL multi-SID guide to
add a link to the SUSE multi-SID cluster guide
02/25/2020: Change in High availability architecture and scenarios for SAP to add links to newer HA articles
February 25, 2020: Change in High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise
Server with Pacemaker to point to document that describes access to public endpoint with Standard Azure
Load balancer
February 21, 2020: Complete revision of the article SAP ASE Azure Virtual Machines DBMS deployment for
SAP workload
February 21, 2020: Change in SAP HANA Azure virtual machine storage configuration to represent new
recommendation in stripe size for /hana/data and adding setting of I/O scheduler
February 21, 2020: Changes in HANA Large Instance documents to represent newly certified SKUs of S224
and S224m
February 21, 2020: Change in Azure VMs high availability for SAP NetWeaver on RHEL and Azure VMs high
availability for SAP NetWeaver on RHEL with Azure NetApp Files to adjust the cluster constraints for enqueue
server replication 2 architecture (ENSA2)
February 20, 2020: Change in High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide to
add a link to the SUSE multi-SID cluster guide
February 13, 2020: Changes to Azure Virtual Machines planning and implementation for SAP NetWeaver to
implement links to new documents
February 13, 2020: Added new document SAP workload on Azure virtual machine supported scenario
February 13, 2020: Added new document What SAP software is supported for Azure deployment
February 13, 2020: Change in High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux
Server to point to document that describes access to public endpoint with Standard Azure Load balancer
February 13, 2020: Add the new VM types to SAP certifications and configurations running on Microsoft Azure
February 13, 2020: Add new SAP support notes SAP workloads on Azure: planning and deployment checklist
February 13, 2020: Change in Azure VMs high availability for SAP NetWeaver on RHEL and Azure VMs high
availability for SAP NetWeaver on RHEL with Azure NetApp Files to align the cluster resources timeouts to the
Red Hat timeout recommendations
February 11, 2020: Release of SAP HANA on Azure Large Instance migration to Azure Virtual Machines
February 07, 2020: Change in Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA
scenarios to update sample NSG screenshot
February 03, 2020: Change in High availability for SAP NW on Azure VMs on SLES for SAP applications and
High availability for SAP NW on Azure VMs on SLES with ANF for SAP applications to remove the warning
about using dash in the host names of cluster nodes on SLES
January 28, 2020: Change in High availability of SAP HANA on Azure VMs on RHEL to align the SAP HANA
cluster resources timeouts to the Red Hat timeout recommendations
January 17, 2020: Change in Azure proximity placement groups for optimal network latency with SAP
applications to change the section of moving existing VMs into a proximity placement group
January 17, 2020: Change in SAP workload configurations with Azure Availability Zones to point to procedure
that automates measurements of latency between Availability Zones
January 16, 2020: Change in How to install and configure SAP HANA (Large Instances) on Azure to adapt OS
releases to HANA IaaS hardware directory
January 16, 2020: Changes in High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide to
add instructions for SAP systems, using enqueue server 2 architecture (ENSA2)
January 10, 2020: Changes in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files
on SLES and in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on RHEL to add
instructions on how to make nfs4_disable_idmapping changes permanent.
January 10, 2020: Changes in High availability for SAP NetWeaver on Azure VMs on SLES with Azure NetApp
Files for SAP applications and in Azure Virtual Machines high availability for SAP NetWeaver on RHEL with
Azure NetApp Files for SAP applications to add instructions how to mount Azure NetApp Files NFSv4 volumes.
December 23, 2019: Release of High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide
December 18, 2019: Release of SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files
on RHEL
SAP certifications and configurations running on Microsoft Azure
12/22/2020 • 3 minutes to read

SAP and Microsoft have a long history of working together in a strong partnership that has mutual benefits for
their customers. Microsoft is constantly updating its platform and submitting new certification details to SAP in
order to ensure Microsoft Azure is the best platform on which to run your SAP workloads. The following tables
outline Azure-supported configurations and the growing list of SAP certifications. This overview might deviate
here and there from the official SAP lists. How to get to the detailed data is documented in the article What SAP
software is supported for Azure deployments.

SAP HANA certifications


References:
SAP HANA certified IaaS platforms for SAP HANA support for native Azure VMs and HANA Large Instances.

SAP PRODUCT | SUPPORTED OS | AZURE OFFERINGS

SAP HANA Developer Edition (including the HANA client software comprised of SQLODBC, ODBO-Windows only, ODBC, JDBC drivers, HANA studio, and HANA database) | Red Hat Enterprise Linux, SUSE Linux Enterprise | D-Series VM family

Business One on HANA | SUSE Linux Enterprise | DS14_v2, M32ts, M32ls, M64ls, M64s; SAP HANA Certified IaaS Platforms

SAP S/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | Controlled Availability for GS5. Full support for M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, SAP HANA on Azure (Large instances); SAP HANA Certified IaaS Platforms

Suite on HANA, OLTP | Red Hat Enterprise Linux, SUSE Linux Enterprise | M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, SAP HANA on Azure (Large instances); SAP HANA Certified IaaS Platforms

HANA Enterprise for BW, OLAP | Red Hat Enterprise Linux, SUSE Linux Enterprise | GS5, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, SAP HANA on Azure (Large instances); SAP HANA Certified IaaS Platforms

SAP BW/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | GS5, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, SAP HANA on Azure (Large instances); SAP HANA Certified IaaS Platforms

Be aware that SAP uses the term 'clustering' in SAP HANA Certified IaaS Platforms as a synonym for 'scale-out'
and NOT for high-availability 'clustering'.

SAP NetWeaver certifications


Microsoft Azure is certified for the following SAP products, with full support from Microsoft and SAP. References:
1928533 - SAP Applications on Azure: Supported Products and Azure VM types for all SAP NetWeaver based
applications, including SAP TREX, SAP LiveCache, and SAP Content Server, and for all databases, excluding SAP
HANA.

SAP PRODUCT | GUEST OS | RDBMS | VIRTUAL MACHINE TYPES

SAP Business Suite Software | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2

SAP Business All-in-One | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2

SAP BusinessObjects BI | Windows | N/A | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2

SAP NetWeaver | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2

Other SAP Workload supported on Azure


SAP PRODUCT | GUEST OS | RDBMS | VIRTUAL MACHINE TYPES

SAP Business One on SQL Server | Windows | SQL Server | All NetWeaver certified VM types; SAP Note #928839

SAP BPC 10.01 MS SP08 | Windows and Linux | | All NetWeaver certified VM types; SAP Note #2451795

SAP Business Objects BI platform | Windows and Linux | | SAP Note #2145537

SAP Data Services 4.2 | | | SAP Note #2288344

SAP Hybris Commerce Platform | Windows | SQL Server, Oracle | All NetWeaver certified VM types; Hybris Documentation

SAP Hybris Commerce Platform | SLES 12 or more recent | SAP HANA | All NetWeaver certified VM types; Hybris Documentation

SAP Hybris Commerce Platform | RHEL 7 or more recent | SAP HANA | All NetWeaver certified VM types; Hybris Documentation (https://help.sap.com/viewer/a74589c3a81a4a95bf51d87258c0ab15/6.7.0.0/en-US/8c71300f866910149b40c88dfc0de431.html)

SAP (Hybris) Commerce Platform 1811 and later | Windows, SLES, or RHEL | SQL Azure DB | All NetWeaver certified VM types; Hybris Documentation
What is SAP HANA on Azure (Large Instances)?
12/22/2020 • 3 minutes to read

SAP HANA on Azure (Large Instances) is a unique solution to Azure. In addition to providing virtual machines
for deploying and running SAP HANA, Azure offers you the possibility to run and deploy SAP HANA on bare-
metal servers that are dedicated to you. The SAP HANA on Azure (Large Instances) solution builds on non-
shared host/server bare-metal hardware that is assigned to you. The server hardware is embedded in larger
stamps that contain compute/server, networking, and storage infrastructure. As a combination, it's HANA
tailored data center integration (TDI) certified. SAP HANA on Azure (Large Instances) offers different server
SKUs or sizes. Units can have 36 Intel CPU cores and 768 GB of memory and go up to units that have up to 480
Intel CPU cores and up to 24 TB of memory.
Customer isolation within the infrastructure stamp is organized in tenants, as follows:
Networking: Isolation of customers within the infrastructure stack through virtual networks per customer-assigned
tenant. A tenant is assigned to a single customer. A customer can have multiple tenants. The
network isolation of tenants prohibits network communication between tenants at the infrastructure stamp
level, even if the tenants belong to the same customer.
Storage components: Isolation through storage virtual machines that have storage volumes assigned to
them. Storage volumes can be assigned to one storage virtual machine only. A storage virtual machine is
assigned exclusively to one single tenant in the SAP HANA TDI-certified infrastructure stack. As a result,
storage volumes assigned to a storage virtual machine can be accessed in one specific and related tenant
only. They aren't visible between the different deployed tenants.
Server or host: A server or host unit isn't shared between customers or tenants. A server or host deployed
to a customer is an atomic bare-metal compute unit that is assigned to one single tenant. No hardware
partitioning or soft partitioning is used that might result in you sharing a host or a server with another
customer. Storage volumes that are assigned to the storage virtual machine of the specific tenant are
mounted to such a server. A tenant can have one to many server units of different SKUs exclusively
assigned.
Within an SAP HANA on Azure (Large Instances) infrastructure stamp, many different tenants are deployed
and isolated against each other through the tenant concepts on networking, storage, and compute level.
These bare-metal server units are supported to run SAP HANA only. The SAP application layer or workload
middle-ware layer runs in virtual machines. The infrastructure stamps that run the SAP HANA on Azure (Large
Instances) units are connected to the Azure network services backbones. In this way, low-latency connectivity
between SAP HANA on Azure (Large Instances) units and virtual machines is provided.
As of July 2019, we differentiate between two different revisions of HANA Large Instance stamps and location
of deployments:
"Revision 3" (Rev 3): Are the stamps that were made available for customer to deploy before July 2019
"Revision 4" (Rev 4): New stamp design that is deployed in close proximity to Azure VM hosts and which so
far are released in the Azure regions of:
West US2
East US
West Europe
North Europe
This document is one of several documents that cover SAP HANA on Azure (Large Instances). This document
introduces the basic architecture, responsibilities, and services provided by the solution. High-level capabilities
of the solution are also discussed. For most other areas, such as networking and connectivity, four other
documents cover details and drill-down information. The documentation of SAP HANA on Azure (Large
Instances) doesn't cover aspects of the SAP NetWeaver installation or deployments of SAP NetWeaver in VMs.
SAP NetWeaver on Azure is covered in separate documents found in the same Azure documentation container.
The different documents of HANA Large Instance guidance cover the following areas:
SAP HANA (Large Instances) overview and architecture on Azure
SAP HANA (Large Instances) infrastructure and connectivity on Azure
Install and configure SAP HANA (Large Instances) on Azure
SAP HANA (Large Instances) high availability and disaster recovery on Azure
SAP HANA (Large Instances) troubleshooting and monitoring on Azure
High availability set up in SUSE by using the STONITH
OS backup and restore for Type II SKUs of Revision 3 stamps
Save on SAP HANA Large Instances with an Azure reservation
Next steps
Refer to Know the terms
Know the terms
12/22/2020 • 4 minutes to read

Several common definitions are widely used in the Architecture and Technical Deployment Guide. Note the
following terms and their meanings:
IaaS: Infrastructure as a service.
PaaS: Platform as a service.
SaaS: Software as a service.
SAP component: An individual SAP application, such as ERP Central Component (ECC), Business
Warehouse (BW), Solution Manager, or Enterprise Portal (EP). SAP components can be based on traditional
ABAP or Java technologies or a non-NetWeaver based application such as Business Objects.
SAP environment: One or more SAP components logically grouped to perform a business function, such
as development, quality assurance, training, disaster recovery, or production.
SAP landscape: Refers to all the SAP assets in your IT landscape. The SAP landscape includes all
production and non-production environments.
SAP system: The combination of DBMS layer and application layer of, for example, an SAP ERP
development system, an SAP BW test system, and an SAP CRM production system. Azure deployments
don't support dividing these two layers between on-premises and Azure. An SAP system is either deployed
on-premises or it's deployed in Azure. You can deploy the different systems of an SAP landscape into either
Azure or on-premises. For example, you can deploy the SAP CRM development and test systems in Azure
while you deploy the SAP CRM production system on-premises. For SAP HANA on Azure (Large Instances),
it's intended that you host the SAP application layer of SAP systems in VMs and the related SAP HANA
instance on a unit in the SAP HANA on Azure (Large Instances) stamp.
Large Instance stamp: A hardware infrastructure stack that is SAP HANA TDI-certified and dedicated to
run SAP HANA instances within Azure.
SAP HANA on Azure (Large Instances): Official name for the offer in Azure to run HANA instances on
SAP HANA TDI-certified hardware that's deployed in Large Instance stamps in different Azure regions. The
related term HANA Large Instance is short for SAP HANA on Azure (Large Instances) and is widely used in
this technical deployment guide.
Cross-premises: Describes a scenario where VMs are deployed to an Azure subscription that has site-to-
site, multi-site, or Azure ExpressRoute connectivity between on-premises data centers and Azure. In
common Azure documentation, these kinds of deployments are also described as cross-premises scenarios.
The reason for the connection is to extend on-premises domains, on-premises Azure Active
Directory/OpenLDAP, and on-premises DNS into Azure. The on-premises landscape is extended to the Azure
assets of the Azure subscriptions. With this extension, the VMs can be part of the on-premises domain.
Domain users of the on-premises domain can access the servers and run services on those VMs (such as
DBMS services). Communication and name resolution between VMs deployed on-premises and Azure-
deployed VMs is possible. This scenario is typical of the way in which most SAP assets are deployed. For
more information, see Azure VPN Gateway and Create a virtual network with a site-to-site connection by
using the Azure portal.
Tenant: A customer deployed in a HANA Large Instance stamp gets isolated into a tenant. A tenant is isolated
in the networking, storage, and compute layer from other tenants. Storage and compute units assigned to
the different tenants can't see each other or communicate with each other on the HANA Large Instance
stamp level. A customer can choose to have deployments into different tenants. Even then, there is no
communication between tenants on the HANA Large Instance stamp level.
SKU category: For HANA Large Instance, the following two categories of SKUs are offered:
Type I class: S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, and S224m
Type II class: S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, and S960m
Stamp: Defines the Microsoft internal deployment size of HANA Large Instances. Before HANA Large
Instance units can get deployed, a HANA Large Instance stamp consisting of compute, network, and
storage racks needs to be deployed in a datacenter location. Such a deployment is called a HANA Large
Instance stamp or, from Revision 4 (see below) on, a Large Instance Row.
Revision: There are two different stamp revisions for HANA Large Instance stamps. They differ in
architecture and proximity to Azure virtual machine hosts:
"Revision 3" (Rev 3): The original design, deployed from mid-2016
"Revision 4" (Rev 4): A new design that can provide closer proximity to Azure virtual machine hosts
and with that lower network latency between Azure VMs and HANA Large Instance units
"Revision 4.2" (Rev 4.2): on existing Revision 4 DCs, resources are rebranded to BareMetal Infrastructure.
Customers can access their resources as BareMetal instances from the Azure portal.
A variety of additional resources are available on how to deploy an SAP workload in the cloud. If you plan to
execute a deployment of SAP HANA in Azure, you need to be experienced with and aware of the principles of
Azure IaaS and the deployment of SAP workloads on Azure IaaS. Before you continue, see Use SAP solutions on
Azure virtual machines for more information.
Next steps
Refer to HLI Certification
Certification
12/22/2020 • 2 minutes to read

Besides the NetWeaver certification, SAP requires a special certification for SAP HANA to support SAP HANA on
certain infrastructures, such as Azure IaaS.
The core SAP Note on NetWeaver, and to a degree SAP HANA certification, is SAP Note #1928533 – SAP
applications on Azure: Supported products and Azure VM types.
The certification records for SAP HANA on Azure (Large Instances) units can be found in the SAP HANA certified
IaaS Platforms site.
The SAP HANA on Azure (Large Instances) types, referred to in the SAP HANA certified IaaS Platforms site, provide
Microsoft and SAP customers the ability to deploy large SAP Business Suite, SAP BW, S/4 HANA, BW/4HANA, or
other SAP HANA workloads in Azure. The solution is based on the SAP HANA-certified dedicated hardware stamp
(SAP HANA tailored data center integration – TDI). If you run an SAP HANA TDI-configured solution, all SAP HANA-
based applications (such as SAP Business Suite on SAP HANA, SAP BW on SAP HANA, S4/HANA, and BW4/HANA)
work on the hardware infrastructure.
Compared to running SAP HANA in VMs, this solution has a benefit. It provides for much larger memory volumes.
To enable this solution, you need to understand the following key aspects:
The SAP application layer and non-SAP applications run in VMs that are hosted in the usual Azure hardware
stamps.
Customer on-premises infrastructure, data centers, and application deployments are connected to the cloud
platform through ExpressRoute (recommended) or a virtual private network (VPN). Active Directory and DNS
also are extended into Azure.
The SAP HANA database instance for HANA workload runs on SAP HANA on Azure (Large Instances). The Large
Instance stamp is connected into Azure networking, so software running in VMs can interact with the HANA
instance running in HANA Large Instance.
Hardware of SAP HANA on Azure (Large Instances) is dedicated hardware provided in an IaaS with SUSE Linux
Enterprise Server or Red Hat Enterprise Linux preinstalled. As with virtual machines, further updates and
maintenance to the operating system are your responsibility.
Installation of HANA or any additional components necessary to run SAP HANA on units of HANA Large
Instance is your responsibility. All respective ongoing operations and administration of SAP HANA on Azure are
also your responsibility.
In addition to the solutions described here, you can install other components in your Azure subscription that
connect to SAP HANA on Azure (Large Instances). Examples are components that enable communication with
or directly to the SAP HANA database, such as jump servers, RDP servers, SAP HANA Studio, SAP Data Services
for SAP BI scenarios, or network monitoring solutions. A minimal connection sketch follows this list.
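As an illustration of such a component, the following sketch shows how an application or tool running in an Azure
VM might connect to the SAP HANA database on the Large Instance unit. It assumes the SAP-provided hdbcli
Python client is installed; the host name, port, user, and password are placeholders, not values taken from this
documentation.

```python
from hdbcli import dbapi  # SAP HANA Python client, installed separately (for example, pip install hdbcli)

# Placeholder connection details: in a real deployment the address is the HANA Large
# Instance unit, reached from the VNet over the ExpressRoute connection to the stamp.
connection = dbapi.connect(
    address="hli-hana.contoso.internal",  # hypothetical host name
    port=30015,                           # SQL port; depends on the HANA instance number
    user="MONITORING_USER",               # placeholder user
    password="<password>",
)

cursor = connection.cursor()
cursor.execute("SELECT DATABASE_NAME, VERSION FROM M_DATABASE")  # simple connectivity check
print(cursor.fetchall())
cursor.close()
connection.close()
```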
As in Azure, HANA Large Instance offers support for high availability and disaster recovery functionality.
Next steps
Refer to Available SKUs for HLI
Available SKUs for HANA Large Instances
12/22/2020 • 9 minutes to read

SAP HANA on Azure (Large Instances) service, based on Revision 3 stamps only, is available in several
configurations in the Azure regions of:
Australia East
Australia Southeast
Japan East
Japan West
SAP HANA on Azure (Large Instances) service based on Revision 4 stamps is available in several configurations
in the Azure regions of:
West US 2
East US
The BareMetal Infrastructure (certified for SAP HANA workloads) service is based on Revision 4.2 stamps. It's
available in several configurations in the Azure regions of:
West Europe
North Europe
East US 2
South Central US
The available Azure Large Instances that are offered are listed in the following table.

IMPORTANT
Be aware of the first column, which represents the status of HANA certification for each of the Large Instance types in the
list. The column should correlate with the SAP HANA hardware directory for the Azure SKUs that start with the letter S.

SAP HANA CERTIFIED | MODEL | TOTAL MEMORY | MEMORY DRAM | MEMORY OPTANE | STORAGE | AVAILABILITY

YES (OLAP, OLTP) | SAP HANA on Azure S96 – 2 x Intel® Xeon® Processor E7-8890 v4, 48 CPU cores and 96 CPU threads | 768 GB | 768 GB | --- | 3.0 TB | Available

YES (OLAP, OLTP) | SAP HANA on Azure S224 – 4 x Intel® Xeon® Platinum 8276 processor, 112 CPU cores and 224 CPU threads | 3.0 TB | 3.0 TB | --- | 6.3 TB | Available

YES (OLTP) | SAP HANA on Azure S224m – 4 x Intel® Xeon® Platinum 8276 processor, 112 CPU cores and 224 CPU threads | 6.0 TB | 6.0 TB | --- | 10.5 TB | Available

YES (OLTP) | SAP HANA on Azure S224om – 4 x Intel® Xeon® Platinum 8276 processor, 112 CPU cores and 224 CPU threads | 6.0 TB | 3.0 TB | 3.0 TB | 10.5 TB | Available

NO | SAP HANA on Azure S224oo – 4 x Intel® Xeon® Platinum 8276 processor, 112 CPU cores and 224 CPU threads | 4.5 TB | 1.5 TB | 3.0 TB | 8.4 TB | Available

NO | SAP HANA on Azure S224ooo – 4 x Intel® Xeon® Platinum 8276 processor, 112 CPU cores and 224 CPU threads | 7.5 TB | 1.5 TB | 6.0 TB | 12.7 TB | Available

NO | SAP HANA on Azure S224oom – 4 x Intel® Xeon® Platinum 8276 processor, 112 CPU cores and 224 CPU threads | 9.0 TB | 3.0 TB | 6.0 TB | 14.8 TB | Available

YES (OLAP, OLTP) | SAP HANA on Azure S384 – 8 x Intel® Xeon® Processor E7-8890 v4, 192 CPU cores and 384 CPU threads | 4.0 TB | 4.0 TB | --- | 16 TB | Available

YES (OLTP) | SAP HANA on Azure S384m – 8 x Intel® Xeon® Processor E7-8890 v4, 192 CPU cores and 384 CPU threads | 6.0 TB | 6.0 TB | --- | 18 TB | Available

YES (OLAP, OLTP) | SAP HANA on Azure S384xm – 8 x Intel® Xeon® Processor E7-8890 v4, 192 CPU cores and 384 CPU threads | 8.0 TB | 8.0 TB | --- | 22 TB | Available

YES (OLAP, OLTP) | SAP HANA on Azure S448 – 8 x Intel® Xeon® Platinum 8276 processor, 224 CPU cores and 448 CPU threads | 6.0 TB | 6.0 TB | --- | 10.5 TB | Available (Rev 4 only)

YES (OLAP, OLTP) | SAP HANA on Azure S448m – 8 x Intel® Xeon® Platinum 8276 processor, 224 CPU cores and 448 CPU threads | 12.0 TB | 12.0 TB | --- | 18.9 TB | Available (Rev 4 only)

NO | SAP HANA on Azure S448oo – 8 x Intel® Xeon® Platinum 8276 processor, 224 CPU cores and 448 CPU threads | 9.0 TB | 3.0 TB | 6.0 TB | 14.8 TB | Available (Rev 4 only)

NO | SAP HANA on Azure S448om – 8 x Intel® Xeon® Platinum 8276 processor, 224 CPU cores and 448 CPU threads | 12.0 TB | 6.0 TB | 6.0 TB | 18.9 TB | Available (Rev 4 only)

NO | SAP HANA on Azure S448ooo – 8 x Intel® Xeon® Platinum 8276 processor, 224 CPU cores and 448 CPU threads | 15.0 TB | 3.0 TB | 12.0 TB | 23.2 TB | Available (Rev 4 only)

NO | SAP HANA on Azure S448oom – 8 x Intel® Xeon® Platinum 8276 processor, 224 CPU cores and 448 CPU threads | 18.0 TB | 6.0 TB | 12.0 TB | 27.4 TB | Available (Rev 4 only)

YES (OLTP) | SAP HANA on Azure S576m – 12 x Intel® Xeon® Processor E7-8890 v4, 288 CPU cores and 576 CPU threads | 12.0 TB | 12.0 TB | --- | 28 TB | Available (Rev 4 only)

NO | SAP HANA on Azure S576xm – 12 x Intel® Xeon® Processor E7-8890 v4, 288 CPU cores and 576 CPU threads | 18.0 TB | 18.0 TB | --- | 41 TB | Available

YES (OLAP, OLTP) | SAP HANA on Azure S672 – 12 x Intel® Xeon® Platinum 8276 processor, 336 CPU cores and 672 CPU threads | 9.0 TB | 9.0 TB | --- | 14.7 TB | Available (Rev 4 only)

YES (OLAP, OLTP) | SAP HANA on Azure S672m – 12 x Intel® Xeon® Platinum 8276 processor, 336 CPU cores and 672 CPU threads | 18.0 TB | 18.0 TB | --- | 27.4 TB | Available (Rev 4 only)

NO | SAP HANA on Azure S672oo – 12 x Intel® Xeon® Platinum 8276 processor, 336 CPU cores and 672 CPU threads | 13.5 TB | 4.5 TB | 9.0 TB | 21.1 TB | Available (Rev 4 only)

NO | SAP HANA on Azure S672om – 12 x Intel® Xeon® Platinum 8276 processor, 336 CPU cores and 672 CPU threads | 18.0 TB | 9.0 TB | 9.0 TB | 27.4 TB | Available (Rev 4 only)

NO | SAP HANA on Azure S672ooo – 12 x Intel® Xeon® Platinum 8276 processor, 336 CPU cores and 672 CPU threads | 22.5 TB | 4.5 TB | 18.0 TB | 33.7 TB | Available (Rev 4 only)

NO | SAP HANA on Azure S672oom – 12 x Intel® Xeon® Platinum 8276 processor, 336 CPU cores and 672 CPU threads | 27.0 TB | 9.0 TB | 18.0 TB | 40.0 TB | Available (Rev 4 only)

YES (OLTP) | SAP HANA on Azure S768m – 16 x Intel® Xeon® Processor E7-8890 v4, 384 CPU cores and 768 CPU threads | 16.0 TB | 16.0 TB | --- | 36 TB | Available

NO | SAP HANA on Azure S768xm – 16 x Intel® Xeon® Processor E7-8890 v4, 384 CPU cores and 768 CPU threads | 24.0 TB | 24.0 TB | --- | 56 TB | Available

YES (OLAP, OLTP) | SAP HANA on Azure S896 – 16 x Intel® Xeon® Platinum 8276 processor, 448 CPU cores and 896 CPU threads | 12.0 TB | 12.0 TB | --- | 18.9 TB | Available (Rev 4 only)

YES (OLAP, OLTP) | SAP HANA on Azure S896m – 16 x Intel® Xeon® Platinum 8276 processor, 448 CPU cores and 896 CPU threads | 24.0 TB | 24.0 TB | --- | 35.8 TB | Available

NO | SAP HANA on Azure S896oo – 16 x Intel® Xeon® Platinum 8276 processor, 448 CPU cores and 896 CPU threads | 18.0 TB | 6.0 TB | 12.0 TB | 27.4 TB | Available (Rev 4 only)

NO | SAP HANA on Azure S896om – 16 x Intel® Xeon® Platinum 8276 processor, 448 CPU cores and 896 CPU threads | 24.0 TB | 12.0 TB | 12.0 TB | 35.8 TB | Available (Rev 4 only)

NO | SAP HANA on Azure S896ooo – 16 x Intel® Xeon® Platinum 8276 processor, 448 CPU cores and 896 CPU threads | 30.0 TB | 6.0 TB | 24.0 TB | 44.3 TB | Available (Rev 4 only)

NO | SAP HANA on Azure S896oom – 16 x Intel® Xeon® Platinum 8276 processor, 448 CPU cores and 896 CPU threads | 36.0 TB | 12.0 TB | 24.0 TB | 52.7 TB | Available (Rev 4 only)

YES (OLTP) | SAP HANA on Azure S960m – 20 x Intel® Xeon® Processor E7-8890 v4, 480 CPU cores and 960 CPU threads | 20.0 TB | 20.0 TB | --- | 46 TB | Available (Rev 4 only)

CPU cores = sum of non-hyper-threaded CPU cores across all processors of the server unit.
CPU threads = sum of compute threads provided by hyper-threaded CPU cores across all processors of the server unit. Most units are configured by default to use Hyper-Threading Technology.
Based on supplier recommendations, S768m, S768xm, and S960m aren't configured to use Hyper-Threading for running SAP HANA.
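For example, the core and thread counts in the table follow directly from the socket count. The quick sketch below illustrates the math, assuming 28 cores per Intel® Xeon® Platinum 8276 socket and 24 cores per Xeon® E7-8890 v4 socket, which are processor facts not stated in the table itself:

```python
# Core/thread math behind the SKU table (per-socket core counts are assumptions,
# not taken from the table above).
def cores_and_threads(sockets: int, cores_per_socket: int, hyper_threading: bool = True) -> tuple:
    cores = sockets * cores_per_socket
    threads = cores * 2 if hyper_threading else cores
    return cores, threads

print(cores_and_threads(8, 28))    # S448* SKUs: (224, 448)
print(cores_and_threads(12, 24))   # S576* SKUs: (288, 576)
print(cores_and_threads(20, 24))   # S960m:      (480, 960)
```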

IMPORTANT
The following SKUs, though still supported, can no longer be purchased: S72, S72m, S144, S144m, S192, and S192m.

The specific configuration you choose depends on workload, CPU resources, and desired memory. It's possible for an OLTP workload to use the SKUs that are optimized for the OLAP workload.
The SKUs are divided into two different classes of hardware:
S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m, S224oo, S224om, S224ooo, and S224oom are referred to as the "Type I class" of SKUs.
All other SKUs are referred to as the "Type II class" of SKUs.
If you are interested in SKUs that are not yet listed in the SAP hardware directory, contact your Microsoft
account team to get more information.
A complete HANA Large Instance stamp isn't exclusively allocated for a single customer's use. This fact applies to
the racks of compute and storage resources connected through a network fabric deployed in Azure as well.
HANA Large Instance infrastructure, like Azure, deploys different customer "tenants" that are isolated from one
another in the following three levels:
Network: Isolation through virtual networks within the HANA Large Instance stamp.
Storage: Isolation through storage virtual machines that have storage volumes assigned and isolate storage volumes between tenants.
Compute: Dedicated assignment of server units to a single tenant. No hard or soft partitioning of server units. No sharing of a single server or host unit between tenants.
The deployments of HANA Large Instance units between different tenants aren't visible to each other. HANA
Large Instance units deployed in different tenants can't communicate directly with each other on the HANA Large
Instance stamp level. Only HANA Large Instance units within one tenant can communicate with each other on the
HANA Large Instance stamp level.
A deployed tenant in the Large Instance stamp is assigned to one Azure subscription for billing purposes. From a networking point of view, it can be accessed from virtual networks of other Azure subscriptions within the same Azure enrollment. If you deploy with another Azure subscription in the same Azure region, you also can choose to ask for a separate HANA Large Instance tenant.
There are significant differences between running SAP HANA on HANA Large Instance and SAP HANA running
on VMs deployed in Azure:
There is no virtualization layer for SAP HANA on Azure (Large Instances). You get the performance of the
underlying bare-metal hardware.
Unlike Azure, the SAP HANA on Azure (Large Instances) server is dedicated to a specific customer. There is no possibility that a server unit or host is hard or soft partitioned. As a result, a HANA Large Instance unit is assigned as a whole to a tenant, and with that to you. A reboot or shutdown of the server doesn't automatically lead to the operating system and SAP HANA being deployed on another server. (For Type I class SKUs, the only exception is if a server encounters issues and redeployment needs to be performed on another server.)
Unlike Azure, where host processor types are selected for the best price/performance ratio, the processor
types chosen for SAP HANA on Azure (Large Instances) are the highest performing of the Intel E7v3 and E7v4
processor line.
Next steps
Refer to HLI Sizing
Sizing

Sizing for HANA Large Instance is no different than sizing for HANA in general. For existing and deployed systems
that you want to move from other RDBMS to HANA, SAP provides a number of reports that run on your existing
SAP systems. If the database is moved to HANA, these reports check the data and calculate memory requirements
for the HANA instance. For more information on how to run these reports and obtain their most recent patches or
versions, read the following SAP Notes:
SAP Note #1793345 - Sizing for SAP Suite on HANA
SAP Note #1872170 - Suite on HANA and S/4 HANA sizing report
SAP Note #2121330 - FAQ: SAP BW on HANA sizing report
SAP Note #1736976 - Sizing report for BW on HANA
SAP Note #2296290 - New sizing report for BW on HANA
For green field implementations, SAP Quick Sizer is available to calculate memory requirements of the
implementation of SAP software on top of HANA.
Memory requirements for HANA increase as data volume grows. Be aware of your current memory consumption
to help you predict what it's going to be in the future. Based on memory requirements, you then can map your
demand into one of the HANA Large Instance SKUs.
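As a simple illustration of that last mapping step, the following sketch picks the smallest HANA Large Instance memory size that covers a sized memory requirement. The SKU memory values come from the tables in this document but are only a subset for demonstration; confirm actual sizes and availability with your Microsoft account team.

```python
# Illustrative only: map a sized memory requirement onto the smallest fitting
# HANA Large Instance memory size. Partial list of SKU memory sizes (TB) taken
# from the tables in this document.
HLI_MEMORY_TB = {
    "S72": 0.75,
    "S72m": 1.5,
    "S192m": 4.0,
    "S384xm": 8.0,
    "S576m": 12.0,
    "S768m": 16.0,
    "S960m": 20.0,
}

def smallest_fitting_sku(required_memory_tb: float) -> str:
    candidates = [(mem, sku) for sku, mem in HLI_MEMORY_TB.items() if mem >= required_memory_tb]
    if not candidates:
        raise ValueError("No listed SKU covers the requirement; contact your account team.")
    return min(candidates)[1]

print(smallest_fitting_sku(10.5))  # -> S576m (12 TB)
```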
Next steps
Refer to Onboarding requirements
Onboarding requirements

This list assembles requirements for running SAP HANA on Azure (Large Instances).
Microsoft Azure
An Azure subscription that can be linked to SAP HANA on Azure (Large Instances).
Microsoft Premier support contract. For specific information related to running SAP in Azure, see SAP Support Note #2015553 – SAP on Microsoft Azure: Support prerequisites. If you use HANA Large Instance units with 384 or more CPUs, you also need to extend the Premier support contract to include Azure Rapid Response.
Awareness of the HANA Large Instance SKUs you need after you perform a sizing exercise with SAP.
Network connectivity
ExpressRoute between on-premises and Azure: To connect your on-premises data center to Azure, make sure to order at least a 1-Gbps connection from your ISP. Connectivity between HANA Large Instance units and Azure also uses ExpressRoute technology. This ExpressRoute connection between the HANA Large Instance units and Azure is included in the price of the HANA Large Instance units, including all data ingress and egress charges for this specific ExpressRoute circuit. Therefore, as a customer, you don't incur additional costs beyond your ExpressRoute link between on-premises and Azure.
Operating system
Licenses for SUSE Linux Enterprise Server 12 for SAP Applications.

NOTE
The operating system delivered by Microsoft isn't registered with SUSE. It isn't connected to a Subscription
Management Tool instance.

SUSE Linux Subscription Management Tool deployed in Azure on a VM. This tool provides the capability for
SAP HANA on Azure (Large Instances) to be registered and respectively updated by SUSE. (There is no
internet access within the HANA Large Instance data center.)
Licenses for Red Hat Enterprise Linux 6.7 or 7.x for SAP HANA.

NOTE
The operating system delivered by Microsoft isn't registered with Red Hat. It isn't connected to a Red Hat Subscription
Manager instance.

Red Hat Subscription Manager deployed in Azure on a VM. The Red Hat Subscription Manager provides the
capability for SAP HANA on Azure (Large Instances) to be registered and respectively updated by Red Hat.
(There is no direct internet access from within the tenant deployed on the Azure Large Instance stamp.)
SAP requires you to have a support contract with your Linux provider as well. This requirement isn't
removed by the solution of HANA Large Instance or the fact that you run Linux in Azure. Unlike with some
of the Linux Azure gallery images, the service fee is not included in the solution offer of HANA Large
Instance. It's your responsibility to fulfill the requirements of SAP regarding support contracts with the Linux
distributor.
For SUSE Linux, look up the requirements of support contracts in SAP Note #1984787 - SUSE Linux
Enterprise Server 12: Installation notes and SAP Note #1056161 - SUSE priority support for SAP
applications.
For Red Hat Linux, you need to have the correct subscription levels that include support and service updates to the operating systems of HANA Large Instance. Red Hat recommends the Red Hat Enterprise Linux for SAP Solutions subscription. Refer to https://access.redhat.com/solutions/3082481.
For the support matrix of the different SAP HANA versions with the different Linux versions, see SAP Note
#2235581.
For the compatibility matrix of the operating system and HLI firmware/driver versions, refer to OS Upgrade for HLI.

IMPORTANT
For Type II units, only the SLES 12 SP2 OS version is supported at this point.

Database
Licenses and software installation components for SAP HANA (platform or enterprise edition).
Applications
Licenses and software installation components for any SAP applications that connect to SAP HANA and related
SAP support contracts.
Licenses and software installation components for any non-SAP applications used with SAP HANA on Azure
(Large Instances) environments and related support contracts.
Skills
Experience with and knowledge of Azure IaaS and its components.
Experience with and knowledge of how to deploy an SAP workload in Azure.
SAP HANA installation-certified personnel.
SAP architect skills to design high availability and disaster recovery around SAP HANA.
SAP
The expectation is that you're an SAP customer and have a support contract with SAP.
Especially for implementations of the Type II class of HANA Large Instance SKUs, consult with SAP on versions
of SAP HANA and the eventual configurations on large-sized scale-up hardware.
Next steps
Refer to SAP HANA (Large Instances) architecture on Azure
Use SAP HANA data tiering and extension nodes

SAP supports a data tiering model for SAP BW of different SAP NetWeaver releases and SAP BW/4HANA. For more
information about the data tiering model, see the SAP document SAP BW/4HANA and SAP BW on HANA with SAP
HANA extension nodes. With HANA Large Instance, you can use option-1 configuration of SAP HANA extension
nodes as explained in the FAQ and SAP blog documents. Option-2 configurations can be set up with the following
HANA Large Instance SKUs: S72m, S192, S192m, S384, and S384m.
When you look at the documentation, the advantage might not be visible immediately. But when you look at the
SAP sizing guidelines, you can see an advantage by using option-1 and option-2 SAP HANA extension nodes. Here
are examples:
SAP HANA sizing guidelines usually require twice as much memory as data volume. When you run your SAP HANA instance with the hot data, only 50 percent or less of the memory is filled with data. The remainder of the memory is ideally kept free for SAP HANA to do its work.
That means in a HANA Large Instance S192 unit with 2 TB of memory, running an SAP BW database, you only
have 1 TB as data volume.
If you use an additional SAP HANA extension node of option-1, also a S192 HANA Large Instance SKU, it gives
you an additional 2-TB capacity for data volume. In the option-2 configuration, you get an additional 4 TB for
warm data volume. Compared to the hot node, the full memory capacity of the "warm" extension node can be
used for data storing for option-1. Double the memory can be used for data volume in option-2 SAP HANA
extension node configuration.
You end up with a capacity of 3 TB for your data and a hot-to-warm ratio of 1:2 for option-1. You have 5 TB of
data and a 1:4 ratio with the option-2 extension node configuration.
The higher the data volume compared to the memory, the higher the chances are that the warm data you are
asking for is stored on disk storage.
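The arithmetic of this example can be summarized in a short sketch. It assumes the common sizing rule stated above: the hot node keeps data at no more than 50 percent of its memory, while an option-1 extension node can use its full memory for data and an option-2 node can hold double its memory as data.

```python
# Recap of the extension-node example above (S192 = 2 TB of memory per unit).
HOT_NODE_MEMORY_TB = 2.0
WARM_NODE_MEMORY_TB = 2.0

hot_data = HOT_NODE_MEMORY_TB * 0.5            # hot node: data <= 50% of memory -> 1 TB

option1_warm_data = WARM_NODE_MEMORY_TB * 1.0  # option-1: full memory usable for data -> 2 TB
option2_warm_data = WARM_NODE_MEMORY_TB * 2.0  # option-2: double the memory as data   -> 4 TB

for label, warm in (("option-1", option1_warm_data), ("option-2", option2_warm_data)):
    total = hot_data + warm
    ratio = warm / hot_data
    print(f"{label}: total data {total:.0f} TB, hot-to-warm ratio 1:{ratio:.0f}")
# option-1: total data 3 TB, hot-to-warm ratio 1:2
# option-2: total data 5 TB, hot-to-warm ratio 1:4
```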
Next steps
Refer to SAP HANA (Large Instances) architecture on Azure
Operations model and responsibilities

The service provided with SAP HANA on Azure (Large Instances) is aligned with Azure IaaS services. You get an
instance of a HANA Large Instance with an installed operating system that is optimized for SAP HANA. As with Azure IaaS VMs, most of the tasks of hardening the OS, installing additional software, installing HANA, operating the OS and HANA, and updating the OS and HANA are your responsibility. Microsoft doesn't force OS updates or HANA updates on you.

As shown in the diagram, SAP HANA on Azure (Large Instances) is a multi-tenant IaaS offer. For the most part, the
division of responsibility is at the OS-infrastructure boundary. Microsoft is responsible for all aspects of the service
below the line of the operating system. You are responsible for all aspects of the service above the line. The OS is
your responsibility. You can continue to use most current on-premises methods you might employ for compliance,
security, application management, basis, and OS management. The systems appear as if they are in your network in
all regards.
This service is optimized for SAP HANA, so there are areas where you need to work with Microsoft to use the
underlying infrastructure capabilities for best results.
The following list provides more detail on each of the layers and your responsibilities:
Networking: All the internal networks for the Large Instance stamp running SAP HANA. Your responsibility includes access to storage, connectivity between the instances (for scale-out and other functions), connectivity to the landscape, and connectivity to Azure where the SAP application layer is hosted in VMs. It also includes WAN connectivity between Azure data centers for disaster recovery replication purposes. All networks are partitioned by the tenant and have quality of service applied.
Storage: The virtualized partitioned storage for all volumes needed by the SAP HANA servers, as well as for snapshots.
Servers: The dedicated physical servers to run the SAP HANA DBs assigned to tenants. The servers of the Type I class of SKUs are hardware abstracted. With these types of servers, the server configuration is collected and maintained in profiles, which can be moved from one physical hardware to another. Such a (manual) move of a profile by operations is somewhat comparable to Azure service healing. The servers of the Type II class SKUs don't offer such a capability.
SDDC: The management software that is used to manage data centers as software-defined entities. It allows
Microsoft to pool resources for scale, availability, and performance reasons.
O/S: The OS you choose (SUSE Linux or Red Hat Linux) that is running on the servers. The OS images you are
supplied with were provided by the individual Linux vendor to Microsoft for running SAP HANA. You must have a
subscription with the Linux vendor for the specific SAP HANA-optimized image. You are responsible for registering
the images with the OS vendor.
From the point of handover by Microsoft, you are responsible for any further patching of the Linux operating
system. This patching includes additional packages that might be necessary for a successful SAP HANA installation
and that weren't included by the specific Linux vendor in their SAP HANA optimized OS images. (For more
information, see SAP's HANA installation documentation and SAP Notes.)
You are responsible for OS patching owing to malfunction or optimization of the OS and its drivers relative to the
specific server hardware. You also are responsible for security or functional patching of the OS.
Your responsibility also includes monitoring and capacity planning of:
CPU resource consumption.
Memory consumption.
Disk volumes related to free space, IOPS, and latency.
Network volume traffic between HANA Large Instance and the SAP application layer.
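One possible way to cover the CPU, memory, and disk free-space items in this list from within the OS is a small script based on the third-party psutil package. This is illustrative only; in practice these metrics are usually fed into whatever monitoring stack you already operate, and the mount points below are placeholders that should be adjusted to the volumes provisioned on your unit.

```python
# Minimal sketch of OS-level capacity monitoring on a HANA Large Instance unit.
# psutil is a third-party package (pip install psutil).
import psutil

def report():
    print(f"CPU utilization: {psutil.cpu_percent(interval=1):.1f} %")
    mem = psutil.virtual_memory()
    print(f"Memory used:     {mem.percent:.1f} % of {mem.total / 2**40:.1f} TiB")
    for mount in ("/hana/data", "/hana/log", "/hana/shared"):  # adjust to your volumes
        usage = psutil.disk_usage(mount)
        print(f"Free on {mount}: {usage.free / 2**30:.0f} GiB")

if __name__ == "__main__":
    report()
```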
The underlying infrastructure of HANA Large Instance provides functionality for backup and restore of the OS
volume. Using this functionality is also your responsibility.
Middleware: The SAP HANA instance, primarily. Administration, operations, and monitoring are your
responsibility. You can use the provided functionality to use storage snapshots for backup and restore and disaster
recovery purposes. These capabilities are provided by the infrastructure. Your responsibilities also include
designing high availability or disaster recovery with these capabilities, leveraging them, and monitoring to
determine whether storage snapshots executed successfully.
Data: Your data managed by SAP HANA, and other data such as backup files located on volumes or file shares.
Your responsibilities include monitoring disk free space and managing the content on the volumes. You also are
responsible for monitoring the successful execution of backups of disk volumes and storage snapshots. Successful
execution of data replication to disaster recovery sites is the responsibility of Microsoft.
Applications: The SAP application instances or, in the case of non-SAP applications, the application layer of those
applications. Your responsibilities include deployment, administration, operations, and monitoring of those
applications. You are responsible for capacity planning of CPU resource consumption, memory consumption, Azure
Storage consumption, and network bandwidth consumption within virtual networks. You also are responsible for
capacity planning for resource consumption from virtual networks to SAP HANA on Azure (Large Instances).
WANs: The connections you establish from on-premises to Azure deployments for workloads. All customers with
HANA Large Instance use Azure ExpressRoute for connectivity. This connection isn't part of the SAP HANA on Azure
(Large Instances) solution. You are responsible for the setup of this connection.
Archive: You might prefer to archive copies of data by using your own methods in storage accounts. Archiving
requires management, compliance, costs, and operations. You are responsible for generating archive copies and
backups on Azure and storing them in a compliant way.
See the SLA for SAP HANA on Azure (Large Instances).
Next steps
Refer to SAP HANA (Large Instances) architecture on Azure
Compatible Operating Systems for HANA Large
Instances

HANA Large Instance Type I


OPERATING SYSTEM | AVAILABILITY | SKUS
SLES 12 SP2 | Not offered anymore | S72, S72m, S96, S144, S144m, S192, S192m, S192xm
SLES 12 SP3 | Available | S72, S72m, S96, S144, S144m, S192, S192m, S192xm
SLES 12 SP4 | Available | S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m
SLES 12 SP5 | Available | S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m
SLES 15 SP1 | Available | S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m
RHEL 7.6 | Available | S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m

Persistent Memory SKUs


OPERATING SYSTEM | AVAILABILITY | SKUS
SLES 12 SP4 | Available | S224oo, S224om, S224ooo, S224oom

HANA Large Instance Type II


OPERATING SYSTEM | AVAILABILITY | SKUS
SLES 12 SP2 | Not offered anymore | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S960m
SLES 12 SP3 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S960m
SLES 12 SP4 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S960m
SLES 12 SP5 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m
SLES 15 SP1 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m
RHEL 7.6 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m

Related Documents
To learn more about Available SKUs
To learn about Upgrading the Operating System
SAP HANA (Large Instances) architecture on Azure

At a high level, the SAP HANA on Azure (Large Instances) solution has the SAP application layer residing in VMs.
The database layer resides on SAP TDI-configured hardware located in a Large Instance stamp in the same Azure
region that is connected to Azure IaaS.

NOTE
Deploy the SAP application layer in the same Azure region as the SAP DBMS layer. This rule is well documented in published
information about SAP workloads on Azure.

The overall architecture of SAP HANA on Azure (Large Instances) provides an SAP TDI-certified hardware
configuration, which is a non-virtualized, bare metal, high-performance server for the SAP HANA database. It also
provides the ability and flexibility of Azure to scale resources for the SAP application layer to meet your needs.
The architecture shown is divided into three sections:
Right: Shows an on-premises infrastructure that runs different applications in data centers so that end
users can access LOB applications, such as SAP. Ideally, this on-premises infrastructure is connected to Azure
with ExpressRoute.
Center: Shows Azure IaaS and, in this case, use of VMs to host SAP or other applications that use SAP
HANA as a DBMS system. Smaller HANA instances that function with the memory that VMs provide are
deployed in VMs together with their application layer. For more information about virtual machines, see
Virtual machines.
Azure network services are used to group SAP systems together with other applications into virtual
networks. These virtual networks connect to on-premises systems as well as to SAP HANA on Azure (Large
Instances).
For SAP NetWeaver applications and databases that are supported to run in Azure, see SAP Support Note
#1928533 – SAP applications on Azure: Supported products and Azure VM types. For documentation on
how to deploy SAP solutions on Azure, see:
Use SAP on Windows virtual machines
Use SAP solutions on Azure virtual machines
Left: Shows the SAP HANA TDI-certified hardware in the Azure Large Instance stamp. The HANA Large Instance units are connected to the virtual networks of your Azure subscription by using the same technology as the connectivity from on-premises into Azure. In May 2019, an optimization was introduced that allows the HANA Large Instance units and the Azure VMs to communicate without involving the ExpressRoute gateway. This optimization, called ExpressRoute Fast Path, is displayed in this architecture (red lines).
The Azure Large Instance stamp itself combines the following components:
Computing: Servers that are based on different generations of Intel Xeon processors that provide the necessary computing capability and are SAP HANA certified.
Network: A unified high-speed network fabric that interconnects the computing, storage, and LAN components.
Storage: A storage infrastructure that is accessed through a unified network fabric. The specific storage capacity that is provided depends on the specific SAP HANA on Azure (Large Instances) configuration that is deployed. More storage capacity is available at an additional monthly cost.
Within the multi-tenant infrastructure of the Large Instance stamp, customers are deployed as isolated tenants. At
deployment of the tenant, you name an Azure subscription within your Azure enrollment. This Azure subscription
is the one that the HANA Large Instance is billed against. These tenants have a 1:1 relationship to the Azure
subscription. For a network, it's possible to access a HANA Large Instance unit deployed in one tenant in one Azure
region from different virtual networks that belong to different Azure subscriptions. Those Azure subscriptions
must belong to the same Azure enrollment.
As with VMs, SAP HANA on Azure (Large Instances) is offered in multiple Azure regions. To offer disaster recovery
capabilities, you can choose to opt in. Different Large Instance stamps within one geo-political region are
connected to each other. For example, HANA Large Instance Stamps in US West and US East are connected
through a dedicated network link for disaster recovery replication.
Just as you can choose between different VM types with Azure Virtual Machines, you can choose from different SKUs of HANA Large Instance that are tailored for different workload types of SAP HANA. SAP applies memory-to-processor-socket ratios for varying workloads based on the Intel processor generations. You can find the available SKUs in Available SKUs for HLI.
Next steps
Refer to SAP HANA (Large Instances) network architecture
SAP HANA (Large Instances) network architecture

The architecture of Azure network services is a key component of the successful deployment of SAP applications
on HANA Large Instance. Typically, SAP HANA on Azure (Large Instances) deployments have a larger SAP
landscape with several different SAP solutions with varying sizes of databases, CPU resource consumption, and
memory utilization. It's likely that not all IT systems are located in Azure already. Your SAP landscape is often hybrid as well, from both a DBMS and an SAP application point of view, using a mixture of NetWeaver, S/4HANA, SAP HANA, and other DBMSs. Azure offers different services that allow you to run the different DBMS, NetWeaver, and S/4HANA systems in Azure. Azure also offers you network technology to make Azure look like a virtual data center to your on-premises software deployments.
Unless your complete IT systems are hosted in Azure, Azure networking functionality is used to connect the on-premises world with your Azure assets to make Azure look like a virtual datacenter of yours. The Azure network functionality used is:
Azure virtual networks are connected to the ExpressRoute circuit that connects to your on-premises network
assets.
An ExpressRoute circuit that connects on-premises to Azure should have a minimum bandwidth of 1 Gbps or
higher. This minimal bandwidth allows adequate bandwidth for the transfer of data between on-premises
systems and systems that run on VMs. It also allows adequate bandwidth for connection to Azure systems from
on-premises users.
All SAP systems in Azure are set up in virtual networks to communicate with each other.
Active Directory and DNS hosted on-premises are extended into Azure through ExpressRoute from on-
premises, or are running complete in Azure.
For the specific case of integrating HANA Large Instances into the Azure data center network fabric, Azure ExpressRoute technology is used as well.

NOTE
One Azure subscription can be linked to only one tenant in a HANA Large Instance stamp in a specific Azure region. Conversely, a single HANA Large Instance stamp tenant can be linked to only one Azure subscription. This requirement is consistent with other billable objects in Azure.

If SAP HANA on Azure (Large Instances) is deployed in multiple different Azure regions, a separate tenant is
deployed in the HANA Large Instance stamp. You can run both under the same Azure subscription as long as these
instances are part of the same SAP landscape.

IMPORTANT
Only the Azure Resource Manager deployment method is supported with SAP HANA on Azure (Large Instances).

Additional virtual network information


To connect a virtual network to ExpressRoute, an Azure ExpressRoute gateway must be created. For more
information, see About Expressroute gateways for ExpressRoute.
An Azure ExpressRoute gateway is used with ExpressRoute to an infrastructure outside of Azure or to an Azure
Large Instance stamp. You can connect the Azure ExpressRoute gateway to a maximum of four different
ExpressRoute circuits as long as those connections come from different Microsoft enterprise edge routers. For
more information, see SAP HANA (Large Instances) infrastructure and connectivity on Azure.

NOTE
The maximum throughput you can achieve with an ExpressRoute gateway is 10 Gbps by using an ExpressRoute connection. Copying files between a VM that resides in a virtual network and a system on-premises (as a single copy stream) doesn't achieve the full throughput of the different gateway SKUs. To leverage the complete bandwidth of the ExpressRoute gateway, use multiple streams: either copy different files in parallel, or copy different parts of a single file in parallel streams.
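As an illustration of the multi-stream approach (a sketch only; in practice tools such as rsync or your backup tooling typically handle the parallelism, and the paths below are placeholders), copying several files concurrently instead of sequentially could look like this:

```python
# Copy a set of files in parallel streams to make better use of the
# ExpressRoute gateway bandwidth. Source and target paths are placeholders.
import shutil
from concurrent.futures import ThreadPoolExecutor

files_to_copy = [
    ("/hana/backup/full_01.bak", "/mnt/azure-nfs/full_01.bak"),
    ("/hana/backup/full_02.bak", "/mnt/azure-nfs/full_02.bak"),
    ("/hana/backup/full_03.bak", "/mnt/azure-nfs/full_03.bak"),
]

def copy(pair):
    src, dst = pair
    shutil.copyfile(src, dst)
    return dst

with ThreadPoolExecutor(max_workers=len(files_to_copy)) as pool:
    for finished in pool.map(copy, files_to_copy):
        print(f"copied {finished}")
```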

Networking architecture for HANA Large Instance


The networking architecture for HANA Large Instance can be separated into four different parts:
On-premises networking and ExpressRoute connection to Azure. This part is the customer's domain and is
connected to Azure through ExpressRoute. This ExpressRoute circuit is fully paid by you as a customer. The
bandwidth should be large enough to handle the network traffic between your on-premises assets and the
Azure region you are connecting against. See the lower right in the following figure.
Azure network services, as previously discussed, with virtual networks, which again need ExpressRoute
gateways added. This part is an area where you need to find the appropriate designs for your application
requirements, security, and compliance requirements. Whether you use HANA Large Instance is another point
to consider in terms of the number of virtual networks and Azure gateway SKUs to choose from. See the upper
right in the figure.
Connectivity of HANA Large Instance through ExpressRoute technology into Azure. This part is deployed and handled by Microsoft. All you need to do is provide some IP address ranges and, after the deployment of your assets in HANA Large Instance, connect the ExpressRoute circuit to the virtual networks. For more information, see SAP HANA (Large Instances) infrastructure and connectivity on Azure. There is no additional fee for you as a customer for the connectivity between the Azure data center network fabric and HANA Large Instance units.
Networking within the HANA Large Instance stamp, which is mostly transparent for you.
The requirement that your on-premises assets must connect through ExpressRoute to Azure doesn't change
because you use HANA Large Instance. The requirement to have one or multiple virtual networks that run the VMs,
which host the application layer that connects to the HANA instances hosted in HANA Large Instance units, also
doesn't change.
The differences to SAP deployments in Azure are:
The HANA Large Instance units of your customer tenant are connected through another ExpressRoute circuit
into your virtual networks. To separate load conditions, the on-premises to Azure virtual network ExpressRoute
circuits and the circuits between Azure virtual networks and HANA Large Instances don't share the same
routers.
The workload profile between the SAP application layer and the HANA Large Instance is of a different nature,
with many small requests and bursts like data transfers (result sets) from SAP HANA into the application layer.
The SAP application architecture is more sensitive to network latency than typical scenarios where data is
exchanged between on-premises and Azure.
The Azure ExpressRoute gateway has at least two ExpressRoute connections: one circuit that is connected from on-premises and one that is connected from HANA Large Instances. This leaves room for only two additional circuits from different MSEEs to connect to the ExpressRoute gateway. This restriction is independent of the usage of ExpressRoute Fast Path. All the connected circuits share the maximum bandwidth for incoming data of the ExpressRoute gateway.
With Revision 3 of HANA Large Instance stamps, the network latency experienced between VMs and HANA Large
Instance units can be higher than a typical VM-to-VM network round-trip latency. Depending on the Azure region, the values measured can exceed the 0.7-ms round-trip latency classified as below average in SAP Note #1100926 - FAQ: Network performance. Depending on the Azure region and the tool used to measure network round-trip latency between an Azure VM and a HANA Large Instance unit, the measured latency can be up to and around 2 milliseconds. Nevertheless, customers deploy SAP HANA-based production SAP applications successfully on SAP
HANA Large Instance. Make sure you test your business processes thoroughly in Azure HANA Large Instance. A
new functionality, called ExpressRoute Fast Path, is able to reduce the network latency between HANA Large
Instances and application layer VMs in Azure substantially (see below).
With Revision 4 of HANA Large Instance stamps, the network latency between Azure VMs that are deployed in proximity to the HANA Large Instance stamp meets the average or better-than-average classification documented in SAP Note #1100926 - FAQ: Network performance, if Azure ExpressRoute Fast Path is configured (see below). In order to deploy Azure VMs in close proximity to HANA Large Instance units of Revision 4, you need to leverage Azure proximity placement groups. The way proximity placement groups can be used to locate the SAP application layer in the same Azure datacenter as Revision 4 hosted HANA Large Instance units is described in Azure Proximity Placement Groups for optimal network latency with SAP applications.
To provide deterministic network latency between VMs and HANA Large Instance, the choice of the ExpressRoute
gateway SKU is essential. Unlike the traffic patterns between on-premises and VMs, the traffic pattern between
VMs and HANA Large Instance can develop small but high bursts of requests and data volumes to be transmitted.
To handle such bursts well, we highly recommend the use of the UltraPerformance gateway SKU. For the Type II class of HANA Large Instance SKUs, the use of the UltraPerformance gateway SKU as an ExpressRoute gateway is mandatory.

IMPORTANT
Given the overall network traffic between the SAP application and database layers, only the HighPerformance or
UltraPerformance gateway SKUs for virtual networks are supported for connecting to SAP HANA on Azure (Large Instances).
For HANA Large Instance Type II SKUs, only the UltraPerformance gateway SKU is supported as an ExpressRoute gateway. Exceptions apply when using ExpressRoute Fast Path (see below).

ExpressRoute Fast Path


To lower the latency, ExpressRoute Fast Path was introduced and released in May 2019 for the specific connectivity of HANA Large Instances to Azure virtual networks that host the SAP application VMs. The major difference from the solution rolled out so far is that the data flows between VMs and HANA Large Instances are no longer routed through the ExpressRoute gateway. Instead, the VMs assigned to the subnet(s) of the Azure virtual network communicate directly with the dedicated enterprise edge router.
IMPORTANT
The ExpressRoute Fast Path functionality requires that the subnets running the SAP application VMs are in the same Azure virtual network that is connected to the HANA Large Instances. VMs located in Azure virtual networks that are peered with the Azure virtual network connected directly to the HANA Large Instance units don't benefit from ExpressRoute Fast Path. As a result, in typical hub-and-spoke virtual network designs, where the ExpressRoute circuits connect to a hub virtual network and the virtual networks containing the SAP application layer (spokes) are peered, the optimization by ExpressRoute Fast Path won't work. In addition, ExpressRoute Fast Path doesn't support user-defined routing rules (UDR) today. For more information, see ExpressRoute virtual network gateway and FastPath.

For more details on how to configure ExpressRoute Fast Path, read the document Connect a virtual network to
HANA large instances.

NOTE
An UltraPerformance ExpressRoute gateway is required for ExpressRoute Fast Path to work.

Single SAP system


The on-premises infrastructure previously shown is connected through ExpressRoute into Azure. The ExpressRoute
circuit connects into a Microsoft enterprise edge router (MSEE). For more information, see ExpressRoute technical
overview. After the route is established, it connects into the Azure backbone.

NOTE
To run SAP landscapes in Azure, connect to the enterprise edge router closest to the Azure region in the SAP landscape.
HANA Large Instance stamps are connected through dedicated enterprise edge router devices to minimize network latency
between VMs in Azure IaaS and HANA Large Instance stamps.

The ExpressRoute gateway for the VMs that host SAP application instances is connected to one ExpressRoute circuit that connects to on-premises. The same virtual network is connected to a separate enterprise edge router dedicated to connecting to Large Instance stamps. Using ExpressRoute Fast Path, the data flow from HANA Large Instances to the SAP application layer VMs is no longer routed through the ExpressRoute gateway, which reduces the network round-trip latency.
This system is a straightforward example of a single SAP system. The SAP application layer is hosted in Azure. The
SAP HANA database runs on SAP HANA on Azure (Large Instances). The assumption is that the ExpressRoute
gateway bandwidth of 2-Gbps or 10-Gbps throughput doesn't represent a bottleneck.

Multiple SAP systems or large SAP systems


If multiple SAP systems or large SAP systems are deployed to connect to SAP HANA on Azure (Large Instances),
the throughput of the ExpressRoute gateway might become a bottleneck. Or you want to isolate production and
non-production systems in different Azure virtual networks. In such a case, split the application layers into multiple
virtual networks. You also might create a special virtual network that connects to HANA Large Instance for cases
such as:
Performing backups directly from the HANA instances in HANA Large Instance to a VM in Azure that hosts NFS
shares.
Copying large backups or other files from HANA Large Instance units to disk space managed in Azure.
Use a separate virtual network to host VMs that manage storage for mass transfer of data between HANA Large
Instances and Azure. This arrangement avoids the effects of large file or data transfer from HANA Large Instance to
Azure on the ExpressRoute gateway that serves the VMs that run the SAP application layer.
For a more scalable network architecture:
Leverage multiple virtual networks for a single, larger SAP application layer.
Deploy one separate virtual network for each SAP system deployed, compared to combining these SAP
systems in separate subnets under the same virtual network.
A more scalable networking architecture for SAP HANA on Azure (Large Instances):

Depending on the rules and restrictions you want to apply between the different virtual networks hosting VMs of different SAP systems, you should peer those virtual networks. For more information about virtual network
peering, see Virtual network peering.

Routing in Azure
With the default deployment, three network routing considerations are important for SAP HANA on Azure (Large Instances):
SAP HANA on Azure (Large Instances) can be accessed only through Azure VMs and the dedicated
ExpressRoute connection, not directly from on-premises. Direct access from on-premises to the HANA Large
Instance units, as delivered by Microsoft to you, isn't possible immediately. The transitive routing restrictions
are due to the current Azure network architecture used for SAP HANA Large Instance. Some administration
clients and any applications that need direct access, such as SAP Solution Manager running on-premises,
can't connect to the SAP HANA database. For exceptions check the section 'Direct Routing to HANA Large
Instances'.
If you have HANA Large Instance units deployed in two different Azure regions for disaster recovery, the
same transitive routing restrictions applied in the past. In other words, IP addresses of a HANA Large
Instance unit in one region (for example, US West) were not routed to a HANA Large Instance unit deployed
in another region (for example, US East). This restriction was independent of the use of Azure network
peering across regions or cross-connecting the ExpressRoute circuits that connect HANA Large Instance
units to virtual networks. For a graphic representation, see the figure in the section "Use HANA Large
Instance units in multiple regions." This restriction, which came with the deployed architecture, prohibited
the immediate use of HANA System Replication as disaster recovery functionality. For recent changes, look
up the section 'Use HANA Large Instance units in multiple regions'.
SAP HANA on Azure (Large Instances) units have an assigned IP address from the server IP pool address
range that you submitted when requesting the HANA Large Instance deployment. For more information,
see SAP HANA (Large Instances) infrastructure and connectivity on Azure. This IP address is accessible
through the Azure subscriptions and circuit that connects Azure virtual networks to HANA Large Instances.
The IP address assigned out of that server IP pool address range is directly assigned to the hardware unit.
It's not assigned through NAT anymore, as was the case in the first deployments of this solution.
Direct Routing to HANA Large Instances
By default, the transitive routing does not work in these scenarios:
Between HANA Large Instance units and an on-premises deployment.
Between HANA Large Instance units that are deployed in two different regions.
There are three ways to enable transitive routing in those scenarios:
A reverse proxy to route data to and from, for example, F5 BIG-IP or NGINX with Traffic Manager, deployed in the Azure virtual network that connects to HANA Large Instances and to on-premises as a virtual firewall/traffic routing solution.
Using IPTables rules in a Linux VM to enable routing between on-premises locations and HANA Large Instance
units, or between HANA Large Instance units in different regions. The VM running IPTables needs to be
deployed in the Azure virtual network that connects to HANA Large Instances and to on-premises. The VM
needs to be sized accordingly so that the network throughput of the VM is sufficient for the expected network
traffic. For details on VM network bandwidth, check the article Sizes of Linux virtual machines in Azure.
Azure Firewall would be another solution to enable direct traffic between on-premises and HANA Large
instance units.
All the traffic of these solutions would be routed through an Azure virtual network, and as such the traffic could be additionally restricted by the soft appliances used or by Azure network security groups, so that certain IP addresses or IP address ranges from on-premises could be blocked or explicitly allowed to access HANA Large Instances.
NOTE
Be aware that implementation and support for custom solutions involving third-party network appliances or IPTables isn't
provided by Microsoft. Support must be provided by the vendor of the component used or the integrator.

Express Route Global Reach


Microsoft introduced a new functionality called ExpressRoute Global Reach. Global Reach can be used for HANA
Large Instances in two scenarios:
Enable direct access from on-premises to your HANA Large Instance units deployed in different regions
Enable direct communication between your HANA Large Instance units deployed in different regions
Direct access from on-premises

In the Azure regions where Global Reach is offered, you can request enabling the Global Reach functionality for
your ExpressRoute circuit that connects your on-premises network to the Azure virtual network that connects to
your HANA Large Instance units as well. There are some cost implications for the on-premises side of your
ExpressRoute circuit. For prices, check the prices for Global Reach Add-On. There are no additional costs for you
related to the circuit that connects the HANA Large Instance unit(s) to Azure.

IMPORTANT
In case of using Global Reach for enabling direct access between your HANA Large Instance units and on-premises assets, the network data and control flow is not routed through Azure virtual networks, but directly between the Microsoft enterprise edge routers. As a result, any NSG or ASG rules, or any type of firewall, NVA, or proxy you deployed in an Azure virtual network, aren't touched. If you use ExpressRoute Global Reach to enable direct access from on-premises to HANA Large Instance units, restrictions and permissions to access HANA Large Instance units need to be defined in firewalls on the on-premises side.

Connecting HANA Large Instances in different Azure regions

In the same way, as ExpressRoute Global Reach can be used for connecting on-premises to HANA Large Instance
units, it can be used to connect two HANA Large Instance tenants that are deployed for you in two different
regions. The isolation is the ExpressRoute circuits that your HANA Large Instance tenants are using to connect to
Azure in both regions. There are no additional charges for connecting two HANA Large Instance tenants that are
deployed in two different regions.

IMPORTANT
The data flow and control flow of the network traffic between the different HANA Large Instance tenants will not be routed through Azure networks. As a result, you can't use Azure functionality or NVAs to enforce communication restrictions between your two HANA Large Instance tenants.

For more details on how to get ExpressRoute Global Reach enabled, read the document Connect a virtual network
to HANA large instances.

Internet connectivity of HANA Large Instance


HANA Large Instance does not have direct internet connectivity. As an example, this limitation might restrict your
ability to register the OS image directly with the OS vendor. You might need to work with your local SUSE Linux
Enterprise Server Subscription Management Tool server or Red Hat Enterprise Linux Subscription Manager.

Data encryption between VMs and HANA Large Instance


Data transferred between HANA Large Instance and VMs is not encrypted. However, purely for the exchange
between the HANA DBMS side and JDBC/ODBC-based applications, you can enable encryption of traffic. For more
information, see this documentation by SAP.
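For example, from an application VM using the SAP HANA Python client (hdbcli), traffic encryption is typically requested as a connection property, roughly as sketched below. Host, port, and credentials are placeholders, and the exact property names should be checked against the hdbcli/SQLDBC version you use.

```python
# Hedged sketch: open a TLS-encrypted connection from an application VM to the
# HANA DBMS running on HANA Large Instance. All values are placeholders.
from hdbcli import dbapi

connection = dbapi.connect(
    address="10.0.0.5",                  # HANA Large Instance unit IP (placeholder)
    port=30015,                          # SQL port of the HANA instance (placeholder)
    user="MY_APP_USER",
    password="********",
    encrypt="true",                      # request TLS for the client/server exchange
    sslValidateCertificate="false",      # tighten certificate validation in production
)
print(connection.isconnected())
connection.close()
```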

Use HANA Large Instance units in multiple regions


To realize disaster recovery setups, you need to have HANA Large Instance units in multiple Azure regions. Even with Azure global virtual network peering, the transitive routing by default doesn't work between HANA Large Instance tenants in two different regions. However, Global Reach opens up the communication path between the HANA Large Instance units you have provisioned in two different regions. This usage scenario of ExpressRoute
Global Reach enables:
HANA System Replication without any additional proxies or firewalls
Copying backups between HANA Large Instance units in two different regions to perform system copies or
system refreshes

The figure shows how the different virtual networks in both regions are connected to two different ExpressRoute
circuits that are used to connect to SAP HANA on Azure (Large Instances) in both Azure regions (grey lines).
The reason for these two cross connections is to protect against an outage of the MSEEs on either side. The
communication flow between the two virtual networks in the two Azure regions is supposed to be handled over
the global peering of the two virtual networks in the two different regions (blue dotted line). The thick red line
describes the ExpressRoute Global Reach connection, which allows the HANA Large Instance units of your tenants
in two different regions to communicate with each other.

IMPORTANT
If you used multiple ExpressRoute circuits, AS Path prepending and Local Preference BGP settings should be used to ensure
proper routing of traffic.

Next steps
Refer to SAP HANA (Large Instances) storage architecture
SAP HANA (Large Instances) storage architecture

The storage layout for SAP HANA on Azure (Large Instances) is configured by SAP HANA on the classic
deployment model per SAP recommended guidelines. The guidelines are documented in the SAP HANA storage
requirements white paper.
HANA Large Instance units of the Type I class come with storage volume equal to four times the memory volume. For the Type II class of HANA Large Instance units, the storage isn't four times more. The units come with a volume that is
intended for storing HANA transaction log backups. For more information, see Install and configure SAP HANA
(Large Instances) on Azure.
See the following table in terms of storage allocation. The table lists the rough capacity for the different volumes
provided with the different HANA Large Instance units.

HANA LARGE INSTANCE SKU    HANA/DATA    HANA/LOG    HANA/SHARED    HANA/LOGBACKUPS

S72 1,280 GB 512 GB 768 GB 512 GB

S72m 3,328 GB 768 GB 1,280 GB 768 GB

S96 1,280 GB 512 GB 768 GB 512 GB

S192 4,608 GB 1,024 GB 1,536 GB 1,024 GB

S192m 11,520 GB 1,536 GB 1,792 GB 1,536 GB

S192xm 11,520 GB 1,536 GB 1,792 GB 1,536 GB

S384 11,520 GB 1,536 GB 1,792 GB 1,536 GB

S384m 12,000 GB 2,050 GB 2,050 GB 2,040 GB

S384xm 16,000 GB 2,050 GB 2,050 GB 2,040 GB

S384xxm 20,000 GB 3,100 GB 2,050 GB 3,100 GB

S576m 20,000 GB 3,100 GB 2,050 GB 3,100 GB

S576xm 31,744 GB 4,096 GB 2,048 GB 4,096 GB

S768m 28,000 GB 3,100 GB 2,050 GB 3,100 GB

S768xm 40,960 GB 6,144 GB 4,096 GB 6,144 GB

S960m 36,000 GB 4,100 GB 2,050 GB 4,100 GB

S896m 33,792 GB 512 GB 1,024 GB 512 GB


More recent SKUs of HANA Large Instances are delivered with storage configurations looking like:

HANA LARGE INSTANCE SKU    HANA/DATA    HANA/LOG    HANA/SHARED    HANA/LOGBACKUPS

S224 4,224 GB 512 GB 1,024 GB 512 GB

S224oo 6,336 GB 512 GB 1,024 GB 512 GB

S224m 8,448 GB 512 GB 1,024 GB 512 GB

S224om 8,448 GB 512 GB 1,024 GB 512 GB

S224ooo 10,560 GB 512 GB 1,024 GB 512 GB

S224oom 12,672 GB 512 GB 1,024 GB 512 GB

S448 8,448 GB 512 GB 1,024 GB 512 GB

S448oo 12,672 GB 512 GB 1,024 GB 512 GB

S448m 16,896 GB 512 GB 1,024 GB 512 GB

S448om 16,896 GB 512 GB 1,024 GB 512 GB

S448ooo 21,120 GB 512 GB 1,024 GB 512 GB

S448oom 25,344 GB 512 GB 1,024 GB 512 GB

S672 12,672 GB 512 GB 1,024 GB 512 GB

S672oo 19,008 GB 512 GB 1,024 GB 512 GB

S672m 25,344 GB 512 GB 1,024 GB 512 GB

S672om 25,344 GB 512 GB 1,024 GB 512 GB

S672ooo 31,680 GB 512 GB 1,024 GB 512 GB

S672oom 38,016 GB 512 GB 1,024 GB 512 GB

S896 16,896 GB 512 GB 1,024 GB 512 GB

S896oo 25,344 GB 512 GB 1,024 GB 512 GB

S896om 33,792 GB 512 GB 1,024 GB 512 GB

S896ooo 42,240 GB 512 GB 1,024 GB 512 GB

S896oom 50,688 GB 512 GB 1,024 GB 512 GB

Actual deployed volumes might vary based on deployment and the tool that is used to show the volume sizes.
If you subdivide a HANA Large Instance SKU, a few examples of possible division pieces might look like:
MEMORY PARTITION IN GB    HANA/DATA    HANA/LOG    HANA/SHARED    HANA/LOG/BACKUP

256 400 GB 160 GB 304 GB 160 GB

512 768 GB 384 GB 512 GB 384 GB

768 1,280 GB 512 GB 768 GB 512 GB

1,024 1,792 GB 640 GB 1,024 GB 640 GB

1,536 3,328 GB 768 GB 1,280 GB 768 GB

These sizes are rough volume numbers that can vary slightly based on deployment and the tools used to look at
the volumes. There also are other partition sizes, such as 2.5 TB. These storage sizes are calculated with a formula
similar to the one used for the previous partitions. The term "partitions" doesn't mean that the operating system,
memory, or CPU resources are in any way partitioned. It indicates storage partitions for the different HANA
instances you might want to deploy on one single HANA Large Instance unit.
You might need more storage. You can add storage by purchasing additional storage in 1-TB units. This additional
storage can be added as additional volume. It also can be used to extend one or more of the existing volumes. It
isn't possible to decrease the sizes of the volumes as originally deployed and mostly documented by the previous
tables. It also isn't possible to change the names of the volumes or mount names. The storage volumes previously
described are attached to the HANA Large Instance units as NFS4 volumes.
You can use storage snapshots for backup and restore and disaster recovery purposes. For more information, see
SAP HANA (Large Instances) high availability and disaster recovery on Azure.
Refer to HLI supported scenarios for storage layout details for your scenario.

Run multiple SAP HANA instances on one HANA Large Instance unit
It's possible to host more than one active SAP HANA instance on HANA Large Instance units. To provide the
capabilities of storage snapshots and disaster recovery, such a configuration requires a volume set per instance.
Currently, HANA Large Instance units can be subdivided as follows:
S72, S72m, S96, S144, S192 : In increments of 256 GB, with 256 GB the smallest starting unit. Different
increments such as 256 GB and 512 GB can be combined to the maximum of the memory of the unit.
S144m and S192m : In increments of 256 GB, with 512 GB the smallest unit. Different increments such as 512
GB and 768 GB can be combined to the maximum of the memory of the unit.
Type II class : In increments of 512 GB, with the smallest starting unit of 2 TB. Different increments such as 512
GB, 1 TB, and 1.5 TB can be combined to the maximum of the memory of the unit.
A few examples of running multiple SAP HANA instances might look like the following.

SKU    MEMORY SIZE    STORAGE SIZE    SIZES WITH MULTIPLE DATABASES

S72 768 GB 3 TB 1x768-GB HANA instance


or 1x512-GB instance +
1x256-GB instance
or 3x256-GB instances

S72m 1.5 TB 6 TB 3x512GB HANA instances


or 1x512-GB instance +
1x1-TB instance
or 6x256-GB instances
or 1x1.5-TB instance

S192m 4 TB 16 TB 8x512-GB instances


or 4x1-TB instances
or 4x512-GB instances +
2x1-TB instances
or 4x768-GB instances +
2x512-GB instances
or 1x4-TB instance

S384xm 8 TB 22 TB 4x2-TB instances


or 2x4-TB instances
or 2x3-TB instances + 1x2-
TB instances
or 2x2.5-TB instances + 1x3-
TB instances
or 1x8-TB instance

There are other variations as well.
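As a small illustration of the increment rules above, the following sketch checks whether a proposed set of HANA instance sizes is a plausible subdivision of a given unit. The rules are simplified and illustrative only; confirm the actual constraints for your SKU with SAP HANA on Azure Service Management.

```python
# Check a proposed multi-SID subdivision against memory size and increment rules.
# The rules are simplified from the list above and are illustrative only.
def valid_subdivision(total_memory_gb: int, increment_gb: int, minimum_gb: int,
                      proposed_sizes_gb: list) -> bool:
    if sum(proposed_sizes_gb) > total_memory_gb:
        return False
    return all(size >= minimum_gb and size % increment_gb == 0
               for size in proposed_sizes_gb)

# S72 (768 GB, 256-GB increments, 256-GB minimum): 1x512 GB + 1x256 GB
print(valid_subdivision(768, 256, 256, [512, 256]))                  # True
# S192m (4 TB, 256-GB increments, 512-GB minimum): 4x1 TB
print(valid_subdivision(4096, 256, 512, [1024, 1024, 1024, 1024]))   # True
# Invalid: 384 GB isn't a multiple of the 256-GB increment
print(valid_subdivision(768, 256, 256, [512, 384]))                  # False
```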

Encryption of data at rest


Since the end of 2018, the storage used for HANA Large Instance applies transparent encryption to the data as it's stored on the disks. In earlier deployments, you could choose to get the volumes encrypted. If you decided against that option, you can request to get the volumes encrypted online. The move from non-encrypted to encrypted volumes is transparent and doesn't require downtime.
With the Type I class of SKUs, the volume the boot LUN is stored on is encrypted. In Revision 3 HANA Large Instance stamps using the Type II class of SKUs of HANA Large Instance, you need to encrypt the boot LUN with OS methods. In Revision 4 HANA Large Instance stamps using Type II units, the volume the boot LUN is stored on is encrypted at rest by default as well.

Required settings for larger HANA instances on HANA Large Instances


The storage used in HANA Large Instances has a file size limitation of 16 TB per file. Unlike the file size limitations in EXT3 file systems, HANA isn't implicitly aware of the limitation enforced by the HANA Large Instances storage. As a result, HANA won't automatically create a new data file when the file size limit of 16 TB is reached. As HANA attempts to grow a file beyond 16 TB, HANA reports errors and the index server eventually crashes.

IMPORTANT
In order to prevent HANA trying to grow data files beyond the 16 TB file size limit of HANA Large Instance storage, you
need to set the following parameters in the global.ini configuration file of HANA
datavolume_striping=true
datavolume_striping_size_gb = 15000
See also SAP note #2400005
Be aware of SAP note #2631285
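For illustration, a read-only check of those two parameters against a HANA configuration file could look like the sketch below. The file path (using a placeholder SID) and the [persistence] section placement are assumptions to be verified against SAP Note #2400005; in practice the values are usually set through SQL (ALTER SYSTEM ALTER CONFIGURATION) or the HANA administration tools rather than by editing the file directly.

```python
# Verify the datavolume striping settings required on HANA Large Instance storage.
# Path, SID, and section name are assumptions for illustration only.
import configparser

GLOBAL_INI = "/hana/shared/HN1/global/hdb/custom/config/global.ini"  # placeholder SID "HN1"

EXPECTED = {
    "datavolume_striping": "true",
    "datavolume_striping_size_gb": "15000",
}

config = configparser.ConfigParser()
config.read(GLOBAL_INI)

section = "persistence"  # assumed section; confirm with SAP Note #2400005
for key, expected_value in EXPECTED.items():
    actual = config.get(section, key, fallback=None) if config.has_section(section) else None
    status = "OK" if actual == expected_value else f"MISSING/WRONG (found: {actual})"
    print(f"{key} = {expected_value}: {status}")
```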
Next steps
Refer to Supported scenarios for HANA Large Instances
Supported scenarios for HANA Large Instances

This article describes the supported scenarios and architecture details for HANA Large Instances (HLI).

NOTE
If your required scenario is not mentioned in this article, contact the Microsoft Service Management team to assess your
requirements. Before you set up the HLI unit, validate the design with SAP or your service implementation partner.

Terms and definitions


Let's understand the terms and definitions that are used in this article:
SID: A system identifier for the HANA system
HLI: HANA Large Instances
DR: Disaster recovery
Normal DR: A system setup with a dedicated resource for DR purposes only
Multipurpose DR: A DR-site system that's configured to use a non-production environment alongside a production instance that's configured for a DR event
Single-SID: A system with one instance installed
Multi-SID: A system with multiple instances configured; also called an MCOS environment
HSR: SAP HANA System Replication

Overview
HANA Large Instances supports a variety of architectures to help you accomplish your business requirements.
The following sections cover the architectural scenarios and their configuration details.
The architecture designs are derived purely from an infrastructure perspective; consult SAP or your implementation partners for the HANA deployment itself. If your scenarios aren't listed in this article, contact the Microsoft account team to review the architecture and derive a solution for you.

NOTE
These architectures are fully compliant with SAP's Tailored Data Center Integration (TDI) design and are supported by SAP.

This article describes the details of the two components in each supported architecture:
Ethernet
Storage
Ethernet
Each provisioned server comes preconfigured with sets of Ethernet interfaces. The Ethernet interfaces configured
on each HLI unit are categorized into four types:
A : Used for or by client access.
B : Used for node-to-node communication. This interface is configured on all servers (irrespective of the
topology requested) but used only for scale-out scenarios.
C : Used for node-to-storage connectivity.
D : Used for node-to-iSCSI device connection for STONITH setup. This interface is configured only when an HSR
setup is requested.

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI
B | TYPE I | eth2.tenant | eno3.tenant | Node-to-node
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | STONITH
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Node-to-node
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | STONITH

You choose the interface based on the topology that's configured on the HLI unit. For example, interface “B” is set
up for node-to-node communication, which is useful when you have a scale-out topology configured. This
interface isn't used for single node, scale-up configurations. For more information about interface usage, review
your required scenarios (later in this article).
If necessary, you can define additional NIC cards on your own. However, the configurations of existing NICs can't
be changed.

NOTE
You might find additional interfaces that are physical interfaces or bonding. You should consider only the previously
mentioned interfaces for your use case. Any others can be ignored.

The distribution for units with two assigned IP addresses should look like:
Ethernet “A” should have an assigned IP address that's within the server IP pool address range that you
submitted to Microsoft. This IP address should be maintained in the /etc/hosts file of the OS.
Ethernet “C” should have an assigned IP address that's used for communication to NFS. This address doesn't
need to be maintained in the /etc/hosts file to allow instance-to-instance traffic within the tenant.
For HANA System Replication or HANA scale-out deployments, a blade configuration with two assigned IP
addresses isn't suitable. If you have only two assigned IP addresses and you want to deploy such a configuration,
contact SAP HANA on Azure Service Management. They can assign you a third IP address in a third VLAN. For
HANA Large Instance units with three assigned IP addresses on three NIC ports, the following usage rules apply:
Ethernet “A” should have an assigned IP address that's outside of the server IP pool address range that you
submitted to Microsoft. This IP address shouldn't be maintained in the /etc/hosts file of the OS.
Ethernet “B” should be maintained exclusively in the /etc/hosts file for communication between the
various instances. These are the IP addresses to maintain in scale-out HANA configurations as the IP
addresses that HANA uses for the inter-node configuration (see the example entries after this list).
Ethernet “C” should have an assigned IP address that's used for communication to NFS storage. This type
of address shouldn't be maintained in the /etc/hosts file.
Ethernet “D” should be used exclusively for access to STONITH devices for Pacemaker. This interface is
required when you configure HANA System Replication and want to achieve auto failover of the operating
system by using an SBD-based device.
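For example, the /etc/hosts entries for the Ethernet “B” addresses of a two-node scale-out system might look like the following sketch. The host names and IP addresses are purely hypothetical placeholders; use the values assigned in your own deployment.

# Hypothetical /etc/hosts entries for the inter-node (Ethernet "B") addresses of a scale-out system
10.23.1.11   hana-node1-internode
10.23.1.12   hana-node2-internode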
Storage
Storage is preconfigured based on the requested topology. The volume sizes and mount points vary depending
on the number of servers, the number of SKUs, and the configured topology. For more information, review your
required scenarios (later in this article). If you require more storage, you can purchase it in 1-TB increments.

NOTE
The mount point /usr/sap/<SID> is a symbolic link to the /hana/shared mount point.

Supported scenarios
The architecture diagrams in the next sections use the following notations:

Here are the supported scenarios:


Single node with one SID
Single node MCOS
Single node with DR (normal)
Single node with DR (multipurpose)
HSR with STONITH
HSR with DR (normal/multipurpose)
Host auto failover (1+1)
Scale-out with standby
Scale-out without standby
Scale-out with DR

Single node with one SID


This topology supports one node in a scale-up configuration with one SID.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI
B | TYPE I | eth2.tenant | eno3.tenant | Configured but not in use
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Configured but not in use
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured:

MOUNT POINT | USE CASE
/hana/shared/SID | HANA installation
/hana/data/SID/mnt00001 | Data files installation
/hana/log/SID/mnt00001 | Log files installation
/hana/logbackups/SID | Redo logs

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.

Single node MCOS


This topology supports one node in a scale-up configuration with multiple SIDs.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI
B | TYPE I | eth2.tenant | eno3.tenant | Configured but not in use
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Configured but not in use
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured:

MOUNT POINT | USE CASE
/hana/shared/SID1 | HANA installation for SID1
/hana/data/SID1/mnt00001 | Data files installation for SID1
/hana/log/SID1/mnt00001 | Log files installation for SID1
/hana/logbackups/SID1 | Redo logs for SID1
/hana/shared/SID2 | HANA installation for SID2
/hana/data/SID2/mnt00001 | Data files installation for SID2
/hana/log/SID2/mnt00001 | Log files installation for SID2
/hana/logbackups/SID2 | Redo logs for SID2

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
Volume size distribution is based on the database size in memory. To learn what database sizes in memory are
supported in a multi-SID environment, see Overview and architecture.

Single node with DR using storage replication


This topology supports one node in a scale-up configuration with one or multiple SIDs, with storage-based
replication to the DR site for a primary SID. In the diagram, only a single-SID system is depicted at the primary
site, but MCOS systems are supported as well.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI
B | TYPE I | eth2.tenant | eno3.tenant | Configured but not in use
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Configured but not in use
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
/hana/shared/SID | HANA installation for SID
/hana/data/SID/mnt00001 | Data files installation for SID
/hana/log/SID/mnt00001 | Log files installation for SID
/hana/logbackups/SID | Redo logs for SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in
memory are supported in a multi-SID environment, see Overview and architecture.
At the DR site: The volumes and mount points are configured (marked as “Required for HANA installation”) for
the production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage Replication”) are replicated via
snapshot from the production site. These volumes are mounted during failover only. For more information, see
Disaster recovery failover procedure.
The boot volume for SKU Type I class is replicated to the DR node.

Single node with DR (multipurpose) using storage replication


This topology supports one node in a scale-up configuration with one or multiple SIDs, with storage-based
replication to the DR site for a primary SID. In the diagram, only a single-SID system is depicted at the primary
site, but multi-SID (MCOS) systems are supported as well. At the DR site, the HLI unit is used for the QA instance
while production operations are running from the primary site. During DR failover (or failover test), the QA
instance at the DR site is taken down.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI
B | TYPE I | eth2.tenant | eno3.tenant | Configured but not in use
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Configured but not in use
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
At the primary site
/hana/shared/SID | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID
At the DR site
/hana/shared/SID | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/shared/QA-SID | HANA installation for QA SID
/hana/data/QA-SID/mnt00001 | Data files installation for QA SID
/hana/log/QA-SID/mnt00001 | Log files installation for QA SID
/hana/logbackups/QA-SID | Redo logs for QA SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in
memory are supported in a multi-SID environment, see Overview and architecture.
At the DR site: The volumes and mount points are configured (marked as “Required for HANA installation”) for
the production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage Replication”) are replicated via
snapshot from the production site. These volumes are mounted during failover only. For more information, see
Disaster recovery failover procedure.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as “QA instance installation”)
are configured for the QA instance installation.
The boot volume for SKU Type I class is replicated to the DR node.

HSR with STONITH for high availability


This topology supports two nodes for the HANA System Replication configuration. This configuration is supported only for single HANA instances on a node. This means that MCOS scenarios aren't supported.

NOTE
As of December 2019, this architecture is supported only for the SUSE operating system.

Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI
B | TYPE I | eth2.tenant | eno3.tenant | Configured but not in use
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Used for STONITH
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Configured but not in use
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Used for STONITH

Storage
The following mount points are preconfigured:

MOUNT POINT | USE CASE
On the primary node
/hana/shared/SID | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID
On the secondary node
/hana/shared/SID | HANA installation for secondary SID
/hana/data/SID/mnt00001 | Data files installation for secondary SID
/hana/log/SID/mnt00001 | Log files installation for secondary SID
/hana/logbackups/SID | Redo logs for secondary SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in
memory are supported in a multi-SID environment, see Overview and architecture.
STONITH: An SBD is configured for the STONITH setup. However, the use of STONITH is optional.

High availability with HSR and DR with storage replication


This topology supports two nodes for the HANA System Replication configuration. Both normal and multipurpose
DRs are supported. These configurations are supported only for single HANA instances on a node. This means
that MCOS scenarios are not supported with these configurations.
In the diagram, a multipurpose scenario is depicted at the DR site, where the HLI unit is used for the QA instance
while production operations are running from the primary site. During DR failover (or failover test), the QA
instance at the DR site is taken down.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI
B | TYPE I | eth2.tenant | eno3.tenant | Configured but not in use
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Used for STONITH
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Configured but not in use
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Used for STONITH

Storage
The following mount points are preconfigured:

MOUNT POINT | USE CASE
On the primary node at the primary site
/hana/shared/SID | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID
On the secondary node at the primary site
/hana/shared/SID | HANA installation for secondary SID
/hana/data/SID/mnt00001 | Data files installation for secondary SID
/hana/log/SID/mnt00001 | Log files installation for secondary SID
/hana/logbackups/SID | Redo logs for secondary SID
At the DR site
/hana/shared/SID | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/shared/QA-SID | HANA installation for QA SID
/hana/data/QA-SID/mnt00001 | Data files installation for QA SID
/hana/log/QA-SID/mnt00001 | Log files installation for QA SID
/hana/logbackups/QA-SID | Redo logs for QA SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in
memory are supported in a multi-SID environment, see Overview and architecture.
STONITH: An SBD is configured for the STONITH setup. However, the use of STONITH is optional.
At the DR site: Two sets of storage volumes are required for primary and secondary node replication.
At the DR site: The volumes and mount points are configured (marked as “Required for HANA installation”) for
the production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage Replication”) are replicated via
snapshot from the production site. These volumes are mounted during failover only. For more information, see
Disaster recovery failover procedure.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as “QA instance installation”)
are configured for the QA instance installation.
The boot volume for SKU Type I class is replicated to the DR node.

Host auto failover (1+1)


This topology supports two nodes in a host auto failover configuration. There is one node with a master/worker
role and another as a standby. SAP supports this scenario only for S/4 HANA. For more information, see OSS note
2408419 - SAP S/4HANA - Multi-Node Support.
Architecture diagram

Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI
B | TYPE I | eth2.tenant | eno3.tenant | Node-to-node communication
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Node-to-node communication
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured:

MOUNT POINT | USE CASE
On the master and standby nodes
/hana/shared | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
On standby: The volumes and mount points are configured (marked as “Required for HANA installation”) for
the HANA instance installation at the standby unit.

Scale-out with standby


This topology supports multiple nodes in a scale-out configuration. There is one node with a master role, one or
more nodes with a worker role, and one or more nodes as standby. However, there can be only one master node
at any single point in time.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI
B | TYPE I | eth2.tenant | eno3.tenant | Node-to-node communication
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Node-to-node communication
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
On the master, worker, and standby nodes
/hana/shared | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID

Scale-out without standby


This topology supports multiple nodes in a scale-out configuration. There is one node with a master role, and one
or more nodes with a worker role. However, there can be only one master node at any single point in time.
Architecture diagram

Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI
B | TYPE I | eth2.tenant | eno3.tenant | Node-to-node communication
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Node-to-node communication
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured:

MOUNT POINT | USE CASE
On the master and worker nodes
/hana/shared | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.

Scale-out with DR using storage replication


This topology supports multiple nodes in a scale-out with a DR. Both normal and multipurpose DRs are
supported. In the diagram, only the single purpose DR is depicted. You can request this topology with or without
the standby node.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI
B | TYPE I | eth2.tenant | eno3.tenant | Node-to-node communication
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Node-to-node communication
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured:

MOUNT POINT | USE CASE
On the primary node
/hana/shared | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID
On the DR node
/hana/shared | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured (marked as “Required for HANA installation”) for
the production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage Replication”) are replicated via
snapshot from the production site. These volumes are mounted during failover only. For more information, see
Disaster recovery failover procedure.
The boot volume for SKU Type I class is replicated to the DR node.

Single node with DR using HSR


This topology supports one node in a scale-up configuration with one SID, with HANA System Replication to the
DR site for a primary SID. In the diagram, only a single-SID system is depicted at the primary site, but multi-SID
(MCOS) systems are supported as well.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI/HSR
B | TYPE I | eth2.tenant | eno3.tenant | Configured but not in use
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI/HSR
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Configured but not in use
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured on both the HLI units (Primary and DR):

MOUNT POINT | USE CASE
/hana/shared/SID | HANA installation for SID
/hana/data/SID/mnt00001 | Data files installation for SID
/hana/log/SID/mnt00001 | Log files installation for SID
/hana/logbackups/SID | Redo logs for SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in
memory are supported in a multi-SID environment, see Overview and architecture.
The primary node syncs with the DR node by using HANA System Replication.
Global Reach is used to link the ExpressRoute circuits together to make a private network between your
regional networks.

Single node HSR to DR (cost optimized)


This topology supports one node in a scale-up configuration with one SID, with HANA System Replication to the
DR site for a primary SID. In the diagram, only a single-SID system is depicted at the primary site, but multi-SID
(MCOS) systems are supported as well. At the DR site, an HLI unit is used for the QA instance while production
operations are running from the primary site. During DR failover (or failover test), the QA instance at the DR site
is taken down.
Architecture diagram

Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI/HSR
B | TYPE I | eth2.tenant | eno3.tenant | Configured but not in use
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI/HSR
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Configured but not in use
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
At the primary site
/hana/shared/SID | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID
At the DR site
/hana/shared/SID | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID
/hana/shared/QA-SID | HANA installation for QA SID
/hana/data/QA-SID/mnt00001 | Data files installation for QA SID
/hana/log/QA-SID/mnt00001 | Log files installation for QA SID
/hana/logbackups/QA-SID | Redo logs for QA SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in
memory are supported in a multi-SID environment, see Overview and architecture.
At the DR site: The volumes and mount points are configured (marked as “PROD Instance at DR site”) for the
production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as “QA instance installation”)
are configured for the QA instance installation.
The primary node syncs with the DR node by using HANA System Replication.
Global Reach is used to link the ExpressRoute circuits together to make a private network between your
regional networks.

High availability and disaster recovery with HSR


This topology supports two nodes for the HANA System Replication configuration for high availability in the local region. For DR, the third node at the DR region syncs with the primary site by using HSR (async mode).
Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI/HSR
B | TYPE I | eth2.tenant | eno3.tenant | Configured but not in use
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI/HSR
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Configured but not in use
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured:
MOUNT POINT | USE CASE
At the primary site
/hana/shared/SID | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID
At the DR site
/hana/shared/SID | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured (marked as “PROD DR instance”) for the
production HANA instance installation at the DR HLI unit.
The primary site node syncs with the DR node by using HANA System Replication.
Global Reach is used to link the ExpressRoute circuits together to make a private network between your
regional networks.

High availability and disaster recovery with HSR (cost optimized)


This topology supports two nodes for the HANA System Replication configuration for high availability in the local region. For DR, the third node at the DR region syncs with the primary site by using HSR (async mode), while another instance (for example, QA) is already running on the DR node.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI/HSR
B | TYPE I | eth2.tenant | eno3.tenant | Configured but not in use
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI/HSR
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Configured but not in use
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured:

MOUNT POINT | USE CASE
At the primary site
/hana/shared/SID | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID
At the DR site
/hana/shared/SID | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID
/hana/shared/QA-SID | HANA installation for QA SID
/hana/data/QA-SID/mnt00001 | Data files installation for QA SID
/hana/log/QA-SID/mnt00001 | Log files installation for QA SID
/hana/logbackups/QA-SID | Redo logs for QA SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured (marked as “PROD DR instance”) for the
production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as “QA instance installation”)
are configured for the QA instance installation.
The primary site node syncs with the DR node by using HANA System Replication.
Global Reach is used to link the ExpressRoute circuits together to make a private network between your
regional networks.

Scale-out with DR using HSR


This topology supports multiple nodes in a scale-out with a DR. You can request this topology with or without the
standby node. The primary site node syncs with the DR site node by using HANA System Replication (async
mode).
Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC LOGICAL INTERFACE | SKU TYPE | NAME WITH SUSE OS | NAME WITH RHEL OS | USE CASE
A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI/HSR
B | TYPE I | eth2.tenant | eno3.tenant | Node-to-node communication
C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage
D | TYPE I | eth4.tenant | eno4.tenant | Configured but not in use
A | TYPE II | vlan<tenantNo> | team0.tenant | Client-to-HLI/HSR
B | TYPE II | vlan<tenantNo+2> | team0.tenant+2 | Node-to-node communication
C | TYPE II | vlan<tenantNo+1> | team0.tenant+1 | Node-to-storage
D | TYPE II | vlan<tenantNo+3> | team0.tenant+3 | Configured but not in use

Storage
The following mount points are preconfigured:

MOUNT POINT | USE CASE
On the primary node
/hana/shared | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID
On the DR node
/hana/shared | HANA installation for production SID
/hana/data/SID/mnt00001 | Data files installation for production SID
/hana/log/SID/mnt00001 | Log files installation for production SID
/hana/logbackups/SID | Redo logs for production SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured for the production HANA instance installation at
the DR HLI unit.
The primary site node syncs with the DR node by using HANA System Replication.
Global Reach is used to link the ExpressRoute circuits together to make a private network between your
regional networks.

Next steps
Infrastructure and connectivity for HANA Large Instances
High availability and disaster recovery for HANA Large Instances
SAP HANA (large instances) deployment

This article assumes that you've completed your purchase of SAP HANA on Azure (large instances) from
Microsoft. Before reading this article, for general background, see HANA large instances common terms and
HANA large instances SKUs.
Microsoft requires the following information to deploy HANA large instance units:
Customer name.
Business contact information (including email address and phone number).
Technical contact information (including email address and phone number).
Technical networking contact information (including email address and phone number).
Azure deployment region (for example, West US, Australia East, or North Europe).
SAP HANA on Azure (large instances) SKU (configuration).
For every Azure deployment region:
A /29 IP address range for ER-P2P connections that connect Azure virtual networks to HANA large
instances.
A /24 CIDR Block used for the HANA large instances server IP pool.
Optionally, when using ExpressRoute Global Reach to enable direct routing from on-premises to HANA
Large Instance units, or routing between HANA Large Instance units in different Azure regions, you need
to reserve another /29 IP address range. This particular range may not overlap with any of the other IP
address ranges you defined before.
The IP address range values used in the virtual network address space attribute of every Azure virtual network
that connects to the HANA large instances.
Data for each HANA large instances system:
Desired hostname, ideally with a fully qualified domain name.
Desired IP address for the HANA large instance unit out of the Server IP pool address range. (The first
30 IP addresses in the server IP pool address range are reserved for internal use within HANA large
instances.)
SAP HANA SID name for the SAP HANA instance (required to create the necessary SAP HANA-related
disk volumes). Microsoft needs the HANA SID for creating the permissions for sidadm on the NFS
volumes. These volumes attach to the HANA large instance unit. The HANA SID is also used as one of
the name components of the disk volumes that get mounted. If you want to run more than one HANA
instance on the unit, you should list multiple HANA SIDs. Each one gets a separate set of volumes
assigned.
In the Linux OS, the sidadm user has a group ID. This ID is required to create the necessary SAP HANA-
related disk volumes. The SAP HANA installation usually creates the sapsys group, with a group ID of
1001. The sidadm user is part of that group.
In the Linux OS, the sidadm user has a user ID. This ID is required to create the necessary SAP HANA-
related disk volumes. If you're running several HANA instances on the unit, list all the sidadm users.
The Azure subscription ID for the Azure subscription to which SAP HANA on Azure HANA large instances are
going to be directly connected. This subscription ID references the Azure subscription, which is going to be
charged with the HANA large instance unit or units.
After you provide the preceding information, Microsoft provisions SAP HANA on Azure (large instances).
Microsoft sends you information to link your Azure virtual networks to HANA large instances. You can also access
the HANA large instance units.
Use the following sequence to connect to the HANA large instances after Microsoft has deployed it:
1. Connecting Azure VMs to HANA large instances
2. Connecting a VNet to HANA large instances ExpressRoute
3. Additional network requirements (optional)
Connecting Azure VMs to HANA Large Instances

The article What is SAP HANA on Azure (Large Instances)? mentions that the minimal deployment of HANA Large
Instances with the SAP application layer in Azure looks like the following:

Looking closer at the Azure virtual network side, there is a need for:
The definition of an Azure virtual network into which you're going to deploy the VMs of the SAP application
layer.
The definition of a default subnet in the Azure virtual network that is really the one into which the VMs are
deployed.
The Azure virtual network that's created needs to have at least one VM subnet and one Azure ExpressRoute
virtual network gateway subnet. These subnets should be assigned the IP address ranges as specified and
discussed in the following sections.

Create the Azure virtual network for HANA Large Instances


NOTE
The Azure virtual network for HANA Large Instances must be created by using the Azure Resource Manager deployment
model. The older Azure deployment model, commonly known as the classic deployment model, isn't supported by the HANA
Large Instance solution.

You can use the Azure portal, PowerShell, an Azure template, or the Azure CLI to create the virtual network. (For
more information, see Create a virtual network using the Azure portal). In the following example, we look at a
virtual network that's created by using the Azure portal.
When we refer to the address space in this documentation, we mean the address space that the Azure virtual network is allowed to use. This address space is also the address range that the virtual network uses for BGP route propagation. This address space can be seen here:
In the previous example, with 10.16.0.0/16, the Azure virtual network was given a rather large and wide IP address range to use. Therefore, all the IP address ranges of subsequent subnets within this virtual network can have their ranges within that address space. We don't usually recommend such a large address range for a single virtual network in Azure. But let's look into the subnets that are defined in the Azure virtual network:

We look at a virtual network with a first VM subnet (here called "default") and a subnet called "GatewaySubnet".
In the two previous graphics, the virtual network address space covers both the subnet IP address range of the Azure VM and that of the virtual network gateway.
You can restrict the virtual network address space to the specific ranges used by each subnet. You can also define the virtual network address space of a virtual network as multiple specific ranges, as shown here:
In this case, the virtual network address space has two spaces defined. They are the same as the IP address ranges that are defined for the subnet IP address range of the Azure VM and the virtual network gateway.
You can use any naming standard you like for these tenant subnets (VM subnets). However, there must always be one, and only one, gateway subnet for each virtual network that connects to the SAP HANA on Azure (Large Instances) ExpressRoute circuit. This gateway subnet has to be named "GatewaySubnet" to make sure that the ExpressRoute gateway is properly placed.

WARNING
It's critical that the gateway subnet always be named "GatewaySubnet".

You can use multiple VM subnets and non-contiguous address ranges. These address ranges must be covered by the virtual network address space of the virtual network. They can be in an aggregated form. They can also be in a list of the exact ranges of the VM subnets and the gateway subnet.
Following is a summary of the important facts about an Azure virtual network that connects to HANA Large
Instances:
You must submit the virtual network address space to Microsoft when you're performing an initial deployment of HANA Large Instances.
The virtual network address space can be one larger range that covers the ranges for both the subnet IP address range of the Azure VM and the virtual network gateway.
Or you can submit multiple ranges that cover the different IP address ranges of VM subnet IP address range(s) and the virtual network gateway IP address range.
The defined virtual network address space is used for BGP routing propagation.
The name of the gateway subnet must be: "GatewaySubnet".
The address space is used as a filter on the HANA Large Instance side to allow or disallow traffic to the HANA Large Instance units from Azure. The BGP routing information of the Azure virtual network and the IP address ranges that are configured for filtering on the HANA Large Instance side should match. Otherwise, connectivity issues can occur.
There are some details about the gateway subnet that are discussed later, in the section Connecting a virtual network to HANA Large Instance ExpressRoute.
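As an illustration, the following is a minimal PowerShell sketch of creating such a virtual network with one VM subnet and the mandatory gateway subnet. The resource group, virtual network name, region, and address ranges are examples only; substitute the ranges you defined for your deployment.

# Example only: create a virtual network with a VM subnet and the mandatory "GatewaySubnet"
$myGroupName = "SAP-East-Coast"        # example resource group
$myAzureRegion = "eastus"              # example Azure region
$vmSubnet = New-AzVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.0.1.0/24"
$gwSubnet = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.0.2.0/28"
New-AzVirtualNetwork -Name "VNet01" -ResourceGroupName $myGroupName -Location $myAzureRegion `
    -AddressPrefix "10.0.1.0/24","10.0.2.0/28" -Subnet $vmSubnet,$gwSubnet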

Different IP address ranges to be defined


Some of the IP address ranges that are necessary for deploying HANA Large Instances got introduced already. But
there are more IP address ranges that are also important. Not all of the following IP address ranges need to be
submitted to Microsoft. However, you do need to define them before sending a request for initial deployment:
Virtual network address space: The virtual network address space is the IP address range(s) that you assign to your address space parameter in the Azure virtual networks that connect to the SAP HANA Large Instance environment. We recommend that this address space parameter is a multi-line value. It should consist of the subnet range of the Azure VM and the subnet range(s) of the Azure gateway, as shown in the previous graphics. It must NOT overlap with your on-premises or server IP pool or ER-P2P address ranges. How do you get these IP address range(s)? Your corporate network team or service provider should provide one or multiple IP address range(s) that aren't used inside your network. For example, if the subnet of your Azure VM is 10.0.1.0/24, and the subnet of your Azure gateway subnet is 10.0.2.0/28, we recommend that your Azure virtual network address space is defined as 10.0.1.0/24 and 10.0.2.0/28. Although the address space values can be aggregated, we recommend matching them to the subnet ranges. This way you can avoid accidentally reusing unused IP address ranges within larger address spaces elsewhere in your network. The virtual network address space is an IP address range that needs to be submitted to Microsoft when you ask for an initial deployment.
Azure VM subnet IP address range: This IP address range is the one you assign to the Azure virtual network
subnet parameter. This parameter is in your Azure virtual network and connects to the SAP HANA Large
Instance environment. This IP address range is used to assign IP addresses to your Azure VMs. The IP addresses
out of this range are allowed to connect to your SAP HANA Large Instance server(s). If needed, you can use
multiple Azure VM subnets. We recommend a /24 CIDR block for each Azure VM subnet. This address range
must be a part of the values that are used in the Azure virtual network address space. How do you get this IP
address range? Your corporate network team or service provider should provide an IP address range that isn't
being used inside your network.
Virtual network gateway subnet IP address range: Depending on the features that you plan to use, the recommended size is:
Ultra-performance ExpressRoute gateway: /26 address block--required for Type II class of SKUs.
Coexistence with VPN and ExpressRoute using a high-performance ExpressRoute virtual network gateway (or smaller): /27 address block.
All other situations: /28 address block.
This address range must be a part of the values that are used in the Azure virtual network address space values that you submit to Microsoft. How do you get this IP address range? Your corporate network team or service provider should provide an IP address range that's not currently being used inside your network.
Address range for ER-P2P connectivity: This range is the IP range for your SAP HANA Large Instance ExpressRoute (ER) P2P connection. This range of IP addresses must be a /29 CIDR IP address range. It must NOT overlap with your on-premises or other Azure IP address ranges. This IP address range is used to set up the ER connectivity from your ExpressRoute virtual gateway to the SAP HANA Large Instance servers. How do you get this IP address range? Your corporate network team or service provider should provide an IP address range that's not currently being used inside your network. This range is an IP address range that needs to be submitted to Microsoft when you ask for an initial deployment.
Server IP pool address range: This IP address range is used to assign the individual IP address to HANA Large Instance servers. The recommended subnet size is a /24 CIDR block. If needed, it can be smaller, with as few as 64 IP addresses. From this range, the first 30 IP addresses are reserved for use by Microsoft. Make sure that you account for this fact when you choose the size of the range. This range must NOT overlap with your on-premises or other Azure IP addresses. How do you get this IP address range? Your corporate network team or service provider should provide an IP address range that's not currently being used inside your network. This range is an IP address range that needs to be submitted to Microsoft when asking for an initial deployment.
Optional IP address ranges that eventually need to be submitted to Microsoft:
If you choose to use ExpressRoute Global Reach to enable direct routing from on-premises to HANA Large
Instance units, you need to reserve another /29 IP address range. This range may not overlap with any of the
other IP address ranges you defined before.
If you choose to use ExpressRoute Global Reach to enable direct routing from a HANA Large Instance tenant in
one Azure region to another HANA Large Instance tenant in another Azure region, you need to reserve another
/29 IP address range. This range may not overlap with any of the other IP address ranges you defined before.
For more information about ExpressRoute Global Reach and usage around HANA large instances, check the
documents:
SAP HANA (Large Instances) network architecture
Connect a virtual network to HANA large instances
You need to define and plan the IP address ranges that were described previously. However, you don't need to
transmit all of them to Microsoft. The IP address ranges that you are required to name to Microsoft are:
Azure virtual network address space(s)
Address range for ER-P2P connectivity
Server IP pool address range
If you add additional virtual networks that need to connect to HANA Large Instances, you have to submit the new
Azure virtual network address space that you're adding to Microsoft.
Following is an example of the different ranges and some example ranges as you need to configure and eventually
provide to Microsoft. The value for the Azure virtual network address space isn't aggregated in the first example.
However, it is defined from the ranges of the first Azure VM subnet IP address range and the virtual network
gateway subnet IP address range.
You can use multiple VM subnets within the Azure virtual network when you configure and submit the additional IP
address ranges of the additional VM subnet(s) as part of the Azure virtual network address space.

The graphic does not show the additional IP address range(s) that are required for the optional use of ExpressRoute
Global Reach.
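For orientation only, a hypothetical, non-aggregated set of ranges, reusing the example subnets from earlier in this article (the ER-P2P and server IP pool values below are placeholders, not recommendations), could look like this:

Azure virtual network address space: 10.0.1.0/24 and 10.0.2.0/28
Azure VM subnet IP address range: 10.0.1.0/24
Virtual network gateway subnet IP address range: 10.0.2.0/28
Address range for ER-P2P connectivity: 10.0.3.0/29 (placeholder)
Server IP pool address range: 10.3.0.0/24 (placeholder)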
You can also aggregate the data that you submit to Microsoft. In that case, the address space of the Azure virtual
network only includes one space. Using the IP address ranges from the earlier example, the aggregated virtual
network address space could look like the following image:
In the example, instead of two smaller ranges that defined the address space of the Azure virtual network, we have
one larger range that covers 4096 IP addresses. Such a large definition of the address space leaves some rather
large ranges unused. Since the virtual network address space value(s) are used for BGP route propagation, usage
of the unused ranges on-premises or elsewhere in your network can cause routing issues. The graphic does not
show the additional IP address range(s) that are required for the optional use of ExpressRoute Global Reach.
We recommend that you keep the address space tightly aligned with the actual subnet address space that you use.
If needed, without incurring downtime on the virtual network, you can always add new address space values later.

IMPORTANT
Each IP address range in ER-P2P, the server IP pool, and the Azure virtual network address space must NOT overlap with one
another or with any other range that's used in your network. Each must be discrete. As the two previous graphics show, they
also can't be a subnet of any other range. If overlaps occur between ranges, the Azure virtual network might not connect to
the ExpressRoute circuit.

Next steps after address ranges have been defined


After the IP address ranges have been defined, the following things need to happen:
1. Submit the IP address ranges for the Azure virtual network address space, the ER-P2P connectivity, and server
IP pool address range, together with other data that has been listed at the beginning of the document. At this
point, you could also start to create the virtual network and the VM subnets.
2. An ExpressRoute circuit is created by Microsoft between your Azure subscription and the HANA Large Instance
stamp.
3. A tenant network is created on the Large Instance stamp by Microsoft.
4. Microsoft configures networking in the SAP HANA on Azure (Large Instances) infrastructure to accept IP
addresses from your Azure virtual network address space that communicates with HANA Large Instances.
5. Depending on the specific SAP HANA on Azure (Large Instances) SKU that you bought, Microsoft assigns a
compute unit in a tenant network. It also allocates and mounts storage, and installs the operating system (SUSE
or Red Hat Linux). IP addresses for these units are taken out of the Server IP Pool address range that you
submitted to Microsoft.
At the end of the deployment process, Microsoft delivers the following data to you:
Information that's needed to connect your Azure virtual network(s) to the ExpressRoute circuit that connects
Azure virtual networks to HANA Large Instances:
Authorization key(s)
ExpressRoute PeerID
Data for accessing HANA Large Instances after you establish ExpressRoute circuit and Azure virtual network.
You can also find the sequence of connecting HANA Large Instances in the document SAP HANA on Azure (Large
Instances) Setup. Many of the following steps are shown in an example deployment in that document.

Next steps
Refer to Connecting a virtual network to HANA Large Instance ExpressRoute.
Connect a virtual network to HANA large instances

After you've created an Azure virtual network, you can connect that network to SAP HANA on Azure large
instances. Create an Azure ExpressRoute gateway on the virtual network. This gateway enables you to link the
virtual network to the ExpressRoute circuit that connects to the customer tenant on the HANA Large Instance
stamp.

NOTE
This step can take up to 30 minutes to complete. The new gateway is created in the designated Azure subscription, and
then connected to the specified Azure virtual network.

NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will
continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM
compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation instructions, see Install Azure
PowerShell.

If a gateway already exists, check whether it's an ExpressRoute gateway or not. If it is not an ExpressRoute gateway,
delete the gateway, and re-create it as an ExpressRoute gateway. If an ExpressRoute gateway is already established,
see the following section of this article, "Link virtual networks."
Use either the Azure portal or PowerShell to create an ExpressRoute VPN gateway connected to your virtual
network.
If you use the Azure portal, add a new Virtual Network Gateway, and then select ExpressRoute as the gateway type.
If you use PowerShell, first download and use the latest Azure PowerShell SDK.
The following commands create an ExpressRoute gateway. The texts preceded by a $ are user-defined variables
that should be updated with your specific information.
# These Values should already exist, update to match your environment
$myAzureRegion = "eastus"
$myGroupName = "SAP-East-Coast"
$myVNetName = "VNet01"

# These values are used to create the gateway, update for how you wish the GW components to be named
$myGWName = "VNet01GW"
$myGWConfig = "VNet01GWConfig"
$myGWPIPName = "VNet01GWPIP"
$myGWSku = "HighPerformance" # Supported values for HANA large instances are: HighPerformance or
UltraPerformance

# These Commands create the Public IP and ExpressRoute Gateway


$vnet = Get-AzVirtualNetwork -Name $myVNetName -ResourceGroupName $myGroupName
$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
New-AzPublicIpAddress -Name $myGWPIPName -ResourceGroupName $myGroupName `
-Location $myAzureRegion -AllocationMethod Dynamic
$gwpip = Get-AzPublicIpAddress -Name $myGWPIPName -ResourceGroupName $myGroupName
$gwipconfig = New-AzVirtualNetworkGatewayIpConfig -Name $myGWConfig -SubnetId $subnet.Id `
-PublicIpAddressId $gwpip.Id

New-AzVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName -Location $myAzureRegion `
  -IpConfigurations $gwipconfig -GatewayType ExpressRoute `
  -GatewaySku $myGWSku -VpnType PolicyBased -EnableBgp $true

In this example, the HighPerformance gateway SKU was used. HighPerformance and UltraPerformance are the
only gateway SKUs that are supported for SAP HANA on Azure (Large Instances).

IMPORTANT
For HANA large instances of the Type II class SKU, you must use the UltraPerformance Gateway SKU.

Link virtual networks


The Azure virtual network now has an ExpressRoute gateway. Use the authorization information provided by
Microsoft to connect the ExpressRoute gateway to the SAP HANA Large Instances ExpressRoute circuit. You can
connect by using the Azure portal or PowerShell. The PowerShell instructions are as follows.
Run the following commands for each ExpressRoute gateway by using a different AuthGUID for each connection.
The first two entries shown in the following script come from the information provided by Microsoft. Also, the
AuthGUID is specific to every virtual network and its gateway. If you want to add another Azure virtual network,
you need to get another AuthGUID from Microsoft for the ExpressRoute circuit that connects HANA Large
Instances into Azure.
# Populate with information provided by Microsoft Onboarding team
$PeerID = "/subscriptions/9cb43037-9195-4420-a798-f87681a0e380/resourceGroups/Customer-USE-Circuits/providers/Microsoft.Network/expressRouteCircuits/Customer-USE01"
$AuthGUID = "76d40466-c458-4d14-adcf-3d1b56d1cd61"

# Your ExpressRoute Gateway information


$myGroupName = "SAP-East-Coast"
$myGWName = "VNet01GW"
$myGWLocation = "East US"

# Define the name for your connection


$myConnectionName = "VNet01GWConnection"

# Create a new connection between the ER Circuit and your Gateway using the Authorization
$gw = Get-AzVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName

New-AzVirtualNetworkGatewayConnection -Name $myConnectionName `
  -ResourceGroupName $myGroupName -Location $myGWLocation -VirtualNetworkGateway1 $gw `
  -PeerId $PeerID -ConnectionType ExpressRoute -AuthorizationKey $AuthGUID -ExpressRouteGatewayBypass

NOTE
The last parameter in the command New-AzVirtualNetworkGatewayConnection, ExpressRouteGatewayBypass, is a new
parameter that enables ExpressRoute Fast Path, a functionality that reduces network latency between your HANA Large
Instance units and Azure VMs. The functionality was added in May 2019. For more details, check the article SAP HANA
(Large Instances) network architecture. Make sure that you are running the latest version of the PowerShell cmdlets before
running the commands.

To connect the gateway to more than one ExpressRoute circuit associated with your subscription, you might need
to run this step more than once. For example, you're likely going to connect the same virtual network gateway to
the ExpressRoute circuit that connects the virtual network to your on-premises network.

Applying ExpressRoute Fast Path to existing HANA Large Instance ExpressRoute circuits

The documentation so far explained how to connect a new ExpressRoute circuit that was created with a HANA
Large Instance deployment to an Azure ExpressRoute gateway of one of your Azure virtual networks. But many
customers already have their ExpressRoute circuits set up and their virtual networks connected to HANA Large
Instances. Because ExpressRoute Fast Path reduces network latency, it's recommended that you apply the change
to use this functionality. The commands to connect a new ExpressRoute circuit and to change an existing
ExpressRoute circuit are the same. As a result, run the following sequence of PowerShell commands to change an
existing circuit to use ExpressRoute Fast Path.
# Populate with information provided by Microsoft Onboarding team
$PeerID = "/subscriptions/9cb43037-9195-4420-a798-f87681a0e380/resourceGroups/Customer-USE-Circuits/providers/Microsoft.Network/expressRouteCircuits/Customer-USE01"
$AuthGUID = "76d40466-c458-4d14-adcf-3d1b56d1cd61"

# Your ExpressRoute Gateway information


$myGroupName = "SAP-East-Coast"
$myGWName = "VNet01GW"
$myGWLocation = "East US"

# Define the name for your connection


$myConnectionName = "VNet01GWConnection"

# Create a new connection between the ER Circuit and your Gateway using the Authorization
$gw = Get-AzVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName

New-AzVirtualNetworkGatewayConnection -Name $myConnectionName `
  -ResourceGroupName $myGroupName -Location $myGWLocation -VirtualNetworkGateway1 $gw `
  -PeerId $PeerID -ConnectionType ExpressRoute -AuthorizationKey $AuthGUID -ExpressRouteGatewayBypass

It's important that you add the last parameter as displayed above to enable the ExpressRoute Fast Path
functionality.

ExpressRoute Global Reach


If you want to enable Global Reach for one or both of the following two scenarios:
HANA System Replication without any additional proxies or firewalls
Copying backups between HANA Large Instance units in two different regions to perform system copies or
system refreshes
you need to consider the following:
You need to provide an address space range of a /29 address space. That address range may not overlap with
any of the other address space ranges that you used so far to connect HANA Large Instances to Azure, and
may not overlap with any of the IP address ranges you used elsewhere in Azure or on-premises.
There's a limitation on the ASNs (Autonomous System Numbers) that can be used to advertise your on-
premises routes to HANA Large Instances. Your on-premises network must not advertise any routes with
private ASNs in the range of 65000 – 65020 or 65515.
For the scenario of connecting on-premises directly to HANA Large Instances, you need to account for a fee
for the circuit that connects you to Azure. For prices, check the prices for the Global Reach Add-On.
To get one or both of the scenarios applied to your deployment, open a support message with Azure as described
in Open a support request for HANA Large Instances.
The data that's needed, and the keywords you need to use so that Microsoft can route and execute your request,
look like this:
Service: SAP HANA Large Instance
Problem type: Configuration and Setup
Problem subtype: My problem is not listed above
Subject: 'Modify my Network - add Global Reach'
Details: 'Add Global Reach to HANA Large Instance to HANA Large Instance tenant' or 'Add Global Reach to on-
premises to HANA Large Instance tenant'.
Additional details for the HANA Large Instance to HANA Large Instance tenant case: You need to define the two
Azure regions where the two tenants to connect are located, and you need to submit the /29 IP address range.
Additional details for the on-premises to HANA Large Instance tenant case: You need to define the Azure
region where the HANA Large Instance tenant is deployed that you want to connect to directly. Additionally, you
need to provide the Auth GUID and circuit peer ID that you received when you established your ExpressRoute
circuit between on-premises and Azure. You also need to name your ASN. The last deliverable is a /29 IP address
range for ExpressRoute Global Reach.

NOTE
If you want to have both cases handled, you need to supply two different /29 IP address ranges that do not overlap with
any other IP address range used so far.

Next steps
Additional network requirements for HLI
Additional network requirements for large instances

You might have additional network requirements as part of a deployment of large instances of SAP HANA on
Azure.

Add more IP addresses or subnets


Use either the Azure portal, PowerShell, or the Azure CLI when you add more IP addresses or subnets.
Add the new IP address range as a new range to the virtual network address space, instead of generating a new
aggregated range. Submit this change to Microsoft. This enables you to connect from that new IP address range to
the HANA large instance units in your client. You can open an Azure support request to get the new virtual network
address space added. After you receive confirmation, perform the next steps.
To create an additional subnet from the Azure portal, see Create a virtual network using the Azure portal. To create
one from PowerShell, see Create a virtual network using PowerShell.
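For illustration, a sketch of this task with the Azure CLI could look like the following. The resource group and virtual network names reuse the examples from earlier in this document, the address prefixes are placeholders, and note that az network vnet update expects the complete list of address prefixes (existing plus new):

# Add a second address range to the virtual network (list the existing range plus the new one)
az network vnet update --resource-group SAP-East-Coast --name VNet01 \
    --address-prefixes 10.0.0.0/16 10.1.0.0/24

# Create a subnet inside the newly added range
az network vnet subnet create --resource-group SAP-East-Coast --vnet-name VNet01 \
    --name HANA-Subnet2 --address-prefixes 10.1.0.0/26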

Add virtual networks


After initially connecting one or more Azure virtual networks, you might want to connect additional ones that
access SAP HANA on Azure (large instances). First, submit an Azure support request. In that request, include the
specific information identifying the particular Azure deployment. Also include the IP address space range or ranges
of the Azure virtual network address space. SAP HANA on Microsoft Service Management then provides the
necessary information you need to connect the additional virtual networks and Azure ExpressRoute. For every
virtual network, you need a unique authorization key to connect to the ExpressRoute circuit to HANA large
instances.

Increase ExpressRoute circuit bandwidth


Consult with SAP HANA on Microsoft Service Management. If they advise you to increase the bandwidth of the
SAP HANA on Azure (large instances) ExpressRoute circuit, create an Azure support request. (You can request an
increase for a single circuit bandwidth up to a maximum of 10 Gbps.) You then receive notification after the
operation is complete; you don't need to do anything else to enable this higher speed in Azure.

Add an additional ExpressRoute circuit


Consult with SAP HANA on Microsoft Service Management. If they advise you to add an additional ExpressRoute
circuit, create an Azure support request (including a request to get authorization information to connect to the new
circuit). Before making the request, you must define the address space used on the virtual networks. SAP HANA on
Microsoft Service Management can then provide authorization.
When the new circuit is created, and the SAP HANA on Microsoft Service Management configuration is complete,
you receive a notification with the information you need to proceed. You are not able to connect Azure virtual
networks to this additional circuit if they are already connected to another SAP HANA on Azure (large instance)
ExpressRoute circuit in the same Azure region.

Delete a subnet
To remove a virtual network subnet, you can use the Azure portal, PowerShell, or the Azure CLI. If your Azure
virtual network IP address range or address space was an aggregated range, there is no follow up for you with
Microsoft. (Note, however, that the virtual network is still propagating the BGP route address space that includes
the deleted subnet.) You might have defined the Azure virtual network address range or address space as multiple
IP address ranges, of which one was assigned to your deleted subnet. Be sure to delete that from your virtual
network address space. Then inform SAP HANA on Microsoft Service Management to remove it from the ranges
that SAP HANA on Azure (large instances) is allowed to communicate with.
For more information, see Delete a subnet.

Delete a virtual network


For information, see Delete a virtual network.
SAP HANA on Microsoft Service Management removes the existing authorizations on the SAP HANA on Azure
(large instances) ExpressRoute circuit. It also removes the Azure virtual network IP address range or address space
for the communication with HANA large instances.
After you remove the virtual network, open an Azure support request to provide the IP address space range or
ranges to be removed.
To ensure you remove everything, delete the ExpressRoute connection, the virtual network gateway, the virtual
network gateway public IP, and the virtual network.
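As an illustrative sketch with the Azure CLI, using the example names from the PowerShell snippets earlier in this document (adjust them to your own deployment), the cleanup could look like this:

# Delete the ExpressRoute connection, then the gateway, its public IP, and finally the virtual network
az network vpn-connection delete --resource-group SAP-East-Coast --name VNet01GWConnection
az network vnet-gateway delete --resource-group SAP-East-Coast --name VNet01GW
az network public-ip delete --resource-group SAP-East-Coast --name VNet01GWPIP
az network vnet delete --resource-group SAP-East-Coast --name VNet01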

Delete an ExpressRoute circuit


To remove an additional SAP HANA on Azure (large instances) ExpressRoute circuit, open an Azure support request
with SAP HANA on Microsoft Service Management. Request that the circuit be deleted. Within the Azure
subscription, you may delete or keep the virtual network, as necessary. However, you must delete the connection
between the HANA large instances ExpressRoute circuit and the linked virtual network gateway.

Next steps
How to install and configure SAP HANA (large instances) on Azure
How to install and configure SAP HANA (Large Instances) on Azure

Before reading this article, get familiar with HANA Large Instances common terms and the HANA Large Instances
SKUs.
The installation of SAP HANA is your responsibility. You can start installing a new SAP HANA on Azure (Large
Instances) server after you establish the connectivity between your Azure virtual networks and the HANA Large
Instance unit(s).

NOTE
Per SAP policy, the installation of SAP HANA must be performed by a person who's passed the Certified SAP Technology
Associate exam, SAP HANA Installation certification exam, or who is an SAP-certified system integrator (SI).

When you're planning to install HANA 2.0, see SAP support note #2235581 - SAP HANA: Supported operating
systems to make sure that the OS is supported with the SAP HANA release that you're installing. The
supported OS for HANA 2.0 is more restrictive than the supported OS for HANA 1.0. Also check whether the OS
release you're interested in is listed as supported for the particular HLI unit on this published list.
Select the unit to see the details, including the supported OS list for that unit.
Validate the following before you begin the HANA installation:
HLI unit(s)
Operating system configuration
Network configuration
Storage configuration

Validate the HANA Large Instance unit(s)


After you receive the HANA Large Instance unit from Microsoft, validate the following settings and adjust as
necessary.
The first step after you receive the HANA Large Instance and establish access and connectivity to the instances, is
to check in Azure portal whether the instance(s) are showing up with the correct SKUs and OS. Read Azure HANA
Large Instances control through Azure portal for the steps necessary to perform the checks.
The second step after you receive the HANA Large Instance and establish access and connectivity to the
instances, is to register the OS of the instance with your OS provider. This step includes registering your SUSE
Linux OS in an instance of SUSE SMT that's deployed in a VM in Azure.
The HANA Large Instance unit can connect to this SMT instance. (For more information, see How to set up SMT
server for SUSE Linux). Alternatively, your Red Hat OS needs to be registered with the Red Hat Subscription
Manager that you need to connect to. For more information, see the remarks in What is SAP HANA on Azure
(Large Instances)?.
This step is necessary for patching the OS, which is the responsibility of the customer. For SUSE, find the
documentation for installing and configuring SMT on this page about SMT installation.
The third step is to check for new patches and fixes of the specific OS release/version. Verify that the patch level
of the HANA Large Instance is in the latest state. There might be cases where the latest patches aren't included.
After taking over a HANA Large Instance unit, it's mandatory to check whether patches need to be applied.
The fourth step is to check out the relevant SAP notes for installing and configuring SAP HANA on the specific
OS release/version. Due to changing recommendations or changes to SAP notes or configurations that are
dependent on individual installation scenarios, Microsoft won't always be able to configure a HANA Large Instance
unit perfectly.
Therefore, it's mandatory for you as a customer to read the SAP notes related to SAP HANA for your exact Linux
release. Also check the configurations of the OS release/version and apply the configuration settings if you haven't
already.
Specifically, check the following parameters and adjust them if necessary:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 65536 16777216 16777216
net.ipv4.tcp_wmem = 65536 16777216 16777216
Starting with SLES12 SP1 and RHEL 7.2, these parameters must be set in a configuration file in the /etc/sysctl.d
directory. For example, a configuration file with the name 91-NetApp-HANA.conf must be created. For older SLES
and RHEL releases, these parameters must be set in /etc/sysctl.conf.
For all RHEL releases starting with RHEL 6.3, keep in mind:
The sunrpc.tcp_slot_table_entries = 128 parameter must be set in /etc/modprobe.d/sunrpc-local.conf. If the file
does not exist, you need to create it first by adding the entry:
options sunrpc tcp_max_slot_table_entries=128
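As a minimal sketch of how these settings can be applied on SLES 12 SP1 or later and RHEL 7.2 or later (the file names follow the guidance above; run the commands as root):

# Create the sysctl configuration file with the network parameters listed above
cat <<EOF > /etc/sysctl.d/91-NetApp-HANA.conf
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 65536 16777216 16777216
net.ipv4.tcp_wmem = 65536 16777216 16777216
EOF

# RHEL 6.3 and later: create the sunrpc module option file described above
cat <<EOF > /etc/modprobe.d/sunrpc-local.conf
options sunrpc tcp_max_slot_table_entries=128
EOF

# Apply the sysctl settings without a reboot; the module option takes effect when the sunrpc module is (re)loaded
sysctl --system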
The fifth step is to check the system time of your HANA Large Instance unit. The instances are deployed with a
system time zone. This time zone represents the location of the Azure region in which the HANA Large Instance
stamp is located. You can change the system time or time zone of the instances you own.
If you order more instances into your tenant, you need to adapt the time zone of the newly delivered instances.
Microsoft has no insight into the system time zone you set up with the instances after the handover. Thus, newly
deployed instances might not be set in the same time zone as the one you changed to. It's your responsibility as
a customer to adapt the time zone of the instance(s) that were handed over, if necessary.
The sixth step is to check /etc/hosts. As the blades get handed over, they have different IP addresses that are
assigned for different purposes. Check the /etc/hosts file. When units are added into an existing tenant, don't expect
the /etc/hosts of the newly deployed systems to be maintained correctly with the IP addresses of the systems that were
delivered earlier. It's your responsibility as a customer to make sure that a newly deployed instance can interact with and
resolve the names of the units that you deployed earlier in your tenant.

Operating system
The swap space of the delivered OS image is set to 2 GB according to the SAP support note #1999997 - FAQ: SAP
HANA memory. As a customer, if you want a different setting, you must set it yourself.
SUSE Linux Enterprise Server 12 SP1 for SAP applications is the distribution of Linux that's installed for SAP HANA
on Azure (Large Instances). This particular distribution provides SAP-specific capabilities "out of the box"
(including pre-set parameters for running SAP on SLES effectively).
See Resource library/white papers on the SUSE website and SAP on SUSE on the SAP Community Network (SCN)
for several useful resources related to deploying SAP HANA on SLES (including the set-up of high availability,
security hardening that's specific to SAP operations, and more).
The following are additional useful SAP on SUSE-related links:
SAP HANA on SUSE Linux site
Best practices for SAP: Enqueue replication – SAP NetWeaver on SUSE Linux Enterprise 12
ClamSAP – SLES virus protection for SAP (including SLES 12 for SAP applications)
The following are SAP support notes that are applicable to implementing SAP HANA on SLES 12:
SAP support note #1944799 – SAP HANA guidelines for SLES operating system installation
SAP support note #2205917 – SAP HANA DB recommended OS settings for SLES 12 for SAP applications
SAP support note #1984787 – SUSE Linux Enterprise Server 12: installation notes
SAP support note #171356 – SAP software on Linux: General information
SAP support note #1391070 – Linux UUID solutions
Red Hat Enterprise Linux for SAP HANA is another offer for running SAP HANA on HANA Large Instances.
Releases of RHEL 7.2 and 7.3 are available and supported.
The following are additional useful SAP on Red Hat-related links:
SAP HANA on Red Hat Linux site.
Following are SAP support notes that are applicable to implementing SAP HANA on Red Hat:
SAP support note #2009879 - SAP HANA guidelines for Red Hat Enterprise Linux (RHEL) operating system
SAP support note #2292690 - SAP HANA DB: Recommended OS settings for RHEL 7
SAP support note #1391070 – Linux UUID solutions
SAP support note #2228351 - Linux: SAP HANA Database SPS 11 revision 110 (or higher) on RHEL 6 or SLES
11
SAP support note #2397039 - FAQ: SAP on RHEL
SAP support note #2002167 - Red Hat Enterprise Linux 7.x: Installation and upgrade
Time synchronization
SAP applications that are built on the SAP NetWeaver architecture are sensitive to time differences for the various
components that comprise the SAP system. SAP ABAP short dumps with the error title of
ZDATE_LARGE_TIME_DIFF are probably familiar. That's because these short dumps appear when the system time
of different servers or VMs is drifting too far apart.
For SAP HANA on Azure (Large Instances), time synchronization that's done in Azure doesn't apply to the compute
units in the Large Instance stamps. This synchronization is not applicable for running SAP applications in native
Azure VMs, because Azure ensures that a system's time is properly synchronized.
As a result, you must set up a separate time server that can be used by SAP application servers that are running
on Azure VMs and by the SAP HANA database instances that are running on HANA Large Instances. The storage
infrastructure in Large Instance stamps is time-synchronized with NTP servers.
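As an illustration only, assuming ntpd is the time service on your OS release and that you run a reachable time server with the hypothetical name ntp1.contoso.local, pointing a HANA Large Instance unit or an SAP application server VM at it could look like this sketch:

# Add your own time server to the NTP configuration (server name is a placeholder)
echo "server ntp1.contoso.local iburst" >> /etc/ntp.conf

# Enable and restart the NTP service, then verify that the server is reachable
systemctl enable ntpd
systemctl restart ntpd
ntpq -p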

Networking
We assume that you followed the recommendations in designing your Azure virtual networks and in connecting
those virtual networks to the HANA Large Instances, as described in the following documents:
SAP HANA (Large Instance) overview and architecture on Azure
SAP HANA (Large Instances) infrastructure and connectivity on Azure
There are some details worth mentioning about the networking of the single units. Every HANA Large Instance
unit comes with two or three IP addresses that are assigned to two or three NIC ports. Three IP addresses are used
in HANA scale-out configurations and the HANA system replication scenario. One of the IP addresses that's
assigned to the NIC of the unit is out of the server IP pool that's described in SAP HANA (Large Instances)
overview and architecture on Azure.
For more information about Ethernet details for your architecture, see the HLI supported scenarios.

Storage
The storage layout for SAP HANA on Azure (Large Instances) is configured by SAP HANA on Azure
service management through SAP recommended guidelines. These guidelines are documented in the SAP HANA
storage requirements white paper.
The rough sizes of the different volumes with the different HANA Large Instances SKUs are documented in SAP
HANA (Large Instances) overview and architecture on Azure.
The naming conventions of the storage volumes (storage usage, mount name, and volume name) are as follows:
HANA data: mount name /hana/data/SID/mnt0000<m>, volume name Storage IP:/hana_data_SID_mnt00001_tenant_vol
HANA log: mount name /hana/log/SID/mnt0000<m>, volume name Storage IP:/hana_log_SID_mnt00001_tenant_vol
HANA log backup: mount name /hana/log/backups, volume name Storage IP:/hana_log_backups_SID_mnt00001_tenant_vol
HANA shared: mount name /hana/shared/SID, volume name Storage IP:/hana_shared_SID_mnt00001_tenant_vol/shared
usr/sap: mount name /usr/sap/SID, volume name Storage IP:/hana_shared_SID_mnt00001_tenant_vol/usr_sap
SID is the HANA instance System ID.


Tenant is an internal enumeration of operations when deploying a tenant.
/hana/shared and /usr/sap share the same volume. The nomenclature of the mount points includes the system ID of the HANA
instances as well as the mount number. In scale-up deployments, there is only one mount, such as mnt00001. In
scale-out deployments, on the other hand, you see as many mounts as you have worker and master nodes.
For scale-out environments, data, log, and log backup volumes are shared and attached to each node in the scale-
out configuration. For configurations with multiple SAP instances, a different set of volumes is created and
attached to the HANA Large Instance unit. For storage layout details for your scenario, see HLI supported
scenarios.
When you look at a HANA Large Instance unit, you realize that the units come with generous disk volume for
HANA/data, and that there is a volume HANA/log/backup. The reason that we made the HANA/data so large is
that the storage snapshots we offer you as a customer are using the same disk volume. The more storage
snapshots you perform, the more space is consumed by snapshots in your assigned storage volumes.
The HANA/log/backup volume is not supposed to be the volume for database backups. It is sized to be used as the
backup volume for the HANA transaction log backups. For more information, see SAP HANA (Large Instances)
high availability and disaster recovery on Azure.
In addition to the storage that's provided, you can purchase additional storage capacity in 1-TB increments. This
additional storage can be added as new volumes to a HANA Large Instance.
During onboarding with SAP HANA on Azure service management, the customer specifies a user ID (UID) and
group ID (GID) for the sidadm user and sapsys group (for example: 1000,500). During installation of the SAP
HANA system, you must use these same values. If you want to deploy multiple HANA instances on a unit,
you get multiple sets of volumes (one set for each instance). As a result, at deployment time you need to define:
The SID of the different HANA instances (sidadm is derived from it).
The memory sizes of the different HANA instances. The memory size per instance defines the size of the
volumes in each individual volume set.
Based on storage provider recommendations, the following mount options are configured for all mounted
volumes (excludes boot LUN):
nfs rw, vers=4, hard, timeo=600, rsize=1048576, wsize=1048576, intr, noatime, lock 0 0
These mount points are configured in /etc/fstab.
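For illustration, an /etc/fstab entry for the HANA data volume of an instance with the SID SC1 might look like the following sketch; the storage IP address and SID are placeholders, and the options correspond to the mount options listed above:

# Example /etc/fstab entry for the HANA data volume (storage IP and SID are placeholders)
172.16.1.4:/hana_data_SC1_mnt00001_tenant_vol /hana/data/SC1/mnt00001 nfs rw,vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,lock 0 0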

On an S72m HANA Large Instance unit, running the command df -h shows these volumes mounted.

The storage controller and nodes in the Large Instance stamps are synchronized to NTP servers. When you
synchronize the SAP HANA on Azure (Large Instances) units and Azure VMs against an NTP server, there should
be no significant time drift between the infrastructure and the compute units in Azure or Large Instance stamps.
To optimize SAP HANA to the storage used underneath, set the following SAP HANA configuration parameters:
max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all
For SAP HANA 1.0 versions up to SPS12, these parameters can be set during the installation of the SAP HANA
database, as described in SAP note #2267798 - Configuration of the SAP HANA database.
You can also configure the parameters after the SAP HANA database installation by using the hdbparam
framework.
The storage used in HANA Large Instances has a file size limitation of 16 TB per file. Unlike with the file size
limitations of EXT3 file systems, HANA isn't implicitly aware of the storage limitation enforced by the
HANA Large Instances storage. As a result, HANA doesn't automatically create a new data file when the 16-TB file
size limit is reached. As HANA attempts to grow a file beyond 16 TB, HANA reports errors and the index
server eventually crashes.

IMPORTANT
To prevent HANA from trying to grow data files beyond the 16 TB file size limit of HANA Large Instance storage, you
need to set the following parameters in the SAP HANA global.ini configuration file:
datavolume_striping=true
datavolume_striping_size_gb = 15000
See also SAP note #2400005
Be aware of SAP note #2631285

With SAP HANA 2.0, the hdbparam framework has been deprecated. As a result, the parameters must be set by
using SQL commands. For more information, see SAP note #2399079: Elimination of hdbparam in HANA 2.
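For illustration, the parameters could be set with SQL statements, for example through hdbsql; the hdbuserstore key SYSKEY is a placeholder, and the section names ([fileio] and [persistence]) follow the SAP notes referenced above:

# Set the file I/O parameters recommended above in global.ini at the SYSTEM layer
hdbsql -U SYSKEY "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('fileio','max_parallel_io_requests') = '128' WITH RECONFIGURE"
hdbsql -U SYSKEY "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('fileio','async_read_submit') = 'on' WITH RECONFIGURE"
hdbsql -U SYSKEY "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('fileio','async_write_submit_active') = 'on' WITH RECONFIGURE"
hdbsql -U SYSKEY "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('fileio','async_write_submit_blocks') = 'all' WITH RECONFIGURE"

# Limit data file growth to stay below the 16 TB file size limit described above
hdbsql -U SYSKEY "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','datavolume_striping') = 'true' WITH RECONFIGURE"
hdbsql -U SYSKEY "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','datavolume_striping_size_gb') = '15000' WITH RECONFIGURE"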
Refer to HLI supported scenarios to learn more about the storage layout for your architecture.
Next steps
Refer to HANA Installation on HLI
Install HANA on SAP HANA on Azure (Large Instances)

To install HANA on SAP HANA on Azure (Large Instances), you must first do the following:
You provide Microsoft with all the data to deploy for you on an SAP HANA Large Instance.
You receive the SAP HANA Large Instance from Microsoft.
You create an Azure virtual network that is connected to your on-premises network.
You connect the ExpressRoute circuit for HANA Large Instances to the same Azure virtual network.
You install an Azure virtual machine that you use as a jump box for HANA Large Instances.
You ensure that you can connect from the jump box to your HANA Large Instance unit, and vice versa.
You check whether all the necessary packages and patches are installed.
You read the SAP notes and documentation about HANA installation on the operating system you're using.
Make sure that the HANA release of choice is supported on the operating system release.
The next section shows an example of downloading the HANA installation packages to the jump box virtual
machine. In this case, the operating system is Windows.

Download the SAP HANA installation bits


The HANA Large Instance units aren't directly connected to the internet. You can't directly download the installation
packages from SAP to the HANA Large Instance unit. Instead, you download the packages to the jump
box virtual machine.
You need an SAP S-user or other user, which allows you to access the SAP Marketplace.
1. Sign in, and go to SAP Service Marketplace. Select Download Software > Installations and Upgrade >
By Alphabetical Index. Then, under H, select SAP HANA Platform Edition > SAP HANA Platform
Edition 2.0 > Installation. Download the installation files.

2. In this example, we downloaded SAP HANA 2.0 installation packages. On the Azure jump box virtual
machine, expand the self-extracting archives into a directory.
3. As the archives are extracted, copy the directory created by the extraction (in this case, 51052030) from the
jump box to the /hana/shared volume of the HANA Large Instance unit, into a directory you created there
(see the sketch after the following note).

IMPORTANT
Don't copy the installation packages into the root or boot LUN, because space is limited and needs to be used by
other processes as well.
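As a sketch, assuming an SSH-capable jump box and the hypothetical host name hli-unit1 for the HANA Large Instance unit, copying the extracted directory into a target directory that you created under /hana/shared could look like this:

# Copy the extracted installation directory to the HANA Large Instance unit (host name and target directory are placeholders)
scp -r 51052030 root@hli-unit1:/hana/shared/install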

Install SAP HANA on the HANA Large Instance unit


In order to install SAP HANA, sign in as user root. Only root has enough permissions to install SAP HANA. Set
permissions on the directory you copied over into /hana/shared.

chmod -R 744 <Installation bits folder>

If you want to install SAP HANA by using the graphical user interface setup, the gtk2 package needs to be installed
on HANA Large Instances. To check whether it is installed, run the following command:

rpm -qa | grep gtk2

(In later steps, we show the SAP HANA setup with the graphical user interface.)
Go into the installation directory, and navigate into the subdirectory HDB_LCM_LINUX_X86_64.
Out of that directory, start:

./hdblcmgui

At this point, you progress through a sequence of screens in which you provide the data for the installation. In this
example, we are installing the SAP HANA database server and the SAP HANA client components. Therefore, our
selection is SAP HANA Database.

On the next screen, select Install New System.

Next, select among several additional components that you can install.
Here, we choose the SAP HANA Client and the SAP HANA Studio. We also install a scale-up instance. Then choose
Single-Host System.

Next, provide some data.


IMPORTANT
As HANA System ID (SID), you must provide the same SID as you provided Microsoft when you ordered the HANA Large
Instance deployment. Choosing a different SID causes the installation to fail, due to access permission problems on the
different volumes.

For the installation path, use the /hana/shared directory. In the next step, you provide the locations for the HANA
data files and the HANA log files.

NOTE
The SID you specified when you defined system properties (two screens ago) should match the SID of the mount points. If
there is a mismatch, go back and adjust the SID to the value you have on the mount points.
In the next step, review the host name and correct it if necessary.

In the next step, you also need to retrieve data you gave to Microsoft when you ordered the HANA Large Instance
deployment.

IMPORTANT
Provide the same System Administrator User ID and ID of User Group that you provided to Microsoft when you ordered
the unit deployment. Otherwise, the installation of SAP HANA on the HANA Large Instance unit fails.

The next two screens are not shown here. They enable you to provide the password for the SYSTEM user of the
SAP HANA database, and the password for the sapadm user. The latter is used for the SAP Host Agent that gets
installed as part of the SAP HANA database instance.
After defining the password, you see a confirmation screen. Check all the data listed, and continue with the
installation. You then reach a progress screen that documents the installation progress.
When the installation finishes, the final screen confirms that the installation is complete.
The SAP HANA instance should now be up and running, and ready for usage. You should be able to connect to it
from SAP HANA Studio. Also make sure that you check for and apply the latest updates.

Next steps
SAP HANA Large Instances high availability and disaster recovery on Azure
SAP HANA Large Instances high availability and disaster recovery on Azure

IMPORTANT
This documentation isn't a replacement for the SAP HANA administration documentation or SAP Notes. It's expected that
the reader has a solid understanding of and expertise in SAP HANA administration and operations, especially with the
topics of backup, restore, high availability, and disaster recovery.

It's important that you exercise steps and processes taken in your environment and with your HANA versions
and releases. Some processes described in this documentation are simplified for a better general understanding
and are not meant to be used as detailed steps for eventual operation handbooks. If you want to create
operation handbooks for your configurations, you need to test and exercise your processes and document the
processes related to your specific configurations.
High availability and disaster recovery (DR) are crucial aspects of running your mission-critical SAP HANA on
the Azure (Large Instances) server. It's important to work with SAP, your system integrator, or Microsoft to
properly architect and implement the right high-availability and disaster recovery strategies. It's also important
to consider the recovery point objective (RPO) and the recovery time objective (RTO), which are specific to your
environment.
Microsoft supports some SAP HANA high-availability capabilities with HANA Large Instances. These capabilities
include:
Storage replication : The storage system's ability to replicate all data to another HANA Large Instance
stamp in another Azure region. SAP HANA operates independently of this method. This functionality is the
default disaster recovery mechanism offered for HANA Large Instances.
HANA system replication : The replication of all data in SAP HANA to a separate SAP HANA system. The
recovery time objective is minimized through data replication at regular intervals. SAP HANA supports
asynchronous, synchronous in-memory, and synchronous modes. Synchronous mode is used only for SAP
HANA systems that are within the same datacenter or less than 100 km apart. With the current design of
HANA Large Instance stamps, HANA system replication can be used for high availability within one region
only. HANA system replication requires a third-party reverse proxy or routing component for disaster
recovery configurations into another Azure region.
Host auto-failover: A local fault-recovery solution for SAP HANA that's an alternative to HANA system
replication. You configure one or more standby SAP HANA nodes in scale-out mode, and SAP HANA
automatically fails over to a standby node if the master node becomes unavailable.
SAP HANA on Azure (Large Instances) is offered in two Azure regions in four geopolitical areas (US, Australia,
Europe, and Japan). Two regions within a geopolitical area that host HANA Large Instance stamps are connected
to separate dedicated network circuits. These are used for replicating storage snapshots to provide disaster
recovery methods. The replication is not established by default but is set up for customers who order disaster
recovery functionality. Storage replication is dependent on the usage of storage snapshots for HANA Large
Instances. It's not possible to choose an Azure region as a DR region that is in a different geopolitical area.
The following list shows the currently supported high availability and disaster recovery methods and
combinations:

Scenario supported in HANA Large Instances: Single node
High availability option: Not available.
Disaster recovery option: Dedicated DR setup. Multipurpose DR setup.

Scenario supported in HANA Large Instances: Host auto-failover: Scale-out (with or without standby), including 1+1
High availability option: Possible with the standby taking the active role. HANA controls the role switch.
Disaster recovery option: Dedicated DR setup. Multipurpose DR setup. DR synchronization by using storage replication.
Comments: HANA volume sets are attached to all the nodes. The DR site must have the same number of nodes.

Scenario supported in HANA Large Instances: HANA system replication
High availability option: Possible with primary or secondary setup. Secondary moves to primary role in a failover case. HANA system replication and OS control failover.
Disaster recovery option: Dedicated DR setup. Multipurpose DR setup. DR synchronization by using storage replication. DR by using HANA system replication is not yet possible without third-party components.
Comments: A separate set of disk volumes is attached to each node. Only the disk volumes of the secondary replica in the production site get replicated to the DR location. One set of volumes is required at the DR site.

A dedicated DR setup is where the HANA Large Instance unit in the DR site isn't used for running any other
workload or non-production system. The unit is passive and is deployed only if a disaster failover is executed.
However, this setup isn't the preferred choice for many customers.
Refer to HLI supported scenarios to learn about the storage layout and Ethernet details for your architecture.

NOTE
SAP HANA MCOD deployments (multiple HANA Instances on one unit) as overlaying scenarios work with the HA and DR
methods listed in the table. An exception is the use of HANA System Replication with an automatic failover cluster based
on Pacemaker. Such a case only supports one HANA instance per unit. For SAP HANA MDC deployments, only non-
storage-based HA and DR methods work if more than one tenant is deployed. With one tenant deployed, all methods
listed are valid.

A multipurpose DR setup is where the HANA Large Instance unit on the DR site runs a non-production
workload. In case of disaster, shut down the non-production system, mount the storage-replicated (additional)
volume sets, and then start the production HANA instance. Most customers who use the HANA Large Instance
disaster recovery functionality use this configuration.
You can find more information on SAP HANA high availability in the following SAP articles:
SAP HANA High Availability Whitepaper
SAP HANA Administration Guide
SAP HANA Academy Video on SAP HANA System Replication
SAP Support Note #1999880 – FAQ on SAP HANA System Replication
SAP Support Note #2165547 – SAP HANA Back up and Restore within SAP HANA System Replication
Environment
SAP Support Note #1984882 – Using SAP HANA System Replication for Hardware Exchange with
Minimum/Zero Downtime

Network considerations for disaster recovery with HANA Large Instances

To take advantage of the disaster recovery functionality of HANA Large Instances, you need to design network
connectivity to the two Azure regions. You need an Azure ExpressRoute circuit connection from on-premises in
your main Azure region, and another circuit connection from on-premises to your disaster recovery region. This
measure covers a situation in which there's a problem in an Azure region, including a Microsoft Enterprise Edge
Router (MSEE) location.
As a second measure, you can connect all Azure virtual networks that connect to SAP HANA on Azure (Large
Instances) in one region to an ExpressRoute circuit that connects HANA Large Instances in the other region.
With this cross connect, services running on an Azure virtual network in Region 1 can connect to HANA Large
Instance units in Region 2, and the other way around. This measure addresses a case in which only one of the
MSEE locations that connects to your on-premises location with Azure goes offline.
The following graphic illustrates a resilient configuration for disaster recovery cases:

Other requirements with HANA Large Instances storage replication for disaster recovery

In addition to the preceding requirements for a disaster recovery setup with HANA Large Instances, you must:
Order SAP HANA on Azure (Large Instances) SKUs of the same size as your production SKUs and deploy
them in the disaster recovery region. In the current customer deployments, these instances are used to
run non-production HANA instances. These configurations are referred to as multipurpose DR setups.
Order additional storage on the DR site for each of your SAP HANA on Azure (Large Instances) SKUs that
you want to recover in the disaster recovery site. Buying additional storage lets you allocate the storage
volumes. You can allocate the volumes that are the target of the storage replication from your production
Azure region into the disaster recovery Azure region.
If you have HANA system replication (HSR) set up in the primary region and you use storage-based replication to
the DR site, you must purchase additional storage at the DR site so that the data of both primary and secondary
nodes gets replicated to the DR site.
Next steps
Refer to Backup and restore.
Backup and restore of SAP HANA on HANA Large Instances

IMPORTANT
This article isn't a replacement for the SAP HANA administration documentation or SAP Notes. We expect that you have a
solid understanding of and expertise in SAP HANA administration and operations, especially for backup, restore, high
availability, and disaster recovery. In this article, screenshots from SAP HANA Studio are shown. Content, structure, and the
nature of the screens of SAP administration tools and the tools themselves might change from SAP HANA release to release.

It's important that you exercise steps and processes taken in your environment and with your HANA versions and
releases. Some processes described in this article are simplified for a better general understanding. They aren't
meant to be used as detailed steps for eventual operation handbooks. If you want to create operation handbooks
for your configurations, test and exercise your processes and document the processes related to your specific
configurations.
One of the most important aspects of operating databases is to protect them from catastrophic events. The cause
of these events can be anything from natural disasters to simple user errors.
Backing up a database, with the ability to restore it to any point in time, such as before someone deleted critical
data, enables restoration to a state that's as close as possible to the way it was prior to the disruption.
Two types of backups must be performed to achieve the capability to restore:
Database backups: Full, incremental, or differential backups
Transaction log backups
In addition to full-database backups performed at an application level, you can perform backups with storage
snapshots. Storage snapshots don't replace transaction log backups. Transaction log backups remain important to
restore the database to a certain point in time or to empty the logs from already committed transactions. Storage
snapshots can accelerate recovery by quickly providing a roll-forward image of the database.
SAP HANA on Azure (Large Instances) offers two backup and restore options:
Do it yourself (DIY). After you make sure that there's enough disk space, perform full database and log
backups by using one of the following disk backup methods. You can back up either directly to volumes
attached to the HANA Large Instance units or to NFS shares that are set up in an Azure virtual machine
(VM). In the latter case, customers set up a Linux VM in Azure, attach Azure Storage to the VM, and share
the storage through a configured NFS server in that VM. If you perform the backup against volumes that
directly attach to HANA Large Instance units, copy the backups to an Azure storage account. Do this after
you set up an Azure VM that exports NFS shares that are based on Azure Storage. You can also use either
an Azure Backup vault or Azure cold storage.
Another option is to use a third-party data protection tool to store the backups after they're copied to an
Azure storage account. The DIY backup option also might be necessary for data that you need to store for
longer periods of time for compliance and auditing purposes. In all cases, the backups are copied into NFS
shares represented through a VM and Azure Storage.
Infrastructure backup and restore functionality. You also can use the backup and restore functionality
that the underlying infrastructure of SAP HANA on Azure (Large Instances) provides. This option fulfills the
need for backups and fast restores. The rest of this section addresses the backup and restore functionality
that's offered with HANA Large Instances. This section also covers the relationship that backup and restore
have to the disaster recovery functionality offered by HANA Large Instances.

NOTE
The snapshot technology that's used by the underlying infrastructure of HANA Large Instances has a dependency on SAP
HANA snapshots. At this point, SAP HANA snapshots don't work in conjunction with multiple tenants of SAP HANA
multitenant database containers. If only one tenant is deployed, SAP HANA snapshots do work and you can use this
method.

Use storage snapshots of SAP HANA on Azure (Large Instances)


The storage infrastructure underlying SAP HANA on Azure (Large Instances) supports storage snapshots of
volumes. Both backup and restoration of volumes is supported, with the following considerations:
Instead of full database backups, storage volume snapshots are taken on a frequent basis.
When a snapshot is triggered over the /hana/data and /hana/shared volumes (which include /usr/sap), the
snapshot technology initiates an SAP HANA snapshot before it runs the storage snapshot. This SAP HANA
snapshot is the setup point for eventual log restorations after recovery of the storage snapshot. For a HANA
snapshot to be successful, you need an active HANA instance. In an HSR scenario, a storage snapshot isn't
supported on a current secondary node where a HANA snapshot can't be performed.
After the storage snapshot runs successfully, the SAP HANA snapshot is deleted.
Transaction log backups are taken frequently and stored in the /hana/logbackups volume or in Azure. You can
trigger the /hana/logbackups volume that contains the transaction log backups to take a snapshot separately.
In that case, you don't need to run an HANA snapshot.
If you must restore a database to a certain point in time, for a production outage, request that Microsoft Azure
Support or SAP HANA on Azure restore to a certain storage snapshot. An example is a planned restoration of a
sandbox system to its original state.
The SAP HANA snapshot that's included in the storage snapshot is an offset point for applying transaction log
backups that ran and were stored after the storage snapshot was taken.
These transaction log backups are taken to restore the database back to a certain point in time.
You can perform storage snapshots that target three classes of volumes:
A combined snapshot over /hana/data and /hana/shared, which includes /usr/sap. This snapshot requires the
creation of an SAP HANA snapshot as preparation for the storage snapshot. The SAP HANA snapshot ensures
that the database is in a consistent state from a storage point of view. For the restore process, that's a point to
set up on.
A separate snapshot over /hana/logbackups.
An operating system partition.
To get the latest snapshot scripts and documentation, see GitHub. When you download the snapshot script
package from GitHub, you get three files. One of the files is documented in a PDF for the functionality provided.
After you download the tool set, follow the instructions in "Get the snapshot tools."

Storage snapshot considerations


NOTE
Storage snapshots consume storage space that's allocated to the HANA Large Instance units. Consider the following aspects
of scheduling storage snapshots and how many storage snapshots to keep.
The specific mechanics of storage snapshots for SAP HANA on Azure (Large Instances) include:
A specific storage snapshot at the point in time when it's taken consumes little storage.
As data content changes and the content in SAP HANA data files change on the storage volume, the snapshot
needs to store the original block content and the data changes.
As a result, the storage snapshot increases in size. The longer the snapshot exists, the larger the storage
snapshot becomes.
The more changes that are made to the SAP HANA database volume over the lifetime of a storage snapshot,
the larger the space consumption of the storage snapshot.
SAP HANA on Azure (Large Instances) comes with fixed volume sizes for the SAP HANA data and log volumes.
Performing snapshots of those volumes eats into your volume space. You need to:
Determine when to schedule storage snapshots.
Monitor the space consumption of the storage volumes.
Manage the number of snapshots that you store.
You can disable the storage snapshots when you either import masses of data or perform other significant
changes to the HANA database.
The following sections provide information for performing these snapshots and include general
recommendations:
Although the hardware can sustain 255 snapshots per volume, you want to stay well below this number. The
recommendation is 250 or less.
Before you perform storage snapshots, monitor and keep track of free space.
Lower the number of storage snapshots based on free space. You can lower the number of snapshots that you
keep, or you can extend the volumes. You can order additional storage in 1-terabyte units.
During activities such as moving data into SAP HANA with SAP platform migration tools (R3load) or restoring
SAP HANA databases from backups, disable storage snapshots on the /hana/data volume.
During larger reorganizations of SAP HANA tables, avoid storage snapshots if possible.
Storage snapshots are a prerequisite to taking advantage of the disaster recovery capabilities of SAP HANA on
Azure (Large Instances).

Prerequisites for using self-service storage snapshots


To ensure that the snapshot script runs successfully, make sure that Perl is installed on the Linux operating
system on the HANA Large Instances server. Perl comes preinstalled on your HANA Large Instance unit. To check
the Perl version, use the following command:
perl -v

Set up storage snapshots


To set up storage snapshots with HANA Large Instances, follow these steps.
1. Make sure that Perl is installed on the Linux operating system on the HANA Large Instances server.
2. Modify the /etc/ssh/ssh_config to add the line MACs hmac-sha1.
3. Create an SAP HANA backup user account on the master node for each SAP HANA instance you run, if
applicable.
4. Install the SAP HANA HDB client on all the SAP HANA Large Instances servers.
5. On the first SAP HANA Large Instances server of each region, create a public key to access the underlying
storage infrastructure that controls snapshot creation.
6. Copy the scripts and configuration file from GitHub to the location of hdbsql in the SAP HANA installation.
7. Modify the HANABackupDetails.txt file as necessary for the appropriate customer specifications.
Get the latest snapshot scripts and documentation from GitHub. For the steps listed previously, see Microsoft
snapshot tools for SAP HANA on Azure.
Consideration for MCOD scenarios
If you run an MCOD scenario with multiple SAP HANA instances on one HANA Large Instance unit, you have
separate storage volumes provisioned for each of the SAP HANA instances. For more information on MDC and
other considerations, see "Important things to remember" in Microsoft snapshot tools for SAP HANA on Azure.
Step 1: Install the SAP HANA HDB client
The Linux operating system installed on SAP HANA on Azure (Large Instances) includes the folders and scripts
necessary to run SAP HANA storage snapshots for backup and disaster recovery purposes. Check for more recent
releases in GitHub.
It's your responsibility to install the SAP HANA HDB client on the HANA Large Instance units while you install SAP
HANA.
Step 2: Change the /etc/ssh/ssh_config
This step is described in "Enable communication with storage" in Microsoft snapshot tools for SAP HANA on
Azure.
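A minimal sketch of this change (run as root on the HANA Large Instances server):

# Append the required MAC algorithm to the SSH client configuration
echo "MACs hmac-sha1" >> /etc/ssh/ssh_config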
Step 3: Create a public key
To enable access to the storage snapshot interfaces of your HANA Large Instance tenant, establish a sign-in
procedure through a public key.
On the first SAP HANA on Azure (Large Instances) server in your tenant, create a public key to access the storage
infrastructure. With a public key, a password isn't required to sign in to the storage snapshot interfaces. You also
don't need to maintain password credentials with a public key.
To generate a public key, see "Enable communication with storage" in Microsoft snapshot tools for SAP HANA on
Azure.
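A typical key pair generation could look like the following sketch; the file name is illustrative, and the exact naming and placement of the key are defined in the snapshot tools documentation:

# Generate an RSA key pair without a passphrase (file name is illustrative)
ssh-keygen -t rsa -b 2048 -N "" -f /root/.ssh/id_rsa_hli_storage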
Step 4: Create an SAP HANA user account
To start the creation of SAP HANA snapshots, create a user account in SAP HANA that the storage snapshot scripts
can use. Create an SAP HANA user account within SAP HANA Studio for this purpose. The user must be created
under the SYSTEMDB and not under the SID database for MDC. In the single container environment, the user is
created in the tenant database. This account must have Backup Admin and Catalog Read privileges.
To set up and use a user account, see "Enable communication with SAP HANA" in GitHub.
Step 5: Authorize the SAP HANA user account
In this step, you authorize the SAP HANA user account that you created so that the scripts don't need to submit
passwords at runtime. The SAP HANA command hdbuserstore enables the creation of an SAP HANA user key. The
key is stored on one or more SAP HANA nodes. The user key lets the user access SAP HANA without having to
manage passwords from within the scripting process. The scripting process is discussed later in this article.

IMPORTANT
Run these configuration commands with the same user context that the snapshot commands are run in. Otherwise, the
snapshot commands won't work properly.
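For illustration, creating such a key with hdbuserstore might look like the following sketch; run it as the same OS user that later runs the snapshot scripts, and note that the key name, host, port, user, and password are placeholders:

# Create a user store key for the backup user (all values are placeholders)
hdbuserstore SET SCADMIN01 hananode1:30013 SCADMIN <password>

# Verify the stored keys
hdbuserstore LIST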
Step 6: Get the snapshot scripts, configure the snapshots, and test the configuration and connectivity
Download the most recent version of the scripts from GitHub. The way the scripts are installed changed with
release 4.1 of the scripts. For more information, see "Enable communication with SAP HANA" in Microsoft
snapshot tools for SAP HANA on Azure.
For the exact sequence of commands, see "Easy installation of snapshot tools (default)" in Microsoft snapshot tools
for SAP HANA on Azure. We recommend the use of the default installation.
To upgrade from version 3.x to 4.1, see "Upgrade an existing install" in Microsoft snapshot tools for SAP HANA on
Azure. To uninstall the 4.1 tool set, see "Uninstallation of the snapshot tools" in Microsoft snapshot tools for SAP
HANA on Azure.
Don't forget to run the steps described in "Complete setup of snapshot tools" in Microsoft snapshot tools for SAP
HANA on Azure.
The purpose of the different scripts and files as they got installed is described in "What are these snapshot tools?"
in Microsoft snapshot tools for SAP HANA on Azure.
Before you configure the snapshot tools, make sure that you also configured HANA backup locations and settings
correctly. For more information, see "SAP HANA Configuration" in Microsoft snapshot tools for SAP HANA on
Azure.
The configuration of the snapshot tool set is described in "Config file - HANABackupCustomerDetails.txt" in
Microsoft snapshot tools for SAP HANA on Azure.
Test connectivity with SAP HANA
After you put all the configuration data into the HANABackupCustomerDetails.txt file, check whether the
configurations are correct for the HANA instance data. Use the script testHANAConnection, which is independent of
an SAP HANA scale-up or scale-out configuration.
For more information, see "Check connectivity with SAP HANA - testHANAConnection" in Microsoft snapshot tools
for SAP HANA on Azure.
Test storage connectivity
The next test step is to check the connectivity to the storage based on the data you put into the
HANABackupCustomerDetails.txt configuration file. Then run a test snapshot. Before you run the
azure_hana_backup command, you must run this test. For the sequence of commands for this test, see "Check
connectivity with storage - testStorageSnapshotConnection" in Microsoft snapshot tools for SAP HANA on Azure.
After a successful sign-in to the storage virtual machine interfaces, the script continues with phase 2 and creates a
test snapshot. The output is shown here for a three-node scale-out configuration of SAP HANA.
If the test snapshot runs successfully with the script, you can schedule the actual storage snapshots. If it isn't
successful, investigate the problems before you move forward. The test snapshot should stay around until the first
real snapshots are done.
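As a sketch of the overall test sequence, and assuming the scripts are started from the directory into which the
tool set was installed (the exact invocation can differ between tool-set versions), the two checks might be run
back to back:

# Check that the HANA user key and instance data in HANABackupCustomerDetails.txt are correct
./testHANAConnection
# Check the sign-in to the storage interfaces and create the test snapshot
./testStorageSnapshotConnection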
Step 7: Perform snapshots
When the preparation steps are finished, you can start to configure and schedule the actual storage snapshots. The
script to be scheduled works with SAP HANA scale-up and scale-out configurations. For periodic and regular
execution of the backup script, schedule the script by using the cron utility.
For the exact command syntax and functionality, see "Perform snapshot backup - azure_hana_backup" in Microsoft
snapshot tools for SAP HANA on Azure.
When the script azure_hana_backup runs, it creates the storage snapshot in the following three phases:
1. It runs an SAP HANA snapshot.
2. It runs a storage snapshot.
3. It removes the SAP HANA snapshot that was created before the storage snapshot ran.
To run the script, call it from the HDB executable folder to which it was copied.
The retention period is administered with the number of snapshots that are submitted as a parameter when you
run the script. The amount of time that's covered by the storage snapshots is a function of the period of execution,
and of the number of snapshots submitted as a parameter when the script runs.
If the number of snapshots that are kept exceeds the number that are named as a parameter in the call of the
script, the oldest storage snapshot of the same label is deleted before a new snapshot runs. The number you give
as the last parameter of the call is the number you can use to control the number of snapshots that are kept. With
this number, you also can control, indirectly, the disk space that's used for snapshots.

Snapshot strategies
The frequency of snapshots for the different types depends on whether you use the HANA Large Instance disaster
recovery functionality. This functionality relies on storage snapshots, which might require special
recommendations for the frequency and execution periods of the storage snapshots.
In the considerations and recommendations that follow, the assumption is that you do not use the disaster
recovery functionality that HANA Large Instances offers. Instead, you use the storage snapshots to have backups
and be able to provide point-in-time recovery for the last 30 days. Given the limitations of the number of
snapshots and space, consider the following requirements:
The recovery time for point-in-time recovery.
The space used.
The recovery point and recovery time objectives for potential recovery from a disaster.
The eventual execution of HANA full-database backups against disks. Whenever a full-database backup against
disks or the backint interface is performed, the execution of the storage snapshots fails. If you plan to run full-
database backups on top of storage snapshots, make sure that the execution of the storage snapshots is
disabled during this time.
The number of snapshots per volume, which is limited to 250.
If you don't use the disaster recovery functionality of HANA Large Instances, the snapshot period is less frequent.
In such cases, perform the combined snapshots on /hana/data and /hana/shared, which includes /usr/sap, in 12-
hour or 24-hour periods. Keep the snapshots for a month. The same is true for the snapshots of the log backup
volume. The execution of SAP HANA transaction log backups against the log backup volume occurs in 5-minute to
15-minute periods.
Scheduled storage snapshots are best performed by using cron. Use the same script for all backups and disaster
recovery needs. Modify the script inputs to match the various requested backup times. These snapshots are all
scheduled differently in cron depending on their execution time. It can be hourly, every 12 hours, daily, or weekly.
The following example shows a cron schedule in /etc/crontab:

00 1-23 * * * ./azure_hana_backup --type=hana --prefix=hourlyhana --frequency=15min --retention=46
10 00 * * * ./azure_hana_backup --type=hana --prefix=dailyhana --frequency=15min --retention=28
00,05,10,15,20,25,30,35,40,45,50,55 * * * * ./azure_hana_backup --type=logs --prefix=regularlogback --frequency=3min --retention=28
22 12 * * * ./azure_hana_backup --type=logs --prefix=dailylogback --frequency=3min --retention=28
30 00 * * * ./azure_hana_backup --type=boot --boottype=TypeI --prefix=dailyboot --frequency=15min --retention=28

In the previous example, an hourly combined snapshot covers the volumes that contain the /hana/data and
/hana/shared/SID, which includes /usr/sap, locations. Use this type of snapshot for a faster point-in-time recovery
within the past two days. There's also a daily snapshot on those volumes. So, you have two days of coverage by
hourly snapshots plus four weeks of coverage by daily snapshots. The transaction log backup volume also is
backed up daily. These backups are kept for four weeks.
As you see in the third line of crontab, the backup of the HANA transaction log is scheduled to run every 5
minutes. The start times of the different cron jobs that run storage snapshots are staggered. In this way, the
snapshots don't run all at once at a certain point in time.
In the following example, you perform a combined snapshot that covers the volumes that contain the /hana/data
and /hana/shared/SID, which includes /usr/sap, locations on an hourly basis. You keep these snapshots for two
days. The snapshots of the transaction log backup volumes run on a 5-minute basis and are kept for four hours. As
before, the backup of the HANA transaction log file is scheduled to run every 5 minutes.
The snapshot of the transaction log backup volume is performed with a 2-minute delay after the transaction log
backup has started. Under normal circumstances, the SAP HANA transaction log backup finishes within those 2
minutes. As before, the volume that contains the boot LUN is backed up once per day by a storage snapshot and is
kept for four weeks.

10 0-23 * * * ./azure_hana_backup --type=hana --prefix=hourlyhana --frequency=15min --retention=48
0,5,10,15,20,25,30,35,40,45,50,55 * * * * ./azure_hana_backup --type=logs --prefix=regularlogback --frequency=3min --retention=28
2,7,12,17,22,27,32,37,42,47,52,57 * * * * ./azure_hana_backup --type=logs --prefix=logback --frequency=3min --retention=48
30 00 * * * ./azure_hana_backup --type=boot --boottype=TypeII --prefix=dailyboot --frequency=15min --retention=28

The following graphic illustrates the sequences of the previous example. The boot LUN is excluded.

SAP HANA performs regular writes against the /hana/log volume to document the committed changes to the
database. On a regular basis, SAP HANA writes a savepoint to the /hana/data volume. As specified in crontab, an
SAP HANA transaction log backup runs every 5 minutes.
You also see that an SAP HANA snapshot runs every hour as a result of triggering a combined storage snapshot
over the /hana/data and /hana/shared/SID volumes. After the HANA snapshot succeeds, the combined storage
snapshot runs. As instructed in crontab, the storage snapshot on the /hana/logbackup volume runs every 5
minutes, around 2 minutes after the HANA transaction log backup.
IMPORTANT
The use of storage snapshots for SAP HANA backups is valuable only when the snapshots are performed in conjunction with
SAP HANA transaction log backups. These transaction log backups need to cover the time periods between the storage
snapshots.

If you've set a commitment to users of a point-in-time recovery of 30 days, you need to:
Access a combined storage snapshot over /hana/data and /hana/shared/SID that's 30 days old, in extreme
cases.
Have contiguous transaction log backups that cover the time between any of the combined storage snapshots.
So, the oldest snapshot of the transaction log backup volume needs to be 30 days old. This isn't the case if you
copy the transaction log backups to another NFS share that's located on Azure Storage. In that case, you might
pull old transaction log backups from that NFS share.
To benefit from storage snapshots and the eventual storage replication of transaction log backups, change the
location to which SAP HANA writes the transaction log backups. You can make this change in HANA Studio.
Although SAP HANA backs up full log segments automatically, specify a log backup interval to be deterministic.
This is especially true when you use the disaster recovery option because you usually want to run log backups
with a deterministic period. In the following case, 15 minutes is set as the log backup interval.
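If you prefer SQL over the HANA Studio dialog, the following hedged sketch shows the equivalent settings. The
backup path is a placeholder, the statements must be run by a user with the INIFILE ADMIN privilege (for example,
SYSTEM), and the value 900 seconds corresponds to the 15-minute interval mentioned above.

# Point the HANA transaction log backups at the log backup volume (path is a placeholder)
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p <SYSTEM-password> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','basepath_logbackup') = '/hana/logbackups/<SID>' WITH RECONFIGURE"
# Set a deterministic log backup interval of 15 minutes (900 seconds)
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p <SYSTEM-password> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','log_backup_timeout_s') = '900' WITH RECONFIGURE"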

You also can choose backups that are more frequent than every 15 minutes. A more frequent setting is often used
in conjunction with disaster recovery functionality of HANA Large Instances. Some customers perform transaction
log backups every 5 minutes.
If the database has never been backed up, the final step is to perform a file-based database backup to create a
single backup entry that must exist within the backup catalog. Otherwise, SAP HANA can't initiate your specified
log backups.
After your first successful storage snapshots run, delete the test snapshot that ran in step 6. For more information,
see "Remove test snapshots - removeTestStorageSnapshot" in Microsoft snapshot tools for SAP HANA on Azure.
Monitor the number and size of snapshots on the disk volume
On a specific storage volume, you can monitor the number of snapshots and the storage consumption of those
snapshots. The ls command doesn't show the snapshot directory or files. The Linux OS command du shows
details about those storage snapshots because they're stored on the same volumes. Use the command with the
following options:
du -sh .snapshot : This option provides a total of all the snapshots within the snapshot directory.
du -h --max-depth=1 .snapshot : This option lists all the snapshots that are saved in the .snapshot folder and the size of each snapshot.
du -hc : This option provides the total size used by all the snapshots.

Use these commands to make sure that the snapshots that are taken and stored don't consume all the storage on
the volumes.
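For example, assuming the SID PRD and the mount point shown here (both placeholders), the checks could look like
this:

# Change into the mount point of the volume you want to inspect
cd /hana/data/PRD/mnt00001
# Total space consumed by all snapshots of this volume
du -sh .snapshot
# Space consumed by each individual snapshot
du -h --max-depth=1 .snapshot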

NOTE
The snapshots of the boot LUN aren't visible with the previous commands.

Get details of snapshots


To get more details on snapshots, use the script azure_hana_snapshot_details . You can run this script in either
location if there's an active server in the disaster recovery location. The script provides the following output,
broken down by each volume that contains snapshots:
The size of total snapshots in a volume
The following details in each snapshot in that volume:
Snapshot name
Create time
Size of the snapshot
Frequency of the snapshot
HANA Backup ID associated with that snapshot, if relevant
For syntax of the command and outputs, see "List snapshots - azure_hana_snapshot_details" in Microsoft snapshot
tools for SAP HANA on Azure.
Reduce the number of snapshots on a server
As previously explained, you can reduce the number of certain labels of snapshots that you store. The last two
parameters of the command to initiate a snapshot are the label and the number of snapshots you want to retain.

./azure_hana_backup --type=hana --prefix=dailyhana --frequency=15min --retention=28

In the previous example, the snapshot label is dailyhana . The number of snapshots with this label to be kept is
28 . As you respond to disk space consumption, you might want to reduce the number of stored snapshots. An
easy way to reduce the number of snapshots to 15, for example, is to run the script with the last parameter set to
15 :

./azure_hana_backup --type=hana --prefix=dailyhana --frequency=15min --retention=15

If you run the script with this setting, the number of snapshots, which includes the new storage snapshot, is 15.
The 15 most recent snapshots are kept, and the 15 older snapshots are deleted.

NOTE
This script reduces the number of snapshots only if there are snapshots more than one hour old. The script doesn't delete
snapshots that are less than one hour old. These restrictions are related to the optional disaster recovery functionality
offered.

If you no longer want to maintain a set of snapshots with the backup prefix dailyhana in the syntax examples, run
the script with 0 as the retention number. All snapshots that match that label are then removed. Removing all
snapshots can affect the capabilities of HANA Large Instances disaster recovery functionality.
A second option to delete specific snapshots is to use the script azure_hana_snapshot_delete . This script is
designed to delete a snapshot or set of snapshots either by using the HANA backup ID as found in HANA Studio or
through the snapshot name itself. Currently, the backup ID is only tied to the snapshots created for the hana
snapshot type. Snapshot backups of the type logs and boot don't perform an SAP HANA snapshot, so there's no
backup ID to be found for those snapshots. If the snapshot name is entered, it looks for all snapshots on the
different volumes that match the entered snapshot name.
For more information on the script, see "Delete a snapshot - azure_hana_snapshot_delete" in Microsoft snapshot
tools for SAP HANA on Azure.
Run the script as user root .

IMPORTANT
If there's data that exists only on the snapshot you plan to delete, after the snapshot is deleted, that data is lost forever.

File-level restore from a storage snapshot


For the snapshot types hana and logs , you can access the snapshots directly on the volumes in the .snapshot
directory. There's a subdirectory for each of the snapshots. Copy each file in the state it was in at the point of the
snapshot from that subdirectory into the actual directory structure.
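As an illustration with hypothetical snapshot and file names, a single-file restore could look like this:

# Change into the mount point of the volume that holds the file (placeholder path)
cd /hana/shared
# Each snapshot appears as a subdirectory of .snapshot
ls .snapshot
# Copy the file, as captured in the chosen snapshot, back into the live directory tree
cp .snapshot/<snapshot_name>/<relative_path_to_file> <relative_path_to_file>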
In the current version of the script, there's no restore script provided for the snapshot restore as self-service.
Snapshot restore can be performed as part of the self-service disaster recovery scripts at the disaster recovery site
during failover. To restore a desired snapshot from the existing available snapshots, you must contact the
Microsoft operations team by opening a service request.

NOTE
Single file restore doesn't work for snapshots of the boot LUN independent of the type of the HANA Large Instance units.
The .snapshot directory isn't exposed in the boot LUN.

Recover to the most recent HANA snapshot


In a production-down scenario, the process of recovering from a storage snapshot can be started as a customer
incident with Microsoft Azure Support. It's a high-urgency matter if data was deleted in a production system and
the only way to retrieve it is to restore the production database.
In a different situation, a point-in-time recovery might be low urgency and planned days in advance. You can plan
this recovery with SAP HANA on Azure instead of raising a high-priority flag. For example, you might plan to
upgrade the SAP software by applying a new enhancement package. You then need to revert to a snapshot that
represents the state before the enhancement package upgrade.
Before you send the request, you need to prepare. The SAP HANA on Azure team can then handle the request and
provide the restored volumes. Afterward, you restore the HANA database based on the snapshots.
For the possibilities for getting a snapshot restored with the new tool set, see "How to restore a snapshot" in
Manual recovery guide for SAP HANA on Azure from a storage snapshot.
To prepare for the request, follow these steps.
1. Decide which snapshot to restore. Only the hana/data volume is restored unless you instruct otherwise.
2. Shut down the HANA instance.

3. Unmount the data volumes on each HANA database node. If the data volumes are still mounted to the
operating system, the restoration of the snapshot fails. (A command sketch for steps 2 and 3 follows this list.)

4. Open an Azure support request, and include instructions about the restoration of a specific snapshot:
During the restoration: SAP HANA on Azure Service might ask you to attend a conference call to
coordinate, verify, and confirm that the correct storage snapshot is restored.
After the restoration: SAP HANA on Azure Service notifies you when the storage snapshot is
restored.
5. After the restoration process is complete, remount all the data volumes.
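The following sketch shows steps 2 and 3 on one HANA database node, assuming instance number 00, SID PRD, and a
single data mount point; all of these are placeholders. Repeat the unmount for every data mount point on every
node.

# Stop the HANA instance (step 2), run as the <sid>adm user
su - prdadm -c "sapcontrol -nr 00 -function StopSystem HDB"
su - prdadm -c "sapcontrol -nr 00 -function GetProcessList"
# Unmount the data volume (step 3), run as root
umount /hana/data/PRD/mnt00001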

Another possibility for getting, for example, SAP HANA data files recovered from a storage snapshot, is
documented in step 7 in Manual recovery guide for SAP HANA on Azure from a storage snapshot.
To restore from a snapshot backup, see Manual recovery guide for SAP HANA on Azure from a storage snapshot.

NOTE
If your snapshot was restored by Microsoft operations, you don't need to do step 7.

Recover to another point in time


To restore to a certain point in time, see "Recover the database to the following point in time" in Manual recovery
guide for SAP HANA on Azure from a storage snapshot.

SnapCenter integration in SAP HANA large instances


This section describes how customers can use NetApp SnapCenter software to take a snapshot, backup, and
restore SAP HANA databases hosted on Microsoft Azure HANA Large Instances (HLI).
SnapCenter offers solutions for scenarios including backup/recovery, disaster recovery (DR) with asynchronous
storage replication, system replication, and system cloning. Integrated with SAP HANA Large Instances on Azure,
customers can now use SnapCenter for backup and recovery operations.
For additional references, see NetApp TR-4614 and TR-4646 on SnapCenter.
SAP HANA Backup/Recovery with SnapCenter (TR-4614)
SAP HANA Disaster Recovery with Storage Replication (TR-4646)
SAP HANA HSR with SnapCenter (TR-4719)
SAP Cloning from SnapCenter (TR-4667)
System Requirements and Prerequisites
To run SnapCenter on Azure HLI, system requirements include:
SnapCenter Server on Azure Windows 2016 or newer with 4-vCPU, 16-GB RAM and a minimum of 650 GB
managed premium SSD storage.
SAP HANA Large Instances system with 1.5 TB – 24-TB RAM. It's recommended to use two SAP HANA Large
Instance systems for cloning operations and tests.
The steps to integrate SnapCenter in SAP HANA are:
1. Raise a support ticket request to communicate the user-generated public key to the Microsoft Ops team. This is
required to set up the SnapCenter user to access the storage system.
2. Create a VM in your VNET that has access to HLI; this VM is used for SnapCenter.
3. Download and install SnapCenter.
4. Backup and recovery operations.
Create a support ticket for user-role storage setup
1. Open the Azure portal and navigate to the Subscriptions page. Once on the “Subscriptions” page, select
your SAP HANA subscription, outlined in red below.
2. On your SAP HANA subscription page, select the Resource Groups subpage.

3. Select an appropriate resource group in a region.

4. Select a SKU entry corresponding to SAP HANA on Azure storage.


5. Open a New support ticket request, outlined in red.

6. On the Basics tab, provide the following information for the ticket:
Issue type: Technical
Subscription: Your subscription
Service: SAP HANA Large Instance
Resource: Your resource group
Summary: Provide the user-generated public key
Problem type: Configuration and Setup
Problem subtype: Set up SnapCenter for HLI
7. In the Description of the support ticket, on the Details tab, provide:
Set up SnapCenter for HLI
Your public key for SnapCenter user (snapcenter.pem) - see the public key create example below

8. Select Review + create to review your support ticket.


9. Generate a certificate for the SnapCenter username on the HANA Large Instance or any Linux server.
SnapCenter requires a username and password to access the storage virtual machine (SVM) and to create
snapshots of the HANA database. Microsoft uses the public key to allow you (the customer) to set the
password for accessing the storage system.

openssl req -x509 -nodes -days 1095 -newkey rsa:2048 -keyout snapcenter.key -out snapcenter.pem -subj
"/C=US/ST=WA/L=BEL/O=NetApp/CN=snapcenter"
Generating a 2048 bit RSA private key
.......................................................................................................
.........................................+++++
...............................+++++
writing new private key to 'snapcenter.key'
-----

sollabsjct31:~ # ls -l snapcenter*
-rw-r--r-- 1 root root 1704 Jul 22 09:59 snapcenter.key
-rw-r--r-- 1 root root 1253 Jul 22 09:59 snapcenter.pem

10. Attach the snapcenter.pem file to the support ticket and then select Create
Once the public key certificate is submitted, Microsoft sets up the SnapCenter username for your tenant
along with SVM IP address.
11. After you receive the SVM IP, set a password to access SVM, which you control.
The following is an example of the REST CALL (documentation) from HANA Large Instance or VM in virtual
network, which has access to HANA Large Instance environment and will be used to set the password.

curl --cert snapcenter.pem --key snapcenter.key -X POST -k "https://10.0.40.11/api/security/authentication/password" -d '{"name":"snapcenter","password":"test1234"}'

Ensure that there is no proxy variable active on the HANA DB system.

sollabsjct31:/tmp # unset http_proxy


sollabsjct31:/tmp # unset https_proxy

Download and install SnapCenter


Now that the username is set up for SnapCenter access to the storage system, you'll use the SnapCenter username
to configure the SnapCenter once it's installed.
Before installing SnapCenter, review SAP HANA Backup/Recovery with SnapCenter to define your backup strategy.
1. Sign in to NetApp to download the latest version of SnapCenter.
2. Install SnapCenter on the Windows Azure VM.
The installer checks the prerequisites of the VM.

IMPORTANT
Pay attention to the size of the VM, especially in larger environments.

3. Configure the user credentials for the SnapCenter. By default, it populates the Windows user credentials
used for installing the application.
4. When you start the session, save the security exemption and the GUI starts up.
5. Sign in to SnapCenter on the VM (https://snapcenter-vm:8146) using the Windows credentials to configure
the environment.
Set up the storage system
1. In SnapCenter, select Storage System , and then select +New .

The default is one SVM per tenant. If a customer has multiple tenants or HLIs in multiple regions, the
recommendation is to configure all SVMs in SnapCenter
2. In Add Storage System, provide the information for the Storage System that you want to add, the
SnapCenter username and password, and then select Submit .
NOTE
The default is one SVM per tenant. If there are multiple tenants, then the recommendation is to configure all SVMs
here in SnapCenter.

3. In SnapCenter, select Hosts and the select +Add to set up the HANA plug-in and the HANA DB hosts. The
latest version of SnapCenter detects the HANA database on the host automatically.

4. Provide the information for the new host:


a. Select the operating system for the host type.
b. Enter the SnapCenter VM hostname.
c. Provide the credentials you want to use.
d. Select the Microsoft Windows and SAP HANA options and then select Submit .
IMPORTANT
Before you can install the first node, SnapCenter allows a non-root user to install plug-ins on the database. For
information on how to enable a non-root user, see Adding a non-root user and configuring sudo privileges.

5. Review the host details and select Submit to install the plug-in on the SnapCenter server.
6. After the plug-in is installed, in SnapCenter, select Hosts and then select +Add to add a HANA node.

7. Provide the information for the HANA node:


a. Select the operating system for the host type.
b. Enter the HANA DB hostname or IP address.
c. Select + to add the credentials configured on the HANA DB host operating system and then select OK .
d. Select SAP HANA and then select Submit .
8. Confirm the fingerprint and select Confirm and Submit .

9. On the HANA node, under the system database, select Security > Users > SNAPCENTER to create the
SnapCenter user.
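If you prefer SQL to the HANA Studio dialog, a minimal sketch of creating that user in the system database follows.
The instance number and password are placeholders, and the privilege set shown (Backup Admin and Catalog Read) is an
assumption about what SnapCenter needs for backup operations; check the NetApp documentation for your SnapCenter
version.

# Create the SNAPCENTER user in SYSTEMDB (instance number 00 assumed)
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p <SYSTEM-password> "CREATE USER SNAPCENTER PASSWORD <StrongPassword> NO FORCE_FIRST_PASSWORD_CHANGE"
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p <SYSTEM-password> "GRANT BACKUP ADMIN, CATALOG READ TO SNAPCENTER"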
Auto discovery
SnapCenter 4.3 enables the auto discovery function by default. Auto discovery is not supported for HANA
instances with HANA System Replication (HSR) configured. You must manually add the instance to the SnapCenter
server.
HANA setup (Manual)
If you configured HSR, you must configure the system manually.
1. In SnapCenter, select Resources and SAP HANA (at the top), and then select +Add SAP HANA
Database (on the right).

2. Specify the resource details of the HANA administrator user configured on the Linux host, or on the host
where the plug-ins are installed. The backup will be managed from the plug-in on the Linux system.

3. Select the data volume for which you need to take snapshots, select Save and then select Finish .

Create a snapshot policy


Before you use SnapCenter to back up SAP HANA database resources, you must create a backup policy for the
resource or resource group that you want to back up. During the process of creating a snapshot policy, you'll be
given the option to configure pre/post commands and special SSL keys. For information on how to create a
snapshot policy, see Creating backup policies for SAP HANA databases.
1. In SnapCenter, select Resources and then select a database.

2. Follow the workflow of the configuration wizard to configure the snapshot scheduler.

3. Provide the options for configuring pre/post commands and special SSL keys. In this example, we're using
no special settings.

4. Select Add to create a snapshot policy, which can also be used for other HANA databases.

5. Enter the policy name and a description.


6. Select the backup type and frequency.

7. Configure the On demand backup retention settings . In our example, we're setting the retention to
three snapshot copies to keep.

8. Configure the Hourly retention settings .

9. If a SnapMirror setup is configured, select Update SnapMirror after creating a local SnapShot copy .
10. Select Finish to review the summary of the new backup policy.
11. Under Configure Schedule , select Add .

12. Select the Start date, Expires on date, and the frequency.

13. Provide the email details for notifications.


14. Select Finish to create the backup policy.
Disable EMS message to NetApp Autosupport
By default, EMS data collection is enabled and runs every seven days after your installation date. You can disable
data collection with the PowerShell cmdlet Disable-SmDataCollectionEms .
1. In PowerShell, establish a session with SnapCenter.

Open-SmConnection

2. Sign in with your credentials.


3. Disable the collection of EMS messages.

Disable-SmDataCollectionEms

Restore database after crash


You can use SnapCenter to restore the database. In this section, we'll cover the high-level steps, but for more
information, see SAP HANA Backup/Recovery with SnapCenter.
1. Stop the database and delete all the database files.

su - h31adm
> sapcontrol -nr 00 -function StopSystem
StopSystem
OK
> sapcontrol -nr 00 -function GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GRAY, Stopped, , , 35902

2. Unmount the database volume.

umount /hana/data/H31/mnt00001

3. Restore the database files via SnapCenter. Select the database and then select Restore .
4. Select the restore type. In our example, we're restoring the complete resource.

NOTE
With a default setup, you don't need to specify commands to do a local restore from the on-disk snapshot.

TIP
If you want to restore a particular LUN inside the volume, select File Level.

5. Follow the workflow through the configuration wizard.


SnapCenter restores the data to the original location so you can start the restore process in HANA. Also,
since SnapCenter isn't able to modify the backup catalog (database is down), a warning is displayed.
6. Since all the database files are restored, start the restore process in HANA. In HANA Studio, under Systems,
right-click the system database and select Backup and Recovery > Recover System Database.
7. Select a recovery type.

8. Select the location of the backup catalog.

9. Select a backup to recover the SAP HANA database.


Once the database is recovered, a message appears with a Recovered to Time and Recovered to Log
Position stamp.
10. Under Systems, right-click the system database and select Backup and Recovery > Recover Tenant Database.
11. Follow the workflow of the wizard to complete the recovery of the tenant database.
For more information on restoring a database, see SAP HANA Backup/Recovery with SnapCenter.
Non-database backups
You can restore non-data volumes, for example, a network file share (/hana/shared) or an operating system
backup. For more information on restoring a non-data volume, see SAP HANA Backup/Recovery with SnapCenter.
SAP HANA system cloning
Before you can clone, you must have the same HANA version installed as the source database. The SID and ID can
be different.

1. Create a HANA database user store for the H34 database from /usr/sap/H34/HDB40.
hdbuserstore set H34KEY sollabsjct34:34013 system manager

2. Disable the firewall.

systemctl disable SuSEfirewall2


systemctl stop SuSEfirewall2

3. Install the Java SDK.

zypper in java-1_8_0-openjdk

4. In SnapCenter, add the destination host on which the clone will be mounted. For more information, see
Adding hosts and installing plug-in packages on remote hosts.
a. Provide the information for the Run As Credentials you want to add.
b. Select the host operating system and enter the host information.
c. Under Plug-ins to install , select the version, enter the install path, and select SAP HANA .
d. Select Validate to run the pre-install checks.
5. Stop HANA and unmount the old data volume. You will mount the clone from SnapCenter.

sapcontrol -nr 40 -function StopSystem


umount /hana/data/H34/mnt00001

6. Create the configuration and shell script files for the target.

mkdir /NetApp
chmod 777 /NetApp
cd /NetApp
chmod 777 sc-system-refresh-H34.cfg
chmod 777 sc-system-refresh.sh

TIP
You can copy the scripts from SAP Cloning from SnapCenter.

7. Modify the configuration file.

vi sc-system-refresh-H34.cfg

HANA_ARCHITECTURE="MDC_single_tenant"
KEY="H34KEY"
TIME_OUT_START=18
TIME_OUT_STOP=18
INSTANCENO="40"
STORAGE="10.250.101.33"
8. Modify the shell script file.
vi sc-system-refresh.sh

VERBOSE=NO
MY_NAME="$(basename $0)"
BASE_SCRIPT_DIR="$(dirname $0)"
MOUNT_OPTIONS="rw,vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,nolock"
9. Start the clone from a backup process. Select the host to create the clone.

NOTE
For more information, see Cloning from a backup.

10. Under Scripts , provide the following:


Mount command: /NetApp/sc-system-refresh.sh mount H34
%hana_data_h31_mnt00001_t250_vol_Clone
Post clone command: /NetApp/sc-system-refresh.sh recover H34
11. Disable (lock) the automatic mount in the /etc/fstab since the data volume of the pre-installed database isn't
necessary.

vi /etc/fstab
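
For example (the NFS export path below is purely illustrative), the entry for the pre-installed database's data
volume can be commented out rather than deleted:

# Prefix the old data-volume line with '#' so it is no longer mounted automatically
sed -i 's|^10.250.101.33:/hana_data_h34_mnt00001|#&|' /etc/fstab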

Delete a clone
You can delete a clone if it is no longer necessary. For more information, see Deleting clones.
The commands used to execute before clone deletion, are:
Pre clone delete: /NetApp/sc-system-refresh.sh shutdown H34
Unmount: /NetApp/sc-system-refresh.sh umount H34
These commands allow SnapCenter to shut down the database, unmount the volume, and delete the fstab entry.
After that, the FlexClone is deleted.
Cloning database logfile
20190502025323###sollabsjct34###sc-system-refresh.sh: Adding entry in /etc/fstab.
20190502025323###sollabsjct34###sc-system-refresh.sh: 10.250.101.31:/Sc21186309-ee57-41a3-8584-8210297f791d
/hana/data/H34/mnt00001 nfs rw,vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,lock 0 0
20190502025323###sollabsjct34###sc-system-refresh.sh: Mounting data volume.
20190502025323###sollabsjct34###sc-system-refresh.sh: mount /hana/data/H34/mnt00001
20190502025323###sollabsjct34###sc-system-refresh.sh: Data volume mounted successfully.
20190502025323###sollabsjct34###sc-system-refresh.sh: chown -R h34adm:sapsys /hana/data/H34/mnt00001
20190502025333###sollabsjct34###sc-system-refresh.sh: Recover system database.
20190502025333###sollabsjct34###sc-system-refresh.sh: /usr/sap/H34/HDB40/exe/Python/bin/python
/usr/sap/H34/HDB40/exe/python_support/recoverSys.py --command "RECOVER DATA USING SNAPSHOT CLEAR LOG"
[140278542735104, 0.005] >> starting recoverSys (at Thu May 2 02:53:33 2019)
[140278542735104, 0.005] args: ()
[140278542735104, 0.005] keys: {'command': 'RECOVER DATA USING SNAPSHOT CLEAR LOG'}
recoverSys started: ============2019-05-02 02:53:33 ============
testing master: sollabsjct34
sollabsjct34 is master
shutdown database, timeout is 120
stop system
stop system: sollabsjct34
stopping system: 2019-05-02 02:53:33
stopped system: 2019-05-02 02:53:33
creating file recoverInstance.sql
restart database
restart master nameserver: 2019-05-02 02:53:38
start system: sollabsjct34
2019-05-02T02:53:59-07:00 P010976 16a77f6c8a2 INFO RECOVERY state of service: nameserver,
sollabsjct34:34001, volume: 1, RecoveryPrepared
recoverSys finished successfully: 2019-05-02 02:54:00
[140278542735104, 26.490] 0
[140278542735104, 26.490] << ending recoverSys, rc = 0 (RC_TEST_OK), after 26.485 secs
20190502025400###sollabsjct34###sc-system-refresh.sh: Wait until SAP HANA database is started ....
20190502025400###sollabsjct34###sc-system-refresh.sh: Status: YELLOW
20190502025410###sollabsjct34###sc-system-refresh.sh: Status: YELLOW
20190502025420###sollabsjct34###sc-system-refresh.sh: Status: YELLOW
20190502025430###sollabsjct34###sc-system-refresh.sh: Status: YELLOW
20190502025440###sollabsjct34###sc-system-refresh.sh: Status: YELLOW
20190502025451###sollabsjct34###sc-system-refresh.sh: Status: GREEN
20190502025451###sollabsjct34###sc-system-refresh.sh: SAP HANA database is started.
20190502025451###sollabsjct34###sc-system-refresh.sh: Recover tenant database H34.
20190502025451###sollabsjct34###sc-system-refresh.sh: /usr/sap/H34/SYS/exe/hdb/hdbsql -U H34KEY RECOVER DATA
FOR H34 USING SNAPSHOT CLEAR LOG
0 rows affected (overall time 69.584135 sec; server time 69.582835 sec)
20190502025600###sollabsjct34###sc-system-refresh.sh: Checking availability of Indexserver for tenant H34.
20190502025601###sollabsjct34###sc-system-refresh.sh: Recovery of tenant database H34 succesfully finished.
20190502025601###sollabsjct34###sc-system-refresh.sh: Status: GREEN
Deleting the DB Clone – Logfile
20190502030312###sollabsjct34###sc-system-refresh.sh: Stopping HANA database.
20190502030312###sollabsjct34###sc-system-refresh.sh: sapcontrol -nr 40 -function StopSystem HDB

02.05.2019 03:03:12
StopSystem
OK
20190502030312###sollabsjct34###sc-system-refresh.sh: Wait until SAP HANA database is stopped ....
20190502030312###sollabsjct34###sc-system-refresh.sh: Status: GREEN
20190502030322###sollabsjct34###sc-system-refresh.sh: Status: GREEN
20190502030332###sollabsjct34###sc-system-refresh.sh: Status: GREEN
20190502030342###sollabsjct34###sc-system-refresh.sh: Status: GRAY
20190502030342###sollabsjct34###sc-system-refresh.sh: SAP HANA database is stopped.
20190502030347###sollabsjct34###sc-system-refresh.sh: Unmounting data volume.
20190502030347###sollabsjct34###sc-system-refresh.sh: Junction path: Sc21186309-ee57-41a3-8584-8210297f791d
20190502030347###sollabsjct34###sc-system-refresh.sh: umount /hana/data/H34/mnt00001
20190502030347###sollabsjct34###sc-system-refresh.sh: Deleting /etc/fstab entry.
20190502030347###sollabsjct34###sc-system-refresh.sh: Data volume unmounted successfully.

Uninstall SnapCenter plug-ins package for Linux


You can uninstall the Linux plug-ins package from the command line. Because the automatic deployment expects a
fresh system, it's easy to uninstall the plug-in.

NOTE
You may need to uninstall an older version of the plug-in manually.

Uninstall the plug-ins.

cd /opt/NetApp/snapcenter/spl/installation/plugins
./uninstall

You can now install the latest HANA plug-in on the new node by selecting SUBMIT in SnapCenter.

Next steps
See Disaster recovery principles and preparation.
Disaster Recovery principles

HANA Large Instances offer a disaster recovery functionality between HANA Large Instance stamps in different
Azure regions. For instance, if you deploy HANA Large Instance units in the US West region of Azure, you can use
the HANA Large Instance units in the US East region as disaster recovery units. As mentioned earlier, disaster
recovery is not configured automatically, because it requires you to pay for another HANA Large Instance unit in
the DR region. The disaster recovery setup works for scale-up as well as scale-out setups.
In the scenarios deployed so far, customers use the unit in the DR region to run non-production systems that use
an installed HANA instance. The HANA Large Instance unit needs to be of the same SKU as the SKU used for
production purposes. The following image shows what the disk configuration between the server unit in the Azure
production region and the disaster recovery region looks like:

As shown in this overview graphic, you then need to order a second set of disk volumes. The target disk volumes
are the same size as the production volumes for the production instance in the disaster recovery units. These disk
volumes are associated with the HANA Large Instance server unit in the disaster recovery site. The following
volumes are replicated from the production region to the DR site:
/hana/data
/hana/logbackups
/hana/shared (includes /usr/sap)
The /hana/log volume is not replicated because the SAP HANA transaction log is not needed in the way that the
restore from those volumes is done.
The basis of the disaster recovery functionality offered is the storage replication functionality offered by the HANA
Large Instance infrastructure. The functionality that is used on the storage side is not a constant stream of changes
that replicate in an asynchronous manner as changes happen to the storage volume. Instead, it is a mechanism that
relies on the fact that snapshots of these volumes are created on a regular basis. The delta between an already
replicated snapshot and a new snapshot that is not yet replicated is then transferred to the disaster recovery site
into target disk volumes. These snapshots are stored on the volumes and, if there is a disaster recovery failover,
need to be restored on those volumes.
The first transfer replicates the complete data of the volume; after that, only the deltas between snapshots are
transferred. As a result, the volumes in the DR site contain every one of the volume snapshots performed in the
production site. Eventually, you can use that DR system to get to an earlier status to recover lost data, without
rolling back the production system.
If there is an MCOD deployment with multiple independent SAP HANA instances on one HANA Large Instance unit,
it is expected that all SAP HANA instances are getting storage replicated to the DR side.
In cases where you use HANA System Replication as high-availability functionality in your production site, and use
storage-based replication for the DR site, the volumes of both the nodes from primary site to the DR instance are
replicated. You must purchase additional storage (the same size as for the primary node) at the DR site to
accommodate replication from both the primary and the secondary node to the DR site.

NOTE
The HANA Large Instance storage replication functionality is mirroring and replicating storage snapshots. If you don't
perform storage snapshots as introduced in the Backup and restore section of this article, there can't be any replication to
the disaster recovery site. Storage snapshot execution is a prerequisite to storage replication to the disaster recovery site.

Preparation of the disaster recovery scenario


In this scenario, you have a production system running on HANA Large Instances in the production Azure region.
For the steps that follow, let's assume that the SID of that HANA system is "PRD," and that you have a non-
production system running on HANA Large Instances in the DR Azure region. For the latter, let's assume that its SID
is "TST." The following image shows this configuration:

If the server instance has not already been ordered with the additional storage volume set, SAP HANA on Azure
Service Management attaches the additional set of volumes as a target for the production replica to the HANA
Large Instance unit on which you're running the TST HANA instance. For that purpose, you need to provide the SID
of your production HANA instance. After SAP HANA on Azure Service Management confirms the attachment of
those volumes, you need to mount those volumes to the HANA Large Instance unit.
The next step is for you to install the second SAP HANA instance on the HANA Large Instance unit in the DR Azure
region, where you run the TST HANA instance. The newly installed SAP HANA instance needs to have the same SID.
The users created need to have the same UID and Group ID that the production instance has. Read Backup and
restore for details. If the installation succeeded, you need to:
Execute step 2 of the storage snapshot preparation described in Backup and restore.
Create a public key for the DR unit of HANA Large Instance unit if you have not yet done so. See step 3 of the
storage snapshot preparation described in Backup and restore.
Maintain the HANABackupCustomerDetails.txt with the new HANA instance and test whether connectivity into
storage works correctly.
Stop the newly installed SAP HANA instance on the HANA Large Instance unit in the DR Azure region.
Unmount these PRD volumes and contact SAP HANA on Azure Service Management. The volumes can't stay
mounted to the unit because they can't be accessible while functioning as storage replication target.

The operations team establishes the replication relationship between the PRD volumes in the production Azure
region and the PRD volumes in the DR Azure region.
IMPORTANT
The /hana/log volume is not replicated because it is not necessary to restore the replicated SAP HANA database to a
consistent state in the disaster recovery site.

Next, set up, or adjust the storage snapshot backup schedule to get to your RTO and RPO in the disaster case. To
minimize the recovery point objective, set the following replication intervals in the HANA Large Instance service:
For the volumes covered by the combined snapshot (snapshot type hana ), set to replicate every 15 minutes to
the equivalent storage volume targets in the disaster recovery site.
For the transaction log backup volume (snapshot type logs ), set to replicate every 3 minutes to the equivalent
storage volume targets in the disaster recovery site.
To minimize the recovery point objective, set up the following:
Perform a hana type storage snapshot (see "Step 7: Perform snapshots") every 30 minutes to 1 hour.
Perform SAP HANA transaction log backups every 5 minutes.
Perform a logs type storage snapshot every 5-15 minutes. With this interval period, you achieve an RPO of
around 15-25 minutes.
With this setup, the sequence of transaction log backups, storage snapshots, and the replication of the HANA
transaction log backup volume and /hana/data, and /hana/shared (includes /usr/sap) might look like the data
shown in this graphic:

To achieve an even better RPO in the disaster recovery case, you can copy the HANA transaction log backups from
SAP HANA on Azure (Large Instances) to the other Azure region. To achieve this further RPO reduction, perform the
following steps:
1. Back up the HANA transaction log as frequently as possible to /hana/logbackups.
2. Use rsync to copy the transaction log backups to the NFS share hosted on Azure virtual machines. The VMs are in
Azure virtual networks in the Azure production region and in the DR region. You need to connect both Azure
virtual networks to the circuit connecting the production HANA Large Instances to Azure. See the graphics in the
Network considerations for disaster recovery with HANA Large Instances section. A sketch of such an rsync copy
follows this list.
3. Keep the transaction log backups in the region in the VM attached to the NFS exported storage.
4. In a disaster failover case, supplement the transaction log backups you find on the /hana/logbackups volume
with more recently taken transaction log backups on the NFS share in the disaster recovery site.
5. Start a transaction log backup to restore to the latest backup that might be saved over to the DR region.
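A hedged sketch of the copy in step 2, assuming a VM named logbackupvm that hosts the NFS share under
/hanalogbackup and password-less SSH between the hosts; all names and paths are placeholders. Schedule it in cron
at roughly the same cadence as the transaction log backups.

# Copy new HANA transaction log backups to the NFS share hosted by the Azure VM
rsync -av --ignore-existing /hana/logbackups/PRD/ logbackupvm:/hanalogbackup/PRD/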
When HANA Large Instance operations confirm the replication relationship setup and you start the execution
storage snapshot backups, the data replication begins.

As the replication progresses, the snapshots on the PRD volumes in the DR Azure regions are not restored. They
are only stored. If the volumes are mounted in such a state, they represent the state in which you unmounted those
volumes after the PRD SAP HANA instance was installed in the server unit in the DR Azure region. They also
represent the storage backups that are not yet restored.
If there is a failover, you also can choose to restore to an older storage snapshot instead of the latest storage
snapshot.

Next steps
Refer Disaster recovery failover procedure.
Disaster recovery failover procedure

IMPORTANT
This article isn't a replacement for the SAP HANA administration documentation or SAP Notes. We expect that you have a
solid understanding of and expertise in SAP HANA administration and operations, especially for backup, restore, high
availability, and disaster recovery (DR). In this article, screenshots from SAP HANA Studio are shown. Content, structure, and
the nature of the screens of SAP administration tools and the tools themselves might change from SAP HANA release to
release.

There are two cases to consider when you fail over to a DR site:
You need the SAP HANA database to go back to the latest status of data. In this case, there's a self-service script
with which you can perform the failover without the need to contact Microsoft. For the failback, you need to
work with Microsoft.
You want to restore to a storage snapshot that's not the latest replicated snapshot. In this case, you need to work
with Microsoft.

NOTE
The following steps must be done on the HANA Large Instance unit, which represents the DR unit.

To restore to the latest replicated storage snapshots, follow the steps in "Perform full DR failover -
azure_hana_dr_failover" in Microsoft snapshot tools for SAP HANA on Azure.
If you want to have multiple SAP HANA instances failed over, run the azure_hana_dr_failover command several
times. When requested, enter the SAP HANA SID you want to fail over and restore.
You can test the DR failover also without impacting the actual replication relationship. To perform a test failover,
follow the steps in "Perform a test DR failover - azure_hana_test_dr_failover" in Microsoft snapshot tools for SAP
HANA on Azure.

IMPORTANT
Do not run any production transactions on the instance that you created in the DR site through the process of testing a
failover . The command azure_hana_test_dr_failover creates a set of volumes that have no relationship to the primary site. As
a result, synchronization back to the primary site is not possible.

If you want to have multiple SAP HANA instances to test, run the script several times. When requested, enter the
SAP HANA SID of the instance you want to test for failover.

NOTE
If you need to fail over to the DR site to rescue some data that was deleted hours ago and need the DR volumes to be set to
an earlier snapshot, this procedure applies.

1. Shut down the nonproduction instance of HANA on the disaster recovery unit of HANA Large Instances that
you're running. A dormant HANA production instance is preinstalled.
2. Make sure that no SAP HANA processes are running. Use the following command for this check:
/usr/sap/hostctrl/exe/sapcontrol -nr <HANA instance number> -function GetProcessList .
The output should show you the hdbdaemon process in a stopped state and no other HANA processes in a
running or started state.
3. Determine to which snapshot name or SAP HANA backup ID you want to have the disaster recovery site
restored. In real disaster recovery cases, this snapshot is usually the latest snapshot. If you need to recover
lost data, pick an earlier snapshot.
4. Contact Azure Support through a high-priority support request. Ask for the restore of that snapshot with the
name and date of the snapshot or the HANA backup ID on the DR site. The default is that the operations side
restores the /hana/data volume only. If you want to have the /hana/logbackups volumes too, you need to
specifically state that. Do not restore the /hana/shared volume. Instead, choose specific files like global.ini
out of the .snapshot directory and its subdirectories after you remount the /hana/shared volume for PRD.
On the operations side, the following steps occur:
a. The replication of snapshots from the production volume to the disaster recovery volumes is stopped.
This disruption might have already happened if an outage at the production site is the reason you need to
perform the disaster recovery procedure.
b. The storage snapshot name or snapshot with the backup ID you chose is restored on the disaster recovery
volumes.
c. After the restore, the disaster recovery volumes are available to be mounted to the HANA Large Instance
units in the disaster recovery region.
5. Mount the disaster recovery volumes to the HANA Large Instance unit in the disaster recovery site.
6. Start the dormant SAP HANA production instance.
7. If you chose to copy transaction log backup logs to reduce the RPO time, merge the transaction log backups
into the newly mounted DR /hana/logbackups directory. Don't overwrite existing backups. Copy newer
backups that weren't replicated with the latest replication of a storage snapshot.
8. You can also restore single files out of the snapshots that weren't replicated to the /hana/shared/PRD
volume in the DR Azure region.
The following steps show how to recover the SAP HANA production instance based on the restored storage
snapshot and the transaction log backups that are available.
1. Change the backup location to /hana/logbackups by using SAP HANA Studio.
2. SAP HANA scans through the backup file locations and suggests the most recent transaction log backup to
restore to. The scan can take a few minutes until a screen like the following appears:
3. Adjust some of the default settings:
Clear Use Delta Backups .
Select Initialize Log Area .
4. Select Finish .
A progress window, like the one shown here, should appear. Keep in mind that the example is of a disaster recovery
restore of a three-node scale-out SAP HANA configuration.
If the restore stops responding at the Finish screen and doesn't show the progress screen, confirm that all the SAP
HANA instances on the worker nodes are running. If necessary, start the SAP HANA instances manually.

Failback from a DR to a production site


You can fail back from a DR to a production site. Let's look at a scenario in which the failover into the disaster
recovery site was caused by problems in the production Azure region, and not by your need to recover lost data.
You've been running your SAP production workload for a while in the disaster recovery site. As the problems in the
production site are resolved, you want to fail back to your production site. Because you can't lose data, the step
back into the production site involves several steps and close cooperation with the SAP HANA on Azure operations
team. It's up to you to trigger the operations team to start synchronizing back to the production site after the
problems are resolved.
Follow these steps:
1. The SAP HANA on Azure operations team gets the trigger to synchronize the production storage volumes from
the disaster recovery storage volumes, which now represent the production state. In this state, the HANA Large
Instance unit in the production site is shut down.
2. The SAP HANA on Azure operations team monitors the replication and makes sure that it's caught up before
they inform you.
3. You shut down the applications that use the production HANA Instance in the disaster recovery site. You then
perform an HANA transaction log backup. Next, you stop the HANA instance that's running on the HANA Large
Instance units in the disaster recovery site.
4. After the HANA instance that's running in the HANA Large Instance unit in the disaster recovery site is shut
down, the operations team manually synchronizes the disk volumes again.
5. The SAP HANA on Azure operations team starts the HANA Large Instance unit in the production site again. They
hand it over to you. You make sure that the SAP HANA instance is in a shutdown state at the startup time of the
HANA Large Instance unit.
6. You perform the same database restore steps that you did when you previously failed over to the disaster
recovery site.

Monitor disaster recovery replication


To monitor the status of your storage replication progress, run the script azure_hana_replication_status . This
command must be run from a unit that runs in the disaster recovery location to function as expected. The
command works no matter whether replication is active. The command can be run for every HANA Large Instance
unit of your tenant in the disaster recovery location. It can't be used to obtain details about the boot volume.
For more information on the command and its output, see "Get DR replication status -
azure_hana_replication_status" in Microsoft snapshot tools for SAP HANA on Azure.

Next steps
See Monitor and troubleshoot from HANA side.
How to monitor SAP HANA (large instances) on
Azure

SAP HANA on Azure (Large Instances) is no different from any other IaaS deployment — you need to monitor what
the OS and the application is doing and how the applications consume the following resources:
CPU
Memory
Network bandwidth
Disk space
With Azure Virtual Machines, you need to figure out whether the resource classes named above are sufficient or
they get depleted. Here is more detail on each of the different classes:
CPU resource consumption: The ratio that SAP defined for certain workload against HANA is enforced to make
sure that there should be enough CPU resources available to work through the data that is stored in memory.
Nevertheless, there might be cases where HANA consumes many CPUs executing queries due to missing indexes
or similar issues. This means you should monitor CPU resource consumption of the HANA large instance unit as
well as CPU resources consumed by the specific HANA services.
Memory consumption: It's important to monitor memory consumption from within HANA, as well as outside of HANA on the unit.
Within HANA, monitor how the data is consuming HANA allocated memory in order to stay within the required
sizing guidelines of SAP. You also want to monitor memory consumption on the Large Instance level to make sure
that additional installed non-HANA software does not consume too much memory, and therefore compete with
HANA for memory.
Network bandwidth: The Azure VNet gateway is limited in bandwidth of data moving into the Azure VNet, so it
is helpful to monitor the data received by all the Azure VMs within a VNet to figure out how close you are to the
limits of the Azure gateway SKU you selected. On the HANA Large Instance unit, it does make sense to monitor
incoming and outgoing network traffic as well, and to keep track of the volumes that are handled over time.
Disk space: Disk space consumption usually increases over time. Most common causes are: data volume
increases, execution of transaction log backups, storing trace files, and performing storage snapshots. Therefore, it
is important to monitor disk space usage and manage the disk space associated with the HANA Large Instance
unit.
For the Type II SKUs of the HANA Large Instances, the server comes with the preloaded system diagnostic tools.
You can utilize these diagnostic tools to perform the system health check. Run the following command to
generate the health check log file at /var/log/health_check.

/opt/sgi/health_check/microsoft_tdi.sh

When you work with the Microsoft Support team to troubleshoot an issue, you may also be asked to provide the
log files generated by these diagnostic tools. You can zip the files using the following command.

tar -czvf health_check_logs.tar.gz /var/log/health_check

Next steps
Refer to Monitoring and troubleshooting from HANA side.
Monitoring and troubleshooting from HANA side

In order to effectively analyze problems related to SAP HANA on Azure (Large Instances), it is useful to narrow
down the root cause of a problem. SAP has published a large amount of documentation to help you.
Applicable FAQs related to SAP HANA performance can be found in the following SAP Notes:
SAP Note #2222200 – FAQ: SAP HANA Network
SAP Note #2100040 – FAQ: SAP HANA CPU
SAP Note #1999997 – FAQ: SAP HANA Memory
SAP Note #2000000 – FAQ: SAP HANA Performance Optimization
SAP Note #1999930 – FAQ: SAP HANA I/O Analysis
SAP Note #2177064 – FAQ: SAP HANA Service Restart and Crashes

SAP HANA Alerts


As a first step, check the current SAP HANA alert logs. In SAP HANA Studio, go to Administration Console:
Alerts: Show: all alerts. This tab shows all SAP HANA alerts for specific values (free physical memory, CPU
utilization, etc.) that fall outside the set minimum and maximum thresholds. By default, checks are auto-
refreshed every 15 minutes.

CPU
For an alert triggered due to improper threshold setting, a resolution is to reset to the default value or a more
reasonable threshold value.
The following alerts may indicate CPU resource problems:
Host CPU Usage (Alert 5)
Most recent savepoint operation (Alert 28)
Savepoint duration (Alert 54)
You may notice high CPU consumption on your SAP HANA database from one of the following:
Alert 5 (Host CPU usage) is raised for current or past CPU usage
The displayed CPU usage on the overview screen

The Load graph might show high CPU consumption, or high consumption in the past:
An alert triggered due to high CPU utilization could be caused by several reasons, including, but not limited to:
execution of certain transactions, data loading, jobs that are not responding, long running SQL statements, and
bad query performance (for example, with BW on HANA cubes).
Refer to the SAP HANA Troubleshooting: CPU Related Causes and Solutions site for detailed troubleshooting steps.

Operating System
One of the most important checks for SAP HANA on Linux is to make sure that Transparent Huge Pages are
disabled, see SAP Note #2131662 – Transparent Huge Pages (THP) on SAP HANA Servers.
You can check whether Transparent Huge Pages are enabled with the following Linux command: cat
/sys/kernel/mm/transparent_hugepage/enabled
If always is enclosed in brackets, Transparent Huge Pages are enabled: [always] madvise never. If never is
enclosed in brackets, Transparent Huge Pages are disabled: always madvise [never]
The following Linux command should return nothing: rpm -qa | grep ulimit. If it appears ulimit is installed,
uninstall it immediately.
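The following minimal sketch combines these checks and disables Transparent Huge Pages for the running kernel if they are found enabled. Note that this runtime change is not persistent across reboots; the persistent setting is applied through the kernel boot parameters, as described in the operating system upgrade section later in this document:

# check the current THP setting; [never] means disabled
cat /sys/kernel/mm/transparent_hugepage/enabled

# disable THP for the running kernel only
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# the ulimit package should not be installed; this command should return nothing
rpm -qa | grep ulimit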

Memory
You may observe that the amount of memory allocated by the SAP HANA database is higher than expected. The
following alerts indicate issues with high memory usage:
Host physical memory usage (Alert 1)
Memory usage of name server (Alert 12)
Total memory usage of Column Store tables (Alert 40)
Memory usage of services (Alert 43)
Memory usage of main storage of Column Store tables (Alert 45)
Runtime dump files (Alert 46)
Refer to the SAP HANA Troubleshooting: Memory Problems site for detailed troubleshooting steps.

Network
Refer to SAP Note #2081065 – Troubleshooting SAP HANA Network and perform the network troubleshooting
steps in this SAP Note.
1. Analyze the round-trip time between server and client by running the SQL script HANA_Network_Clients.
2. Analyze internode communication by running the SQL script HANA_Network_Services.
3. Run the Linux command ifconfig (the output shows whether any packet losses are occurring).
4. Run the Linux command tcpdump .
Also, use the open source IPERF tool (or similar) to measure real application network performance.
Refer to the SAP HANA Troubleshooting: Networking Performance and Connectivity Problems site for detailed
troubleshooting steps.
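As an illustration of measuring raw network throughput with iperf, assuming the iperf3 package is available on both hosts (package names and availability differ between distributions and repositories):

# on the receiving host, for example the HANA Large Instance unit
iperf3 -s

# on the sending host, for example an Azure VM in the connected virtual network
iperf3 -c <IP address of the receiving host> -P 4 -t 30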

Storage
From an end-user perspective, an application (or the system as a whole) runs sluggishly, is unresponsive, or can
even seem to stop responding if there are issues with I/O performance. In the Volumes tab in SAP HANA Studio,
you can see the attached volumes, and what volumes are used by each service.

In the lower part of the Attached volumes screen, you can see details of the volumes, such as files and I/O statistics.

Refer to the SAP HANA Troubleshooting: I/O Related Root Causes and Solutions and SAP HANA Troubleshooting:
Disk Related Root Causes and Solutions site for detailed troubleshooting steps.

Diagnostic Tools
Perform an SAP HANA Health Check through HANA_Configuration_Minichecks. This tool returns potentially critical
technical issues that should have already been raised as alerts in SAP HANA Studio.
Refer to SAP Note #1969700 – SQL statement collection for SAP HANA and download the SQL Statements.zip file
attached to that note. Store this .zip file on the local hard drive.
In SAP HANA Studio, on the System Information tab, right-click in the Name column and select Import SQL
Statements.

Select the SQL Statements.zip file stored locally, and a folder with the corresponding SQL statements will be
imported. At this point, the many different diagnostic checks can be run with these SQL statements.
For example, to test SAP HANA System Replication bandwidth requirements, right-click the Bandwidth statement
under Replication: Bandwidth and select Open in SQL Console.
The complete SQL statement opens allowing input parameters (modification section) to be changed and then
executed.

Another example is right-clicking on the statements under Replication: Overview. Select Execute from the
context menu:

This results in information that helps with troubleshooting:


Do the same for HANA_Configuration_Minichecks and check for any X marks in the C (Critical) column.
Sample outputs:
HANA_Configuration_MiniChecks_Rev102.01+1 for general SAP HANA checks.

HANA_Services_Overview for an overview of what SAP HANA services are currently running.

HANA_Services_Statistics for SAP HANA service information (CPU, memory, etc.).
HANA_Configuration_Overview_Rev110+ for general information on the SAP HANA instance.

HANA_Configuration_Parameters_Rev70+ to check SAP HANA parameters.
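If you prefer the command line over SAP HANA Studio, hdbsql can execute the statement files shipped with SAP Note #1969700 as well. The following is a sketch only; the user, instance number, and file name are placeholders and need to be adapted to your system and to the actual file names contained in the downloaded archive:

hdbsql -n localhost -i 00 -d SYSTEMDB -u SYSTEM -p <password> -I /tmp/HANA_Configuration_MiniChecks.sql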

Next steps
Refer to High availability set up in SUSE using the STONITH.
Azure HANA Large Instances control through Azure
portal

NOTE
For Rev 4.2, follow the instructions in the Manage BareMetal Instances through the Azure portal topic.

This document covers how HANA Large Instances are presented in the Azure portal and what activities you can
conduct through the Azure portal with HANA Large Instance units that are deployed for you. Visibility of HANA
Large Instances in the Azure portal is provided through an Azure resource provider for HANA Large Instances, which
is currently in public preview.

Register HANA Large Instance Resource Provider


Usually, the Azure subscription you used for HANA Large Instance deployments is already registered for the HANA
Large Instance resource provider. However, if you can't see your deployed HANA Large Instance units, you should
register the resource provider in your Azure subscription. There are two ways to register the HANA Large
Instance resource provider:
Register through CLI interface
Sign in with the Azure CLI to the Azure subscription you used for the HANA Large Instance deployment. You can
(re-)register the HANA Large Instance provider with this command:

az provider register --namespace Microsoft.HanaOnAzure

For more information, see the article Azure resource providers and types
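To verify that the registration completed, you can query the registration state; it can take a few minutes until the state changes to Registered:

az provider show --namespace Microsoft.HanaOnAzure --query registrationState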
Register through Azure portal
You can (re-)register the HANA Large Instance resource provider through the Azure portal. List your
subscriptions in the Azure portal and double-click the subscription that was used to deploy your HANA Large
Instance unit(s). Once you are on the overview page of your subscription, select "Resource providers" as shown
below and type "HANA" into the search window.
In the screenshot shown, the resource provider was already registered. In case the resource provider is not yet
registered, press "re-register" or "register".
For more information, see the article Azure resource providers and types

Display of HANA Large Instance units in the Azure portal


When submitting a HANA Large Instance deployment request, you are asked to specify the Azure subscription
that you are connecting to the HANA Large Instances as well. It is recommended to use the same subscription you
are using to deploy the SAP application layer that works against the HANA Large Instance units. As your first HANA
Large Instances are deployed, a new Azure resource group is created in the Azure subscription you
specified in the deployment request for your HANA Large Instance(s). The new resource group lists all the
HANA Large Instance units you have deployed in the specific subscription.
In order to find the new Azure resource group, list the resource groups in your subscription by navigating
through the left navigation pane of the Azure portal.

In the list of resource groups, you might need to filter on the subscription you used to deploy the HANA Large
Instances.
After filtering to the correct subscription, you might still have a long list of resource groups. Look for one with a
postfix of -Txxx, where "xxx" are three digits, like -T050.
Once you have found the resource group, list its details. The list could look like:

Each entry listed represents a single HANA Large Instance unit that has been deployed in your subscription.
In this case, you see eight different HANA Large Instance units that were deployed in your subscription.
If you deployed several HANA Large Instance tenants under the same Azure subscription, you will find multiple
Azure resource groups.

Look at attributes of single HLI Unit


In the list of HANA Large Instance units, you can click a single unit to get to the details of that HANA
Large Instance unit.
In the overview screen, after clicking 'Show more', you get a presentation of the unit that looks like:

The attributes shown don't look much different from Azure VM attributes. On the left-hand side, the
header shows the resource group, Azure region, subscription name, and ID, as well as tags that you added.
By default, the HANA Large Instance units have no tags assigned. On the right-hand side of the header, the name
of the unit is listed as it was assigned when the deployment was done. The operating system is shown,
as well as the IP address. As with VMs, the HANA Large Instance unit type with the number of CPU threads and
memory is shown as well. More details on the different HANA Large Instance units are shown here:
Available SKUs for HLI
SAP HANA (Large Instances) storage architecture
Additional data on the right lower side is the revision of the HANA Large Instance stamp. Possible values are:
Revision 3
Revision 4
Revision 4 is the latest architecture released for HANA Large Instances, with major improvements in network latency
between Azure VMs and HANA Large Instance units deployed in Revision 4 stamps or rows. Another very
important piece of information is found in the lower right corner of the overview: the name of the Azure proximity
placement group that is automatically created for each deployed HANA Large Instance unit. This proximity
placement group needs to be referenced when deploying the Azure VMs that host the SAP application layer. By
using the Azure proximity placement group associated with the HANA Large Instance unit, you make sure that the
Azure VMs are deployed in close proximity to the HANA Large Instance unit. How proximity placement
groups can be used to locate the SAP application layer in the same Azure datacenter as Revision 4 hosted HANA
Large Instance units is described in Azure Proximity Placement Groups for optimal network latency with SAP
applications.
An additional field in the right column of the header informs about the power state of the HANA Large instance
unit.

NOTE
The power state describes whether the hardware unit is powered on or off. It does not give information about the operating
system being up and running. As you restart a HANA Large Instance unit, you will experience a small time where the state of
the unit changes to Star ting to move into the state of Star ted . Being in the state of Star ted means that the OS is starting
up or that the OS has been started up completely. As a result, after a restart of the unit, you can't expect to immediately log
into the unit as soon as the state switches to Star ted .

If you press 'See more', additional information is shown, for example the revision of
the HANA Large Instance stamp the unit got deployed in. See the article What is SAP HANA on Azure (Large
Instances) for the different revisions of HANA Large Instance stamps.

Check activities of a single HANA Large Instance unit


Beyond giving an overview of the HANA Large Instance units, you can check activities of the particular unit. An
activity log could look like:

One of the main activities recorded is a restart of a unit. The data listed includes the status of the activity, the time
stamp the activity was triggered, the subscription ID out of which the activity was triggered, and the Azure user who
triggered the activity.
Another activity that gets recorded is changes to the unit in the Azure metadata. Besides the restart initiated,
you can see the activity of Write HANAInstances. This type of activity performs no changes on the HANA Large
Instance unit itself, but documents changes to the metadata of the unit in Azure. In the case listed, we added
and deleted a tag (see next section).

Add and delete an Azure tag to a HANA Large Instance unit


You can also add a tag to a HANA Large Instance unit. The way tags are assigned does
not differ from assigning tags to VMs. As with VMs, the tags exist in Azure metadata and, for HANA Large
Instances, have the same restrictions as tags for VMs.
Deleting tags works the same way as with VMs. Both activities, applying and deleting a tag, are listed in the
activity log of the particular HANA Large Instance unit.
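Tags can also be applied from the command line. The following sketch uses the generic az resource command; the resource group and unit name are placeholders, and the resource type shown is an assumption based on the Microsoft.HanaOnAzure resource provider. Note that az resource tag replaces the existing tag set on the resource:

az resource tag --tags Dept=IT Environment=Prod --resource-group <resource group of the unit> --name <name of the HANA Large Instance unit> --resource-type "Microsoft.HanaOnAzure/hanaInstances"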

Check properties of a HANA Large Instance unit


The section Properties includes important information that you get when the instances are handed over to you. It
is a section where you get all the information that you could require in support cases or that you need when
setting up storage snapshot configuration. As such, this section is a collection of data about your instance, the
connectivity of the instance to Azure, and the storage backend. The top of the section looks like:

You already saw the first few data items in the overview screen. An important portion of data is the
ExpressRoute circuit ID, which you got when the first deployed units were handed over. In some support cases, you
might be asked for that data. Another important data entry is shown at the bottom of the screenshot: the IP address
of the NFS storage head that isolates your storage to your tenant in the HANA Large Instance stack. This IP address
is also needed when you edit the configuration file for storage snapshot backups.
As you scroll down in the property pane, you get additional data, like a unique resource ID for your HANA Large
Instance unit, or the subscription ID that was assigned to the deployment.

Restart a HANA Large Instance unit through Azure portal


When initiating a restart of the Linux operating system, there were various situations where the OS could not finish a
restart successfully. In order to force a restart, you needed to open a service request to have Microsoft operations
perform a power restart of the HANA Large Instance unit. The functionality of a power restart of a HANA Large
Instance unit is now integrated into the Azure portal. When you are in the overview part of the HANA Large Instance
unit, you see the button for restart at the top of the data section.

When you press the restart button, you are asked whether you really want to restart the unit. After you confirm by
pressing the button "Yes", the unit restarts.
NOTE
In the restart process, you will experience a small time where the state of the unit changes to Star ting to move into the
state of Star ted . Being in the state of Star ted means that the OS is starting up or that the OS has been started up
completely. As a result, after a restart of the unit, you can't expect to immediately log into the unit as soon as the state
switches to Star ted .

IMPORTANT
Dependent on the amount of memory in your HANA Large Instance unit, a restart and reboot of the hardware and the
operating system can take up to one hour

Open a support request for HANA large Instances


From the Azure portal display of HANA Large Instance units, you can also create support requests specifically for a
HANA Large Instance unit. To do so, follow the link New support request.

In order to get the service of SAP HANA Large Instances listed in the next screen, you might need to select 'All
services' as shown below.
In the list of services, you can find the service SAP HANA Large Instance. When you choose that service, you can
select specific problem types as shown:

Under each of the different problem types, you are offered a selection of problem subtypes that you need to select to
characterize your problem further. After selecting the subtype, you can name the subject. Once you are done
with the selection process, you can move to the next step of the creation. In the Solutions section, you are pointed to
documentation around HANA Large Instances, which might give a pointer to a solution for your problem. If you
can't find a solution for your problem in the suggested documentation, go to the next step. In the next step, you
are asked whether the issue is with VMs or with HANA Large Instance units. This information helps to
direct the support request to the correct specialists.
Once you have answered the questions and provided additional details, you can go to the next step to review the
support request and then submit it.

Next steps
How to monitor SAP HANA (large instances) on Azure
Monitoring and troubleshooting from HANA side
Manage BareMetal Instances through the Azure
portal

This article shows how the Azure portal displays BareMetal Instances. This article also shows you the activities you
can do in the Azure portal with your deployed BareMetal Instance units.

Register the resource provider


An Azure resource provider for BareMetal Instances provides visibility of the instances in the Azure portal, currently
in public preview. By default, the Azure subscription you use for BareMetal Instance deployments registers the
BareMetalInfrastructure resource provider. If you don't see your deployed BareMetal Instance units, you must
register the resource provider with your subscription. There are two ways to register the BareMetal Instance
resource provider:
Azure CLI
Azure portal
Azure CLI
Sign in to the Azure subscription you use for the BareMetal Instance deployment through the Azure CLI. You can
register the BareMetalInfrastructure resource provider with:

az provider register --namespace Microsoft.BareMetalInfrastructure

For more information, see the article Azure resource providers and types.
Azure portal
You can register the BareMetalInfrastructure resource provider through the Azure portal.
You'll need to list your subscription in the Azure portal and then double-click on the subscription used to deploy
your BareMetal Instance units.
1. Sign in to the Azure portal.
2. On the Azure portal menu, select All ser vices .
3. In the All ser vices box, enter subscription , and then select Subscriptions .
4. Select the subscription from the subscription list to view.
5. Select Resource providers and enter BareMetalInfrastructure into the search. The resource provider
should be Registered , as the image shows.

NOTE
If the resource provider is not registered, select Register .
BareMetal Instance units in the Azure portal
When you submit a BareMetal Instance deployment request, you'll specify the Azure subscription that you're
connecting to the BareMetal Instances. Use the same subscription you use to deploy the application layer that
works against the BareMetal Instance units.
During the deployment of your BareMetal Instances, a new Azure resource group gets created in the Azure
subscription you used in the deployment request. This new resource group lists all your BareMetal Instance units
you've deployed in the specific subscription.
1. In the BareMetal subscription, in the Azure portal, select Resource groups .
2. In the list, locate the new resource group.

TIP
You can filter on the subscription you used to deploy the BareMetal Instance. After you filter to the proper
subscription, you might have a long list of resource groups. Look for one with a post-fix of -Txxx where xxx is three
digits like -T250 .

3. Select the new resource group to show the details of it. The image shows one BareMetal Instance unit
deployed.

NOTE
If you deployed several BareMetal Instance tenants under the same Azure subscription, you would see multiple Azure
resource groups.
View the attributes of a single instance
You can view the details of a single unit. In the list of the BareMetal instance, select the single instance you want to
view.

The attributes in the image don't look much different than the Azure virtual machine (VM) attributes. On the left,
you'll see the Resource group, Azure region, and subscription name and ID. If you assigned tags, then you'll see
them here as well. By default, the BareMetal Instance units don't have tags assigned.
On the right, you'll see the unit's name, operating system (OS), IP address, and SKU that shows the number of CPU
threads and memory. You'll also see the power state and hardware version (revision of the BareMetal Instance
stamp). The power state indicates if the hardware unit is powered on or off. The operating system details, however,
don't indicate whether it's up and running.
The possible hardware revisions are:
Revision 3
Revision 4
Revision 4.2

NOTE
Revision 4.2 is the latest rebranded BareMetal Infrastructure using the Revision 4 architecture. It has significant
improvements in network latency between Azure VMs and BareMetal instance units deployed in Revision 4 stamps or rows.

Also, on the right side, you'll find the Azure Proximity Placement Group's name, which is created automatically for
each deployed BareMetal Instance unit. Reference the Proximity Placement Group when you deploy the Azure VMs
that host the application layer. When you use the Proximity Placement Group associated with the BareMetal
Instance unit, you ensure that the Azure VMs get deployed close to the BareMetal Instance unit.

TIP
To locate the application layer in the same Azure datacenter as Revision 4.x, see Azure proximity placement groups for
optimal network latency.

Check activities of a single instance


You can check the activities of a single unit. One of the main activities recorded are restarts of the unit. The data
listed includes the activity's status, timestamp the activity triggered, subscription ID, and the Azure user who
triggered the activity.
Changes to the unit's metadata in Azure also get recorded in the Activity log. Besides the restart initiated, you can
see the activity of Write BareMetalInstances. This activity makes no changes on the BareMetal Instance unit itself
but documents the changes to the unit's metadata in Azure.
Another activity that gets recorded is when you add or delete a tag to an instance.

Add and delete an Azure tag to an instance


You can add Azure tags to a BareMetal Instance unit or delete them. The way tags get assigned doesn't differ from
assigning tags to VMs. As with VMs, the tags exist in the Azure metadata, and for BareMetal Instances, they have
the same restrictions as the tags for VMs.
Deleting tags works the same way as with VMs. Applying and deleting a tag are listed in the BareMetal Instance
unit's Activity log.

Check properties of an instance


When you acquire the instances, you can go to the Properties section to view the data collected about the
instances. The data collected includes the Azure connectivity, storage backend, ExpressRoute circuit ID, unique
resource ID, and the subscription ID. You'll use this information in support requests or when setting up storage
snapshot configuration.
Another critical piece of information you'll see is the storage NFS IP address. It isolates your storage to your tenant
in the BareMetal Instance stack. You'll use this IP address when you edit the configuration file for storage snapshot
backups.
Restart a unit through the Azure portal
There are various situations where the OS won't finish a restart, which requires a power restart of the BareMetal
Instance unit. You can do a power restart of the unit directly from the Azure portal:
Select Restar t and then Yes to confirm the restart of the unit.

When you restart a BareMetal Instance unit, you'll experience a delay. During this delay, the power state moves
from Star ting to Star ted , which means the OS has started up completely. As a result, after a restart, you can't log
into the unit as soon as the state switches to Star ted .

IMPORTANT
Depending on the amount of memory in your BareMetal Instance unit, a restart and a reboot of the hardware and the
operating system can take up to one hour.

Open a support request for BareMetal Instances


You can submit support requests specifically for a BareMetal Instance unit.
1. In Azure portal, under Help + Suppor t , create a New suppor t request and provide the following
information for the ticket:
Issue type: Select an issue type
Subscription: Select your subscription
Ser vice: BareMetal Infrastructure
Resource: Provide the name of the instance
Summar y: Provide a summary of your request
Problem type: Select a problem type
Problem subtype: Select a subtype for the problem
2. Select the Solutions tab to find a solution to your problem. If you can't find a solution, go to the next step.
3. Select the Details tab and select whether the issue is with VMs or the BareMetal Instance units. This
information helps direct the support request to the correct specialists.
4. Indicate when the problem began and select the instance region.
5. Provide more details about the request and upload a file if needed.
6. Select Review + Create to submit the request.
It takes up to five business days for a support representative to confirm your request.

Next steps
If you want to learn more about BareMetal, see BareMetal workload types.
High availability set up in SUSE using the STONITH

This document provides detailed step-by-step instructions to set up high availability on the SUSE operating
system using the STONITH device.
Disclaimer: This guide is derived by testing the setup in the Microsoft HANA Large Instances environment, where it
works successfully. As the Microsoft Service Management team for HANA Large Instances does not support the operating
system, you may need to contact SUSE for any further troubleshooting or clarification on the operating system
layer. The Microsoft Service Management team does set up the STONITH device, fully supports it, and can be involved in
troubleshooting STONITH device issues.

Overview
To set up high availability using SUSE clustering, the following prerequisites must be met.
Prerequisites
HANA Large Instances are provisioned
The operating system is registered
The HANA Large Instance servers are connected to an SMT server to get patches/packages
The operating system has the latest patches installed
NTP (time server) is set up
Read and understand the latest version of SUSE documentation on HA setup
Setup details
This guide uses the following setup:
Operating System: SLES 12 SP1 for SAP
HANA Large Instances: 2xS192 (four sockets, 2 TB)
HANA Version: HANA 2.0 SP1
Server Names: sapprdhdb95 (node1) and sapprdhdb96 (node2)
STONITH Device: iSCSI based STONITH device
NTP set up on one of the HANA Large Instance nodes
When you set up HANA Large Instances with HSR, you can request that the Microsoft Service Management team set up
STONITH. If you are an existing customer who already has HANA Large Instances provisioned and needs the STONITH
device set up for your existing blades, you need to provide the following information to the Microsoft Service
Management team in the service request form (SRF). You can request the SRF form through the Technical Account
Manager or your Microsoft contact for HANA Large Instance onboarding. New customers can request the
STONITH device at the time of provisioning. The inputs are available in the provisioning request form.
Server Name and Server IP address (for example, myhanaserver1, 10.35.0.1)
Location (for example, US East)
Customer Name (for example, Microsoft)
SID - HANA System Identifier (for example, H11)
Once the STONITH device is configured, the Microsoft Service Management team provides you with the SBD device
name and the IP address of the iSCSI storage, which you can use to configure the STONITH setup.
To set up end-to-end HA using STONITH, the following steps need to be followed:
1. Identify the SBD device
2. Initialize the SBD device
3. Configuring the Cluster
4. Setting Up the Softdog Watchdog
5. Join the node to the cluster
6. Validate the cluster
7. Configure the resources to the cluster
8. Test the failover process

1. Identify the SBD device


This section describes how to determine the SBD device for your setup after the Microsoft Service Management
team has configured STONITH. This section only applies to existing customers. If you are a new
customer, the Microsoft Service Management team provides the SBD device name to you, and you can skip this
section.
1.1 Modify /etc/iscsi/initiatorname.iscsi to

iqn.1996-04.de.suse:01:<Tenant><Location><SID><NodeNumber>

Microsoft Service Management provides this string. Modify the file on both nodes; however, the node
number is different on each node.

1.2 Modify /etc/iscsi/iscsid.conf: Set node.session.timeo.replacement_timeout=5 and node.startup = automatic.


Modify the file on both the nodes.
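After the change, the relevant entries in /etc/iscsi/iscsid.conf should look similar to the following (all other settings stay at their defaults):

node.session.timeo.replacement_timeout = 5
node.startup = automatic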
1.3 Execute the discovery command; it shows four sessions. Run it on both nodes.

iscsiadm -m discovery -t st -p <IP address provided by Service Management>:3260

1.4 Execute the command to log in to the iSCSI device; it shows four sessions. Run it on both nodes.

iscsiadm -m node -l
1.5 Execute the rescan script: rescan-scsi-bus.sh. This script shows you the new disks created for you. Run it on
both the nodes. You should see a LUN number that is greater than zero (for example: 1, 2 etc.)

rescan-scsi-bus.sh

1.6 To get the device name, run the command fdisk -l. Run it on both nodes. Pick the device with a size of 178
MiB.

fdisk -l

2. Initialize the SBD device


2.1 Initialize the SBD device on both the nodes

sbd -d <SBD Device Name> create

2.2 Check what has been written to the device. Do it on both the nodes

sbd -d <SBD Device Name> dump

3. Configuring the Cluster


This section describes the steps to set up the SUSE HA cluster.
3.1 Package installation
3.1.1 Check that the ha_sles and SAPHanaSR-doc patterns are installed. If they are not installed, install them on
both nodes.
zypper in -t pattern ha_sles
zypper in SAPHanaSR SAPHanaSR-doc

3.2 Setting up the cluster


3.2.1 You can either use the ha-cluster-init command or the yast2 wizard to set up the cluster. In this case, the
yast2 wizard is used. Perform this step only on the Primary node.
Follow yast2> High Availability > Cluster

Click Cancel since the hawk2 package is already installed.


Click Continue
Expected value=Number of nodes deployed (in this case 2)

Click Next

Add node names and then click “Add suggested files”


Click “Turn csync2 ON”
Click “Generate Pre-Shared-Keys”, it shows below popup
Click OK
The authentication is performed using the IP addresses and pre-shared keys in csync2. The key file is generated
with csync2 -k /etc/csync2/key_hagroup. The file key_hagroup should be copied to all members of the cluster
manually after it's created. Make sure to copy the file from node1 to node2.
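For example, the key file can be copied with scp; the node name below follows the naming used in this setup and needs to be adapted to your second node:

scp /etc/csync2/key_hagroup root@sapprdhdb96:/etc/csync2/key_hagroup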

Click Next
In the default option, Booting is off; change it to "on" so that pacemaker is started on boot. You can make the choice
based on your setup requirements. Click Next, and the cluster configuration is complete.

4. Setting Up the Softdog Watchdog


This section describes the configuration of the watchdog (softdog).
4.1 Add the following line to /etc/init.d/boot.local on both the nodes.

modprobe softdog

4.2 Update the file /etc/sysconfig/sbd on both nodes as follows:

SBD_DEVICE="<SBD Device Name>"


4.3 Load the kernel module on both the nodes by running the following command

modprobe softdog

4.4 Check and ensure that softdog is running on both nodes:

lsmod | grep dog

4.5 Start the SBD device on both the nodes

/usr/share/sbd/sbd.sh start

4.6 Test the SBD daemon on both the nodes. You see two entries after you configure it on both the nodes

sbd -d <SBD Device Name> list

4.7 Send a test message to one of your nodes

sbd -d <SBD Device Name> message <node2> <message>

4.8 On the Second node (node2) you can check the message status

sbd -d <SBD Device Name> list

4.9 To adopt the SBD configuration, update the file /etc/sysconfig/sbd as follows. Update the file on both nodes.
SBD_DEVICE="<SBD Device Name>"
SBD_WATCHDOG="yes"
SBD_PACEMAKER="yes"
SBD_STARTMODE="clean"
SBD_OPTS=""

4.10 Start the pacemaker service on the Primary node (node1)

systemctl start pacemaker

If the pacemaker service fails, refer to Scenario 5: Pacemaker service fails

5. Joining the cluster


This section describes how to join the node to the cluster.
5.1 Add the node
Run the following command on node2 to let node2 join the cluster.

ha-cluster-join

If you receive an error while joining the cluster, refer to Scenario 6: Node 2 unable to join the cluster.

6. Validating the cluster


6.1 Start the cluster service
Check the cluster status and, if needed, start the cluster service for the first time on both nodes.

systemctl status pacemaker


systemctl start pacemaker
6.2 Monitor the status
Run the command crm_mon to ensure that both nodes are online. You can run it on any node of the
cluster.

crm_mon

You can also log in to Hawk at https://<node IP>:7630 to check the cluster status. The default user is hacluster and the
password is linux. If needed, you can change the password using the passwd command.

7. Configure Cluster Properties and Resources


This section describes the steps to configure the cluster resources. In this example, you set up the following resources;
the rest can be configured (if needed) by referencing the SUSE HA guide. Perform the configuration on the primary
node only.
Cluster bootstrap
STONITH Device
The Virtual IP Address
7.1 Cluster bootstrap and more
Add cluster bootstrap. Create the file and add the text as following:

sapprdhdb95:~ # vi crm-bs.txt
# enter the following to crm-bs.txt
property $id="cib-bootstrap-options" \
no-quorum-policy="ignore" \
stonith-enabled="true" \
stonith-action="reboot" \
stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
resource-stickiness="1000" \
migration-threshold="5000"
op_defaults $id="op-options" \
timeout="600"
Add the configuration to the cluster.

crm configure load update crm-bs.txt

7.2 STONITH device


Add resource STONITH. Create the file and add the text as following.

# vi crm-sbd.txt
# enter the following to crm-sbd.txt
primitive stonith-sbd stonith:external/sbd \
params pcmk_delay_max="15"

Add the configuration to the cluster.

crm configure load update crm-sbd.txt

7.3 The virtual IP address


Add resource virtual IP. Create the file and add the text as below.

# vi crm-vip.txt
primitive rsc_ip_HA1_HDB10 ocf:heartbeat:IPaddr2 \
operations $id="rsc_ip_HA1_HDB10-operations" \
op monitor interval="10s" timeout="20s" \
params ip="10.35.0.197"

Add the configuration to the cluster.

crm configure load update crm-vip.txt

7.4 Validate the resources


When you run command crm_mon, you can see the two resources there.

Also, you can see the status at https://<node IP address>:7630/cib/live/state


8. Testing the failover process
To test the failover process, stop the pacemaker service on node1; the resources fail over to node2.

service pacemaker stop

Now, stop the pacemaker service on node2, and the resources fail over to node1.
Before failover

After failover
9. Troubleshooting
This section describes a few failure scenarios that can be encountered during the setup. You may not
necessarily face these issues.
Scenario 1: Cluster node not online
If any of the nodes does not show as online in the cluster manager, you can try the following to bring it online.
Start the iSCSI service

service iscsid start

And now you should be able to log in to that iSCSI node

iscsiadm -m node -l

The expected output looks like the following:

sapprdhdb45:~ # iscsiadm -m node -l


Logging in to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.11,3260]
(multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.12,3260]
(multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.22,3260]
(multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.21,3260]
(multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.11,3260]
successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.12,3260]
successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.22,3260]
successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.21,3260]
successful.

Scenario 2: yast2 does not show graphical view


The yast2 graphical screen is used to set up the High Availability cluster in this document. If yast2 does not open
with the graphical window as shown and throws a Qt error, perform the following steps. If it opens with the
graphical window, you can skip these steps.
Error

Expected Output
If yast2 does not open with the graphical view, follow these steps.
Install the required packages. You must be logged in as user "root" and have SMT set up to download and install the
packages.
To install the packages, use yast > Software > Software Management > Dependencies > option "Install recommended
packages…". The following screenshots illustrate the expected screens.

NOTE
You need to perform the steps on both the nodes, so that you can access the yast2 graphical view from both the nodes.

Under Dependencies, select "Install Recommended Packages"

Review the changes and hit OK


Package installation proceeds

Click Next

Click Finish
You also need to install the libqt4 and libyui-qt packages.
zypper -n install libqt4

zypper -n install libyui-qt

Yast2 should be able to open the graphical view now as shown here.

Scenario 3: yast2 does not show the High Availability option


For the High Availability option to be visible in the yast2 control center, you need to install additional packages.
Use Yast2 > Software > Software management and select the following patterns:
SAP HANA server base
C/C++ Compiler and tools
High availability
SAP Application server base
The following screen shows the steps to install the patterns.
Using yast2 > Software > Software Management

Select the patterns


Click Accept

Click Continue
Click Next when the installation is complete

Scenario 4: HANA Installation fails with gcc assemblies error


The HANA installation fails with the following error.
To fix the issue, install the libgcc_s1 and libstdc++6 libraries as shown below.

Scenario 5: Pacemaker service fails


The following issue occurred during the pacemaker service start.

sapprdhdb95:/ # systemctl start pacemaker


A dependency job for pacemaker.service failed. See 'journalctl -xn' for details.
sapprdhdb95:/ # journalctl -xn
-- Logs begin at Thu 2017-09-28 09:28:14 EDT, end at Thu 2017-09-28 21:48:27 EDT. --
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync configuration map
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [QB ] withdrawing server sockets
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync configuration ser
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [QB ] withdrawing server sockets
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync cluster closed pr
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [QB ] withdrawing server sockets
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync cluster quorum se
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync profile loading s
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [MAIN ] Corosync Cluster Engine exiting normally
Sep 28 21:48:27 sapprdhdb95 systemd[1]: Dependency failed for Pacemaker High Availability Cluster Manager
-- Subject: Unit pacemaker.service has failed
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit pacemaker.service has failed.
--
-- The result is dependency.

sapprdhdb95:/ # tail -f /var/log/messages


2017-09-28T18:44:29.675814-04:00 sapprdhdb95 corosync[57600]: [QB ] withdrawing server sockets
2017-09-28T18:44:29.676023-04:00 sapprdhdb95 corosync[57600]: [SERV ] Service engine unloaded: corosync
cluster closed process group service v1.01
2017-09-28T18:44:29.725885-04:00 sapprdhdb95 corosync[57600]: [QB ] withdrawing server sockets
2017-09-28T18:44:29.726069-04:00 sapprdhdb95 corosync[57600]: [SERV ] Service engine unloaded: corosync
cluster quorum service v0.1
2017-09-28T18:44:29.726164-04:00 sapprdhdb95 corosync[57600]: [SERV ] Service engine unloaded: corosync
profile loading service
2017-09-28T18:44:29.776349-04:00 sapprdhdb95 corosync[57600]: [MAIN ] Corosync Cluster Engine exiting
normally
2017-09-28T18:44:29.778177-04:00 sapprdhdb95 systemd[1]: Dependency failed for Pacemaker High Availability
Cluster Manager.
2017-09-28T18:44:40.141030-04:00 sapprdhdb95 systemd[1]: [/usr/lib/systemd/system/fstrim.timer:8] Unknown
lvalue 'Persistent' in section 'Timer'
2017-09-28T18:45:01.275038-04:00 sapprdhdb95 cron[57995]: pam_unix(crond:session): session opened for user
root by (uid=0)
2017-09-28T18:45:01.308066-04:00 sapprdhdb95 CRON[57995]: pam_unix(crond:session): session closed for user
root

To fix it, delete the following line from the file /usr/lib/systemd/system/fstrim.timer

Persistent=true

Scenario 6: Node 2 unable to join the cluster


When joining node2 to the existing cluster using the ha-cluster-join command, the following error occurred.
ERROR: Can’t retrieve SSH keys from <Primary Node>

To fix it, run the following on both nodes:

ssh-keygen -q -f /root/.ssh/id_rsa -C 'Cluster Internal' -N ''


cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

After the preceding fix, node2 should be added to the cluster.

10. General Documentation


You can find more information on SUSE HA setup in the following articles:
SAP HANA SR Performance Optimized Scenario
Storage-based fencing
Blog - Using Pacemaker Cluster for SAP HANA- Part 1
Blog - Using Pacemaker Cluster for SAP HANA- Part 2
OS backup and restore for Type II SKUs of Revision 3
stamps

This document describes the steps to perform an operating system file level backup and restore for the Type II
SKUs of the HANA Large Instances of Revision 3.

IMPORTANT
This article does not apply to Type II SKU deployments in Revision 4 HANA Large Instance stamps. Boot
LUNs of Type II HANA Large Instance units that are deployed in Revision 4 HANA Large Instance stamps can be backed up
with storage snapshots, as is already the case with Type I SKUs in Revision 3 stamps.

NOTE
The OS backup scripts use the ReaR software, which is pre-installed on the server.

After the Microsoft Service Management team completes the provisioning, by default, the server is configured
with two backup schedules that perform the file-system-level backup of the operating system. You can check the
schedules of the backup jobs by using the following command:

#crontab -l

You can change the backup schedule anytime using the following command:

#crontab -e
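For illustration only, a crontab entry that runs a weekly ReaR backup could look like the following; this is a hypothetical schedule, and the schedules actually configured by the Microsoft Service Management team may differ:

# run a full OS file-level backup every Saturday at 02:00
0 2 * * 6 /usr/sbin/rear -v mkbackup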

How to take a manual backup?


The OS file system backup is scheduled using a cron job already. However, you can perform the operating system
file level backup manually as well. To perform a manual backup, run the following command:

#rear -v mkbackup

The following screenshot shows a sample manual backup:


How to restore a backup?
You can restore a full backup or an individual file from the backup. To restore, use the following command:

#tar -xvf <backup file> [Optional <file to restore>]

After the restore, the file is recovered in the current working directory.
The following command shows the restore of the file /etc/fstab from the backup file backup.tar.gz:

#tar -xvf /osbackups/hostname/backup.tar.gz etc/fstab

NOTE
You need to copy the file to the desired location after it is restored from the backup.

The following screenshot shows the restore of a complete backup:

How to install the ReaR tool and change the configuration?


The Relax-and-Recover (ReaR) packages are pre-installed in the Type II SKUs of HANA Large Instances, and no
action is needed from you. You can start using ReaR for the operating system backup directly. However, if you
need to install the packages on your own, you can follow the listed steps to install and configure the ReaR tool.
To install the ReaR backup packages, use the following commands:
For SLES operating system, use the following command:

#zypper install <rear rpm package>

For RHEL operating system, use the following command:

#yum install rear -y


To configure the ReaR tool, you need to update parameters OUTPUT_URL and BACKUP_URL in the file
/etc/rear/local.conf.

OUTPUT=ISO
ISO_MKISOFS_BIN=/usr/bin/ebiso
BACKUP=NETFS
OUTPUT_URL="nfs://nfsip/nfspath/"
BACKUP_URL="nfs://nfsip/nfspath/"
BACKUP_OPTIONS="nfsvers=4,nolock"
NETFS_KEEP_OLD_BACKUP_COPY=
EXCLUDE_VG=( vgHANA-data-HC2 vgHANA-data-HC3 vgHANA-log-HC2 vgHANA-log-HC3 vgHANA-shared-HC2 vgHANA-shared-HC3 )
BACKUP_PROG_EXCLUDE=("${BACKUP_PROG_EXCLUDE[@]}" '/media' '/var/tmp/*' '/var/crash' '/hana' '/usr/sap' '/proc')

The following screenshot shows a sample ReaR configuration:


Kdump for SAP HANA on Azure Large Instances
(HLI)

Configuring and enabling kdump is needed to troubleshoot system crashes that do not have a clear
cause. Sometimes a system crashes unexpectedly and the crash cannot be explained by a hardware or
infrastructure problem. In such cases, the cause can be an operating system or application problem, and kdump allows
SUSE to determine why the system crashed.

Enable Kdump service


This document describes how to enable the kdump service on Azure HANA Large Instances (Type I and
Type II).

Supported SKUs
HANA Large Instance Type   OS Vendor   OS Package Version   SKU

Type I SuSE SLES 12 SP3 S224m

Type I SuSE SLES 12 SP4 S224m

Type I SuSE SLES 12 SP2 S72

Type I SuSE SLES 12 SP2 S72m

Type I SuSE SLES 12 SP3 S72m

Type I SuSE SLES 12 SP2 S96

Type I SuSE SLES 12 SP3 S96

Type I SuSE SLES 12 SP2 S192

Type I SuSE SLES 12 SP3 S192

Type I SuSE SLES 12 SP4 S192

Type I SuSE SLES 12 SP2 S192m

Type I SuSE SLES 12 SP3 S192m

Type I SuSE SLES 12 SP4 S192m

Type I SuSE SLES 12 SP2 S144

Type I SuSE SLES 12 SP3 S144



Type I SuSE SLES 12 SP2 S144m

Type I SuSE SLES 12 SP3 S144m

Type II SuSE SLES 12 SP2 S384

Type II SuSE SLES 12 SP3 S384

Type II SuSE SLES 12 SP4 S384

Type II SuSE SLES 12 SP2 S384xm

Type II SuSE SLES 12 SP3 S384xm

Type II SuSE SLES 12 SP4 S384xm

Type II SuSE SLES 12 SP2 S576m

Type II SuSE SLES 12 SP3 S576m

Type II SuSE SLES 12 SP4 S576m

Prerequisites
The kdump service uses the /var/crash directory to write dumps. Make sure the partition corresponding to this directory
has sufficient space to accommodate the dumps.
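You can verify the available space with a quick check, for example:

df -h /var/crash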

Setup details
The script to enable kdump can be found here.

NOTE
this script is made based on our lab setup and Customer is expected to contact OS vendor for any further tuning. Separate
LUN is going to be provisioned for the new and existing servers for saving the dumps and script will take care of configuring
the file system out of the LUN. Microsoft will not be responsible for analyzing the dump. Customer has to open a ticket with
OS vendor to get it analyzed.

Run this script on the HANA Large Instance using the following command.

NOTE
Sudo privileges are needed to run this command.

sudo bash enable-kdump.sh

If the command outputs Kdump is successfully enabled, make sure to reboot the system to apply the
changes.
If the command outputs Failed to do certain operation, Exiting!!!!, the kdump service is not enabled. Refer
to the section Support issue.

Test Kdump
NOTE
The following operation triggers a kernel crash and a system reboot.

Trigger a kernel crash

echo c > /proc/sysrq-trigger

After the system reboots successfully, check the /var/crash directory for kernel crash logs.
If /var/crash contains a directory with the current date, kdump is successfully enabled.

Support issue
If the script fails with an error or kdump isn't enabled, raise a service request with the Microsoft support team with the
following details:
HLI subscription ID
Server name
OS vendor
OS version
Kernel version

Related Documents
To learn more about configuring kdump, see the documentation from your OS vendor.
Operating System Upgrade

This document describes the details on operating system upgrades on the HANA Large Instances.

NOTE
The OS upgrade is customer's responsibility, Microsoft operations support can guide you to the key areas to watch out
during the upgrade. You should consult your operating system vendor as well before you plan for an upgrade.

NOTE
This article contains references to the term blacklist, a term that Microsoft no longer uses. When the term is removed from
the software, we'll remove it from this article.

During HLI unit provisioning, the Microsoft operations team installs the operating system. Over time, you are
required to maintain the operating system on the HLI unit (for example, patching, tuning, and upgrading).
Before you make major changes to the operating system (for example, upgrading SP1 to SP2), you must contact the
Microsoft operations team by opening a support ticket to consult.
Include in your ticket:
Your HLI subscription ID.
Your server name.
The patch level you are planning to apply.
The date you are planning this change.
We recommend that you open this ticket at least one week prior to the desired upgrade, so that the operations
team knows about the desired firmware version.
For the support matrix of the different SAP HANA versions with the different Linux versions, see SAP Note
#2235581.

Known issues
The following are a few common known issues during the upgrade:
On Type II class SKUs, the software foundation software (SFS) is removed after the OS upgrade. You need to
reinstall the compatible SFS after the OS upgrade.
The Ethernet card drivers (eNIC and fNIC) are rolled back to an older version. You need to reinstall the compatible
version of the drivers after the upgrade.

SAP HANA Large Instance (Type I) recommended configuration


Operating system configuration can drift from the recommended settings over time due to patching, system
upgrades, and changes made by customers. Additionally, Microsoft identifies updates needed for existing systems
to ensure they are optimally configured for the best performance and resiliency. The following instructions outline
recommendations that address network performance, system stability, and optimal HANA performance.
Compatible eNIC/fNIC driver versions
In order to have proper network performance and system stability, make sure that the OS-specific, appropriate
version of the eNIC and fNIC drivers is installed, as depicted in the following compatibility table. Servers are
delivered to customers with compatible versions. In some cases, during OS/kernel patching, the drivers can get rolled
back to the default driver versions. Make sure the appropriate driver version is running after OS/kernel patching
operations.

OS Vendor   OS Package Version   Firmware Version   eNIC Driver   fNIC Driver

SuSE SLES 12 SP2 3.1.3h 2.3.0.40 1.6.0.34

SuSE SLES 12 SP3 3.1.3h 2.3.0.44 1.6.0.36

SuSE SLES 12 SP4 3.2.3i 4.0.0.6 2.0.0.60

SuSE SLES 12 SP2 3.2.3i 2.3.0.45 1.6.0.37

SuSE SLES 12 SP3 3.2.3i 2.3.0.43 1.6.0.36

SuSE SLES 12 SP5 3.2.3i 4.0.0.8 2.0.0.60

Red Hat RHEL 7.2 3.1.3h 2.3.0.39 1.6.0.34

Commands for driver upgrade and to clean old rpm packages


Command to check existing installed drivers

rpm -qa | grep -E "enic|fnic"

Delete existing eNIC/fNIC rpm

rpm -e <old-rpm-package>

Install the recommended eNIC/fNIC driver packages

rpm -ivh <enic/fnic.rpm>

Commands to confirm the installation

modinfo enic
modinfo fnic

Steps for eNIC/fNIC drivers installation during OS Upgrade


Upgrade OS version
Remove old rpm packages
Install compatible eNIC/fNIC drivers as per installed OS version
Reboot system
After reboot, check the eNIC/fNIC version
SuSE HLIs GRUB update failure
SAP HANA on Azure Large Instances (Type I) can be in a non-bootable state after an upgrade. The procedure below
fixes this issue.
Execution Steps
Execute the multipath -ll command.
Get the LUN ID whose size is approximately 50G, or use the command fdisk -l | grep mapper.
Update the /etc/default/grub_installdevice file with the line /dev/mapper/<LUN ID>. Example:
/dev/mapper/3600a09803830372f483f495242534a56

NOTE
LUN ID varies from server to server.
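The execution steps above can be combined roughly as follows. This is a sketch only: verify the detected LUN manually before writing it to the file, because the boot LUN ID differs from server to server, and remove any stale device entry that may already exist in the file.

# list the multipath devices and identify the boot LUN of roughly 50G
multipath -ll
fdisk -l | grep mapper

# add the boot device to the GRUB install device list (replace <LUN ID> with the ID identified above)
echo "/dev/mapper/<LUN ID>" >> /etc/default/grub_installdevice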

Disable EDAC
The Error Detection And Correction (EDAC) module helps in detecting and correcting memory errors. However, the
underlying hardware for SAP HANA on Azure Large Instances (Type I) is already performing the same function.
Having the same feature enabled at the hardware and operating system (OS) levels can cause conflicts and can
lead to occasional, unplanned shutdowns of the server. Therefore, it is recommended to disable the module from
the OS.
Execution Steps
Check whether the EDAC module is enabled. If the command below returns output, the module is
enabled.

lsmod | grep -i edac

Disable the modules by appending the following lines to the file /etc/modprobe.d/blacklist.conf

blacklist sb_edac
blacklist edac_core

A reboot is required for the changes to take effect. After the reboot, execute the lsmod command and verify that the
module is not present in the output.
Kernel parameters
Make sure the correct settings for transparent_hugepage, numa_balancing, processor.max_cstate, ignore_ce, and
intel_idle.max_cstate are applied.

intel_idle.max_cstate=1
processor.max_cstate=1
transparent_hugepage=never
numa_balancing=disable
mce=ignore_ce
Execution Steps
Add these parameters to the GRUB_CMDLINE_LINUX line in the file /etc/default/grub

intel_idle.max_cstate=1 processor.max_cstate=1 transparent_hugepage=never numa_balancing=disable mce=ignore_ce

Create a new grub file.

grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot system.
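After the reboot, you can verify that the parameters are active. A minimal check:

# the boot command line should contain the parameters added above
cat /proc/cmdline

# Transparent Huge Pages should report [never]
cat /sys/kernel/mm/transparent_hugepage/enabled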
Next steps
Refer to Backup and restore for the OS backup of the Type I SKU class.
Refer to OS Backup for Type II SKUs of Revision 3 stamps for the Type II SKU class.
Set up SMT server for SUSE Linux

Large Instances of SAP HANA don't have direct connectivity to the internet. It's not a straightforward process to
register such a unit with the operating system provider, and to download and apply updates. A solution for SUSE
Linux is to set up an SMT server in an Azure virtual machine. Host the virtual machine in an Azure virtual network,
which is connected to the HANA Large Instance. With such an SMT server, the HANA Large Instance unit could
register and download updates.
For more documentation on SUSE, see their Subscription Management Tool for SLES 12 SP2.
Prerequisites for installing an SMT server that fulfills the task for HANA Large Instances are:
An Azure virtual network that is connected to the HANA Large Instance ExpressRoute circuit.
A SUSE account that is associated with an organization. The organization should have a valid SUSE subscription.

Install SMT server on an Azure virtual machine


First, sign in to the SUSE Customer Center.
Go to Organization > Organization Credentials . In that section, you should find the credentials that are
necessary to set up the SMT server.
Then, install a SUSE Linux VM in the Azure virtual network. To deploy the virtual machine, take a SLES 12 SP2
gallery image of Azure (select BYOS SUSE image). In the deployment process, don't define a DNS name, and don't
use static IP addresses.

In this example, the deployed virtual machine is a smaller VM size and received the internal IP address 10.34.1.4 in the
Azure virtual network. The name of the virtual machine is smtserver. After the installation, check the connectivity to the
HANA Large Instance unit or units. Depending on how you organized name resolution, you might need to configure
resolution of the HANA Large Instance units in /etc/hosts of the Azure virtual machine.
Add a disk to the virtual machine to hold the updates; the boot disk itself could be too small.
Here, the disk is mounted to /srv/www/htdocs, as shown in the following screenshot. A 100-GB disk should
suffice.
Sign in to the HANA Large Instance unit or units, maintain /etc/hosts, and check whether you can reach the Azure
virtual machine that is supposed to run the SMT server over the network.
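For example, with the IP address and VM name used in this walkthrough, the /etc/hosts entry on the HANA Large Instance unit and a simple reachability check could look like this:

# Entry in /etc/hosts on the HANA Large Instance unit (example address from this article)
10.34.1.4    smtserver

# Check name resolution and reachability of the SMT virtual machine
ping -c 3 smtserver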
After this check, sign in to the Azure virtual machine that should run the SMT server. If you are using PuTTY to sign
in to the virtual machine, run this sequence of commands in your bash window:

cd ~
echo "export NCURSES_NO_UTF8_ACS=1" >> .bashrc

Restart your bash to activate the settings. Then start YAST.


Connect your VM (smtserver) to the SUSE site.

smtserver:~ # SUSEConnect -r <registration code> -e <email address> --url https://scc.suse.com


Registered SLES_SAP 12.2 x86_64
To server: https://scc.suse.com
Using E-Mail: email address
Successfully registered system.

After the virtual machine is connected to the SUSE site, install the SMT packages. Use the following command
to install them.

smtserver:~ # zypper in smt


Refreshing service 'SUSE_Linux_Enterprise_Server_for_SAP_Applications_12_SP2_x86_64'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...

You can also use the YAST tool to install the smt packages. In YAST, go to Software Maintenance , and search for
smt. Select smt , which switches automatically to yast2-smt.
Accept the selection for installation on the smtserver. After the installation completes, go to the SMT server
configuration. Enter the organizational credentials from the SUSE Customer Center you retrieved earlier. Also enter
your Azure virtual machine hostname as the SMT Server URL. In this demonstration, it's https://smtserver.

Now test whether the connection to the SUSE Customer Center works. As you see in the following screenshot, in
this demonstration case, it did work.
After the SMT setup starts, provide a database password. Because it's a new installation, you should define that
password as shown in the following screenshot.

The next step is to create a certificate.


At the end of the configuration, it might take a few minutes to run the synchronization check. After the installation
and configuration of the SMT server, you should find the directory repo under the mount point /srv/www/htdocs/.
There are also some subdirectories under repo.
Restart the SMT server and its related services with these commands.

rcsmt restart
systemctl restart smt.service
systemctl restart apache2

Download packages onto SMT server


After all the services are restarted, select the appropriate packages in SMT Management by using YAST. The
package selection depends on the operating system image of the HANA Large Instance server. The package
selection doesn't depend on the SLES release or version of the virtual machine running the SMT server. The
following screenshot shows an example of the selection screen.

Next, start the initial copy of the selected packages to the SMT server you set up. This copy is triggered in the shell by
using the command smt-mirror.
The packages should get copied into the directories created under the mount point /srv/www/htdocs. This process
can take an hour or more, depending on how many packages you select. As this process finishes, move to the SMT
client setup.

Set up the SMT client on HANA Large Instance units


The client or clients in this case are the HANA Large Instance units. The SMT server setup copied the script
clientSetup4SMT.sh into the Azure virtual machine. Copy that script over to the HANA Large Instance unit you want
to connect to your SMT server. Start the script with the -h option, and give the name of your SMT server as a
parameter. In this example, the name is smtserver.
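As a sketch, copying and starting the script could look like the following; the source path on the SMT virtual machine is a placeholder and depends on your SMT installation.

# On the SMT virtual machine: copy the client setup script to the HANA Large Instance unit
scp <path-to>/clientSetup4SMT.sh <hli-hostname>:/tmp/

# On the HANA Large Instance unit: run the script against your SMT server
chmod +x /tmp/clientSetup4SMT.sh
/tmp/clientSetup4SMT.sh -h smtserver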

It's possible that the client loads the certificate from the server successfully, but the registration fails, as
shown in the following screenshot.
If the registration fails, see SUSE support document, and run the steps described there.

IMPORTANT
For the server name, provide the name of the virtual machine (in this case, smtserver), without the fully qualified domain
name.

After running these steps, run the following command on the HANA Large Instance unit:

SUSEConnect --cleanup

NOTE
Wait a few minutes after that step. If you run clientSetup4SMT.sh immediately, you might get an error.

If you encounter a problem that you need to fix based on the steps of the SUSE article, restart clientSetup4SMT.sh
on the HANA Large Instance unit. Now it should finish successfully.
You configured the SMT client of the HANA Large Instance unit to connect to the SMT server you installed in the
Azure virtual machine. You can now use 'zypper up' or 'zypper in' to install operating system updates or additional
packages on the HANA Large Instances. You can only get updates that you previously downloaded to the SMT
server.

Next steps
HANA Installation on HLI.
SAP HANA on Azure Large Instance migration to
Azure Virtual Machines
12/22/2020 • 16 minutes to read • Edit Online

This article describes possible Azure Large Instance deployment scenarios and offers a planning and migration
approach that minimizes transition downtime.

Overview
Since the announcement of Azure Large Instances for SAP HANA (HLI) in September 2016, many customers
have adopted this hardware-as-a-service offering for their in-memory compute platform. In recent years, the expansion
of Azure VM sizes, coupled with the support of HANA scale-out deployments, has exceeded most enterprise
customers' ERP database capacity demands. We begin to see customers expressing interest in migrating their SAP
HANA workload from physical servers to Azure VMs. This guide isn't a step-by-step configuration document. It
describes the common deployment models and offers planning and migration advice. The intent is to call out the
necessary preparation considerations to minimize transition downtime.

Assumptions
This article makes the following assumptions:
The only interest considered is a homogeneous HANA database compute service migration from HANA Large
Instance (HLI) to Azure VM without significant software upgrade or patching. These minor updates include the
use of a more recent OS version or HANA version explicitly stated as supported by relevant SAP notes.
All updates/upgrades activities need to be done before or after the migration. For example, SAP HANA MCOS
converting to MDC deployment.
The migration approach that would offer the least downtime is SAP HANA System Replication. Other migration
methods aren't part of the scope of this document.
This guidance is applicable for both Rev3 and Rev4 SKUs of HLI.
HANA deployment architecture remains primarily unchanged during the migration. That is, a system with single
instance DR will stay the same way at the destination.
Customers have reviewed and understood the Service Level Agreement (SLA) of the target (to-be) architecture.
Commercial terms between HLIs and VMs are different. Customers should monitor the usage of their VMs for
cost management.
Customers understand that HLI is a dedicated compute platform while VMs run on shared yet isolated
infrastructure.
Customers have validated that target VMs support your intended architecture. To see all the supported VM
SKUs certified for SAP HANA deployment, see the SAP HANA hardware directory.
Customers have validated the design and migration plan.
Plan for disaster recovery VM along with the primary site. Customers can't use the HLI as the DR node for the
primary site running on VMs after the migration.
Customers copied the required backup files to target VMs, based on business recoverability and compliance
requirements. With VM accessible backups, it allows for point-in-time recovery during the transition period.
For HSR HA, customers need to set up and configure the STONITH device per SAP HANA HA guides for SLES
and RHEL. It’s not preconfigured like the HLI case.
This migration approach doesn't cover the HLI SKUs with Optane configuration.
Deployment scenarios
Common deployment models with HLI customers are summarized in the following table. Migration to Azure VMs
for all HLI scenarios is possible. To benefit from complementary Azure services available, minor architectural
changes may be required.

Scenario ID | HLI scenario | Migrate to VM verbatim? | Remark
1  | Single node with one SID | Yes | -
2  | Single node with MCOS | Yes | -
3  | Single node with DR using storage replication | No | Storage replication is not available with the Azure virtual platform; change the current DR solution to either HSR or backup/restore
4  | Single node with DR (multipurpose) using storage replication | No | Storage replication is not available with the Azure virtual platform; change the current DR solution to either HSR or backup/restore
5  | HSR with STONITH for high availability | Yes | No preconfigured SBD for target VMs. Select and deploy a STONITH solution. Possible options: Azure Fencing Agent (supported for both RHEL and SLES), SBD
6  | HA with HSR, DR with storage replication | No | Replace storage replication for DR needs with either HSR or backup/restore
7  | Host auto failover (1+1) | Yes | Use ANF for shared storage with Azure VMs
8  | Scale-out with standby | Yes | BW/4HANA with M128s, M416s, M416ms VMs using ANF for storage only
9  | Scale-out without standby | Yes | BW/4HANA with M128s, M416s, M416ms VMs (with or without using ANF for storage)
10 | Scale-out with DR using storage replication | No | Replace storage replication for DR needs with either HSR or backup/restore
11 | Single node with DR using HSR | Yes | -
12 | Single node HSR to DR (cost optimized) | Yes | -
13 | HA and DR with HSR | Yes | -
14 | HA and DR with HSR (cost optimized) | Yes | -
15 | Scale-out with DR using HSR | Yes | BW/4HANA with M128s, M416s, M416ms VMs (with or without using ANF for storage)

Source (HLI) planning


When onboarding an HLI server, both Microsoft Service Management and customers went through the planning of
the compute, network, storage, and OS-specific settings for running the SAP HANA database. Similar planning
needs to take place for the migration to Azure VM.
SAP HANA housekeeping
It’s a good operational practice to tidy up the database content so unwanted, outdated data, or stale logs aren't
migrated to the new database. Housekeeping generally involves deleting or archiving of old, expired, or inactive
data. These ‘data hygiene’ actions should be tested in non-production systems to validate their data trim validity
before production usage.
Allow network connectivity for new VMs and, or virtual network
In a customer’s HLI deployment, the network has been set up based on the information described in the article SAP
HANA (Large Instances) network architecture. Also, network traffic routing is done in the manner outlined in the
section ‘Routing in Azure’.
If the new VM set up as the migration target is placed in the existing virtual network with IP address
ranges already permitted to connect to the HLI, no further connectivity update is required.
If the new Azure VM is placed in a new Microsoft Azure Virtual Network, perhaps in another region, and peered
with the existing virtual network, the ExpressRoute service key and Resource ID from the original HLI
provisioning can be used to allow access for this new virtual network IP range. Coordinate with Microsoft Service
Management to enable the virtual network to HLI connectivity. Note: To minimize network latency between the
application and database layers, both the application and database layers must be on the same virtual network.
Existing app layer Availability Set, Availability Zones, and Proximity Placement Group (PPG )
The current deployment model is done to satisfy certain service level objectives. In this move, ensure the target
infrastructure will meet or exceed the set goals.
More likely than not, customers' SAP application servers are placed in an availability set. If the current deployment
service level is satisfactory, then:
If the target VM assumes the hostname of the HLI logical name, updating the domain name service (DNS)
address resolution to point to the VM's IP works without updating any SAP profiles.
If you’re not using PPG, be sure to place all the application and DB servers in the same zone to minimize
network latency.
If you’re using PPG, refer to the section of this document: 'Destination Planning, Availability Set, Availability
Zones, and Proximity Placement Group (PPG)'.
Storage replication discontinuance process (if used)
If storage replication is used as the DR solution, it should be terminated (unscheduled) after the SAP application
has been shut down and after the last SAP HANA catalog, log file, and data backups have been replicated onto
the remote DR HLI storage volumes. Doing so is a precaution in case a disaster happens during the physical server
to Azure VM transition.
Data backups preservation consideration
After the cut-over to SAP HANA on Azure VM, all the snapshot-based data or log backups on the HLI aren't easily
accessible or restorable to a VM if needed. In the early transition period, before the Azure-based backup builds
enough history to satisfy Point-in-Time recovery requirements, we recommend taking file level backups in addition
to snapshots on the HLI, days or weeks before cut-over. Have these backups copied to an Azure Storage account
accessible by the new SAP HANA VM. In addition to backing up the HLI content, it’s prudent to have full backups of
the SAP landscape readily accessible in case a rollback is needed.
Adjusting system monitoring
Customers use many different tools to monitor and send alert notifications for systems within their SAP landscape.
This item is just a call-out for appropriate action to incorporate changes for monitoring and update the alert
notification recipients if needed.
Microsoft Operations team involvement
Open a ticket from the Azure portal based on the existing HLI instance. After the support ticket is created, a support
engineer will contact you via email.
Engage Microsoft account team
Plan migration close to the anniversary renewal time of the HLI contract to minimize unnecessary over expense on
compute resource. To decommission the HLI blade, it’s required to coordinate contract termination and actual shut-
down of the unit.

Destination planning
Standing up a new infrastructure to take the place of an existing one deserves some thinking to ensure the new
addition will fit in the large scheme of things. Below are some key points for contemplation.
Resource availability in the target region
The current SAP application servers' deployment region typically are in close proximity with the associated HLIs.
However, HLIs are offered in fewer locations than available Azure regions. When migrating the physical HLI to
Azure VM, it's also a good time to ‘fine-tune’ the proximity distance of all related services for performance
optimization. While doing so, one key consideration is to ensure the chosen region has all required resources. For
example, the availability of certain VM family or the offering of Azure Zones for high availability setup.
Virtual network
Customers need to choose whether to run the new HANA database in an existing virtual network or to create a
new one. The primary deciding factor is the current networking layout for the SAP landscape. Also when the
infrastructure goes from one-zone to two-zones deployment and uses PPG, it imposes architectural change. For
more information, see the article Azure PPG for optimal network latency with SAP application.
Security
Whether the new SAP HANA VM landing on a new or existing vnet/subnet, it represents a new business critical
service that requires safeguarding. Access control compliant with company info security policy ought to be
evaluated and deployed for this new class of service.
VM sizing recommendation
This migration is also an opportunity to right size your HANA compute engine. One can use HANA system views in
conjunction with HANA Studio to understand the system resource consumption, which allows for right sizing to
drive spending efficiency.
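As an illustration only, current memory consumption per host can be queried through hdbsql against a HANA system view; the connection values below are placeholders, and you should verify view and column names for your HANA revision.

# Sketch: query memory utilization per host (placeholder instance number and credentials)
hdbsql -i <instance-number> -d SYSTEMDB -u SYSTEM -p <password> \
  "SELECT HOST, ROUND(USED_PHYSICAL_MEMORY/1024/1024/1024, 2) AS USED_GB, ROUND(FREE_PHYSICAL_MEMORY/1024/1024/1024, 2) AS FREE_GB FROM M_HOST_RESOURCE_UTILIZATION"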
Storage
Storage performance is one of the factors that impacts the SAP application user experience. For a given VM SKU,
there are minimum storage layouts published in SAP HANA Azure virtual machine storage configurations. We
recommend reviewing these minimum specs and comparing them against the existing HLI system statistics to ensure
adequate I/O capacity and performance for the new HANA VM.
If you configure PPG for the new HANA VM and its associated servers, submit a support ticket to inspect and ensure
the co-location of the storage and the VM. Since your backup solution may need to change, the storage cost should
also be revisited to avoid operational spending surprises.
Storage replication for disaster recovery
With HLI, storage replication was offered as the default option for the disaster recovery. This feature is not the
default option for SAP HANA on Azure VM. Consider HSR, backup/restore or other supported solutions satisfying
your business needs.
Availability Sets, Availability Zones, and Proximity Placement Groups
To shorten distance between the application layer and SAP HANA to keep network latency at a minimum, the new
database VM and the current SAP application servers should be placed in a PPG. Refer to Proximity Placement
Group to learn how Azure Availability Set and Availability Zones work with PPG for SAP deployments. If members
of the target HANA system are deployed in more than one Azure Zone, customers should have a clear view of the
latency profile of the chosen zones. The placement of SAP system components is optimal regarding proximal
distance between SAP application and the database. The public domain Availability zone latency test tool helps
make the measurement easier.
Backup strategy
Many customers are already using third-party backup solutions for SAP HANA on HLI. In that case only an
additional protected VM and HANA databases need to be configured. Ongoing HLI backup jobs can now be
unscheduled if the machine is being decommissioned after the migration. Azure Backup for SAP HANA on VM is
now generally available. See these links for detailed information about: Backup, Restore, Manage SAP HANA
backup in Azure VMs.
DR strategy
If your service level objectives accommodate a longer recovery time, a simple backup to blob storage and restore
in place or restore to a new VM is the simplest and least expensive DR strategy.
Like on the large instance platform, where HANA DR is typically done with HSR, on Azure VMs HSR is also the most
natural and native SAP HANA DR solution. Regardless of whether the source deployment is single-instance or
clustered, a replica of the source infrastructure is required in the DR region. This DR replica will be configured after
the primary HLI to VM migration is complete. The DR HANA DB will register to the primary SAP HANA on VM
instance as a secondary replication site.
SAP application server connectivity destination change
The HSR migration results in a new HANA DB host and hence a new DB hostname for the application layer, so SAP
profiles need to be modified to reflect the new hostname. If the switch is done by name resolution preserving
the hostname, no profile change is required.
Operating system
The operating system images for HLI and VM, despite being on the same release level, SLES 12 SP4 for example,
aren't identical. Customers must validate the required packages, hot fixes, patches, kernel, and security fixes on the
HLI to install the same packages on the target. It's supported to use HSR to replicate from an older OS onto a VM
with a newer OS version. Verify the specific supported versions by reviewing SAP note 2763388.
New SAP license request
Remember to request a new SAP license for the new HANA system now that it's been migrated to VMs.
Service level agreement (SLA ) differences
We'd like to call out the difference in availability SLAs between HLI and Azure VMs. For example, clustered
HLIs HA pairs offer 99.99% availability. To achieve the same SLA, one must deploy VMs in availability zones. This
article describes availability with associated deployment architectures so customers can plan their target
infrastructure accordingly.

Migration strategy
In this document, we cover only the HANA System Replication approach for the migration from HLI to Azure VM.
Depending on the target storage solution deployed, the process differs slightly. The high-level steps are described
below.
VM with premium/ultra-disks for data
For VMs that are deployed with premium or ultra-disks, the standard SAP HANA system replication configuration
is applicable for setting up HSR. The SAP help article provides an overview of the steps involved in setting up
system replication, taking over a secondary system, failing back to the primary, and disabling system replication.
For the purpose of the migration, we will only need the setup, taking over, and disabling replication steps.
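Only as a high-level sketch with placeholder site names, host names, and instance number, the commands involved typically resemble the following; always follow the SAP help article and the documentation for your HANA version for the exact procedure.

# On the HLI (current primary), as <sid>adm: enable system replication
hdbnsutil -sr_enable --name=<primary-site-name>

# On the target Azure VM (secondary), as <sid>adm, with the HANA instance stopped: register against the primary
hdbnsutil -sr_register --remoteHost=<hli-hostname> --remoteInstance=<instance-number> \
  --replicationMode=<sync|syncmem|async> --operationMode=logreplay --name=<secondary-site-name>

# During the cutover window: take over on the Azure VM, then disable replication once the migration is confirmed
hdbnsutil -sr_takeover
hdbnsutil -sr_disable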
VM with ANF for data and log volumes
At a high level, the latest HLI storage snapshots of the full data and log volumes need to be copied to Azure Storage
where they are accessible and recoverable by the target HANA VM. The copy process can be done with any native
Linux copy tools.
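For instance, a copy with a standard tool like rsync could look like the following sketch; host names and volume paths are placeholders and depend on your snapshot layout and the ANF volumes mounted on the target VM.

# Sketch: copy the content of the latest data and log snapshots to the volumes of the target HANA VM
rsync -av --progress /hana/data/<SID>/<latest-snapshot-path>/ <target-vm>:/hana/data/<SID>/
rsync -av --progress /hana/log/<SID>/<latest-snapshot-path>/ <target-vm>:/hana/log/<SID>/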

IMPORTANT
Copying and data transfer can take hours, depending on the HANA database size and network bandwidth. The bulk of the copy
process should be done in advance of the primary HANA DB downtime.

MCOS to MDC Conversion


The Multiple Components in One System (MCOS) deployment model was used by some of our HLI customers. The
motivation was to circumvent the Multiple Databases Container (MDC) storage snapshot limitation of earlier SAP
HANA versions. In the MCOS model, several independent SAP HANA instances are stacked up in one HLI blade.
Using HSR for the migration would work fine but results in multiple HANA VMs with one tenant DB each. This
end state makes for a busier landscape than what a customer might have been accustomed to. With MDC being the
default deployment model for SAP HANA 2.0, a viable alternative is to perform a HANA tenant move after the HSR migration. This
process ‘consolidates’ these independent HANA databases into cotenants in one single HANA container.
Application layer consideration
The DB server is viewed as the center of an SAP system. All application servers should be located near the SAP
HANA DB. In some cases when new use of PPG is desired, relocating of existing application servers onto the PPG
where the HANA VM is may be required. Building new application servers may be deemed easier if you already
have deployment templates handy.
If existing application servers and the new HANA VM are optimally located, no new application servers need to be
built unless additional capacity is required.
If a new infrastructure is built to enhance service availability, the existing application servers may become
unnecessary and should be shut down and deleted. If the target VM hostname changed, and differ from the HLI
hostname, SAP application server profiles need to be adjusted to point to the new host. If only the HANA DB IP
address has changed, a DNS record update is needed to lead incoming connections to the new HANA VM.
Acceptance test
Although the migration from HLI to VM makes no material change to the database content as compared to a
heterogeneous migration, we still recommend validating key functionalities and performance aspect of the new
setup.
Cutover plan
Although this migration is straightforward, it involves the decommissioning of an existing DB. Careful
planning to preserve the source system, with its associated content and backup images, is critical in case fallback is
necessary. Good planning offers a speedier reversal.

Post migration
The migration job is not done until we have safely decoupled any HLI-dependent services or connectivity to ensure
data integrity is preserved. Also, shut down unnecessary services. This section calls out a few top-of-mind items.
Decommissioning the HLI
After a successful migration of the HANA DB to Azure VM, ensure no productive business transactions run on the
HLI DB. However, keeping the HLI running for a period of time equal to its local backup retention window is a safe
practice that ensures speedier recovery if needed. Only then should the HLI blade be decommissioned. Customers
should contractually conclude their HLI commitments with Microsoft by contacting their Microsoft representatives.
Remove any proxy (ex: Iptables, BIGIP) configured for HLI
If a proxy service like the IPTables is used to route on-premises traffic to and from the HLI, it is no longer needed
after the successful migration to VM. However, this connectivity service should be kept for as long as the HLI blade
is still standing-by. Only shut down the service after the HLI blade is fully decommissioned.
Remove Global Reach for HLI
Global Reach is used to connect customers' ExpressRoute gateway with the HLI ExpressRoute gateway. It allows
customers' on-premises traffic to reach the HLI tenant directly without the use of a proxy service. This connection is
no longer needed in absence of the HLI unit after migration. Like the case of the IPTables proxy service,
GlobalReach should also be kept until the HLI blade is fully decommissioned.
Operating system subscription – move/reuse
As the VM servers are stood up and the HLI blades are decommissioned, the OS subscriptions can be replaced or
reused to avoid double paying of OS licenses.

Next steps
See these articles:
SAP HANA infrastructure configurations and operations on Azure.
SAP workloads on Azure: planning and deployment checklist.
Save on SAP HANA Large Instances with an Azure
reservation
11/2/2020 • 5 minutes to read • Edit Online

You can save on your SAP HANA Large Instances (HLI) costs when you pre-purchase Azure reservations for one or
three years. The reservation discount is applied to the provisioned HLI SKU that matches the reserved instance
purchased. This article helps you understand the things you need to know before you buy a reservation and how
to make the purchase.
By purchasing a reservation, you commit to usage of the HLI for one or three years. The HLI reserved capacity
purchase covers the compute and NFS storage that comes bundled with the SKU. The reservation doesn't include
software licensing costs such as the operating system, SAP, or additional storage costs. The reservation discount
automatically applies to the provisioned SAP HLI. When the reservation term ends, pay-as-you-go rates apply to
your provisioned resource.

Purchase considerations
An HLI SKU must be provisioned before going through the reserved capacity purchase. The reservation is paid for
up front or with monthly payments. The following restrictions apply to HLI reserved capacity:
Reservation discounts apply to Enterprise Agreement and Microsoft Customer Agreement subscriptions only.
Other subscriptions aren't supported.
Instance size flexibility isn't supported for HLI reserved capacity. A reservation applies only to the SKU and the
region that you purchase it for.
Self-service cancellation and exchange aren't supported.
The reserved capacity scope is a single scope, so it applies to a single subscription and resource group. The
purchased capacity can't be updated for use by another subscription.
You can't have a shared reservation scope for HANA reserved capacity. You can't split, merge, or update
reservation scope.
You can purchase a single HLI at a time using the reserved capacity API calls. Make additional API calls to buy
additional quantities.
You can purchase reserved capacity in the Azure portal or by using the REST API.

Buy a HANA Large Instance reservation


Use the following information to buy an HLI reservation with the Reservation Order REST APIs.
Get the reservation order and price
First, get the reservation order and price for the provisioned HANA large instance SKU by using the Calculate Price
API.
The following example uses armclient to make REST API calls with PowerShell. Here's what the reservation order
and Calculate Price API request and request body should resemble:
armclient post /providers/Microsoft.Capacity/calculatePrice?api-version=2018-06-01 "{
'sku': {
'name': 'SAP_HANA_On_Azure_S224om'
},
'location': 'eastus',
'properties': {
'reservedResourceType': 'SapHana',
'billingScopeId': '/subscriptions/11111111-1111-1111-111111111111',
'term': 'P1Y',
'quantity': '1',
'displayName': 'testreservation_S224om',
'appliedScopes': ['/subscriptions/11111111-1111-1111-111111111111'],
'appliedScopeType': 'Single',
'instanceFlexibility': 'NotSupported'
}
}"

For more information about data fields and their descriptions, see HLI reservation fields.
The following example resembles the response that is returned. Note the value returned for quoteId.

{
"properties": {
"currencyCode": "USD",
"netTotal": 313219.0,
"taxTotal": 0.0,
"isTaxIncluded": false,
"grandTotal": 313219.0,
"purchaseRequest": {
"sku": {
"name": "SAP_HANA_On_Azure_S224om"
},
"location": "eastus",
"properties": {
"billingScopeId": "/subscriptions/11111111-1111-1111-111111111111",
"term": "P1Y",
"billingPlan": "Upfront",
"quantity": 1,
"displayName": "testreservation_S224om",
"appliedScopes": [
"/subscriptions/11111111-1111-1111-111111111111"
],
"appliedScopeType": "Single",
"reservedResourceType": "SapHana",
"instanceFlexibility": "NotSupported"
}
},
"quoteId": "d0fd3a890795",
"isBillingPartnerManaged": true,
"reservationOrderId": "22222222-2222-2222-2222-222222222222",
"skuTitle": "SAP HANA on Azure Large Instances - S224om - US East",
"skuDescription": "SAP HANA on Azure Large Instances, S224om",
"pricingCurrencyTotal": {
"currencyCode": "USD",
"amount": 313219.0
}
}
}

Make your purchase


Make your purchase using the returned quoteId and the reservationOrderId that you got from the preceding Get
the reservation order and price section.
Here's an example request:

armclient put /providers/Microsoft.Capacity/reservationOrders/22222222-2222-2222-2222-222222222222?api-


version=2018-06-01 "{
'sku': {
'name': 'SAP_HANA_On_Azure_S224om'
},
'location': 'eastus',
'properties': {
'reservedResourceType': 'SapHana',
'billingScopeId': '/subscriptions/11111111-1111-1111-111111111111',
'term': 'P1Y',
'quantity': '1',
'displayName': ' testreservation_S224om',
'appliedScopes': ['/subscriptions/11111111-1111-1111-111111111111/resourcegroups/123'],
'appliedScopeType': 'Single',
'instanceFlexibility': 'NotSupported',
'renew': true,
'quoteId': 'd0fd3a890795'
}
}"

Here's an example response. If the order is placed successfully, the provisioningState should be creating .
{
"id": "/providers/microsoft.capacity/reservationOrders/22222222-2222-2222-2222-222222222222",
"type": "Microsoft.Capacity/reservationOrders",
"name": "22222222-2222-2222-2222-222222222222",
"etag": 1,
"properties": {
"displayName": "testreservation_S224om",
"requestDateTime": "2020-07-14T05:42:34.3528353Z",
"term": "P1Y",
"provisioningState": "Creating",
"reservations": [
{
"sku": {
"name": "SAP_HANA_On_Azure_S224om"
},
"id": "/providers/microsoft.capacity/reservationOrders22222222-2222-2222-2222-
222222222222/reservations/33333333-3333-3333-3333-3333333333333",
"type": "Microsoft.Capacity/reservationOrders/reservations",
"name": "22222222-2222-2222-2222-222222222222/33333333-3333-3333-3333-3333333333333",
"etag": 1,
"location": "eastus”
"properties": {
"appliedScopes": [
"/subscriptions/11111111-1111-1111-111111111111/resourcegroups/123"
],
"appliedScopeType": "Single",
"quantity": 1,
"provisioningState": "Creating",
"displayName": " testreservation_S224om",
"effectiveDateTime": "2020-07-14T05:42:34.3528353Z",
"lastUpdatedDateTime": "2020-07-14T05:42:34.3528353Z",
"reservedResourceType": "SapHana",
"instanceFlexibility": "NotSupported",
"skuDescription": "SAP HANA on Azure Large Instances – S224om - US East",
"renew": true
}
}
],
"originalQuantity": 1,
"billingPlan": "Upfront"
}
}

Verify purchase status success


Run the Reservation order GET request to see the status of the purchase order. provisioningState should be
Succeeded .

armclient get /providers/microsoft.capacity/reservationOrders/22222222-2222-2222-2222-222222222222?api-


version=2018-06-01

The response should resemble the following example.


{
"id": "/providers/microsoft.capacity/reservationOrders/44444444-4444-4444-4444-444444444444",
"type": "Microsoft.Capacity/reservationOrders",
"name": "22222222-2222-2222-2222-222222222222 ",
"etag": 8,
"properties": {
"displayName": "testreservation_S224om",
"requestDateTime": "2020-07-14T05:42:34.3528353Z",
"createdDateTime": "2020-07-14T05:44:47.157579Z",
"expiryDate": "2021-07-14",
"term": "P1Y",
"provisioningState": "Succeeded",
"reservations": [
{
"id": "/providers/microsoft.capacity/reservationOrders/22222222-2222-2222-2222-
222222222222/reservations/33333333-3333-3333-3333-3333333333333"
}
],
"originalQuantity": 1,
"billingPlan": "Upfront"
}
}

HLI reservation fields


The following information explains the meaning of various reservation fields.
SKU - The HLI SKU name. It looks like SAP_HANA_On_Azure_<SKUname>.
Location - Available HLI regions. See SKUs for SAP HANA on Azure (Large Instances) for available regions. To get
the location string format, use the get locations API call.
Reserved Resource type - SapHana
Subscription - The subscription used to pay for the reservation. The payment method on the subscription is
charged the costs for the reservation. The subscription type must be an enterprise agreement (offer numbers: MS-
AZR-0017P or MS-AZR-0148P) or Microsoft Customer Agreement. The charges are deducted from the monetary
commitment balance, if available, or charged as overage.
Scope - The reservation's scope should be single scope.
Term - One year or three years. It looks like P1Y or P3Y.
Quantity - The number of instances being purchased for the reservation. The quantity to purchase is a single HLI at
a time. For additional reservations, repeat the API call with the corresponding fields.

Troubleshoot errors
You might receive an error like the following example when you make a reservation purchase. The possible cause
is that the HLI isn't provisioned for purchase. If so, contact your Microsoft account team to get an HLI provisioned
before you try to make a reservation purchase.

{
"error": {
"code": "BadRequest",
"message": "Capacity check or quota check failed. Please select a different subscription or
location. You can also go to https://aka.ms/corequotaincrease to learn about quota increase."
}
}
Next steps
Learn about How to call Azure REST APIs with Postman and cURL.
See SKUs for SAP HANA on Azure (Large Instances) for the available SKU list and regions.
Installation of SAP HANA on Azure virtual machines
12/22/2020 • 7 minutes to read • Edit Online

Introduction
This guide points you to the right resources to deploy SAP HANA in Azure virtual machines successfully. It lists
the documentation resources that you need to check before installing SAP HANA in an Azure VM, so that you can
perform the right steps and end up with a supported configuration of SAP HANA in Azure VMs.

NOTE
This guide describes deployments of SAP HANA into Azure VMs. For information on how to deploy SAP HANA into HANA
large instances, see How to install and configure SAP HANA (Large Instances) on Azure.

Prerequisites
This guide also assumes that you're familiar with:
SAP HANA and SAP NetWeaver and how to install them on-premises.
How to install and operate SAP HANA and SAP application instances on Azure.
The concepts and procedures documented in:
Planning for SAP deployment on Azure, which includes Azure Virtual Network planning and Azure
Storage usage. See SAP NetWeaver on Azure Virtual Machines - Planning and implementation guide
Deployment principles and ways to deploy VMs in Azure. See Azure Virtual Machines deployment for
SAP
High availability concepts for SAP HANA as documented in SAP HANA high availability for Azure virtual
machines

Step-by-step before deploying


In this section, the different steps are listed that you need to perform before starting with the installation of SAP
HANA in an Azure virtual machine. The order is enumerated and as such should be followed through as
enumerated:
1. Not all possible deployment scenarios are supported on Azure. Therefore, you should check the document SAP
workload on Azure virtual machine supported scenarios for the scenario you have in mind with your SAP
HANA deployment. If the scenario is not listed, you need to assume that it has not been tested and, as a result,
is not supported
2. Assuming that you have a rough idea on your memory requirement for your SAP HANA deployment, you need
to find a fitting Azure VM. Not all the VMs that are certified for SAP NetWeaver, as documented in SAP support
note #1928533, are SAP HANA certified. The source of truth for SAP HANA certified Azure VMs is the website
SAP HANA hardware directory. The units starting with S are HANA Large Instances units and not Azure VMs.
3. Different Azure VM types have different minimum operating system releases for SUSE Linux or Red Hat Linux.
On the website SAP HANA hardware directory, you need to click on an entry in the list of SAP HANA certified
units to get detailed data of this unit. Besides the supported HANA workload, the OS releases that are
supported with those units for SAP HANA are listed
4. As of operating system releases, you need to consider certain minimum kernel releases. These minimum
releases are documented in these SAP support notes:
SAP support note #2814271 SAP HANA Backup fails on Azure with Checksum Error
SAP support note #2753418 Potential Performance Degradation Due to Timer Fallback
SAP support note #2791572 Performance Degradation Because of Missing VDSO Support For Hyper-V
in Azure
5. Based on the OS release that is supported for the virtual machine type of choice, you need to check whether
your desired SAP HANA release is supported with that operating system release. Read SAP support note
#2235581 for a support matrix of SAP HANA releases with the different Operating System releases.
6. As you might have found a valid combination of Azure VM type, operating system release and SAP HANA
release, you need to check in the SAP Product Availability Matrix. In the SAP Availability Matrix, you can find out
whether the SAP product you want to run against your SAP HANA database is supported.

Step-by-step VM deployment and guest OS considerations


In this phase, you need to go through the steps deploying the VM(s) to install HANA and eventually optimize the
chosen operating system after the installation.
1. Choose the base image out of the Azure gallery. If you want to build your own operating system image for
SAP HANA, you need to know all the different packages that are necessary for a successful SAP HANA
installation. Otherwise, we recommend using the SUSE and Red Hat images for SAP or SAP HANA out of
the Azure image gallery. These images include the packages necessary for a successful HANA installation.
Based on your support contract with the operating system provider, choose either an image where
you bring your own license or an OS image that includes support.
2. If you chose a guest OS image that requires you to bring your own license, you need to register the OS
image with your subscription so that you can download and apply the latest patches. This step requires
public internet access, unless you set up your private instance of, for example, an SMT server in
Azure.
3. Decide the network configuration of the VM. You can read more information in the document SAP HANA
infrastructure configurations and operations on Azure. Keep in mind that there are no network throughput
quotas you can assign to virtual network cards in Azure. As a result, the only purpose of directing traffic
through different vNICs is based on security considerations. We trust you to find a supportable
compromise between complexity of traffic routing through multiple vNICs and the requirements enforced
by security aspects.
4. Apply the latest patches to the operating system once the VM is deployed and registered with your own
subscription. If you chose an image that includes operating system support, the VM
should already have access to the patches.
5. Apply the tunes necessary for SAP HANA. These tunes are listed in these SAP support notes:
SAP support note #2694118 - Red Hat Enterprise Linux HA Add-On on Azure
SAP support note #1984787 - SUSE LINUX Enterprise Server 12: Installation notes
SAP support note #2578899 - SUSE Linux Enterprise Server 15: Installation Note
SAP support note #2002167 - Red Hat Enterprise Linux 7.x: Installation and Upgrade
SAP support note #2292690 - SAP HANA DB: Recommended OS settings for RHEL 7
SAP support note #2772999 - Red Hat Enterprise Linux 8.x: Installation and Configuration
SAP support note #2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8
SAP support note #2455582 - Linux: Running SAP applications compiled with GCC 6.x
SAP support note #2382421 - Optimizing the Network Configuration on HANA- and OS-Level
6. Select the Azure storage type for SAP HANA. In this step, you need to decide on storage layout for SAP
HANA installation. You are going to use either attached Azure disks or native Azure NFS shares. The Azure
storage types that are supported, and the combinations of different Azure storage types that can be used, are
documented in SAP HANA Azure virtual machine storage configurations. Take the documented configurations
as a starting point. For non-production systems, you might be able to configure lower
throughput or IOPS. For production purposes, you might need to configure a bit more throughput and
IOPS.
7. Make sure that you configured Azure Write Accelerator for your volumes that contain the DBMS transaction
logs or redo logs when you are using M-Series or Mv2-Series VMs. Be aware of the limitations for Write
Accelerator as documented.
8. Check whether Azure Accelerated Networking is enabled on the VM(s) deployed.
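One way to verify this is with the Azure CLI; the resource names in the following sketch are placeholders.

# Sketch: check whether accelerated networking is enabled on a NIC of the HANA VM
az network nic show --resource-group <resource-group> --name <nic-name> \
  --query enableAcceleratedNetworking --output tsv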

NOTE
Not all the commands in the different sap-tune profiles or as described in the notes might run successfully on Azure.
Commands that would manipulate the power mode of VMs usually return with an error since the power mode of the
underlying Azure host hardware can not be manipulated.

Step-by-step preparations specific to Azure virtual machines


One of the Azure specifics is the installation of an Azure VM extension that delivers monitoring data for the SAP
Host Agent. The details about the installation of this monitoring extension are documented in:
SAP Note 2191498 discusses SAP enhanced monitoring with Linux VMs on Azure
SAP Note 1102124 discusses information about SAPOSCOL on Linux
SAP Note 2178632 discusses key monitoring metrics for SAP on Microsoft Azure
Azure Virtual Machines deployment for SAP NetWeaver

SAP HANA installation


With the Azure virtual machines deployed and the operating systems registered and configured, you can install
SAP HANA according to the SAP install. As a good start to get to this documentation, start with this SAP website
HANA resources
For SAP HANA scale-out configurations using direct attached disks of Azure Premium Storage or Ultra disk, read
the specifics in the document SAP HANA infrastructure configurations and operations on Azure

Additional resources for SAP HANA backup


For information on how to back up SAP HANA databases on Azure VMs, see:
Backup guide for SAP HANA on Azure Virtual Machines
SAP HANA Azure Backup on file level

Next steps
Read the documentation:
SAP HANA infrastructure configurations and operations on Azure
SAP HANA Azure virtual machine storage configurations
Deploy SAP S/4HANA or BW/4HANA on Azure
12/22/2020 • 5 minutes to read • Edit Online

This article describes how to deploy S/4HANA on Azure by using the SAP Cloud Appliance Library (SAP CAL) 3.0.
To deploy other SAP HANA-based solutions, such as BW/4HANA, follow the same steps.

NOTE
For more information about the SAP CAL, go to the SAP Cloud Appliance Library website. SAP also has a blog about the SAP
Cloud Appliance Library 3.0.

NOTE
As of May 29, 2017, you can use the Azure Resource Manager deployment model in addition to the less-preferred classic
deployment model to deploy the SAP CAL. We recommend that you use the new Resource Manager deployment model and
disregard the classic deployment model.

Step-by-step process to deploy the solution


The following sequence of screenshots shows you how to deploy S/4HANA on Azure by using the SAP CAL. The
process works the same way for other solutions, such as BW/4HANA.
The Solutions page shows some of the SAP CAL HANA-based solutions available on Azure. SAP S/4HANA 1610
FPS01, Fully-Activated Appliance is in the middle row:

Create an account in the SAP CAL


1. To sign in to the SAP CAL for the first time, use your SAP S-User or other user registered with SAP. Then
define an SAP CAL account that is used by the SAP CAL to deploy appliances on Azure. In the account
definition, you need to:
a. Select the deployment model on Azure (Resource Manager or classic).
b. Enter your Azure subscription. An SAP CAL account can be assigned to one subscription only. If you need
more than one subscription, you need to create another SAP CAL account.
c. Give the SAP CAL permission to deploy into your Azure subscription.

NOTE
The next steps show how to create an SAP CAL account for Resource Manager deployments. If you already have an
SAP CAL account that is linked to the classic deployment model, you need to follow these steps to create a new SAP
CAL account. The new SAP CAL account needs to deploy in the Resource Manager model.

2. Create a new SAP CAL account. The Accounts page shows three choices for Azure:
a. Microsoft Azure (classic) is the classic deployment model and is no longer preferred.
b. Microsoft Azure is the new Resource Manager deployment model.
c. Windows Azure operated by 21Vianet is an option in China that uses the classic deployment model.
To deploy in the Resource Manager model, select Microsoft Azure .

3. Enter the Azure Subscription ID that can be found on the Azure portal.

4. To authorize the SAP CAL to deploy into the Azure subscription you defined, click Authorize . The following
page appears in the browser tab:
5. If more than one user is listed, choose the Microsoft account that is linked to be the coadministrator of the
Azure subscription you selected. The following page appears in the browser tab:

6. Click Accept . If the authorization is successful, the SAP CAL account definition displays again. After a short
time, a message confirms that the authorization process was successful.
7. To assign the newly created SAP CAL account to your user, enter your User ID in the text box on the right
and click Add .

8. To associate your account with the user that you use to sign in to the SAP CAL, click Review .
9. To create the association between your user and the newly created SAP CAL account, click Create .
You successfully created an SAP CAL account that is able to:
Use the Resource Manager deployment model.
Deploy SAP systems into your Azure subscription.
Now you can start to deploy S/4HANA into your user subscription in Azure.

NOTE
Before you continue, determine whether you have Azure vCPU quotas for Azure H-Series VMs. At the moment, the SAP CAL
uses H-Series VMs of Azure to deploy some of the SAP HANA-based solutions. Your Azure subscription might not have any
H-Series vCPU quota. If so, you might need to contact Azure support to get a quota of at least 16 H-Series
vCPUs.

NOTE
When you deploy a solution on Azure in the SAP CAL, you might find that you can choose only one Azure region. To deploy
into Azure regions other than the one suggested by the SAP CAL, you need to purchase a CAL subscription from SAP. You
also might need to open a message with SAP to have your CAL account enabled to deliver into Azure regions other than the
ones initially suggested.

Deploy a solution
Let's deploy a solution from the Solutions page of the SAP CAL. The SAP CAL has two sequences to deploy:
A basic sequence that uses one page to define the system to be deployed
An advanced sequence that gives you certain choices on VM sizes
We demonstrate the basic path to deployment here.
1. On the Account Details page, you need to:
a. Select an SAP CAL account. (Use an account that is associated to deploy with the Resource Manager
deployment model.)
b. Enter an instance Name .
c. Select an Azure Region . The SAP CAL suggests a region. If you need another Azure region and you don't
have an SAP CAL subscription, you need to order a CAL subscription with SAP.
d. Enter a master Password for the solution of eight or nine characters. The password is used for the
administrators of the different components.

2. Click Create , and in the message box that appears, click OK .

3. In the Private Key dialog box, click Store to store the private key in the SAP CAL. To use password
protection for the private key, click Download .
4. Read the SAP CAL Warning message, and click OK .

Now the deployment takes place. After some time, depending on the size and complexity of the solution (the
SAP CAL provides an estimate), the status is shown as active and ready for use.
5. To find the virtual machines collected with the other associated resources in one resource group, go to the
Azure portal:
6. On the SAP CAL portal, the status appears as Active . To connect to the solution, click Connect . Different
options to connect to the different components are deployed within this solution.

7. Before you can use one of the options to connect to the deployed systems, click Getting Star ted Guide .
The documentation names the users for each of the connectivity methods. The passwords for those users
are set to the master password you defined at the beginning of the deployment process. In the
documentation, other more functional users are listed with their passwords, which you can use to sign in to
the deployed system.
For example, if you use the SAP GUI that's preinstalled on the Windows Remote Desktop machine, the S/4
system might look like this:

Or if you use the DBACockpit, the instance might look like this:
Within a few hours, a healthy SAP S/4 appliance is deployed in Azure.
If you bought an SAP CAL subscription, SAP fully supports deployments through the SAP CAL on Azure. The
support queue is BC-VCM-CAL.
SAP HANA infrastructure configurations and
operations on Azure
12/22/2020 • 20 minutes to read • Edit Online

This document provides guidance for configuring Azure infrastructure and operating SAP HANA systems that are
deployed on Azure native virtual machines (VMs). The document also includes configuration information for SAP
HANA scale-out for the M128s VM SKU. This document is not intended to replace the standard SAP
documentation, which includes the following content:
SAP administration guide
SAP installation guides
SAP notes

Prerequisites
To use this guide, you need basic knowledge of the following Azure components:
Azure virtual machines
Azure networking and virtual networks
Azure Storage
To learn more about SAP NetWeaver and other SAP components on Azure, see the SAP on Azure section of the
Azure documentation.

Basic setup considerations


The following sections describe basic setup considerations for deploying SAP HANA systems on Azure VMs.
Connect into Azure virtual machines
As documented in the Azure virtual machines planning guide, there are two basic methods for connecting into
Azure VMs:
Connect through the internet and public endpoints on a Jump VM or on the VM that is running SAP HANA.
Connect through a VPN or Azure ExpressRoute.
Site-to-site connectivity via VPN or ExpressRoute is necessary for production scenarios. This type of connection is
also needed for non-production scenarios that feed into production scenarios where SAP software is being used.
The following image shows an example of cross-site connectivity:
Choose Azure VM types
The Azure VM types that can be used for production scenarios are listed in the SAP documentation for IAAS. For
non-production scenarios, a wider variety of native Azure VM types is available.

NOTE
For non-production scenarios, use the VM types that are listed in the SAP note #1928533. For the usage of Azure VMs for
production scenarios, check for SAP HANA certified VMs in the SAP published Certified IaaS Platforms list.

Deploy the VMs in Azure by using:


The Azure portal.
Azure PowerShell cmdlets.
The Azure CLI.
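For instance, a minimal Azure CLI deployment sketch could look like the following; the resource names, image URN, and VM size are placeholders that you need to replace with values valid for your subscription and with an SAP HANA certified VM size.

# Sketch only - placeholder names, image URN, and size
az group create --name hana-demo-rg --location westeurope

az vm create \
  --resource-group hana-demo-rg \
  --name hana-vm01 \
  --size Standard_M64s \
  --image <SLES-for-SAP-or-RHEL-for-SAP-image-URN> \
  --vnet-name hana-vnet --subnet hana-db-subnet \
  --admin-username azureuser \
  --generate-ssh-keys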
You also can deploy a complete installed SAP HANA platform on the Azure VM services through the SAP Cloud
platform. The installation process is described in Deploy SAP S/4HANA or BW/4HANA on Azure or with the
automation released here.

IMPORTANT
In order to use M208xx_v2 VMs, you need to be careful selecting your Linux image from the Azure VM image gallery. In
order to read the details, read the article Memory optimized virtual machine sizes.

Storage configuration for SAP HANA


For storage configurations and storage types to be used with SAP HANA in Azure, read the document SAP HANA
Azure virtual machine storage configurations
Set up Azure virtual networks
When you have site-to-site connectivity into Azure via VPN or ExpressRoute, you must have at least one Azure
virtual network that is connected through a Virtual Gateway to the VPN or ExpressRoute circuit. In simple
deployments, the Virtual Gateway can be deployed in a subnet of the Azure virtual network (VNet) that hosts the
SAP HANA instances as well. To install SAP HANA, you create two additional subnets within the Azure virtual
network. One subnet hosts the VMs to run the SAP HANA instances. The other subnet runs Jumpbox or
Management VMs to host SAP HANA Studio, other management software, or your application software.

IMPORTANT
For functionality reasons, but even more so for performance reasons, it is not supported to configure Azure Network
Virtual Appliances in the communication path between the SAP application and the DBMS layer of an SAP NetWeaver, Hybris
or S/4HANA based SAP system. The communication between the SAP application layer and the DBMS layer needs to be a
direct one. The restriction does not include Azure ASG and NSG rules as long as those ASG and NSG rules allow a direct
communication. Further scenarios where NVAs are not supported are in communication paths between Azure VMs that
represent Linux Pacemaker cluster nodes and SBD devices as described in High availability for SAP NetWeaver on Azure
VMs on SUSE Linux Enterprise Server for SAP applications. Or in communication paths between Azure VMs and Windows
Server SOFS set up as described in Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a file share in
Azure. NVAs in communication paths can easily double the network latency between two communication partners, can
restrict throughput in critical paths between the SAP application layer and the DBMS layer. In some scenarios observed with
customers, NVAs can cause Pacemaker Linux clusters to fail in cases where communications between the Linux Pacemaker
cluster nodes need to communicate to their SBD device through an NVA.

IMPORTANT
Another design that is NOT supported is the segregation of the SAP application layer and the DBMS layer into different Azure virtual networks that are not peered with each other. It is recommended to segregate the SAP application layer and DBMS layer using subnets within one Azure virtual network instead of using different Azure virtual networks. If you decide not to follow the recommendation, and instead segregate the two layers into different virtual networks, the two virtual networks need to be peered. Be aware that network traffic between two peered Azure virtual networks is subject to transfer costs. With the huge data volumes of many terabytes exchanged between the SAP application layer and DBMS layer, substantial costs can accumulate if the SAP application layer and DBMS layer are segregated between two peered Azure virtual networks.

When you install the VMs to run SAP HANA, the VMs need:
Two virtual NICs installed: one NIC to connect to the management subnet, and one NIC to connect from the
on-premises network or other networks, to the SAP HANA instance in the Azure VM.
Static private IP addresses that are deployed for both virtual NICs.

NOTE
You should assign static IP addresses through Azure means to individual vNICs. You should not assign static IP addresses
within the guest OS to a vNIC. Some Azure services like Azure Backup Service rely on the fact that at least the primary vNIC
is set to DHCP and not to static IP addresses. See also the document Troubleshoot Azure virtual machine backup. If you
need to assign multiple static IP addresses to a VM, you need to assign multiple vNICs to a VM.
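As a hedged example, a private IP address can be set to a static value on the Azure side with the Azure CLI similar to the following; the resource group, NIC, IP configuration name, and address are placeholders. Specifying an explicit private IP address switches that IP configuration to static allocation:

az network nic ip-config update \
  --resource-group hana-rg \
  --nic-name hana-vm1-nic1 \
  --name ipconfig1 \
  --private-ip-address 10.0.1.10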

However, for enduring deployments, you need to create a virtual datacenter network architecture in Azure. This architecture recommends the separation of the Azure VNet Gateway that connects to on-premises into a separate Azure VNet. This separate VNet should host all the traffic that leaves either to on-premises or to the internet. This approach allows you to deploy software for auditing and logging traffic that enters the virtual datacenter in Azure in this separate hub VNet. So you have one VNet that hosts all the software and configurations that relate to incoming and outgoing traffic for your Azure deployment.
The articles Azure Virtual Datacenter: A Network Perspective and Azure Virtual Datacenter and the Enterprise
Control Plane give more information on the virtual datacenter approach and related Azure VNet design.
NOTE
Traffic that flows between a hub VNet and spoke VNet using Azure VNet peering is subject to additional costs. Based on those costs, you might need to consider making compromises between running a strict hub and spoke network design and running multiple Azure ExpressRoute Gateways that you connect to 'spokes' in order to bypass VNet peering. However, Azure ExpressRoute Gateways introduce additional costs as well. You also may encounter additional costs for third-party software you use for network traffic logging, auditing, and monitoring. Depending on the costs for data exchange through VNet peering on the one side and costs created by additional Azure ExpressRoute Gateways and additional software licenses on the other, you may decide for micro-segmentation within one VNet by using subnets as the isolation unit instead of VNets.

For an overview of the different methods for assigning IP addresses, see IP address types and allocation methods
in Azure.
For VMs running SAP HANA, you should work with static IP addresses assigned. The reason is that some configuration attributes for HANA reference IP addresses.
Azure Network Security Groups (NSGs) are used to direct traffic that's routed to the SAP HANA instance or the jumpbox. The NSGs and, if needed, Application Security Groups are associated with the SAP HANA subnet and the Management subnet.
The following image shows an overview of a rough deployment schema for SAP HANA following a hub and
spoke VNet architecture:

To deploy SAP HANA in Azure without a site-to-site connection, you still want to shield the SAP HANA instance
from the public internet and hide it behind a forward proxy. In this basic scenario, the deployment relies on Azure
built-in DNS services to resolve hostnames. In a more complex deployment where public-facing IP addresses are
used, Azure built-in DNS services are especially important. Use Azure NSGs and Azure NVAs to control and monitor the routing from the internet into your Azure VNet architecture. The following image shows a rough schema for deploying SAP HANA without a site-to-site connection in a hub and spoke VNet architecture:
Another description on how to use Azure NVAs to control and monitor access from Internet without the hub and
spoke VNet architecture can be found in the article Deploy highly available network virtual appliances.

Configuring Azure infrastructure for SAP HANA scale-out


In order to find out the Azure VM types that are certified for either OLAP scale-out or S/4HANA scale-out, check
the SAP HANA hardware directory. A checkmark in the column 'Clustering' indicates scale-out support.
Application type indicates whether OLAP scale-out or S/4HANA scale-out is supported. For details on nodes
certified in scale-out for each of the VMs, check the details of the entries in the particular VM SKU listed in the SAP
HANA hardware directory.
For the minimum OS releases for deploying scale-out configurations in Azure VMs, check the details of the entries in the particular VM SKU listed in the SAP HANA hardware directory. In an n-node OLAP scale-out configuration, one node functions as the master node. The other nodes, up to the limit of the certification, act as worker nodes. Additional standby nodes don't count toward the number of certified nodes.

NOTE
Azure VM scale-out deployments of SAP HANA with standby node are only possible using the Azure NetApp Files storage.
No other SAP HANA certified Azure storage allows the configuration of SAP HANA standby nodes

For /hana/shared, we also recommend the usage of Azure NetApp Files.


A typical basic design for a single node in a scale-out configuration is going to look like:
The basic configuration of a VM node for SAP HANA scale-out looks like:
For /hana/shared , you use the native NFS service provided through Azure NetApp Files.
All other disk volumes are not shared among the different nodes and are not based on NFS. Installation configurations and steps for scale-out HANA installations with non-shared /hana/data and /hana/log are provided later in this document. For HANA certified storage that can be used, check the article SAP HANA Azure virtual machine storage configurations.
For sizing the volumes or disks, check the document SAP HANA TDI Storage Requirements for the size required, dependent on the number of worker nodes. The document provides a formula you need to apply to get the required capacity of the volume.
The other design criterion that is displayed in the graphics of the single node configuration for a scale-out SAP HANA VM is the VNet, or more precisely the subnet configuration. SAP highly recommends a separation of the client/application facing traffic from the communications between the HANA nodes. As shown in the graphics, this goal is achieved by having two different vNICs attached to the VM. Both vNICs are in different subnets and have different IP addresses. You then control the flow of traffic with routing rules using NSGs or user-defined routes.
In Azure, there are no means and methods to enforce quality of service and quotas on specific vNICs. As a result, the separation of client/application facing and intra-node communication does not open any opportunities to prioritize one traffic stream over the other. Instead, the separation remains a measure of security in shielding the intra-node communications of the scale-out configurations.

NOTE
SAP recommends separating network traffic to the client/application side and intra-node traffic as described in this
document. Therefore putting an architecture in place as shown in the last graphics is recommended. Also consult your
security and compliance team for requirements that deviate from the recommendation

From a networking point of view the minimum required network architecture would look like:
Installing SAP HANA scale-out in Azure
To install a scale-out SAP HANA configuration, you perform the rough steps of:
Deploying new or adapting an existing Azure VNet infrastructure
Deploying the new VMs using Azure Managed Premium Storage, Ultra disk volumes, and/or NFS volumes
based on ANF
Adapt network routing to make sure that, for example, intra-node communication between VMs is not
routed through an NVA.
Install the SAP HANA master node.
Adapt configuration parameters of the SAP HANA master node
Continue with the installation of the SAP HANA worker nodes
Installation of SAP HANA in scale-out configuration
As your Azure VM infrastructure is deployed, and all other preparations are done, you need to install the SAP
HANA scale-out configurations in these steps:
Install the SAP HANA master node according to SAP's documentation
If you use Azure Premium Storage or Ultra disk storage with non-shared disks for /hana/data and /hana/log, you need to change the global.ini file and add the parameter 'basepath_shared = no' to the global.ini file (a sketch of the resulting file is shown after this list). This parameter enables SAP HANA to run in scale-out without 'shared' /hana/data and /hana/log volumes between the nodes. Details are documented in SAP Note #2080991. If you are using NFS volumes based on ANF for /hana/data and /hana/log, you don't need to make this change.
After the eventual change in the global.ini parameter, restart the SAP HANA instance
Add additional worker nodes. See also
https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-
US/0d9fe701e2214e98ad4f8721f6558c34.html. Specify the internal network for SAP HANA inter-node
communication during the installation or afterwards using, for example, the local hdblcm. For more detailed
documentation, see also SAP Note #2183363.
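As referenced in the list above, a minimal sketch of the relevant global.ini content for a scale-out installation with non-shared /hana/data and /hana/log could look like the following; the basepath values are examples that depend on your installation paths, and the placement in the persistence section is an assumption to verify against SAP Note #2080991:

[persistence]
# paths of the non-shared data and log volumes (example values)
basepath_datavolumes = /hana/data/<SID>
basepath_logvolumes = /hana/log/<SID>
# allow scale-out without shared data and log volumes
basepath_shared = no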
Details to set up an SAP HANA scale-out system with standby node on SUSE Linux are described in Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server. Equivalent documentation for Red Hat can be found in the article Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on Red Hat Enterprise Linux.

SAP HANA Dynamic Tiering 2.0 for Azure virtual machines


In addition to the SAP HANA certifications on Azure M-series VMs, SAP HANA Dynamic Tiering 2.0 is also
supported on Microsoft Azure (see SAP HANA Dynamic Tiering documentation links further down). While there is
no difference in installing the product or operating it, for example, via SAP HANA Cockpit inside an Azure Virtual
Machine, there are a few important items, which are mandatory for official support on Azure. These key points are
described below. Throughout the article, the abbreviation "DT 2.0" is going to be used instead of the full name
Dynamic Tiering 2.0.
SAP HANA Dynamic Tiering 2.0 isn't supported by SAP BW or S/4HANA. Main use cases right now are native HANA applications.
Overview
The picture below gives an overview regarding DT 2.0 support on Microsoft Azure. There is a set of mandatory requirements, which have to be followed to comply with the official certification:
DT 2.0 must be installed on a dedicated Azure VM. It may not run on the same VM where SAP HANA runs.
SAP HANA and DT 2.0 VMs must be deployed within the same Azure VNet.
The SAP HANA and DT 2.0 VMs must be deployed with Azure accelerated networking enabled.
Storage type for the DT 2.0 VMs must be Azure Premium Storage.
Multiple Azure disks must be attached to the DT 2.0 VM.
It's required to create a software RAID / striped volume (either via LVM or mdadm) using striping across the Azure disks.
More details are going to be explained in the following sections.

Dedicated Azure VM for SAP HANA DT 2.0


On Azure IaaS, DT 2.0 is only supported on a dedicated VM. It is not allowed to run DT 2.0 on the same Azure VM
where the HANA instance is running. Initially two VM types can be used to run SAP HANA DT 2.0:
M64-32ms
E32sv3
See VM type description here
Given the basic idea of DT 2.0, which is about offloading "warm" data in order to save costs, it makes sense to use corresponding VM sizes. There is no strict rule though regarding the possible combinations. It depends on the specific customer workload.
Recommended configurations would be:

SAP HANA VM type | DT 2.0 VM type
M128ms | M64-32ms
M128s | M64-32ms
M64ms | E32sv3
M64s | E32sv3

All combinations of SAP HANA-certified M-series VMs with supported DT 2.0 VMs (M64-32ms and E32sv3) are
possible.
Azure networking and SAP HANA DT 2.0
Installing DT 2.0 on a dedicated VM requires network throughput between the DT 2.0 VM and the SAP HANA VM of at least 10 Gbit/sec. Therefore it's mandatory to place all VMs within the same Azure VNet and enable Azure accelerated networking.
See additional information about Azure accelerated networking here
VM Storage for SAP HANA DT 2.0
According to DT 2.0 best practice guidance, the disk IO throughput should be at least 50 MB/sec per physical core. Looking at the specs of the two Azure VM types that are supported for DT 2.0, the maximum disk IO throughput limits for the VMs look like:
E32sv3 : 768 MB/sec (uncached) which means a ratio of 48 MB/sec per physical core
M64-32ms : 1000 MB/sec (uncached) which means a ratio of 62.5 MB/sec per physical core
It is required to attach multiple Azure disks to the DT 2.0 VM and create a software raid (striping) on OS level to
achieve the max limit of disk throughput per VM. A single Azure disk cannot provide the throughput to reach the
max VM limit in this regard. Azure Premium storage is mandatory to run DT 2.0.
Details about available Azure disk types can be found here
Details about creating software raid via mdadm can be found here
Details about configuring LVM to create a striped volume for max throughput can be found here
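As a hedged example, creating such a striped volume with mdadm could look like the following; the device names, chunk size, and mount point are assumptions that need to be adapted to the actual VM and the DT 2.0 installation paths:

# create a RAID0 (striped) array across four attached premium disks (device names are examples)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=256 /dev/sdc /dev/sdd /dev/sde /dev/sdf
# create a file system and mount it (mount point is an example)
sudo mkfs.xfs /dev/md0
sudo mkdir -p /hana/dt
sudo mount /dev/md0 /hana/dt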
Depending on size requirements, there are different options to reach the max throughput of a VM. Here are
possible data volume disk configurations for every DT 2.0 VM type to achieve the upper VM throughput limit. The
E32sv3 VM should be considered as an entry level for smaller workloads. In case it turns out that it's not fast enough, it might be necessary to resize the VM to M64-32ms. As the M64-32ms VM has a lot of memory, the IO load might not reach the limit, especially for read intensive workloads. Therefore fewer disks in the stripe set might be sufficient, depending on the customer specific workload. But to be on the safe side, the disk configurations below were chosen to guarantee the maximum throughput:

VM SKU | Disk config 1 | Disk config 2 | Disk config 3 | Disk config 4 | Disk config 5
M64-32ms | 4 x P50 -> 16 TB | 4 x P40 -> 8 TB | 5 x P30 -> 5 TB | 7 x P20 -> 3.5 TB | 8 x P15 -> 2 TB
E32sv3 | 3 x P50 -> 12 TB | 3 x P40 -> 6 TB | 4 x P30 -> 4 TB | 5 x P20 -> 2.5 TB | 6 x P15 -> 1.5 TB
Especially in case the workload is read-intense, it could boost IO performance to turn on the Azure host cache "read-only", as recommended for the data volumes of database software. For the transaction log, however, the Azure host disk cache must be "none".
Regarding the size of the log volume a recommended starting point is a heuristic of 15% of the data size. The
creation of the log volume can be accomplished by using different Azure disk types depending on cost and
throughput requirements. For the log volume, high I/O throughput is required. In case of using the VM type M64-
32ms it is mandatory to enable Write Accelerator. Azure Write Accelerator provides optimal disk write latency for
the transaction log (only available for M-series). There are some items to consider though like the maximum
number of disks per VM type. Details about Write Accelerator can be found here
Here are a few examples about sizing the log volume:

Data volume size and disk type | Log volume and disk type config 1 | Log volume and disk type config 2
4 x P50 -> 16 TB | 5 x P20 -> 2.5 TB | 3 x P30 -> 3 TB
6 x P15 -> 1.5 TB | 4 x P6 -> 256 GB | 1 x P15 -> 256 GB

Like for SAP HANA scale-out, the /hana/shared directory has to be shared between the SAP HANA VM and the DT
2.0 VM. The same architecture as for SAP HANA scale-out using dedicated VMs, which act as a highly available
NFS server is recommended. In order to provide a shared backup volume, the identical design can be used. But it
is up to the customer if HA would be necessary or if it is sufficient to just use a dedicated VM with enough storage
capacity to act as a backup server.
Links to DT 2.0 documentation
SAP HANA Dynamic Tiering installation and update guide
SAP HANA Dynamic Tiering tutorials and resources
SAP HANA Dynamic Tiering PoC
SAP HANA 2.0 SPS 02 dynamic tiering enhancements

Operations for deploying SAP HANA on Azure VMs


The following sections describe some of the operations related to deploying SAP HANA systems on Azure VMs.
Back up and restore operations on Azure VMs
The following documents describe how to back up and restore your SAP HANA deployment:
SAP HANA backup overview
SAP HANA file-level backup
SAP HANA storage snapshot benchmark
Start and restart VMs that contain SAP HANA
A prominent feature of the Azure public cloud is that you're charged only for your computing minutes. For
example, when you shut down a VM that is running SAP HANA, you're billed only for the storage costs during that
time. Another feature is available when you specify static IP addresses for your VMs in your initial deployment.
When you restart a VM that has SAP HANA, the VM restarts with its prior IP addresses.
Use SAProuter for SAP remote support
If you have a site-to-site connection between your on-premises locations and Azure, and you're running SAP
components, then you're probably already running SAProuter. In this case, complete the following items for
remote support:
Maintain the private and static IP address of the VM that hosts SAP HANA in the SAProuter configuration.
Configure the NSG of the subnet that hosts the HANA VM to allow traffic through TCP/IP port 3299.
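A hedged example of such an NSG rule with the Azure CLI follows; the resource group, NSG name, rule priority, and SAProuter source address are placeholders:

az network nsg rule create \
  --resource-group hana-rg \
  --nsg-name hana-subnet-nsg \
  --name allow-saprouter \
  --priority 310 \
  --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 3299 \
  --source-address-prefixes <SAProuter-source-IP>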
If you're connecting to Azure through the internet, and you don't have an SAP router for the VM with SAP HANA,
then you need to install the component. Install SAProuter in a separate VM in the Management subnet. The
following image shows a rough schema for deploying SAP HANA without a site-to-site connection and with
SAProuter:

Be sure to install SAProuter in a separate VM and not in your Jumpbox VM. The separate VM must have a static IP
address. To connect your SAProuter to the SAProuter that is hosted by SAP, contact SAP for an IP address. (The
SAProuter that is hosted by SAP is the counterpart of the SAProuter instance that you install on your VM.) Use the
IP address from SAP to configure your SAProuter instance. In the configuration settings, the only necessary port is
TCP port 3299.
For more information on how to set up and maintain remote support connections through SAProuter, see the SAP
documentation.
High-availability with SAP HANA on Azure native VMs
If you're running SUSE Linux Enterprise Server or Red Hat, you can establish a Pacemaker cluster with STONITH
devices. You can use the devices to set up an SAP HANA configuration that uses synchronous replication with
HANA System Replication and automatic failover. For more information, see the articles listed in the 'Next steps' section.

Next Steps
Get familiar with the articles as listed
SAP HANA Azure virtual machine storage configurations
Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE
Linux Enterprise Server
Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on Red
Hat Enterprise Linux
High availability of SAP HANA on Azure VMs on SUSE Linux Enterprise Server
High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux
SAP HANA Azure virtual machine storage
configurations

Azure provides different types of storage that are suitable for Azure VMs that are running SAP HANA. The SAP HANA certified Azure storage types that can be considered for SAP HANA deployments are:
Azure premium SSD or premium storage
Ultra disk
Azure NetApp Files
To learn about these disk types, see the article Azure Storage types for SAP workload and Select a disk type
Azure offers two deployment methods for VHDs on Azure Standard and premium storage. We expect you to
take advantage of Azure managed disk for Azure block storage deployments.
For a list of storage types and their SLAs in IOPS and storage throughput, review the Azure documentation
for managed disks.

IMPORTANT
Independent of the Azure storage type chosen, the file system that is used on that storage needs to be supported by
SAP for the specific operating system and DBMS. SAP support note #2972496 lists the supported file systems for
different operating systems and databases, including SAP HANA. This applies to all volumes SAP HANA might access
for reading and writing for whatever task. Specifically when using NFS on Azure for SAP HANA, additional restrictions on NFS versions apply, as stated later in this article.

The minimum SAP HANA certified conditions for the different storage types are:
Azure premium storage - /hana/log is required to be supported by Azure Write Accelerator. The
/hana/data volume could be placed on premium storage without Azure Write Accelerator or on Ultra
disk
Azure Ultra disk at least for the /hana/log volume. The /hana/data volume can be placed on either premium storage without Azure Write Accelerator or, in order to get faster restart times, on Ultra disk.
NFS v4.1 volumes on top of Azure NetApp Files for /hana/log and /hana/data . The volume for /hana/shared can use the NFS v3 or NFS v4.1 protocol.
Some of the storage types can be combined. For example, it is possible to put /hana/data onto premium storage and /hana/log can be placed on Ultra disk storage in order to get the required low latency. If you use a volume based on ANF for /hana/data , the /hana/log volume needs to be based on NFS on top of ANF as well. Using NFS on top of ANF for one of the volumes (like /hana/data ) and Azure premium storage or Ultra disk for the other volume (like /hana/log ) is not supported.
In the on-premises world, you rarely had to care about the I/O subsystems and their capabilities. The reason was that the appliance vendor needed to make sure that the minimum storage requirements are met for SAP HANA. As you build the Azure infrastructure yourself, you should be aware of some of these SAP issued requirements. Some of the minimum throughput characteristics that SAP recommends are:
Read/write on /hana/log of 250 MB/sec with 1 MB I/O sizes
Read activity of at least 400 MB/sec for /hana/data for 16 MB and 64 MB I/O sizes
Write activity of at least 250 MB/sec for /hana/data with 16 MB and 64 MB I/O sizes
Low storage latency is critical for DBMS systems, even though DBMS, like SAP HANA, keep data in-memory. The critical path in storage is usually around the transaction log writes of the DBMS systems. But also operations like writing savepoints or loading data in-memory after crash recovery can be critical. Therefore, it is mandatory to leverage Azure premium storage, Ultra disk, or ANF for /hana/data and /hana/log volumes.
Some guiding principles in selecting your storage configuration for HANA can be listed like:
Decide on the type of storage based on Azure Storage types for SAP workload and Select a disk type
Keep the overall VM I/O throughput and IOPS limits in mind when sizing or deciding on a VM. Overall VM storage throughput is documented in the article Memory optimized virtual machine sizes.
When deciding on the storage configuration, try to stay below the overall throughput of the VM with your /hana/data volume configuration. When writing savepoints, SAP HANA can be aggressive in issuing I/Os. It is easily possible to push up to the throughput limits of your /hana/data volume when writing a savepoint. If the disk(s) that build the /hana/data volume have a higher throughput than your VM allows, you could run into situations where the throughput utilized by the savepoint writing interferes with the throughput demands of the redo log writes. A situation that can impact the application throughput.
If you are using Azure premium storage, the least expensive configuration is to use logical volume managers to build stripe sets for the /hana/data and /hana/log volumes.

IMPORTANT
The suggestions for the storage configurations are meant as directions to start with. Running workload and analyzing
storage utilization patterns, you might realize that you are not utilizing all the storage bandwidth or IOPS provided.
You might consider downsizing on storage then. Or on the contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS, or throughput. In the field of tension between storage capacity required, storage latency needed, storage throughput and IOPS required, and least expensive configuration, Azure offers enough different storage types with different capabilities and different price points to find and adjust to the right compromise for you and your HANA workload.

Linux I/O Scheduler mode


Linux has several different I/O scheduling modes. The common recommendation through Linux vendors and SAP is to reconfigure the I/O scheduler mode for disk volumes from the mq-deadline or kyber mode to the noop (non-multiqueue) or none (multiqueue) mode. Details are referenced in SAP Note #1984787.
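A minimal sketch of checking and changing the scheduler at runtime follows; the device name is an example, and the persistent setting should be done as described by your Linux vendor (for example, via a udev rule or kernel boot parameter):

# show the current scheduler for the device (the active one is shown in brackets)
cat /sys/block/sdc/queue/scheduler
# switch the scheduler to 'none' on a multiqueue kernel (use 'noop' on non-multiqueue kernels)
echo none | sudo tee /sys/block/sdc/queue/scheduler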

Solutions with premium storage and Azure Write Accelerator for


Azure M-Series virtual machines
Azure Write Accelerator is a functionality that is available for Azure M-Series VMs exclusively. As the name
states, the purpose of the functionality is to improve I/O latency of writes against the Azure premium storage.
For SAP HANA, Write Accelerator is supposed to be used against the /hana/log volume only. Therefore, the
/hana/data and /hana/log are separate volumes with Azure Write Accelerator supporting the /hana/log
volume only.

IMPORTANT
When using Azure premium storage, the usage of Azure Write Accelerator for the /hana/log volume is mandatory.
Write Accelerator is available for premium storage and M-Series and Mv2-Series VMs only. Write Accelerator is not
working in combination with other Azure VM families, like Esv3 or Edsv4.
The caching recommendations for Azure premium disks below are assuming the I/O characteristics for SAP
HANA that list like:
There hardly is any read workload against the HANA data files. Exceptions are large sized I/Os after restart
of the HANA instance or when data is loaded into HANA. Another case of larger read I/Os against data
files can be HANA database backups. As a result read caching mostly does not make sense since in most
of the cases, all data file volumes need to be read completely.
Writing against the data files is experienced in bursts caused by HANA savepoints and HANA crash recovery. Writing savepoints is asynchronous and is not holding up any user transactions. Writing data during crash recovery is performance critical in order to get the system responding fast again. However, crash recovery should be a rather exceptional situation.
There are hardly any reads from the HANA redo files. Exceptions are large I/Os when performing
transaction log backups, crash recovery, or in the restart phase of a HANA instance.
Main load against the SAP HANA redo log file is writes. Dependent on the nature of workload, you can
have I/Os as small as 4 KB or in other cases I/O sizes of 1 MB or more. Write latency against the SAP
HANA redo log is performance critical.
All writes need to be persisted on disk in a reliable fashion
Recommendation: As a result of these observed I/O patterns by SAP HANA, the caching for the different volumes using Azure premium storage should be set like:
/hana/data - no caching or read caching
/hana/log - no caching - exception for M- and Mv2-Series VMs where Azure Write Accelerator should be
enabled
/hana/shared - read caching
OS disk - don't change default caching that is set by Azure at creation time of the VM
If you are using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. These sizes differ between /hana/data and /hana/log . Recommendation: The recommended stripe sizes are:
256 KB for /hana/data
64 KB for /hana/log

NOTE
The stripe size for /hana/data got changed from earlier recommendations calling for 64 KB or 128 KB to 256 KB
based on customer experiences with more recent Linux versions. The size of 256 KB is providing slightly better
performance. We also changed the recommendation for stripe sizes of /hana/log from 32 KB to 64 KB in order to get
enough throughput with larger I/O sizes.
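As a hedged example, striping four premium disks into a /hana/data volume with LVM and the recommended 256 KB stripe size could look like the following; device and volume group names are placeholders, and the same pattern with --stripesize 64 applies for /hana/log:

# create a dedicated volume group for /hana/data (do not use the default or root volume group)
sudo pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo vgcreate vg-hana-data /dev/sdc /dev/sdd /dev/sde /dev/sdf
# stripe across all four disks with a 256 KB stripe size
sudo lvcreate --name lv-hana-data --extents 100%FREE --stripes 4 --stripesize 256 vg-hana-data
sudo mkfs.xfs /dev/vg-hana-data/lv-hana-data
sudo mkdir -p /hana/data
sudo mount /dev/vg-hana-data/lv-hana-data /hana/data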

NOTE
You don't need to configure any redundancy level using RAID volumes since Azure block storage keeps three images
of a VHD. The usage of a stripe set with Azure premium disks is purely to configure volumes that provide sufficient
IOPS and/or I/O throughput.

Accumulating a number of Azure VHDs underneath a stripe set is accumulative from an IOPS and storage throughput side. So, if you put a stripe set across 3 x P30 Azure premium storage disks, it should give you three times the IOPS and three times the storage throughput of a single Azure premium storage P30 disk.
IMPORTANT
In case you are using LVM or mdadm as volume manager to create stripe sets across multiple Azure premium disks,
the three SAP HANA FileSystems /data, /log and /shared must not be put in a default or root volume group. It is
highly recommended to follow the Linux Vendors guidance which is typically to create individual Volume Groups for
/data, /log and /shared.

Azure burst functionality for premium storage


For Azure premium storage disks smaller or equal to 512 GiB in capacity, burst functionality is offered. The
exact way how disk bursting works is described in the article Disk bursting. When you read the article, you
understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the
nominal IOPS and throughput of the disks (for details on the nominal throughput see Managed Disk pricing).
You are going to accrue the delta of IOPS and throughput between your current usage and the nominal
values of the disk. The bursts are limited to a maximum of 30 minutes.
The ideal candidates for planning in this burst functionality are likely the volumes or disks that contain data files for the different DBMS. The I/O workload expected against those volumes, especially with small to mid-ranged systems, is expected to look like:
Low to moderate read workload since data ideally is cached in memory, or like in the case of HANA
should be completely in memory
Bursts of write triggered by database checkpoints or savepoints that are issued on a regular basis
Backup workload that reads in a continuous stream in cases where backups are not executed via storage
snapshots
For SAP HANA, load of the data into memory after an instance restart
Especially on smaller DBMS systems, where your workload is handling only a few hundred transactions per second, such a burst functionality can make sense as well for the disks or volumes that store the transaction or redo log. The expected workload against such a disk or volumes looks like:
Regular writes to the disk that are dependent on the workload and the nature of workload since every
commit issued by the application is likely to trigger an I/O operation
Higher workload in throughput for cases of operational tasks, like creating or rebuilding indexes
Read bursts when performing transaction log or redo log backups
Production recommended storage solution based on Azure premium storage

IMPORTANT
SAP HANA certification for Azure M-Series virtual machines is exclusively with Azure Write Accelerator for the
/hana/log volume. As a result, production scenario SAP HANA deployments on Azure M-Series virtual machines are
expected to be configured with Azure Write Accelerator for the /hana/log volume.

NOTE
In scenarios that involve Azure premium storage, we are implementing burst capabilities into the configuration. As you
are using storage test tools of whatever shape or form, keep the way Azure premium disk bursting works in mind.
Running the storage tests delivered through the SAP HWCCT or HCMT tool, we are not expecting that all tests will pass the criteria, since some of the tests will exceed the bursting credits you can accumulate, especially when all the tests run sequentially without a break.
NOTE
For production scenarios, check whether a certain VM type is supported for SAP HANA by SAP in the SAP
documentation for IAAS.

Recommendation: The recommended configurations with Azure premium storage for


production scenarios look like:
Configuration for SAP /hana/data volume:

VM SKU | RAM | Max. VM I/O throughput | /hana/data | Provisioned throughput | Maximum burst throughput | IOPS | Burst IOPS
M32ts | 192 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000
M32ls | 256 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000
M64ls | 512 GiB | 1,000 MBps | 4 x P10 | 400 MBps | 680 MBps | 2,000 | 14,000
M64s | 1,000 GiB | 1,000 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000
M64ms | 1,750 GiB | 1,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000
M128s | 2,000 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000
M128ms | 3,800 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting
M208s_v2 | 2,850 GiB | 1,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting
M208ms_v2 | 5,700 GiB | 1,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting
M416s_v2 | 5,700 GiB | 2,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting
M416ms_v2 | 11,400 GiB | 2,000 MBps | 4 x P50 | 2,000 MBps | no bursting | 30,000 | no bursting

For the /hana/log volume, the configuration would look like:

VM SKU | RAM | Max. VM I/O throughput | /hana/log volume | Provisioned throughput | Maximum burst throughput | IOPS | Burst IOPS
M32ts | 192 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500
M32ls | 256 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500
M64ls | 512 GiB | 1,000 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500
M64s | 1,000 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M64ms | 1,750 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M128s | 2,000 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M128ms | 3,800 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M208s_v2 | 2,850 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M208ms_v2 | 5,700 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M416s_v2 | 5,700 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M416ms_v2 | 11,400 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500

For the other volumes, the configuration would look like:

VM SKU | RAM | Max. VM I/O throughput | /hana/shared | /root volume | /usr/sap
M32ts | 192 GiB | 500 MBps | 1 x P15 | 1 x P6 | 1 x P6
M32ls | 256 GiB | 500 MBps | 1 x P15 | 1 x P6 | 1 x P6
M64ls | 512 GiB | 1,000 MBps | 1 x P20 | 1 x P6 | 1 x P6
M64s | 1,000 GiB | 1,000 MBps | 1 x P30 | 1 x P6 | 1 x P6
M64ms | 1,750 GiB | 1,000 MBps | 1 x P30 | 1 x P6 | 1 x P6
M128s | 2,000 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6
M128ms | 3,800 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6
M208s_v2 | 2,850 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6
M208ms_v2 | 5,700 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6
M416s_v2 | 5,700 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6
M416ms_v2 | 11,400 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6

Check whether the storage throughput for the different suggested volumes meets the workload that you
want to run. If the workload requires higher volumes for /hana/data and /hana/log , you need to increase
the number of Azure premium storage VHDs. Sizing a volume with more VHDs than listed increases the IOPS
and I/O throughput within the limits of the Azure virtual machine type.
Azure Write Accelerator only works in conjunction with Azure managed disks. So at least the Azure premium
storage disks forming the /hana/log volume need to be deployed as managed disks. More detailed
instructions and restrictions of Azure Write Accelerator can be found in the article Write Accelerator.
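A hedged example of enabling Write Accelerator on an already attached /hana/log disk with the Azure CLI follows; the resource group, VM name, and LUN are assumptions:

# enable Write Accelerator for the managed disk attached at LUN 2 of an M-series VM
az vm update --resource-group hana-rg --name hana-vm1 --write-accelerator 2=true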
For the HANA certified VMs of the Azure Esv3 family and the Edsv4 family, you need to use ANF for the /hana/data and /hana/log volumes. Or you need to leverage Azure Ultra disk storage instead of Azure premium storage only for the /hana/log volume. As a result, the configurations for the /hana/data volume on Azure premium storage could look like:

VM SKU | RAM | Max. VM I/O throughput | /hana/data | Provisioned throughput | Maximum burst throughput | IOPS | Burst IOPS
E20ds_v4 | 160 GiB | 480 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500
E32ds_v4 | 256 GiB | 768 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500
E48ds_v4 | 384 GiB | 1,152 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
E64ds_v4 | 504 GiB | 1,200 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
E64s_v3 | 432 GiB | 1,200 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500

For the other volumes, including /hana/log on Ultra disk, the configuration could look like:

VM SKU | RAM | Max. VM I/O throughput | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS | /hana/shared | /root volume | /usr/sap
E20ds_v4 | 160 GiB | 480 MBps | 80 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6
E32ds_v4 | 256 GiB | 768 MBps | 128 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6
E48ds_v4 | 384 GiB | 1,152 MBps | 192 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6
E64ds_v4 | 504 GiB | 1,200 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6
E64s_v3 | 432 GiB | 1,200 MBps | 220 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6

Azure Ultra disk storage configuration for SAP HANA


Another Azure storage type is called Azure Ultra disk. The significant difference between Azure storage
offered so far and Ultra disk is that the disk capabilities are not bound to the disk size anymore. As a
customer you can define these capabilities for Ultra disk:
Size of a disk ranging from 4 GiB to 65,536 GiB
IOPS range from 100 IOPS to 160K IOPS (maximum depends on VM types as well)
Storage throughput from 300 MB/sec to 2,000 MB/sec
Ultra disk gives you the possibility to define a single disk that fulfills your size, IOPS, and disk throughput range, instead of using logical volume managers like LVM or mdadm on top of Azure premium storage to construct volumes that fulfill IOPS and storage throughput requirements. You can run a configuration mix between Ultra disk and premium storage. As a result, you can limit the usage of Ultra disk to the performance critical /hana/data and /hana/log volumes and cover the other volumes with Azure premium storage.
Other advantages of Ultra disk can be the better read latency in comparison to premium storage. The faster
read latency can have advantages when you want to reduce the HANA startup times and the subsequent load
of the data into memory. Advantages of Ultra disk storage also can be felt when HANA is writing savepoints.
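As a hedged sketch, an Ultra disk for /hana/log with explicitly chosen capacity, IOPS, and throughput could be created like this; the names, zone, and performance values are examples only, and the VM itself must be deployed with Ultra disk support enabled:

az disk create \
  --resource-group hana-rg \
  --name hana-log-ultra \
  --size-gb 512 \
  --sku UltraSSD_LRS \
  --disk-iops-read-write 9000 \
  --disk-mbps-read-write 500 \
  --zone 1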

NOTE
Ultra disk is not yet present in all the Azure regions and is also not yet supporting all VM types listed below. For
detailed information where Ultra disk is available and which VM families are supported, check the article What disk
types are available in Azure?.

Production recommended storage solution with pure Ultra disk configuration


In this configuration, you keep the /hana/data and /hana/log volumes separately. The suggested values are
derived out of the KPIs that SAP has to certify VM types for SAP HANA and storage configurations as
recommended in the SAP TDI Storage Whitepaper.
The recommendations are often exceeding the SAP minimum requirements as stated earlier in this article.
The listed recommendations are a compromise between the size recommendations by SAP and the
maximum storage throughput the different VM types provide.

NOTE
Azure Ultra disk is enforcing a minimum of 2 IOPS per Gigabyte capacity of a disk

VM SKU | RAM | Max. VM I/O throughput | /hana/data volume | /hana/data I/O throughput | /hana/data IOPS | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS
E20ds_v4 | 160 GiB | 480 MBps | 200 GB | 400 MBps | 2,500 | 80 GB | 250 MBps | 1,800
E32ds_v4 | 256 GiB | 768 MBps | 300 GB | 400 MBps | 2,500 | 128 GB | 250 MBps | 1,800
E48ds_v4 | 384 GiB | 1,152 MBps | 460 GB | 400 MBps | 3,000 | 192 GB | 250 MBps | 1,800
E64ds_v4 | 504 GiB | 1,200 MBps | 610 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800
E64s_v3 | 432 GiB | 1,200 MBps | 610 GB | 400 MBps | 3,500 | 220 GB | 250 MBps | 1,800
M32ts | 192 GiB | 500 MBps | 250 GB | 400 MBps | 2,500 | 96 GB | 250 MBps | 1,800
M32ls | 256 GiB | 500 MBps | 300 GB | 400 MBps | 2,500 | 256 GB | 250 MBps | 1,800
M64ls | 512 GiB | 1,000 MBps | 620 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800
M64s | 1,000 GiB | 1,000 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500
M64ms | 1,750 GiB | 1,000 MBps | 2,100 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500
M128s | 2,000 GiB | 2,000 MBps | 2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500
M128ms | 3,800 GiB | 2,000 MBps | 4,800 GB | 750 MBps | 9,600 | 512 GB | 250 MBps | 2,500
M208s_v2 | 2,850 GiB | 1,000 MBps | 3,500 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500
M208ms_v2 | 5,700 GiB | 1,000 MBps | 7,200 GB | 750 MBps | 14,400 | 512 GB | 250 MBps | 2,500
M416s_v2 | 5,700 GiB | 2,000 MBps | 7,200 GB | 1,000 MBps | 14,400 | 512 GB | 400 MBps | 4,000
M416ms_v2 | 11,400 GiB | 2,000 MBps | 14,400 GB | 1,500 MBps | 28,800 | 512 GB | 400 MBps | 4,000

The values listed are intended to be a starting point and need to be evaluated against the real demands. The advantage with Azure Ultra disk is that the values for IOPS and throughput can be adapted without the need to shut down the VM or halting the workload applied to the system.
NOTE
So far, storage snapshots with Ultra disk storage are not available. This blocks the usage of VM snapshots with Azure Backup Services.

NFS v4.1 volumes on Azure NetApp Files


For detail on ANF for HANA, read the document NFS v4.1 volumes on Azure NetApp Files for SAP HANA

Cost conscious solution with Azure premium storage


So far, the Azure premium storage solutions described in this document in the section Solutions with premium storage and Azure Write Accelerator for Azure M-Series virtual machines were meant for SAP HANA production supported scenarios. One of the characteristics of production supportable configurations is the separation of the volumes for SAP HANA data and redo log into two different volumes. The reason for such a separation is that the workload characteristics on the volumes are different. And with the suggested production configurations, different types of caching or even different types of Azure block storage could be necessary. For non-production scenarios, some of the considerations taken for production systems may not apply to more low-end non-production systems. As a result, the HANA data and log volume could be combined, though possibly with some caveats, like not meeting certain throughput or latency KPIs that are required for production systems. Another aspect to reduce costs in such environments can be the usage of Azure Standard SSD storage. Keep in mind that choosing Standard SSD or Standard HDD Azure storage has impact on your single VM SLAs as documented in the article SLA for Virtual Machines.
A less costly alternative for such configurations could look like:

VM SKU | RAM | Max. VM I/O throughput | /hana/data and /hana/log striped with LVM or mdadm | /hana/shared | /root volume | /usr/sap | Comments
DS14v2 | 112 GiB | 768 MBps | 4 x P6 | 1 x E10 | 1 x E6 | 1 x E6 | Will not achieve less than 1 ms storage latency¹
E16v3 | 128 GiB | 384 MBps | 4 x P6 | 1 x E10 | 1 x E6 | 1 x E6 | VM type not HANA certified; will not achieve less than 1 ms storage latency¹
M32ts | 192 GiB | 500 MBps | 3 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 5,000²
E20ds_v4 | 160 GiB | 480 MBps | 4 x P6 | 1 x E15 | 1 x E6 | 1 x E6 | Will not achieve less than 1 ms storage latency¹
E32v3 | 256 GiB | 768 MBps | 4 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | VM type not HANA certified; will not achieve less than 1 ms storage latency¹
E32ds_v4 | 256 GiB | 768 MBps | 4 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | Will not achieve less than 1 ms storage latency¹
M32ls | 256 GiB | 500 MBps | 4 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 5,000²
E48ds_v4 | 384 GiB | 1,152 MBps | 6 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Will not achieve less than 1 ms storage latency¹
E64v3 | 432 GiB | 1,200 MBps | 6 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Will not achieve less than 1 ms storage latency¹
E64ds_v4 | 504 GiB | 1,200 MBps | 7 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Will not achieve less than 1 ms storage latency¹
M64ls | 512 GiB | 1,000 MBps | 7 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000²
M64s | 1,000 GiB | 1,000 MBps | 7 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000²
M64ms | 1,750 GiB | 1,000 MBps | 6 x P20 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000²
M128s | 2,000 GiB | 2,000 MBps | 6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000²
M208s_v2 | 2,850 GiB | 1,000 MBps | 4 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000²
M128ms | 3,800 GiB | 2,000 MBps | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000²
M208ms_v2 | 5,700 GiB | 1,000 MBps | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000²
M416s_v2 | 5,700 GiB | 2,000 MBps | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000²
M416ms_v2 | 11,400 GiB | 2,000 MBps | 7 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000²

¹ Azure Write Accelerator can't be used with the Ev3 and Ev4 VM families. As a result of using Azure premium storage, the I/O latency will not be less than 1 ms.
² The VM family supports Azure Write Accelerator, but there is a potential that the IOPS limit of Write Accelerator could limit the disk configuration's IOPS capabilities.
In the case of combining the data and log volume for SAP HANA, the disks building the striped volume
should not have read cache or read/write cache enabled.
There are VM types listed that are not certified with SAP and as such not listed in the so-called SAP HANA hardware directory. Customer feedback was that those non-listed VM types were used successfully for some non-production tasks.

Next steps
For more information, see:
SAP HANA High Availability guide for Azure virtual machines.
NFS v4.1 volumes on Azure NetApp Files for SAP
HANA

Azure NetApp Files provides native NFS shares that can be used for /hana/shared , /hana/data , and /hana/log
volumes. Using ANF-based NFS shares for the /hana/data and /hana/log volumes requires the usage of the v4.1
NFS protocol. The NFS protocol v3 is not supported for the usage of /hana/data and /hana/log volumes when
basing the shares on ANF.

IMPORTANT
The NFS v3 protocol implemented on Azure NetApp Files is not supported to be used for /hana/data and /hana/log . The
usage of the NFS 4.1 is mandatory for /hana/data and /hana/log volumes from a functional point of view. Whereas for the
/hana/shared volume the NFS v3 or the NFS v4.1 protocol can be used from a functional point of view.

Important considerations
When considering Azure NetApp Files for SAP NetWeaver and SAP HANA, be aware of the following important
considerations:
The minimum capacity pool is 4 TiB
The minimum volume size is 100 GiB
Azure NetApp Files and all virtual machines, where Azure NetApp Files volumes are mounted, must be in the
same Azure Virtual Network or in peered virtual networks in the same region
It is important to have the virtual machines deployed in close proximity to the Azure NetApp storage for low
latency.
The selected virtual network must have a subnet, delegated to Azure NetApp Files
Make sure the latency from the database server to the ANF volume is measured and below 1 millisecond
The throughput of an Azure NetApp volume is a function of the volume quota and Service level, as documented
in Service level for Azure NetApp Files. When sizing the HANA Azure NetApp volumes, make sure the resulting
throughput meets the HANA system requirements
Try to "consolidate" volumes to achieve more performance in a larger volume: for example, use one volume for /sapmnt, /usr/sap/trans, … if possible
Azure NetApp Files offers export policy: you can control the allowed clients, the access type (Read&Write, Read
Only, etc.).
Azure NetApp Files feature isn't zone aware yet. Currently Azure NetApp Files feature isn't deployed in all
Availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions.
The User ID for sidadm and the Group ID for sapsys on the virtual machines must match the configuration in Azure NetApp Files.

IMPORTANT
For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual machines
and the Azure NetApp Files volumes are deployed in close proximity.
IMPORTANT
If there is a mismatch between the User ID for sidadm and the Group ID for sapsys between the virtual machine and the Azure NetApp configuration, the permissions for files on Azure NetApp volumes, mounted to the VM, would be displayed as nobody . Make sure to specify the correct User ID for sidadm and the Group ID for sapsys , when on-boarding a new system to Azure NetApp Files.

Sizing for HANA database on Azure NetApp Files


The throughput of an Azure NetApp volume is a function of the volume size and Service level, as documented in
Service level for Azure NetApp Files.
It is important to understand the performance relationship to the volume size, and that there are physical limits for a LIF (Logical Interface) of the SVM (Storage Virtual Machine).
The table below demonstrates that it could make sense to create a large “Standard” volume to store backups and
that it does not make sense to create a “Ultra” volume larger than 12 TB because the physical bandwidth capacity
of a single LIF would be exceeded.
The maximum throughput for a LIF and a single Linux session is between 1.2 and 1.4 GB/s.

Size | Throughput Standard | Throughput Premium | Throughput Ultra
1 TB | 16 MB/sec | 64 MB/sec | 128 MB/sec
2 TB | 32 MB/sec | 128 MB/sec | 256 MB/sec
4 TB | 64 MB/sec | 256 MB/sec | 512 MB/sec
10 TB | 160 MB/sec | 640 MB/sec | 1,280 MB/sec
15 TB | 240 MB/sec | 960 MB/sec | 1,400 MB/sec
20 TB | 320 MB/sec | 1,280 MB/sec | 1,400 MB/sec
40 TB | 640 MB/sec | 1,400 MB/sec | 1,400 MB/sec

It is important to understand that the data is written to the same SSDs in the storage backend. The performance
quota from the capacity pool was created to be able to manage the environment. The Storage KPIs are equal for all
HANA database sizes. In almost all cases, this assumption does not reflect the reality and the customer expectation.
The size of HANA Systems does not necessarily mean that a small system requires low storage throughput – and a
large system requires high storage throughput. But generally we can expect higher throughput requirements for
larger HANA database instances. As a result of SAP's sizing rules for the underlying hardware, such larger HANA instances also provide more CPU resources and higher parallelism in tasks like loading data after an instance restart. As a result, the volume sizes should be adapted to the customer expectations and requirements, and not only driven by pure capacity requirements.
As you design the infrastructure for SAP in Azure, you should be aware of some minimum storage throughput requirements (for production systems) by SAP, which translate into minimum throughput characteristics of:

Volume type and I/O type | Minimum KPI demanded by SAP | Premium service level | Ultra service level
Log volume write | 250 MB/sec | 4 TB | 2 TB
Data volume write | 250 MB/sec | 4 TB | 2 TB
Data volume read | 400 MB/sec | 6.3 TB | 3.2 TB

Since all three KPIs are demanded, the /hana/data volume needs to be sized toward the larger capacity to fulfill
the minimum read requirements.
For HANA systems that are not requiring high bandwidth, the ANF volume sizes can be smaller. And in case a HANA system requires more throughput, the volume could be adapted by resizing the capacity online. No KPIs are defined for backup volumes. However, the backup volume throughput is essential for a well performing environment. Log and data volume performance must be designed to the customer expectations.

IMPORTANT
Independent of the capacity you deploy on a single NFS volume, the throughput is expected to plateau in the range of 1.2-1.4 GB/sec bandwidth leveraged by a consumer in a virtual machine. This has to do with the underlying architecture of the ANF offer and related Linux session limits around NFS. The performance and throughput numbers as documented in the article Performance benchmark test results for Azure NetApp Files were conducted against one shared NFS volume with multiple client VMs and, as a result, with multiple sessions. That scenario is different from the scenario we measure in SAP, where we measure throughput from a single VM against a single NFS volume hosted on ANF.

To meet the SAP minimum throughput requirements for data and log, and according to the guidelines for
/hana/shared , the recommended sizes would look like:

Volume | Size Premium storage tier | Size Ultra storage tier | Supported NFS protocol
/hana/log/ | 4 TiB | 2 TiB | v4.1
/hana/data | 6.3 TiB | 3.2 TiB | v4.1
/hana/shared scale-up | Min(1 TB, 1 x RAM) | Min(1 TB, 1 x RAM) | v3 or v4.1
/hana/shared scale-out | 1 x RAM of worker node per 4 worker nodes | 1 x RAM of worker node per 4 worker nodes | v3 or v4.1
/hana/logbackup | 3 x RAM | 3 x RAM | v3 or v4.1
/hana/backup | 2 x RAM | 2 x RAM | v3 or v4.1

For all volumes, NFS v4.1 is highly recommended
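A minimal sketch of mounting such an ANF volume with NFS v4.1 from inside the VM follows; the IP address, export path, and mount options are assumptions that need to be aligned with your ANF deployment and the Linux vendor guidance:

sudo mkdir -p /hana/data
# mount the ANF export with the NFS v4.1 protocol (address and export path are examples)
sudo mount -t nfs -o rw,hard,vers=4.1,rsize=262144,wsize=262144,tcp 10.9.0.4:/hana-data-vol /hana/data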


The sizes for the backup volumes are estimations. Exact requirements need to be defined based on workload and
operation processes. For backups, you could consolidate many volumes for different SAP HANA instances to one
(or two) larger volumes, which could have a lower service level of ANF.

NOTE
The Azure NetApp Files, sizing recommendations stated in this document are targeting the minimum requirements SAP
expresses towards their infrastructure providers. In real customer deployments and workload scenarios, that may not be
enough. Use these recommendations as a starting point and adapt, based on the requirements of your specific workload.
Therefore you could consider deploying similar throughput for the ANF volumes as listed for Ultra disk storage already. Also consider the sizes listed for the volumes for the different VM SKUs in the Ultra disk tables.

TIP
You can re-size Azure NetApp Files volumes dynamically, without the need to unmount the volumes, stop the virtual
machines or stop SAP HANA. That allows flexibility to meet your application both expected and unforeseen throughput
demands.
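As a hedged illustration of such an online resize, the Azure CLI can change the volume quota without unmounting it. The resource group, account, pool, and volume names below are placeholders, and the size unit (GiB for --usage-threshold) should be verified against the current az netappfiles documentation:

# Grow an existing ANF volume to 4 TiB (4096 GiB) while it stays mounted
az netappfiles volume update --resource-group my-anf-rg --account-name my-anf-account \
    --pool-name my-capacity-pool --name hn1-data --usage-threshold 4096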

Documentation on how to deploy an SAP HANA scale-out configuration with standby node using NFS v4.1
volumes that are hosted in ANF is published in SAP HANA scale-out with standby node on Azure VMs with Azure
NetApp Files on SUSE Linux Enterprise Server.

Availability
ANF system updates and upgrades are applied without impacting the customer environment. The defined SLA is
99.99%.

Volumes and IP addresses and capacity pools


With ANF, it is important to understand how the underlying infrastructure is built. A capacity pool is only a
structure that makes it simpler to create a billing model for ANF. A capacity pool has no physical relationship to
the underlying infrastructure. If you create a capacity pool, only a shell is created that can be charged, nothing more.
When you create a volume, the first SVM (Storage Virtual Machine) is created on a cluster of several NetApp
systems. A single IP is created for this SVM to access the volume. If you create several volumes, all the volumes are
distributed in this SVM over this multi-controller NetApp cluster. Even if you get only one IP, the data is distributed
over several controllers. ANF has a logic that automatically distributes customer workloads once the volumes
and/or capacity of the configured storage reaches an internal predefined level. You might notice such cases
because a new IP address gets assigned to access the volumes.
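A minimal sketch of this structure with the Azure CLI, assuming an existing NetApp account and a subnet delegated to Microsoft.NetApp/volumes; all names, sizes, and the region are placeholders, and exact parameters should be checked against the az netappfiles reference:

# Create a 4 TiB Premium capacity pool (the billing shell)
az netappfiles pool create --resource-group my-anf-rg --account-name my-anf-account \
    --name my-capacity-pool --location westeurope --size 4 --service-level Premium

# Create an NFSv4.1 volume inside the pool; the volume is what receives an SVM IP and mount path
az netappfiles volume create --resource-group my-anf-rg --account-name my-anf-account \
    --pool-name my-capacity-pool --name hn1-data --location westeurope \
    --service-level Premium --usage-threshold 4096 --file-path "hn1-data" \
    --vnet my-vnet --subnet anf-delegated-subnet --protocol-types NFSv4.1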
Log volume and log backup volume
The “log volume” (/hana/log) is used to write the online redo log. Thus, there are open files located in this volume
and it makes no sense to snapshot this volume. Online redo log files are archived or backed up to the log backup
volume once the online redo log file is full or a redo log backup is executed. To provide reasonable backup
performance, the log backup volume requires good throughput. To optimize storage costs, it can make sense to
consolidate the log backup volumes of multiple HANA instances, so that multiple HANA instances leverage the same
volume and write their backups into different directories. Using such a consolidation, you get more throughput
because you need to make the volume a bit larger.
The same applies to the volume you use to write full HANA database backups to.
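As a hedged sketch of such a consolidation, each HANA instance can point its backup paths to its own directory on the shared backup volume via the basepath parameters in global.ini; the mount path and SIDs below are hypothetical examples, not values from this article:

# global.ini of instance SID1 (a second instance, SID2, would point to .../SID2)
# Assumes the consolidated log backup volume is mounted at /backup
[persistence]
basepath_logbackup = /backup/log/SID1
basepath_catalogbackup = /backup/log/SID1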

Backup
Besides streaming backups and the Azure Backup service backing up SAP HANA databases as described in the article
Backup guide for SAP HANA on Azure Virtual Machines, Azure NetApp Files opens the possibility to perform
storage-based snapshot backups.
SAP HANA supports:
Storage-based snapshot backups from SAP HANA 1.0 SPS7 on
Storage-based snapshot backup support for Multi Database Container (MDC) HANA environments from SAP
HANA 2.0 SPS4 on
Creating storage-based snapshot backups is a simple four-step procedure:
1. Creating a HANA (internal) database snapshot - an activity you or tools need to perform
2. SAP HANA writes data to the datafiles to create a consistent state on the storage - HANA performs this step as a
result of creating a HANA snapshot
3. Create a snapshot on the /hana/data volume on the storage - a step you or tools need to perform. There is no
need to perform a snapshot on the /hana/log volume
4. Delete the HANA (internal) database snapshot and resume normal operation - a step you or tools need to
perform

WARNING
Missing the last step or failing to perform the last step has severe impact on SAP HANA's memory demand and can lead to a
halt of SAP HANA

Step 1, creating the HANA (internal) database snapshot, is executed with a SQL command like:

BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT 'SNAPSHOT-2019-03-18:11:00';

Step 3, creating the storage snapshot on the /hana/data volume, can be performed with the Azure CLI:

az netappfiles snapshot create -g mygroup --account-name myaccname --pool-name mypoolname --volume-name myvolname --name mysnapname

Step 4, confirming the backup and deleting the HANA snapshot, is executed with a SQL command like:

BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID 47110815 SUCCESSFUL 'SNAPSHOT-2020-08-18:11:00';

This snapshot backup procedure can be managed in a variety of ways, using various tools. One example is the
Python script “ntaphana_azure.py” available on GitHub at https://github.com/netapp/ntaphana. This is sample code,
provided “as-is” without any maintenance or support.
Caution

A snapshot in itself is not a protected backup since it is located on the same physical storage as the volume you
just took a snapshot of. It is mandatory to “protect” at least one snapshot per day to a different location. This can be
done in the same environment, in a remote Azure region or on Azure Blob storage.
For users of Commvault backup products, a second option is Commvault IntelliSnap V.11.21 and later, which
offers Azure NetApp Files support. The article Commvault IntelliSnap 11.21 provides more information.
Back up the snapshot using Azure blob storage
Backing up to Azure Blob storage is a cost-effective and fast method to save ANF-based HANA database storage
snapshot backups. To save the snapshots to Azure Blob storage, the azcopy tool is preferred. Download the latest
version of this tool and install it, for example, in the bin directory where the Python script from GitHub is installed.
Download the latest azcopy tool:

root # wget -O azcopy_v10.tar.gz https://aka.ms/downloadazcopy-v10-linux && tar -xf azcopy_v10.tar.gz --strip-components=1

The most advanced feature is the SYNC option. If you use the SYNC option, azcopy keeps the source and the
destination directory synchronized. The usage of the parameter --delete-destination is important. Without this
parameter, azcopy does not delete files at the destination site and the space utilization on the destination side would
grow. Create a Block Blob container in your Azure storage account. Then create the SAS key for the blob container
and synchronize the snapshot folder to the Azure Blob container.
For example, if a daily snapshot should be synchronized to the Azure Blob container to protect the data, and only
that one snapshot should be kept, the following command can be used:

root # azcopy sync '/hana/data/SID/mnt00001/.snapshot' 'https://azacsnaptmytestblob01.blob.core.windows.net/abc?sv=2021-02-02&ss=bfqt&srt=sco&sp=rwdlacup&se=2021-02-04T08:25:26Z&st=2021-02-04T00:25:26Z&spr=https&sig=abcdefghijklmnopqrstuvwxyz' --recursive=true --delete-destination=true

Next steps
Read the article:
SAP HANA high availability for Azure virtual machines
SAP HANA high availability for Azure virtual
machines
12/22/2020 • 2 minutes to read

You can use numerous Azure capabilities to deploy mission-critical databases like SAP HANA on Azure VMs. This
article provides guidance on how to achieve availability for SAP HANA instances that are hosted in Azure VMs. The
article describes several scenarios that you can implement by using the Azure infrastructure to increase availability
of SAP HANA in Azure.

Prerequisites
This article assumes that you are familiar with infrastructure as a service (IaaS) basics in Azure, including:
How to deploy virtual machines or virtual networks via the Azure portal or PowerShell.
Using the Azure cross-platform command-line interface (Azure CLI), including the option to use JavaScript
Object Notation (JSON) templates.
This article also assumes that you are familiar with installing SAP HANA instances, and with administrating and
operating SAP HANA instances. It's especially important to be familiar with the setup and operations of HANA
system replication. This includes tasks like backup and restore for SAP HANA databases.
These articles provide a good overview of using SAP HANA in Azure:
Manual installation of single-instance SAP HANA on Azure VMs
Set up SAP HANA system replication in Azure VMs
Back up SAP HANA on Azure VMs
It's also a good idea to be familiar with these articles about SAP HANA:
High availability for SAP HANA
FAQ: High availability for SAP HANA
Perform system replication for SAP HANA
SAP HANA 2.0 SPS 01 What’s new: High availability
Network recommendations for SAP HANA system replication
SAP HANA system replication
SAP HANA service auto-restart
Configure SAP HANA system replication
Beyond being familiar with deploying VMs in Azure, before you define your availability architecture in Azure, we
recommend that you read Manage the availability of Windows virtual machines in Azure.

Service level agreements for Azure components


Azure has different availability SLAs for different components, like networking, storage, and VMs. All SLAs are
documented. For more information, see Microsoft Azure Service Level Agreements.
SLA for Virtual Machines describes three different SLAs, for three different configurations:
A single VM that uses Azure premium SSDs for the OS disk and all data disks. This option provides a monthly
uptime of 99.9 percent.
Multiple (at least two) VMs that are organized in an Azure availability set. This option provides a monthly uptime
of 99.95 percent.
Multiple (at least two) VMs that are organized across Availability Zones. This option provides a monthly uptime of
99.99 percent.
Measure your availability requirement against the SLAs that Azure components can provide. Then, choose your
scenarios for SAP HANA to achieve your required level of availability.

Next steps
Learn about SAP HANA availability within one Azure region.
Learn about SAP HANA availability across Azure regions.
SAP HANA availability within one Azure region
12/22/2020 • 10 minutes to read

This article describes several availability scenarios within one Azure region. Azure has many regions, spread
throughout the world. For the list of Azure regions, see Azure regions. For deploying SAP HANA on VMs within one
Azure region, Microsoft offers deployment of a single VM with a HANA instance. For increased availability, you can
deploy two VMs with two HANA instances within an Azure availability set that uses HANA system replication for
availability.
Currently, Azure offers Azure Availability Zones. This article does not describe Availability Zones in detail. But, it
includes a general discussion about using Availability Sets versus Availability Zones.
Azure regions where Availability Zones are offered have multiple datacenters. The datacenters are independent in
the supply of power source, cooling, and network. The reason for offering different zones within a single Azure
region is to enable you to deploy applications across two or three of the Availability Zones that are offered. When you
deploy across zones and issues in power and networking affect only one Azure Availability Zone infrastructure, your
application deployment within the Azure region is still functional. Some reduced capacity might occur. For example, VMs in one
zone might be lost, but VMs in the other two zones would still be up and running.
An Azure Availability Set is a logical grouping capability that helps ensure that the VM resources that you place
within the Availability Set are failure-isolated from each other when they are deployed within an Azure datacenter.
Azure ensures that the VMs you place within an Availability Set run across multiple physical servers, compute racks,
storage units, and network switches. In some Azure documentation, this configuration is referred to as placements
in different update and fault domains. These placements usually are within an Azure datacenter. Assuming that
power source and network issues would affect the whole datacenter that you are deployed into, all your capacity in that Azure
region would be affected.
The placement of datacenters that represent Azure Availability Zones is a compromise between delivering
acceptable network latency between services deployed in different zones, and a distance between datacenters.
Natural catastrophes ideally wouldn't affect the power, network supply, and infrastructure for all Availability Zones
in this region. However, as monumental natural catastrophes have shown, Availability Zones might not always
provide the availability that you want within one region. Think about Hurricane Maria that hit the island of Puerto
Rico on September 20, 2017. The hurricane basically caused a nearly 100 percent blackout on the 90-mile-wide
island.

Single-VM scenario
In a single-VM scenario, you create an Azure VM for the SAP HANA instance. You use Azure Premium Storage to
host the operating system disk and all your data disks. The Azure uptime SLA of 99.9 percent and the SLAs of other
Azure components are sufficient for you to fulfill your availability SLAs for your customers. In this scenario, you have
no need to leverage an Azure Availability Set for VMs that run the DBMS layer. In this scenario, you rely on two
different features:
Azure VM auto-restart (also referred to as Azure service healing)
SAP HANA auto-restart
Azure VM auto restart, or service healing, is a functionality in Azure that works on two levels:
The Azure server host checks the health of a VM that's hosted on the server host.
The Azure fabric controller monitors the health and availability of the server host.
A health check functionality monitors the health of every VM that's hosted on an Azure server host. If a VM falls into
a non-healthy state, a reboot of the VM can be initiated by the Azure host agent that checks the health of the VM.
The fabric controller checks the health of the host by checking many different parameters that might indicate issues
with the host hardware. It also checks on the accessibility of the host via the network. An indication of problems
with the host can lead to the following events:
If the host signals a bad health state, a reboot of the host and a restart of the VMs that were running on the host
is triggered.
If the host is not in a healthy state after a successful reboot, a redeployment of the VMs that were originally on the
now unhealthy node onto a healthy host server is initiated. In this case, the original host is marked as not
healthy. It won't be used for further deployments until it's cleared or replaced.
If the unhealthy host has problems during the reboot process, an immediate restart of the VMs on a healthy
host is triggered.
With the host and VM monitoring provided by Azure, Azure VMs that experience host issues are automatically
restarted on a healthy Azure host.

IMPORTANT
Azure service healing will not restart Linux VMs where the guest OS is in a kernel panic state. The default settings of the
commonly used Linux releases do not automatically restart VMs or servers where the Linux kernel is in a panic state. Instead,
the default is to keep the OS in the kernel panic state so that a kernel debugger can be attached for analysis. Azure honors
that behavior by not automatically restarting a VM with the guest OS in such a state. The assumption is that such occurrences
are extremely rare. You could overwrite the default behavior to enable a restart of the VM. To change the default behavior,
set the parameter kernel.panic in /etc/sysctl.conf. The value you set for this parameter is the number of seconds to wait
before triggering the reboot; commonly recommended values are 20-30 seconds. See also
https://gitlab.com/procps-ng/procps/blob/master/sysctl.conf.
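A minimal sketch of that setting, assuming you accept the 20-second value mentioned above (verify the file location against your distribution's conventions):

# /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/)
# Reboot automatically 20 seconds after a kernel panic instead of halting
kernel.panic = 20

# Apply without a reboot
# sudo sysctl -p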

The second feature that you rely on in this scenario is the fact that the HANA service that runs in a restarted VM
starts automatically after the VM reboots. You can set up HANA service auto-restart through the watchdog services
of the various HANA services.
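The watchdog covers restarts of individual HANA services; to have the whole instance start when the VM itself boots (relevant for this single-VM scenario, not for Pacemaker-managed clusters), the Autostart parameter in the instance profile is commonly used. A hedged sketch, reusing the HN1/HDB03 example values from later in this article and a placeholder hostname:

# /usr/sap/HN1/SYS/profile/HN1_HDB03_<hostname>  (hostname is a placeholder)
# Start the HANA instance automatically when sapstartsrv comes up after a VM reboot
Autostart = 1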
You might improve this single-VM scenario by adding a cold failover node to an SAP HANA configuration. In the
SAP HANA documentation, this setup is called host auto-failover. This configuration might make sense in an on-
premises deployment situation where the server hardware is limited, and you dedicate a single-server node as the
host auto-failover node for a set of production hosts. But in Azure, where the underlying infrastructure of Azure
provides a healthy target server for a successful VM restart, it doesn't make sense to deploy SAP HANA host auto-
failover. Because of Azure service healing, there is no reference architecture that foresees a standby node for HANA
host auto-failover.
Special case of SAP HANA scale-out configurations in Azure
High availability for SAP HANA scale-out configurations relies on service healing of Azure VMs and the restart
of the SAP HANA instance as the VM is up and running again. High availability architectures based on HANA
System Replication are going to be introduced at a later time.

Availability scenarios for two different VMs


If you use two Azure VMs that are placed in an Azure Availability Set within one Azure region, you can increase the
uptime of your SAP HANA deployment. The base setup in Azure would look like:
To illustrate the different availability scenarios, a few of the layers in the diagram are omitted. The diagram shows
only layers that depict VMs, hosts, Availability Sets, and Azure regions. Azure Virtual Network instances, resource
groups, and subscriptions don't play a role in the scenarios described in this section.
Replicate backups to a second virtual machine
One of the most rudimentary setups is to use backups. In particular, you might have transaction log backups
shipped from one VM to another Azure VM. You can choose the Azure Storage type. In this setup, you are
responsible for scripting the copy of scheduled backups that are conducted on the first VM to the second VM. If you
need to use the second VM instance, you must restore the full, incremental/differential, and transaction log
backups to the point that you need.
The architecture looks like:
This setup is not well suited to achieving great Recovery Point Objective (RPO) and Recovery Time Objective (RTO)
times. RTO times especially would suffer due to the need to fully restore the complete database by using the copied
backups. However, this setup is useful for recovering from unintended data deletion on the main instances. With
this setup, at any time, you can restore to a certain point in time, extract the data, and import the deleted data into
your main instance. Hence, it might make sense to use a backup copy method in combination with other high-
availability functionality.
While backups are being copied, you might be able to use a smaller VM than the main VM that the SAP HANA
instance is running on. Keep in mind that you can attach a smaller number of VHDs to smaller VMs. For information
about the limits of individual VM types, see Sizes for Linux virtual machines in Azure.
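As a hedged illustration of the scripted backup copy described above, a simple cron-driven rsync from the first VM to the second could look like the following; the paths, target hostname, and schedule are hypothetical placeholders and assume passwordless SSH between the VMs:

# crontab entry on the first VM: push new HANA backup files to the second VM every 15 minutes
*/15 * * * * rsync -az /hana/backup/ hn1-db-copy:/hana/backup/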
SAP HANA system replication without automatic failover
The scenarios described in this section use SAP HANA system replication. For the SAP documentation, see System
replication. Scenarios without automatic failover are not common for configurations within one Azure region. A
configuration without automatic failover, though avoiding a Pacemaker setup, obligates you to monitor and fail over
manually. Since this takes time and effort as well, most customers rely on Azure service healing instead. There
are some edge cases where this configuration might help in terms of failure scenarios. Or, in some cases, a
customer might want to realize more efficiency.
SAP HANA system replication without auto failover and without data preload
In this scenario, you use SAP HANA system replication to move data in a synchronous manner to achieve an RPO of
0. On the other hand, you have a long enough RTO that you don't need either failover or data preloading into the
HANA instance cache. In this case, it's possible to achieve further economy in your configuration by taking the
following actions:
Run another SAP HANA instance in the second VM. The SAP HANA instance in the second VM takes most of the
memory of the virtual machine. In the case of a failover to the second VM, you need to shut down the running SAP
HANA instance that has the data fully loaded in the second VM, so that the replicated data can be loaded into the
cache of the targeted HANA instance in the second VM.
Use a smaller VM size on the second VM. If a failover occurs, you have an additional step before the manual
failover. In this step, you resize the VM to the size of the source VM.
The scenario looks like:
NOTE
Even if you don't use data preload in the HANA system replication target, you need at least 64 GB of memory. You also need
enough memory in addition to 64 GB to keep the rowstore data in the memory of the target instance.

SAP HANA system replication without auto failover and with data preload
In this scenario, data that's replicated to the HANA instance in the second VM is preloaded. This eliminates the two
advantages of not preloading data. In this case, you can't run another SAP HANA system on the second VM. You
also can't use a smaller VM size. Hence, customers rarely implement this scenario.
SAP HANA system replication with automatic failover
In the standard and most common availability configuration within one Azure region, two Azure VMs running SLES
Linux have a failover cluster defined. The SLES Linux cluster is based on the Pacemaker framework, in conjunction
with a STONITH device.
From an SAP HANA perspective, the replication mode that's used is synchronous and an automatic failover is configured.
In the second VM, the SAP HANA instance acts as a hot standby node. The standby node receives a synchronous
stream of change records from the primary SAP HANA instance. As transactions are committed by the application
at the HANA primary node, the primary HANA node waits to confirm the commit to the application until the
secondary SAP HANA node confirms that it received the commit record. SAP HANA offers two synchronous
replication modes. For details and for a description of differences between these two synchronous replication
modes, see the SAP article Replication modes for SAP HANA system replication.
The overall configuration looks like:
You might choose this solution because it enables you to achieve an RPO=0 and a low RTO. Configure the SAP
HANA client connectivity so that the SAP HANA clients use the virtual IP address to connect to the HANA system
replication configuration. Such a configuration eliminates the need to reconfigure the application if a failover to the
secondary node occurs. In this scenario, the Azure VM SKUs for the primary and secondary VMs must be the same.
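A hedged sketch of pointing clients at the virtual IP instead of a node name, using the hdbuserstore tool; the key name, user, and SQL port (here the 30315 used by this article's load-balancing examples) are illustrative and depend on your installation:

# On an application server or client host: store a connection key that targets
# the load balancer front-end IP (10.0.0.13 in this article's example) rather than a physical node
hdbuserstore SET DEFAULT 10.0.0.13:30315 MYCLIENTUSER <password>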

Next steps
For step-by-step guidance on setting up these configurations in Azure, see:
Set up SAP HANA system replication in Azure VMs
High availability for SAP HANA by using system replication
For more information about SAP HANA availability across Azure regions, see:
SAP HANA availability across Azure regions
SAP HANA availability across Azure regions
12/22/2020 • 5 minutes to read

This article describes scenarios related to SAP HANA availability across different Azure regions. Because of the
distance between Azure regions, setting up SAP HANA availability in multiple Azure regions involves special
considerations.

Why deploy across multiple Azure regions


Azure regions often are separated by large distances. Depending on the geopolitical region, the distance between
Azure regions might be hundreds of miles, or even several thousand miles, like in the United States. Because of the
distance, network traffic between assets that are deployed in two different Azure regions experience significant
network roundtrip latency. The latency is significant enough to exclude synchronous data exchange between two
SAP HANA instances under typical SAP workloads.
On the other hand, organizations often have a distance requirement between the location of the primary datacenter
and a secondary datacenter. A distance requirement helps provide availability if a natural disaster occurs in a wider
geographic location. Examples include the hurricanes that hit the Caribbean and Florida in September and October
2017. Your organization might have at least a minimum distance requirement. For most Azure customers, a
minimum distance definition requires you to design for availability across Azure regions. Because the distance
between two Azure regions is too large to use the HANA synchronous replication mode, RTO and RPO
requirements might force you to deploy availability configurations in one region, and then supplement with
additional deployments in a second region.
Another aspect to consider in this scenario is failover and client redirect. The assumption is that a failover between
SAP HANA instances in two different Azure regions always is a manual failover. Because the replication mode of
SAP HANA system replication is set to asynchronous, there's a potential that data committed in the primary HANA
instance hasn't yet made it to the secondary HANA instance. Therefore, automatic failover isn't an option for
configurations where the replication is asynchronous. Even with manually controlled failover, as in a failover
exercise, you need to take measures to ensure that all the committed data on the primary side made it to the
secondary instance before you manually move over to the other Azure region.
The Azure virtual network in the second Azure region uses a different IP address range than the one deployed in the first region.
So, you either need to change the SAP HANA client configuration, or preferably, you need to create steps to change
the name resolution. This way, the clients are redirected to the new secondary site's server IP address. For more
information, see the SAP article Client connection recovery after takeover.
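As a hedged sketch of the name-resolution change, assuming the HANA hostname that clients resolve is hosted in an Azure DNS zone (zone name, record name, and IP addresses below are hypothetical):

# Repoint the A record that clients resolve for the HANA server to the DR-side IP address
az network dns record-set a remove-record -g my-dns-rg -z contoso.internal -n hn1-db --ipv4-address 10.0.0.13
az network dns record-set a add-record -g my-dns-rg -z contoso.internal -n hn1-db --ipv4-address 10.1.0.13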

Simple availability between two Azure regions


You might choose not to put any availability configuration in place within a single region, but still have the demand
to have the workload served if a disaster occurs. Typical cases for such scenarios are nonproduction systems.
Although having the system down for half a day or even a day is sustainable, you can't allow the system to be
unavailable for 48 hours or more. To make the setup less costly, you can run another, even less important system in
the VM that functions as the replication target. You can also size the VM in the secondary region to be smaller,
and choose not to preload the data. Because the failover is manual and entails many more steps to fail over the
complete application stack, the additional time to shut down the VM, resize it, and then restart the VM is acceptable.
If you are using the scenario of sharing the DR target with a QA system in one VM, you need to take these
considerations into account:
There are two operation modes, delta_datashipping and logreplay, which are available for such a scenario
Both operation modes have different memory requirements without preloading data
Delta_datashipping might require drastically less memory without the preload option than logreplay could
require. See chapter 4.3 of the SAP document How To Perform System Replication for SAP HANA
The memory requirement of logreplay operation mode without preload is not deterministic and depends on the
columnstore structures loaded. In extreme cases, you might require 50% of the memory of the primary instance.
The memory requirement for logreplay operation mode is independent of whether you chose to have the data
preloaded or not. A configuration sketch for operation mode and data preload follows this list.
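A hedged sketch of how these choices could be expressed, reusing the HN1/instance 03 naming from later in this article; parameter spellings should be verified against the SAP documentation for your HANA release:

# Register the secondary with asynchronous replication and the logreplay operation mode
# (run as <hanasid>adm on the secondary site)
hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=async \
    --operationMode=logreplay --name=SITE2

# global.ini on the secondary: do not preload column tables, so the shared QA system can use the memory
[system_replication]
preload_column_tables = false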

NOTE
In this configuration, you can't provide an RPO=0 because your HANA system replication mode is asynchronous. If you need
to provide an RPO=0, this configuration isn't the configuration of choice.

A small change that you can make in the configuration might be to configure data preloading. However, given
the manual nature of failover and the fact that application layers also need to move to the second region, it might
not make sense to preload data.

Combine availability within one region and across regions


A combination of availability within and across regions might be driven by these factors:
A requirement of RPO=0 within an Azure region.
The organization isn't willing or able to have global operations affected by a major natural catastrophe that
affects a larger region. This was the case for some hurricanes that hit the Caribbean over the past few years.
Regulations that demand distances between primary and secondary sites that are clearly beyond what Azure
availability zones can provide.
In these cases, you can set up what SAP calls an SAP HANA multitier system replication configuration by using
HANA system replication. The architecture would look like:
SAP introduced multi-target system replication with HANA 2.0 SPS3. Multi-target system replication brings some
advantages in update scenarios. For example, the DR site (Region 2) is not impacted when the secondary HA site is
down for maintenance or updates. You can find out more about HANA multi-target system replication here. Possible
architecture with multi-target replication would look like:

If the organization has requirements for high availability readiness in the second (DR) Azure region, then the
architecture would look like:
Using logreplay as operation mode, this configuration provides an RPO=0, with low RTO, within the primary region.
The configuration also provides decent RPO if a move to the second region is involved. The RTO times in the second
region are dependent on whether data is preloaded. Many customers use the VM in the secondary region to run a
test system. In that use case, the data can't be preloaded.

IMPORTANT
The operation modes between the different tiers need to be homogeneous. You can't use logreplay as operation mode
between tier 1 and tier 2 and delta_datashipping to supply tier 3. You can only choose one operation mode, and it
needs to be consistent for all tiers. Since delta_datashipping is not suitable to give you an RPO=0, the only reasonable
operation mode for such a multi-tier configuration remains logreplay. For details about operation modes and some
restrictions, see the SAP article Operation modes for SAP HANA system replication.

Next steps
For step-by-step guidance on setting up these configurations in Azure, see:
Set up SAP HANA system replication in Azure VMs
High availability for SAP HANA by using system replication
High availability of SAP HANA on Azure VMs on
SUSE Linux Enterprise Server
12/22/2020 • 32 minutes to read

For on-premises development, you can use either HANA System Replication or shared storage to
establish high availability for SAP HANA. On Azure virtual machines (VMs), HANA System Replication on
Azure is currently the only supported high availability function. SAP HANA Replication consists of one
primary node and at least one secondary node. Changes to the data on the primary node are replicated to the
secondary node synchronously or asynchronously.
This article describes how to deploy and configure the virtual machines, install the cluster framework, and
install and configure SAP HANA System Replication. In the example configurations and installation commands,
instance number 03 and HANA System ID HN1 are used.
Read the following SAP Notes and papers first:
SAP Note 1928533, which has:
The list of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
The supported SAP software, and operating system (OS) and database combinations.
The required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists the prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications.
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications.
SAP Note 2178632 has detailed information about all of the monitoring metrics that are reported for SAP
in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Note 401162 has information on how to avoid "address already in use" when setting up HANA
System Replication.
SAP Community WIKI has all of the required SAP Notes for Linux.
SAP HANA Certified IaaS Platforms
Azure Virtual Machines planning and implementation for SAP on Linux guide.
Azure Virtual Machines deployment for SAP on Linux (this article).
Azure Virtual Machines DBMS deployment for SAP on Linux guide.
SUSE Linux Enterprise Server for SAP Applications 12 SP3 best practices guides
Setting up an SAP HANA SR Performance Optimized Infrastructure (SLES for SAP Applications 12
SP1). The guide contains all of the required information to set up SAP HANA System Replication for
on-premises development. Use this guide as a baseline.
Setting up an SAP HANA SR Cost Optimized Infrastructure (SLES for SAP Applications 12 SP1)

Overview
To achieve high availability, SAP HANA is installed on two virtual machines. The data is replicated by using
HANA System Replication.

SAP HANA System Replication setup uses a dedicated virtual hostname and virtual IP addresses. On Azure, a
load balancer is required to use a virtual IP address. The following list shows the configuration of the load
balancer:
Front-end configuration: IP address 10.0.0.13 for hn1-db
Back-end configuration: Connected to primary network interfaces of all virtual machines that should be
part of HANA System Replication
Probe Port: Port 62503
Load-balancing rules: 30313 TCP, 30315 TCP, 30317 TCP
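A hedged Azure CLI sketch of the health probe and an HA-ports load-balancing rule for the configuration listed above, assuming an existing standard internal load balancer; the resource group and load balancer names are placeholders, and parameter names should be checked against your az CLI version:

# Health probe on the port answered by the cluster's azure-lb resource (62503 in this example)
az network lb probe create -g my-rg --lb-name hana-ilb -n hana-hp --protocol tcp --port 62503

# HA-ports rule: protocol All with front-end/back-end port 0 forwards all ports, with floating IP enabled
az network lb rule create -g my-rg --lb-name hana-ilb -n hana-lb \
    --protocol All --frontend-port 0 --backend-port 0 \
    --frontend-ip-name hana-frontend --backend-pool-name hana-backend \
    --probe-name hana-hp --idle-timeout 30 --enable-floating-ip true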

Deploy for Linux


The resource agent for SAP HANA is included in SUSE Linux Enterprise Server for SAP Applications. The Azure
Marketplace contains an image for SUSE Linux Enterprise Server for SAP Applications 12 that you can use to
deploy new virtual machines.
Deploy with a template
You can use one of the quickstart templates that are on GitHub to deploy all the required resources. The
template deploys the virtual machines, the load balancer, the availability set, and so on. To deploy the
template, follow these steps:
1. Open the database template or the converged template on the Azure portal. The database template
creates the load-balancing rules for a database only. The converged template also creates the load-
balancing rules for an ASCS/SCS and ERS (Linux only) instance. If you plan to install an SAP
NetWeaver-based system and you want to install the ASCS/SCS instance on the same machines, use
the converged template.
2. Enter the following parameters:
Sap System ID : Enter the SAP system ID of the SAP system you want to install. The ID is used as a
prefix for the resources that are deployed.
Stack Type : (This parameter is applicable only if you use the converged template.) Select the SAP
NetWeaver stack type.
Os Type : Select one of the Linux distributions. For this example, select SLES 12 .
Db Type : Select HANA .
Sap System Size : Enter the number of SAPS that the new system is going to provide. If you're not
sure how many SAPS the system requires, ask your SAP Technology Partner or System Integrator.
System Availability : Select HA .
Admin Username and Admin Password : A new user is created that can be used to sign in to the
machine.
New Or Existing Subnet : Determines whether a new virtual network and subnet should be
created or an existing subnet used. If you already have a virtual network that's connected to your
on-premises network, select Existing .
Subnet ID : If you want to deploy the VM into an existing VNet where you have a subnet defined
the VM should be assigned to, name the ID of that specific subnet. The ID usually looks like
/subscriptions/<subscription ID>/resourceGroups/<resource group
name>/providers/Microsoft.Network/vir tualNetworks/<vir tual network
name>/subnets/<subnet name> .
Manual deployment

IMPORTANT
Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types you are using. The list of SAP
HANA certified VM types and OS releases for those can be looked up in SAP HANA Certified IaaS Platforms. Make sure
to click into the details of the VM type listed to get the complete list of SAP HANA supported OS releases for the
specific VM type

1. Create a resource group.


2. Create a virtual network.
3. Create an availability set.
Set the max update domain.
4. Create a load balancer (internal). We recommend standard load balancer.
Select the virtual network created in step 2.
5. Create virtual machine 1.
Use a SLES4SAP image in the Azure gallery that is supported for SAP HANA on the VM type you
selected.
Select the availability set created in step 3.
6. Create virtual machine 2.
Use a SLES4SAP image in the Azure gallery that is supported for SAP HANA on the VM type you
selected.
Select the availability set created in step 3.
7. Add data disks.

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure
Load balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow
routing to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for
Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.

1. If using standard load balancer, follow these configuration steps:


a. First, create a front-end IP pool:
a. Open the load balancer, select frontend IP pool , and select Add .
b. Enter the name of the new front-end IP pool (for example, hana-frontend ).
c. Set the Assignment to Static and enter the IP address (for example, 10.0.0.13 ).
d. Select OK .
e. After the new front-end IP pool is created, note the pool IP address.
b. Next, create a back-end pool:
a. Open the load balancer, select backend pools , and select Add .
b. Enter the name of the new back-end pool (for example, hana-backend ).
c. Select Virtual Network.
d. Select Add a virtual machine.
e. Select Virtual machine.
f. Select the virtual machines of the SAP HANA cluster and their IP addresses.
g. Select Add .
c. Next, create a health probe:
a. Open the load balancer, select health probes , and select Add .
b. Enter the name of the new health probe (for example, hana-hp ).
c. Select TCP as the protocol and port 62503. Keep the Interval value set to 5, and the
Unhealthy threshold value set to 2.
d. Select OK .
d. Next, create the load-balancing rules:
a. Open the load balancer, select load balancing rules , and select Add .
b. Enter the name of the new load balancer rule (for example, hana-lb ).
c. Select the front-end IP address, the back-end pool, and the health probe that you created
earlier (for example, hana-frontend , hana-backend and hana-hp ).
d. Select HA Ports.
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP .
g. Select OK .
2. Alternatively, if your scenario dictates using basic load balancer, follow these configuration steps:
a. First, create a front-end IP pool:
a. Open the load balancer, select frontend IP pool , and select Add .
b. Enter the name of the new front-end IP pool (for example, hana-frontend ).
c. Set the Assignment to Static and enter the IP address (for example, 10.0.0.13 ).
d. Select OK .
e. After the new front-end IP pool is created, note the pool IP address.
b. Next, create a back-end pool:
a. Open the load balancer, select backend pools , and select Add .
b. Enter the name of the new back-end pool (for example, hana-backend ).
c. Select Add a virtual machine.
d. Select the availability set created in step 3.
e. Select the virtual machines of the SAP HANA cluster.
f. Select OK .
c. Next, create a health probe:
a. Open the load balancer, select health probes , and select Add .
b. Enter the name of the new health probe (for example, hana-hp ).
c. Select TCP as the protocol and port 62503. Keep the Interval value set to 5, and the
Unhealthy threshold value set to 2.
d. Select OK .
d. For SAP HANA 1.0, create the load-balancing rules:
a. Open the load balancer, select load balancing rules , and select Add .
b. Enter the name of the new load balancer rule (for example, hana-lb-30315 ).
c. Select the front-end IP address, the back-end pool, and the health probe that you created
earlier (for example, hana-frontend ).
d. Keep the Protocol set to TCP , and enter port 30315 .
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP .
g. Select OK .
h. Repeat these steps for port 30317 .
e. For SAP HANA 2.0, create the load-balancing rules for the system database:
a. Open the load balancer, select load balancing rules , and select Add .
b. Enter the name of the new load balancer rule (for example, hana-lb-30313 ).
c. Select the front-end IP address, the back-end pool, and the health probe that you created
earlier (for example, hana-frontend ).
d. Keep the Protocol set to TCP , and enter port 30313 .
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP .
g. Select OK .
h. Repeat these steps for port 30314 .
f. For SAP HANA 2.0, first create the load-balancing rules for the tenant database:
a. Open the load balancer, select load balancing rules , and select Add .
b. Enter the name of the new load balancer rule (for example, hana-lb-30340 ).
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example, hana-frontend ).
d. Keep the Protocol set to TCP , and enter port 30340 .
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP .
g. Select OK .
h. Repeat these steps for ports 30341 and 30342 .
For more information about the required ports for SAP HANA, read the chapter Connections to Tenant
Databases in the SAP HANA Tenant Databases guide or SAP Note 2388694.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause
the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
See also SAP note 2382421.
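A minimal sketch of that setting (the file location should follow your distribution's conventions):

# /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/)
# Disable TCP timestamps so the Azure Load Balancer health probes keep working
net.ipv4.tcp_timestamps = 0

# Apply without a reboot
# sudo sysctl -p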

Create a Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to create a basic
Pacemaker cluster for this HANA server. You can use the same Pacemaker cluster for SAP HANA and SAP
NetWeaver (A)SCS.

Install SAP HANA


The steps in this section use the following prefixes:
[A] : The step applies to all nodes.
[1] : The step applies to node 1 only.
[2] : The step applies to node 2 of the Pacemaker cluster only.
1. [A] Set up the disk layout: Logical Volume Manager (LVM) .
We recommend that you use LVM for volumes that store data and log files. The following example
assumes that the virtual machines have four data disks attached that are used to create two volumes.
List all of the available disks:

ls /dev/disk/azure/scsi1/lun*

Example output:

/dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1 /dev/disk/azure/scsi1/lun2


/dev/disk/azure/scsi1/lun3

Create physical volumes for all of the disks that you want to use:

sudo pvcreate /dev/disk/azure/scsi1/lun0


sudo pvcreate /dev/disk/azure/scsi1/lun1
sudo pvcreate /dev/disk/azure/scsi1/lun2
sudo pvcreate /dev/disk/azure/scsi1/lun3

Create a volume group for the data files. Use one volume group for the log files and one for the shared
directory of SAP HANA:

sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1


sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2
sudo vgcreate vg_hana_shared_HN1 /dev/disk/azure/scsi1/lun3

Create the logical volumes. A linear volume is created when you use lvcreate without the -i switch.
We suggest that you create a striped volume for better I/O performance, and align the stripe sizes to
the values documented in SAP HANA VM storage configurations. The -i argument should be the
number of the underlying physical volumes and the -I argument is the stripe size. In this document,
two physical volumes are used for the data volume, so the -i switch argument is set to 2 . The stripe
size for the data volume is 256KiB . One physical volume is used for the log volume, so no -i or -I
switches are explicitly used for the log volume commands.

IMPORTANT
Use the -i switch and set it to the number of the underlying physical volume when you use more than one
physical volume for each data, log, or shared volumes. Use the -I switch to specify the stripe size, when
creating a striped volume.
See SAP HANA VM storage configurations for recommended storage configurations, including stripe sizes and
number of disks.

sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1


sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
sudo lvcreate -l 100%FREE -n hana_shared vg_hana_shared_HN1
sudo mkfs.xfs /dev/vg_hana_data_HN1/hana_data
sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log
sudo mkfs.xfs /dev/vg_hana_shared_HN1/hana_shared

Create the mount directories and copy the UUID of all of the logical volumes:

sudo mkdir -p /hana/data/HN1


sudo mkdir -p /hana/log/HN1
sudo mkdir -p /hana/shared/HN1
# Write down the ID of /dev/vg_hana_data_HN1/hana_data, /dev/vg_hana_log_HN1/hana_log, and
/dev/vg_hana_shared_HN1/hana_shared
sudo blkid

Create fstab entries for the three logical volumes:

sudo vi /etc/fstab

Insert the following line in the /etc/fstab file:

/dev/disk/by-uuid/<UUID of /dev/mapper/vg_hana_data_HN1-hana_data> /hana/data/HN1 xfs


defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of /dev/mapper/vg_hana_log_HN1-hana_log> /hana/log/HN1 xfs
defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of /dev/mapper/vg_hana_shared_HN1-hana_shared> /hana/shared/HN1 xfs
defaults,nofail 0 2

Mount the new volumes:

sudo mount -a

2. [A] Set up the disk layout: Plain Disks .


For demo systems, you can place your HANA data and log files on one disk. Create a partition on
/dev/disk/azure/scsi1/lun0 and format it with xfs:
sudo sh -c 'echo -e "n\n\n\n\n\nw\n" | fdisk /dev/disk/azure/scsi1/lun0'
sudo mkfs.xfs /dev/disk/azure/scsi1/lun0-part1

# Write down the ID of /dev/disk/azure/scsi1/lun0-part1


sudo /sbin/blkid
sudo vi /etc/fstab

Insert this line in the /etc/fstab file:

/dev/disk/by-uuid/<UUID> /hana xfs defaults,nofail 0 2

Create the target directory and mount the disk:

sudo mkdir /hana


sudo mount -a

3. [A] Set up host name resolution for all hosts.


You can either use a DNS server or modify the /etc/hosts file on all nodes. This example shows you
how to use the /etc/hosts file. Replace the IP address and the hostname in the following commands:

sudo vi /etc/hosts

Insert the following lines in the /etc/hosts file. Change the IP address and hostname to match your
environment:

10.0.0.5 hn1-db-0
10.0.0.6 hn1-db-1

4. [A] Install the SAP HANA high availability packages:

sudo zypper install SAPHanaSR

To install SAP HANA System Replication, follow chapter 4 of the SAP HANA SR Performance Optimized
Scenario guide.
1. [A] Run the hdblcm program from the HANA DVD. Enter the following values at the prompt:
Choose installation: Enter 1 .
Select additional components for installation: Enter 1 .
Enter Installation Path [/hana/shared]: Select Enter.
Enter Local Host Name [..]: Select Enter.
Do you want to add additional hosts to the system? (y/n) [n]: Select Enter.
Enter SAP HANA System ID: Enter the SID of HANA, for example: HN1 .
Enter Instance Number [00]: Enter the HANA Instance number. Enter 03 if you used the Azure
template or followed the manual deployment section of this article.
Select Database Mode / Enter Index [1]: Select Enter.
Select System Usage / Enter Index [4]: Select the system usage value.
Enter Location of Data Volumes [/hana/data/HN1]: Select Enter.
Enter Location of Log Volumes [/hana/log/HN1]: Select Enter.
Restrict maximum memory allocation? [n]: Select Enter.
Enter Certificate Host Name For Host '...' [...]: Select Enter.
Enter SAP Host Agent User (sapadm) Password: Enter the host agent user password.
Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user password again to
confirm.
Enter System Administrator (hdbadm) Password: Enter the system administrator password.
Confirm System Administrator (hdbadm) Password: Enter the system administrator password again
to confirm.
Enter System Administrator Home Directory [/usr/sap/HN1/home]: Select Enter.
Enter System Administrator Login Shell [/bin/sh]: Select Enter.
Enter System Administrator User ID [1001]: Select Enter.
Enter ID of User Group (sapsys) [79]: Select Enter.
Enter Database User (SYSTEM) Password: Enter the database user password.
Confirm Database User (SYSTEM) Password: Enter the database user password again to confirm.
Restart system after machine reboot? [n]: Select Enter.
Do you want to continue? (y/n): Validate the summary. Enter y to continue.
2. [A] Upgrade the SAP Host Agent.
Download the latest SAP Host Agent archive from the SAP Software Center and run the following
command to upgrade the agent. Replace the path to the archive to point to the file that you
downloaded:

sudo /usr/sap/hostctrl/exe/saphostexec -upgrade -archive <path to SAP Host Agent SAR>

Configure SAP HANA 2.0 System Replication


The steps in this section use the following prefixes:
[A] : The step applies to all nodes.
[1] : The step applies to node 1 only.
[2] : The step applies to node 2 of the Pacemaker cluster only.
1. [1] Create the tenant database.
If you're using SAP HANA 2.0 or MDC, create a tenant database for your SAP NetWeaver system.
Replace NW1 with the SID of your SAP system.
Execute the following command as <hanasid>adm :

hdbsql -u SYSTEM -p "passwd" -i 03 -d SYSTEMDB 'CREATE DATABASE NW1 SYSTEM USER PASSWORD "passwd"'

2. [1] Configure System Replication on the first node:


Back up the databases as <hanasid>adm:

hdbsql -d SYSTEMDB -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupSYS')"


hdbsql -d HN1 -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupHN1')"
hdbsql -d NW1 -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupNW1')"

Copy the system PKI files to the secondary site:


scp /usr/sap/HN1/SYS/global/security/rsecssfs/data/SSFS_HN1.DAT hn1-db-
1:/usr/sap/HN1/SYS/global/security/rsecssfs/data/
scp /usr/sap/HN1/SYS/global/security/rsecssfs/key/SSFS_HN1.KEY hn1-db-
1:/usr/sap/HN1/SYS/global/security/rsecssfs/key/

Create the primary site:

hdbnsutil -sr_enable --name=SITE1

3. [2] Configure System Replication on the second node:


Register the second node to start the system replication. Run the following command as
<hanasid>adm :

sapcontrol -nr 03 -function StopWait 600 10


hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --
name=SITE2

Configure SAP HANA 1.0 System Replication


The steps in this section use the following prefixes:
[A] : The step applies to all nodes.
[1] : The step applies to node 1 only.
[2] : The step applies to node 2 of the Pacemaker cluster only.
1. [1] Create the required users.
Run the following command as root. Make sure to replace bold strings (HANA System ID HN1 and
instance number 03 ) with the values of your SAP HANA installation:

PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbsql -u system -i 03 'CREATE USER hdbhasync PASSWORD "passwd"'
hdbsql -u system -i 03 'GRANT DATA ADMIN TO hdbhasync'
hdbsql -u system -i 03 'ALTER USER hdbhasync DISABLE PASSWORD LIFETIME'

2. [A] Create the keystore entry.


Run the following command as root to create a new keystore entry:

PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbuserstore SET hdbhaloc localhost:30315 hdbhasync passwd

3. [1] Back up the database.


Back up the databases as root:

PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbsql -d SYSTEMDB -u system -i 03 "BACKUP DATA USING FILE ('initialbackup')"

If you use a multi-tenant installation, also back up the tenant database:


hdbsql -d HN1 -u system -i 03 "BACKUP DATA USING FILE ('initialbackup')"

4. [1] Configure System Replication on the first node.


Create the primary site as <hanasid>adm :

su - hdbadm
hdbnsutil -sr_enable --name=SITE1

5. [2] Configure System Replication on the secondary node.


Register the secondary site as <hanasid>adm:

sapcontrol -nr 03 -function StopWait 600 10


hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --
name=SITE2

Create SAP HANA cluster resources


First, create the HANA topology. Run the following commands on one of the Pacemaker cluster nodes:

sudo crm configure property maintenance-mode=true

# Replace the bold string with your instance number and HANA system ID

sudo crm configure primitive rsc_SAPHanaTopology_HN1_HDB03 ocf:suse:SAPHanaTopology \


operations \$id="rsc_sap2_HN1_HDB03-operations" \
op monitor interval="10" timeout="600" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="300" \
params SID="HN1" InstanceNumber="03"

sudo crm configure clone cln_SAPHanaTopology_HN1_HDB03 rsc_SAPHanaTopology_HN1_HDB03 \


meta clone-node-max="1" target-role="Started" interleave="true"

Next, create the HANA resources:

IMPORTANT
Recent testing revealed situations, where netcat stops responding to requests due to backlog and its limitation of
handling only one connection. The netcat resource stops listening to the Azure Load balancer requests and the floating
IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we recommend
using azure-lb resource agent, which is part of package resource-agents, with the following package version
requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change will require brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure Load-
Balancer Detection Hardening, there is no requirement to switch immediately to azure-lb resource agent.
NOTE
This article contains references to the terms master and slave, terms that Microsoft no longer uses. When these terms
are removed from the software, we’ll remove them from this article.

# Replace the bold string with your instance number, HANA system ID, and the front-end IP address of the
Azure load balancer.

sudo crm configure primitive rsc_SAPHana_HN1_HDB03 ocf:suse:SAPHana \


operations \$id="rsc_sap_HN1_HDB03-operations" \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Master" timeout="700" \
op monitor interval="61" role="Slave" timeout="700" \
params SID="HN1" InstanceNumber="03" PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"

sudo crm configure ms msl_SAPHana_HN1_HDB03 rsc_SAPHana_HN1_HDB03 \


meta notify="true" clone-max="2" clone-node-max="1" \
target-role="Started" interleave="true"

sudo crm configure primitive rsc_ip_HN1_HDB03 ocf:heartbeat:IPaddr2 \


meta target-role="Started" \
operations \$id="rsc_ip_HN1_HDB03-operations" \
op monitor interval="10s" timeout="20s" \
params ip="10.0.0.13"

sudo crm configure primitive rsc_nc_HN1_HDB03 azure-lb port=62503 \


meta resource-stickiness=0

sudo crm configure group g_ip_HN1_HDB03 rsc_ip_HN1_HDB03 rsc_nc_HN1_HDB03

sudo crm configure colocation col_saphana_ip_HN1_HDB03 4000: g_ip_HN1_HDB03:Started \


msl_SAPHana_HN1_HDB03:Master

sudo crm configure order ord_SAPHana_HN1_HDB03 Optional: cln_SAPHanaTopology_HN1_HDB03 \


msl_SAPHana_HN1_HDB03

# Clean up the HANA resources. The HANA resources might have failed because of a known issue.
sudo crm resource cleanup rsc_SAPHana_HN1_HDB03

sudo crm configure property maintenance-mode=false


sudo crm configure rsc_defaults resource-stickiness=1000
sudo crm configure rsc_defaults migration-threshold=5000

Make sure that the cluster status is ok and that all of the resources are started. It's not important on which
node the resources are running.
sudo crm_mon -r

# Online: [ hn1-db-0 hn1-db-1 ]


#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started hn1-db-0
# Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
# Started: [ hn1-db-0 hn1-db-1 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
# Masters: [ hn1-db-0 ]
# Slaves: [ hn1-db-1 ]
# Resource Group: g_ip_HN1_HDB03
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Test the cluster setup


This section describes how you can test your setup. Every test assumes that you are root and the SAP HANA
master is running on the hn1-db-0 virtual machine.
Test the migration
Before you start the test, make sure that Pacemaker does not have any failed actions (via crm_mon -r), that there
are no unexpected location constraints (for example, leftovers of a migration test), and that HANA is in sync state,
for example with SAPHanaSR-showAttr:

hn1-db-0:~ # SAPHanaSR-showAttr

Global cib-time
--------------------------------
global Mon Aug 13 11:26:04 2018

Hosts    clone_state lpa_hn1_lpt node_state op_mode   remoteHost    roles                            score site  srmode sync_state version                vhost
----------------------------------------------------------------------------------------------------------------------------------------------------------------
hn1-db-0 PROMOTED    1534159564  online     logreplay nws-hana-vm-1 4:P:master1:master:worker:master 150   SITE1 sync   PRIM       2.00.030.00.1522209842 nws-hana-vm-0
hn1-db-1 DEMOTED     30          online     logreplay nws-hana-vm-0 4:S:master1:master:worker:master 100   SITE2 sync   SOK        2.00.030.00.1522209842 nws-hana-vm-1

You can migrate the SAP HANA master node by executing the following command:

crm resource migrate msl_SAPHana_HN1_HDB03 hn1-db-1

If you set AUTOMATED_REGISTER="false" , this command should migrate the SAP HANA master node and the group that contains the virtual IP address to hn1-db-1.
Once the migration is done, the crm_mon -r output looks like this:
Online: [ hn1-db-0 hn1-db-1 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started hn1-db-1


Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Stopped: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

Failed Actions:
* rsc_SAPHana_HN1_HDB03_start_0 on hn1-db-0 'not running' (7): call=84, status=complete,
exitreason='none',
last-rc-change='Mon Aug 13 11:31:37 2018', queued=0ms, exec=2095ms

The SAP HANA resource on hn1-db-0 fails to start as secondary. In this case, configure the HANA instance as
secondary by executing this command:

su - hn1adm

# Stop the HANA instance just in case it is running


hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> sapcontrol -nr 03 -function StopWait 600 10
hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1

The migration creates location constraints that need to be deleted again:

# Switch back to root and clean up the failed state


exit
hn1-db-0:~ # crm resource unmigrate msl_SAPHana_HN1_HDB03

You also need to clean up the state of the secondary node resource:

hn1-db-0:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-0

Monitor the state of the HANA resource using crm_mon -r. Once HANA is started on hn1-db-0, the output
should look like this:

Online: [ hn1-db-0 hn1-db-1 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started hn1-db-1


Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

Test the Azure fencing agent (not SBD)


You can test the setup of the Azure fencing agent by disabling the network interface on the hn1-db-0 node:
sudo ifdown eth0

The virtual machine should now restart or stop depending on your cluster configuration. If you set the
stonith-action setting to off, the virtual machine is stopped and the resources are migrated to the running
virtual machine.
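To check which stonith-action is currently configured, or to change it, a minimal sketch using crmsh (setting the value is optional and only an example):

# Show the currently configured stonith-action cluster property
sudo crm configure show | grep stonith-action
# Optionally change the behavior to stop the fenced VM instead of rebooting it
sudo crm configure property stonith-action=off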
After you start the virtual machine again, the SAP HANA resource fails to start as secondary if you set
AUTOMATED_REGISTER="false" . In this case, configure the HANA instance as secondary by executing this
command:

su - hn1adm

# Stop the HANA instance just in case it is running


sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1

# Switch back to root and clean up the failed state


exit
crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-0

Test SBD fencing


You can test the setup of SBD by killing the inquisitor process.

hn1-db-0:~ # ps aux | grep sbd


root 1912 0.0 0.0 85420 11740 ? SL 12:25 0:00 sbd: inquisitor
root 1929 0.0 0.0 85456 11776 ? SL 12:25 0:00 sbd: watcher: /dev/disk/by-id/scsi-
360014056f268462316e4681b704a9f73 - slot: 0 - uuid: 7b862dba-e7f7-4800-92ed-f76a4e3978c8
root 1930 0.0 0.0 85456 11776 ? SL 12:25 0:00 sbd: watcher: /dev/disk/by-id/scsi-
360014059bc9ea4e4bac4b18808299aaf - slot: 0 - uuid: 5813ee04-b75c-482e-805e-3b1e22ba16cd
root 1931 0.0 0.0 85456 11776 ? SL 12:25 0:00 sbd: watcher: /dev/disk/by-id/scsi-
36001405b8dddd44eb3647908def6621c - slot: 0 - uuid: 986ed8f8-947d-4396-8aec-b933b75e904c
root 1932 0.0 0.0 90524 16656 ? SL 12:25 0:00 sbd: watcher: Pacemaker
root 1933 0.0 0.0 102708 28260 ? SL 12:25 0:00 sbd: watcher: Cluster
root 13877 0.0 0.0 9292 1572 pts/0 S+ 12:27 0:00 grep sbd

hn1-db-0:~ # kill -9 1912

Cluster node hn1-db-0 should be rebooted. The Pacemaker service might not get started afterwards. Make
sure to start it again.
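For example (a minimal sketch):

# Start the Pacemaker service again after the node comes back
sudo systemctl start pacemaker
# Verify that the node rejoined the cluster
sudo crm_mon -1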
Test a manual failover
You can test a manual failover by stopping the pacemaker service on the hn1-db-0 node:

service pacemaker stop

After the failover, you can start the service again. If you set AUTOMATED_REGISTER="false" , the SAP HANA
resource on the hn1-db-0 node fails to start as secondary. In this case, configure the HANA instance as
secondary by executing this command:
service pacemaker start
su - hn1adm

# Stop the HANA instance just in case it is running


sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1

# Switch back to root and clean up the failed state


exit
crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-0

SUSE tests

IMPORTANT
Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types you are using. The list of SAP
HANA certified VM types and supported OS releases can be looked up in SAP HANA Certified IaaS Platforms. Make sure
to click into the details of the listed VM type to get the complete list of SAP HANA supported OS releases for that
specific VM type.

Run all test cases that are listed in the SAP HANA SR Performance Optimized Scenario or SAP HANA SR Cost
Optimized Scenario guide, depending on your use case. You can find the guides on the SLES for SAP best
practices page.
The following tests are a copy of the test descriptions of the SAP HANA SR Performance Optimized Scenario
SUSE Linux Enterprise Server for SAP Applications 12 SP1 guide. For an up-to-date version, always also read
the guide itself. Always make sure that HANA is in sync before starting the test and also make sure that the
Pacemaker configuration is correct.
In the following test descriptions we assume PREFER_SITE_TAKEOVER="true" and
AUTOMATED_REGISTER="false". NOTE: The following tests are designed to be run in sequence and depend on
the exit state of the preceding tests.
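Before you begin, you can verify that the SAPHana resource is configured with the parameters these tests assume (a minimal sketch using crmsh):

# Confirm PREFER_SITE_TAKEOVER and AUTOMATED_REGISTER on the SAPHana resource
sudo crm configure show rsc_SAPHana_HN1_HDB03 | grep -E "PREFER_SITE_TAKEOVER|AUTOMATED_REGISTER"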
1. TEST 1: STOP PRIMARY DATABASE ON NODE 1
Resource state before starting the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Run the following commands as <hanasid>adm on node hn1-db-0:

hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> HDB stop

Pacemaker should detect the stopped HANA instance and failover to the other node. Once the failover
is done, the HANA instance on node hn1-db-0 is stopped because Pacemaker does not automatically
register the node as HANA secondary.
Run the following commands to register node hn1-db-0 as secondary and cleanup the failed resource.
hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1

# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-0

Resource state after the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

2. TEST 2: STOP PRIMARY DATABASE ON NODE 2


Resource state before starting the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

Run the following commands as <hanasid>adm on node hn1-db-1:

hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> HDB stop

Pacemaker should detect the stopped HANA instance and failover to the other node. Once the failover
is done, the HANA instance on node hn1-db-1 is stopped because Pacemaker does not automatically
register the node as HANA secondary.
Run the following commands to register node hn1-db-1 as secondary and cleanup the failed resource.

hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2

# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-1

Resource state after the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
3. TEST 3: CRASH PRIMARY DATABASE ON NODE 1
Resource state before starting the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Run the following commands as <hanasid>adm on node hn1-db-0:

hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> HDB kill-9

Pacemaker should detect the killed HANA instance and failover to the other node. Once the failover is
done, the HANA instance on node hn1-db-0 is stopped because Pacemaker does not automatically
register the node as HANA secondary.
Run the following commands to register node hn1-db-0 as secondary and cleanup the failed resource.

hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1

# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-0

Resource state after the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

4. TEST 4: CRASH PRIMARY DATABASE ON NODE 2


Resource state before starting the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

Run the following commands as <hanasid>adm on node hn1-db-1:

hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> HDB kill-9


Pacemaker should detect the killed HANA instance and failover to the other node. Once the failover is
done, the HANA instance on node hn1-db-1 is stopped because Pacemaker does not automatically
register the node as HANA secondary.
Run the following commands to register node hn1-db-1 as secondary and cleanup the failed resource.

hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2

# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-1

Resource state after the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

5. TEST 5: CRASH PRIMARY SITE NODE (NODE 1)


Resource state before starting the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Run the following commands as root on node hn1-db-0:

hn1-db-0:~ # echo 'b' > /proc/sysrq-trigger

Pacemaker should detect the killed cluster node and fence the node. Once the node is fenced,
Pacemaker will trigger a takeover of the HANA instance. When the fenced node is rebooted, Pacemaker
will not start automatically.
Run the following commands to start Pacemaker, clean the SBD messages for node hn1-db-0, register
node hn1-db-0 as secondary, and cleanup the failed resource.
# run as root
# list the SBD device(s)
hn1-db-0:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"

hn1-db-0:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 -d /dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1 -d /dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3 message hn1-db-0 clear

hn1-db-0:~ # systemctl start pacemaker

# run as <hanasid>adm
hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1

# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-0

Resource state after the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

6. TEST 6: CRASH SECONDARY SITE NODE (NODE 2)


Resource state before starting the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

Run the following commands as root on node hn1-db-1:

hn1-db-1:~ # echo 'b' > /proc/sysrq-trigger

Pacemaker should detect the killed cluster node and fence the node. Once the node is fenced,
Pacemaker will trigger a takeover of the HANA instance. When the fenced node is rebooted, Pacemaker
will not start automatically.
Run the following commands to start Pacemaker, clean the SBD messages for node hn1-db-1, register
node hn1-db-1 as secondary, and cleanup the failed resource.
# run as root
# list the SBD device(s)
hn1-db-1:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"

hn1-db-1:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 -d /dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1 -d /dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3 message hn1-db-1 clear

hn1-db-1:~ # systemctl start pacemaker

# run as <hanasid>adm
hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2

# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-1

Resource state after the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

7. TEST 7: STOP THE SECONDARY DATABASE ON NODE 2


Resource state before starting the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Run the following commands as <hanasid>adm on node hn1-db-1:

hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> HDB stop

Pacemaker will detect the stopped HANA instance and mark the resource as failed on node hn1-db-1.
Pacemaker should automatically restart the HANA instance. Run the following command to clean up
the failed state.

# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-1

Resource state after the test:


Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

8. TEST 8: CRASH THE SECONDARY DATABASE ON NODE 2


Resource state before starting the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Run the following commands as <hanasid>adm on node hn1-db-1:

hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> HDB kill-9

Pacemaker will detect the killed HANA instance and mark the resource as failed on node hn1-db-1. Run
the following command to clean up the failed state. Pacemaker should then automatically restart the
HANA instance.

# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-1

Resource state after the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

9. TEST 9: CRASH SECONDARY SITE NODE (NODE 2) RUNNING SECONDARY HANA DATABASE
Resource state before starting the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
Run the following commands as root on node hn1-db-1:

hn1-db-1:~ # echo b > /proc/sysrq-trigger

Pacemaker should detect the killed cluster node and fence the node. When the fenced node is
rebooted, Pacemaker will not start automatically.
Run the following commands to start Pacemaker, clean the SBD messages for node hn1-db-1, and
cleanup the failed resource.

# run as root
# list the SBD device(s)
hn1-db-1:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"

hn1-db-1:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 -d /dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1 -d /dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3 message hn1-db-1 clear

hn1-db-1:~ # systemctl start pacemaker

hn1-db-1:~ # crm resource cleanup msl_SAPHana_HN1_HDB03 hn1-db-1

Resource state after the test:

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
High availability of SAP HANA on Azure VMs on
Red Hat Enterprise Linux

For on-premises development, you can use either HANA System Replication or shared storage to establish high
availability for SAP HANA. On Azure virtual machines (VMs), HANA System Replication is currently the only supported
high availability function. SAP HANA Replication consists of one primary node and at least one secondary node.
Changes to the data on the primary node are replicated to the secondary node synchronously or asynchronously.
This article describes how to deploy and configure the virtual machines, install the cluster framework, and install
and configure SAP HANA System Replication. In the example configurations, installation commands, instance
number 03 , and HANA System ID HN1 are used.
Read the following SAP Notes and papers first:
SAP Note 1928533, which has:
The list of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
The supported SAP software, and operating system (OS) and database combinations.
The required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA system replication in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Azure specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure
Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure

Overview
To achieve high availability, SAP HANA is installed on two virtual machines. The data is replicated by using HANA
System Replication.

SAP HANA System Replication setup uses a dedicated virtual hostname and virtual IP addresses. On Azure, a
load balancer is required to use a virtual IP address. The following list shows the configuration of the load
balancer:
Front-end configuration: IP address 10.0.0.13 for hn1-db
Back-end configuration: Connected to primary network interfaces of all virtual machines that should be part
of HANA System Replication
Probe Port: Port 62503
Load-balancing rules: 30313 TCP, 30315 TCP, 30317 TCP, 30340 TCP, 30341 TCP, 30342 TCP
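If you prefer to script the load balancer configuration instead of using the Azure portal steps shown later in this article, a rough Azure CLI sketch might look like the following. All names (resource group, load balancer, VNet, subnet, pool) are placeholders and only assumptions, the VM NICs still need to be added to the back-end pool separately, and parameter names can vary between Azure CLI versions:

# Front-end IP for the HANA virtual IP on an existing internal standard load balancer "hana-lb"
az network lb frontend-ip create --resource-group MyResourceGroup --lb-name hana-lb \
  --name hana-frontend --vnet-name hana-vnet --subnet hana-subnet --private-ip-address 10.0.0.13
# Back-end pool (attach the cluster VM NICs to this pool afterwards)
az network lb address-pool create --resource-group MyResourceGroup --lb-name hana-lb --name hana-backend
# Health probe on the Pacemaker-managed probe port (625<instance number>)
az network lb probe create --resource-group MyResourceGroup --lb-name hana-lb \
  --name hana-hp --protocol tcp --port 62503
# HA-ports load-balancing rule with floating IP and 30-minute idle timeout
az network lb rule create --resource-group MyResourceGroup --lb-name hana-lb --name hana-lb-rule \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name hana-frontend --backend-pool-name hana-backend --probe-name hana-hp \
  --floating-ip true --idle-timeout 30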

Deploy for Linux


The Azure Marketplace contains an image for Red Hat Enterprise Linux 7.4 for SAP HANA that you can use to
deploy new virtual machines.
Deploy with a template
You can use one of the quickstart templates that are on GitHub to deploy all the required resources. The
template deploys the virtual machines, the load balancer, the availability set, and so on. To deploy the template,
follow these steps:
1. Open the database template on the Azure portal.
2. Enter the following parameters:
Sap System ID : Enter the SAP system ID of the SAP system you want to install. The ID is used as a
prefix for the resources that are deployed.
Os Type : Select one of the Linux distributions. For this example, select RHEL 7 .
Db Type : Select HANA .
Sap System Size : Enter the number of SAPS that the new system is going to provide. If you're not
sure how many SAPS the system requires, ask your SAP Technology Partner or System Integrator.
System Availability : Select HA .
Admin Username, Admin Password or SSH key : A new user is created that can be used to sign in
to the machine.
Subnet ID : If you want to deploy the VM into an existing VNet where you have a subnet defined that the
VM should be assigned to, provide the ID of that specific subnet. The ID usually looks like
/subscriptions/<subscription ID>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network
name>/subnets/<subnet name> . Leave it empty if you want to create a new virtual network.
Manual deployment
1. Create a resource group.
2. Create a virtual network.
3. Create an availability set.
Set the max update domain.
4. Create a load balancer (internal). We recommend standard load balancer.
Select the virtual network created in step 2.
5. Create virtual machine 1.
Use at least Red Hat Enterprise Linux 7.4 for SAP HANA. This example uses the Red Hat Enterprise Linux 7.4
for SAP HANA image https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux75forSAP-ARM. Select
the availability set created in step 3.
6. Create virtual machine 2.
Use at least Red Hat Enterprise Linux 7.4 for SAP HANA. This example uses the Red Hat Enterprise Linux 7.4
for SAP HANA image https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux75forSAP-ARM. Select
the availability set created in step 3.
7. Add data disks.

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure Load
Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.

NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow
routing to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for
Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.

1. If using standard load balancer, follow these configuration steps:


a. First, create a front-end IP pool:
a. Open the load balancer, select frontend IP pool , and select Add .
b. Enter the name of the new front-end IP pool (for example, hana-frontend ).
c. Set the Assignment to Static and enter the IP address (for example, 10.0.0.13 ).
d. Select OK .
e. After the new front-end IP pool is created, note the pool IP address.
b. Next, create a back-end pool:
a. Open the load balancer, select backend pools , and select Add .
b. Enter the name of the new back-end pool (for example, hana-backend ).
c. Select Add a virtual machine .
d. Select Virtual machine .
e. Select the virtual machines of the SAP HANA cluster and their IP addresses.
f. Select Add .
c. Next, create a health probe:
a. Open the load balancer, select health probes , and select Add .
b. Enter the name of the new health probe (for example, hana-hp ).
c. Select TCP as the protocol and port 62503 . Keep the Interval value set to 5, and the
Unhealthy threshold value set to 2.
d. Select OK .
d. Next, create the load-balancing rules:
a. Open the load balancer, select load balancing rules , and select Add .
b. Enter the name of the new load balancer rule (for example, hana-lb ).
c. Select the front-end IP address, the back-end pool, and the health probe that you created earlier
(for example, hana-frontend , hana-backend and hana-hp ).
d. Select HA Ports .
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP .
g. Select OK .
2. Alternatively, if your scenario dictates using basic load balancer, follow these configuration steps:
a. Configure the load balancer. First, create a front-end IP pool:
a. Open the load balancer, select frontend IP pool , and select Add .
b. Enter the name of the new front-end IP pool (for example, hana-frontend ).
c. Set the Assignment to Static and enter the IP address (for example, 10.0.0.13 ).
d. Select OK .
e. After the new front-end IP pool is created, note the pool IP address.
b. Next, create a back-end pool:
a. Open the load balancer, select backend pools , and select Add .
b. Enter the name of the new back-end pool (for example, hana-backend ).
c. Select Add a virtual machine .
d. Select the availability set created in step 3.
e. Select the virtual machines of the SAP HANA cluster.
f. Select OK .
c. Next, create a health probe:
a. Open the load balancer, select health probes , and select Add .
b. Enter the name of the new health probe (for example, hana-hp ).
c. Select TCP as the protocol and port 62503 . Keep the Interval value set to 5, and the
Unhealthy threshold value set to 2.
d. Select OK .
d. For SAP HANA 1.0, create the load-balancing rules:
a. Open the load balancer, select load balancing rules , and select Add .
b. Enter the name of the new load balancer rule (for example, hana-lb-30315 ).
c. Select the front-end IP address, the back-end pool, and the health probe that you created earlier
(for example, hana-frontend ).
d. Keep the Protocol set to TCP , and enter port 30315 .
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP .
g. Select OK .
h. Repeat these steps for port 30317 .
e. For SAP HANA 2.0, create the load-balancing rules for the system database:
a. Open the load balancer, select load balancing rules , and select Add .
b. Enter the name of the new load balancer rule (for example, hana-lb-30313 ).
c. Select the front-end IP address, the back-end pool, and the health probe that you created earlier
(for example, hana-frontend ).
d. Keep the Protocol set to TCP , and enter port 30313 .
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP .
g. Select OK .
h. Repeat these steps for port 30314 .
f. For SAP HANA 2.0, create the load-balancing rules for the tenant database:
a. Open the load balancer, select load balancing rules , and select Add .
b. Enter the name of the new load balancer rule (for example, hana-lb-30340 ).
c. Select the frontend IP address, backend pool, and health probe you created earlier (for example,
hana-frontend ).
d. Keep the Protocol set to TCP , and enter port 30340 .
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP .
g. Select OK .
h. Repeat these steps for ports 30341 and 30342 .
For more information about the required ports for SAP HANA, read the chapter Connections to Tenant
Databases in the SAP HANA Tenant Databases guide or SAP Note 2388694.

IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes. See also
SAP note 2382421.
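A minimal sketch of how this setting can be applied and persisted on the cluster nodes (the sysctl drop-in file name is only an example):

# Persist net.ipv4.tcp_timestamps=0 so load balancer health probes keep working after reboots
sudo sh -c 'echo "net.ipv4.tcp_timestamps = 0" >> /etc/sysctl.d/91-sap.conf'
sudo sysctl -p /etc/sysctl.d/91-sap.conf
# Verify the running value
sysctl net.ipv4.tcp_timestamps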

Install SAP HANA


The steps in this section use the following prefixes:
[A] : The step applies to all nodes.
[1] : The step applies to node 1 only.
[2] : The step applies to node 2 of the Pacemaker cluster only.
1. [A] Set up the disk layout: Logical Volume Manager (LVM) .
We recommend that you use LVM for volumes that store data and log files. The following example
assumes that the virtual machines have four data disks attached that are used to create two volumes.
List all of the available disks:
ls /dev/disk/azure/scsi1/lun*

Example output:

/dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1 /dev/disk/azure/scsi1/lun2


/dev/disk/azure/scsi1/lun3

Create physical volumes for all of the disks that you want to use:

sudo pvcreate /dev/disk/azure/scsi1/lun0


sudo pvcreate /dev/disk/azure/scsi1/lun1
sudo pvcreate /dev/disk/azure/scsi1/lun2
sudo pvcreate /dev/disk/azure/scsi1/lun3

Create a volume group for the data files. Use one volume group for the log files and one for the shared
directory of SAP HANA:

sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1


sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2
sudo vgcreate vg_hana_shared_HN1 /dev/disk/azure/scsi1/lun3

Create the logical volumes. A linear volume is created when you use lvcreate without the -i switch.
We suggest that you create a striped volume for better I/O performance, and align the stripe sizes to the
values documented in SAP HANA VM storage configurations. The -i argument should be the number of
the underlying physical volumes and the -I argument is the stripe size. In this document, two physical
volumes are used for the data volume, so the -i switch argument is set to 2 . The stripe size for the data
volume is 256KiB . One physical volume is used for the log volume, so no -i or -I switches are
explicitly used for the log volume commands.

IMPORTANT
Use the -i switch and set it to the number of the underlying physical volume when you use more than one
physical volume for each data, log, or shared volumes. Use the -I switch to specify the stripe size, when creating
a striped volume.
See SAP HANA VM storage configurations for recommended storage configurations, including stripe sizes and
number of disks.

sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1


sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
sudo lvcreate -l 100%FREE -n hana_shared vg_hana_shared_HN1
sudo mkfs.xfs /dev/vg_hana_data_HN1/hana_data
sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log
sudo mkfs.xfs /dev/vg_hana_shared_HN1/hana_shared

Create the mount directories and copy the UUID of all of the logical volumes:
sudo mkdir -p /hana/data/HN1
sudo mkdir -p /hana/log/HN1
sudo mkdir -p /hana/shared/HN1
# Write down the ID of /dev/vg_hana_data_HN1/hana_data, /dev/vg_hana_log_HN1/hana_log, and
/dev/vg_hana_shared_HN1/hana_shared
sudo blkid

Create fstab entries for the three logical volumes:

sudo vi /etc/fstab

Insert the following lines in the /etc/fstab file:

/dev/disk/by-uuid/<UUID of /dev/mapper/vg_hana_data_HN1-hana_data> /hana/data/HN1 xfs defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of /dev/mapper/vg_hana_log_HN1-hana_log> /hana/log/HN1 xfs defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of /dev/mapper/vg_hana_shared_HN1-hana_shared> /hana/shared/HN1 xfs defaults,nofail 0 2

Mount the new volumes:

sudo mount -a

2. [A] Set up the disk layout: Plain Disks .


For demo systems, you can place your HANA data and log files on one disk. Create a partition on
/dev/disk/azure/scsi1/lun0 and format it with xfs:

sudo sh -c 'echo -e "n\n\n\n\n\nw\n" | fdisk /dev/disk/azure/scsi1/lun0'


sudo mkfs.xfs /dev/disk/azure/scsi1/lun0-part1

# Write down the ID of /dev/disk/azure/scsi1/lun0-part1


sudo /sbin/blkid
sudo vi /etc/fstab

Insert this line in the /etc/fstab file:

/dev/disk/by-uuid/<UUID> /hana xfs defaults,nofail 0 2

Create the target directory and mount the disk:

sudo mkdir /hana


sudo mount -a

3. [A] Set up host name resolution for all hosts.


You can either use a DNS server or modify the /etc/hosts file on all nodes. This example shows you how
to use the /etc/hosts file. Replace the IP address and the hostname in the following commands:

sudo vi /etc/hosts

Insert the following lines in the /etc/hosts file. Change the IP address and hostname to match your
environment:

10.0.0.5 hn1-db-0
10.0.0.6 hn1-db-1

4. [A] RHEL for HANA configuration


Configure RHEL as described in https://access.redhat.com/solutions/2447641 and in the following SAP
notes:
2292690 - SAP HANA DB: Recommended OS settings for RHEL 7
2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8
2455582 - Linux: Running SAP applications compiled with GCC 6.x
2593824 - Linux: Running SAP applications compiled with GCC 7.x
2886607 - Linux: Running SAP applications compiled with GCC 9.x
5. [A] Install the SAP HANA
To install SAP HANA System Replication, follow https://access.redhat.com/articles/3004101.
Run the hdblcm program from the HANA DVD. Enter the following values at the prompt:
Choose installation: Enter 1 .
Select additional components for installation: Enter 1 .
Enter Installation Path [/hana/shared]: Select Enter.
Enter Local Host Name [..]: Select Enter.
Do you want to add additional hosts to the system? (y/n) [n]: Select Enter.
Enter SAP HANA System ID: Enter the SID of HANA, for example: HN1 .
Enter Instance Number [00]: Enter the HANA Instance number. Enter 03 if you used the Azure template
or followed the manual deployment section of this article.
Select Database Mode / Enter Index [1]: Select Enter.
Select System Usage / Enter Index [4]: Select the system usage value.
Enter Location of Data Volumes [/hana/data/HN1]: Select Enter.
Enter Location of Log Volumes [/hana/log/HN1]: Select Enter.
Restrict maximum memory allocation? [n]: Select Enter.
Enter Certificate Host Name For Host '...' [...]: Select Enter.
Enter SAP Host Agent User (sapadm) Password: Enter the host agent user password.
Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user password again to
confirm.
Enter System Administrator (hdbadm) Password: Enter the system administrator password.
Confirm System Administrator (hdbadm) Password: Enter the system administrator password again to
confirm.
Enter System Administrator Home Directory [/usr/sap/HN1/home]: Select Enter.
Enter System Administrator Login Shell [/bin/sh]: Select Enter.
Enter System Administrator User ID [1001]: Select Enter.
Enter ID of User Group (sapsys) [79]: Select Enter.
Enter Database User (SYSTEM) Password: Enter the database user password.
Confirm Database User (SYSTEM) Password: Enter the database user password again to confirm.
Restart system after machine reboot? [n]: Select Enter.
Do you want to continue? (y/n): Validate the summary. Enter y to continue.
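If you prefer an unattended installation instead of answering the prompts interactively, hdblcm can also be driven by a configuration file. The following is only a rough sketch; verify the exact parameters against the SAP HANA Server Installation and Update Guide for your HANA revision:

# 1. Dump a configuration file template, then edit SID, instance number, paths, and passwords in it
./hdblcm --action=install --dump_configfile_template=/tmp/hdblcm_HN1.cfg
# 2. Run the installation unattended with the prepared configuration file
./hdblcm --batch --action=install --configfile=/tmp/hdblcm_HN1.cfg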
6. [A] Upgrade the SAP Host Agent.
Download the latest SAP Host Agent archive from the SAP Software Center and run the following
command to upgrade the agent. Replace the path to the archive to point to the file that you downloaded:

sudo /usr/sap/hostctrl/exe/saphostexec -upgrade -archive <path to SAP Host Agent SAR>

7. [A] Configure firewall


Create the firewall rule for the Azure load balancer probe port.

sudo firewall-cmd --zone=public --add-port=62503/tcp


sudo firewall-cmd --zone=public --add-port=62503/tcp --permanent

Configure SAP HANA 2.0 System Replication


The steps in this section use the following prefixes:
[A] : The step applies to all nodes.
[1] : The step applies to node 1 only.
[2] : The step applies to node 2 of the Pacemaker cluster only.
1. [A] Configure firewall
Create firewall rules to allow HANA System Replication and client traffic. The required ports are listed on
TCP/IP Ports of All SAP Products. The following commands are just an example to allow HANA 2.0 System
Replication and client traffic to database SYSTEMDB, HN1 and NW1.

sudo firewall-cmd --zone=public --add-port=40302/tcp --permanent


sudo firewall-cmd --zone=public --add-port=40302/tcp
sudo firewall-cmd --zone=public --add-port=40301/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40301/tcp
sudo firewall-cmd --zone=public --add-port=40307/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40307/tcp
sudo firewall-cmd --zone=public --add-port=40303/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40303/tcp
sudo firewall-cmd --zone=public --add-port=40340/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40340/tcp
sudo firewall-cmd --zone=public --add-port=30340/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30340/tcp
sudo firewall-cmd --zone=public --add-port=30341/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30341/tcp
sudo firewall-cmd --zone=public --add-port=30342/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30342/tcp

2. [1] Create the tenant database.


If you're using SAP HANA 2.0 or MDC, create a tenant database for your SAP NetWeaver system. Replace
NW1 with the SID of your SAP system.
Execute as <hanasid>adm the following command:

hdbsql -u SYSTEM -p "passwd" -i 03 -d SYSTEMDB 'CREATE DATABASE NW1 SYSTEM USER PASSWORD "passwd"'

3. [1] Configure System Replication on the first node:


Backup the databases as <hanasid>adm:
hdbsql -d SYSTEMDB -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupSYS')"
hdbsql -d HN1 -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupHN1')"
hdbsql -d NW1 -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupNW1')"

Copy the system PKI files to the secondary site:

scp /usr/sap/HN1/SYS/global/security/rsecssfs/data/SSFS_HN1.DAT hn1-db-1:/usr/sap/HN1/SYS/global/security/rsecssfs/data/
scp /usr/sap/HN1/SYS/global/security/rsecssfs/key/SSFS_HN1.KEY hn1-db-1:/usr/sap/HN1/SYS/global/security/rsecssfs/key/

Create the primary site:

hdbnsutil -sr_enable --name=SITE1

4. [2] Configure System Replication on the second node:


Register the second node to start the system replication. Run the following command as <hanasid>adm:

sapcontrol -nr 03 -function StopWait 600 10


hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2

5. [1] Check replication status


Check the replication status and wait until all databases are in sync. If the status remains UNKNOWN,
check your firewall settings.

sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"


# | Database | Host     | Port  | Service Name | Volume ID | Site ID | Site Name | Secondary Host | Secondary Port | Secondary Site ID | Secondary Site Name | Secondary Active Status | Replication Mode | Replication Status | Replication Status Details |
# | -------- | -------- | ----- | ------------ | --------- | ------- | --------- | -------------- | -------------- | ----------------- | ------------------- | ----------------------- | ---------------- | ------------------ | -------------------------- |
# | SYSTEMDB | hn1-db-0 | 30301 | nameserver   | 1         | 1       | SITE1     | hn1-db-1       | 30301          | 2                 | SITE2               | YES                     | SYNC             | ACTIVE             |                            |
# | HN1      | hn1-db-0 | 30307 | xsengine     | 2         | 1       | SITE1     | hn1-db-1       | 30307          | 2                 | SITE2               | YES                     | SYNC             | ACTIVE             |                            |
# | NW1      | hn1-db-0 | 30340 | indexserver  | 2         | 1       | SITE1     | hn1-db-1       | 30340          | 2                 | SITE2               | YES                     | SYNC             | ACTIVE             |                            |
# | HN1      | hn1-db-0 | 30303 | indexserver  | 3         | 1       | SITE1     | hn1-db-1       | 30303          | 2                 | SITE2               | YES                     | SYNC             | ACTIVE             |                            |
#
# status system replication site "2": ACTIVE
# overall system replication status: ACTIVE
#
# Local System Replication State
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# mode: PRIMARY
# site id: 1
# site name: SITE1

Configure SAP HANA 1.0 System Replication


The steps in this section use the following prefixes:
[A] : The step applies to all nodes.
[1] : The step applies to node 1 only.
[2] : The step applies to node 2 of the Pacemaker cluster only.
1. [A] Configure firewall
Create firewall rules to allow HANA System Replication and client traffic. The required ports are listed on
TCP/IP Ports of All SAP Products. The following commands are just an example to allow HANA 2.0 System
Replication. Adapt it to your SAP HANA 1.0 installation.

sudo firewall-cmd --zone=public --add-port=40302/tcp --permanent


sudo firewall-cmd --zone=public --add-port=40302/tcp

2. [1] Create the required users.


Run the following command as root. Make sure to replace bold strings (HANA System ID HN1 and
instance number 03 ) with the values of your SAP HANA installation:

PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbsql -u system -i 03 'CREATE USER hdbhasync PASSWORD "passwd"'
hdbsql -u system -i 03 'GRANT DATA ADMIN TO hdbhasync'
hdbsql -u system -i 03 'ALTER USER hdbhasync DISABLE PASSWORD LIFETIME'

3. [A] Create the keystore entry.


Run the following command as root to create a new keystore entry:

PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbuserstore SET hdbhaloc localhost:30315 hdbhasync passwd

4. [1] Back up the database.


Back up the databases as root:

PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbsql -d SYSTEMDB -u system -i 03 "BACKUP DATA USING FILE ('initialbackup')"

If you use a multi-tenant installation, also back up the tenant database:

hdbsql -d HN1 -u system -i 03 "BACKUP DATA USING FILE ('initialbackup')"

5. [1] Configure System Replication on the first node.


Create the primary site as <hanasid>adm:

su - hn1adm
hdbnsutil -sr_enable --name=SITE1

6. [2] Configure System Replication on the secondary node.


Register the secondary site as <hanasid>adm:

HDB stop
hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2
HDB start
Create a Pacemaker cluster
Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create a basic Pacemaker
cluster for this HANA server.
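Before creating the HANA resources, it can help to confirm that the basic cluster and the fencing device are healthy (a minimal sketch; the exact stonith subcommand differs between pcs versions):

# Verify the basic cluster before adding the HANA resources
sudo pcs status
# 'pcs stonith status' on newer pcs versions, 'pcs stonith show' on older RHEL 7.x releases
sudo pcs stonith status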

Create SAP HANA cluster resources


Install the SAP HANA resource agents on all nodes . Make sure to enable a repository that contains the package.
You don't need to enable additional repositories, if using RHEL 8.x HA-enabled image.

# Enable repository that contains SAP HANA resource agents


sudo subscription-manager repos --enable="rhel-sap-hana-for-rhel-7-server-rpms"

sudo yum install -y resource-agents-sap-hana

Next, create the HANA topology. Run the following commands on one of the Pacemaker cluster nodes:

sudo pcs property set maintenance-mode=true

# Replace the bold string with your instance number and HANA system ID
sudo pcs resource create SAPHanaTopology_HN1_03 SAPHanaTopology SID=HN1 InstanceNumber=03 \
op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600 \
clone clone-max=2 clone-node-max=1 interleave=true

Next, create the HANA resources.

NOTE
This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from
the software, we’ll remove it from this article.

If building a cluster on RHEL 7.x , use the following commands:

# Replace the bold string with your instance number, HANA system ID, and the front-end IP address of the Azure load balancer.
#
sudo pcs resource create SAPHana_HN1_03 SAPHana SID=HN1 InstanceNumber=03 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
op start timeout=3600 op stop timeout=3600 \
op monitor interval=61 role="Slave" timeout=700 \
op monitor interval=59 role="Master" timeout=700 \
op promote timeout=3600 op demote timeout=3600 \
master notify=true clone-max=2 clone-node-max=1 interleave=true

sudo pcs resource create vip_HN1_03 IPaddr2 ip="10.0.0.13"


sudo pcs resource create nc_HN1_03 azure-lb port=62503
sudo pcs resource group add g_ip_HN1_03 nc_HN1_03 vip_HN1_03

sudo pcs constraint order SAPHanaTopology_HN1_03-clone then SAPHana_HN1_03-master symmetrical=false


sudo pcs constraint colocation add g_ip_HN1_03 with master SAPHana_HN1_03-master 4000

sudo pcs property set maintenance-mode=false

If building a cluster on RHEL 8.x , use the following commands:


# Replace the bold string with your instance number, HANA system ID, and the front-end IP address of the Azure load balancer.
#
sudo pcs resource create SAPHana_HN1_03 SAPHana SID=HN1 InstanceNumber=03 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
op start timeout=3600 op stop timeout=3600 \
op monitor interval=61 role="Slave" timeout=700 \
op monitor interval=59 role="Master" timeout=700 \
op promote timeout=3600 op demote timeout=3600 \
promotable meta notify=true clone-max=2 clone-node-max=1 interleave=true

sudo pcs resource create vip_HN1_03 IPaddr2 ip="10.0.0.13"


sudo pcs resource create nc_HN1_03 azure-lb port=62503
sudo pcs resource group add g_ip_HN1_03 nc_HN1_03 vip_HN1_03

sudo pcs constraint order SAPHanaTopology_HN1_03-clone then SAPHana_HN1_03-clone symmetrical=false


sudo pcs constraint colocation add g_ip_HN1_03 with master SAPHana_HN1_03-clone 4000

sudo pcs property set maintenance-mode=false

Make sure that the cluster status is ok and that all of the resources are started. It's not important on which node
the resources are running.

NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific HANA setup. For
instance, you may need to increase the start timeout, if it takes longer to start the SAP HANA database.
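If you need to adjust a timeout after the resources have been created, a sketch like the following can be used (the value is only an example; review how your pcs version handles existing operations before applying such a change in production):

# Review the currently configured operation timeouts
# ('pcs resource show SAPHana_HN1_03' on RHEL 7.x, 'pcs resource config SAPHana_HN1_03' on RHEL 8.x)
sudo pcs resource config SAPHana_HN1_03
# Example of raising the start timeout afterwards
sudo pcs resource update SAPHana_HN1_03 op start timeout=7200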

sudo pcs status

# Online: [ hn1-db-0 hn1-db-1 ]


#
# Full list of resources:
#
# azure_fence (stonith:fence_azure_arm): Started hn1-db-0
# Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]
# Started: [ hn1-db-0 hn1-db-1 ]
# Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
# Masters: [ hn1-db-0 ]
# Slaves: [ hn1-db-1 ]
# Resource Group: g_ip_HN1_03
# nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-0
# vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-0

Test the cluster setup


This section describes how you can test your setup. Before you start a test, make sure that Pacemaker doesn't
have any failed actions (check via pcs status), that there are no unexpected location constraints (for example, leftovers
from a migration test), and that HANA System Replication is in sync, for example with systemReplicationStatus:

[root@hn1-db-0 ~]# sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"
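To check for leftover location constraints from earlier migration tests, a minimal sketch (the article itself uses pcs resource clear for constraints created by migrations):

# List location constraints and remove leftovers by their id, if needed
sudo pcs constraint location
# sudo pcs constraint remove <constraint-id>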

Test the migration


Resource state before starting the test:
Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-0
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-0

You can migrate the SAP HANA master node by executing the following command:

# On RHEL 7.x
[root@hn1-db-0 ~]# pcs resource move SAPHana_HN1_03-master
# On RHEL 8.x
[root@hn1-db-0 ~]# pcs resource move SAPHana_HN1_03-clone --master

If you set AUTOMATED_REGISTER="false" , this command should migrate the SAP HANA master node and the group
that contains the virtual IP address to hn1-db-1.
Once the migration is done, the 'sudo pcs status' output looks like this

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-1 ]
Stopped: [ hn1-db-0 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1

The SAP HANA resource on hn1-db-0 is stopped. In this case, configure the HANA instance as secondary by
executing this command:

[root@hn1-db-0 ~]# su - hn1adm

# Stop the HANA instance just in case it is running


hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> sapcontrol -nr 03 -function StopWait 600 10
hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1

The migration creates location constraints that need to be deleted again:

# Switch back to root


exit
[root@hn1-db-0 ~]# pcs resource clear SAPHana_HN1_03-master

Monitor the state of the HANA resource using 'pcs status'. Once HANA is started on hn1-db-0, the output should
look like this
Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1

Test the Azure fencing agent

NOTE
This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from
the software, we’ll remove it from this article.

Resource state before starting the test:

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1

You can test the setup of the Azure fencing agent by disabling the network interface on the node where SAP
HANA is running as Master. See Red Hat Knowledgebase article 79523 for a description on how to simulate a
network failure. In this example we use the net_breaker script to block all access to the network.

[root@hn1-db-1 ~]# sh ./net_breaker.sh BreakCommCmd 10.0.0.6

The virtual machine should now restart or stop depending on your cluster configuration. If you set the
stonith-action setting to off, the virtual machine is stopped and the resources are migrated to the running
virtual machine.
After you start the virtual machine again, the SAP HANA resource fails to start as secondary if you set
AUTOMATED_REGISTER="false" . In this case, configure the HANA instance as secondary by executing this command:

su - hn1adm

# Stop the HANA instance just in case it is running


hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> sapcontrol -nr 03 -function StopWait 600 10
hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2

# Switch back to root and clean up the failed state


exit
# On RHEL 7.x
[root@hn1-db-1 ~]# pcs resource cleanup SAPHana_HN1_03-master
# On RHEL 8.x
[root@hn1-db-1 ~]# pcs resource cleanup SAPHana_HN1_03 node=<hostname on which the resource needs to be cleaned>

Resource state after the test:


Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-0
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-0

Test a manual failover


Resource state before starting the test:

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-0
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-0

You can test a manual failover by stopping the cluster on the hn1-db-0 node:

[root@hn1-db-0 ~]# pcs cluster stop

After the failover, you can start the cluster again. If you set AUTOMATED_REGISTER="false" , the SAP HANA resource
on the hn1-db-0 node fails to start as secondary. In this case, configure the HANA instance as secondary by
executing this command:

[root@hn1-db-0 ~]# pcs cluster start


[root@hn1-db-0 ~]# su - hn1adm

# Stop the HANA instance just in case it is running


hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> sapcontrol -nr 03 -function StopWait 600 10
hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1

# Switch back to root and clean up the failed state


hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> exit
# On RHEL 7.x
[root@hn1-db-1 ~]# pcs resource cleanup SAPHana_HN1_03-master
# On RHEL 8.x
[root@hn1-db-1 ~]# pcs resource cleanup SAPHana_HN1_03 node=<hostname on which the resource needs to be
cleaned>

Resource state after the test:

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
SAP HANA VM storage configurations
High availability of SAP HANA Scale-up with Azure
NetApp Files on Red Hat Enterprise Linux

This article describes how to configure SAP HANA System Replication in a scale-up deployment, when the HANA file
systems are mounted via NFS by using Azure NetApp Files (ANF). The example configurations and installation
commands use instance number 03 and HANA System ID HN1. SAP HANA System Replication consists of one
primary node and at least one secondary node.
When steps in this document are marked with the following prefixes, the meaning is as follows:
[A] : The step applies to all nodes
[1] : The step applies to node1 only
[2] : The step applies to node2 only
Read the following SAP Notes and papers first:
SAP Note 1928533, which has:
The list of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
The supported SAP software, and operating system (OS) and database combinations.
The required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 405827 lists the recommended file systems for HANA environments.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux.
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community Wiki has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA system replication in pacemaker cluster.
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration.
High Availability Add-On Reference.
Configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster when the HANA filesystems
are on NFS shares
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members.
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure.
Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure.
Configure SAP HANA scale-up system replication in a Pacemaker cluster when the HANA file systems are
on NFS shares
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
Traditionally, in a scale-up environment, all file systems for SAP HANA are mounted from local storage. Setting up
high availability of SAP HANA System Replication on Red Hat Enterprise Linux is published in the guide Set up SAP
HANA System Replication on RHEL.
To achieve SAP HANA high availability for a scale-up system on Azure NetApp Files NFS shares, we need some
additional resource configuration in the cluster, so that HANA resources can recover when one node loses access
to the NFS shares on ANF. The cluster manages the NFS mounts, which allows it to monitor the health of the
resources. The dependencies between the file system mounts and the SAP HANA resources are enforced.

SAP HANA filesystems are mounted on NFS shares using Azure NetApp Files on each node. File systems
/hana/data, /hana/log, and /hana/shared are unique to each node.
Mounted on node1 (hanadb1)
10.32.2.4:/hanadb1-data-mnt00001 on /hana/data
10.32.2.4:/hanadb1-log-mnt00001 on /hana/log
10.32.2.4:/hanadb1-shared-mnt00001 on /hana/shared
Mounted on node2 (hanadb2)
10.32.2.4:/hanadb2-data-mnt00001 on /hana/data
10.32.2.4:/hanadb2-log-mnt00001 on /hana/log
10.32.2.4:/hanadb2-shared-mnt00001 on /hana/shared

NOTE
File systems /hana/shared, /hana/data and /hana/log are not shared between the two nodes. Each cluster node has its own,
separate file systems.

The SAP HANA System Replication configuration uses a dedicated virtual hostname and virtual IP addresses. On
Azure, a load balancer is required to use a virtual IP address. The following list shows the configuration of the load
balancer:
Front-end configuration: IP address 10.32.0.10 for hn1-db
Back-end configuration: Connected to primary network interfaces of all virtual machines that should be part of
HANA System Replication
Probe Port: Port 62503
Load-balancing rules: 30313 TCP, 30315 TCP, 30317 TCP, 30340 TCP, 30341 TCP, 30342 TCP (if using Basic Azure
Load balancer)
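If you prefer to script this load-balancer configuration instead of using the Azure portal steps shown later in this
article, the following Azure CLI sketch creates an equivalent health probe and HA-ports rule on an existing standard
internal load balancer. The resource group and load balancer names (rg-hana, hana-lb) are assumptions for
illustration; the probe and pool names follow the examples in this article, and parameter names may vary slightly
between Azure CLI versions.

# Hedged sketch: probe and HA-ports rule on an existing standard internal load balancer
az network lb probe create --resource-group rg-hana --lb-name hana-lb \
    --name hana-hp --protocol tcp --port 62503

az network lb rule create --resource-group rg-hana --lb-name hana-lb \
    --name hana-lb --protocol All --frontend-port 0 --backend-port 0 \
    --frontend-ip-name hana-frontend --backend-pool-name hana-backend \
    --probe-name hana-hp --floating-ip true --idle-timeout 30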

Set up the Azure NetApp File infrastructure


Before you proceed with the set up for Azure NetApp Files infrastructure, familiarize yourself with the Azure
NetApp Files documentation.
Azure NetApp Files is available in several Azure regions. Check to see whether your selected Azure region offers
Azure NetApp Files.
For information about the availability of Azure NetApp Files by Azure region, see Azure NetApp Files Availability
by Azure Region.
Before you deploy Azure NetApp Files, request onboarding to Azure NetApp Files by going to Register for Azure
NetApp Files instructions.
Deploy Azure NetApp Files resources
The following instructions assume that you've already deployed your Azure virtual network. The Azure NetApp
Files resources and VMs, where the Azure NetApp Files resources will be mounted, must be deployed in the same
Azure virtual network or in peered Azure virtual networks.
1. If you haven't already deployed the resources, request onboarding to Azure NetApp Files.
2. Create a NetApp account in your selected Azure region by following the instructions in Create a NetApp
account.
3. Set up an Azure NetApp Files capacity pool by following the instructions in Set up an Azure NetApp Files
capacity pool.
The HANA architecture presented in this article uses a single Azure NetApp Files capacity pool at the Ultra
Service level. For HANA workloads on Azure, we recommend using an Azure NetApp Files Ultra or
Premium service Level.
4. Delegate a subnet to Azure NetApp Files, as described in the instructions in Delegate a subnet to Azure
NetApp Files.
5. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS volume for Azure
NetApp Files.
As you are deploying the volumes, be sure to select the NFSv4.1 version. Deploy the volumes in the
designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp volumes are assigned
automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual
network or in peered Azure virtual networks. For example, hanadb1-data-mnt00001, hanadb1-log-
mnt00001, and so on, are the volume names and nfs://10.32.2.4/hanadb1-data-mnt00001,
nfs://10.32.2.4/hanadb1-log-mnt00001, and so on, are the file paths for the Azure NetApp Files volumes.
On hanadb1
Volume hanadb1-data-mnt00001 (nfs://10.32.2.4:/hanadb1-data-mnt00001)
Volume hanadb1-log-mnt00001 (nfs://10.32.2.4:/hanadb1-log-mnt00001)
Volume hanadb1-shared-mnt00001 (nfs://10.32.2.4:/hanadb1-shared-mnt00001)
On hanadb2
Volume hanadb2-data-mnt00001 (nfs://10.32.2.4:/hanadb2-data-mnt00001)
Volume hanadb2-log-mnt00001 (nfs://10.32.2.4:/hanadb2-log-mnt00001)
Volume hanadb2-shared-mnt00001 (nfs://10.32.2.4:/hanadb2-shared-mnt00001)
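If you prefer to script the Azure NetApp Files deployment, the following Azure CLI sketch shows the NetApp
account, the capacity pool, and one of the volumes. The resource group, account, pool, virtual network, subnet, and
region names are assumptions for illustration, and the parameter names are quoted from memory of the
az netappfiles commands; verify them with az netappfiles --help for your CLI version before use.

# Hedged sketch: NetApp account, 4-TiB Ultra capacity pool, and one NFSv4.1 volume with a 2-TiB quota
az netappfiles account create --resource-group rg-hana --name hana-anf-account --location westus2
az netappfiles pool create --resource-group rg-hana --account-name hana-anf-account \
    --name hana-anf-pool --location westus2 --size 4 --service-level Ultra
az netappfiles volume create --resource-group rg-hana --account-name hana-anf-account \
    --pool-name hana-anf-pool --name hanadb1-log-mnt00001 --location westus2 \
    --service-level Ultra --usage-threshold 2048 --file-path hanadb1-log-mnt00001 \
    --vnet hana-vnet --subnet anf-subnet --protocol-types NFSv4.1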
Important considerations
As you are creating your Azure NetApp Files volumes for SAP HANA scale-up systems, be aware of the following
considerations:
The minimum capacity pool is 4 tebibytes (TiB).
The minimum volume size is 100 gibibytes (GiB).
Azure NetApp Files and all virtual machines where the Azure NetApp Files volumes will be mounted must be in
the same Azure virtual network or in peered virtual networks in the same region.
The selected virtual network must have a subnet that is delegated to Azure NetApp Files.
The throughput of an Azure NetApp Files volume is a function of the volume quota and service level, as
documented in Service level for Azure NetApp Files. When you are sizing the HANA Azure NetApp volumes,
make sure that the resulting throughput meets the HANA system requirements.
With the Azure NetApp Files export policy, you can control the allowed clients and the access type (read-write,
read-only, and so on).
The Azure NetApp Files feature is not zone-aware yet. Currently, the feature is not deployed in all availability
zones in an Azure region. Be aware of the potential latency implications in some Azure regions.

IMPORTANT
For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual machines
and the Azure NetApp Files volumes are deployed in proximity.

Sizing of HANA database on Azure NetApp Files


The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented
in Service level for Azure NetApp Files.
As you design the infrastructure for SAP in Azure, be aware of some minimum storage requirements by SAP, which
translate into minimum throughput characteristics:
Read-write on /hana/log of 250 megabytes per second (MB/s) with 1-MB I/O sizes.
Read activity of at least 400 MB/s for /hana/data for 16-MB and 64-MB I/O sizes.
Write activity of at least 250 MB/s for /hana/data with 16-MB and 64-MB I/O sizes.
The Azure NetApp Files throughput limits per 1 TiB of volume quota are:
Premium Storage tier - 64 MiB/s.
Ultra Storage tier - 128 MiB/s.
To meet the SAP minimum throughput requirements for /hana/data and /hana/log, and the guidelines for
/hana/shared, the recommended sizes would be:

VOLUME          SIZE OF PREMIUM STORAGE TIER    SIZE OF ULTRA STORAGE TIER    SUPPORTED NFS PROTOCOL

/hana/log       4 TiB                           2 TiB                         v4.1

/hana/data      6.3 TiB                         3.2 TiB                       v4.1

/hana/shared    1 x RAM                         1 x RAM                       v3 or v4.1
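These table values can be reproduced from the per-TiB throughput limits stated earlier; the following is a hedged
back-of-the-envelope check, with the quota rounded up to the next provisioned size:

# Hedged sanity check of the sizing table above (requires bc)
echo "scale=2; 250/128" | bc   # /hana/log on Ultra: ~1.95 -> 2 TiB
echo "scale=2; 250/64"  | bc   # /hana/log on Premium: ~3.91 -> 4 TiB
echo "scale=2; 400/128" | bc   # /hana/data on Ultra: ~3.13 -> 3.2 TiB
echo "scale=2; 400/64"  | bc   # /hana/data on Premium: ~6.25 -> 6.3 TiB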

NOTE
The Azure NetApp Files sizing recommendations stated here are targeted to meet the minimum requirements that SAP
recommends for their infrastructure providers. In real customer deployments and workload scenarios, these sizes may not be
sufficient. Use these recommendations as a starting point and adapt, based on the requirements of your specific workload.

TIP
You can resize Azure NetApp Files volumes dynamically, without having to unmount the volumes, stop the virtual machines,
or stop SAP HANA. This approach allows flexibility to meet both the expected and unforeseen throughput demands of your
application.

NOTE
All commands to mount /hana/shared in this article are presented for NFSv4.1 /hana/shared volumes. If you deployed the
/hana/shared volumes as NFSv3 volumes, don't forget to adjust the mount commands for /hana/shared for NFSv3.

Deploy Linux virtual machine via Azure portal


First you need to create the Azure NetApp Files volumes. Then do the following steps:
1. Create a resource group.
2. Create a virtual network.
3. Create an availability set. Set the max update domain.
4. Create a load balancer (internal). We recommend standard load balancer. Select the virtual network created in
step 2.
5. Create Virtual Machine 1 (hanadb1 ).
6. Create Virtual Machine 2 (hanadb2 ).
7. While creating the virtual machines, don't add any data disks, because all mount points will be on NFS shares
from Azure NetApp Files.

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load
balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to
public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for Virtual Machines
using Azure Standard Load Balancer in SAP high-availability scenarios.

8. If using standard load balancer, follow these configuration steps:


a. First, create a front-end IP pool:
a. Open the load balancer, select frontend IP pool , and select Add .
b. Enter the name of the new front-end IP pool (for example, hana-frontend ).
c. Set the Assignment to Static and enter the IP address (for example, 10.32.0.10 ).
d. Select OK .
e. After the new front-end IP pool is created, note the pool IP address.
b. Next, create a back-end pool:
a. Open the load balancer, select backend pools , and select Add .
b. Enter the name of the new back-end pool (for example, hana-backend ).
c. Select Add a virtual machine.
d. Select Virtual machine.
e. Select the virtual machines of the SAP HANA cluster and their IP addresses.
f. Select Add .
c. Next, create a health probe:
a. Open the load balancer, select health probes , and select Add .
b. Enter the name of the new health probe (for example, hana-hp ).
c. Select TCP as the protocol and port 62503 . Keep the Interval value set to 5, and the Unhealthy
threshold value set to 2.
d. Select OK .
d. Next, create the load-balancing rules:
a. Open the load balancer, select load balancing rules , and select Add .
b. Enter the name of the new load balancer rule (for example, hana-lb ).
c. Select the front-end IP address, the back-end pool, and the health probe that you created earlier
(for example, hana-frontend , hana-backend and hana-hp ).
d. Select HA Ports.
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP .
g. Select OK .
9. Alternatively, if your scenario dictates using basic load balancer, follow these configuration steps:
a. Configure the load balancer. First, create a front-end IP pool:
a. Open the load balancer, select frontend IP pool , and select Add .
b. Enter the name of the new front-end IP pool (for example, hana-frontend ).
c. Set the Assignment to Static and enter the IP address (for example, 10.32.0.10 ).
d. Select OK .
e. After the new front-end IP pool is created, note the pool IP address.
b. Next, create a back-end pool:
a. Open the load balancer, select backend pools , and select Add .
b. Enter the name of the new back-end pool (for example, hana-backend ).
c. Select Add a virtual machine.
d. Select the availability set created in step 3.
e. Select the virtual machines of the SAP HANA cluster.
f. Select OK .
c. Next, create a health probe:
a. Open the load balancer, select health probes , and select Add .
b. Enter the name of the new health probe (for example, hana-hp ).
c. Select TCP as the protocol and port 62503 . Keep the Interval value set to 5, and the Unhealthy
threshold value set to 2.
d. Select OK .
d. For SAP HANA 1.0, create the load-balancing rules:
a. Open the load balancer, select load balancing rules, and select Add.
b. Enter the name of the new load balancer rule (for example, hana-lb-30315).
c. Select the front-end IP address, the back-end pool, and the health probe that you created earlier
(for example, hana-frontend).
d. Keep the Protocol set to TCP, and enter port 30315.
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP.
g. Select OK.
h. Repeat these steps for port 30317.
e. For SAP HANA 2.0, create the load-balancing rules for the system database:
a. Open the load balancer, select load balancing rules, and select Add.
b. Enter the name of the new load balancer rule (for example, hana-lb-30313).
c. Select the front-end IP address, the back-end pool, and the health probe that you created earlier
(for example, hana-frontend).
d. Keep the Protocol set to TCP, and enter port 30313.
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP.
g. Select OK.
h. Repeat these steps for port 30314.
f. For SAP HANA 2.0, first create the load-balancing rules for the tenant database:
a. Open the load balancer, select load balancing rules, and select Add.
b. Enter the name of the new load balancer rule (for example, hana-lb-30340).
c. Select the frontend IP address, backend pool, and health probe you created earlier (for example,
hana-frontend).
d. Keep the Protocol set to TCP, and enter port 30340.
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP.
g. Select OK.
h. Repeat these steps for ports 30341 and 30342.
For more information about the required ports for SAP HANA, read the chapter Connections to Tenant Databases
in the SAP HANA Tenant Databases guide or SAP Note 2388694.
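As a hedged sanity check, these ports follow the usual SAP HANA pattern 3<instance number><xx>: with instance
number 03, that gives 30313 and 30314 for the system database and 30340-30342 for the first tenant database
(30315 and 30317 apply to SAP HANA 1.0). Once SAP HANA is installed and running, and assuming hanadb1 is
currently the primary as in the examples in this article, you can confirm the ports are reachable:

# Hedged check: confirm the HANA ports derived from instance number 03 are listening on the primary
for port in 30313 30314 30340 30341 30342; do
    nc -vz hanadb1 "$port"
done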
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes. See also
SAP note 2382421.

Mount the Azure NetApp Files volume


1. [A] Create mount points for the HANA database volumes.

mkdir -p /hana/data
mkdir -p /hana/log
mkdir -p /hana/shared

2. [A] Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp
Files domain, i.e. defaultv4iddomain.com and the mapping is set to nobody .

sudo cat /etc/idmapd.conf


# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure
NetApp volumes that are mounted on the VMs will be displayed as nobody.

3. [1] Mount the node-specific volumes on node1 (hanadb1 )

sudo mount -o
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys
10.32.2.4:/hanadb1-shared-mnt00001 /hana/shared
sudo mount -o
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys
10.32.2.4:/hanadb1-log-mnt00001 /hana/log
sudo mount -o
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys
10.32.2.4:/hanadb1-data-mnt00001 /hana/data

4. [2] Mount the node-specific volumes on node2 (hanadb2 )

sudo mount -o
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys
10.32.2.4:/hanadb2-shared-mnt00001 /hana/shared
sudo mount -o
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys
10.32.2.4:/hanadb2-log-mnt00001 /hana/log
sudo mount -o
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys
10.32.2.4:/hanadb2-data-mnt00001 /hana/data
5. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.

sudo nfsstat -m

# Verify that flag vers is set to 4.1


# Example from hanadb1

/hana/log from 10.32.2.4:/hanadb1-log-mnt00001


Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cli
entaddr=10.32.0.4,local_lock=none,addr=10.32.2.4
/hana/data from 10.32.2.4:/hanadb1-data-mnt00001
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cli
entaddr=10.32.0.4,local_lock=none,addr=10.32.2.4
/hana/shared from 10.32.2.4:/hanadb1-shared-mnt00001
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cli
entaddr=10.32.0.4,local_lock=none,addr=10.32.2.4

6. [A] Verify nfs4_disable_idmapping . It should be set to Y . To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create
the directory under /sys/modules, because access is reserved for the kernel / drivers.

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping

# If you need to set nfs4_disable_idmapping to Y


echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping

# Make the configuration permanent


echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

For more details on how to change the nfs4_disable_idmapping parameter, see
https://access.redhat.com/solutions/1749883.

SAP HANA installation


1. [A] Set up host name resolution for all hosts.
You can either use a DNS server or modify the /etc/hosts file on all nodes. This example shows you how to
use the /etc/hosts file. Replace the IP address and the hostname in the following commands:

sudo vi /etc/hosts
# Insert the following lines in the /etc/hosts file. Change the IP address and hostname to match your
environment
10.32.0.4 hanadb1
10.32.0.5 hanadb2

2. [A] RHEL for HANA Configuration


Configure RHEL as described in the following SAP Notes, based on your RHEL version:
2292690 - SAP HANA DB: Recommended OS settings for RHEL 7
2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8
2455582 - Linux: Running SAP applications compiled with GCC 6.x
2593824 - Linux: Running SAP applications compiled with GCC 7.x
2886607 - Linux: Running SAP applications compiled with GCC 9.x
3. [A] Install the SAP HANA
Starting with HANA 2.0 SPS 01, MDC is the default option. When you install a HANA system, SYSTEMDB and a
tenant with the same SID are created together. In some cases, you don't want the default tenant. If you don't want
to create the initial tenant along with the installation, you can follow SAP Note 2629711.
Run the hdblcm program from the HANA DVD. Enter the following values at the prompt:
Choose installation: Enter 1 (for install)
Select additional components for installation: Enter 1 .
Enter Installation Path [/hana/shared]: press Enter to accept the default
Enter Local Host Name [..]: Press Enter to accept the default
Do you want to add additional hosts to the system? (y/n) [n]: n
Enter SAP HANA System ID: Enter HN1 .
Enter Instance Number [00]: Enter 03
Select Database Mode / Enter Index [1]: press Enter to accept the default
Select System Usage / Enter Index [4]: enter 4 (for custom)
Enter Location of Data Volumes [/hana/data]: press Enter to accept the default
Enter Location of Log Volumes [/hana/log]: press Enter to accept the default
Restrict maximum memory allocation? [n]: press Enter to accept the default
Enter Certificate Host Name For Host '...' [...]: press Enter to accept the default
Enter SAP Host Agent User (sapadm) Password: Enter the host agent user password
Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user password again to confirm
Enter System Administrator (hn1adm) Password: Enter the system administrator password
Confirm System Administrator (hn1adm) Password: Enter the system administrator password again to
confirm
Enter System Administrator Home Directory [/usr/sap/HN1/home]: press Enter to accept the default
Enter System Administrator Login Shell [/bin/sh]: press Enter to accept the default
Enter System Administrator User ID [1001]: press Enter to accept the default
Enter ID of User Group (sapsys) [79]: press Enter to accept the default
Enter Database User (SYSTEM) Password: Enter the database user password
Confirm Database User (SYSTEM) Password: Enter the database user password again to confirm
Restart system after machine reboot? [n]: press Enter to accept the default
Do you want to continue? (y/n): Validate the summary. Enter y to continue
4. [A] Upgrade SAP Host Agent
Download the latest SAP Host Agent archive from the SAP Software Center and run the following command
to upgrade the agent. Replace the path to the archive to point to the file that you downloaded:

sudo /usr/sap/hostctrl/exe/saphostexec -upgrade -archive <path to SAP Host Agent SAR>

5. [A] Configure firewall


Create the firewall rule for the Azure load balancer probe port.

sudo firewall-cmd --zone=public --add-port=62503/tcp


sudo firewall-cmd --zone=public --add-port=62503/tcp --permanent

Configure SAP HANA system replication


Follow the steps in Set up SAP HANA System Replication to configure SAP HANA System Replication.

Cluster configuration
This section describes the steps required for the cluster to operate seamlessly when SAP HANA is installed on
NFS shares using Azure NetApp Files.
Create a Pacemaker cluster
Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create a basic Pacemaker cluster
for this HANA server.
Configure filesystem resources
In this example each cluster node has its own HANA NFS filesystems /hana/shared, /hana/data, and /hana/log.
1. [1] Put the cluster in maintenance mode.

pcs property set maintenance-mode=true

2. [1] Create the Filesystem resources for the hanadb1 mounts.

pcs resource create hana_data1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-data-mnt00001


directory=/hana/data fstype=nfs
options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec
=sys op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
pcs resource create hana_log1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-log-mnt00001
directory=/hana/log fstype=nfs
options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec
=sys op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 --group hanadb1_nfs
pcs resource create hana_shared1 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb1-shared-mnt00001
directory=/hana/shared fstype=nfs
options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec
=sys op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 --group hanadb1_nfs

3. [2] Create the Filesystem resources for the hanadb2 mounts.

pcs resource create hana_data2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-data-mnt00001


directory=/hana/data fstype=nfs
options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec
=sys op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
pcs resource create hana_log2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-log-mnt00001
directory=/hana/log fstype=nfs
options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec
=sys op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 --group hanadb2_nfs
pcs resource create hana_shared2 ocf:heartbeat:Filesystem device=10.32.2.4:/hanadb2-shared-mnt00001
directory=/hana/shared fstype=nfs
options=rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec
=sys op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 --group hanadb2_nfs

The OCF_CHECK_LEVEL=20 attribute is added to the monitor operation, so that each monitor performs a read/write
test on the filesystem. Without this attribute, the monitor operation only verifies that the filesystem is
mounted. This can be a problem, because when connectivity is lost, the filesystem may remain mounted
despite being inaccessible.
The on-fail=fence attribute is also added to the monitor operation. With this option, if the monitor operation
fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all
resources that depend on the failed resource, then restart the failed resource, and then start all the resources
that depend on the failed resource. Not only can this behavior take a long time when an SAPHana resource
depends on the failed resource, but it can also fail altogether. The SAPHana resource cannot stop
successfully if the NFS server holding the HANA executables is inaccessible.
4. [1] Configuring Location Constraints
Configure location constraints to ensure that the resources that manage hanadb1's unique mounts can never
run on hanadb2, and vice versa.

pcs constraint location hanadb1_nfs rule score=-INFINITY resource-discovery=never \#uname eq hanadb2


pcs constraint location hanadb2_nfs rule score=-INFINITY resource-discovery=never \#uname eq hanadb1

The resource-discovery=never option is set because the unique mounts for each node share the same
mount point. For example, hana_data1 uses mount point /hana/data , and hana_data2 also uses mount
point /hana/data . This can cause a false positive for a probe operation, when resource state is checked at
cluster startup, and this can in turn cause unnecessary recovery behavior. This can be avoided by setting
resource-discovery=never

5. [1] Configuring Attribute Resources


Configure attribute resources. These attributes will be set to true if all of a node's NFS mounts (/hana/data,
/hana/log, and /hana/shared) are mounted, and will be set to false otherwise.

pcs resource create hana_nfs1_active ocf:pacemaker:attribute active_value=true inactive_value=false


name=hana_nfs1_active
pcs resource create hana_nfs2_active ocf:pacemaker:attribute active_value=true inactive_value=false
name=hana_nfs2_active

6. [1] Configuring Location Constraints


Configure location constraints to ensure that hanadb1’s attribute resource never runs on hanadb2, and
vice-versa.

pcs constraint location hana_nfs1_active avoids hanadb2


pcs constraint location hana_nfs2_active avoids hanadb1

7. [1] Creating Ordering Constraints


Configure ordering constraints so that a node's attribute resources start only after all of the node's NFS
mounts are mounted.

pcs constraint order hanadb1_nfs then hana_nfs1_active


pcs constraint order hanadb2_nfs then hana_nfs2_active

TIP
If your configuration includes file systems, outside of group hanadb1_nfs or hanadb2_nfs , then include the
sequential=false option, so that there are no ordering dependencies among the file systems. All file systems
must start before hana_nfs1_active , but they do not need to start in any order relative to each other. For more
details see How do I configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster when the HANA
filesystems are on NFS shares

Configure SAP HANA cluster resources


1. Follow the steps in Create SAP HANA cluster resources to create the SAP HANA Resources in the cluster.
Once the SAP HANA resources are created, we need to create location rule constraints between the SAP HANA
resources and the file systems (NFS mounts).
2. [1] Configure constraints between the SAP HANA resources and the NFS mounts
Location rule constraints will be set so that the SAP HANA resources can run on a node only if all of the
node's NFS mounts are mounted.

pcs constraint location SAPHanaTopology_HN1_03-clone rule score=-INFINITY hana_nfs1_active ne true and


hana_nfs2_active ne true
# On RHEL 7.x
pcs constraint location SAPHana_HN1_03-master rule score=-INFINITY hana_nfs1_active ne true and
hana_nfs2_active ne true
# On RHEL 8.x
pcs constraint location SAPHana_HN1_03-clone rule score=-INFINITY hana_nfs1_active ne true and
hana_nfs2_active ne true
# Take the cluster out of maintenance mode
sudo pcs property set maintenance-mode=false

Check the status of cluster and all the resources

NOTE
This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed
from the software, we’ll remove it from this article.

sudo pcs status

Online: [ hanadb1 hanadb2 ]

Full list of resources:

rsc_hdb_azr_agt(stonith:fence_azure_arm): Started hanadb1

Resource Group: hanadb1_nfs


hana_data1 (ocf::heartbeat:Filesystem):Started hanadb1
hana_log1 (ocf::heartbeat:Filesystem):Started hanadb1
hana_shared1 (ocf::heartbeat:Filesystem):Started hanadb1

Resource Group: hanadb2_nfs


hana_data2 (ocf::heartbeat:Filesystem):Started hanadb2
hana_log2 (ocf::heartbeat:Filesystem):Started hanadb2
hana_shared2 (ocf::heartbeat:Filesystem):Started hanadb2

hana_nfs1_active (ocf::pacemaker:attribute): Started hanadb1


hana_nfs2_active (ocf::pacemaker:attribute): Started hanadb2

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hanadb1 hanadb2 ]

Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]


Masters: [ hanadb1 ]
Slaves: [ hanadb2 ]

Resource Group: g_ip_HN1_03


nc_HN1_03 (ocf::heartbeat:azure-lb): Started hanadb1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hanadb1

Test the cluster setup


This section describes how you can test your setup.
1. Before you start a test, make sure that Pacemaker does not have any failed actions (via pcs status), that there are
no unexpected location constraints (for example, leftovers of a migration test), and that HANA System
Replication is in sync state, for example with systemReplicationStatus:
sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"

2. Verify the cluster configuration for a failure scenario when a node loses access to the NFS share
(/hana/shared)
The SAP HANA resource agents depend on binaries stored on /hana/shared to perform operations during
failover. File system /hana/shared is mounted over NFS in the presented scenario.
It is difficult to simulate a failure where one of the servers loses access to the NFS share. A test that can be
performed is to re-mount the file system as read-only. This approach validates that the cluster will be able
to fail over if access to /hana/shared is lost on the active node.
Expected result: When /hana/shared is made a read-only file system, the monitor operation with
OCF_CHECK_LEVEL=20 of the resource hana_shared1, which performs a read/write test on the file system, fails
because it can no longer write to the file system, and a HANA resource failover is triggered. The same result is
expected when your HANA node loses access to the NFS shares.
Resource state before starting the test:

sudo pcs status


# Example output
Full list of resources:
rsc_hdb_azr_agt (stonith:fence_azure_arm): Started hanadb1

Resource Group: hanadb1_nfs


hana_data1 (ocf::heartbeat:Filesystem): Started hanadb1
hana_log1 (ocf::heartbeat:Filesystem): Started hanadb1
hana_shared1 (ocf::heartbeat:Filesystem): Started hanadb1

Resource Group: hanadb2_nfs


hana_data2 (ocf::heartbeat:Filesystem): Started hanadb2
hana_log2 (ocf::heartbeat:Filesystem): Started hanadb2
hana_shared2 (ocf::heartbeat:Filesystem): Started hanadb2

hana_nfs1_active (ocf::pacemaker:attribute): Started hanadb1


hana_nfs2_active (ocf::pacemaker:attribute): Started hanadb2

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hanadb1 hanadb2 ]

Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]


Masters: [ hanadb1 ]
Slaves: [ hanadb2 ]

Resource Group: g_ip_HN1_03


nc_HN1_03 (ocf::heartbeat:azure-lb): Started hanadb1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hanadb1

You can place /hana/shared in read-only mode on the active cluster node by using the following command:

sudo mount -o ro 10.32.2.4:/hanadb1-shared-mnt00001 /hana/shared

hanadb1 will either reboot or power off, depending on the action set for stonith (
pcs property show stonith-action ). Once the server (hanadb1) is down, the HANA resource moves to hanadb2.
You can check the status of the cluster from hanadb2.
pcs status

Full list of resources:

rsc_hdb_azr_agt (stonith:fence_azure_arm): Started hanadb2

Resource Group: hanadb1_nfs


hana_data1 (ocf::heartbeat:Filesystem): Stopped
hana_log1 (ocf::heartbeat:Filesystem): Stopped
hana_shared1 (ocf::heartbeat:Filesystem): Stopped

Resource Group: hanadb2_nfs


hana_data2 (ocf::heartbeat:Filesystem): Started hanadb2
hana_log2 (ocf::heartbeat:Filesystem): Started hanadb2
hana_shared2 (ocf::heartbeat:Filesystem): Started hanadb2

hana_nfs1_active (ocf::pacemaker:attribute): Stopped


hana_nfs2_active (ocf::pacemaker:attribute): Started hanadb2

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hanadb2 ]
Stopped: [ hanadb1 ]

Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]


Masters: [ hanadb2 ]
Stopped: [ hanadb1 ]

Resource Group: g_ip_HN1_03


nc_HN1_03 (ocf::heartbeat:azure-lb): Started hanadb2
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hanadb2

We recommend thoroughly testing the SAP HANA cluster configuration by also performing the tests
described in Set up SAP HANA System Replication on RHEL.
Verify and troubleshoot SAP HANA scale-out high-
availability setup on SLES 12 SP3

This article helps you check the Pacemaker cluster configuration for SAP HANA scale-out that runs on Azure virtual
machines (VMs). The cluster setup was accomplished in combination with SAP HANA System Replication (HSR) and
the SUSE RPM package SAPHanaSR-ScaleOut. All tests were done on SUSE SLES 12 SP3 only. The article's sections
cover different areas and include sample commands and excerpts from config files. We recommend these samples
as a method to verify and check the whole cluster setup.

Important notes
All testing for SAP HANA scale-out in combination with SAP HANA System Replication and Pacemaker was done
with SAP HANA 2.0 only. The operating system version was SUSE Linux Enterprise Server 12 SP3 for SAP
applications. The latest RPM package, SAPHanaSR-ScaleOut from SUSE, was used to set up the Pacemaker cluster.
SUSE published a detailed description of this performance-optimized setup.
For virtual machine types that are supported for SAP HANA scale-out, check the SAP HANA certified IaaS directory.

NOTE
This article contains references to the terms master and slave, terms that Microsoft no longer uses. When these terms are
removed from the software, we’ll remove them from this article.

There was a technical issue with SAP HANA scale-out in combination with multiple subnets and vNICs and setting
up HSR. It's mandatory to use the latest SAP HANA 2.0 patches where this issue was fixed. The following SAP HANA
versions are supported:
rev2.00.024.04 or higher
rev2.00.032 or higher
If you need support from SUSE, follow this guide. Collect all the information about the SAP HANA high-availability
(HA) cluster as described in the article. SUSE support needs this information for further analysis.
During internal testing, the cluster setup got confused by a normal graceful VM shutdown via the Azure portal. So
we recommend that you test a cluster failover by other methods, such as forcing a kernel panic, shutting down the
networks, or migrating the msl resource. See details in the following sections. The assumption is that a standard
shutdown happens with intention. The best example of an intentional shutdown is maintenance. See details in
Planned maintenance.
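A hedged example of such an alternative test is forcing an immediate kernel panic on the current primary master
node. This crashes the VM, so run it only on a test system:

# Hedged example: crash the node to trigger a cluster failover (test systems only)
echo c > /proc/sysrq-trigger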
Also, during internal testing, the cluster setup got confused after a manual SAP HANA takeover while the cluster
was in maintenance mode. We recommend that you switch it back again manually before you end the cluster
maintenance mode. Another option is to trigger a failover before you put the cluster into maintenance mode. For
more information, see Planned maintenance. The documentation from SUSE describes how you can reset the
cluster in this way by using the crm command. But the approach mentioned previously was robust during internal
testing and never showed any unexpected side effects.
When you use the crm migrate command, make sure to clean up the cluster configuration. It adds location
constraints that you might not be aware of. These constraints impact the cluster behavior. See more details in
Planned maintenance.
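A hedged example of such a cleanup after a migration test follows. The resource name is taken from the test
system described in the next section, and depending on the crmsh version the subcommand may be unmigrate or
clear:

# Hedged example: remove the location constraint left behind by "crm resource migrate"
crm resource unmigrate msl_SAPHanaCon_HSO_HDB00
# Review the remaining location constraints
crm configure show | grep -i location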
Test system description
For SAP HANA scale-out HA verification and certification, a setup of two systems with three SAP HANA nodes
each (one master and two workers) was used. The following table lists the VM names and internal IP addresses.
All the verification samples that follow were done on these VMs. By using these VM names and IP addresses in the
command samples, you can better understand the commands and their outputs:

NODE TYPE                     VM NAME             IP ADDRESS

Master node on site 1 hso-hana-vm-s1-0 10.0.0.30

Worker node 1 on site 1 hso-hana-vm-s1-1 10.0.0.31

Worker node 2 on site 1 hso-hana-vm-s1-2 10.0.0.32

Master node on site 2 hso-hana-vm-s2-0 10.0.0.40

Worker node 1 on site 2 hso-hana-vm-s2-1 10.0.0.41

Worker node 2 on site 2 hso-hana-vm-s2-2 10.0.0.42

Majority maker node hso-hana-dm 10.0.0.13

SBD device server hso-hana-sbd 10.0.0.19

NFS server 1 hso-nfs-vm-0 10.0.0.15

NFS server 2 hso-nfs-vm-1 10.0.0.14

Multiple subnets and vNICs


Following SAP HANA network recommendations, three subnets were created within one Azure virtual network.
SAP HANA scale-out on Azure has to be installed in nonshared mode. That means every node uses local disk
volumes for /hana/data and /hana/log . Because the nodes use only local disk volumes, it's not necessary to
define a separate subnet for storage:
10.0.2.0/24 for SAP HANA internode communication
10.0.1.0/24 for SAP HANA System Replication (HSR)
10.0.0.0/24 for everything else
For information about SAP HANA configuration related to using multiple networks, see SAP HANA global.ini.
Every VM in the cluster has three vNICs that correspond to the number of subnets. How to create a Linux virtual
machine in Azure with multiple network interface cards describes a potential routing issue on Azure when
deploying a Linux VM. This specific routing article applies only when multiple vNICs are used. The problem is
solved by SUSE by default in SLES 12 SP3. For more information, see Multi-NIC with cloud-netconfig in EC2 and Azure.
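As a hedged illustration of such a configuration, global.ini on a site 1 node might contain entries similar to the
following. The section names come from the SAP HANA multi-network documentation; the IP-to-host mappings
shown here are assumptions that merely follow the subnet layout described above:

[communication]
listeninterface = .internal

[internal_hostname_resolution]
10.0.2.30 = hso-hana-vm-s1-0
10.0.2.31 = hso-hana-vm-s1-1
10.0.2.32 = hso-hana-vm-s1-2

[system_replication_communication]
listeninterface = .global

[system_replication_hostname_resolution]
10.0.1.40 = hso-hana-vm-s2-0
10.0.1.41 = hso-hana-vm-s2-1
10.0.1.42 = hso-hana-vm-s2-2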
To verify that SAP HANA is configured correctly to use multiple networks, run the following commands. First check
on the OS level that all three internal IP addresses for all three subnets are active. If you defined the subnets with
different IP address ranges, you have to adapt the commands:
ifconfig | grep "inet addr:10\."

The following sample output is from the second worker node on site 2. You can see three different internal IP
addresses from eth0, eth1, and eth2:

inet addr:10.0.0.42 Bcast:10.0.0.255 Mask:255.255.255.0


inet addr:10.0.1.42 Bcast:10.0.1.255 Mask:255.255.255.0
inet addr:10.0.2.42 Bcast:10.0.2.255 Mask:255.255.255.0

Next, verify the SAP HANA ports for the name server and HSR. SAP HANA should listen on the corresponding
subnets. Depending on the SAP HANA instance number, you have to adapt the commands. For the test system, the
instance number was 00 . There are different ways to find out which ports are used.
The following SQL statement returns the instance ID, instance number, and other information:

select * from "SYS"."M_SYSTEM_OVERVIEW"

To find the correct port numbers, you can look, for example, in HANA Studio under Configuration or via a SQL
statement:

select * from M_INIFILE_CONTENTS WHERE KEY LIKE 'listen%'

To find every port that's used in the SAP software stack including SAP HANA, search TCP/IP ports of all SAP
products.
Given the instance number 00 in the SAP HANA 2.0 test system, the port number for the name server is 30001 .
The port number for HSR metadata communication is 40002 . One option is to sign in to a worker node and then
check the master node services. For this article, we checked worker node 2 on site 2 trying to connect to the master
node on site 2.
Check the name server port:

nc -vz 10.0.0.40 30001


nc -vz 10.0.1.40 30001
nc -vz 10.0.2.40 30001

To prove that the internode communication uses subnet 10.0.2.0/24 , the result should look like the following
sample output. Only the connection via subnet 10.0.2.0/24 should succeed:

nc: connect to 10.0.0.40 port 30001 (tcp) failed: Connection refused


nc: connect to 10.0.1.40 port 30001 (tcp) failed: Connection refused
Connection to 10.0.2.40 30001 port [tcp/pago-services1] succeeded!

Now check for HSR port 40002 :


nc -vz 10.0.0.40 40002
nc -vz 10.0.1.40 40002
nc -vz 10.0.2.40 40002

To prove that the HSR communication uses subnet 10.0.1.0/24 , the result should look like the following sample
output. Only the connection via subnet 10.0.1.0/24 should succeed:

nc: connect to 10.0.0.40 port 40002 (tcp) failed: Connection refused


Connection to 10.0.1.40 40002 port [tcp/*] succeeded!
nc: connect to 10.0.2.40 port 40002 (tcp) failed: Connection refused

Corosync
The corosync config file has to be correct on every node in the cluster including the majority maker node. If the
cluster join of a node doesn't work as expected, create or copy /etc/corosync/corosync.conf manually onto all
nodes and restart the service.
The content of corosync.conf from the test system is an example.
The first section is totem , as described in Cluster installation, step 11. You can ignore the value for mcastaddr . Just
keep the existing entry. The entries for token and consensus must be set according to Microsoft Azure SAP HANA
documentation.

totem {
version: 2
secauth: on
crypto_hash: sha1
crypto_cipher: aes256
cluster_name: hacluster
clear_node_high_bit: yes

token: 30000
token_retransmits_before_loss_const: 10
join: 60
consensus: 36000
max_messages: 20

interface {
ringnumber: 0
bindnetaddr: 10.0.0.0
mcastaddr: 239.170.19.232
mcastport: 5405

ttl: 1
}
transport: udpu
}

The second section, logging , wasn't changed from the given defaults:
logging {
fileline: off
to_stderr: no
to_logfile: no
logfile: /var/log/cluster/corosync.log
to_syslog: yes
debug: off
timestamp: on
logger_subsys {
subsys: QUORUM
debug: off
}
}

The third section shows the nodelist . All nodes of the cluster have to show up with their nodeid :

nodelist {
node {
ring0_addr:hso-hana-vm-s1-0
nodeid: 1
}
node {
ring0_addr:hso-hana-vm-s1-1
nodeid: 2
}
node {
ring0_addr:hso-hana-vm-s1-2
nodeid: 3
}
node {
ring0_addr:hso-hana-vm-s2-0
nodeid: 4
}
node {
ring0_addr:hso-hana-vm-s2-1
nodeid: 5
}
node {
ring0_addr:hso-hana-vm-s2-2
nodeid: 6
}
node {
ring0_addr:hso-hana-dm
nodeid: 7
}
}

In the last section, quorum , it's important to set the value for expected_votes correctly. It must be the number of
nodes including the majority maker node. And the value for two_node has to be 0 . Don't remove the entry
completely. Just set the value to 0 .

quorum {
# Enable and configure quorum subsystem (default: off)
# see also corosync.conf.5 and votequorum.5
provider: corosync_votequorum
expected_votes: 7
two_node: 0
}
Restart the service via systemctl :

systemctl restart corosync

SBD device
How to set up an SBD device on an Azure VM is described in SBD fencing.
First, check on the SBD server VM if there are ACL entries for every node in the cluster. Run the following command
on the SBD server VM:

targetcli ls

On the test system, the output of the command looks like the following sample. ACL names like iqn.2006-04.hso-
db-0.local:hso-db-0 must be entered as the corresponding initiator names on the VMs. Every VM needs a
different one.
| | o- sbddbhso ................................................................... [/sbd/sbddbhso (50.0MiB)
write-thru activated]
| | o- alua
................................................................................................... [ALUA
Groups: 1]
| | o- default_tg_pt_gp ....................................................................... [ALUA
state: Active/optimized]
| o- pscsi ..................................................................................................
[Storage Objects: 0]
| o- ramdisk ................................................................................................
[Storage Objects: 0]
o- iscsi
............................................................................................................
[Targets: 1]
| o- iqn.2006-04.dbhso.local:dbhso
..................................................................................... [TPGs: 1]
| o- tpg1 ...............................................................................................
[no-gen-acls, no-auth]
| o- acls
..........................................................................................................
[ACLs: 7]
| | o- iqn.2006-04.hso-db-0.local:hso-db-0
.................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0
fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-1.local:hso-db-1
.................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0
fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-2.local:hso-db-2
.................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0
fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-3.local:hso-db-3
.................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0
fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-4.local:hso-db-4
.................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0
fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-5.local:hso-db-5
.................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0
fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-6.local:hso-db-6
.................................................................. [Mapped LUNs: 1]

Then check that the initiator names on all the VMs are different and correspond to the previously shown entries.
This example is from worker node 1 on site 1:

cat /etc/iscsi/initiatorname.iscsi

The output looks like the following sample:


##
## /etc/iscsi/iscsi.initiatorname
##
## Default iSCSI Initiatorname.
##
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.2006-04.hso-db-1.local:hso-db-1

Next, verify that the discover y works correctly. Run the following command on every cluster node by using the IP
address of the SBD server VM:

iscsiadm -m discovery --type=st --portal=10.0.0.19:3260

The output should look like the following sample:

10.0.0.19:3260,1 iqn.2006-04.dbhso.local:dbhso

The next proof point is to verify that the node sees the SBD device. Check it on every node including the majority
maker node:

lsscsi | grep dbhso

The output should look like the following sample. However, the names might differ. The device name might also
change after the VM reboots:

[6:0:0:0] disk LIO-ORG sbddbhso 4.0 /dev/sdm

Depending on the status of the system, it sometimes helps to restart the iSCSI services to resolve issues. Then run
the following commands:

systemctl restart iscsi


systemctl restart iscsid

From any node, you can check if all nodes are clear . Make sure that you use the correct device name on a specific
node:

sbd -d /dev/sdm list

The output should show clear for every node in the cluster:
0 hso-hana-vm-s1-0 clear
1 hso-hana-vm-s2-2 clear
2 hso-hana-vm-s2-1 clear
3 hso-hana-dm clear
4 hso-hana-vm-s1-1 clear
5 hso-hana-vm-s2-0 clear
6 hso-hana-vm-s1-2 clear

Another SBD check is the dump option of the sbd command. In this sample command and output from the
majority maker node, the device name was sdd , not sdm :

sbd -d /dev/sdd dump

The output, apart from the device name, should look the same on all nodes:

==Dumping header on disk /dev/sdd


Header version : 2.1
UUID : 9fa6cc49-c294-4f1e-9527-c973f4d5a5b0
Number of slots : 255
Sector size : 512
Timeout (watchdog) : 60
Timeout (allocate) : 2
Timeout (loop) : 1
Timeout (msgwait) : 120
==Header on disk /dev/sdd is dumped

One more check for SBD is the possibility to send a message to another node. To send a message to worker node 2
on site 2, run the following command on worker node 1 on site 2:

sbd -d /dev/sdm message hso-hana-vm-s2-2 test

On the target VM side, hso-hana-vm-s2-2 in this example, you can find the following entry in
/var/log/messages :

/dev/disk/by-id/scsi-36001405e614138d4ec64da09e91aea68: notice: servant: Received command test from hso-hana-


vm-s2-1 on disk /dev/disk/by-id/scsi-36001405e614138d4ec64da09e91aea68

Check that the entries in /etc/sysconfig/sbd correspond to the description in Setting up Pacemaker on SUSE
Linux Enterprise Server in Azure. Verify that the startup setting in /etc/iscsi/iscsid.conf is set to automatic.
The following entries are important in /etc/sysconfig/sbd . Adapt the id value if necessary:

SBD_DEVICE="/dev/disk/by-id/scsi-36001405e614138d4ec64da09e91aea68;"
SBD_PACEMAKER=yes
SBD_STARTMODE=always
SBD_WATCHDOG=yes

Check the startup setting in /etc/iscsi/iscsid.conf . The required setting should have happened with the following
iscsiadm command, described in the documentation. Verify and adapt it manually with vi if it's different.
This command sets startup behavior:

iscsiadm -m node --op=update --name=node.startup --value=automatic

Make this entry in /etc/iscsi/iscsid.conf :

node.startup = automatic

During testing and verification, after the restart of a VM, the SBD device wasn't visible anymore in some cases.
There was a discrepancy between the startup setting and what YaST2 showed. To check the settings, take these
steps:
1. Start YaST2.
2. Select Network Services on the left side.
3. Scroll down on the right side to iSCSI Initiator and select it.
4. On the next screen under the Service tab, you see the unique initiator name for the node.
5. Above the initiator name, make sure that the Service Start value is set to When Booting.
6. If it's not, then set it to When Booting instead of Manually.
7. Next, switch the top tab to Connected Targets.
8. On the Connected Targets screen, you should see an entry for the SBD device like this sample:
10.0.0.19:3260 iqn.2006-04.dbhso.local:dbhso.
9. Check if the Start-Up value is set to on boot.
10. If not, choose Edit and change it.
11. Save the changes and exit YaST2.

Pacemaker
After everything is set up correctly, you can run the following command on every node to check the status of the
Pacemaker service:

systemctl status pacemaker

The top of the output should look like the following sample. It's important that the status after Active is shown as
active (running). The status after Loaded must show enabled.
pacemaker.service - Pacemaker High Availability Cluster Manager
Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2018-09-07 05:56:27 UTC; 4 days ago
Docs: man:pacemakerd
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Pacemaker_Explained/index.html
Main PID: 4496 (pacemakerd)
Tasks: 7 (limit: 4915)
CGroup: /system.slice/pacemaker.service
├─4496 /usr/sbin/pacemakerd -f
├─4499 /usr/lib/pacemaker/cib
├─4500 /usr/lib/pacemaker/stonithd
├─4501 /usr/lib/pacemaker/lrmd
├─4502 /usr/lib/pacemaker/attrd
├─4503 /usr/lib/pacemaker/pengine
└─4504 /usr/lib/pacemaker/crmd

If the setting is still on disabled , run the following command:

systemctl enable pacemaker

To see all configured resources in Pacemaker, run the following command:

crm status

The output should look like the following sample. It's fine that the cln and msl resources are shown as stopped on
the majority maker VM, hso-hana-dm, because there's no SAP HANA installation on the majority maker node. It's
important that the output shows the correct total number of VMs, 7 . All VMs that are part of the cluster must be
listed with the status Online . The current primary master node must be recognized correctly. In this example, it's
hso-hana-vm-s1-0 :

Stack: corosync
Current DC: hso-hana-dm (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Tue Sep 11 15:56:40 2018
Last change: Tue Sep 11 15:56:23 2018 by root via crm_attribute on hso-hana-vm-s1-0

7 nodes configured
17 resources configured

Online: [ hso-hana-dm hso-hana-vm-s1-0 hso-hana-vm-s1-1 hso-hana-vm-s1-2 hso-hana-vm-s2-0 hso-hana-vm-s2-1 hso-hana-vm-s2-2 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started hso-hana-dm


Clone Set: cln_SAPHanaTop_HSO_HDB00 [rsc_SAPHanaTop_HSO_HDB00]
Started: [ hso-hana-vm-s1-0 hso-hana-vm-s1-1 hso-hana-vm-s1-2 hso-hana-vm-s2-0 hso-hana-vm-s2-1 hso-hana-vm-s2-2 ]
Stopped: [ hso-hana-dm ]
Master/Slave Set: msl_SAPHanaCon_HSO_HDB00 [rsc_SAPHanaCon_HSO_HDB00]
Masters: [ hso-hana-vm-s1-0 ]
Slaves: [ hso-hana-vm-s1-1 hso-hana-vm-s1-2 hso-hana-vm-s2-0 hso-hana-vm-s2-1 hso-hana-vm-s2-2 ]
Stopped: [ hso-hana-dm ]
Resource Group: g_ip_HSO_HDB00
rsc_ip_HSO_HDB00 (ocf::heartbeat:IPaddr2): Started hso-hana-vm-s1-0
rsc_nc_HSO_HDB00 (ocf::heartbeat:anything): Started hso-hana-vm-s1-0
An important feature of Pacemaker is maintenance mode. In this mode, you can make modifications without
provoking an immediate cluster action. An example is a VM reboot. A typical use case would be planned OS or
Azure infrastructure maintenance. See Planned maintenance. Use the following command to put Pacemaker into
maintenance mode:

crm configure property maintenance-mode=true

When you check with crm status , you notice in the output that all resources are marked as unmanaged . In this
state, the cluster doesn't react on any changes like starting or stopping SAP HANA. The following sample shows the
output of the crm status command while the cluster is in maintenance mode:

Stack: corosync
Current DC: hso-hana-dm (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Wed Sep 12 07:48:10 2018
Last change: Wed Sep 12 07:46:54 2018 by root via cibadmin on hso-hana-vm-s2-1

7 nodes configured
17 resources configured

*** Resource management is DISABLED ***


The cluster will not attempt to start, stop or recover services

Online: [ hso-hana-dm hso-hana-vm-s1-0 hso-hana-vm-s1-1 hso-hana-vm-s1-2 hso-hana-vm-s2-0 hso-hana-vm-s2-1 hso-hana-vm-s2-2 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started hso-hana-dm (unmanaged)


Clone Set: cln_SAPHanaTop_HSO_HDB00 [rsc_SAPHanaTop_HSO_HDB00] (unmanaged)
rsc_SAPHanaTop_HSO_HDB00 (ocf::suse:SAPHanaTopology): Started hso-hana-vm-s1-1 (unmanaged)
rsc_SAPHanaTop_HSO_HDB00 (ocf::suse:SAPHanaTopology): Started hso-hana-vm-s1-0 (unmanaged)
rsc_SAPHanaTop_HSO_HDB00 (ocf::suse:SAPHanaTopology): Started hso-hana-vm-s1-2 (unmanaged)
rsc_SAPHanaTop_HSO_HDB00 (ocf::suse:SAPHanaTopology): Started hso-hana-vm-s2-1 (unmanaged)
rsc_SAPHanaTop_HSO_HDB00 (ocf::suse:SAPHanaTopology): Started hso-hana-vm-s2-2 (unmanaged)
rsc_SAPHanaTop_HSO_HDB00 (ocf::suse:SAPHanaTopology): Started hso-hana-vm-s2-0 (unmanaged)
Stopped: [ hso-hana-dm ]
Master/Slave Set: msl_SAPHanaCon_HSO_HDB00 [rsc_SAPHanaCon_HSO_HDB00] (unmanaged)
rsc_SAPHanaCon_HSO_HDB00 (ocf::suse:SAPHanaController): Slave hso-hana-vm-s1-1 (unmanaged)
rsc_SAPHanaCon_HSO_HDB00 (ocf::suse:SAPHanaController): Slave hso-hana-vm-s1-2 (unmanaged)
rsc_SAPHanaCon_HSO_HDB00 (ocf::suse:SAPHanaController): Slave hso-hana-vm-s2-1 (unmanaged)
rsc_SAPHanaCon_HSO_HDB00 (ocf::suse:SAPHanaController): Slave hso-hana-vm-s2-2 (unmanaged)
rsc_SAPHanaCon_HSO_HDB00 (ocf::suse:SAPHanaController): Master hso-hana-vm-s2-0 (unmanaged)
Stopped: [ hso-hana-dm hso-hana-vm-s1-0 ]
Resource Group: g_ip_HSO_HDB00
rsc_ip_HSO_HDB00 (ocf::heartbeat:IPaddr2): Started hso-hana-vm-s2-0 (unmanaged)
rsc_nc_HSO_HDB00 (ocf::heartbeat:anything): Started hso-hana-vm-s2-0 (unmanaged)

This command sample shows how to end the cluster maintenance mode:

crm configure property maintenance-mode=false

Another crm command loads the complete cluster configuration into an editor, so you can edit it. After you save the
changes, the cluster starts the appropriate actions:

crm configure edit


To look at the complete cluster configuration, use the crm show option:

crm configure show

After failures of cluster resources, the crm status command shows a list of Failed Actions . See the following
sample of this output:

Stack: corosync
Current DC: hso-hana-dm (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Thu Sep 13 07:30:44 2018
Last change: Thu Sep 13 07:30:20 2018 by root via crm_attribute on hso-hana-vm-s1-0

7 nodes configured
17 resources configured

Online: [ hso-hana-dm hso-hana-vm-s1-0 hso-hana-vm-s1-1 hso-hana-vm-s1-2 hso-hana-vm-s2-0 hso-hana-vm-s2-1 hso-hana-vm-s2-2 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started hso-hana-dm


Clone Set: cln_SAPHanaTop_HSO_HDB00 [rsc_SAPHanaTop_HSO_HDB00]
Started: [ hso-hana-vm-s1-0 hso-hana-vm-s1-1 hso-hana-vm-s1-2 hso-hana-vm-s2-0 hso-hana-vm-s2-1 hso-hana-vm-s2-2 ]
Stopped: [ hso-hana-dm ]
Master/Slave Set: msl_SAPHanaCon_HSO_HDB00 [rsc_SAPHanaCon_HSO_HDB00]
Masters: [ hso-hana-vm-s1-0 ]
Slaves: [ hso-hana-vm-s1-1 hso-hana-vm-s1-2 hso-hana-vm-s2-1 hso-hana-vm-s2-2 ]
Stopped: [ hso-hana-dm hso-hana-vm-s2-0 ]
Resource Group: g_ip_HSO_HDB00
rsc_ip_HSO_HDB00 (ocf::heartbeat:IPaddr2): Started hso-hana-vm-s1-0
rsc_nc_HSO_HDB00 (ocf::heartbeat:anything): Started hso-hana-vm-s1-0

Failed Actions:
* rsc_SAPHanaCon_HSO_HDB00_monitor_60000 on hso-hana-vm-s2-0 'unknown error' (1): call=86, status=complete,
exitreason='none',
last-rc-change='Wed Sep 12 17:01:28 2018', queued=0ms, exec=277663ms

It's necessary to do a cluster cleanup after failures. Use the crm command again, and use the command option
cleanup to get rid of these failed action entries. Name the corresponding cluster resource as follows:

crm resource cleanup rsc_SAPHanaCon_HSO_HDB00

The command should return output like the following sample:

Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-dm


Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-vm-s1-0
Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-vm-s1-1
Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-vm-s1-2
Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-vm-s2-0
Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-vm-s2-1
Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-vm-s2-2
Waiting for 7 replies from the CRMd....... OK

Failover or takeover
As discussed in Important notes, you shouldn't use a standard graceful shutdown to test the cluster failover or SAP
HANA HSR takeover. Instead, we recommend that you trigger a kernel panic, force a resource migration, or possibly
shut down all networks on the OS level of a VM. Another method is the crm <node> standby command. See the
SUSE document.
The following three sample commands can force a cluster failover:

echo c > /proc/sysrq-trigger

crm resource migrate msl_SAPHanaCon_HSO_HDB00 hso-hana-vm-s2-0 force

wicked ifdown eth0


wicked ifdown eth1
wicked ifdown eth2
......
wicked ifdown eth<n>
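
The crm standby method mentioned earlier can be used as well. The following is a hedged example with one of the
node names from this setup; putting a node into standby moves its resources away, and you bring it back online
after the test:

crm node standby hso-hana-vm-s1-0
# After the test, return the node to the cluster
crm node online hso-hana-vm-s1-0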

As described in Planned maintenance, a good way to monitor the cluster activities is to run SAPHanaSR-
showAttr with the watch command:

watch SAPHanaSR-showAttr

It also helps to look at the SAP HANA landscape status coming from an SAP Python script. The cluster setup is
looking for this status value. It becomes clear when you think about a worker node failure. If a worker node goes
down, SAP HANA doesn't immediately return an error for the health of the whole scale-out system.
There are some retries to avoid unnecessary failovers. The cluster reacts only if the status changes from Ok , return
value 4 , to error , return value 1 . So it's correct if the output from SAPHanaSR-showAttr shows a VM with the
state offline . But there's no activity yet to switch primary and secondary. No cluster activity gets triggered as long
as SAP HANA doesn't return an error.
You can monitor the SAP HANA landscape health status as user <HANA SID>adm by calling the SAP Python
script as follows. You might have to adapt the path:

watch python /hana/shared/HSO/exe/linuxx86_64/HDB_2.00.032.00.1533114046_eeaf4723ec52ed3935ae0dc9769c9411ed73fec5/python_support/landscapeHostConfiguration.py

The output of this command should look like the following sample. The Host Status column and the overall host
status are both important. The actual output is wider, with additional columns. To make the output table more
readable within this document, most columns on the right side were stripped:

| Host | Host | Host | Failover | Remove |


| | Active | Status | Status | Status |
| | | | | |
| ---------------- | ------ | ------ | -------- | ------ | .......
| hso-hana-vm-s2-0 | yes | ok | | |
| hso-hana-vm-s2-1 | yes | ok | | |
| hso-hana-vm-s2-2 | yes | ok | | |

overall host status: ok
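
Because the cluster evaluates the return code of this script, it can also help to run it once and print the exit code.
The following is a hedged example that relies on the cdpy alias, which the <HANA SID>adm shell environment
provides to change to the python_support directory:

# Run as <HANA SID>adm
cdpy
python landscapeHostConfiguration.py; echo "Return code: $?"
# Return code 4 corresponds to ok; the cluster only reacts when the value drops to 1 (error)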

There's another command to check current cluster activities. See the following command and the output tail after
the master node of the primary site was killed. You can see the list of transition actions like promoting the former
secondary master node, hso-hana-vm-s2-0 , as the new primary master. If everything is fine, and all activities are
finished, this Transition Summary list has to be empty.

crm_simulate -Ls

...........

Transition Summary:
* Fence hso-hana-vm-s1-0
* Stop rsc_SAPHanaTop_HSO_HDB00:1 (hso-hana-vm-s1-0)
* Demote rsc_SAPHanaCon_HSO_HDB00:1 (Master -> Stopped hso-hana-vm-s1-0)
* Promote rsc_SAPHanaCon_HSO_HDB00:5 (Slave -> Master hso-hana-vm-s2-0)
* Move rsc_ip_HSO_HDB00 (Started hso-hana-vm-s1-0 -> hso-hana-vm-s2-0)
* Move rsc_nc_HSO_HDB00 (Started hso-hana-vm-s1-0 -> hso-hana-vm-s2-0)

Planned maintenance
There are different use cases when it comes to planned maintenance. One question is whether it's just
infrastructure maintenance like changes on the OS level and disk configuration or a HANA upgrade. You can find
additional information in documents from SUSE like Towards Zero Downtime or SAP HANA SR Performance
Optimized Scenario. These documents also include samples that show how to manually migrate a primary.
Intense internal testing was done to verify the infrastructure maintenance use case. To avoid any issues related to
migrating the primary, we decided to always migrate a primary before putting a cluster into maintenance mode.
This way, it's not necessary to make the cluster forget about the former situation: which side was primary and
which was secondary.
There are two different situations in this regard:
Planned maintenance on the current secondary . In this case, you can just put the cluster into
maintenance mode and do the work on the secondary without affecting the cluster.
Planned maintenance on the current primary . So that users can continue to work during maintenance,
you need to force a failover. With this approach, you must trigger the cluster failover by Pacemaker and not
just on the SAP HANA HSR level. The Pacemaker setup automatically triggers the SAP HANA takeover. You
also need to accomplish the failover before you put the cluster into maintenance mode.
The procedure for maintenance on the current secondary site is as follows:
1. Put the cluster into maintenance mode.
2. Accomplish the work on the secondary site.
3. End the cluster maintenance mode.
The procedure for maintenance on the current primary site is more complex:
1. Manually trigger a failover or SAP HANA takeover via a Pacemaker resource migration. See details that follow.
2. SAP HANA on the former primary site gets shut down by the cluster setup.
3. Put the cluster into maintenance mode.
4. After the maintenance work is done, register the former primary as the new secondary site.
5. Clean up the cluster configuration. See details that follow.
6. End the cluster maintenance mode.
Migrating a resource adds an entry to the cluster configuration. An example is forcing a failover. You have to clean
up these entries before you end maintenance mode. See the following sample.
First, force a cluster failover by migrating the msl resource to the current secondary master node. This command
gives a warning that a move constraint was created:

crm resource migrate msl_SAPHanaCon_HSO_HDB00 force

INFO: Move constraint created for msl_SAPHanaCon_HSO_HDB00

Check the failover process via the command SAPHanaSR-showAttr . To monitor the cluster status, open a
dedicated shell window and start the command with watch :

watch SAPHanaSR-showAttr

The output should show the manual failover. The former secondary master node got promoted , in this sample,
hso-hana-vm-s2-0 . The former primary site was stopped, lss value 1 for former primary master node hso-
hana-vm-s1-0 :

Global cib-time prim sec srHook sync_state


------------------------------------------------------------
global Wed Sep 12 07:40:02 2018 HSOS2 - SFAIL SFAIL

Sites lpt lss mns srr


------------------------------------------
HSOS1 10 1 hso-hana-vm-s1-0 P
HSOS2 1536738002 4 hso-hana-vm-s2-0 P

Hosts clone_state node_state roles score site


----------------------------------------------------------------------------------
hso-hana-dm online
hso-hana-vm-s1-0 UNDEFINED online master1::worker: 150 HSOS1
hso-hana-vm-s1-1 DEMOTED online slave::worker: -10000 HSOS1
hso-hana-vm-s1-2 DEMOTED online slave::worker: -10000 HSOS1
hso-hana-vm-s2-0 PROMOTED online master1:master:worker:master 150 HSOS2
hso-hana-vm-s2-1 DEMOTED online slave:slave:worker:slave -10000 HSOS2
hso-hana-vm-s2-2 DEMOTED online slave:slave:worker:slave -10000 HSOS2

After the cluster failover and SAP HANA takeover, put the cluster into maintenance mode as described in
Pacemaker.
The commands SAPHanaSR-showAttr and crm status don't indicate anything about the constraints created by
the resource migration. One option to make these constraints visible is to show the complete cluster resource
configuration with the following command:

crm configure show

Within the cluster configuration, you find a new location constraint caused by the former manual resource
migration. This example entry starts with location cli- :

location cli-ban-msl_SAPHanaCon_HSO_HDB00-on-hso-hana-vm-s1-0 msl_SAPHanaCon_HSO_HDB00 role=Started -inf: hso-hana-vm-s1-0
Unfortunately, such constraints might impact the overall cluster behavior. So it's mandatory to remove them again
before you bring the whole system back up. With the unmigrate command, it's possible to clean up the location
constraints that were created before. The naming might be a bit confusing. It doesn't try to migrate the resource
back to the original VM from which it was migrated. It just removes the location constraints and also returns
corresponding information when you run the command:

crm resource unmigrate msl_SAPHanaCon_HSO_HDB00

INFO: Removed migration constraints for msl_SAPHanaCon_HSO_HDB00

At the end of the maintenance work, you stop the cluster maintenance mode as shown in Pacemaker.

hb_report to collect log files


To analyze Pacemaker cluster issues, it's helpful and also requested by SUSE support to run the hb_report utility. It
collects all the important log files that you need to analyze what happened. This sample call uses a start and end
time where a specific incident occurred. Also see Important notes:

hb_report -f "2018/09/13 07:36" -t "2018/09/13 08:00" /tmp/hb_report_log

The command tells you where it put the compressed log files:

The report is saved in /tmp/hb_report_log.tar.bz2


Report timespan: 09/13/18 07:36:00 - 09/13/18 08:00:00

You can then extract the individual files via the standard tar command:

tar -xvf hb_report_log.tar.bz2

When you look at the extracted files, you find all the log files. Most of them were put in separate directories for
every node in the cluster:

-rw-r--r-- 1 root root 13655 Sep 13 09:01 analysis.txt


-rw-r--r-- 1 root root 14422 Sep 13 09:01 description.txt
-rw-r--r-- 1 root root 0 Sep 13 09:01 events.txt
-rw-r--r-- 1 root root 275560 Sep 13 09:00 ha-log.txt
-rw-r--r-- 1 root root 26 Sep 13 09:00 ha-log.txt.info
drwxr-xr-x 4 root root 4096 Sep 13 09:01 hso-hana-dm
drwxr-xr-x 3 root root 4096 Sep 13 09:01 hso-hana-vm-s1-0
drwxr-xr-x 3 root root 4096 Sep 13 09:01 hso-hana-vm-s1-1
drwxr-xr-x 3 root root 4096 Sep 13 09:01 hso-hana-vm-s1-2
drwxr-xr-x 3 root root 4096 Sep 13 09:01 hso-hana-vm-s2-0
drwxr-xr-x 3 root root 4096 Sep 13 09:01 hso-hana-vm-s2-1
drwxr-xr-x 3 root root 4096 Sep 13 09:01 hso-hana-vm-s2-2
-rw-r--r-- 1 root root 264726 Sep 13 09:00 journal.log

Within the time range that was specified, the current master node hso-hana-vm-s1-0 was killed. You can find
entries related to this event in the journal.log :
2018-09-13T07:38:01+0000 hso-hana-vm-s2-1 su[93494]: (to hsoadm) root on none
2018-09-13T07:38:01+0000 hso-hana-vm-s2-1 su[93494]: pam_unix(su-l:session): session opened for user hsoadm by
(uid=0)
2018-09-13T07:38:01+0000 hso-hana-vm-s2-1 systemd[1]: Started Session c44290 of user hsoadm.
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 corosync[28302]: [TOTEM ] A new membership (10.0.0.13:120996) was
formed. Members left: 1
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 corosync[28302]: [TOTEM ] Failed to receive the leave message.
failed: 1
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 attrd[28313]: notice: Node hso-hana-vm-s1-0 state is now lost
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 attrd[28313]: notice: Removing all hso-hana-vm-s1-0 attributes for
peer loss
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 attrd[28313]: notice: Purged 1 peer with id=1 and/or uname=hso-
hana-vm-s1-0 from the membership cache
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 stonith-ng[28311]: notice: Node hso-hana-vm-s1-0 state is now lost
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 stonith-ng[28311]: notice: Purged 1 peer with id=1 and/or
uname=hso-hana-vm-s1-0 from the membership cache
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 cib[28310]: notice: Node hso-hana-vm-s1-0 state is now lost
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 corosync[28302]: [QUORUM] Members[6]: 7 2 3 4 5 6
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 corosync[28302]: [MAIN ] Completed service synchronization, ready
to provide service.
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 crmd[28315]: notice: Node hso-hana-vm-s1-0 state is now lost
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 pacemakerd[28308]: notice: Node hso-hana-vm-s1-0 state is now lost
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 cib[28310]: notice: Purged 1 peer with id=1 and/or uname=hso-hana-
vm-s1-0 from the membership cache
2018-09-13T07:38:03+0000 hso-hana-vm-s2-1 su[93494]: pam_unix(su-l:session): session closed for user hsoadm

Another example is the Pacemaker log file on the secondary master, which became the new primary master. This
excerpt shows that the status of the killed primary master node was set to offline :

Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 3 still member of
group stonith-ng (peer=hso-hana-vm-s1-2, counter=5.1)
Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 4 still member of
group stonith-ng (peer=hso-hana-vm-s2-0, counter=5.2)
Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 5 still member of
group stonith-ng (peer=hso-hana-vm-s2-1, counter=5.3)
Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 6 still member of
group stonith-ng (peer=hso-hana-vm-s2-2, counter=5.4)
Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 7 still member of
group stonith-ng (peer=hso-hana-dm, counter=5.5)
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: pcmk_cpg_membership: Node 1 left group crmd
(peer=hso-hana-vm-s1-0, counter=5.0)
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: crm_update_peer_proc: pcmk_cpg_membership:
Node hso-hana-vm-s1-0[1] - corosync-cpg is now offline
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: peer_update_callback: Client hso-hana-vm-s1-
0/peer now has status [offline] (DC=hso-hana-dm, changed=4000000)
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: pcmk_cpg_membership: Node 2 still member of
group crmd (peer=hso-hana-vm-s1-1, counter=5.0)

SAP HANA global.ini


The following excerpts are from the SAP HANA global.ini file on cluster site 2. This example shows the hostname
resolution entries for using different networks for SAP HANA internode communication and HSR:

[communication]
tcp_keepalive_interval = 20
internal_network = 10.0.2/24
listeninterface = .internal
[internal_hostname_resolution]
10.0.2.40 = hso-hana-vm-s2-0
10.0.2.42 = hso-hana-vm-s2-2
10.0.2.41 = hso-hana-vm-s2-1

[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1

[system_replication_communication]
listeninterface = .internal

[system_replication_hostname_resolution]
10.0.1.30 = hso-hana-vm-s1-0
10.0.1.31 = hso-hana-vm-s1-1
10.0.1.32 = hso-hana-vm-s1-2
10.0.1.40 = hso-hana-vm-s2-0
10.0.1.41 = hso-hana-vm-s2-1
10.0.1.42 = hso-hana-vm-s2-2

Hawk
The cluster solution provides a browser interface that offers a GUI for users who prefer menus and graphics to
having all the commands on the shell level. To use the browser interface, replace <node> with an actual SAP
HANA node in the following URL. Then enter the cluster credentials (user hacluster ):

https://<node>:7630
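
If the page doesn't load, you can check whether the Hawk web service is running on that node. This is a hedged
example; the service is provided by the hawk2 package on SLES:

systemctl status hawk
# Enable and start it if necessary
systemctl enable hawk
systemctl start hawk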

This screenshot shows the cluster dashboard:


This example shows the location constraints caused by a cluster resource migration as explained in Planned
maintenance:

You can also upload the hb_report output in Hawk under History , shown as follows. See hb_report to collect log
files:

With the History Explorer , you can then go through all the cluster transitions included in the hb_report output:

This final screenshot shows the Details section of a single transition. The cluster reacted on a primary master node
crash, node hso-hana-vm-s1-0 . It's now promoting the secondary node as the new master, hso-hana-vm-s2-0 :

Next steps
This troubleshooting guide describes high availability for SAP HANA in a scale-out configuration. In addition to the
database, another important component in an SAP landscape is the SAP NetWeaver stack. Learn about high
availability for SAP NetWeaver on Azure virtual machines that use SUSE Enterprise Linux Server.
High availability of SAP HANA scale-out system on
Red Hat Enterprise Linux
12/22/2020

This article describes how to deploy a highly available SAP HANA system in a scale-out configuration with HANA
system replication (HSR) and Pacemaker on Azure Red Hat Enterprise Linux virtual machines (VMs). The shared file
systems in the presented architecture are provided by Azure NetApp Files and are mounted over NFS.
In the example configurations, installation commands, and so on, the HANA instance is 03 and the HANA system
ID is HN1 . The examples are based on HANA 2.0 SP4 and Red Hat Enterprise Linux for SAP 7.6.
Before you begin, refer to the following SAP notes and papers:
SAP Note 1928533 includes:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
The required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553: Lists prerequisites for SAP-supported SAP software deployments in Azure
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632: Contains detailed information about all monitoring metrics reported for SAP in Azure
SAP Note 2191498: Contains the required SAP Host Agent version for Linux in Azure
SAP Note 2243692: Contains information about SAP licensing on Linux in Azure
SAP Note 1999351: Contains additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP
SAP Note 1900823: Contains information about SAP HANA storage requirements
SAP Community Wiki: Contains all required SAP notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA Network Requirements
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Red Hat Enterprise Linux Networking Guide
How do I configure SAP HANA Scale-Out System Replication in a Pacemaker cluster with HANA file
systems on NFS shares
Azure-specific RHEL documentation:
Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure
Red Hat Enterprise Linux Solution for SAP HANA Scale-Out and System Replication
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
Azure NetApp Files documentation
Overview
One method to achieve HANA high availability for HANA scale-out installations is to configure HANA system
replication and protect the solution with a Pacemaker cluster to allow automatic failover. When an active node fails,
the cluster fails over the HANA resources to the other site.
The presented configuration shows three HANA nodes on each site, plus a majority maker node to prevent a split-
brain scenario. The instructions can be adapted to include more VMs as HANA DB nodes.
The HANA shared file system /hana/shared in the presented architecture is provided by Azure NetApp Files. It is
mounted via NFSv4.1 on each HANA node in the same HANA system replication site. File systems /hana/data and
/hana/log are local file systems and are not shared between the HANA DB nodes. SAP HANA will be installed in
non-shared mode.

TIP
For recommended SAP HANA storage configurations, see SAP HANA Azure VMs storage configurations.

In the preceding diagram, three subnets are represented within one Azure virtual network, following the SAP
HANA network recommendations:
for client communication - client 10.23.0.0/24
for internal HANA inter-node communication - inter 10.23.1.128/26
for HANA system replication - hsr 10.23.1.192/26
As /hana/data and /hana/log are deployed on local disks, it is not necessary to deploy separate subnet and
separate virtual network cards for communication to the storage.
The Azure NetApp Files volumes are deployed in a separate subnet delegated to Azure NetApp Files
(https://docs.microsoft.com/azure/azure-netapp-files/azure-netapp-files-delegate-subnet): anf 10.23.1.0/26.

Set up the infrastructure


In the instructions that follow, we assume that you've already created the resource group, the Azure virtual
network with three Azure network subnets: client , inter and hsr .
Deploy Linux virtual machines via the Azure portal
1. Deploy the Azure VMs.
For the configuration presented in this document, deploy seven virtual machines:
three virtual machines to serve as HANA DB nodes for HANA replication site 1: hana-s1-db1 , hana-s1-
db2 and hana-s1-db3
three virtual machines to serve as HANA DB nodes for HANA replication site 2: hana-s2-db1 , hana-s2-
db2 and hana-s2-db3
a small virtual machine to serve as majority maker: hana-s-mm
The VMs deployed as SAP HANA DB nodes should be certified by SAP for HANA, as published in the SAP
HANA Hardware directory. When deploying the HANA DB nodes, make sure that Accelerated Networking is
selected.
For the majority maker node, you can deploy a small VM, as this VM doesn't run any of the SAP HANA
resources. The majority maker VM is used in the cluster configuration to achieve an odd number of cluster
nodes and avoid a split-brain scenario. The majority maker VM only needs one virtual network interface in the
client subnet in this example.

Deploy local managed disks for /hana/data and /hana/log . The minimum recommended storage
configuration for /hana/data and /hana/log is described in SAP HANA Azure VMs storage configurations.
Deploy the primary network interface for each VM in the client virtual network subnet.
When the VM is deployed via Azure portal, the network interface name is automatically generated. In these
instructions for simplicity we'll refer to the automatically generated, primary network interfaces, which are
attached to the client Azure virtual network subnet as hana-s1-db1-client , hana-s1-db2-client ,
hana-s1-db3-client , and so on.

IMPORTANT
Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of
SAP HANA certified VM types and OS releases for those types, go to the SAP HANA certified IaaS platforms site.
Click into the details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.

2. Create six network interfaces, one for each HANA DB virtual machine, in the inter virtual network subnet
(in this example, hana-s1-db1-inter , hana-s1-db2-inter , hana-s1-db3-inter , hana-s2-db1-inter ,
hana-s2-db2-inter , and hana-s2-db3-inter ).
3. Create six network interfaces, one for each HANA DB virtual machine, in the hsr virtual network subnet (in
this example, hana-s1-db1-hsr , hana-s1-db2-hsr , hana-s1-db3-hsr , hana-s2-db1-hsr , hana-s2-
db2-hsr , and hana-s2-db3-hsr ).
4. Attach the newly created virtual network interfaces to the corresponding virtual machines:
a. Go to the virtual machine in the Azure portal.
b. In the left pane, select Virtual Machines . Filter on the virtual machine name (for example, hana-s1-
db1 ), and then select the virtual machine.
c. In the Overview pane, select Stop to deallocate the virtual machine.
d. Select Networking , and then attach the network interface. In the Attach network interface drop-down
list, select the already created network interfaces for the inter and hsr subnets.
e. Select Save .
f. Repeat steps b through e for the remaining virtual machines (in our example, hana-s1-db2 , hana-s1-
db3 , hana-s2-db1 , hana-s2-db2 and hana-s2-db3 ).
g. Leave the virtual machines in stopped state for now. Next, we'll enable accelerated networking for all
newly attached network interfaces.
5. Enable accelerated networking for the additional network interfaces for the inter and hsr subnets by
doing the following steps:
a. Open Azure Cloud Shell in the Azure portal.
b. Execute the following commands to enable accelerated networking for the additional network interfaces,
which are attached to the inter and hsr subnets.

az network nic update --id /subscriptions/your subscription/resourceGroups/your resource


group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-inter --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-inter --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-inter --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-inter --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-inter --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-inter --accelerated-networking true

az network nic update --id /subscriptions/your subscription/resourceGroups/your resource


group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-hsr --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-hsr --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-hsr --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-hsr --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-hsr --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-hsr --accelerated-networking true

6. Start the HANA DB virtual machines
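
As an optional verification, you can confirm that accelerated networking is now enabled on the additional network
interfaces. The following hedged example checks one interface; replace the resource group placeholder with your
own value:

az network nic show --resource-group "your resource group" --name hana-s1-db1-inter --query enableAcceleratedNetworking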


Deploy Azure Load Balancer
1. We recommend using standard load balancer. Follow these configuration steps to deploy standard load
balancer:
a. First, create a front-end IP pool:
a. Open the load balancer, select frontend IP pool , and select Add .
b. Enter the name of the new front-end IP pool (for example, hana-frontend ).
c. Set the Assignment to Static and enter the IP address (for example, 10.23.0.18 ).
d. Select OK .
e. After the new front-end IP pool is created, note the pool IP address.
b. Next, create a back-end pool and add all cluster VMs to the backend pool:
a. Open the load balancer, select backend pools , and select Add .
b. Enter the name of the new back-end pool (for example, hana-backend ).
c. Select Add a virtual machine .
d. Select Virtual machine .
e. Select the virtual machines of the SAP HANA cluster and their IP addresses for the client subnet.
f. Select Add .
c. Next, create a health probe:
a. Open the load balancer, select health probes , and select Add .
b. Enter the name of the new health probe (for example, hana-hp ).
c. Select TCP as the protocol and port 62503 . Keep the Interval value set to 5, and the Unhealthy
threshold value set to 2.
d. Select OK .
d. Next, create the load-balancing rules:
a. Open the load balancer, select load balancing rules , and select Add .
b. Enter the name of the new load balancer rule (for example, hana-lb ).
c. Select the front-end IP address, the back-end pool, and the health probe that you created earlier
(for example, hana-frontend , hana-backend and hana-hp ).
d. Select HA Ports .
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP .
g. Select OK .

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure
Load balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.

NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard
Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to
allow routing to public end points. For details on how to achieve outbound connectivity see Public endpoint
connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.

IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will
cause the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health
probes. See also SAP note 2382421.
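
If you prefer to script the load balancer deployment instead of using the portal, the following Azure CLI sketch
creates equivalent resources. The resource group and virtual network names are placeholders that you need to
replace, and the cluster VMs still have to be added to the back-end pool via their NIC IP configurations:

az network lb create --resource-group "your resource group" --name hana-lb --sku Standard --vnet-name "your vnet" --subnet client --frontend-ip-name hana-frontend --private-ip-address 10.23.0.18 --backend-pool-name hana-backend

az network lb probe create --resource-group "your resource group" --lb-name hana-lb --name hana-hp --protocol tcp --port 62503 --interval 5 --threshold 2

az network lb rule create --resource-group "your resource group" --lb-name hana-lb --name hana-lb-rule --protocol All --frontend-port 0 --backend-port 0 --frontend-ip-name hana-frontend --backend-pool-name hana-backend --probe-name hana-hp --idle-timeout 30 --floating-ip true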

Deploy the Azure NetApp Files infrastructure


Deploy the ANF volumes for the /hana/shared file system. You will need a separate /hana/shared volume for each
HANA system replication site. For more information, see Set up the Azure NetApp Files infrastructure.
In this example, the following Azure NetApp Files volumes were used:
volume HN1-shared-s1 (nfs://10.23.1.7/HN1-shared-s1)
volume HN1-shared-s2 (nfs://10.23.1.7/HN1-shared-s2)

Operating system configuration and preparation


The instructions in the next sections are prefixed with one of the following abbreviations:
[A] : Applicable to all nodes
[AH] : Applicable to all HANA DB nodes
[M] : Applicable to the majority maker node
[AH1] : Applicable to all HANA DB nodes on SITE 1
[AH2] : Applicable to all HANA DB nodes on SITE 2
[1] : Applicable only to HANA DB node 1, SITE 1
[2] : Applicable only to HANA DB node 1, SITE 2
Configure and prepare your OS by doing the following steps:
1. [A] Maintain the host files on the virtual machines. Include entries for all subnets. The following entries
were added to /etc/hosts for this example.

# Client subnet
10.23.0.11 hana-s1-db1
10.23.0.12 hana-s1-db2
10.23.0.13 hana-s1-db3
10.23.0.14 hana-s2-db1
10.23.0.15 hana-s2-db2
10.23.0.16 hana-s2-db3
10.23.0.17 hana-s-mm
# Internode subnet
10.23.1.138 hana-s1-db1-inter
10.23.1.139 hana-s1-db2-inter
10.23.1.140 hana-s1-db3-inter
10.23.1.141 hana-s2-db1-inter
10.23.1.142 hana-s2-db2-inter
10.23.1.143 hana-s2-db3-inter
# HSR subnet
10.23.1.202 hana-s1-db1-hsr
10.23.1.203 hana-s1-db2-hsr
10.23.1.204 hana-s1-db3-hsr
10.23.1.205 hana-s2-db1-hsr
10.23.1.206 hana-s2-db2-hsr
10.23.1.207 hana-s2-db3-hsr

2. [A] Install the NFS client package.


yum install nfs-utils

3. [AH] Red Hat for HANA configuration.


Configure RHEL as described in https://access.redhat.com/solutions/2447641 and in the following SAP
notes:
2292690 - SAP HANA DB: Recommended OS settings for RHEL 7
2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8
2455582 - Linux: Running SAP applications compiled with GCC 6.x
2593824 - Linux: Running SAP applications compiled with GCC 7.x
2886607 - Linux: Running SAP applications compiled with GCC 9.x

Prepare the file systems


Mount the shared file systems
In this example, the shared HANA file systems are deployed on Azure NetApp Files and mounted over NFSv4.
1. [AH] Create mount points for the HANA database volumes.

mkdir -p /hana/shared

2. [AH] Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp
Files domain, that is, defaultv4iddomain.com and the mapping is set to nobody .
This step is only needed, if using Azure NetAppFiles NFSv4.1.
IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure
NetApp volumes that are mounted on the VMs will be displayed as nobody .

sudo cat /etc/idmapd.conf


# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

3. [AH] Verify nfs4_disable_idmapping . It should be set to Y . To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create the
directory under /sys/module, because access is reserved for the kernel and drivers.
This step is only needed, if using Azure NetAppFiles NFSv4.1.

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.9.0.4:/HN1-shared /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

For more information on how to change nfs4_disable_idmapping parameter, see


https://access.redhat.com/solutions/1749883.
4. [AH1] Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs.

sudo mount -o
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys
10.23.1.7:/HN1-shared-s1 /hana/shared

5. [AH2] Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs.

sudo mount -o
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys
10.23.1.7:/HN1-shared-s2 /hana/shared

6. [AH] Verify that the corresponding /hana/shared/ file systems are mounted on all HANA DB VMs with NFS
protocol version NFSv4.1 .
sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from SITE 1, hana-s1-db1
/hana/shared from 10.23.1.7:/HN1-shared-s1
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cli
entaddr=10.23.0.11,local_lock=none,addr=10.23.1.7
# Example from SITE 2, hana-s2-db1
/hana/shared from 10.23.1.7:/HN1-shared-s2
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cli
entaddr=10.23.0.14,local_lock=none,addr=10.23.1.7

Prepare the data and log local file systems


In the presented configuration, file systems /hana/data and /hana/log are deployed on managed disk and are
locally attached to each HANA DB VM. You will need to execute the steps to create the local data and log volumes
on each HANA DB virtual machine.
Set up the disk layout with Logical Volume Manager (LVM) . The following example assumes that each HANA
virtual machine has three data disks attached, that are used to create two volumes.
1. [AH] List all of the available disks:

ls /dev/disk/azure/scsi1/lun*

Example output:

/dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1 /dev/disk/azure/scsi1/lun2

2. [AH] Create physical volumes for all of the disks that you want to use:

sudo pvcreate /dev/disk/azure/scsi1/lun0


sudo pvcreate /dev/disk/azure/scsi1/lun1
sudo pvcreate /dev/disk/azure/scsi1/lun2

3. [AH] Create the volume groups: one volume group for the data files and one for the log files:

sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1


sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2

4. [AH] Create the logical volumes. A linear volume is created when you use lvcreate without the -i switch.
We suggest that you create a striped volume for better I/O performance, and align the stripe sizes to the
values documented in SAP HANA VM storage configurations. The -i argument should be the number of
the underlying physical volumes and the -I argument is the stripe size. In this document, two physical
volumes are used for the data volume, so the -i switch argument is set to 2 . The stripe size for the data
volume is 256 KiB . One physical volume is used for the log volume, so no -i or -I switches are explicitly
used for the log volume commands.
IMPORTANT
Use the -i switch and set it to the number of the underlying physical volume when you use more than one
physical volume for each data or log volumes. Use the -I switch to specify the stripe size, when creating a striped
volume.
See SAP HANA VM storage configurations for recommended storage configurations, including stripe sizes and
number of disks.

sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1


sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
sudo mkfs.xfs /dev/vg_hana_data_HN1/hana_data
sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log

5. [AH] Create the mount directories and copy the UUID of all of the logical volumes:

sudo mkdir -p /hana/data/HN1


sudo mkdir -p /hana/log/HN1
# Write down the ID of /dev/vg_hana_data_HN1/hana_data and /dev/vg_hana_log_HN1/hana_log
sudo blkid

6. [AH] Create fstab entries for the logical volumes and mount:

sudo vi /etc/fstab

Insert the following line in the /etc/fstab file:

/dev/disk/by-uuid/UUID of /dev/mapper/vg_hana_data_HN1-hana_data /hana/data/HN1 xfs defaults,nofail 0 2
/dev/disk/by-uuid/UUID of /dev/mapper/vg_hana_log_HN1-hana_log /hana/log/HN1 xfs defaults,nofail 0 2

Mount the new volumes:

sudo mount -a
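
As an optional, hedged check, you can confirm that the new file systems are mounted:

df -h /hana/data/HN1 /hana/log/HN1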

Installation
In this example for deploying SAP HANA in scale-out configuration with HSR on Azure VMs, we've used HANA 2.0
SP4.
Prepare for HANA installation
1. [AH] Before the HANA installation, set the root password. You can disable the root password after the
installation has been completed. Execute as root command passwd .
2. [1,2] Change the permissions on /hana/shared

chmod 775 /hana/shared

3. [1] Verify that you can log in via SSH to the HANA DB VMs in this site hana-s1-db2 and hana-s1-db3 ,
without being prompted for a password.
If that is not the case, exchange ssh keys, as documented in Using Key-based Authentication.
ssh root@hana-s1-db2
ssh root@hana-s1-db3

4. [2] Verify that you can log in via SSH to the HANA DB VMs in this site hana-s2-db2 and hana-s2-db3 ,
without being prompted for a password.
If that is not the case, exchange ssh keys, as documented in Using Key-based Authentication.

ssh root@hana-s2-db2
ssh root@hana-s2-db3

5. [AH] Install additional packages, which are required for HANA 2.0 SP4. For more information, see SAP Note
2593824 for RHEL 7.

# If using RHEL 7
yum install libgcc_s1 libstdc++6 compat-sap-c++-7 libatomic1
# If using RHEL 8
yum install libatomic libtool-ltdl.x86_64

6. [A] Disable the firewall temporarily, so that it doesn't interfere with the HANA installation. You can re-enable
it after the HANA installation is done.

# Execute as root
systemctl stop firewalld
systemctl disable firewalld

HANA installation on the first node on each site


1. [1] Install SAP HANA by following the instructions in the SAP HANA 2.0 Installation and Update guide. In
the instructions that follow, we show the SAP HANA installation on the first node on SITE 1.
a. Start the hdblcm program as root from the HANA installation software directory. Use the
internal_network parameter and pass the address space of the subnet that is used for the internal HANA
inter-node communication.

./hdblcm --internal_network=10.23.1.128/26

b. At the prompt, enter the following values:


For Choose an action : enter 1 (for install)
For Additional components for installation : enter 2, 3
For installation path: press Enter (defaults to /hana/shared)
For Local Host Name : press Enter to accept the default
For Do you want to add hosts to the system? : enter n
For SAP HANA System ID : enter HN1
For Instance number [00]: enter 03
For Local Host Worker Group [default]: press Enter to accept the default
For Select System Usage / Enter index [4] : enter 4 (for custom)
For Location of Data Volumes [/hana/data/HN1]: press Enter to accept the default
For Location of Log Volumes [/hana/log/HN1]: press Enter to accept the default
For Restrict maximum memory allocation? [n]: enter n
For Certificate Host Name For Host hana-s1-db1 [hana-s1-db1]: press Enter to accept the default
For SAP Host Agent User (sapadm) Password : enter the password
For Confirm SAP Host Agent User (sapadm) Password : enter the password
For System Administrator (hn1adm) Password : enter the password
For System Administrator Home Directory [/usr/sap/HN1/home]: press Enter to accept the default
For System Administrator Login Shell [/bin/sh]: press Enter to accept the default
For System Administrator User ID [1001]: press Enter to accept the default
For Enter ID of User Group (sapsys) [79]: press Enter to accept the default
For System Database User (system) Password : enter the system's password
For Confirm System Database User (system) Password : enter system's password
For Restart system after machine reboot? [n]: enter n
For Do you want to continue (y/n) : validate the summary and if everything looks good, enter y
2. [2] Repeat the preceding step to install SAP HANA on the first node on SITE 2.
3. [1,2] Verify global.ini
Display global.ini, and ensure that the configuration for the internal SAP HANA inter-node communication is
in place. Verify the communication section. It should have the address space for the inter subnet, and
listeninterface should be set to .internal . Verify the internal_hostname_resolution section. It should
have the IP addresses for the HANA virtual machines that belong to the inter subnet.

sudo cat /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini


# Example from SITE1
[communication]
internal_network = 10.23.1.128/26
listeninterface = .internal
[internal_hostname_resolution]
10.23.1.138 = hana-s1-db1
10.23.1.139 = hana-s1-db2
10.23.1.140 = hana-s1-db3

4. [1,2] Prepare global.ini for installation in non-shared environment, as described in SAP note 2080991.

sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
[persistence]
basepath_shared = no

5. [1,2] Restart SAP HANA to activate the changes.

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem


sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem

6. [1,2] Verify that the client interface will be using the IP addresses from the client subnet for
communication.

# Execute as hn1adm
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB 'select * from
SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result - example from SITE 2
"hana-s2-db1","net_publicname","10.23.0.14"

For information about how to verify the configuration, see SAP Note 2183363 - Configuration of SAP HANA
internal network.
7. [AH] Change permissions on the data and log directories to avoid HANA installation error.
sudo chmod o+w -R /hana/data /hana/log

8. [1] Install the secondary HANA nodes. The example instructions in this step are for SITE 1.
a. Start the resident hdblcm program as root .

cd /hana/shared/HN1/hdblcm
./hdblcm

b. At the prompt, enter the following values:


For Choose an action : enter 2 (for add hosts)
For Enter comma separated host names to add : hana-s1-db2, hana-s1-db3
For Additional components for installation : enter 2, 3
For Enter Root User Name [root] : press Enter to accept the default
For Select roles for host 'hana-s1-db2' [1] : 1 (for worker)
For Enter Host Failover Group for host 'hana-s1-db2' [default] : press Enter to accept the default
For Enter Storage Partition Number for host 'hana-s1-db2' [< >] : press Enter to accept the
default
For Enter Worker Group for host 'hana-s1-db2' [default] : press Enter to accept the default
For Select roles for host 'hana-s1-db3' [1] : 1 (for worker)
For Enter Host Failover Group for host 'hana-s1-db3' [default] : press Enter to accept the default
For Enter Storage Partition Number for host 'hana-s1-db3' [< >] : press Enter to accept the
default
For Enter Worker Group for host 'hana-s1-db3' [default] : press Enter to accept the default
For System Administrator (hn1adm) Password : enter the password
For Enter SAP Host Agent User (sapadm) Password : enter the password
For Confirm SAP Host Agent User (sapadm) Password : enter the password
For Certificate Host Name For Host hana-s1-db2 [hana-s1-db2]: press Enter to accept the default
For Certificate Host Name For Host hana-s1-db3 [hana-s1-db3]: press Enter to accept the default
For Do you want to continue (y/n) : validate the summary and if everything looks good, enter y
9. [2] Repeat the preceding step to install the secondary SAP HANA nodes on SITE 2.

Configure SAP HANA 2.0 System Replication


1. [1] Configure System Replication on SITE 1:
Back up the databases as hn1adm:

hdbsql -d SYSTEMDB -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupSYS')"


hdbsql -d HN1 -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupHN1')"

Copy the system PKI files to the secondary site:

scp /usr/sap/HN1/SYS/global/security/rsecssfs/data/SSFS_HN1.DAT hana-s2-


db1:/usr/sap/HN1/SYS/global/security/rsecssfs/data/
scp /usr/sap/HN1/SYS/global/security/rsecssfs/key/SSFS_HN1.KEY hana-s2-
db1:/usr/sap/HN1/SYS/global/security/rsecssfs/key/

Create the primary site:


hdbnsutil -sr_enable --name=HANA_S1

2. [2] Configure System Replication on SITE 2:


Register the second site to start the system replication. Run the following command as <hanasid>adm:

sapcontrol -nr 03 -function StopWait 600 10


hdbnsutil -sr_register --remoteHost=hana-s1-db1 --remoteInstance=03 --replicationMode=sync --
name=HANA_S2
sapcontrol -nr 03 -function StartSystem

3. [1] Check replication status


Check the replication status and wait until all databases are in sync.

sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"


# | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary
| Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication |
# | | | | | | | | Host
| Port | Site ID | Site Name | Active Status | Mode | Status | Status Details |
# | -------- | ------------- | ----- | ------------ | --------- | ------- | --------- | -------------
| --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- |
# | HN1 | hana-s1-db3 | 30303 | indexserver | 5 | 1 | HANA_S1 | hana-s2-db3
| 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
# | SYSTEMDB | hana-s1-db1 | 30301 | nameserver | 1 | 1 | HANA_S1 | hana-s2-db1
| 30301 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
# | HN1 | hana-s1-db1 | 30307 | xsengine | 2 | 1 | HANA_S1 | hana-s2-db1
| 30307 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
# | HN1 | hana-s1-db1 | 30303 | indexserver | 3 | 1 | HANA_S1 | hana-s2-db1
| 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
# | HN1 | hana-s1-db2 | 30303 | indexserver | 4 | 1 | HANA_S1 | hana-s2-db2
| 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
#
# status system replication site "2": ACTIVE
# overall system replication status: ACTIVE
#
# Local System Replication State
#
# mode: PRIMARY
# site id: 1
# site name: HANA_S1

4. [1,2] Change the HANA configuration so that communication for HANA system replication is directed
through the HANA system replication virtual network interfaces.
Stop HANA on both sites

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB

Edit global.ini to add the host mapping for HANA system replication: use the IP addresses from the hsr
subnet.
sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
#Add the section
[system_replication_hostname_resolution]
10.23.1.202 = hana-s1-db1
10.23.1.203 = hana-s1-db2
10.23.1.204 = hana-s1-db3
10.23.1.205 = hana-s2-db1
10.23.1.206 = hana-s2-db2
10.23.1.207 = hana-s2-db3

Start HANA on both sites

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB

For more information, see Host Name resolution for System Replication.
5. [AH] Re-enable the firewall.
Re-enable the firewall

# Execute as root
systemctl start firewalld
systemctl enable firewalld

Open the necessary firewall ports. You will need to adjust the ports for your HANA instance number.

IMPORTANT
Create firewall rules to allow HANA inter node communication and client traffic. The required ports are listed
on TCP/IP Ports of All SAP Products. The following commands are just an example. In this scenario, system
number 03 is used.
# Execute as root
sudo firewall-cmd --zone=public --add-port=30301/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30301/tcp
sudo firewall-cmd --zone=public --add-port=30303/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30303/tcp
sudo firewall-cmd --zone=public --add-port=30306/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30306/tcp
sudo firewall-cmd --zone=public --add-port=30307/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30307/tcp
sudo firewall-cmd --zone=public --add-port=30313/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30313/tcp
sudo firewall-cmd --zone=public --add-port=30315/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30315/tcp
sudo firewall-cmd --zone=public --add-port=30317/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30317/tcp
sudo firewall-cmd --zone=public --add-port=30340/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30340/tcp
sudo firewall-cmd --zone=public --add-port=30341/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30341/tcp
sudo firewall-cmd --zone=public --add-port=30342/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30342/tcp
sudo firewall-cmd --zone=public --add-port=1128/tcp --permanent
sudo firewall-cmd --zone=public --add-port=1128/tcp
sudo firewall-cmd --zone=public --add-port=1129/tcp --permanent
sudo firewall-cmd --zone=public --add-port=1129/tcp
sudo firewall-cmd --zone=public --add-port=40302/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40302/tcp
sudo firewall-cmd --zone=public --add-port=40301/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40301/tcp
sudo firewall-cmd --zone=public --add-port=40307/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40307/tcp
sudo firewall-cmd --zone=public --add-port=40303/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40303/tcp
sudo firewall-cmd --zone=public --add-port=40340/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40340/tcp
sudo firewall-cmd --zone=public --add-port=50313/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50313/tcp
sudo firewall-cmd --zone=public --add-port=50314/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50314/tcp
sudo firewall-cmd --zone=public --add-port=30310/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30310/tcp
sudo firewall-cmd --zone=public --add-port=30302/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30302/tcp

Create a Pacemaker cluster


Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create a basic Pacemaker cluster
for this HANA server. Include all virtual machines, including the majority maker in the cluster.

IMPORTANT
Don't set quorum expected-votes to 2, as this is not a two-node cluster.
Make sure that the cluster property concurrent-fencing is enabled, so that node fencing is deserialized.
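
To apply and review the fencing property mentioned above, the following is a minimal sketch, executed as root on one cluster node:

# Enable concurrent (deserialized) fencing and list the configured cluster properties
pcs property set concurrent-fencing=true
pcs property list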

Create file system resources


1. [1,2] Stop SAP HANA on both replication sites. Execute as <sid>adm.

sapcontrol -nr 03 -function StopSystem


2. [AH] Unmount file system /hana/shared , which was temporarily mounted for the installation, on all HANA
DB VMs. You will need to stop any processes and sessions that are using the file system before you can
unmount it.

umount /hana/shared

3. [1] Create the file system cluster resources for /hana/shared in disabled state. The resources are created
with the option --disabled , because you have to define the location constraints, before the mounts are
enabled.

# /hana/shared file system for site 1
pcs resource create fs_hana_shared_s1 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s1 directory=/hana/shared \
  fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,intr,noatime,sec=sys,vers=4.1,lock,_netdev' \
  op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 \
  op start interval=0 timeout=120 op stop interval=0 timeout=120

# /hana/shared file system for site 2
pcs resource create fs_hana_shared_s2 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s2 directory=/hana/shared \
  fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,intr,noatime,sec=sys,vers=4.1,lock,_netdev' \
  op monitor interval=20s on-fail=fence timeout=40s OCF_CHECK_LEVEL=20 \
  op start interval=0 timeout=120 op stop interval=0 timeout=120

# clone the /hana/shared file system resources for both site1 and site2
pcs resource clone fs_hana_shared_s1 meta clone-node-max=1 interleave=true
pcs resource clone fs_hana_shared_s2 meta clone-node-max=1 interleave=true

OCF_CHECK_LEVEL=20 attribute is added to the monitor operation, so that monitor operations perform a
read/write test on the file system. Without this attribute, the monitor operation only verifies that the file
system is mounted. This can be a problem because, when connectivity is lost, the file system may remain
mounted despite being inaccessible.
on-fail=fence attribute is also added to the monitor operation. With this option, if the monitor operation
fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all
resources that depend on the failed resource, then restart the failed resource, and then start all the resources
that depend on it. Not only can this behavior take a long time when an SAPHana resource depends on the
failed resource, but it also can fail altogether. The SAPHana resource cannot stop successfully if the NFS
share holding the HANA binaries is inaccessible.
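
To confirm that the monitor options described above (OCF_CHECK_LEVEL=20 and on-fail=fence) made it into the resource definition, you can display the resource configuration. A minimal sketch; the command name differs between pcs versions:

# RHEL 7.x
pcs resource show fs_hana_shared_s1
# RHEL 8.x
pcs resource config fs_hana_shared_s1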
4. [1] Configure and verify the node attributes. All SAP HANA DB nodes on replication site 1 are assigned
attribute NFS_SID_SITE=S1 , and all SAP HANA DB nodes on replication site 2 are assigned attribute NFS_SID_SITE=S2 .

# HANA replication site 1


pcs node attribute hana-s1-db1 NFS_SID_SITE=S1
pcs node attribute hana-s1-db2 NFS_SID_SITE=S1
pcs node attribute hana-s1-db3 NFS_SID_SITE=S1
# HANA replication site 2
pcs node attribute hana-s2-db1 NFS_SID_SITE=S2
pcs node attribute hana-s2-db2 NFS_SID_SITE=S2
pcs node attribute hana-s2-db3 NFS_SID_SITE=S2
#To verify the attribute assignment to nodes execute
pcs node attribute

5. [1] Configure the constraints that determine where the NFS file systems will be mounted, and enable the
file system resources.

# Configure the constraints
pcs constraint location fs_hana_shared_s1-clone rule resource-discovery=never score=-INFINITY NFS_SID_SITE ne S1
pcs constraint location fs_hana_shared_s2-clone rule resource-discovery=never score=-INFINITY NFS_SID_SITE ne S2
# Enable the file system resources
pcs resource enable fs_hana_shared_s1
pcs resource enable fs_hana_shared_s2

When you enable the file system resources, the cluster will mount the /hana/shared file systems.
6. [AH] Verify that the ANF volumes are mounted under /hana/shared on all HANA DB VMs on both sites.

sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from SITE 1, hana-s1-db1
/hana/shared from 10.23.1.7:/HN1-shared-s1
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cli
entaddr=10.23.0.11,local_lock=none,addr=10.23.1.7
# Example from SITE 2, hana-s2-db1
/hana/shared from 10.23.1.7:/HN1-shared-s2
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cli
entaddr=10.23.0.14,local_lock=none,addr=10.23.1.7

7. [1] Configure the attribute resources. Configure the constraints that will set the attributes to true , if the
NFS mounts for /hana/shared are mounted.

# Configure the attribute resources
pcs resource create hana_nfs_s1_active ocf:pacemaker:attribute active_value=true inactive_value=false name=hana_nfs_s1_active
pcs resource create hana_nfs_s2_active ocf:pacemaker:attribute active_value=true inactive_value=false name=hana_nfs_s2_active
# Clone the attribute resources
pcs resource clone hana_nfs_s1_active meta clone-node-max=1 interleave=true
pcs resource clone hana_nfs_s2_active meta clone-node-max=1 interleave=true
# Configure the constraints, which will set the attribute values
pcs constraint order fs_hana_shared_s1-clone then hana_nfs_s1_active-clone
pcs constraint order fs_hana_shared_s2-clone then hana_nfs_s2_active-clone

TIP
If your configuration includes other NFS-mounted file systems besides /hana/shared , then include the
sequential=false option, so that there are no ordering dependencies among the file systems. All NFS-mounted file
systems must start before the corresponding attribute resource, but they do not need to start in any order relative
to each other (see the sketch after this tip). For more information, see How do I configure SAP HANA Scale-Out HSR in a pacemaker cluster when
the HANA file systems are NFS shares.
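
The following is only a sketch of such an ordering constraint. It assumes a hypothetical additional NFS file system resource fs_hana_other_s1 (cloned as fs_hana_other_s1-clone) and replaces the simple order constraint for site 1 with a resource set in which the two file systems have no ordering among themselves, but both start before the attribute resource:

# Sketch only - fs_hana_other_s1-clone is a hypothetical additional NFS file system resource
pcs constraint order set fs_hana_shared_s1-clone fs_hana_other_s1-clone sequential=false \
  set hana_nfs_s1_active-clone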

8. [1] Place pacemaker in maintenance mode, in preparation for the creation of the HANA cluster resources.

pcs property set maintenance-mode=true

Create SAP HANA cluster resources


1. [A] Install the HANA scale-out resource agent on all cluster nodes, including the majority maker.
yum install -y resource-agents-sap-hana-scaleout

NOTE
Consult Support Policies for RHEL HA clusters - Management of SAP HANA in a cluster for the minimum supported
version of package resource-agents-sap-hana-scaleout for your OS release.
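
To check which package version is currently installed on a node, you can query the package manager, for example:

# Query the installed version of the scale-out resource agent package
rpm -q resource-agents-sap-hana-scaleout
yum info resource-agents-sap-hana-scaleout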

2. [1,2] Install the HANA "system replication hook". The hook needs to be installed on one HANA DB node on
each system replication site. SAP HANA should still be down.
a. Prepare the hook as root

mkdir -p /hana/shared/myHooks
cp /usr/share/SAPHanaSR-ScaleOut/SAPHanaSR.py /hana/shared/myHooks
chown -R hn1adm:sapsys /hana/shared/myHooks

b. Adjust global.ini

# add to global.ini
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1

[trace]
ha_dr_saphanasr = info

3. [AH] The cluster requires sudoers configuration on the cluster nodes for <sid>adm. In this example that is
achieved by creating a new file. Execute the commands as root .

cat << EOF > /etc/sudoers.d/20-saphana


# SAPHanaSR-ScaleOut needs for srHook
Cmnd_Alias SOK = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias SFAIL = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SFAIL -t crm_config -s SAPHanaSR
hn1adm ALL=(ALL) NOPASSWD: SOK, SFAIL
EOF
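
As a quick sanity check, you can validate the new sudoers drop-in file and list the commands allowed for hn1adm. A minimal sketch, executed as root:

# Validate the syntax of the sudoers drop-in file and list hn1adm's allowed commands
visudo -c -f /etc/sudoers.d/20-saphana
sudo -l -U hn1adm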

4. [1,2] Start SAP HANA on both replication sites. Execute as <sid>adm.

sapcontrol -nr 03 -function StartSystem

5. [1] Verify the hook installation. Execute as <sid>adm on the active HANA system replication site.

cdtrace
awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
{ printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*

# Example entries
# 2020-07-21 22:04:32.364379 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:46.905661 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:52.092016 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:52.782774 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:53.117492 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:06:35.599324 ha_dr_SAPHanaSR SOK
6. [1] Create the HANA cluster resources. Execute the following commands as root .
a. Make sure the cluster is already in maintenance mode.
b. Next, create the HANA Topology resource.
If building RHEL 7.x cluster, use the following commands:

pcs resource create SAPHanaTopology_HN1_HDB03 SAPHanaTopologyScaleOut \
  SID=HN1 InstanceNumber=03 \
  op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600

pcs resource clone SAPHanaTopology_HN1_HDB03 meta clone-node-max=1 interleave=true

If building RHEL 8.x cluster, use the following commands:

pcs resource create SAPHanaTopology_HN1_HDB03 SAPHanaTopology \
  SID=HN1 InstanceNumber=03 meta clone-node-max=1 interleave=true \
  op methods interval=0s timeout=5 \
  op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600

pcs resource clone SAPHanaTopology_HN1_HDB03 meta clone-node-max=1 interleave=true

c. Next, create the HANA instance resource.

NOTE
This article contains references to the term slave, a term that Microsoft no longer uses. When the term is
removed from the software, we’ll remove it from this article.

If building RHEL 7.x cluster, use the following commands:

pcs resource create SAPHana_HN1_HDB03 SAPHanaController \
  SID=HN1 InstanceNumber=03 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
  op start interval=0 timeout=3600 op stop interval=0 timeout=3600 op promote interval=0 timeout=3600 \
  op monitor interval=60 role="Master" timeout=700 op monitor interval=61 role="Slave" timeout=700

pcs resource master msl_SAPHana_HN1_HDB03 SAPHana_HN1_HDB03 \
  meta master-max="1" clone-node-max=1 interleave=true

If building RHEL 8.x cluster, use the following commands:

pcs resource create SAPHana_HN1_HDB03 SAPHanaController \
  SID=HN1 InstanceNumber=03 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
  op demote interval=0s timeout=320 op methods interval=0s timeout=5 \
  op start interval=0 timeout=3600 op stop interval=0 timeout=3600 op promote interval=0 timeout=3600 \
  op monitor interval=60 role="Master" timeout=700 op monitor interval=61 role="Slave" timeout=700

pcs resource promotable SAPHana_HN1_HDB03 \
  meta master-max="1" clone-node-max=1 interleave=true
IMPORTANT
As a best practice, we recommend that you set AUTOMATED_REGISTER to false only while performing
thorough failover tests, to prevent a failed primary instance from automatically registering as secondary. Once
the failover tests have completed successfully, set AUTOMATED_REGISTER to true , so that after takeover,
system replication can resume automatically.
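
A sketch of how the parameter could be changed later, once failover testing is complete, using the attribute name from the resource definition above:

# Allow automatic registration of a former primary after successful failover tests
pcs resource update SAPHana_HN1_HDB03 AUTOMATED_REGISTER=true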

d. Create Virtual IP and associated resources.

pcs resource create vip_HN1_03 ocf:heartbeat:IPaddr2 ip=10.23.0.18 op monitor interval="10s" timeout="20s"
sudo pcs resource create nc_HN1_03 azure-lb port=62503
sudo pcs resource group add g_ip_HN1_03 nc_HN1_03 vip_HN1_03

e. Create the cluster constraints


If building RHEL 7.x cluster, use the following commands:

# Start HANA topology before the HANA instance
pcs constraint order SAPHanaTopology_HN1_HDB03-clone then msl_SAPHana_HN1_HDB03

pcs constraint colocation add g_ip_HN1_03 with master msl_SAPHana_HN1_HDB03 4000

# HANA resources are only allowed to run on a node, if the node's NFS file systems are mounted.
# The constraint also avoids the majority maker node.
pcs constraint location SAPHanaTopology_HN1_HDB03-clone rule resource-discovery=never score=-INFINITY hana_nfs_s1_active ne true and hana_nfs_s2_active ne true

If building RHEL 8.x cluster, use the following commands:

# Start HANA topology before the HANA instance
pcs constraint order SAPHanaTopology_HN1_HDB03-clone then SAPHana_HN1_HDB03-clone

pcs constraint colocation add g_ip_HN1_03 with master SAPHana_HN1_HDB03-clone 4000

# HANA resources are only allowed to run on a node, if the node's NFS file systems are mounted.
# The constraint also avoids the majority maker node.
pcs constraint location SAPHanaTopology_HN1_HDB03-clone rule resource-discovery=never score=-INFINITY hana_nfs_s1_active ne true and hana_nfs_s2_active ne true

7. [1] Place the cluster out of maintenance mode. Make sure that the cluster status is ok and that all of the
resources are started.

sudo pcs property set maintenance-mode=false


#If there are failed cluster resources, you may need to run the next command
pcs resource cleanup

NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific HANA setup.
For instance, you may need to increase the start timeout, if it takes longer to start the SAP HANA database.
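
For example, a start timeout could be adjusted on the existing resource. The following is only a sketch; verify the resulting operation definitions afterwards, because pcs resource update replaces the listed operation options:

# Example: increase the start timeout of the HANA instance resource
pcs resource update SAPHana_HN1_HDB03 op start interval=0 timeout=7200
# Review the result (RHEL 7.x: 'pcs resource show', RHEL 8.x: 'pcs resource config')
pcs resource config SAPHana_HN1_HDB03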

Test SAP HANA failover


1. Before you start a test, check the cluster and SAP HANA system replication status.
a. Verify that there are no failed cluster actions
#Verify that there are no failed cluster actions
pcs status
# Example
#Stack: corosync
#Current DC: hana-s-mm (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
#Last updated: Thu Sep 24 06:00:20 2020
#Last change: Thu Sep 24 05:59:17 2020 by root via crm_attribute on hana-s1-db1
#
#7 nodes configured
#45 resources configured
#
#Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
#
#Active resources:
#
#rsc_st_azure (stonith:fence_azure_arm): Started hana-s-mm
#Clone Set: fs_hana_shared_s1-clone [fs_hana_shared_s1]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
#Clone Set: fs_hana_shared_s2-clone [fs_hana_shared_s2]
# Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
#Clone Set: hana_nfs_s1_active-clone [hana_nfs_s1_active]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
#Clone Set: hana_nfs_s2_active-clone [hana_nfs_s2_active]
# Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
#Clone Set: SAPHanaTopology_HN1_HDB03-clone [SAPHanaTopology_HN1_HDB03]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
#Master/Slave Set: msl_SAPHana_HN1_HDB03 [SAPHana_HN1_HDB03]
# Masters: [ hana-s1-db1 ]
# Slaves: [ hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
#Resource Group: g_ip_HN1_03
# nc_HN1_03 (ocf::heartbeat:azure-lb): Started hana-s1-db1
# vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hana-s1-db1

b. Verify that SAP HANA system replication is in sync

# Verify HANA HSR is in sync


sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"
#| Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary |
Secondary| Secondary | Secondary | Secondary | Replication | Replication | Replication |
#| | | | | | | | Host |
Port | Site ID | Site Name | Active Status | Mode | Status | Status Details |
#| -------- | ----------- | ----- | ------------ | --------- | ------- | --------- | ------------- | --
------ | --------- | --------- | ------------- | ----------- | ----------- | -------------- |
#| HN1 | hana-s1-db3 | 30303 | indexserver | 5 | 2 | HANA_S1 | hana-s2-db3 |
30303 | 1 | HANA_S2 | YES | SYNC | ACTIVE | |
#| HN1 | hana-s1-db2 | 30303 | indexserver | 4 | 2 | HANA_S1 | hana-s2-db2 |
30303 | 1 | HANA_S2 | YES | SYNC | ACTIVE | |
#| SYSTEMDB | hana-s1-db1 | 30301 | nameserver | 1 | 2 | HANA_S1 | hana-s2-db1 |
30301 | 1 | HANA_S2 | YES | SYNC | ACTIVE | |
#| HN1 | hana-s1-db1 | 30307 | xsengine | 2 | 2 | HANA_S1 | hana-s2-db1 |
30307 | 1 | HANA_S2 | YES | SYNC | ACTIVE | |
#| HN1 | hana-s1-db1 | 30303 | indexserver | 3 | 2 | HANA_S1 | hana-s2-db1 |
30303 | 1 | HANA_S2 | YES | SYNC | ACTIVE | |

#status system replication site "1": ACTIVE


#overall system replication status: ACTIVE

#Local System Replication State


#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#mode: PRIMARY
#site id: 1
#site name: HANA_S1

2. Verify the cluster configuration for a failure scenario, when a node loses access to the NFS share ( /hana/shared ).
The SAP HANA resource agents depend on binaries stored on /hana/shared to perform operations during
failover. File system /hana/shared is mounted over NFS in the presented configuration. A test that can be
performed is to remount the /hana/shared file system as read-only. This approach validates that the cluster
will fail over, if access to /hana/shared is lost on the active system replication site.
Expected result : When you remount /hana/shared as read-only, the monitor operation that performs a
read/write test on the file system will fail, because it is not able to write to the file system, and will trigger a
HANA resource failover. The same result is expected when your HANA node loses access to the NFS share.
You can check the state of the cluster resources by executing crm_mon or pcs status . Resource state before
starting the test:

# Output of crm_mon
#7 nodes configured
#45 resources configured

#Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]


#
#Active resources:

#rsc_st_azure (stonith:fence_azure_arm): Started hana-s-mm


# Clone Set: fs_hana_shared_s1-clone [fs_hana_shared_s1]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
# Clone Set: fs_hana_shared_s2-clone [fs_hana_shared_s2]
# Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Clone Set: hana_nfs_s1_active-clone [hana_nfs_s1_active]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
# Clone Set: hana_nfs_s2_active-clone [hana_nfs_s2_active]
# Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Clone Set: SAPHanaTopology_HN1_HDB03-clone [SAPHanaTopology_HN1_HDB03]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [SAPHana_HN1_HDB03]
# Masters: [ hana-s1-db1 ]
# Slaves: [ hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Resource Group: g_ip_HN1_03
# nc_HN1_03 (ocf::heartbeat:azure-lb): Started hana-s1-db1
# vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hana-s1-db1

To simulate failure for /hana/shared on one of the primary replication site VMs, execute the following
command:

# Execute as root
mount -o ro /hana/shared
# Or if the above command returns an error
sudo mount -o ro 10.23.1.7:/HN1-shared-s1 /hana/shared

The HANA VM that lost access to /hana/shared should restart or stop, depending on the cluster
configuration. The cluster resources are migrated to the other HANA system replication site.
If the cluster has not started on the VM that was restarted, start the cluster by executing:

# Start the cluster


pcs cluster start

When the cluster starts, file system /hana/shared will be automatically mounted.
If you set AUTOMATED_REGISTER="false", you will need to configure SAP HANA system replication on the
secondary site. In this case, you can execute these commands to reconfigure SAP HANA as secondary.

# Execute on the secondary
su - hn1adm
# Make sure HANA is not running on the secondary site. If it is started, stop HANA
sapcontrol -nr 03 -function StopWait 600 10
# Register the HANA secondary site
hdbnsutil -sr_register --name=HANA_S1 --remoteHost=hana-s2-db1 --remoteInstance=03 --replicationMode=sync
# Switch back to root and clean up failed resources
pcs resource cleanup SAPHana_HN1_HDB03

The state of the resources, after the test:

# Output of crm_mon
#7 nodes configured
#45 resources configured

#Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]

#Active resources:

#rsc_st_azure (stonith:fence_azure_arm): Started hana-s-mm


# Clone Set: fs_hana_shared_s1-clone [fs_hana_shared_s1]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
# Clone Set: fs_hana_shared_s2-clone [fs_hana_shared_s2]
# Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Clone Set: hana_nfs_s1_active-clone [hana_nfs_s1_active]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
# Clone Set: hana_nfs_s2_active-clone [hana_nfs_s2_active]
# Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Clone Set: SAPHanaTopology_HN1_HDB03-clone [SAPHanaTopology_HN1_HDB03]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [SAPHana_HN1_HDB03]
# Masters: [ hana-s2-db1 ]
# Slaves: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db2 hana-s2-db3 ]
# Resource Group: g_ip_HN1_03
# nc_HN1_03 (ocf::heartbeat:azure-lb): Started hana-s2-db1
# vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hana-s2-db1

We recommend that you thoroughly test the SAP HANA cluster configuration by also performing the tests documented
in HA for SAP HANA on Azure VMs on RHEL.

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs).
Deploy a SAP HANA scale-out system with standby
node on Azure VMs by using Azure NetApp Files on
SUSE Linux Enterprise Server

This article describes how to deploy a highly available SAP HANA system in a scale-out configuration with
standby on Azure virtual machines (VMs) by using Azure NetApp Files for the shared storage volumes.
In the example configurations, installation commands, and so on, the HANA instance is 03 and the HANA system
ID is HN1 . The examples are based on HANA 2.0 SP4 and SUSE Linux Enterprise Server for SAP 12 SP4.
Before you begin, refer to the following SAP notes and papers:
Azure NetApp Files documentation
SAP Note 1928533 includes:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
The required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553: Lists prerequisites for SAP-supported SAP software deployments in Azure
SAP Note 2205917: Contains recommended OS settings for SUSE Linux Enterprise Server for SAP
Applications
SAP Note 1944799: Contains SAP Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632: Contains detailed information about all monitoring metrics reported for SAP in Azure
SAP Note 2191498: Contains the required SAP Host Agent version for Linux in Azure
SAP Note 2243692: Contains information about SAP licensing on Linux in Azure
SAP Note 1984787: Contains general information about SUSE Linux Enterprise Server 12
SAP Note 1999351: Contains additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP
SAP Note 1900823: Contains information about SAP HANA storage requirements
SAP Community Wiki: Contains all required SAP notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides: Contains all required information to set up NetWeaver High Availability
and SAP HANA System Replication on-premises (to be used as a general baseline; they provide much more
detailed information)
SUSE High Availability Extension 12 SP3 Release Notes
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
One method for achieving HANA high availability is by configuring host auto failover. To configure host auto
failover, you add one or more virtual machines to the HANA system and configure them as standby nodes. When
an active node fails, a standby node automatically takes over. In the presented configuration with Azure virtual
machines, you achieve auto failover by using NFS on Azure NetApp Files.
NOTE
The standby node needs access to all database volumes. The HANA volumes must be mounted as NFSv4 volumes. The
improved file lease-based locking mechanism in the NFSv4 protocol is used for I/O fencing.

IMPORTANT
To build the supported configuration, you must deploy the HANA data and log volumes as NFSv4.1 volumes and mount
them by using the NFSv4.1 protocol. The HANA host auto-failover configuration with standby node is not supported with
NFSv3.

In the preceding diagram, which follows SAP HANA network recommendations, three subnets are represented
within one Azure virtual network:
For client communication
For communication with the storage system
For internal HANA inter-node communication
The Azure NetApp Files volumes are in a separate subnet, delegated to Azure NetApp Files.
For this example configuration, the subnets are:
client 10.23.0.0/24
storage 10.23.2.0/24
hana 10.23.3.0/24
anf 10.23.1.0/26
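
If you prefer to create these subnets with the Azure CLI instead of the portal, the following is only a sketch; the resource group name hana-rg and virtual network name hana-vnet are assumptions, and the virtual network is expected to exist already:

# Sketch - resource group and virtual network names are assumptions
az network vnet subnet create -g hana-rg --vnet-name hana-vnet -n client --address-prefixes 10.23.0.0/24
az network vnet subnet create -g hana-rg --vnet-name hana-vnet -n storage --address-prefixes 10.23.2.0/24
az network vnet subnet create -g hana-rg --vnet-name hana-vnet -n hana --address-prefixes 10.23.3.0/24
# The anf subnet must be delegated to Azure NetApp Files
az network vnet subnet create -g hana-rg --vnet-name hana-vnet -n anf --address-prefixes 10.23.1.0/26 --delegations Microsoft.NetApp/volumes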

Set up the Azure NetApp Files infrastructure


Before you proceed with the setup for Azure NetApp Files infrastructure, familiarize yourself with the Azure
NetApp Files documentation.
Azure NetApp Files is available in several Azure regions. Check to see whether your selected Azure region offers
Azure NetApp Files.
For information about the availability of Azure NetApp Files by Azure region, see Azure NetApp Files Availability
by Azure Region.
Before you deploy Azure NetApp Files, request onboarding to Azure NetApp Files by going to Register for Azure
NetApp Files instructions.
Deploy Azure NetApp Files resources
The following instructions assume that you've already deployed your Azure virtual network. The Azure NetApp
Files resources and VMs, where the Azure NetApp Files resources will be mounted, must be deployed in the same
Azure virtual network or in peered Azure virtual networks.
1. If you haven't already deployed the resources, request onboarding to Azure NetApp Files.
2. Create a NetApp account in your selected Azure region by following the instructions in Create a NetApp
account.
3. Set up an Azure NetApp Files capacity pool by following the instructions in Set up an Azure NetApp Files
capacity pool.
The HANA architecture presented in this article uses a single Azure NetApp Files capacity pool at the Ultra
Service level. For HANA workloads on Azure, we recommend using an Azure NetApp Files Ultra or
Premium service Level.
4. Delegate a subnet to Azure NetApp Files, as described in the instructions in Delegate a subnet to Azure
NetApp Files.
5. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS volume for Azure
NetApp Files.
As you're deploying the volumes, be sure to select the NFSv4.1 version. Currently, access to NFSv4.1
requires being added to an allow list. Deploy the volumes in the designated Azure NetApp Files subnet.
The IP addresses of the Azure NetApp volumes are assigned automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual
network or in peered Azure virtual networks. For example, HN1-data-mnt00001, HN1-log-mnt00001, and
so on, are the volume names and nfs://10.23.1.5/HN1-data-mnt00001, nfs://10.23.1.4/HN1-log-mnt00001,
and so on, are the file paths for the Azure NetApp Files volumes.
volume HN1-data-mnt00001 (nfs://10.23.1.5/HN1-data-mnt00001)
volume HN1-data-mnt00002 (nfs://10.23.1.6/HN1-data-mnt00002)
volume HN1-log-mnt00001 (nfs://10.23.1.4/HN1-log-mnt00001)
volume HN1-log-mnt00002 (nfs://10.23.1.6/HN1-log-mnt00002)
volume HN1-shared (nfs://10.23.1.4/HN1-shared)
In this example, we used a separate Azure NetApp Files volume for each HANA data and log volume. For a
more cost-optimized configuration on smaller or non-productive systems, it's possible to place all data
mounts and all logs mounts on a single volume.
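
As an alternative to the portal, a volume of this layout could also be created with the Azure CLI. The following is only a sketch; the resource group, NetApp account, capacity pool, and virtual network names are assumptions, and --usage-threshold is specified in GiB (about 3.2 TiB here):

# Sketch - resource group, account, pool, and vnet names are assumptions
az netappfiles volume create -g hana-rg --account-name hana-anf --pool-name hana-pool \
  --name HN1-data-mnt00001 --file-path HN1-data-mnt00001 \
  --service-level Ultra --usage-threshold 3300 \
  --vnet hana-vnet --subnet anf --protocol-types NFSv4.1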
Important considerations
As you're creating your Azure NetApp Files for SAP NetWeaver on SUSE High Availability architecture, be aware
of the following important considerations:
The minimum capacity pool is 4 tebibytes (TiB).
The minimum volume size is 100 gibibytes (GiB).
Azure NetApp Files and all virtual machines where the Azure NetApp Files volumes will be mounted must be
in the same Azure virtual network or in peered virtual networks in the same region.
The selected virtual network must have a subnet that's delegated to Azure NetApp Files.
The throughput of an Azure NetApp Files volume is a function of the volume quota and service level, as
documented in Service level for Azure NetApp Files. When you're sizing the HANA Azure NetApp volumes,
make sure that the resulting throughput meets the HANA system requirements.
With the Azure NetApp Files export policy, you can control the allowed clients, the access type (read-write,
read only, and so on).
The Azure NetApp Files feature isn't zone-aware yet. Currently, the feature isn't deployed in all availability
zones in an Azure region. Be aware of the potential latency implications in some Azure regions.

IMPORTANT
For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual
machines and the Azure NetApp Files volumes are deployed in close proximity.

Sizing for HANA database on Azure NetApp Files


The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as
documented in Service level for Azure NetApp Files.
As you design the infrastructure for SAP in Azure, be aware of some minimum storage requirements by SAP,
which translate into minimum throughput characteristics:
Enable read-write on /hana/log of 250 megabytes per second (MB/s) with 1-MB I/O sizes.
Enable read activity of at least 400 MB/s for /hana/data for 16-MB and 64-MB I/O sizes.
Enable write activity of at least 250 MB/s for /hana/data with 16-MB and 64-MB I/O sizes.
The Azure NetApp Files throughput limits per 1 TiB of volume quota are:
Premium Storage tier - 64 MiB/s
Ultra Storage tier - 128 MiB/s
To meet the SAP minimum throughput requirements for data and log, and the guidelines for /hana/shared, the
recommended sizes would be:

Volume        | Size, Premium Storage tier              | Size, Ultra Storage tier                | Supported NFS protocol
------------- | --------------------------------------- | --------------------------------------- | ----------------------
/hana/log     | 4 TiB                                   | 2 TiB                                   | v4.1
/hana/data    | 6.3 TiB                                 | 3.2 TiB                                 | v4.1
/hana/shared  | Max (512 GB, 1xRAM) per 4 worker nodes  | Max (512 GB, 1xRAM) per 4 worker nodes  | v3 or v4.1

The SAP HANA configuration for the layout that's presented in this article, using Azure NetApp Files Ultra Storage
tier, would be:

Volume               | Size, Ultra Storage tier | Supported NFS protocol
-------------------- | ------------------------ | ----------------------
/hana/log/mnt00001   | 2 TiB                    | v4.1
/hana/log/mnt00002   | 2 TiB                    | v4.1
/hana/data/mnt00001  | 3.2 TiB                  | v4.1
/hana/data/mnt00002  | 3.2 TiB                  | v4.1
/hana/shared         | 2 TiB                    | v3 or v4.1

NOTE
The Azure NetApp Files sizing recommendations stated here are targeted to meet the minimum requirements that SAP
recommends for their infrastructure providers. In real customer deployments and workload scenarios, these sizes may not
be sufficient. Use these recommendations as a starting point and adapt, based on the requirements of your specific
workload.

TIP
You can resize Azure NetApp Files volumes dynamically, without having to unmount the volumes, stop the virtual
machines, or stop SAP HANA. This approach allows flexibility to meet both the expected and unforeseen throughput
demands of your application.
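
For example, a volume quota could be increased online with the Azure CLI, as a sketch (the resource group, account, and pool names are assumptions; --usage-threshold is in GiB):

# Sketch - grow an existing volume to 4 TiB (4096 GiB) without unmounting it
az netappfiles volume update -g hana-rg --account-name hana-anf --pool-name hana-pool \
  --name HN1-data-mnt00001 --usage-threshold 4096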

Deploy Linux virtual machines via the Azure portal


First you need to create the Azure NetApp Files volumes. Then do the following steps:
1. Create the Azure virtual network subnets in your Azure virtual network.
2. Deploy the VMs.
3. Create the additional network interfaces, and attach the network interfaces to the corresponding VMs.
Each virtual machine has three network interfaces, which correspond to the three Azure virtual network
subnets ( client , storage and hana ).
For more information, see Create a Linux virtual machine in Azure with multiple network interface cards.
IMPORTANT
For SAP HANA workloads, low latency is critical. To achieve low latency, work with your Microsoft representative to ensure
that the virtual machines and the Azure NetApp Files volumes are deployed in close proximity. When you're onboarding a
new SAP HANA system that's using SAP HANA Azure NetApp Files, submit the necessary information.

The next instructions assume that you've already created the resource group, the Azure virtual network, and the
three Azure virtual network subnets: client , storage and hana . When you deploy the VMs, select the client
subnet, so that the client network interface is the primary interface on the VMs. You will also need to configure an
explicit route to the Azure NetApp Files delegated subnet via the storage subnet gateway.

IMPORTANT
Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP
HANA certified VM types and OS releases for those types, go to the SAP HANA certified IaaS platforms site. Click into the
details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.

1. Create an availability set for SAP HANA. Make sure to set the max update domain.
2. Create three virtual machines (hanadb1 , hanadb2 , hanadb3 ) by doing the following steps:
a. Use a SLES4SAP image in the Azure gallery that's supported for SAP HANA. We used a SLES4SAP 12
SP4 image in this example.
b. Select the availability set that you created earlier for SAP HANA.
c. Select the client Azure virtual network subnet. Select Accelerated Network.
When you deploy the virtual machines, the network interface name is automatically generated. In these
instructions for simplicity we'll refer to the automatically generated network interfaces, which are attached
to the client Azure virtual network subnet, as hanadb1-client , hanadb2-client , and hanadb3-client .
3. Create three network interfaces, one for each virtual machine, for the storage virtual network subnet (in
this example, hanadb1-storage , hanadb2-storage , and hanadb3-storage ).
4. Create three network interfaces, one for each virtual machine, for the hana virtual network subnet (in this
example, hanadb1-hana , hanadb2-hana , and hanadb3-hana ).
5. Attach the newly created virtual network interfaces to the corresponding virtual machines by doing the
following steps:
a. Go to the virtual machine in the Azure portal.
b. In the left pane, select Vir tual Machines . Filter on the virtual machine name (for example, hanadb1 ),
and then select the virtual machine.
c. In the Over view pane, select Stop to deallocate the virtual machine.
d. Select Networking , and then attach the network interface. In the Attach network interface drop-
down list, select the already created network interfaces for the storage and hana subnets.
e. Select Save .
f. Repeat steps b through e for the remaining virtual machines (in our example, hanadb2 and hanadb3 ).
g. Leave the virtual machines in stopped state for now. Next, we'll enable accelerated networking for all
newly attached network interfaces.
6. Enable accelerated networking for the additional network interfaces for the storage and hana subnets by
doing the following steps:
a. Open Azure Cloud Shell in the Azure portal.
b. Execute the following commands to enable accelerated networking for the additional network interfaces,
which are attached to the storage and hana subnets.

az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-storage --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb2-storage --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb3-storage --accelerated-networking true

az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-hana --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb2-hana --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb3-hana --accelerated-networking true

7. Start the virtual machines by doing the following steps:


a. In the left pane, select Vir tual Machines . Filter on the virtual machine name (for example, hanadb1 ),
and then select it.
b. In the Over view pane, select Star t .

Operating system configuration and preparation


The instructions in the next sections are prefixed with one of the following:
[A] : Applicable to all nodes
[1] : Applicable only to node 1
[2] : Applicable only to node 2
[3] : Applicable only to node 3
Configure and prepare your OS by doing the following steps:
1. [A] Maintain the hosts files on the virtual machines. Include entries for all subnets. The following entries
were added to /etc/hosts for this example.

# Storage
10.23.2.4 hanadb1-storage
10.23.2.5 hanadb2-storage
10.23.2.6 hanadb3-storage
# Client
10.23.0.5 hanadb1
10.23.0.6 hanadb2
10.23.0.7 hanadb3
# Hana
10.23.3.4 hanadb1-hana
10.23.3.5 hanadb2-hana
10.23.3.6 hanadb3-hana
2. [A] Change DHCP and cloud config settings for the network interface for storage to avoid unintended
hostname changes.
The following instructions assume that the storage network interface is eth1 .

vi /etc/sysconfig/network/dhcp
# Change the following DHCP setting to "no"
DHCLIENT_SET_HOSTNAME="no"
vi /etc/sysconfig/network/ifcfg-eth1
# Edit ifcfg-eth1
#Change CLOUD_NETCONFIG_MANAGE='yes' to "no"
CLOUD_NETCONFIG_MANAGE='no'

3. [A] Add a network route, so that the communication to the Azure NetApp Files goes via the storage
network interface.
The following instructions assume that the storage network interface is eth1 .

vi /etc/sysconfig/network/ifroute-eth1
# Add the following routes
# RouterIPforStorageNetwork - - -
# ANFNetwork/cidr RouterIPforStorageNetwork - -
10.23.2.1 - - -
10.23.1.0/26 10.23.2.1 - -

Reboot the VM to activate the changes.


4. [A] Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in NetApp SAP
Applications on Microsoft Azure using Azure NetApp Files. Create configuration file /etc/sysctl.d/netapp-
hana.conf for the NetApp configuration settings.

vi /etc/sysctl.d/netapp-hana.conf
# Add the following entries in the configuration file
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 65536 16777216 16777216
net.ipv4.tcp_wmem = 65536 16777216 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1

5. [A] Create configuration file /etc/sysctl.d/ms-az.conf with Microsoft for Azure configuration settings.
vi /etc/sysctl.d/ms-az.conf
# Add the following entries in the configuration file
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.ip_local_port_range = 40000 65300
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10

6. [A] Adjust the sunrpc settings, as recommended in the NetApp SAP Applications on Microsoft Azure using
Azure NetApp Files.

vi /etc/modprobe.d/sunrpc.conf
# Insert the following line
options sunrpc tcp_max_slot_table_entries=128

Mount the Azure NetApp Files volumes


1. [A] Create mount points for the HANA database volumes.

mkdir -p /hana/data/HN1/mnt00001
mkdir -p /hana/data/HN1/mnt00002
mkdir -p /hana/log/HN1/mnt00001
mkdir -p /hana/log/HN1/mnt00002
mkdir -p /hana/shared
mkdir -p /usr/sap/HN1

2. [1] Create node-specific directories for /usr/sap on HN1-shared.

# Create a temporary directory to mount HN1-shared


mkdir /mnt/tmp
# if using NFSv3 for this volume, mount with the following command
mount 10.23.1.4:/HN1-shared /mnt/tmp
# if using NFSv4.1 for this volume, mount with the following command
mount -t nfs -o sec=sys,vers=4.1 10.23.1.4:/HN1-shared /mnt/tmp
cd /mnt/tmp
mkdir shared usr-sap-hanadb1 usr-sap-hanadb2 usr-sap-hanadb3
# unmount /hana/shared
cd
umount /mnt/tmp

3. [A] Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp
Files domain, i.e. defaultv4iddomain.com , and that the mapping is set to nobody .
IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure
NetApp volumes that are mounted on the VMs will be displayed as nobody .

sudo cat /etc/idmapd.conf


# Example
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

4. [A] Verify nfs4_disable_idmapping. It should be set to Y . To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create the
directory under /sys/modules, because access is reserved for the kernel / drivers.

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.23.1.4:/HN1-shared /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

5. [A] Create the SAP HANA group and user manually. The IDs for group sapsys and user hn1adm must be
set to the same IDs, which are provided during the onboarding. (In this example, the IDs are set to 1001 .) If
the IDs aren't set correctly, you won't be able to access the volumes. The IDs for group sapsys and user
accounts hn1adm and sapadm must be the same on all virtual machines.

# Create user group


sudo groupadd -g 1001 sapsys
# Create users
sudo useradd hn1adm -u 1001 -g 1001 -d /usr/sap/HN1/home -c "SAP HANA Database System" -s /bin/sh
sudo useradd sapadm -u 1002 -g 1001 -d /home/sapadm -c "SAP Local Administrator" -s /bin/sh
# Set the password for both user ids
sudo passwd hn1adm
sudo passwd sapadm

6. [A] Mount the shared Azure NetApp Files volumes.


sudo vi /etc/fstab
# Add the following entries
10.23.1.5:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0
0
10.23.1.6:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0
0
10.23.1.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0
0
10.23.1.6:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0
0
10.23.1.4:/HN1-shared/shared /hana/shared nfs
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0
0
# Mount all volumes
sudo mount -a

7. [1] Mount the node-specific volumes on hanadb1 .

sudo vi /etc/fstab
# Add the following entries
10.23.1.4:/HN1-shared/usr-sap-hanadb1 /usr/sap/HN1 nfs
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0
0
# Mount the volume
sudo mount -a

8. [2] Mount the node-specific volumes on hanadb2 .

sudo vi /etc/fstab
# Add the following entries
10.23.1.4:/HN1-shared/usr-sap-hanadb2 /usr/sap/HN1 nfs
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0
0
# Mount the volume
sudo mount -a

9. [3] Mount the node-specific volumes on hanadb3 .

sudo vi /etc/fstab
# Add the following entries
10.23.1.4:/HN1-shared/usr-sap-hanadb3 /usr/sap/HN1 nfs
rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0
0
# Mount the volume
sudo mount -a

10. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.1 .
sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from hanadb1
/hana/data/HN1/mnt00001 from 10.23.1.5:/HN1-data-mnt00001
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cl
ientaddr=10.23.2.4,local_lock=none,addr=10.23.1.5
/hana/log/HN1/mnt00002 from 10.23.1.6:/HN1-log-mnt00002
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cl
ientaddr=10.23.2.4,local_lock=none,addr=10.23.1.6
/hana/data/HN1/mnt00002 from 10.23.1.6:/HN1-data-mnt00002
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cl
ientaddr=10.23.2.4,local_lock=none,addr=10.23.1.6
/hana/log/HN1/mnt00001 from 10.23.1.4:/HN1-log-mnt00001
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cl
ientaddr=10.23.2.4,local_lock=none,addr=10.23.1.4
/usr/sap/HN1 from 10.23.1.4:/HN1-shared/usr-sap-hanadb1
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cl
ientaddr=10.23.2.4,local_lock=none,addr=10.23.1.4
/hana/shared from 10.23.1.4:/HN1-shared/shared
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,cl
ientaddr=10.23.2.4,local_lock=none,addr=10.23.1.4

Installation
In this example for deploying SAP HANA in scale-out configuration with standby node with Azure, we've used
HANA 2.0 SP4.
Prepare for HANA installation
1. [A] Before the HANA installation, set the root password. You can disable the root password after the
installation has been completed. Execute command passwd as root .
2. [1] Verify that you can log in via SSH to hanadb2 and hanadb3 , without being prompted for a password.

ssh root@hanadb2
ssh root@hanadb3

3. [A] Install additional packages, which are required for HANA 2.0 SP4. For more information, see SAP Note
2593824.

sudo zypper install libgcc_s1 libstdc++6 libatomic1

4. [2], [3] Change ownership of SAP HANA data and log directories to hn1adm.
# Execute as root
sudo chown hn1adm:sapsys /hana/data/HN1
sudo chown hn1adm:sapsys /hana/log/HN1

HANA installation
1. [1] Install SAP HANA by following the instructions in the SAP HANA 2.0 Installation and Update guide. In
this example, we install SAP HANA scale-out with master, one worker, and one standby node.
a. Start the hdblcm program from the HANA installation software directory. Use the internal_network
parameter and pass the address space for subnet, which is used for the internal HANA inter-node
communication.

./hdblcm --internal_network=10.23.3.0/24

b. At the prompt, enter the following values:


For Choose an action : enter 1 (for install)
For Additional components for installation : enter 2, 3
For installation path: press Enter (defaults to /hana/shared)
For Local Host Name : press Enter to accept the default
Under Do you want to add hosts to the system? : enter y
For comma-separated host names to add : enter hanadb2, hanadb3
For Root User Name [root]: press Enter to accept the default
For Root User Password : enter the root user's password
For roles for host hanadb2: enter 1 (for worker)
For Host Failover Group for host hanadb2 [default]: press Enter to accept the default
For Storage Par tition Number for host hanadb2 [<>]: press Enter to accept the default
For Worker Group for host hanadb2 [default]: press Enter to accept the default
For Select roles for host hanadb3: enter 2 (for standby)
For Host Failover Group for host hanadb3 [default]: press Enter to accept the default
For Worker Group for host hanadb3 [default]: press Enter to accept the default
For SAP HANA System ID : enter HN1
For Instance number [00]: enter 03
For Local Host Worker Group [default]: press Enter to accept the default
For Select System Usage / Enter index [4] : enter 4 (for custom)
For Location of Data Volumes [/hana/data/HN1]: press Enter to accept the default
For Location of Log Volumes [/hana/log/HN1]: press Enter to accept the default
For Restrict maximum memor y allocation? [n]: enter n
For Cer tificate Host Name For Host hanadb1 [hanadb1]: press Enter to accept the default
For Cer tificate Host Name For Host hanadb2 [hanadb2]: press Enter to accept the default
For Cer tificate Host Name For Host hanadb3 [hanadb3]: press Enter to accept the default
For System Administrator (hn1adm) Password : enter the password
For System Database User (system) Password : enter the system's password
For Confirm System Database User (system) Password : enter system's password
For Restar t system after machine reboot? [n]: enter n
For Do you want to continue (y/n) : validate the summary and if everything looks good, enter y
2. [1] Verify global.ini
Display global.ini, and ensure that the configuration for the internal SAP HANA inter-node communication
is in place. Verify the communication section. It should have the address space for the hana subnet, and
listeninterface should be set to .internal . Verify the internal_hostname_resolution section. It
should have the IP addresses for the HANA virtual machines that belong to the hana subnet.

sudo cat /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini


# Example
#global.ini last modified 2019-09-10 00:12:45.192808 by hdbnameserve
[communication]
internal_network = 10.23.3/24
listeninterface = .internal
[internal_hostname_resolution]
10.23.3.4 = hanadb1
10.23.3.5 = hanadb2
10.23.3.6 = hanadb3

3. [1] Add host mapping to ensure that the client IP addresses are used for client communication. Add
section public_hostname_resolution , and add the corresponding IP addresses from the client subnet.

sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
#Add the section
[public_hostname_resolution]
map_hanadb1 = 10.23.0.5
map_hanadb2 = 10.23.0.6
map_hanadb3 = 10.23.0.7

4. [1] Restart SAP HANA to activate the changes.

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB


sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB

5. [1] Verify that the client interface will be using the IP addresses from the client subnet for
communication.

sudo -u hn1adm /usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB 'select * from


SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result
"hanadb3","net_publicname","10.23.0.7"
"hanadb2","net_publicname","10.23.0.6"
"hanadb1","net_publicname","10.23.0.5"

For information about how to verify the configuration, see SAP Note 2183363 - Configuration of SAP
HANA internal network.
6. To optimize SAP HANA for the underlying Azure NetApp Files storage, set the following SAP HANA
parameters (a combined global.ini sketch follows step 7 below):
max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all
For more information, see NetApp SAP Applications on Microsoft Azure using Azure NetApp Files.
Starting with SAP HANA 2.0 systems, you can set the parameters in global.ini . For more information, see
SAP Note 1999930.
For SAP HANA 1.0 systems versions SPS12 and earlier, these parameters can be set during the installation,
as described in SAP Note 2267798.
7. The storage that's used by Azure NetApp Files has a file size limitation of 16 terabytes (TB). SAP HANA is
not implicitly aware of the storage limitation, and it won't automatically create a new data file when the file
size limit of 16 TB is reached. As SAP HANA attempts to grow the file beyond 16 TB, that attempt will result
in errors and, eventually, in an index server crash.

IMPORTANT
To prevent SAP HANA from trying to grow data files beyond the 16-TB limit of the storage subsystem, set the
following parameters in global.ini (see the sketch after this note):
datavolume_striping = true
datavolume_striping_size_gb = 15000
For more information, see SAP Note 2400005. Be aware of SAP Note 2631285.
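
A combined sketch of the global.ini entries from steps 6 and 7. The section names [fileio] and [persistence] are assumptions based on the referenced NetApp guidance and SAP Notes; verify them against the notes for your HANA revision:

# global.ini sketch - section names are assumptions, verify against the referenced SAP Notes
[fileio]
max_parallel_io_requests = 128
async_read_submit = on
async_write_submit_active = on
async_write_submit_blocks = all

[persistence]
datavolume_striping = true
datavolume_striping_size_gb = 15000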

Test SAP HANA failover


NOTE
This article contains references to the terms master and slave, terms that Microsoft no longer uses. When these terms are
removed from the software, we’ll remove them from this article.

1. Simulate a node crash on an SAP HANA worker node. Do the following:


a. Before you simulate the node crash, run the following commands as hn1adm to capture the status of
the environment:
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover |
NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual |
Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role
| Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | -----
----- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default |
master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 | 2 | default | default |
master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | yes | ignore | | | 0 | 0 | default | default |
master 3 | slave | standby | standby | standby | standby | default | - |
# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN

b. To simulate a node crash, run the following command as root on the worker node, which is hanadb2 in
this case:

echo b > /proc/sysrq-trigger

c. Monitor the system for failover completion. When the failover has been completed, capture the status,
which should look like the following:

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY
# Check the landscape status
/usr/sap/HN1/HDB03/exe/python_support> python landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover |
NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual |
Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role
| Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | -----
----- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default |
master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | no | info | | | 2 | 0 | default | default |
master 2 | slave | worker | standby | worker | standby | default | - |
| hanadb3 | yes | info | | | 0 | 2 | default | default |
master 3 | slave | standby | slave | standby | worker | default | default |
IMPORTANT
When a node experiences kernel panic, avoid delays with SAP HANA failover by setting kernel.panic to 20
seconds on all HANA virtual machines. The configuration is done in /etc/sysctl.conf . Reboot the virtual machines to
activate the change. If this change isn't performed, failover can take 10 or more minutes when a node is
experiencing kernel panic. A sketch of the setting follows this note.
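
A minimal sketch of persisting the setting described in the note above; the drop-in file name is an assumption, and a reboot is still recommended as stated:

# Execute as root - persist kernel.panic = 20 (drop-in file name is an assumption)
echo "kernel.panic = 20" > /etc/sysctl.d/99-kernel-panic.conf
# Apply immediately; reboot afterwards as recommended above
sysctl --system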

2. Kill the name server by doing the following:


a. Prior to the test, check the status of the environment by running the following commands as hn1adm:

#Landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - |
# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY

b. Run the following commands as hn1adm on the active master node, which is hanadb1 in this case:

hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB kill

The standby node hanadb3 will take over as master node. Here is the resource state after the failover test
is completed:
# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | no | info | | | 1 | 0 | default | default | master 1 | slave | worker | standby | worker | standby | default | - |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | yes | info | | | 0 | 1 | default | default | master 3 | master | standby | master | standby | worker | default | default |

c. Restart the HANA instance on hanadb1 (that is, on the same virtual machine, where the name server
was killed). The hanadb1 node will rejoin the environment and will keep its standby role.

hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB start

After SAP HANA has started on hanadb1 , expect the following status:

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | info | | | 1 | 0 | default | default | master 1 | slave | worker | standby | worker | standby | default | - |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | yes | info | | | 0 | 1 | default | default | master 3 | master | standby | master | standby | worker | default | default |

d. Again, kill the name server on the currently active master node (that is, on node hanadb3 ).
hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB kill

Node hanadb1 will resume the role of master node. After the failover test has been completed, the status
will look like this:

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList & python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - |

e. Start SAP HANA on hanadb3 , which will be ready to serve as a standby node.

hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB start

After SAP HANA has started on hanadb3 , the status looks like the following:
# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList & python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - |

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs).
Deploy a SAP HANA scale-out system with standby
node on Azure VMs by using Azure NetApp Files on
Red Hat Enterprise Linux

This article describes how to deploy a highly available SAP HANA system in a scale-out configuration with
standby on Azure Red Hat Enterprise Linux virtual machines (VMs), by using Azure NetApp Files for the shared
storage volumes.
In the example configurations, installation commands, and so on, the HANA instance is 03 and the HANA system
ID is HN1 . The examples are based on HANA 2.0 SP4 and Red Hat Enterprise Linux for SAP 7.6.

NOTE
This article contains references to the terms master and slave, terms that Microsoft no longer uses. When these terms are
removed from the software, we’ll remove them from this article.

Before you begin, refer to the following SAP notes and papers:
Azure NetApp Files documentation
SAP Note 1928533 includes:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
The required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553: Lists prerequisites for SAP-supported SAP software deployments in Azure
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632: Contains detailed information about all monitoring metrics reported for SAP in Azure
SAP Note 2191498: Contains the required SAP Host Agent version for Linux in Azure
SAP Note 2243692: Contains information about SAP licensing on Linux in Azure
SAP Note 1999351: Contains additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP
SAP Note 1900823: Contains information about SAP HANA storage requirements
SAP Community Wiki: Contains all required SAP notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Red Hat Enterprise Linux Networking Guide
Azure-specific RHEL documentation:
Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
One method for achieving HANA high availability is by configuring host auto failover. To configure host auto
failover, you add one or more virtual machines to the HANA system and configure them as standby nodes. When
an active node fails, a standby node automatically takes over. In the presented configuration with Azure virtual
machines, you achieve auto failover by using NFS on Azure NetApp Files.

NOTE
The standby node needs access to all database volumes. The HANA volumes must be mounted as NFSv4 volumes. The
improved file lease-based locking mechanism in the NFSv4 protocol is used for I/O fencing.

IMPORTANT
To build the supported configuration, you must deploy the HANA data and log volumes as NFSv4.1 volumes and mount
them by using the NFSv4.1 protocol. The HANA host auto-failover configuration with standby node is not supported with
NFSv3.
In the preceding diagram, which follows SAP HANA network recommendations, three subnets are represented
within one Azure virtual network:
For client communication
For communication with the storage system
For internal HANA inter-node communication
The Azure NetApp Files volumes are in a separate subnet, delegated to Azure NetApp Files.
For this example configuration, the subnets are:
client 10.9.1.0/26
storage 10.9.3.0/26
hana 10.9.2.0/26
anf 10.9.0.0/26 (delegated subnet to Azure NetApp Files)
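If you're creating these subnets with the Azure CLI instead of the portal, a minimal sketch could look like the following; the resource group and virtual network names (my-rg, my-vnet) are assumptions, and the flags should be verified against az network vnet subnet create --help:

az network vnet subnet create --resource-group my-rg --vnet-name my-vnet --name client --address-prefixes 10.9.1.0/26
az network vnet subnet create --resource-group my-rg --vnet-name my-vnet --name hana --address-prefixes 10.9.2.0/26
az network vnet subnet create --resource-group my-rg --vnet-name my-vnet --name storage --address-prefixes 10.9.3.0/26
# The anf subnet must be delegated to Microsoft.NetApp/volumes
az network vnet subnet create --resource-group my-rg --vnet-name my-vnet --name anf --address-prefixes 10.9.0.0/26 --delegations Microsoft.NetApp/volumes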

Set up the Azure NetApp Files infrastructure


Before you proceed with the setup for Azure NetApp Files infrastructure, familiarize yourself with the Azure
NetApp Files documentation.
Azure NetApp Files is available in several Azure regions. Check to see whether your selected Azure region offers
Azure NetApp Files.
For information about the availability of Azure NetApp Files by Azure region, see Azure NetApp Files Availability
by Azure Region.
Before you deploy Azure NetApp Files, request onboarding to Azure NetApp Files by going to Register for Azure
NetApp Files instructions.
Deploy Azure NetApp Files resources
The following instructions assume that you've already deployed your Azure virtual network. The Azure NetApp
Files resources and VMs, where the Azure NetApp Files resources will be mounted, must be deployed in the same
Azure virtual network or in peered Azure virtual networks.
1. If you haven't already deployed the resources, request onboarding to Azure NetApp Files.
2. Create a NetApp account in your selected Azure region by following the instructions in Create a NetApp
account.
3. Set up an Azure NetApp Files capacity pool by following the instructions in Set up an Azure NetApp Files
capacity pool.
The HANA architecture presented in this article uses a single Azure NetApp Files capacity pool at the Ultra
Service level. For HANA workloads on Azure, we recommend using an Azure NetApp Files Ultra or
Premium service Level.
4. Delegate a subnet to Azure NetApp Files, as described in the instructions in Delegate a subnet to Azure
NetApp Files.
5. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS volume for Azure
NetApp Files.
As you're deploying the volumes, be sure to select the NFSv4.1 version. Deploy the volumes in the
designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp volumes are assigned
automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual
network or in peered Azure virtual networks. For example, HN1-data-mnt00001, HN1-log-mnt00001, and
so on, are the volume names and nfs://10.9.0.4/HN1-data-mnt00001, nfs://10.9.0.4/HN1-log-mnt00001,
and so on, are the file paths for the Azure NetApp Files volumes.
volume HN1-data-mnt00001 (nfs://10.9.0.4/HN1-data-mnt00001)
volume HN1-data-mnt00002 (nfs://10.9.0.4/HN1-data-mnt00002)
volume HN1-log-mnt00001 (nfs://10.9.0.4/HN1-log-mnt00001)
volume HN1-log-mnt00002 (nfs://10.9.0.4/HN1-log-mnt00002)
volume HN1-shared (nfs://10.9.0.4/HN1-shared)
In this example, we used a separate Azure NetApp Files volume for each HANA data and log volume. For a
more cost-optimized configuration on smaller or non-productive systems, it's possible to place all data
mounts on a single volume and all logs mounts on a different single volume.
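For reference, a volume such as HN1-data-mnt00001 could also be deployed with the Azure CLI. The following is only a sketch: the account, pool, and network names are assumptions, the quota unit is assumed to be GiB, and all flags should be checked against az netappfiles volume create --help:

# Sketch only - assumed resource names; --usage-threshold is assumed to be specified in GiB
az netappfiles volume create --resource-group my-rg --account-name my-anf-account --pool-name my-pool \
  --name HN1-data-mnt00001 --file-path HN1-data-mnt00001 --protocol-types NFSv4.1 \
  --service-level Ultra --usage-threshold 3300 --vnet my-vnet --subnet anf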
Important considerations
As you're creating your Azure NetApp Files for the SAP HANA scale-out with standby nodes scenario, be aware of the
following important considerations:
The minimum capacity pool is 4 tebibytes (TiB).
The minimum volume size is 100 gibibytes (GiB).
Azure NetApp Files and all virtual machines where the Azure NetApp Files volumes will be mounted must be
in the same Azure virtual network or in peered virtual networks in the same region.
The selected virtual network must have a subnet that's delegated to Azure NetApp Files.
The throughput of an Azure NetApp Files volume is a function of the volume quota and service level, as
documented in Service level for Azure NetApp Files. When you're sizing the HANA Azure NetApp volumes,
make sure that the resulting throughput meets the HANA system requirements.
With the Azure NetApp Files export policy, you can control the allowed clients, the access type (read-write,
read only, and so on).
The Azure NetApp Files feature isn't zone-aware yet. Currently, the feature isn't deployed in all availability
zones in an Azure region. Be aware of the potential latency implications in some Azure regions.

IMPORTANT
For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual
machines and the Azure NetApp Files volumes are deployed in close proximity.

Sizing for HANA database on Azure NetApp Files


The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as
documented in Service level for Azure NetApp Files.
As you design the infrastructure for SAP in Azure, be aware of some minimum storage requirements by SAP,
which translate into minimum throughput characteristics:
Read-write on /hana/log of 250 megabytes per second (MB/s) with 1-MB I/O sizes.
Read activity of at least 400 MB/s for /hana/data for 16-MB and 64-MB I/O sizes.
Write activity of at least 250 MB/s for /hana/data with 16-MB and 64-MB I/O sizes.
The Azure NetApp Files throughput limits per 1 TiB of volume quota are:
Premium Storage tier - 64 MiB/s
Ultra Storage tier - 128 MiB/s
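As a quick sanity check, the expected throughput of a volume is simply its quota multiplied by the per-TiB limit of its service level. For example:

# Estimated throughput = volume quota (TiB) x service-level limit (MiB/s per 1 TiB of quota)
# 2-TiB Ultra log volume:    2   x 128 MiB/s = 256 MiB/s   (covers the 250-MB/s /hana/log requirement)
# 3.2-TiB Ultra data volume: 3.2 x 128 MiB/s = ~410 MiB/s  (covers the 400-MB/s /hana/data read requirement)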
To meet the SAP minimum throughput requirements for data and log, and the guidelines for /hana/shared, the
recommended sizes would be:
| Volume       | Size (Premium storage tier) | Size (Ultra storage tier)  | Supported NFS protocol |
| ------------ | --------------------------- | -------------------------- | ---------------------- |
| /hana/log    | 4 TiB                       | 2 TiB                      | v4.1                   |
| /hana/data   | 6.3 TiB                     | 3.2 TiB                    | v4.1                   |
| /hana/shared | 1 x RAM per 4 worker nodes  | 1 x RAM per 4 worker nodes | v3 or v4.1             |

The SAP HANA configuration for the layout that's presented in this article, using Azure NetApp Files Ultra Storage
tier, would be:

| Volume              | Size (Ultra storage tier) | Supported NFS protocol |
| ------------------- | ------------------------- | ---------------------- |
| /hana/log/mnt00001  | 2 TiB                     | v4.1                   |
| /hana/log/mnt00002  | 2 TiB                     | v4.1                   |
| /hana/data/mnt00001 | 3.2 TiB                   | v4.1                   |
| /hana/data/mnt00002 | 3.2 TiB                   | v4.1                   |
| /hana/shared        | 2 TiB                     | v3 or v4.1             |

NOTE
The Azure NetApp Files sizing recommendations stated here are targeted to meet the minimum requirements that SAP
recommends for their infrastructure providers. In real customer deployments and workload scenarios, these sizes may not
be sufficient. Use these recommendations as a starting point and adapt, based on the requirements of your specific
workload.

TIP
You can resize Azure NetApp Files volumes dynamically, without having to unmount the volumes, stop the virtual
machines, or stop SAP HANA. This approach allows flexibility to meet both the expected and unforeseen throughput
demands of your application.
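For example, a volume could be grown online with a single CLI call. This is only a sketch with assumed resource names; confirm the unit of --usage-threshold for your CLI version:

# Grow the (assumed) volume HN1-data-mnt00001 to 4 TiB; --usage-threshold is assumed to be in GiB
az netappfiles volume update --resource-group my-rg --account-name my-anf-account --pool-name my-pool \
  --name HN1-data-mnt00001 --usage-threshold 4096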

Deploy Linux virtual machines via the Azure portal


First you need to create the Azure NetApp Files volumes. Then do the following steps:
1. Create the Azure virtual network subnets in your Azure virtual network.
2. Deploy the VMs.
3. Create the additional network interfaces, and attach the network interfaces to the corresponding VMs.
Each virtual machine has three network interfaces, which correspond to the three Azure virtual network
subnets ( client , storage and hana ).
For more information, see Create a Linux virtual machine in Azure with multiple network interface cards.
IMPORTANT
For SAP HANA workloads, low latency is critical. To achieve low latency, work with your Microsoft representative to ensure
that the virtual machines and the Azure NetApp Files volumes are deployed in close proximity. When you're onboarding
a new SAP HANA system that's using SAP HANA on Azure NetApp Files, submit the necessary information.

The next instructions assume that you've already created the resource group, the Azure virtual network, and the
three Azure virtual network subnets: client , storage and hana . When you deploy the VMs, select the client
subnet, so that the client network interface is the primary interface on the VMs. You will also need to configure an
explicit route to the Azure NetApp Files delegated subnet via the storage subnet gateway.

IMPORTANT
Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP
HANA certified VM types and OS releases for those types, go to the SAP HANA certified IaaS platforms site. Click into the
details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.

1. Create an availability set for SAP HANA. Make sure to set the max update domain.
2. Create three virtual machines (hanadb1 , hanadb2 , hanadb3 ) by doing the following steps:
a. Use a Red Hat Enterprise Linux image in the Azure gallery that's supported for SAP HANA. We used a
RHEL-SAP-HA 7.6 image in this example.
b. Select the availability set that you created earlier for SAP HANA.
c. Select the client Azure virtual network subnet. Select Accelerated Network.
When you deploy the virtual machines, the network interface name is automatically generated. In these
instructions for simplicity we'll refer to the automatically generated network interfaces, which are attached
to the client Azure virtual network subnet, as hanadb1-client , hanadb2-client , and hanadb3-client .
3. Create three network interfaces, one for each virtual machine, for the storage virtual network subnet (in
this example, hanadb1-storage , hanadb2-storage , and hanadb3-storage ).
4. Create three network interfaces, one for each virtual machine, for the hana virtual network subnet (in this
example, hanadb1-hana , hanadb2-hana , and hanadb3-hana ).
5. Attach the newly created virtual network interfaces to the corresponding virtual machines by doing the
following steps:
a. Go to the virtual machine in the Azure portal.
b. In the left pane, select Virtual Machines. Filter on the virtual machine name (for example, hanadb1),
and then select the virtual machine.
c. In the Overview pane, select Stop to deallocate the virtual machine.
d. Select Networking , and then attach the network interface. In the Attach network interface drop-
down list, select the already created network interfaces for the storage and hana subnets.
e. Select Save .
f. Repeat steps b through e for the remaining virtual machines (in our example, hanadb2 and hanadb3 ).
g. Leave the virtual machines in stopped state for now. Next, we'll enable accelerated networking for all
newly attached network interfaces.
6. Enable accelerated networking for the additional network interfaces for the storage and hana subnets by
doing the following steps:
a. Open Azure Cloud Shell in the Azure portal.
b. Execute the following commands to enable accelerated networking for the additional network interfaces,
which are attached to the storage and hana subnets.

az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-storage --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb2-storage --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb3-storage --accelerated-networking true

az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-hana --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb2-hana --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb3-hana --accelerated-networking true

7. Start the virtual machines by doing the following steps:


a. In the left pane, select Virtual Machines. Filter on the virtual machine name (for example, hanadb1),
and then select it.
b. In the Overview pane, select Start.

Operating system configuration and preparation


The instructions in the next sections are prefixed with one of the following:
[A] : Applicable to all nodes
[1] : Applicable only to node 1
[2] : Applicable only to node 2
[3] : Applicable only to node 3
Configure and prepare your OS by doing the following steps:
1. [A] Maintain the host files on the virtual machines. Include entries for all subnets. The following entries
were added to /etc/hosts for this example.

# Storage
10.9.3.4 hanadb1-storage
10.9.3.5 hanadb2-storage
10.9.3.6 hanadb3-storage
# Client
10.9.1.5 hanadb1
10.9.1.6 hanadb2
10.9.1.7 hanadb3
# Hana
10.9.2.4 hanadb1-hana
10.9.2.5 hanadb2-hana
10.9.2.6 hanadb3-hana
2. [A] Add a network route, so that the communication to the Azure NetApp Files goes via the storage
network interface.
In this example, we'll use NetworkManager to configure the additional network route. The following
instructions assume that the storage network interface is eth1.
First, determine the connection name for device eth1. In this example, the connection name for device
eth1 is Wired connection 1.

# Execute as root
nmcli connection
# Result
#NAME UUID TYPE DEVICE
#System eth0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 ethernet eth0
#Wired connection 1 4b0789d1-6146-32eb-83a1-94d61f8d60a7 ethernet eth1

Then configure an additional route to the Azure NetApp Files delegated network via eth1.

# Add the following route


# ANFDelegatedSubnet/cidr via StorageSubnetGW dev StorageNetworkInterfaceDevice
nmcli connection modify "Wired connection 1" +ipv4.routes "10.9.0.0/26 10.9.3.1"

Reboot the VM to activate the changes.


3. [A] Install the NFS client package.

yum install nfs-utils

4. [A] Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in NetApp SAP
Applications on Microsoft Azure using Azure NetApp Files. Create configuration file /etc/sysctl.d/netapp-
hana.conf for the NetApp configuration settings.

vi /etc/sysctl.d/netapp-hana.conf
# Add the following entries in the configuration file
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 65536 16777216 16777216
net.ipv4.tcp_wmem = 65536 16777216 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1

5. [A] Create configuration file /etc/sysctl.d/ms-az.conf with additional optimization settings.


vi /etc/sysctl.d/ms-az.conf
# Add the following entries in the configuration file
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.ip_local_port_range = 40000 65300
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10

6. [A] Adjust the sunrpc settings, as recommended in the NetApp SAP Applications on Microsoft Azure using
Azure NetApp Files.

vi /etc/modprobe.d/sunrpc.conf
# Insert the following line
options sunrpc tcp_max_slot_table_entries=128

7. [A] Red Hat for HANA configuration.


Configure RHEL as described in SAP Notes 2292690, 2455582, and 2593824, and in
https://access.redhat.com/solutions/2447641.

NOTE
If you're installing HANA 2.0 SP04, you'll need to install the package compat-sap-c++-7 before you can install
SAP HANA, as described in SAP Note 2593824.

Mount the Azure NetApp Files volumes


1. [A] Create mount points for the HANA database volumes.

mkdir -p /hana/data/HN1/mnt00001
mkdir -p /hana/data/HN1/mnt00002
mkdir -p /hana/log/HN1/mnt00001
mkdir -p /hana/log/HN1/mnt00002
mkdir -p /hana/shared
mkdir -p /usr/sap/HN1

2. [1] Create node-specific directories for /usr/sap on HN1-shared.


# Create a temporary directory to mount HN1-shared
mkdir /mnt/tmp
# if using NFSv3 for this volume, mount with the following command
mount 10.9.0.4:/HN1-shared /mnt/tmp
# if using NFSv4.1 for this volume, mount with the following command
mount -t nfs -o sec=sys,vers=4.1 10.9.0.4:/HN1-shared /mnt/tmp
cd /mnt/tmp
mkdir shared usr-sap-hanadb1 usr-sap-hanadb2 usr-sap-hanadb3
# unmount the temporary mount
cd
umount /mnt/tmp

3. [A] Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp
Files domain, i.e. defaultv4iddomain.com and the mapping is set to nobody .

IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure
NetApp volumes that are mounted on the VMs will be displayed as nobody .

sudo cat /etc/idmapd.conf


# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

4. [A] Verify nfs4_disable_idmapping. It should be set to Y . To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create the
directory under /sys/modules, because access is reserved for the kernel / drivers.

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.9.0.4:/HN1-shared /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

For more details on how to change the nfs4_disable_idmapping parameter, see
https://access.redhat.com/solutions/1749883.
5. [A] Mount the shared Azure NetApp Files volumes.
sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
# Mount all volumes
sudo mount -a

6. [1] Mount the node-specific volumes on hanadb1 .

sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb1 /usr/sap/HN1 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a

7. [2] Mount the node-specific volumes on hanadb2 .

sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb2 /usr/sap/HN1 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a

8. [3] Mount the node-specific volumes on hanadb3 .

sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb3 /usr/sap/HN1 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a

9. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.1.
sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from hanadb1
/hana/data/HN1/mnt00001 from 10.9.0.4:/HN1-data-mnt00001
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/log/HN1/mnt00002 from 10.9.0.4:/HN1-log-mnt00002
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/data/HN1/mnt00002 from 10.9.0.4:/HN1-data-mnt00002
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/log/HN1/mnt00001 from 10.9.0.4:/HN1-log-mnt00001
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/usr/sap/HN1 from 10.9.0.4:/HN1-shared/usr-sap-hanadb1
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/shared from 10.9.0.4:/HN1-shared/shared
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4

Installation
In this example for deploying SAP HANA in scale-out configuration with standby node with Azure, we've used
HANA 2.0 SP4.
Prepare for HANA installation
1. [A] Before the HANA installation, set the root password. You can disable the root password after the
installation has been completed. To set the password, execute the command passwd as root.
2. [1] Verify that you can log in via SSH to hanadb2 and hanadb3 , without being prompted for a password.

ssh root@hanadb2
ssh root@hanadb3

3. [A] Install additional packages, which are required for HANA 2.0 SP4. For more information, see SAP Note
2593824.

yum install libgcc_s1 libstdc++6 compat-sap-c++-7 libatomic1

4. [2], [3] Change ownership of SAP HANA data and log directories to hn1adm.
# Execute as root
sudo chown hn1adm:sapsys /hana/data/HN1
sudo chown hn1adm:sapsys /hana/log/HN1

5. [A] Disable the firewall temporarily, so that it doesn't interfere with the HANA installation. You can re-
enable it, after the HANA installation is done.

# Execute as root
systemctl stop firewalld
systemctl disable firewalld

HANA installation
1. [1] Install SAP HANA by following the instructions in the SAP HANA 2.0 Installation and Update guide. In
this example, we install SAP HANA scale-out with master, one worker, and one standby node.
a. Start the hdblcm program from the HANA installation software directory. Use the internal_network
parameter and pass the address space for subnet, which is used for the internal HANA inter-node
communication.

./hdblcm --internal_network=10.9.2.0/26

b. At the prompt, enter the following values:


For Choose an action : enter 1 (for install)
For Additional components for installation : enter 2, 3
For installation path: press Enter (defaults to /hana/shared)
For Local Host Name : press Enter to accept the default
Under Do you want to add hosts to the system? : enter y
For comma-separated host names to add : enter hanadb2, hanadb3
For Root User Name [root]: press Enter to accept the default
For roles for host hanadb2: enter 1 (for worker)
For Host Failover Group for host hanadb2 [default]: press Enter to accept the default
For Storage Partition Number for host hanadb2 [<>]: press Enter to accept the default
For Worker Group for host hanadb2 [default]: press Enter to accept the default
For Select roles for host hanadb3: enter 2 (for standby)
For Host Failover Group for host hanadb3 [default]: press Enter to accept the default
For Worker Group for host hanadb3 [default]: press Enter to accept the default
For SAP HANA System ID : enter HN1
For Instance number [00]: enter 03
For Local Host Worker Group [default]: press Enter to accept the default
For Select System Usage / Enter index [4] : enter 4 (for custom)
For Location of Data Volumes [/hana/data/HN1]: press Enter to accept the default
For Location of Log Volumes [/hana/log/HN1]: press Enter to accept the default
For Restrict maximum memory allocation? [n]: enter n
For Certificate Host Name For Host hanadb1 [hanadb1]: press Enter to accept the default
For Certificate Host Name For Host hanadb2 [hanadb2]: press Enter to accept the default
For Certificate Host Name For Host hanadb3 [hanadb3]: press Enter to accept the default
For System Administrator (hn1adm) Password : enter the password
For System Database User (system) Password : enter the system's password
For Confirm System Database User (system) Password : enter system's password
For Restart system after machine reboot? [n]: enter n
For Do you want to continue (y/n) : validate the summary and if everything looks good, enter y
2. [1] Verify global.ini
Display global.ini, and ensure that the configuration for the internal SAP HANA inter-node communication
is in place. Verify the communication section. It should have the address space for the hana subnet, and
listeninterface should be set to .internal . Verify the internal_hostname_resolution section. It
should have the IP addresses for the HANA virtual machines that belong to the hana subnet.

sudo cat /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini


# Example
#global.ini last modified 2019-09-10 00:12:45.192808 by hdbnameserver
[communication]
internal_network = 10.9.2.0/26
listeninterface = .internal
[internal_hostname_resolution]
10.9.2.4 = hanadb1
10.9.2.5 = hanadb2
10.9.2.6 = hanadb3

3. [1] Add host mapping to ensure that the client IP addresses are used for client communication. Add
section public_hostname_resolution, and add the corresponding IP addresses from the client subnet.

sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
#Add the section
[public_hostname_resolution]
map_hanadb1 = 10.9.1.5
map_hanadb2 = 10.9.1.6
map_hanadb3 = 10.9.1.7

4. [1] Restart SAP HANA to activate the changes.

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB


sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB

5. [1] Verify that the client interface will be using the IP addresses from the client subnet for
communication.

# Execute as hn1adm
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB 'select * from
SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result
"hanadb3","net_publicname","10.9.1.7"
"hanadb2","net_publicname","10.9.1.6"
"hanadb1","net_publicname","10.9.1.5"

For information about how to verify the configuration, see SAP Note 2183363 - Configuration of SAP
HANA internal network.
6. [A] Re-enable the firewall.
Stop HANA

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB

Re-enable the firewall

# Execute as root
systemctl start firewalld
systemctl enable firewalld

Open the necessary firewall ports

IMPORTANT
Create firewall rules to allow HANA inter node communication and client traffic. The required ports are listed
on TCP/IP Ports of All SAP Products. The following commands are just an example. In this scenario, we used
system number 03.
# Execute as root
sudo firewall-cmd --zone=public --add-port=30301/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30301/tcp
sudo firewall-cmd --zone=public --add-port=30303/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30303/tcp
sudo firewall-cmd --zone=public --add-port=30306/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30306/tcp
sudo firewall-cmd --zone=public --add-port=30307/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30307/tcp
sudo firewall-cmd --zone=public --add-port=30313/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30313/tcp
sudo firewall-cmd --zone=public --add-port=30315/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30315/tcp
sudo firewall-cmd --zone=public --add-port=30317/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30317/tcp
sudo firewall-cmd --zone=public --add-port=30340/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30340/tcp
sudo firewall-cmd --zone=public --add-port=30341/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30341/tcp
sudo firewall-cmd --zone=public --add-port=30342/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30342/tcp
sudo firewall-cmd --zone=public --add-port=1128/tcp --permanent
sudo firewall-cmd --zone=public --add-port=1128/tcp
sudo firewall-cmd --zone=public --add-port=1129/tcp --permanent
sudo firewall-cmd --zone=public --add-port=1129/tcp
sudo firewall-cmd --zone=public --add-port=40302/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40302/tcp
sudo firewall-cmd --zone=public --add-port=40301/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40301/tcp
sudo firewall-cmd --zone=public --add-port=40307/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40307/tcp
sudo firewall-cmd --zone=public --add-port=40303/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40303/tcp
sudo firewall-cmd --zone=public --add-port=40340/tcp --permanent
sudo firewall-cmd --zone=public --add-port=40340/tcp
sudo firewall-cmd --zone=public --add-port=50313/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50313/tcp
sudo firewall-cmd --zone=public --add-port=50314/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50314/tcp
sudo firewall-cmd --zone=public --add-port=30310/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30310/tcp
sudo firewall-cmd --zone=public --add-port=30302/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30302/tcp

Start HANA

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB

7. To optimize SAP HANA for the underlying Azure NetApp Files storage, set the following SAP HANA
parameters:
max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all
For more information, see NetApp SAP Applications on Microsoft Azure using Azure NetApp Files.
Starting with SAP HANA 2.0 systems, you can set the parameters in global.ini . For more information, see
SAP Note 1999930.
For SAP HANA 1.0 systems versions SPS12 and earlier, these parameters can be set during the installation,
as described in SAP Note 2267798.
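A sketch of how these parameters could look in global.ini, assuming the [fileio] section is the correct location for your HANA 2.0 release (confirm against SAP Note 1999930 and the NetApp paper):

sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
# Add the following entries (assumption: [fileio] is the correct section for these parameters)
[fileio]
max_parallel_io_requests = 128
async_read_submit = on
async_write_submit_active = on
async_write_submit_blocks = all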
8. The storage that's used by Azure NetApp Files has a file size limitation of 16 terabytes (TB). SAP HANA is
not implicitly aware of the storage limitation, and it won't automatically create a new data file when the file
size limit of 16 TB is reached. As SAP HANA attempts to grow the file beyond 16 TB, that attempt will result
in errors and, eventually, in an index server crash.

IMPORTANT
To prevent SAP HANA from trying to grow data files beyond the 16-TB limit of the storage subsystem, set the
following parameters in global.ini:
datavolume_striping = true
datavolume_striping_size_gb = 15000
For more information, see SAP Note 2400005. Be aware of SAP Note 2631285.
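A sketch of the corresponding global.ini entries, assuming the [persistence] section is the correct location (confirm against SAP Note 2400005):

sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
# Add the following entries (assumption: [persistence] is the correct section)
[persistence]
datavolume_striping = true
datavolume_striping_size_gb = 15000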

Test SAP HANA failover


1. Simulate a node crash on an SAP HANA worker node. Do the following:
a. Before you simulate the node crash, run the following commands as hn1adm to capture the status of
the environment:

# Check the landscape status


python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | yes | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - |

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN

b. To simulate a node crash, run the following command as root on the worker node, which is hanadb2 in
this case:

echo b > /proc/sysrq-trigger


c. Monitor the system for failover completion. When the failover has been completed, capture the status,
which should look like the following:

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | no | info | | | 2 | 0 | default | default | master 2 | slave | worker | standby | worker | standby | default | - |
| hanadb3 | yes | info | | | 0 | 2 | default | default | master 3 | slave | standby | slave | standby | worker | default | default |

IMPORTANT
When a node experiences kernel panic, avoid delays with SAP HANA failover by setting kernel.panic to 20
seconds on all HANA virtual machines. The configuration is done in /etc/sysctl . Reboot the virtual machines to
activate the change. If this change isn't performed, failover can take 10 or more minutes when a node is
experiencing kernel panic.

2. Kill the name server by doing the following:


a. Prior to the test, check the status of the environment by running the following commands as hn1adm:
#Landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | yes | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - |
# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN

b. Run the following commands as hn1adm on the active master node, which is hanadb1 in this case:

hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB kill

The standby node hanadb3 will take over as master node. Here is the resource state after the failover test
is completed:

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | no | info | | | 1 | 0 | default | default | master 1 | slave | worker | standby | worker | standby | default | - |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | yes | info | | | 0 | 1 | default | default | master 3 | master | standby | master | standby | worker | default | default |

c. Restart the HANA instance on hanadb1 (that is, on the same virtual machine, where the name server
was killed). The hanadb1 node will rejoin the environment and will keep its standby role.

hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB start

After SAP HANA has started on hanadb1 , expect the following status:

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | no | info | | | 1 | 0 | default | default | master 1 | slave | worker | standby | worker | standby | default | - |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | yes | info | | | 0 | 1 | default | default | master 3 | master | standby | master | standby | worker | default | default |

d. Again, kill the name server on the currently active master node (that is, on node hanadb3 ).

hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB kill

Node hanadb1 will resume the role of master node. After the failover test has been completed, the status
will look like this:
# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - |

e. Start SAP HANA on hanadb3 , which will be ready to serve as a standby node.

hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB start

After SAP HANA has started on hanadb3 , the status looks like the following:

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList & python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default |
| hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - |
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs).
Backup guide for SAP HANA on Azure Virtual
Machines

Getting Started
The backup guide for SAP HANA running on Azure Virtual Machines describes only Azure-specific topics. For
general SAP HANA backup-related items, check the SAP HANA documentation. We expect you to be familiar with
the principles of database backup strategies, the reasons and motivations for having a sound and valid backup
strategy, and to be aware of the requirements your company has for the backup procedure, the retention period of
backups, and the restore procedure.
SAP HANA is officially supported on various Azure VM types, like Azure M-Series. For a complete list of SAP
HANA certified Azure VMs and HANA Large Instance units, check out Find Certified IaaS Platforms. Microsoft
Azure offers a number of units where SAP HANA runs non-virtualized on physical servers. This service is called
HANA Large Instances. This guide doesn't cover backup processes and tools for HANA Large Instances; it's
limited to Azure virtual machines. For details about backup/restore processes with HANA Large
Instances, read the article HLI Backup and Restore.
The focus of this article is on three backup possibilities for SAP HANA on Azure virtual machines:
HANA backup through Azure Backup Services
HANA backup to the file system in an Azure Linux Virtual Machine (see SAP HANA Azure Backup on file level)
HANA backup based on storage snapshots using the Azure storage blob snapshot feature manually or Azure
Backup service
SAP HANA offers a backup API, which allows third-party backup tools to integrate directly with SAP HANA.
Products like the Azure Backup service or Commvault use this proprietary interface to trigger SAP HANA
database or redo log backups.
Information on how you can find what SAP software is supported on Azure can be found in the article What SAP
software is supported for Azure deployments.

Azure Backup Service


In the first scenario shown, the Azure Backup service uses the SAP HANA backint interface to perform a
streaming backup from an SAP HANA database. Alternatively, you can use a more generic capability of the Azure
Backup service to create an application-consistent disk snapshot and have it transferred to the
Azure Backup service.
Azure Backup integrates with and is certified as a backup solution for SAP HANA using the proprietary SAP HANA
interface called backint. For more details on the solution, its capabilities, and the Azure regions where it is
available, read the article Support matrix for backup of SAP HANA databases on Azure VMs. For details and
principles about Azure Backup service for HANA, read the article About SAP HANA database backup in Azure
VMs.
The second possibility to leverage the Azure Backup service is to create an application-consistent backup using disk
snapshots of Azure Premium Storage. Other HANA-certified Azure storage types, like Azure Ultra disk and Azure
NetApp Files, don't support this kind of snapshot through the Azure Backup service. Reading these articles:
Plan your VM backup infrastructure in Azure
Application-consistent backup of Azure Linux VMs
the following sequence of activities emerges:
Azure Backup needs to execute a pre-snapshot script that puts the application, in this case SAP HANA, in a
consistent state
As this consistent state is confirmed, Azure Backup will execute the disk snapshots
After finishing the snapshots, Azure Backup will undo the activity it did in the pre-snapshot script
After successful execution, Azure Backup will stream the data into the Backup vault
In the case of SAP HANA, most customers use Azure Write Accelerator for the volumes that contain the SAP
HANA redo log. The Azure Backup service automatically excludes these volumes from the snapshots. This
exclusion doesn't harm the ability of HANA to restore, though it would block the ability to restore with nearly
all other SAP-supported DBMSs.
The downside of this possibility is that you need to develop your own pre- and post-snapshot scripts. The
pre-snapshot script needs to create a HANA snapshot and handle eventual exception cases, whereas the post-
snapshot script needs to delete the HANA snapshot again. For more details on the required logic, start with SAP
support note #2039883. The considerations of the section 'SAP HANA data consistency when taking storage
snapshots' in this article fully apply to this kind of backup.

NOTE
Disk snapshot based backups for SAP HANA in deployments where multiple database containers are used, require a
minimum release of HANA 2.0 SP04

See details about storage snapshots later in this document.

Other HANA backup methods


There are three other backup methods or paths that can be considered:
Backing up to an NFS share that is based on Azure NetApp Files (ANF). ANF in turn has the ability to
create snapshots of the volumes you store backups on. Given the throughput that you eventually require to
write the backups, this solution could become an expensive method, though it is easy to establish since HANA
can write the backups directly into the Azure-native NFS share
Executing the HANA backup against VM-attached disks of Standard SSD or Azure Premium Storage. As a next
step, you can copy those backup files to Azure Blob storage. This strategy might be attractive price-wise
Executing the HANA backup against VM-attached disks of Standard SSD or Azure Premium Storage. As a next
step, the disk gets snapshotted on a regular basis. After the first snapshot, incremental snapshots can be used
to reduce costs

This figure shows options for taking an SAP HANA file backup inside the VM, and then storing the HANA backup
files somewhere else using different tools. However, all solutions that don't involve a third-party backup service or
the Azure Backup service have several hurdles in common. They lack capabilities like retention administration, an
automatic restore process, and automatic point-in-time recovery, which the Azure Backup service or other
specialized third-party backup suites and services provide. Many of those third-party services can run
on Azure.

SAP resources for HANA backup


SAP HANA backup documentation
Introduction to SAP HANA Administration
Planning Your Backup and Recovery Strategy
Schedule HANA Backup using ABAP DBACOCKPIT
Schedule Data Backups (SAP HANA Cockpit)
FAQ about SAP HANA backup in SAP Note 1642148
FAQ about SAP HANA database and storage snapshots in SAP Note 2039883
Unsuitable network file systems for backup and recovery in SAP Note 1820529
How to verify correctness of SAP HANA backup
Independent of your backup method, running a test restore against a different system is an absolute necessity.
This approach provides a way to ensure that a backup is correct, and that internal processes for backup and restore
work as expected. While restoring backups can be a hurdle on-premises because of the infrastructure required, it
is much easier to accomplish in the cloud by temporarily providing the necessary resources for this purpose. It is
true that HANA provides tools that can check backup files for their ability to be restored. However, the
purpose of frequent restore exercises is to test the process of a database restore and to train the
operations staff on that process.
Keep in mind that doing a simple restore and checking if HANA is up and running is not sufficient. You should
run a table consistency check to be sure that the restored database is fine. SAP HANA offers several kinds of
consistency checks described in SAP Note 1977584.
Information about the table consistency check can also be found on the SAP website at Table and Catalog
Consistency Checks.
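As a minimal sketch of such a check, assuming hdbsql is available on the VM and that the instance number, database name, and credentials below are placeholders for your system, a full consistency check of all tables could be triggered like this (a full check can run for a long time on large databases):

# Hypothetical example: trigger a table consistency check after a test restore
hdbsql -i 03 -d HN1 -u SYSTEM -p <password> "CALL CHECK_TABLE_CONSISTENCY('CHECK', NULL, NULL)"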
Pros and cons of HANA backup versus storage snapshot
SAP doesn't give preference to either HANA backup or storage snapshots. It lists their pros and cons, so one
can determine which to use depending on the situation and available storage technology (see Planning Your
Backup and Recovery Strategy).
On Azure, be aware of the fact that the Azure blob snapshot feature doesn't provide file system consistency
across multiple disks (see Using blob snapshots with PowerShell).
In addition, one has to understand the billing implications when working frequently with blob snapshots as
described in this article: Understanding How Snapshots Accrue Charges—it isn't as obvious as using Azure
virtual disks.
SAP HANA data consistency when taking storage snapshots
As documented earlier when describing the snapshot backup capabilities of Azure Backup, file system and application
consistency are mandatory when taking storage snapshots. The easiest way to avoid problems would be to shut
down SAP HANA, or maybe even the whole virtual machine. But that is not feasible for a production
instance.

NOTE
Disk snapshot based backups for SAP HANA in deployments where multiple database containers are used, require a
minimum release of HANA 2.0 SP04

Azure storage does not provide file system consistency across multiple disks or volumes that are attached to a
VM during the snapshot process. That means the application consistency during the snapshot needs to be
delivered by the application, in this case SAP HANA itself. SAP Note 2039883 has important information about
SAP HANA backups by storage snapshots. For example, with XFS file systems, it is necessary to run xfs_freeze
before starting a storage snapshot to provide application consistency (see xfs_freeze(8) - Linux man page for
details on xfs_freeze).
Assuming there is an XFS file system spanning four Azure virtual disks, the following steps provide a consistent
snapshot that represents the HANA data area (a minimal scripted sketch follows the list):
1. Create HANA data snapshot prepare
2. Freeze the file systems of all disks/volumes (for example, use xfs_freeze )
3. Create all necessary blob snapshots on Azure
4. Unfreeze the file system
5. Confirm the HANA data snapshot (will delete the snapshot)
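A minimal scripted sketch of this sequence, run as a privileged user inside the VM, could look like the following. The SID (HN1), instance number (03), mount point, resource group, and disk names are assumptions and need to be replaced with your values; the exact BACKUP DATA syntax also depends on your HANA release and whether you run multiple database containers.

# Step 1: prepare the HANA data snapshot (hypothetical SID HN1, instance 03)
hdbsql -i 03 -d SYSTEMDB -u SYSTEM -p <password> "BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT 'azure-disk-snapshot'"
# Step 2: freeze the XFS file system that spans the four data disks
xfs_freeze -f /hana/data
# Step 3: create the Azure snapshots, one per data disk (repeat for all four disks)
az snapshot create --resource-group <resource-group> --name hana-data-disk0-snap --source <managed-disk-name-or-id>
# Step 4: unfreeze the file system
xfs_freeze -u /hana/data
# Step 5: confirm the HANA data snapshot; the backup ID can be looked up in M_BACKUP_CATALOG
hdbsql -i 03 -d SYSTEMDB -u SYSTEM -p <password> "BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID <backup_id> SUCCESSFUL 'azure-disk-snapshot'"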
When using Azure Backup's capability to perform application-consistent snapshot backups, step #1 needs to
be coded/scripted by you in the pre-snapshot script. The Azure Backup service executes steps #2 and #3. Steps
#4 and #5 again need to be provided by your code in the post-snapshot script. If you are not using the Azure
Backup service, you also need to code/script steps #2 and #3 on your own. More information on creating HANA data
snapshots can be found in these articles:
[HANA data snapshots](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.04/en-US/ac114d4b34d542b99bc390b34f8ef375.html)
More details on performing step #1 can be found in the article Create a Data Snapshot (Native SQL)
Details on confirming/deleting HANA data snapshots as needed in step #5 can be found in the article Create a Data
Snapshot (Native SQL)
It is important to confirm the HANA snapshot. Due to the "Copy-on-Write," SAP HANA might not require
additional disk space while in this snapshot-prepare mode. It's also not possible to start new backups until the
SAP HANA snapshot is confirmed.
SAP HANA backup scheduling strategy
The SAP HANA article Planning Your Backup and Recovery Strategy states a basic plan to do backups. Rely on
SAP documentation around HANA and your experiences with other DBMS in defining the backup/restore
strategy and process for SAP HANA. The sequence of different types of backups, and the retention period are
highly dependent on the SLAs you need to provide.
SAP HANA backup encryption
SAP HANA offers encryption of data and log. If SAP HANA data and log are not encrypted, then the backups are
not encrypted by default. However, SAP HANA offers a separate backup encryption as documented in SAP HANA
Backup Encryption. If you are running older releases of SAP HANA, you might need to check whether backup
encryption was part of the functionality provided already.

Next steps
SAP HANA Azure Backup on file level describes the file-based backup option.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
SAP HANA Azure Backup on file level
12/22/2020 • 8 minutes to read

Introduction
This article is a related article to Backup guide for SAP HANA on Azure Virtual Machines, which provides an
overview and information on getting started and more details on Azure Backup service and storage snapshots.
Different VM types in Azure allow a different number of VHDs attached. The exact details are documented in Sizes
for Linux virtual machines in Azure. For the tests referred to in this documentation we used a GS5 Azure VM,
which allows 64 attached data disks. For larger SAP HANA systems, a significant number of disks might already be
taken for data and log files, possibly in combination with software striping for optimal disk IO throughput. For
more details on suggested disk configurations for SAP HANA deployments on Azure VMs, read the article SAP
HANA Azure virtual machine storage configurations. The recommendations made also include disk space
recommendations for local backups.
The standard way to manage backup/restore at the file level is with a file-based backup via SAP HANA Studio or
via SAP HANA SQL statements. For more information, read the article SAP HANA SQL and System Views
Reference.
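As an illustration, assuming hdbsql is used and that the instance number, database name, credentials, and backup prefix below are placeholders, a complete file-based data backup could be triggered like this:

# Hypothetical example: complete data backup written to the file system
hdbsql -i 03 -d HN1 -u SYSTEM -p <password> "BACKUP DATA USING FILE ('/hana/backup/data/FULL_20201222')"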

This figure shows the dialog of the backup menu item in SAP HANA Studio. When choosing type "file," one has to
specify a path in the file system where SAP HANA writes the backup files. Restore works the same way.
While this choice sounds simple and straightforward, there are some considerations. An Azure VM has a
limit on the number of data disks that can be attached. There might not be capacity to store SAP HANA backup
files on the file systems of the VM, depending on the size of the database and disk throughput requirements,
which might involve software striping across multiple data disks. Various options for moving these backup files,
and managing file size restrictions and performance when handling terabytes of data, are provided later in this
article.
Another option, which offers more freedom regarding total capacity, is Azure blob storage. While a single blob is
also restricted to 1 TB, the total capacity of a single blob container is currently 500 TB. Additionally, it gives
customers the choice to select so-called "cool" blob storage, which has a cost benefit. See Azure Blob storage: hot,
cool, and archive access tiers for details about cool blob storage.
For additional safety, use a geo-replicated storage account to store the SAP HANA backups. See Azure Storage
redundancy for details about storage redundancy and storage replication.
One could place dedicated VHDs for SAP HANA backups in a dedicated backup storage account that is geo-
replicated. Or else one could copy the VHDs that keep the SAP HANA backups to a geo-replicated storage account,
or to a storage account that is in a different region.

Azure blobxfer utility details


To store directories and files on Azure storage, one could use CLI or PowerShell, or develop a tool using one of the
Azure SDKs. There is also a ready-to-use utility, AzCopy, for copying data to Azure storage. (see Transfer data with
the AzCopy Command-Line Utility).
For the tests in this article, blobxfer was used for copying SAP HANA backup files. It is open source, used by many customers in
production environments, and available on GitHub. This tool allows one to copy data directly to either Azure blob
storage or Azure file share. It also offers a range of useful features, like md5 hash, or automatic parallelism when
copying a directory with multiple files.
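A typical upload of a local backup directory with blobxfer might look like the following; the storage account, account key, container name, and local path are placeholders, and the exact option names can vary between blobxfer versions:

# Hypothetical blobxfer invocation: upload local HANA backup files to a blob container
blobxfer upload --storage-account <storageaccount> --storage-account-key <account-key> \
  --remote-path hanabackups --local-path /hana/backup/data --file-md5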

SAP HANA backup performance


In this chapter, performance considerations are discussed. The numbers achieved may not represent the most
recent state, since there is steady development to achieve better throughput to Azure storage. As a result, you
should perform individual tests for your configuration and Azure region.

This screenshot shows the SAP HANA backup console of SAP HANA Studio. It took about 42 minutes to perform a
backup of 230 GB on a single Azure Standard HDD storage disk attached to the HANA VM using the XFS file
system on the one disk.
This screenshot is of YaST on the SAP HANA test VM. You can see the 1-TB single disk for SAP HANA backup. It
took about 42 minutes to back up 230 GB. In addition, five 200-GB disks were attached and software RAID md0
was created, with striping on top of these five Azure data disks.
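A striped backup volume like the one used in this test could be built roughly as follows; the device names /dev/sdc through /dev/sdg and the mount point are assumptions that will differ in your VM:

# Hypothetical sketch: create a RAID 0 (striped) md0 device across five attached data disks
mdadm --create /dev/md0 --level=0 --raid-devices=5 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
mkfs.xfs /dev/md0
mkdir -p /hana/backup
# Mount by UUID so the entry survives device renaming across reboots
echo "UUID=$(blkid -s UUID -o value /dev/md0) /hana/backup xfs defaults,nofail 0 2" >> /etc/fstab
mount /hana/backup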

Repeating the same backup on software RAID with striping across five attached Azure standard storage data disks
brought the backup time from 42 minutes down to 10 minutes. The disks were attached without caching to the
VM. This exercise demonstrates the importance of disk write throughput for achieving good backup time. You
could switch to Azure Standard SSD storage or Azure Premium Storage to further accelerate the process for
optimal performance. In general, Azure Standard HDD storage is not recommended and was used for
demonstration purposes only. The recommendation is to use a minimum of Azure Standard SSD storage or Azure
Premium Storage for production systems.

Copy SAP HANA backup files to Azure blob storage


The performance numbers, backup duration numbers, and copy duration numbers mentioned might not
represent most recent state of Azure technology. Microsoft is steadily improving Azure storage to deliver more
throughput and lower latencies. Therefore, the numbers are for demonstration purposes only. You need to test for
your individual needs in the Azure region of your choice to be able to judge which method is the best for you.
Another option to quickly store SAP HANA backup files is Azure blob storage. One single blob container has a
limit of around 500 TB, enough for SAP HANA systems, using M32ts, M32ls, M64ls, and GS5 VM types of Azure,
to keep sufficient SAP HANA backups. Customers have the choice between "hot" and "cool" blob storage (see
Azure Blob storage: hot, cool, and archive access tiers).
With the blobxfer tool, it is easy to copy the SAP HANA backup files directly to Azure blob storage.

You can see the files of a full SAP HANA file backup. Of the four files, the biggest one has roughly 230 GB size.
Not using md5 hash in the initial test, it took roughly 3000 seconds to copy the 230 GB to an Azure standard
storage account blob container.
The HANA Studio backup console allows one to restrict the max file size of HANA backup files. In the sample
environment, having multiple smaller backup files instead of one large 230-GB file improved the subsequent copy
performance. Setting the backup file size limit on the HANA side doesn't improve the backup time, because the
files are written sequentially. The file size limit was set to 60 GB, so the backup created four large data files
instead of the 230-GB single file. Using multiple backup files can become a necessity for backing up HANA
databases if your backup targets have limitations on file sizes or blob sizes.

To test parallelism of the blobxfer tool, the max file size for HANA backups was then set to 15 GB, which resulted in
19 backup files. This configuration brought the time for blobxfer to copy the 230 GB to Azure blob storage from
3000 seconds down to 875 seconds.
As you are exploring copying backups performed against local disks to other locations, like Azure blob storage,
keep in mind that the bandwidth used by an eventual parallel copy process counts against the network or
storage quota of your individual VM type. As a result, you need to balance the duration of the copy against the
network and storage bandwidth that the normal workload running in the VM requires.

Copy SAP HANA backup files to NFS share


Microsoft Azure offers native NFS shares through Azure NetApp Files. You can create volumes of dozens of
TBs in capacity to store and manage backups. You also can snapshot those volumes based on NetApp's
technology. Azure NetApp Files (ANF) is offered in three different service levels that give different storage
throughput. For more details, read the article Service levels for Azure NetApp Files. You can create and mount an
NFS volume from ANF as described in the article Quickstart: Set up Azure NetApp Files and create an NFS
volume.
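Mounting such an ANF volume as a backup target could look like the following sketch; the IP address, volume path, and mount options are placeholders and should match the mount instructions shown for your volume in the Azure portal:

# Hypothetical example: mount an Azure NetApp Files NFS volume as a HANA backup target
mkdir -p /hana/backup
mount -t nfs -o rw,hard,rsize=262144,wsize=262144,vers=3,tcp 10.0.2.4:/hanabackupvol /hana/backup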
Besides using native NFS volumes of Azure through ANF, there are various possibilities for creating your own
deployments that provide NFS shares on Azure. All have the disadvantage that you need to deploy and manage
those solutions yourself. Some of those possibilities are documented in these articles:
High availability for NFS on Azure VMs on SUSE Linux Enterprise Server
GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver
NFS shares created by the means described above can be used to execute HANA backups against directly, or to copy
backups that were performed against local disks to those NFS shares.

NOTE
SAP HANA supports NFS v3 and NFS v4.x. Any other format, like SMB with the CIFS file system, is not supported to write HANA
backups against. See also SAP support note #1820529

Copy SAP HANA backup files to Azure Files


It is possible to mount an Azure Files share inside an Azure Linux VM. The article How to use Azure File storage
with Linux provides details on how to perform the configuration. For limitations of Azure Files or Azure
Premium Files, read the article Azure Files scalability and performance targets.
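Mounting an Azure Files share on Linux as a copy target could look like the following sketch; the storage account, share name, and key are placeholders. As explained in the note below, such a share can only serve as the final destination for backup files that were first written to locally attached disks:

# Hypothetical example: mount an Azure Files SMB share as a copy target for existing backup files
mkdir -p /mnt/backupcopy
mount -t cifs //<storageaccount>.file.core.windows.net/<sharename> /mnt/backupcopy \
  -o vers=3.0,username=<storageaccount>,password=<storage-account-key>,dir_mode=0700,file_mode=0700,serverino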

NOTE
SMB with CIFS file system is not supported by SAP HANA to write HANA backups against. See also SAP support note
#1820529. As a result, you can only use this solution as the final destination of a HANA database backup that has been
performed directly against locally attached disks.

In a test conducted against Azure Files (not Azure Premium Files), it took around 929 seconds to copy 19 backup
files with an overall volume of 230 GB. We expect the time using Azure Premium Files to be considerably better.
However, keep in mind that you need to balance the interest of a fast copy against the network bandwidth
requirements of your workload. Since every Azure VM type enforces a network bandwidth quota, your workload
plus the copy of the backup files need to stay within the range of that quota.
Storing SAP HANA backup files on Azure Files could be an interesting option, especially with the improved latency
and throughput of Azure Premium Files.

Next steps
Backup guide for SAP HANA on Azure Virtual Machines gives an overview and information on getting started.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
SAP workloads on Azure: planning and deployment
checklist
12/22/2020 • 28 minutes to read

This checklist is designed for customers moving SAP NetWeaver, S/4HANA, and Hybris applications to Azure
infrastructure as a service. Throughout the duration of the project, a customer and/or SAP partner should review
the checklist. It's important to note that many of the checks are completed at the beginning of the project and
during the planning phase. After the deployment is done, straightforward changes on deployed Azure
infrastructure or SAP software releases can become complex.
Review the checklist at key milestones during your project. Doing so will enable you to detect small problems
before they become large problems. You'll also have enough time to re-engineer and test any necessary changes.
Don't consider this checklist complete. Depending on your situation, you might need to perform many more
checks.
The checklist doesn't include tasks that are independent of Azure. For example, SAP application interfaces change
during a move to the Azure platform or to a hosting provider.
This checklist can also be used for systems that are already deployed. New features, like Write Accelerator and
Availability Zones, and new VM types might have been added since you deployed. So it's useful to review the
checklist periodically to ensure you're aware of new features in the Azure platform.

Project preparation and planning phase


During this phase, you plan the migration of your SAP workload to the Azure platform. At a minimum, during this
phase you need to create the following documents and define and discuss the following elements of the
migration:
1. High-level design document. This document should contain:
The current inventory of SAP components and applications, and a target application inventory for Azure.
A responsibility assignment matrix (RACI) that defines the responsibilities and assignments of the
parties involved. Start at a high level, and work to more granular levels throughout planning and the
first deployments.
A high-level solution architecture.
A decision about which Azure regions to deploy to. See the list of Azure regions. To learn which services
are available in each region, see Products available by region.
A networking architecture to connect from on-premises to Azure. Start to familiarize yourself with the
Virtual Datacenter blueprint for Azure.
Security principles for running high-impact business data in Azure. To learn about data security, start
with the Azure security documentation.
2. Technical design document. This document should contain:
A block diagram for the solution.
The sizing of compute, storage, and networking components in Azure. For SAP sizing of Azure VMs, see
[SAP note #1928533](https://launchpad.support.sap.com/#/notes/1928533).
Business continuity and disaster recovery architecture.
Detailed information about OS, DB, kernel, and SAP support pack versions. It's not necessarily true that
every OS release supported by SAP NetWeaver or S/4HANA is supported on Azure VMs. The same is
true for DBMS releases. Check the following sources to align and if necessary upgrade SAP releases,
DBMS releases, and OS releases to ensure SAP and Azure support. You need to have release
combinations supported by SAP and Azure to get full support from SAP and Microsoft. If necessary, you
need to plan for upgrading some software components. More details on supported SAP, OS, and DBMS
software are documented here:
SAP support note #1928533. This note defines the minimum OS releases supported on Azure
VMs. It also defines the minimum database releases required for most non-HANA databases.
Finally, it provides the SAP sizing for SAP-supported Azure VM types.
SAP support note #2015553. This note defines support policies around Azure storage and
support relationship needed with Microsoft.
SAP support note #2039619. This note defines the Oracle support matrix for Azure. Oracle
supports only Windows and Oracle Linux as guest operating systems on Azure for SAP
workloads. This support statement also applies for the SAP application layer that runs SAP
instances. However, Oracle doesn't support high availability for SAP Central Services in Oracle
Linux through Pacemaker. If you need high availability for ASCS on Oracle Linux, you need to use
SIOS Protection Suite for Linux. For detailed SAP certification data, see SAP support note
#1662610 - Support details for SIOS Protection Suite for Linux. For Windows, the SAP-supported
Windows Server Failover Clustering solution for SAP Central Services is supported in conjunction
with Oracle as the DBMS layer.
SAP support note #2235581. This note provides the support matrix for SAP HANA on different
OS releases.
SAP HANA-supported Azure VMs and HANA Large Instances are listed on the SAP website.
SAP Product Availability Matrix.
SAP support note #2555629 - SAP HANA 2.0 Dynamic Tiering – Hypervisor and Cloud Support
SAP support note #1662610 - Support details for SIOS Protection Suite for Linux
SAP notes for other SAP-specific products.
Using multi-SID cluster configurations for SAP Central Services is supported on Windows, SLES, and
RHEL guest operating systems on Azure. Keep in mind that the blast radius increases the more
ASCS/SCS instances you place on such a multi-SID cluster. You can find documentation for the respective guest
OS scenario in these articles:
SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and
shared disk on Azure
SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and
file share on Azure
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP
applications multi-SID guide
High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux for SAP
applications multi-SID guide
High availability and disaster recovery architecture.
Based on RTO and RPO, define what the high availability and disaster recovery architecture needs
to look like.
For high availability within a zone, check what the desired DBMS has to offer in Azure. Most
DBMS packages offer synchronous replication to a hot standby, which we
recommend for production systems. Also check the SAP-related documentation for different
databases, starting with Considerations for Azure Virtual Machines DBMS deployment for SAP
workloads and related documents. Using Windows Server Failover Clustering with a shared disk
configuration for the DBMS layer as, for example, described for SQL Server, isn't supported.
Instead, use solutions like:
SQL Server Always On
Oracle Data Guard
HANA System Replication
For disaster recovery across Azure regions, review the solutions offered by different DBMS
vendors. Most of them support asynchronous replication or log shipping.
For the SAP application layer, determine whether you'll run your business regression test
systems, which ideally are replicas of your production deployments, in the same Azure region or
in your DR region. In the second case, you can target that business regression system as the DR
target for your production deployments.
If you decide not to place the non-production systems in the DR site, look into Azure Site
Recovery as a method for replicating the SAP application layer into the Azure DR region. For
more information, see Set up disaster recovery for a multi-tier SAP NetWeaver app
deployment.
If you decide to use a combined HADR configuration by using Azure Availability Zones,
familiarize yourself with the Azure regions where Availability Zones are available. Also take into
account restrictions that can be introduced by increased network latencies between two
Availability Zones.
3. An inventory of all SAP interfaces (SAP and non-SAP).
4. Design of foundation services. This design should include the following items:
Active Directory and DNS design.
Network topology within Azure and assignment of different SAP systems.
Azure role-based access control (Azure RBAC) structure for teams that manage infrastructure and SAP
applications in Azure.
Resource group topology.
Tagging strategy.
Naming conventions for VMs and other infrastructure components and/or logical names.
5. Microsoft Professional or Premier Support contract. Identify your Microsoft Technical Account Manager (TAM)
if you have a Premier support contract with Microsoft. For SAP support requirements, see SAP support note
#2015553.
6. The number of Azure subscriptions and core quota for the subscriptions. Open support requests to increase
quotas of Azure subscriptions as needed.
7. Data reduction and data migration plan for migrating SAP data into Azure. For SAP NetWeaver systems, SAP
has guidelines on how to limit the volume of large amounts of data. See this SAP guide about data
management in SAP ERP systems. Some of the content also applies to NetWeaver and S/4HANA systems in
general.
8. An automated deployment approach. The goal of automating infrastructure deployments on Azure is to
deploy in a deterministic way and get deterministic results. Many customers use PowerShell or CLI-based
scripts (a minimal Azure CLI sketch follows this list). But there are various open-source technologies that you can
use to deploy Azure infrastructure for SAP and even install SAP software. You can find examples on GitHub:
Automated SAP Deployments in Azure Cloud
SAP HANA Installation
9. Define a regular design and deployment review cadence between you as the customer, the system integrator,
Microsoft, and other involved parties.
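As referenced in item 8 above, a minimal Azure CLI sketch for a scripted VM deployment could look like the following; the resource group, names, image URN, VM size, and network objects are placeholders that need to match your own sizing, OS, and network design decisions:

# Hypothetical sketch: scripted deployment of a single SAP application server VM with the Azure CLI
az group create --name rg-sap-dev --location westeurope
az vm create \
  --resource-group rg-sap-dev \
  --name sapapp01 \
  --image <publisher>:<offer>:<sku>:latest \
  --size Standard_E16s_v3 \
  --vnet-name vnet-sap --subnet subnet-sap-app \
  --availability-set avset-sap-app \
  --admin-username azureadmin \
  --ssh-key-values ~/.ssh/id_rsa.pub \
  --accelerated-networking true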

Pilot phase (strongly recommended)


You can run a pilot before or during project planning and preparation. You can also use the pilot phase to test
approaches and designs made during the planning and preparation phase. And you can expand the pilot phase to
make it a real proof of concept.
We recommend that you set up and validate a full HADR solution and security design during a pilot deployment.
Some customers perform scalability tests during this phase. Other customers use deployments of SAP sandbox
systems as a pilot phase. We assume you've already identified a system that you want to migrate to Azure for the
pilot.
1. Optimize data transfer to Azure. The optimal choice is highly dependent on the specific scenario. Transfer from
on-premises through Azure ExpressRoute is fastest if the ExpressRoute circuit has enough bandwidth. In other
scenarios, transferring through the internet is faster.
2. For a heterogeneous SAP platform migration that involves an export and import of data, test and optimize the
export and import phases. For large migrations in which SQL Server is the destination platform, you can find
recommendations. You can use Migration Monitor/SWPM if you don't need a combined release upgrade. You
can use the SAP DMO process when you combine the migration with an SAP release upgrade. To do so, you
need to meet certain requirements for the source and target DBMS platform combination. This process is
documented in Database Migration Option (DMO) of SUM 2.0 SP03.
a. Export to source, export file upload to Azure, and import performance. Maximize overlap between
export and import.
b. Evaluate the volume of the database on the target and destination platforms for the purposes of
infrastructure sizing.
c. Validate and optimize timing.
3. Technical validation.
a. VM types.
Review the resources in SAP support notes, in the SAP HANA hardware directory, and in the SAP
PAM again. Make sure there are no changes to supported VMs for Azure, supported OS releases
for those VM types, and supported SAP and DBMS releases.
Validate again the sizing of your application and the infrastructure you deploy on Azure. If you're
moving existing applications, you can often derive the necessary SAPS from the infrastructure
you use and the SAP benchmark webpage and compare it to the SAPS numbers listed in SAP
support note #1928533. Also keep this article on SAPS ratings in mind.
Evaluate and test the sizing of your Azure VMs with regard to maximum storage throughput and
network throughput of the VM types you chose during the planning phase. You can find the data
here:
Sizes for Windows virtual machines in Azure. It's important to consider the max uncached
disk throughput for sizing.
Sizes for Linux virtual machines in Azure. It's important to consider the max uncached disk
throughput for sizing.
b. Storage.
Check the document Azure Storage types for SAP workload
At a minimum, use Azure Standard SSD storage for VMs that represent SAP application layers
and for deployment of DBMSs that aren't performance sensitive.
In general, we don't recommend the use of Azure Standard HDD disks.
Use Azure Premium Storage for any DBMS VMs that are remotely performance sensitive.
Use Azure managed disks.
Use Azure Write Accelerator for DBMS log drives with M-Series. Be aware of Write Accelerator
limits and usage, as documented in Write Accelerator.
For the different DBMS types, check the generic SAP-related DBMS documentation and the
DBMS-specific documentation that the generic document points to.
For more information about SAP HANA, see SAP HANA infrastructure configurations and
operations on Azure.
Never mount Azure data disks to an Azure Linux VM by using the device ID. Instead, use the
universally unique identifier (UUID). Be careful when you use graphical tools to mount Azure data
disks, for example. Double-check the entries in /etc/fstab to make sure the UUID is used to mount
the disks. You can find more details in this article.
c. Networking.
Test and evaluate your virtual network infrastructure and the distribution of your SAP
applications across or within the different Azure virtual networks.
Evaluate the hub-and-spoke virtual network architecture approach or the microsegmentation
approach within a single Azure virtual network. Base this evaluation on: 1. Costs of data exchange
between peered Azure virtual networks. For information about costs, see Virtual Network pricing.
2. Advantages of a fast disconnection of the peering between Azure virtual networks as opposed
to changing the network security group to isolate a subnet within a virtual network. This
evaluation is for cases when applications or VMs hosted in a subnet of the virtual network
became a security risk. 3. Central logging and auditing of network traffic between on-premises,
the outside world, and the virtual datacenter you built in Azure.
Evaluate and test the data path between the SAP application layer and the SAP DBMS layer.
Placement of Azure network virtual appliances in the communication path between the
SAP application and the DBMS layer of SAP systems based on SAP NetWeaver, Hybris, or
S/4HANA isn't supported.
Placement of the SAP application layer and SAP DBMS in different Azure virtual networks
that aren't peered isn't supported.
You can use application security group and network security group rules to define routes
between the SAP application layer and the SAP DBMS layer.
Make sure that Azure Accelerated Networking is enabled on the VMs used in the SAP application
layer and the SAP DBMS layer. Keep in mind that different OS levels are needed to support
Accelerated Networking in Azure:
Windows Server 2012 R2 or later.
SUSE Linux 12 SP3 or later.
RHEL 7.4 or later.
Oracle Linux 7.5. If you're using the RHCK kernel, release 3.10.0-862.13.1.el7 is required. If
you're using the Oracle UEK kernel, release 5 is required.
Test and evaluate the network latency between the SAP application layer VMs and DBMS VMs
according to SAP support notes #500235 and #1100926. Evaluate the results against the
network latency guidance in SAP support note #1100926. The network latency should be in the
moderate or good range. Exceptions apply to traffic between VMs and HANA Large Instance
units, as documented in this article.
Make sure ILB deployments are set up to use Direct Server Return. This setting will reduce latency
when Azure ILBs are used for high availability configurations on the DBMS layer.
If you're using Azure Load Balancer together with Linux guest operating systems, check that the
Linux network parameter net.ipv4.tcp_timestamps is set to 0 . This recommendation conflicts
with recommendations in older versions of SAP note #2382421. The SAP note is now updated to
state that this parameter needs to be set to 0 to work with Azure load balancers.
Consider using Azure proximity placement groups to get optimal network latency. For more
information, see Azure proximity placement groups for optimal network latency with SAP
applications.
d. High availability and disaster recovery deployments.
If you deploy the SAP application layer without defining a specific Azure Availability Zone, make
sure that all VMs that run SAP dialog instances or middleware instances of a single SAP system
are deployed in an availability set.
If you don't need high availability for SAP Central Services and the DBMS, you can deploy these
VMs into the same availability set as the SAP application layer.
If you protect SAP Central Services and the DBMS layer for high availability by using passive
replication, place the two nodes for SAP Central Services in one separate availability set and the
two DBMS nodes in another availability set.
If you deploy into Azure Availability Zones, you can't use availability sets. But you do need to
make sure you deploy the active and passive Central Services nodes into two different
Availability Zones. Use Availability Zones that have the lowest latency between them. Keep in
mind that you need to use Azure Standard Load Balancer for the use case of establishing
Windows or Pacemaker failover clusters for the DBMS and SAP Central Services layer across
Availability Zones. You can't use Basic Load Balancer for zonal deployments.
e. Timeout settings.
Check the SAP NetWeaver developer traces of the SAP instances to make sure there are no
connection breaks between the enqueue server and the SAP work processes. You can avoid these
connection breaks by setting these two registry parameters:
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\KeepAliveTime = 120000.
For more information, see KeepAliveTime.
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\KeepAliveInterval =
120000. For more information, see KeepAliveInterval.
To avoid GUI timeouts between on-premises SAP GUI interfaces and SAP application layers
deployed in Azure, check whether these parameters are set in the default.pfl or the instance
profile:
rdisp/keepalive_timeout = 3600
rdisp/keepalive = 20
To prevent disruption of established connections between the SAP enqueue process and the SAP
work processes, you need to set the enque/encni/set_so_keepalive parameter to true . See
also SAP note #2743751.
If you use a Windows failover cluster configuration, make sure that the time to react on non-
responsive nodes is set correctly for Azure. The article Tuning Failover Cluster Network
Thresholds lists parameters and how they affect failover sensitivities. Assuming the cluster nodes
are in the same subnet, you should change these parameters:
SameSubNetDelay = 2000
SameSubNetThreshold = 15
RouteHistoryLength = 30
f. OS Settings or Patches
For running SAP HANA on Azure, read these notes and documentation:
SAP support note #2814271 - SAP HANA Backup fails on Azure with Checksum Error
SAP support note #2753418 - Potential Performance Degradation Due to Timer Fallback
SAP support note #2791572 - Performance Degradation Because of Missing VDSO
Support For Hyper-V in Azure
SAP support note #2382421 - Optimizing the Network Configuration on HANA- and OS-
Level
SAP support note #2694118 - Red Hat Enterprise Linux HA Add-On on Azure
SAP support note #1984787 - SUSE LINUX Enterprise Server 12: Installation notes
SAP support note #2002167 - Red Hat Enterprise Linux 7.x: Installation and Upgrade
SAP support note #2292690 - SAP HANA DB: Recommended OS settings for RHEL 7
SAP support note #2772999 - Red Hat Enterprise Linux 8.x: Installation and Configuration
SAP support note #2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8
SAP support note #2578899 - SUSE Linux Enterprise Server 15: Installation Note
SAP support note #2455582 - Linux: Running SAP applications compiled with GCC 6.x
SAP support note #2729475 - HWCCT Failed with Error "Hypervisor is not supported" on
Azure VMs certified for SAP HANA
4. Test your high availability and disaster recovery procedures.
a. Simulate failover situations by shutting down VMs (Windows guest operating systems) or putting
operating systems in panic mode (Linux guest operating systems). This step will help you figure out
whether your failover configurations work as designed.
b. Measure how long it takes to execute a failover. If the times are too long, consider:
For SUSE Linux, use SBD devices instead of the Azure Fence agent to speed up failover.
For SAP HANA, if the reload of data takes too long, consider provisioning more storage
bandwidth.
c. Test your backup/restore sequence and timing and make corrections if you need to. Make sure that
backup times are sufficient. You also need to test the restore and time restore activities. Make sure that
restore times are within your RTO SLAs wherever your RTO relies on a database or VM restore process.
d. Test cross-region DR functionality and architecture.
5. Security checks.
a. Test the validity of your Azure role-based access control (Azure RBAC) architecture. The goal is to
separate and limit the access and permissions of different teams. For example, SAP Basis team members
should be able to deploy VMs and assign disks from Azure Storage into a given Azure virtual network.
But the SAP Basis team shouldn't be able to create its own virtual networks or change the settings of
existing virtual networks. Members of the network team shouldn't be able to deploy VMs into virtual
networks in which SAP application and DBMS VMs are running. Nor should members of this team be
able to change attributes of VMs or even delete VMs or disks.
b. Verify that network security group and application security group rules work as expected and shield the protected resources.
c. Make sure that all resources that need to be encrypted are encrypted. Define and implement processes
to back up certificates, store and access those certificates, and restore the encrypted entities.
d. Use Azure Disk Encryption for OS disks where possible from an OS-support point of view.
e. Be sure that you're not using too many layers of encryption. In some cases, it does make sense to use
Azure Disk Encryption together with one of the DBMS Transparent Data Encryption methods to protect
different disks or components on the same server. For example, on an SAP DBMS server, the Azure Disk
Encryption (ADE) can be enabled on the operating system boot disk (if the OS supports ADE) and those
data disk(s) not used by the DBMS data persistence files. An example is to use ADE on the disk holding
the DBMS TDE encryption keys.
6. Performance testing. In SAP, based on SAP tracing and measurements, make these comparisons:
Where applicable, compare the top 10 online reports to your current implementation.
Where applicable, compare the top 10 batch jobs to your current implementation.
Compare data transfers through interfaces into the SAP system. Focus on interfaces where you know
the transfer is going between different locations, like from on-premises to Azure.

Non-production phase
In this phase, we assume that after a successful pilot or proof of concept (POC), you're starting to deploy non-
production SAP systems to Azure. Incorporate everything you learned and experienced during the POC to this
deployment. All the criteria and steps listed for POCs apply to this deployment as well.
During this phase, you usually deploy development systems, unit testing systems, and business regression testing
systems to Azure. We recommend that at least one non-production system in one SAP application line has the full
high availability configuration that the future production system will have. Here are some additional steps that you
need to complete during this phase:
1. Before you move systems from the old platform to Azure, collect resource consumption data, like CPU usage,
storage throughput, and IOPS data. Especially collect this data from the DBMS layer units, but also collect it
from the application layer units. Also measure network and storage latency.
2. Record the availability usage time patterns of your systems. The goal is to figure out whether non-production
systems need to be available all day, every day or whether there are non-production systems that can be shut
down during certain phases of a week or month.
3. Test and determine whether you want to create your own OS images for your VMs in Azure or whether you
want to use an image from the Azure Shared Image Gallery. If you're using an image from the Shared Image
Gallery, make sure to use an image that reflects the support contract with your OS vendor. For some OS
vendors, Shared Image Gallery lets you bring your own license images. For other OS images, support is
included in the price quoted by Azure. If you decide to create your own OS images, you can find documentation
in these articles:
Build a generalized image of a Windows VM deployed in Azure
Build a generalized image of a Linux VM deployed in Azure
4. If you use SUSE and Red Hat Linux images from the Shared Image Gallery, you need to use the images for SAP
provided by the Linux vendors in the Shared Image Gallery.
5. Make sure to fulfill the SAP support requirements for Microsoft support agreements. See SAP support note
#2015553. For HANA Large Instances, see Onboarding requirements.
6. Make sure the right people get planned maintenance notifications so you can choose the best downtimes.
7. Frequently check for Azure presentations on channels like Channel 9 for new functionality that might apply to
your deployments.
8. Check SAP notes related to Azure, like support note #1928533, for new VM SKUs and newly supported OS and
DBMS releases. Compare the pricing of new VM types against that of older VM types, so you can deploy VMs
with the best price/performance ratio.
9. Recheck SAP support notes, the SAP HANA hardware directory, and the SAP PAM. Make sure there were no
changes in supported VMs for Azure, supported OS releases on those VMs, and supported SAP and DBMS
releases.
10. Check the SAP website for new HANA-certified SKUs in Azure. Compare the pricing of new SKUs with the ones
you planned to use. Eventually, make necessary changes to use the ones that have the best price/performance
ratio.
11. Adapt your deployment scripts to use new VM types and incorporate new Azure features that you want to use.
12. After deployment of the infrastructure, test and evaluate the network latency between SAP application layer
VMs and DBMS VMs, according to SAP support notes #500235 and #1100926. Evaluate the results against the
network latency guidance in SAP support note #1100926. The network latency should be in the moderate or
good range. Exceptions apply to traffic between VMs and HANA Large Instance units, as documented in this
article. Make sure that none of the restrictions mentioned in Considerations for Azure Virtual Machines DBMS
deployment for SAP workloads and SAP HANA infrastructure configurations and operations on Azure apply to
your deployment.
13. Make sure your VMs are deployed to the correct Azure proximity placement group, as described in Azure
proximity placement groups for optimal network latency with SAP applications.
14. Perform all the other checks listed for the proof of concept phase before applying the workload.
15. As the workload is applied, record the resource consumption of the systems in Azure. Compare this consumption
with records from your old platform. Adjust VM sizing of future deployments if you see that you have large
differences. Keep in mind that when you downsize, the storage and network bandwidths of the VMs will be reduced as
well.
Sizes for Windows virtual machines in Azure
Sizes for Linux virtual machines in Azure
16. Experiment with system copy functionality and processes. The goal is to make it easy for you to copy a
development system or a test system, so project teams can get new systems quickly.
17. Optimize and hone your team's Azure role-based access, permissions, and processes to make sure you have
separation of duties. At the same time, make sure all teams can perform their tasks in the Azure infrastructure.
18. Exercise, test, and document high-availability and disaster recovery procedures to enable your staff to execute
these tasks. Identify shortcomings and adapt new Azure functionality that you're integrating into your
deployments.
Production preparation phase
In this phase, collect what you experienced and learned during your non-production deployments and apply it to
future production deployments. You also need to prepare the work of the data transfer between your current
hosting location and Azure.
1. Complete necessary SAP release upgrades of your production systems before moving to Azure.
2. Agree with the business owners on functional and business tests that need to be conducted after migration of
the production system.
3. Make sure these tests are completed with the source systems in the current hosting location. Avoid conducting
tests for the first time after the system is moved to Azure.
4. Test the process of migrating production systems to Azure. If you're not moving all production systems to
Azure during the same time frame, build groups of production systems that need to be at the same hosting
location. Test data migration. Here are some common methods:
Use DBMS methods like backup/restore in combination with SQL Server Always On, HANA System
Replication, or Log shipping to seed and synchronize database content in Azure.
Use backup/restore for smaller databases.
Use SAP Migration Monitor, which is integrated into SAP SWPM, to perform heterogeneous migrations.
Use the SAP DMO process if you need to combine your migration with an SAP release upgrade. Keep in
mind that not all combinations of source DBMS and target DBMS are supported. You can find more
information in the specific SAP support notes for the different releases of DMO. For example, Database
Migration Option (DMO) of SUM 2.0 SP04.
Test whether data transfer throughput is better through the internet or through ExpressRoute, in case
you need to move backups or SAP export files. If you're moving data through the internet, you might
need to change some of your network security group/application security group rules that you'll need to
have in place for future production systems.
5. Before moving systems from your old platform to Azure, collect resource consumption data. Useful data
includes CPU usage, storage throughput, and IOPS data. Especially collect this data from the DBMS layer units,
but also collect it from the application layer units. Also measure network and storage latency.
6. Recheck SAP support notes and the required OS settings, the SAP HANA hardware directory, and the SAP PAM.
Make sure there were no changes in supported VMs for Azure, supported OS releases in those VMs, and
supported SAP and DBMS releases.
7. Update deployment scripts to take into account the latest decisions you've made on VM types and Azure
functionality.
8. After deploying infrastructure and applications, validate that:
The correct VM types were deployed, with the correct attributes and storage sizes.
The VMs are on the correct and desired OS releases and patches and are uniform.
VMs are hardened as required and in a uniform way.
The correct application releases and patches were installed and deployed.
The VMs were deployed into Azure availability sets as planned.
Azure Premium Storage is used for latency-sensitive disks or where the single-VM SLA of 99.9% is
required.
Azure Write Accelerator is deployed correctly.
Make sure that, within the VMs, storage spaces, or stripe sets were built correctly across disks
that need Write Accelerator.
Check the configuration of software RAID on Linux.
Check the configuration of LVM on Linux VMs in Azure.
Azure managed disks are used exclusively.
VMs were deployed into the correct availability sets and Availability Zones.
Azure Accelerated Networking is enabled on the VMs used in the SAP application layer and the SAP
DBMS layer.
No Azure network virtual appliances are in the communication path between the SAP application and
the DBMS layer of SAP systems based on SAP NetWeaver, Hybris, or S/4HANA.
Application security group and network security group rules allow communication as desired and
planned and block communication where required.
Timeout settings are set correctly, as described earlier.
VMs are deployed to the correct Azure proximity placement group, as described in Azure proximity
placement groups for optimal network latency with SAP applications.
Network latency between SAP application layer VMs and DBMS VMs is tested and validated as
described in SAP support notes #500235 and #1100926. Evaluate the results against the network
latency guidance in SAP support note #1100926. The network latency should be in the moderate or
good range. Exceptions apply to traffic between VMs and HANA Large Instance units, as documented in
this article.
Encryption was implemented where necessary and with the appropriate encryption method.
Interfaces and other applications can connect to the newly deployed infrastructure.
9. Create a playbook for reacting to planned Azure maintenance. Determine the order in which systems and VMs
should be rebooted for planned maintenance.

Go-live phase
During the go-live phase, be sure to follow the playbooks you developed during earlier phases. Execute the steps
that you tested and practiced. Don't accept last-minute changes in configurations and processes. Also complete
these steps:
1. Verify that Azure portal monitoring and other monitoring tools are working. We recommend Windows
Performance Monitor (perfmon) for Windows and SAR for Linux.
CPU counters.
Average CPU time, total (all CPUs)
Average CPU time, each individual processor (128 processors on M128 VMs)
CPU kernel time, each individual processor
CPU user time, each individual processor
Memory.
Free memory
Memory page in/second
Memory page out/second
Disk.
Disk read in KBps, per individual disk
Disk reads/second, per individual disk
Disk read in microseconds/read, per individual disk
Disk write in KBps, per individual disk
Disk write/second, per individual disk
Disk write in microseconds/write, per individual disk
Network.
Network packets in/second
Network packets out/second
Network KB in/second
Network KB out/second
2. After data migration, perform all the validation tests you agreed upon with the business owners. Accept
validation test results only when you have results for the original source systems.
3. Check whether interfaces are functioning and whether other applications can communicate with the newly
deployed production systems.
4. Check the transport and correction system through SAP transaction STMS.
5. Perform database backups after the system is released for production.
6. Perform VM backups for the SAP application layer VMs after the system is released for production.
7. For SAP systems that weren't part of the current go-live phase but that communicate with the SAP systems
that you moved to Azure during this go-live phase, you need to reset the host name buffer in SM51. Doing so
will remove the old cached IP addresses associated with the names of the application instances you moved to
Azure.
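The Windows counters listed in step 1 can also be collected from a script, for example with the built-in Get-Counter cmdlet. This is a minimal sketch; the counter paths are standard Windows performance counters, and the sample interval and count are arbitrary example values.

# Minimal sketch: sample a few of the counters listed in step 1 on a Windows VM.
# Interval and sample count are example values only.
$counters = @(
    "\Processor(_Total)\% Processor Time",
    "\Memory\Available MBytes",
    "\Memory\Pages Input/sec",
    "\Memory\Pages Output/sec",
    "\PhysicalDisk(*)\Disk Reads/sec",
    "\PhysicalDisk(*)\Disk Writes/sec",
    "\Network Interface(*)\Bytes Received/sec",
    "\Network Interface(*)\Bytes Sent/sec"
)
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 4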

Post production
This phase is about monitoring, operating, and administering the system. From an SAP point of view, the usual
tasks that you were required to complete in your old hosting location apply. Complete these Azure-specific tasks
as well:
1. Review Azure invoices for high-charging systems.
2. Optimize price/performance efficiency on the VM side and the storage side.
3. Optimize the times when you can shut down systems.
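For the last item, shutdowns of non-production systems can be scripted and scheduled, for example through an Azure Automation runbook that deallocates the VMs. The following is a minimal sketch; the resource group and VM names are placeholders.

# Minimal sketch: deallocate non-production VMs outside business hours to
# stop compute charges (resource group and VM names are placeholders).
$resourceGroup = "SAP-SBX-RG"
$vmNames = @("sap-sbx-app1", "sap-sbx-db1")
foreach ($vmName in $vmNames) {
    # Stop-AzVM deallocates the VM by default; -Force suppresses the prompt.
    Stop-AzVM -ResourceGroupName $resourceGroup -Name $vmName -Force
}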

Next steps
See these articles:
Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP NetWeaver
Considerations for Azure Virtual Machines DBMS deployment for SAP workloads
Azure Virtual Machines planning and
implementation for SAP NetWeaver

Microsoft Azure enables companies to acquire compute and storage resources in minimal time without
lengthy procurement cycles. The Azure Virtual Machine service allows companies to deploy classical
applications, like SAP NetWeaver based applications, into Azure and to extend their reliability and
availability without having further resources available on-premises. Azure Virtual Machine services
also support cross-premises connectivity, which enables companies to actively integrate Azure Virtual
Machines into their on-premises domains, their private clouds, and their SAP system landscape. This
white paper describes the fundamentals of Microsoft Azure Virtual Machines and provides a
walk-through of planning and implementation considerations for SAP NetWeaver installations in Azure.
As such, it is the document to read before starting actual deployments of SAP NetWeaver on Azure.
The paper complements the SAP installation documentation and SAP Notes, which represent the primary
resources for installations and deployments of SAP software on given platforms.

NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM
module, which will continue to receive bug fixes until at least December 2020. To learn more about the new Az
module and AzureRM compatibility, see Introducing the new Azure PowerShell Az module. For Az module
installation instructions, see Install Azure PowerShell.
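As a quick reference, installing the Az module and signing in to a subscription can look like the following sketch; the subscription name is a placeholder.

# Minimal sketch: install the Az module for the current user and sign in.
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
Connect-AzAccount
# Select the subscription to work with (subscription name is a placeholder).
Set-AzContext -Subscription "My SAP Subscription"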

Summary
Cloud computing is a widely used term that is gaining more and more importance within the IT
industry, from small companies up to large and multinational corporations.
Microsoft Azure is the cloud services platform from Microsoft and offers a wide spectrum of new
possibilities. Customers are now able to rapidly provision and de-provision applications as a service
in the cloud, and are therefore not limited by technical or budget restrictions. Instead of investing
time and budget into hardware infrastructure, companies can focus on the application, the business
processes, and their benefits for customers and users.
With Microsoft Azure Virtual Machine Services, Microsoft offers a comprehensive Infrastructure as a
Service (IaaS) platform. SAP NetWeaver based applications are supported on Azure Virtual Machines
(IaaS). This whitepaper describes how to plan and implement SAP NetWeaver based applications
within Microsoft Azure as the platform of choice.
The paper itself focuses on two main aspects:
The first part describes two supported deployment patterns for SAP NetWeaver based applications
on Azure. It also describes general handling of Azure with SAP deployments in mind.
The second part details implementing the different scenarios described in the first part.
For additional resources, see chapter Resources in this document.
Definitions upfront
Throughout the document, we use the following terms:
IaaS: Infrastructure as a Service
PaaS: Platform as a Service
SaaS: Software as a Service
SAP Component: an individual SAP application such as ECC, BW, Solution Manager, or S/4HANA.
SAP components can be based on traditional ABAP or Java technologies or a non-NetWeaver based
application such as Business Objects.
SAP Environment: one or more SAP components logically grouped to perform a business function
such as Development, QAS, Training, DR, or Production.
SAP Landscape: This term refers to the entire SAP assets in a customer's IT landscape. The SAP
landscape includes all production and non-production environments.
SAP System: The combination of DBMS layer and application layer of, for example, an SAP ERP
development system, SAP BW test system, SAP CRM production system, etc. In Azure deployments,
it is not supported to divide these two layers between on-premises and Azure. This means an SAP
system is either deployed on-premises or it is deployed in Azure. However, you can deploy the
different systems of an SAP landscape into either Azure or on-premises. For example, you could
deploy the SAP CRM development and test systems in Azure but the SAP CRM production system
on-premises.
Cross-premises or hybrid: Describes a scenario where VMs are deployed to an Azure subscription
that has site-to-site, multi-site, or ExpressRoute connectivity between the on-premises datacenter(s)
and Azure. In common Azure documentation, these kinds of deployments are also described as
cross-premises or hybrid scenarios. The reason for the connection is to extend on-premises
domains, on-premises Active Directory/OpenLDAP, and on-premises DNS into Azure. The on-
premises landscape is extended to the Azure assets of the subscription. Having this extension, the
VMs can be part of the on-premises domain. Domain users of the on-premises domain can access
the servers and can run services on those VMs (like DBMS services). Communication and name
resolution between VMs deployed on-premises and Azure deployed VMs is possible. This is the
most common and nearly exclusive case for deploying SAP assets into Azure. For more information,
see this article and this one.
Azure Monitoring Extension, Enhanced Monitoring, and Azure Extension for SAP: Describe one and
the same item. It describes a VM extension that needs to be deployed by you to provide some basic
data about the Azure infrastructure to the SAP Host Agent. SAP in SAP notes might refer to it as
Monitoring Extension or Enhanced monitoring. In Azure, we are referring to it as Azure Extension
for SAP.

NOTE
Cross-premises or hybrid deployments of SAP systems where Azure Virtual Machines running SAP systems are
members of an on-premises domain are supported for production SAP systems. Cross-premises or hybrid
configurations are supported for deploying parts or complete SAP landscapes into Azure. Even running the
complete SAP landscape in Azure requires that those VMs are part of the on-premises domain and
ADS/OpenLDAP.

Resources
The entry point for SAP workload on Azure documentation is found here. Starting with this entry point
you find many articles that cover the topics of:
SAP NetWeaver and Business One on Azure
SAP DBMS guides for various DBMS systems in Azure
High availability and disaster recovery for SAP workload on Azure
Specific guidance for running SAP HANA on Azure
Guidance specific to Azure HANA Large Instances for the SAP HANA DBMS
IMPORTANT
Wherever possible a link to the referring SAP Installation Guides or other SAP documentation is used (Reference
InstGuide-01, see http://service.sap.com/instguides). When it comes to the prerequisites, installation process, or
details of specific SAP functionality, the SAP documentation and guides should always be read carefully, as the
Microsoft documents only cover specific tasks for SAP software installed and operated in a Microsoft Azure
Virtual Machine.

The following SAP Notes are related to the topic of SAP on Azure:

NOTE NUMBER    TITLE

1928533        SAP Applications on Azure: Supported Products and Sizing
2015553        SAP on Microsoft Azure: Support Prerequisites
1999351        Troubleshooting Enhanced Azure Monitoring for SAP
2178632        Key Monitoring Metrics for SAP on Microsoft Azure
1409604        Virtualization on Windows: Enhanced Monitoring
2191498        SAP on Linux with Azure: Enhanced Monitoring
2243692        Linux on Microsoft Azure (IaaS) VM: SAP license issues
1984787        SUSE LINUX Enterprise Server 12: Installation notes
2002167        Red Hat Enterprise Linux 7.x: Installation and Upgrade
2069760        Oracle Linux 7.x SAP Installation and Upgrade
1597355        Swap-space recommendation for Linux

Also read the SCN Wiki that contains all SAP Notes for Linux.
General default limitations and maximum limitations of Azure subscriptions can be found in this
article.

Possible Scenarios
SAP is often seen as one of the most mission-critical applications within enterprises. The architecture
and operations of these applications are mostly complex, and ensuring that you meet requirements on
availability and performance is important.
Enterprises therefore have to think carefully about which cloud provider they choose for running such
business-critical processes. Azure is the ideal public cloud platform for business-critical SAP
applications and business processes. Given the wide variety of Azure infrastructure, nearly all
existing SAP NetWeaver and S/4HANA systems can be hosted in Azure today. Azure provides VMs
with many terabytes of memory and more than 200 CPUs. Beyond that, Azure offers HANA Large
Instances, which allow scale-up HANA deployments of up to 24 TB and SAP HANA scale-out
deployments of up to 120 TB. One can state today that nearly all on-premises SAP scenarios can be run
in Azure as well.
For a rough description of the supported scenarios and some non-supported scenarios, see the document SAP
workload on Azure virtual machine supported scenarios.
Check these scenarios, and the conditions that are named as not supported in the referenced
documentation, throughout the planning and development of the architecture that you want to
deploy into Azure.
Overall, the most common deployment pattern is a cross-premises scenario, as shown in the following figure.

The reason many customers apply a cross-premises deployment pattern is the fact that it is most
transparent for all applications to extend on-premises into Azure using Azure ExpressRoute and to treat
Azure as a virtual datacenter. As more and more assets are moved into Azure, the Azure-deployed
infrastructure and network infrastructure will grow, and the on-premises assets will be reduced
accordingly. Everything stays transparent to users and applications.
In order to successfully deploy SAP systems into either Azure IaaS or IaaS in general, it is important to
understand the significant differences between the offerings of traditional outsourcers or hosters and
IaaS offerings. Whereas the traditional hoster or outsourcer adapts infrastructure (network, storage
and server type) to the workload a customer wants to host, it is instead the customer's or partner's
responsibility to characterize the workload and choose the correct Azure components of VMs, storage,
and network for IaaS deployments.
In order to gather data for the planning of your deployment into Azure, it is important to:
Evaluate what SAP products are supported running in Azure VMs
Evaluate what specific Operating System releases are supported with specific Azure VMs for those
SAP products
Evaluate what DBMS releases are supported for your SAP products with specific Azure VMs
Evaluate whether some of the required OS/DBMS releases require you to perform SAP release,
Support Package upgrade, and kernel upgrades to get to a supported configuration
Evaluate whether you need to move to different operating systems in order to deploy on Azure.
Details on supported SAP components on Azure, supported Azure infrastructure units and related
operating system releases and DBMS releases are explained in the article What SAP software is
supported for Azure deployments. The results of evaluating valid SAP releases, operating system
releases, and DBMS releases have a large impact on the effort of moving SAP systems to Azure. They
define whether there could be significant preparation effort in cases where SAP release upgrades or
changes of operating systems are needed.

Azure Regions
Microsoft's Azure services are collected in Azure regions. An Azure region is one datacenter or a collection of
datacenters that contain the hardware and infrastructure that run and host the different Azure
services. This infrastructure includes a large number of nodes that function as compute nodes or
storage nodes, or run network functionality.
For a list of the different Azure regions, check the article Azure geographies. Not all the Azure regions
offer the same services. Dependent on the SAP product you want to run, and the operating system and
DBMS related to it, you can end up in a situation that a certain region does not offer the VM types you
require. This is especially true for running SAP HANA, where you usually need VMs of the M/Mv2 VM-
series. These VM families are deployed only in a subset of the regions. You can find out what exact VM,
types, Azure storage types or, other Azure Services are available in which of the regions with the help
of the site Products available by region. As you start your planning and have certain regions in mind as
primary region and eventually secondary region, you need to investigate first whether the necessary
services are available in those regions.
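One way to check programmatically which VM types a candidate region offers is to query the compute resource SKUs with the Az PowerShell module. This is a minimal sketch; the region name and the M-series name filter are example values.

# Minimal sketch: list M-series VM sizes offered in a candidate region
# (region name and name filter are example values).
$region = "westeurope"
Get-AzComputeResourceSku |
    Where-Object { $_.ResourceType -eq "virtualMachines" -and
                   $_.Locations -contains $region -and
                   $_.Name -like "Standard_M*" } |
    Sort-Object Name |
    Select-Object Name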
Availability Zones
Several of the Azure regions implemented a concept called Availability Zones. Availability Zones are
physically separate locations within an Azure region. Each Availability Zone is made up of one or more
datacenters equipped with independent power, cooling, and networking. For example, deploying two
VMs across two Availability Zones of Azure, and implementing a high-availability framework for your
SAP DBMS system or the SAP Central Services gives you the best SLA in Azure. For this particular
virtual machine SLA in Azure, check the latest version of Virtual Machine SLAs. Since Azure regions
developed and extended rapidly over the last years, the topology of the Azure regions, the number of
physical datacenters, the distance among those datacenters, and the distance between Azure
Availability Zones can be different. And with that, the network latency between Availability Zones can differ as well.
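A zonal deployment is requested at VM creation time. The following Az PowerShell sketch uses the simplified New-AzVM syntax, which applies default image and network settings; all names, the VM size, and the region are placeholders, and a production deployment would use a full VM configuration instead.

# Minimal sketch: deploy two VMs into two different Availability Zones
# (resource group, VM names, size, and region are placeholders; the
# simplified New-AzVM syntax applies default image and network settings).
$resourceGroup = "SAP-HA-RG"
$region = "westeurope"
$cred = Get-Credential
New-AzVM -ResourceGroupName $resourceGroup -Location $region `
    -Name "sap-db-node1" -Size "Standard_E32s_v3" -Zone "1" -Credential $cred
New-AzVM -ResourceGroupName $resourceGroup -Location $region `
    -Name "sap-db-node2" -Size "Standard_E32s_v3" -Zone "2" -Credential $cred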
The principle of Availability Zones does not apply to the HANA specific service of HANA Large
Instances. Service Level agreements for HANA Large Instances can be found in the article SLA for SAP
HANA on Azure Large Instances
Fault Domains
Fault Domains represent a physical unit of failure, closely related to the physical infrastructure
contained in data centers, and while a physical blade or rack can be considered a Fault Domain, there is
no direct one-to-one mapping between the two.
When you deploy multiple Virtual Machines as part of one SAP system in Microsoft Azure Virtual
Machine Services, you can influence the Azure Fabric Controller to deploy your application into
different Fault Domains, thereby meeting higher requirements of availability SLAs. However, the
distribution of Fault Domains over an Azure Scale Unit (collection of hundreds of Compute nodes or
Storage nodes and networking) or the assignment of VMs to a specific Fault Domain is something over
which you do not have direct control. In order to direct the Azure fabric controller to deploy a set of
VMs over different Fault Domains, you need to assign an Azure availability set to the VMs at
deployment time. For more information on Azure availability sets, see chapter Azure availability sets in
this document.
Upgrade Domains
Upgrade Domains represent a logical unit that helps to determine how a VM within an SAP system,
that consists of SAP instances running in multiple VMs, is updated. When an upgrade occurs, Microsoft
Azure goes through the process of updating these Upgrade Domains one by one. By spreading VMs at
deployment time over different Upgrade Domains, you can protect your SAP system partly from
potential downtime. In order to force Azure to deploy the VMs of an SAP system spread over different
Upgrade Domains, you need to set a specific attribute at deployment time of each VM. Similar to Fault
Domains, an Azure Scale Unit is divided into multiple Upgrade Domains. In order to direct the Azure
fabric controller to deploy a set of VMs over different Upgrade Domains, you need to assign an Azure
Availability Set to the VMs at deployment time. For more information on Azure availability sets, see
chapter Azure availability sets below.
Azure availability sets
Azure Virtual Machines within one Azure availability set are distributed by the Azure Fabric Controller
over different Fault and Upgrade Domains. The purpose of the distribution over different Fault and
Upgrade Domains is to prevent all VMs of an SAP system from being shut down in the case of
infrastructure maintenance or a failure within one Fault Domain. By default, VMs are not part of an
availability set. The participation of a VM in an availability set is defined at deployment time or later on
by a reconfiguration and redeployment of a VM.
To understand the concept of Azure availability sets and the way availability sets relate to Fault and
Upgrade Domains, read this article.
As you define availability sets and try to mix various VMs of different VM families within one
availability set, you may encounter problems that prevent you from including a certain VM type in such
an availability set. The reason is that the availability set is bound to a scale unit that contains a
certain type of compute host. And a certain type of compute host can only run certain VM families.
For example, if you create an availability set, deploy the first VM of the Esv3 family into it, and
then try to deploy a VM of the M family as the second VM, the second allocation will be rejected. The
reason is that Esv3 family VMs do not run on the same host hardware as the virtual machines of the M
family. The same problem can occur when you try to resize VMs and move a VM from the Esv3 family to a
VM type of the M family. When resizing to a VM family that can't be hosted on the same host hardware,
you need to shut down all VMs in your availability set and resize them to be able to run on the other
host machine type. For SLAs of VMs that are deployed within an availability set, check the article
Virtual Machine SLAs.
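Assigning VMs to an availability set happens at deployment time, for example as in the following Az PowerShell sketch. All names, the VM size, and the fault/update domain counts are placeholders; the 'Aligned' SKU is used because the VMs use managed disks.

# Minimal sketch: create an availability set for managed disks and deploy
# two SAP application server VMs into it (all names and sizes are placeholders).
$resourceGroup = "SAP-PROD-RG"
$region = "westeurope"
$cred = Get-Credential
$avSet = New-AzAvailabilitySet -ResourceGroupName $resourceGroup -Location $region `
    -Name "sap-app-avset" -Sku "Aligned" `
    -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5
New-AzVM -ResourceGroupName $resourceGroup -Location $region `
    -Name "sap-app1" -Size "Standard_E16s_v3" `
    -AvailabilitySetName $avSet.Name -Credential $cred
New-AzVM -ResourceGroupName $resourceGroup -Location $region `
    -Name "sap-app2" -Size "Standard_E16s_v3" `
    -AvailabilitySetName $avSet.Name -Credential $cred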
The principle of availability set and related update and fault domain does not apply to the HANA
specific service of HANA Large Instances. Service Level agreements for HANA Large Instances can be
found in the article SLA for SAP HANA on Azure Large Instances.

IMPORTANT
The concepts of Azure Availability Zones and Azure availability sets are mutually exclusive. That means, you can
either deploy a pair or multiple VMs into a specific Availability Zone or an Azure availability set. But not both.

Azure paired regions


Azure is offering Azure Region pairs where replication of certain data is enabled between these fixed
region pairs. The region pairing is documented in the article Business continuity and disaster recovery
(BCDR): Azure Paired Regions. As the article describes, the replication of data is tied to Azure storage
types that can be configured by you to replicate into the paired region. See also the article Storage
redundancy in a secondary region. The storage types that allow such a replication are not suitable for
DBMS workload. As such, the usability of Azure storage replication is limited to Azure blob storage
(for example, for backup purposes) or other high-latency storage scenarios. As you check for paired
regions and the services you want to use as your primary or secondary region, you may encounter
situations where Azure services and/or VM types you intend to use in your primary region are not
available in the paired region. Or you might encounter a situation where the Azure paired region is not
acceptable for data compliance reasons. For those cases, you need to use a non-paired region as the
secondary/disaster recovery region. In such a case, you need to take care of replicating, yourself,
some of the data that Azure would otherwise have replicated. An example of how to replicate your Active
Directory and DNS to your disaster recovery region is described in the article Set up disaster recovery
for Active Directory and DNS.

Azure virtual machine services


Azure offers a large variety of virtual machines that you can select to deploy. There is no need for up-
front technology and infrastructure purchases. The Azure VM service offering simplifies maintaining
and operating applications by providing on-demand compute and storage to host, scale, and manage
web application and connected applications. Infrastructure management is automated with a platform
that is designed for high availability and dynamic scaling to match usage needs with the option of
several different pricing models.

With Azure virtual machines, Microsoft is enabling you to deploy custom server images to Azure as
IaaS instances. Or you are able to choose from a rich selection of consumable operating system images
out of the Azure image gallery.
From an operational perspective, the Azure Virtual Machine service offers experiences similar to
virtual machines deployed on-premises. You are responsible for the administration, operations, and
patching of the particular operating system running in an Azure VM, and of the applications in
that VM. Microsoft does not provide any services beyond hosting that VM on its Azure
infrastructure (Infrastructure as a Service - IaaS). For SAP workload that you as a customer deploy,
Microsoft has no offers beyond the IaaS offerings.
The Microsoft Azure platform is a multi-tenant platform. As a result storage, network, and compute
resources that host Azure VMs are, with a few exceptions, shared between tenants. Intelligent throttling
and quota logic is used to prevent one tenant from impacting the performance of another tenant
(noisy neighbor) in a drastic way. Especially for certifying the Azure platform for SAP HANA, Microsoft
needs to regularly prove the resource isolation to SAP for cases where multiple VMs run on the same
host. Though logic in Azure tries to keep variances in experienced bandwidth small,
highly shared platforms tend to introduce larger variances in resource/bandwidth availability than
customers might experience in their on-premises deployments. The probability that an SAP system on
Azure could experience larger variances than in an on-premises system needs to be taken into account.
Azure virtual machines for SAP workload
For SAP workload, we narrowed down the selection to different VM families that are suitable for SAP
workload and, more specifically, SAP HANA workload. How you find the correct VM type and its
capability to handle SAP workload is described in the document What SAP software is
supported for Azure deployments.

NOTE
For the VM types that are certified for SAP workload, there is no over-provisioning of CPU and memory resources.

Beyond the selection of purely supported VM types, you also need to check whether those VM types
are available in a specific region based on the site Products available by region. But more important,
you need to evaluate whether:
CPU and memory resources of different VM types
IOPS bandwidth of different VM types
Network capabilities of different VM types
Number of disks that can be attached
Ability to leverage certain Azure storage types
fit your need. Most of that data can be found here (Linux) and here (Windows) for a particular VM type.
For the pricing model, you have several different options, such as:
Pay as you go
One year reserved
Three years reserved
Spot pricing
The pricing of each of the different offers with different service offers around operating systems and
different regions is available on the site Linux Virtual Machines Pricing and Windows Virtual Machines
Pricing. For details and flexibility of one year and three year reserved instances, check these articles:
What are Azure Reservations?
Virtual machine size flexibility with Reserved VM Instances
How the Azure reservation discount is applied to virtual machines
For more information on spot pricing, read the article Azure Spot Virtual Machines. Pricing of the same
VM type can also be different between different Azure regions. For some customers, it was worthwhile to
deploy into a less expensive Azure region.
Additionally, Azure offers the concepts of a dedicated host. The dedicated host concept gives you more
control on patching cycles that are done by Azure. You can time the patching according to your own
schedules. This offer specifically targets customers with workloads that can't follow the normal
maintenance cycle. To read up on the concepts of Azure dedicated host offers, read the article Azure
Dedicated Host. Using this offer is supported for SAP workload and is used by several SAP customers
who want more control over the patching of infrastructure and over Microsoft's maintenance plans.
For more information on how Microsoft maintains and patches the Azure infrastructure that
hosts virtual machines, read the article Maintenance for virtual machines in Azure.
Generation 1 and Generation 2 virtual machines
Microsoft's hypervisor is able to handle two different generations of virtual machines. Those formats
are called Generation 1 and Generation 2. Generation 2 was introduced in 2012 with the
Windows Server 2012 hypervisor. Azure started out using Generation 1 virtual machines. As you
deploy Azure virtual machines, the default is still the Generation 1 format. Meanwhile, you can
deploy Generation 2 VM formats as well. The article Support for generation 2 VMs on Azure lists the
Azure VM families that can be deployed as Generation 2 VMs. This article also lists the important
functional differences of Generation 2 virtual machines as they can run on Hyper-V private clouds and
in Azure. More importantly, this article also lists the functional differences between Generation 1
virtual machines and Generation 2 VMs as those run in Azure.

NOTE
There are functional differences of Generation 1 and Generation 2 VMs running in Azure. Read the article
Support for generation 2 VMs on Azure to see a list of those differences.

Moving an existing VM from one generation to the other generation is not possible. To change the
virtual machine generation, you need to deploy a new VM of the generation you desire and re-install
the software that you were running in the virtual machine of the other generation. This change only
affects the base VHD image of the VM and has no impact on the data disks or attached NFS or SMB
shares. Data disks, NFS, or SMB shares that originally were assigned to, for example, a Generation 1
VM can be attached to the new Generation 2 VM.

NOTE
Deploying Mv1 VM family VMs as Generation 2 VMs has been possible since the beginning of May 2020. With that, seamless
upsizing and downsizing between Mv1 and Mv2 family VMs is possible.

Storage: Microsoft Azure Storage and Data Disks


Microsoft Azure Virtual Machines utilize different storage types. When implementing SAP on Azure
Virtual Machine Services, it is important to understand the differences between these two main types
of storage:
Non-Persistent, volatile storage.
Persistent storage.
Azure VMs offer non-persistent disks after a VM is deployed. In case of a VM reboot, all content on
those drives will be wiped out. Hence, it is a given that data files and log/redo files of databases should
under no circumstances be located on those non-persisted drives. There might be exceptions for some
of the databases, where these non-persisted drives could be suitable for tempdb and temp tablespaces.
However, avoid using those drives for A-Series VMs since those non-persisted drives are limited in
throughput with that VM family. For further details, read the article Understanding the temporary drive
on Windows VMs in Azure

Windows
Drive D:\ in an Azure VM is a non-persisted drive, which is backed by some local disks on the Azure
compute node. Because it is non-persisted, this means that any changes made to the content on
the D:\ drive are lost when the VM is rebooted. "Any changes" means files stored, directories created,
applications installed, and so on.

Linux
Linux Azure VMs automatically mount a drive at /mnt/resource that is a non-persisted drive
backed by local disks on the Azure compute node. Because it is non-persisted, this means that any
changes made to content in /mnt/resource are lost when the VM is rebooted. "Any changes" means
files stored, directories created, applications installed, and so on.

Azure Storage accounts


When deploying services or VMs in Azure, deployment of VHDs and VM Images are organized in units
called Azure Storage Accounts. Azure storage accounts have limitations either in IOPS, in throughput, or
in the sizes they can contain. In the past, these limitations, which are documented in:
Scalability targets for standard storage accounts
Scalability targets for premium page blob storage accounts
played an important role in planning an SAP deployment in Azure. It was on you to manage the
number of persisted disks within a storage account. You needed to manage the storage accounts and
eventually create new storage accounts to create more persisted disks.
In recent years, the introduction of Azure managed disks relieved you from those tasks. The
recommendation for SAP deployments is to leverage Azure managed disks instead of managing Azure
storage accounts yourself. Azure managed disks will distribute disks across different storage accounts,
so that the limits of the individual storage accounts are not exceeded.
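Creating and attaching an Azure managed data disk takes only a few cmdlet calls with the Az module, as in this sketch; the resource group, VM name, disk name, size, and SKU are placeholders.

# Minimal sketch: create a Premium SSD managed disk and attach it to an
# existing VM (all names, the size, and the SKU are placeholders).
$resourceGroup = "SAP-PROD-RG"
$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name "sap-db1"
$diskConfig = New-AzDiskConfig -Location $vm.Location -CreateOption Empty `
    -DiskSizeGB 512 -SkuName "Premium_LRS"
$disk = New-AzDisk -ResourceGroupName $resourceGroup -DiskName "sap-db1-data01" -Disk $diskConfig
$vm = Add-AzVMDataDisk -VM $vm -Name $disk.Name -ManagedDiskId $disk.Id `
    -Lun 0 -CreateOption Attach -Caching None
Update-AzVM -ResourceGroupName $resourceGroup -VM $vm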
Within a storage account, you have a type of a folder concept called 'containers' that can be used to
group certain disks into specific containers.
Within Azure, a disk/VHD name follows the naming convention below, which needs to provide a
unique name for the VHD within Azure:
http(s)://<storage account name>.blob.core.windows.net/<container name>/<vhd name>

The string above needs to uniquely identify the disk/VHD that is stored on Azure Storage.
Azure persisted storage types
Azure offers a variety of persisted storage option that can be used for SAP workload and specific SAP
stack components. For more details, read the document Azure storage for SAP workloads.
Microsoft Azure Networking
Microsoft Azure provides a network infrastructure that allows the mapping of all scenarios that
we want to realize with SAP software. The capabilities are:
Access from the outside, directly to the VMs via Windows Terminal Services or ssh/VNC
Access to services and specific ports used by applications within the VMs
Internal Communication and Name Resolution between a group of VMs deployed as Azure VMs
Cross-premises Connectivity between a customer's on-premises network and the Azure network
Cross Azure Region or data center connectivity between Azure sites
More information can be found here: https://azure.microsoft.com/documentation/services/virtual-
network/
There are many different possibilities to configure name and IP resolution in Azure. There is also an
Azure DNS service, which can be used instead of setting up your own DNS server. More information
can be found in this article and on this page.
For cross-premises or hybrid scenarios, we are relying on the fact that the on-premises
AD/OpenLDAP/DNS has been extended via VPN or private connection to Azure. For certain scenarios
as documented here, it might be necessary to have an AD/OpenLDAP replica installed in Azure.
Because networking and name resolution is a vital part of the database deployment for an SAP system,
this concept is discussed in more detail in the DBMS Deployment Guide.
Azure Virtual Networks

By building up an Azure Virtual Network, you can define the address range of the private IP addresses
allocated by Azure DHCP functionality. In cross-premises scenarios, the IP address range defined is still
allocated using DHCP by Azure. However, Domain Name resolution is done on-premises (assuming
that the VMs are a part of an on-premises domain) and hence can resolve addresses beyond different
Azure Cloud Services.
Every Virtual Machine in Azure needs to be connected to a Virtual Network.
More details can be found in this article and on this page.
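As an illustration, creating a virtual network with one address range and one subnet can be done with the Az PowerShell module as in this sketch; the names and address ranges are placeholders.

# Minimal sketch: create a virtual network with one subnet for SAP VMs
# (names and address ranges are placeholders).
$resourceGroup = "SAP-PROD-RG"
$region = "westeurope"
$subnet = New-AzVirtualNetworkSubnetConfig -Name "sap-subnet" -AddressPrefix "10.10.1.0/24"
New-AzVirtualNetwork -ResourceGroupName $resourceGroup -Location $region `
    -Name "sap-vnet" -AddressPrefix "10.10.0.0/16" -Subnet $subnet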

NOTE
By default, once a VM is deployed you cannot change the Virtual Network configuration. The TCP/IP settings
must be left to the Azure DHCP server. Default behavior is Dynamic IP assignment.

The MAC address of the virtual network card may change, for example after a resize. In this case, the
Windows or Linux guest OS picks up the new network card and automatically uses DHCP to assign the IP
and DNS addresses.
Static IP Assignment

It is possible to assign fixed or reserved IP addresses to VMs within an Azure Virtual Network. Running
the VMs in an Azure Virtual Network opens a great possibility to leverage this functionality if needed
or required for some scenarios. The IP assignment remains valid throughout the existence of the VM,
independent of whether the VM is running or shutdown. As a result, you need to take the overall
number of VMs (running and stopped VMs) into account when defining the range of IP addresses for
the Virtual Network. The IP address remains assigned either until the VM and its Network Interface is
deleted or until the IP address gets de-assigned again. For more information, read this article.
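Switching the private IP address of an existing NIC from dynamic to static through Azure means can be sketched as follows with the Az PowerShell module; the resource group and NIC name are placeholders, and the NIC keeps the address it currently holds.

# Minimal sketch: reserve the current private IP address of a NIC as static
# (resource group and NIC name are placeholders).
$resourceGroup = "SAP-PROD-RG"
$nic = Get-AzNetworkInterface -ResourceGroupName $resourceGroup -Name "sap-db1-nic"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
Set-AzNetworkInterface -NetworkInterface $nic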

NOTE
You should assign static IP addresses through Azure means to individual vNICs. You should not assign static IP
addresses within the guest OS to a vNIC. Some Azure services like Azure Backup Service rely on the fact that at
least the primary vNIC is set to DHCP and not to static IP addresses. See also the document Troubleshoot
Azure virtual machine backup.

Multiple NICs per VM

You can define multiple virtual network interface cards (vNIC) for an Azure Virtual Machine. With the
ability to have multiple vNICs you can start to set up network traffic separation where, for example,
client traffic is routed through one vNIC and backend traffic is routed through a second vNIC.
Dependent on the type of VM there are different limitations for the number of vNICs a VM can have
assigned. Exact details, functionality, and restrictions can be found in these articles:
Create a Windows VM with multiple NICs
Create a Linux VM with multiple NICs
Deploy multi NIC VMs using a template
Deploy multi NIC VMs using PowerShell
Deploy multi NIC VMs using the Azure CLI
Site-to-Site Connectivity
In a cross-premises scenario, Azure VMs and on-premises systems are linked with a transparent and permanent VPN
connection. It is expected to become the most common SAP deployment pattern in Azure. The
assumption is that operational procedures and processes with SAP instances in Azure should work
transparently. This means you should be able to print out of these systems as well as use the SAP
Transport Management System (TMS) to transport changes from a development system in Azure to a
test system, which is deployed on-premises. More documentation around site-to-site can be found in
this article
VPN Tunnel Device

In order to create a site-to-site connection (on-premises data center to Azure data center), you need to
either obtain and configure a VPN device, or use Routing and Remote Access Service (RRAS) which
was introduced as a software component with Windows Server 2012.
Create a virtual network with a site-to-site VPN connection using PowerShell
About VPN devices for Site-to-Site VPN Gateway connections
VPN Gateway FAQ

The figure above shows two Azure subscriptions that have IP address subranges reserved for usage in
Virtual Networks in Azure. The connectivity from the on-premises network to Azure is established via
VPN.
Point-to-Site VPN
Point-to-site VPN requires every client machine to connect with its own VPN into Azure. For the SAP
scenarios we are looking at, point-to-site connectivity is not practical. Therefore, no further references
are given to point-to-site VPN connectivity.
More information can be found here
Configure a Point-to-Site connection to a VNet using the Azure portal
Configure a Point-to-Site connection to a VNet using PowerShell
Multi-Site VPN
Azure nowadays also offers the possibility to create Multi-Site VPN connectivity for one Azure
subscription. Previously, a single subscription was limited to one site-to-site VPN connection. This
limitation went away with Multi-Site VPN connections for a single subscription. This makes it possible
to leverage more than one Azure Region for a specific subscription through cross-premises
configurations.
For more documentation, see this article
VNet to VNet Connection
Using Multi-Site VPN, you need to configure a separate Azure Virtual Network in each of the regions.
However often you have the requirement that the software components in the different regions should
communicate with each other. Ideally this communication should not be routed from one Azure Region
to on-premises and from there to the other Azure Region. To shortcut, Azure offers the possibility to
configure a connection from one Azure Virtual Network in one region to another Azure Virtual
Network hosted in another region. This functionality is called VNet-to-VNet connection. More details
on this functionality can be found here: https://azure.microsoft.com/documentation/articles/vpn-
gateway-vnet-vnet-rm-ps/.
Private Connection to Azure ExpressRoute
Microsoft Azure ExpressRoute allows the creation of private connections between Azure data centers
and either the customer's on-premises infrastructure or in a co-location environment. ExpressRoute is
offered by various MPLS (packet switched) VPN providers or other Network Service Providers.
ExpressRoute connections do not go over the public Internet. ExpressRoute connections offer higher
security, more reliability through multiple parallel circuits, faster speeds, and lower latencies than
typical connections over the Internet.
Find more details on Azure ExpressRoute and offerings here:
https://azure.microsoft.com/documentation/services/expressroute/
https://azure.microsoft.com/pricing/details/expressroute/
https://azure.microsoft.com/documentation/articles/expressroute-faqs/
ExpressRoute enables connecting multiple Azure subscriptions through one ExpressRoute circuit, as documented
here:
https://azure.microsoft.com/documentation/articles/expressroute-howto-linkvnet-arm/
https://azure.microsoft.com/documentation/articles/expressroute-howto-circuit-arm/
Forced tunneling in case of cross-premises
For VMs joining on-premises domains through site-to-site, point-to-site, or ExpressRoute, you need to
make sure that the Internet proxy settings are getting deployed for all the users in those VMs as well.
By default, software running in those VMs or users using a browser to access the internet would not go
through the company proxy, but would connect straight through Azure to the internet. But even the
proxy setting is not a 100% solution to direct the traffic through the company proxy, since it is the
responsibility of software and services to check for the proxy. If software running in the VM is not
doing that or an administrator manipulates the settings, traffic to the Internet can be detoured again
directly through Azure to the Internet.
In order to avoid such a direct internet connectivity, you can configure Forced Tunneling with site-to-
site connectivity between on-premises and Azure. The detailed description of the Forced Tunneling
feature is published here https://azure.microsoft.com/documentation/articles/vpn-gateway-forced-
tunneling-rm/
Forced Tunneling with ExpressRoute is enabled by customers advertising a default route via the
ExpressRoute BGP peering sessions.
Summary of Azure networking
This chapter contained many important points about Azure Networking. Here is a summary of the
main points:
Azure Virtual Networks allow you to put a network structure into your Azure deployment. VNets
can be isolated from each other, or traffic between VNets can be controlled with the help of
Network Security Groups.
Azure Virtual Networks can be leveraged to assign IP address ranges to VMs or assign fixed IP
addresses to VMs
To set up a Site-To-Site or Point-To-Site connection you need to create an Azure Virtual Network first
Once a virtual machine has been deployed, it is no longer possible to change the Virtual Network
assigned to the VM
Quotas in Azure virtual machine services
We need to be clear about the fact that the storage and network infrastructure is shared between VMs
running a variety of services in the Azure infrastructure. As in the customer's own data centers, over-
provisioning of some of the infrastructure resources does take place to a degree. The Microsoft Azure
Platform uses disk, CPU, network, and other quotas to limit the resource consumption and to preserve
consistent and deterministic performance. The different VM types (A5, A6, etc.) have different quotas
for the number of disks, CPU, RAM, and Network.
NOTE
CPU and memory resources of the VM types supported by SAP are pre-allocated on the host nodes. This
means that once the VM is deployed, the resources on the host are available as defined by the VM type.

When planning and sizing SAP on Azure solutions, the quotas for each virtual machine size must be
considered. The VM quotas are described here (Linux) and here (Windows).
The quotas described represent the theoretical maximum values. The limit of IOPS per disk may be
achieved with small I/Os (8 KB) but possibly may not be achieved with large I/Os (1 MB). The IOPS limit
is enforced on the granularity of a single disk.
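The per-VM-type limits for vCPUs, memory, and the maximum number of data disks can also be listed programmatically, for example with the Az PowerShell module as in this sketch; the region name is an example value.

# Minimal sketch: list vCPU, memory, and maximum data disk counts of the
# VM sizes offered in a region (region name is an example value).
Get-AzVMSize -Location "westeurope" |
    Sort-Object NumberOfCores |
    Select-Object Name, NumberOfCores, MemoryInMB, MaxDataDiskCount |
    Format-Table -AutoSize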
As a rough decision tree to decide whether an SAP system fits into Azure Virtual Machine Services and
its capabilities or whether an existing system needs to be configured differently in order to deploy the
system on Azure, the decision tree below can be used:

1. The most important information to start with is the SAPS requirement for a given SAP system. The
SAPS requirements need to be separated out into the DBMS part and the SAP application part, even
if the SAP system is already deployed on-premises in a 2-tier configuration. For existing systems,
the SAPS related to the hardware in use often can be determined or estimated based on existing
SAP benchmarks. The results can be found here. For newly deployed SAP systems, you should have
gone through a sizing exercise, which should determine the SAPS requirements of the system.
2. For existing systems, the I/O volume and I/O operations per second on the DBMS server should be
measured. For newly planned systems, the sizing exercise for the new system also should give
rough ideas of the I/O requirements on the DBMS side. If unsure, you may need to conduct a
Proof of Concept.
3. Compare the SAPS requirement for the DBMS server with the SAPS the different VM types of Azure
can provide. The information on SAPS of the different Azure VM types is documented in SAP Note
1928533. The focus should be on the DBMS VM first since the database layer is the layer in an SAP
NetWeaver system that does not scale out in the majority of deployments. In contrast, the SAP
application layer can be scaled out. If none of the SAP supported Azure VM types can deliver the
required SAPS, the workload of the planned SAP system can't be run on Azure. You either need to
deploy the system on-premises or you need to change the workload volume for the system.
4. As documented here (Linux) and here (Windows), Azure enforces an IOPS quota per disk
independent whether you use Standard Storage or Premium Storage. Dependent on the VM type,
the number of data disks, which can be mounted varies. As a result, you can calculate a maximum
IOPS number that can be achieved with each of the different VM types. Dependent on the database
file layout, you can stripe disks to become one volume in the guest OS. However, if the current IOPS
volume of a deployed SAP system exceeds the calculated limits of the largest VM type of Azure and
if there is no chance to compensate with more memory, the workload of the SAP system can be
impacted severely. In such cases, you can hit a point where you should not deploy the system on
Azure.
5. Especially in SAP systems, which are deployed on-premises in 2-Tier configurations, the chances are
that the system might need to be configured on Azure in a 3-Tier configuration. In this step, you
need to check whether there is a component in the SAP application layer, which can't be scaled out
and which would not fit into the CPU and memory resources the different Azure VM types offer. If
there indeed is such a component, the SAP system and its workload can't be deployed into Azure.
But if you can scale out the SAP application components into multiple Azure VMs, the system can be
deployed into Azure.
If the DBMS and SAP application layer components can be run in Azure VMs, the configuration needs
to be defined with regard to:
Number of Azure VMs
VM types for the individual components
Number of VHDs in DBMS VM to provide enough IOPS

Managing Azure assets


Azure portal
The Azure portal is one of three interfaces to manage Azure VM deployments. The basic management
tasks, like deploying VMs from images, can be done through the Azure portal. In addition, the creation
of Storage Accounts, Virtual Networks, and other Azure components are also tasks the Azure portal
can handle well. However, functionality like uploading VHDs from on-premises to Azure or copying a
VHD within Azure are tasks that require either third-party tools or administration through
PowerShell or the CLI.

Administration and configuration tasks for the Virtual Machine instance are possible from within the
Azure portal.
Besides restarting and shutting down a Virtual Machine, you can also attach, detach, and create data
disks for the Virtual Machine instance, capture the instance for image preparation, and configure the
size of the Virtual Machine instance.
The Azure portal provides basic functionality to deploy and configure VMs and many other Azure
services. However not all available functionality is covered by the Azure portal. In the Azure portal, it's
not possible to perform tasks like:
Uploading VHDs to Azure
Copying VMs
Management via Microsoft Azure PowerShell cmdlets
Windows PowerShell is a powerful and extensible framework that has been widely adopted by
customers deploying larger numbers of systems in Azure. After the installation of PowerShell cmdlets
on a desktop, laptop or dedicated management station, the PowerShell cmdlets can be run remotely.
The process to enable a local desktop/laptop for the usage of Azure PowerShell cmdlets and how to
configure those for the usage with the Azure subscription(s) is described in this article.
More detailed steps on how to install, update, and configure the Azure PowerShell cmdlets can also be
found in this chapter of the Deployment Guide.
Customer experience so far has been that PowerShell (PS) is certainly the more powerful tool to deploy
VMs and to create custom steps in the deployment of VMs. All of the customers running SAP instances
in Azure are using PS cmdlets to supplement management tasks they do in the Azure portal or are
even using PS cmdlets exclusively to manage their deployments in Azure. Since the Azure-specific
cmdlets share the same naming convention as the more than 2000 Windows-related cmdlets, it is an
easy task for Windows administrators to leverage those cmdlets.
See example here: https://blogs.technet.com/b/keithmayer/archive/2015/07/07/18-steps-for-end-to-
end-iaas-provisioning-in-the-cloud-with-azure-resource-manager-arm-powershell-and-desired-state-
configuration-dsc.aspx
Deployment of the Azure Extension for SAP (see chapter Azure Extension for SAP in this document) is
only possible via PowerShell or CLI. Therefore it is mandatory to set up and configure PowerShell or
CLI when deploying or administering an SAP NetWeaver system in Azure.
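For example, configuring the Azure Extension for SAP on an existing VM is a single cmdlet call, as sketched below. The resource group and VM name are placeholders; check the current SAP deployment documentation for the recommended extension version before relying on this.

# Minimal sketch: configure the Azure Extension for SAP on a VM
# (resource group and VM name are placeholders).
Set-AzVMAEMExtension -ResourceGroupName "SAP-PROD-RG" -VMName "sap-db1"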
As Azure provides more functionality, new PS cmdlets are going to be added, which requires an update of
the cmdlets. Therefore, it makes sense to check the Azure Download site at least once a month
https://azure.microsoft.com/downloads/ for a new version of the cmdlets. The new version is installed
on top of the older version.
For a general list of Azure-related PowerShell commands check here: /powershell/azure/.
Management via Microsoft Azure CLI commands
For customers who use Linux and want to manage Azure resources, PowerShell might not be an option.
Microsoft offers Azure CLI as an alternative. The Azure CLI provides a set of open source, cross-
platform commands for working with the Azure Platform. The Azure CLI provides much of the same
functionality found in the Azure portal.
For information about installation, configuration and how to use CLI commands to accomplish Azure
tasks see
Install the Azure classic CLI
Deploy and manage virtual machines by using Azure Resource Manager templates and the Azure CLI
Use the Azure classic CLI for Mac, Linux, and Windows with Azure Resource Manager
Also read chapter Azure CLI for Linux VMs in the Deployment Guide on how to use Azure CLI to deploy
the Azure Extension for SAP.

First steps planning a deployment


The first step in deployment planning is NOT to check for VMs available to run SAP. The first step can
be time consuming, but it is the most important one: work with the compliance and security teams in
your company on what the boundary conditions are for deploying which type of SAP workload or
business process into a public cloud. If your company deployed other software into Azure before, the
process can be easy. If your company is more at the beginning of the journey, there might be larger
discussions necessary in order to figure out the boundary conditions, security conditions, that allow
certain SAP data and SAP business processes to be hosted in public cloud.
As useful help, you can point to Microsoft compliance offerings for a list of compliance offers Microsoft
can provide.
Other areas of concerns like data encryption for data at rest or other encryption in Azure service is
documented in Azure encryption overview.
Don't underestimate this phase of the project in your planning. Only when you have agreement and
rules around this topic should you go to the next step, which is planning the network
architecture that you deploy in Azure.

Different ways to deploy VMs for SAP in Azure


In this chapter, you learn the different ways to deploy a VM in Azure. Additional preparation
procedures, as well as handling of VHDs and VMs in Azure are covered in this chapter.
Deployment of VMs for SAP
Microsoft Azure offers multiple ways to deploy VMs and associated disks. Thus it is important to
understand the differences since preparations of the VMs might differ depending on the method of
deployment. In general, we take a look at the following scenarios:
Moving a VM from on-premises to Azure with a non-generalized disk
You plan to move a specific SAP system from on-premises to Azure. This can be done by uploading the
VHD, which contains the OS, the SAP Binaries, and DBMS binaries plus the VHDs with the data and log
files of the DBMS to Azure. In contrast to scenario #2 below, you keep the hostname, SAP SID, and SAP
user accounts in the Azure VM as they were configured in the on-premises environment. Therefore,
generalizing the image is not necessary. See chapters Preparation for moving a VM from on-premises
to Azure with a non-generalized disk of this document for on-premises preparation steps and upload
of non-generalized VMs or VHDs to Azure. Read chapter Scenario 3: Moving a VM from on-premises
using a non-generalized Azure VHD with SAP in the Deployment Guide for detailed steps of deploying
such an image in Azure.
Another option which we will not discuss in detail in this guide is using Azure Site Recovery to replicate
SAP NetWeaver Application Servers and SAP NetWeaver Central Services to Azure. We do not
recommend using Azure Site Recovery for the database layer; rather, use database-specific
replication mechanisms, like HANA System Replication. For more information, see chapter Protect SAP
of the About disaster recovery for on-premises apps guide.
Deploying a VM with a customer-specific image
Due to specific patch requirements of your OS or DBMS version, the provided images in the Azure
Marketplace might not fit your needs. Therefore, you might need to create a VM using your own
private OS/DBMS VM image, which can be deployed several times afterwards. To prepare such a
private image for duplication, the following items have to be considered:

Windows
See more details here: /azure/virtual-machines/windows/upload-generalized-managed The
Windows settings (like Windows SID and hostname) must be abstracted/generalized on the on-
premises VM via the sysprep command.

Linux
Follow the steps described in these articles for SUSE, Red Hat, or Oracle Linux, to prepare a VHD to
be uploaded to Azure.

If you have already installed SAP content in your on-premises VM (especially for 2-Tier systems), you
can adapt the SAP system settings after the deployment of the Azure VM through the instance rename
procedure supported by the SAP Software Provisioning Manager (SAP Note 1619720). See chapters
Preparation for deploying a VM with a customer-specific image for SAP and Uploading a VHD from
on-premises to Azure of this document for on-premises preparation steps and upload of a generalized
VM to Azure. Read chapter Scenario 2: Deploying a VM with a custom image for SAP in the
Deployment Guide for detailed steps of deploying such an image in Azure.
Deploying a VM out of the Azure Marketplace
You would like to use a Microsoft or third-party provided VM image from the Azure Marketplace to
deploy your VM. After you deployed your VM in Azure, you follow the same guidelines and tools to
install the SAP software and/or DBMS inside your VM as you would do in an on-premises
environment. For more detailed deployment description, see chapter Scenario 1: Deploying a VM out
of the Azure Marketplace for SAP in the Deployment Guide.
Preparing VMs with SAP for Azure
Before uploading VMs into Azure, you need to make sure the VMs and VHDs fulfill certain
requirements. There are small differences depending on the deployment method that is used.
Preparation for moving a VM from on-premises to Azure with a non-generalized disk
A common deployment method is to move an existing VM, which runs an SAP system from on-
premises to Azure. That VM and the SAP system in the VM just should run in Azure using the same
hostname and likely the same SAP SID. In this case, the guest OS of VM should not be generalized for
multiple deployments. If the on-premises network got extended into Azure, then even the same
domain accounts can be used within the VM as those were used before on-premises.
Requirements when preparing your own Azure VM Disk are:
Originally the VHD containing the operating system could have a maximum size of 127 GB only.
This limitation got eliminated at the end of March 2015. Now the VHD containing the operating
system can be up to 1 TB in size as any other Azure Storage hosted VHD as well.
It needs to be in the fixed VHD format. Dynamic VHDs or VHDs in VHDx format are not yet
supported on Azure. Dynamic VHDs will be converted to static VHDs when you upload the VHD
with PowerShell cmdlets or the CLI.
VHDs, which are mounted to the VM and should be mounted again in Azure to the VM need to be
in a fixed VHD format as well. Read this article for size limits of data disks. Dynamic VHDs will be
converted to static VHDs when you upload the VHD with PowerShell cmdlets or the CLI.
Add another local account with administrator privileges, which can be used by Microsoft support or
which can be assigned as context for services and applications to run in until the VM is deployed
and more appropriate users can be used.
Add other local accounts as those might be needed for the specific deployment scenario.

Windows
In this scenario no generalization (sysprep) of the VM is required to upload and deploy the VM on
Azure. Make sure that drive D:\ is not used. Set disk automount for attached disks as described in
chapter Setting automount for attached disks in this document.

Linux
In this scenario no generalization (waagent -deprovision) of the VM is required to upload and
deploy the VM on Azure. Make sure that /mnt/resource is not used and that ALL disks are mounted
via uuid. For the OS disk, make sure that the bootloader entry also reflects the uuid-based mount.

Preparation for deploying a VM with a customer-specific image for SAP


VHD files that contain a generalized OS are stored in containers on Azure Storage Accounts or as
Managed Disk images. You can deploy a new VM from such an image by referencing the VHD or
Managed Disk image as a source in your deployment template files as described in chapter Scenario 2:
Deploying a VM with a custom image for SAP of the Deployment Guide.
Requirements when preparing your own Azure VM Image are:
Originally the VHD containing the operating system could have a maximum size of 127 GB only. This limitation was eliminated at the end of March 2015. Now the VHD containing the operating system can be up to 1 TB in size, like any other VHD hosted on Azure Storage.
It needs to be in the fixed VHD format. Dynamic VHDs or VHDs in VHDx format are not yet supported on Azure. Dynamic VHDs will be converted to static VHDs when you upload the VHD with PowerShell cmdlets or CLI.
VHDs that are mounted to the VM and should be mounted again in Azure to the VM need to be in a fixed VHD format as well. Read this article for size limits of data disks. Dynamic VHDs will be converted to static VHDs when you upload the VHD with PowerShell cmdlets or CLI.
Add other local accounts as those might be needed for the specific deployment scenario.
If the image contains an installation of SAP NetWeaver and a rename of the host from the original name at the point of the Azure deployment is likely, it is recommended to copy the latest version of the SAP Software Provisioning Manager DVD into the template. This will enable you to easily use the SAP-provided rename functionality to adapt the changed hostname and/or change the SID of the SAP system within the deployed VM image as soon as a new copy is started.

Windows
Make sure that drive D:\ is not used. Set disk automount for attached disks as described in chapter Setting automount for attached disks in this document.

Linux
Make sure that /mnt/resource is not used and that ALL disks are mounted via uuid. For the OS
disk, make sure the bootloader entry also reflects the uuid-based mount.

SAP GUI (for administrative and setup purposes) can be pre-installed in such a template.
Other software necessary to run the VMs successfully in cross-premises scenarios can be installed
as long as this software can work with the rename of the VM.
If the VM is prepared sufficiently to be generic and eventually independent of accounts/users not
available in the targeted Azure deployment scenario, the last preparation step of generalizing such an
image is conducted.
Generalizing a VM

Windows
The last step is to sign in to the VM with an Administrator account. Open a Windows command window as administrator. Go to %windir%\system32\sysprep and execute sysprep.exe. A small window appears. It is important to check the Generalize option (the default is unchecked) and to change the Shutdown Option from its default of 'Reboot' to 'Shutdown'. This procedure
assumes that the sysprep process is executed on-premises in the Guest OS of a VM. If you want to
perform the procedure with a VM already running in Azure, follow the steps described in this
article.
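
If you prefer to script this step, a minimal sketch could look like the following (run in an elevated PowerShell session inside the guest OS; the switches are the command-line equivalents of the wizard options described above):

# Generalize the Windows installation and shut the VM down afterwards
& "$env:windir\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown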

Linux
How to capture a Linux virtual machine to use as a Resource Manager template

Transferring VMs and VHDs between on-premises to Azure


Since uploading VM images and disks to Azure is not possible via the Azure portal, you need to use
Azure PowerShell cmdlets or CLI. Another possibility is the use of the tool 'AzCopy'. The tool can copy
VHDs between on-premises and Azure (in both directions). It also can copy VHDs between Azure
Regions. Consult this documentation for download and usage of AzCopy.
A third alternative is to use one of the various third-party GUI-oriented tools. However, make sure that these tools support Azure page blobs. For our purposes, we need to use the Azure page blob store (the differences are described here: /rest/api/storageservices/Understanding-Block-Blobs--Append-Blobs--and-Page-Blobs). The tools provided by Azure are also efficient in compressing the VMs and VHDs that need to be uploaded. This is important because this efficiency in compression reduces the
upload time (which varies anyway depending on the upload link to the internet from the on-premises
facility and the Azure deployment region targeted). It is a fair assumption that uploading a VM or VHD
from European location to the U.S.-based Azure data centers will take longer than uploading the same
VMs/VHDs to the European Azure data centers.
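
As a rough illustration only, assuming the current AzCopy (v10) is installed and you have a SAS token for the target container (all names in the example are placeholders), an upload run from PowerShell could look like this:

# Upload a local VHD as a page blob; VHDs must be stored as page blobs in Azure
azcopy copy "C:\Uploads\sapos.vhd" "https://<storage account>.blob.core.windows.net/vhds/sapos.vhd?<SAS token>" --blob-type PageBlob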
Uploading a VHD from on-premises to Azure
To upload an existing VM or VHD from the on-premises network, such a VM or VHD needs to meet the requirements listed in chapter Preparation for moving a VM from on-premises to Azure with a non-generalized disk of this document.
Such a VM does NOT need to be generalized and can be uploaded in the state and shape it has after
shutdown on the on-premises side. The same is true for additional VHDs, which don't contain any
operating system.
Uploading a VHD and making it an Azure Disk

In this case we want to upload a VHD, either with or without an OS in it, and mount it to a VM as a data
disk or use it as OS disk. This is a multi-step process
PowerShell
Sign in to your subscription with Connect-AzAccount
Set the subscription of your context with Set-AzContext and parameter SubscriptionId or
SubscriptionName - see /powershell/module/az.accounts/set-Azcontext
Upload the VHD with Add-AzVhd to an Azure Storage Account - see
/powershell/module/az.compute/add-Azvhd
(Optional) Create a Managed Disk from the VHD with New-AzDisk - see
/powershell/module/az.compute/new-Azdisk
Set the OS disk of a new VM config to the VHD or Managed Disk with Set-AzVMOSDisk - see
/powershell/module/az.compute/set-Azvmosdisk
Create a new VM from the VM config with New-AzVM - see /powershell/module/az.compute/new-
Azvm
Add a data disk to a new VM with Add-AzVMDataDisk - see /powershell/module/az.compute/add-
Azvmdatadisk
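
A minimal end-to-end sketch of these PowerShell steps, using placeholder names and assuming the VHD and the Managed Disk are created in the same subscription, could look like this:

# Sign in and select the subscription
Connect-AzAccount
Set-AzContext -SubscriptionName "<subscription name>"

# Upload the local VHD as a page blob into an existing storage account
Add-AzVhd -ResourceGroupName "<resource group>" -Destination "https://<storage account>.blob.core.windows.net/vhds/sapdata.vhd" -LocalFilePath "C:\Uploads\sapdata.vhd"

# Optional: create a Managed Disk from the uploaded VHD
$diskConfig = New-AzDiskConfig -Location "<location>" -CreateOption Import -StorageAccountId "<storage account resource id>" -SourceUri "https://<storage account>.blob.core.windows.net/vhds/sapdata.vhd"
New-AzDisk -ResourceGroupName "<resource group>" -DiskName "<disk name>" -Disk $diskConfig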
Azure CLI
Sign in to your subscription with az login
Select your subscription with az account set --subscription <subscription name or id >
Upload the VHD with az storage blob upload - see Using the Azure CLI with Azure Storage
(Optional) Create a Managed Disk from the VHD with az disk create - see
https://docs.microsoft.com/cli/azure/disk
Create a new VM specifying the uploaded VHD or Managed Disk as OS disk with az vm create and
parameter --attach-os-disk
Add a data disk to a new VM with az vm disk attach and parameter --new
Template
Upload the VHD with PowerShell or Azure CLI
(Optional) Create a Managed Disk from the VHD with PowerShell, Azure CLI, or the Azure portal
Deploy the VM with a JSON template referencing the VHD as shown in this example JSON template
or using Managed Disks as shown in this example JSON template.
Deployment of a VM Image
To upload an existing VM or VHD from the on-premises network in order to use it as an Azure VM image, such a VM or VHD needs to meet the requirements listed in chapter Preparation for deploying a VM with a customer-specific image for SAP of this document.
Use sysprep on Windows or waagent -deprovision on Linux to generalize your VM - see Sysprep
Technical Reference for Windows or How to capture a Linux virtual machine to use as a Resource
Manager template for Linux
Sign in to your subscription with Connect-AzAccount
Set the subscription of your context with Set-AzContext and parameter SubscriptionId or
SubscriptionName - see /powershell/module/az.accounts/set-Azcontext
Upload the VHD with Add-AzVhd to an Azure Storage Account - see
/powershell/module/az.compute/add-Azvhd
(Optional) Create a Managed Disk Image from the VHD with New-AzImage - see
/powershell/module/az.compute/new-Azimage
Set the OS disk of a new VM config to the:
VHD with Set-AzVMOSDisk -SourceImageUri -CreateOption fromImage - see /powershell/module/az.compute/set-Azvmosdisk
Managed Disk Image with Set-AzVMSourceImage - see /powershell/module/az.compute/set-Azvmsourceimage
Create a new VM from the VM config with New-AzVM - see /powershell/module/az.compute/new-
Azvm
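
To illustrate the Managed Disk Image path, a minimal sketch with placeholder names (assuming the generalized VHD was already uploaded) could be:

# Create a Managed Disk Image from a generalized VHD blob
$imageConfig = New-AzImageConfig -Location "<location>"
$imageConfig = Set-AzImageOsDisk -Image $imageConfig -OsType Windows -OsState Generalized -BlobUri "https://<storage account>.blob.core.windows.net/vhds/sapimage.vhd"
New-AzImage -ResourceGroupName "<resource group>" -ImageName "<image name>" -Image $imageConfig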
Azure CLI
Use sysprep on Windows or waagent -deprovision on Linux to generalize your VM - see Sysprep
Technical Reference for Windows or How to capture a Linux virtual machine to use as a Resource
Manager template for Linux
Sign in to your subscription with az login
Select your subscription with az account set --subscription <subscription name or id >
Upload the VHD with az storage blob upload - see Using the Azure CLI with Azure Storage
(Optional) Create a Managed Disk Image from the VHD with az image create - see
https://docs.microsoft.com/cli/azure/image
Create a new VM specifying the uploaded VHD or Managed Disk Image as OS disk with az vm
create and parameter --image
Template
Use sysprep on Windows or waagent -deprovision on Linux to generalize your VM - see Sysprep
Technical Reference for Windows or How to capture a Linux virtual machine to use as a Resource
Manager template for Linux
Upload the VHD with PowerShell or Azure CLI
(Optional) Create a Managed Disk Image from the VHD with PowerShell, Azure CLI, or the Azure
portal
Deploy the VM with a JSON template referencing the image VHD as shown in this example JSON
template or using the Managed Disk Image as shown in this example JSON template.
Downloading VHDs or Managed Disks to on-premises
Azure Infrastructure as a Service is not a one-way street of only being able to upload VHDs and SAP
systems. You can move SAP systems from Azure back into the on-premises world as well.
During the download, the VHDs or Managed Disks can't be active. Even when downloading disks that are mounted to VMs, the VM needs to be shut down and deallocated. If you only want to download the database content to set up a new system on-premises, and if it is acceptable that the system in Azure remains operational during the download and the setup of the new system, you can avoid a long downtime by performing a compressed database backup to a disk and downloading just that disk instead of also downloading the OS base VM.
PowerShell
Downloading a Managed Disk You first need to get access to the underlying blob of the
Managed Disk. Then you can copy the underlying blob to a new storage account and download
the blob from this storage account.

$access = Grant-AzDiskAccess -ResourceGroupName <resource group> -DiskName <disk name> -Access Read -DurationInSecond 3600
$key = (Get-AzStorageAccountKey -ResourceGroupName <resource group> -Name <storage account name>)[0].Value
$destContext = (New-AzStorageContext -StorageAccountName <storage account name> -StorageAccountKey $key)
Start-AzStorageBlobCopy -AbsoluteUri $access.AccessSAS -DestContainer <container name> -DestBlob <blob name> -DestContext $destContext
# Wait for blob copy to finish
Get-AzStorageBlobCopyState -Container <container name> -Blob <blob name> -Context $destContext
Save-AzVhd -SourceUri <blob in new storage account> -LocalFilePath <local file path> -StorageKey $key
# Wait for download to finish
Revoke-AzDiskAccess -ResourceGroupName <resource group> -DiskName <disk name>

Downloading a VHD Once the SAP system is stopped and the VM is shut down, you can use the
PowerShell cmdlet Save-AzVhd on the on-premises target to download the VHD disks back to
the on-premises world. In order to do that, you need the URL of the VHD, which you can find in
the 'Storage' section of the Azure portal (you need to navigate to the Storage Account and the
storage container where the VHD was created) and you need to know where the VHD should be
copied to.
Then you can leverage the command by defining the parameter SourceUri as the URL of the
VHD to download and the LocalFilePath as the physical location of the VHD (including its name).
The command could look like:
Save-AzVhd -ResourceGroupName <resource group name of storage account> -SourceUri http://<storage account name>.blob.core.windows.net/<container name>/sapidedata.vhd -LocalFilePath E:\Azure_downloads\sapidedata.vhd

For more details of the Save-AzVhd cmdlet, check here: /powershell/module/az.compute/save-Azvhd.
Azure CLI
Downloading a Managed Disk You first need to get access to the underlying blob of the
Managed Disk. Then you can copy the underlying blob to a new storage account and download
the blob from this storage account.

az disk grant-access --ids "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>" --duration-in-seconds 3600
az storage blob download --sas-token "<sas token>" --account-name <account name> --container-name <container name> --name <blob name> --file <local file>
az disk revoke-access --ids "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>"

Downloading a VHD Once the SAP system is stopped and the VM is shut down, you can use the Azure CLI command az storage blob download on the on-premises target to download the VHD disks back to the on-premises world. In order to do that, you need the name and the container of the VHD, which you can find in the 'Storage' section of the Azure portal (you need to
navigate to the Storage Account and the storage container where the VHD was created) and you
need to know where the VHD should be copied to.
Then you can leverage the command by defining the parameters blob and container of the VHD
to download and the destination as the physical target location of the VHD (including its name).
The command could look like:

az storage blob download --name <name of the VHD to download> --container-name <container of
the VHD to download> --account-name <storage account name of the VHD to download> --account-
key <storage account key> --file <destination of the VHD to download>

Transferring VMs and disks within Azure


Copying SAP systems within Azure
An SAP system or even a dedicated DBMS server supporting an SAP application layer will likely consist
of several disks, which contain either the OS with the binaries or the data and log file(s) of the SAP
database. Neither the Azure functionality of copying disks nor the Azure functionality of saving disks to
a local disk has a synchronization mechanism, which snapshots multiple disks in a consistent manner.
Therefore, the state of the copied or saved disks even if those are mounted against the same VM would
be different. This means that in the concrete case of having different data and logfile(s) contained in the
different disks, the database in the end would be inconsistent.
Conclusion: In order to copy or save disks that are part of an SAP system configuration, you need to stop the SAP system and shut down the deployed VM. Only then can you copy or download the set of disks to create a copy of the SAP system either in Azure or on-premises.
Data disks can be stored as VHD files in an Azure Storage Account and can be directly attached to a
virtual machine or be used as an image. In this case, the VHD is copied to another location before
being attached to the virtual machine. The full name of the VHD file in Azure must be unique within
Azure. As mentioned earlier, the name is a three-part name that looks like:
http(s)://<storage account name>.blob.core.windows.net/<container name>/<vhd name>

Data disks can also be Managed Disks. In this case, the Managed Disk is used to create a new Managed
Disk before being attached to the virtual machine. The name of the Managed Disk must be unique
within a resource group.
PowerShell

You can use Azure PowerShell cmdlets to copy a VHD as shown in this article. To create a new Managed
Disk, use New-AzDiskConfig and New-AzDisk as shown in the following example.

$config = New-AzDiskConfig -CreateOption Copy -SourceUri "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>" -Location <location>
New-AzDisk -ResourceGroupName <resource group name> -DiskName <disk name> -Disk $config

Azure CLI

You can use Azure CLI to copy a VHD. To create a new Managed Disk, use az disk create as shown in the
following example.

az disk create --source "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>" --name <disk name> --resource-group <resource group name> --location <location>

Azure Storage tools

https://storageexplorer.com/
Professional editions of Azure Storage Explorers can be found here:
https://www.cerebrata.com/
https://clumsyleaf.com/products/cloudxplorer
The copy of a VHD itself within a storage account is a process, which takes only a few seconds (similar
to SAN hardware creating snapshots with lazy copy and copy on write). After you have a copy of the
VHD file, you can attach it to a virtual machine or use it as an image to attach copies of the VHD to
virtual machines.
PowerShell
# attach a vhd to a vm
$vm = Get-AzVM -ResourceGroupName <resource group name> -Name <vm name>
$vm = Add-AzVMDataDisk -VM $vm -Name newdatadisk -VhdUri <path to vhd> -Caching <caching option> -
DiskSizeInGB $null -Lun <lun, for example 0> -CreateOption attach
$vm | Update-AzVM

# attach a managed disk to a vm


$vm = Get-AzVM -ResourceGroupName <resource group name> -Name <vm name>
$vm = Add-AzVMDataDisk -VM $vm -Name newdatadisk -ManagedDiskId <managed disk id> -Caching
<caching option> -DiskSizeInGB $null -Lun <lun, for example 0> -CreateOption attach
$vm | Update-AzVM

# attach a copy of the vhd to a vm


$vm = Get-AzVM -ResourceGroupName <resource group name> -Name <vm name>
$vm = Add-AzVMDataDisk -VM $vm -Name <disk name> -VhdUri <new path of vhd> -SourceImageUri <path
to image vhd> -Caching <caching option> -DiskSizeInGB $null -Lun <lun, for example 0> -
CreateOption fromImage
$vm | Update-AzVM

# attach a copy of the managed disk to a vm


$vm = Get-AzVM -ResourceGroupName <resource group name> -Name <vm name>
$diskConfig = New-AzDiskConfig -Location $vm.Location -CreateOption Copy -SourceUri <source
managed disk id>
$disk = New-AzDisk -DiskName <disk name> -Disk $diskConfig -ResourceGroupName <resource group
name>
$vm = Add-AzVMDataDisk -VM $vm -Caching <caching option> -Lun <lun, for example 0> -CreateOption
attach -ManagedDiskId $disk.Id
$vm | Update-AzVM

Azure CLI

# attach a vhd to a vm
az vm unmanaged-disk attach --resource-group <resource group name> --vm-name <vm name> --vhd-uri
<path to vhd>

# attach a managed disk to a vm


az vm disk attach --resource-group <resource group name> --vm-name <vm name> --disk <managed disk
id>

# attach a copy of the vhd to a vm


# this scenario is currently not possible with Azure CLI. A workaround is to manually copy the vhd
to the destination.

# attach a copy of a managed disk to a vm


az disk create --name <new disk name> --resource-group <resource group name> --location <location
of target virtual machine> --source <source managed disk id>
az vm disk attach --disk <new disk name or managed disk id> --resource-group <resource group name>
--vm-name <vm name> --caching <caching option> --lun <lun, for example 0>

Copying disks between Azure Storage Accounts


This task cannot be performed on the Azure portal. You can use Azure PowerShell cmdlets, Azure CLI,
or a third-party storage browser. The PowerShell cmdlets or CLI commands can create and manage
blobs, which include the ability to asynchronously copy blobs across Storage Accounts and across
regions within the Azure subscription.
PowerShell

You can also copy VHDs between subscriptions. For more information, read this article.
The basic flow of the PS cmdlet logic looks like this:
Create a storage account context for the source storage account with New-AzStorageContext - see
/powershell/module/az.storage/new-AzStoragecontext
Create a storage account context for the target storage account with New-AzStorageContext - see
/powershell/module/az.storage/new-AzStoragecontext
Start the copy with

Start-AzStorageBlobCopy -SrcBlob <source blob name> -SrcContainer <source container name> -SrcContext <variable containing context of source storage account> -DestBlob <target blob name> -DestContainer <target container name> -DestContext <variable containing context of target storage account>

Check the status of the copy in a loop with

Get-AzStorageBlobCopyState -Blob <target blob name> -Container <target container name> -Context
<variable containing context of target storage account>

Attach the new VHD to a virtual machine as described above.


For examples see this article.
Azure CLI

Start the copy with

az storage blob copy start --source-blob <source blob name> --source-container <source container
name> --source-account-name <source storage account name> --source-account-key <source storage
account key> --destination-container <target container name> --destination-blob <target blob name>
--account-name <target storage account name> --account-key <target storage account name>

Check the status of the copy in a loop with

az storage blob show --name <target blob name> --container <target container name> --account-name
<target storage account name> --account-key <target storage account name>

Attach the new VHD to a virtual machine as described above.


Disk Handling
VM/disk structure for SAP deployments
Ideally the handling of the structure of a VM and the associated disks should be simple. In on-premises
installations, customers developed many ways of structuring a server installation.
One base disk, which contains the OS and all the binaries of the DBMS and/or SAP. Since March
2015, this disk can be up to 1 TB in size instead of earlier restrictions that limited it to 127 GB.
One or multiple disks, which contain the DBMS log file of the SAP database and the log file of the
DBMS temp storage area (if the DBMS supports this). If the database log IOPS requirements are
high, you need to stripe multiple disks in order to reach the IOPS volume required.
A number of disks containing one or two database files of the SAP database and the DBMS temp
data files as well (if the DBMS supports this).
Windows
With many customers we saw configurations where, for example, SAP and DBMS binaries were not installed on the c:\ drive where the OS was installed. There were various reasons for this, but when we went back to the root cause, it usually was that the drives were small and OS upgrades needed additional space 10-15 years ago. Neither condition applies very often anymore. Today the c:\ drive can be mapped to large volumes. In order to keep deployments simple in their structure, it is recommended to follow this deployment pattern for SAP NetWeaver systems in Azure:
The Windows operating system pagefile should be on the D: drive (non-persistent disk)

Linux
Place the Linux swap file under /mnt/resource on Linux as described in this article. The swap file can be configured in the configuration file of the Linux Agent /etc/waagent.conf. Add or change the following settings:

ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=30720

To activate the changes, you need to restart the Linux Agent with

sudo service waagent restart

Read SAP Note 1597355 for more details on the recommended swap file size

The number of disks used for the DBMS data files and the type of Azure Storage these disks are hosted
on should be determined by the IOPS requirements and the latency required. Exact quotas are
described in this article (Linux) and this article (Windows).
Experience of SAP deployments over the last two years taught us some lessons, which can be
summarized as:
IOPS traffic to different data files is not always the same since existing customer systems might have differently sized data files representing their SAP database(s). As a result, it turned out to be better to use a RAID configuration over multiple disks and to place the data files on LUNs carved out of that stripe set. There were situations, especially with Azure Standard Storage, where the IOPS rate against the DBMS transaction log hit the quota of a single disk. In such scenarios, the use of Premium Storage is recommended, or alternatively aggregating multiple Standard Storage disks with a software stripe.

Windows
Performance best practices for SQL Server in Azure Virtual Machines
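
On Windows, such a software stripe across multiple data disks can be built with Storage Spaces. A minimal sketch (run inside the guest OS and assuming all currently unallocated data disks should form one simple, striped volume) could look like this:

# Pool all poolable data disks and create one simple (striped) virtual disk across them
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SAPDataPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "SAPDataPool" -FriendlyName "SAPDataDisk" -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns $disks.Count
# The new virtual disk then needs to be initialized and formatted like any other disk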

Linux
Configure Software RAID on Linux
Configure LVM on a Linux VM in Azure

Premium Storage shows significantly better performance, especially for critical transaction log writes. For SAP scenarios that are expected to deliver production-like performance, it is highly recommended to use VM series that can leverage Azure Premium Storage.
Keep in mind that the disk, which contains the OS, and, as we recommend, the binaries of SAP and the database (base VM) as well, is no longer limited to 127 GB. It can now be up to 1 TB in size. This should be enough space to keep all the necessary files including, for example, SAP batch job logs.
For more suggestions and more details, specifically for DBMS VMs, consult the DBMS Deployment
Guide
Disk Handling
In most scenarios, you need to create additional disks in order to deploy the SAP database into the VM.
We talked about the considerations on number of disks in chapter VM/disk structure for SAP
deployments of this document. The Azure portal allows you to attach and detach disks once a base VM is
deployed. The disks can be attached/detached when the VM is up and running as well as when it is
stopped. When attaching a disk, the Azure portal offers to attach an empty disk or an existing disk,
which at this point in time is not attached to another VM.
Note : Disks can only be attached to one VM at any given time.

During the deployment of a new virtual machine, you can decide whether you want to use Managed
Disks or place your disks on Azure Storage Accounts. If you want to use Premium Storage, we
recommend using Managed Disks.
Next, you need to decide whether you want to create a new and empty disk or whether you want to
select an existing disk that was uploaded earlier and should be attached to the VM now.
IMPORTANT : You DO NOT want to use Host Caching with Azure Standard Storage. You should leave
the Host Cache preference at the default of NONE. With Azure Premium Storage, you should enable
Read Caching if the I/O characteristic is mostly read like typical I/O traffic against database data files. In
case of database transaction log file, no caching is recommended.

Windows
How to attach a data disk in the Azure portal
If disks are attached, you need to sign in to the VM to open the Windows Disk Manager. If
automount is not enabled as recommended in chapter Setting automount for attached disks, the
newly attached volume needs to be taken online and initialized.

Linux
If disks are attached, you need to sign in to the VM and initialize the disks as described in this
article

If the new disk is an empty disk, you need to format the disk as well. For formatting, especially for
DBMS data and log files the same recommendations as for bare-metal deployments of the DBMS
apply.
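For example, on Windows a newly attached empty disk can be brought online, initialized, and formatted with a short PowerShell sketch like the following (run inside the guest OS; the 64 KB allocation unit size is a common recommendation for SQL Server data and log files, adjust it to your DBMS vendor's recommendation):

# Initialize and format all raw (newly attached, empty) disks
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "sapdata" -Confirm:$false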
An Azure Storage account does not provide infinite resources in terms of I/O volume, IOPS, and data
volume. Usually DBMS VMs are most affected by this. It might be best to use a separate Storage
Account for each VM if you have few high I/O volume VMs to deploy in order to stay within the limit of
the Azure Storage Account volume. Otherwise, you need to see how you can balance these VMs
between different Storage accounts without hitting the limit of each single Storage Account. More
details are discussed in the DBMS Deployment Guide. You should also keep these limitations in mind
for pure SAP application server VMs or other VMs, which eventually might require additional VHDs.
These restrictions do not apply if you use Managed Disk. If you plan to use Premium Storage, we
recommend using Managed Disk.
Another topic, which is relevant for Storage Accounts is whether the VHDs in a Storage Account are
getting Geo-replicated. Geo-replication is enabled or disabled on the Storage Account level and not on
the VM level. If geo-replication is enabled, the VHDs within the Storage Account would be replicated
into another Azure data center within the same region. Before deciding on this, you should think about
the following restriction:
Azure Geo-replication works locally on each VHD in a VM and does not replicate the I/Os in
chronological order across multiple VHDs in a VM. Therefore, the VHD that represents the base VM as
well as any additional VHDs attached to the VM are replicated independent of each other. This means
there is no synchronization between the changes in the different VHDs. The fact that the I/Os are
replicated independently of the order in which they are written means that geo-replication is not of
value for database servers that have their databases distributed over multiple VHDs. In addition to the
DBMS, there also might be other applications where processes write or manipulate data in different
VHDs and where it is important to keep the order of changes. If that is a requirement, geo-replication
in Azure should not be enabled. Dependent on whether you need or want geo-replication for a set of
VMs, but not for another set, you can already categorize VMs and their related VHDs into different
Storage Accounts that have geo-replication enabled or disabled.
Setting automount for attached disks

Windows
For VMs that are created from your own images or disks, it is necessary to check and possibly set the automount parameter. Setting this parameter allows the VM to mount the attached drives again automatically after a restart or redeployment in Azure. The parameter is set for the images provided by Microsoft in the Azure Marketplace.
In order to set the automount, check the documentation of the command-line executable
diskpart.exe here:
DiskPart Command-Line Options
Automount
The Windows command-line window should be opened as administrator.
If disks are attached, you need to sign in to the VM to open the Windows Disk Manager. If automount is not enabled as recommended in chapter Setting automount for attached disks, the newly attached volume needs to be taken online and initialized.
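
A simple way to script this from an elevated PowerShell session is to feed diskpart a small script file that enables automount (a sketch; the file path is only an example):

# Enable automatic mounting of new basic volumes via diskpart
Set-Content -Path "$env:TEMP\enable-automount.txt" -Value "automount enable"
diskpart.exe /s "$env:TEMP\enable-automount.txt"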

Linux
You need to initialize a newly attached empty disk as described in this article. You also need to add
new disks to the /etc/fstab.

Final Deployment
For the final deployment and exact steps, especially with regards to the deployment of the Azure
Extension for SAP, refer to the Deployment Guide.

Accessing SAP systems running within Azure VMs


For scenarios where you want to connect to those SAP systems across the public internet using SAP
GUI, the following procedures need to be applied.
Later in the document we will discuss the other major scenario, connecting to SAP systems in cross-
premises deployments, which have a site-to-site connection (VPN tunnel) or Azure ExpressRoute
connection between the on-premises systems and Azure systems.
Remote Access to SAP systems
With Azure Resource Manager, there are no default endpoints anymore like in the former classic
model. All ports of an Azure Resource Manager VM are open as long as:
1. No Network Security Group is defined for the subnet or the network interface. Network traffic to
Azure VMs can be secured via so-called "Network Security Groups". For more information, see
What is a Network Security Group (NSG)?
2. No Azure Load Balancer is defined for the network interface
See the architecture difference between classic model and ARM as described in this article.
Configuration of the SAP System and SAP GUI connectivity over the internet
See this article, which describes details to this topic:
/archive/blogs/saponsqlserver/sap-gui-connection-closed-when-connecting-to-sap-system-in-azure
Changing Firewall Settings within VM
It might be necessary to configure the firewall on your virtual machines to allow inbound traffic to
your SAP system.

Windows
By default, the Windows Firewall within an Azure deployed VM is turned on. You now need to allow
the SAP Port to be opened, otherwise the SAP GUI will not be able to connect. To do this:
Open Control Panel\System and Security\Windows Firewall to Advanced Settings.
Now right-click on Inbound Rules and choose New Rule.
In the following wizard, choose to create a new Port rule.
In the next step of the wizard, leave the setting at TCP and type in the port number you want to open. Since our SAP instance number is 00, we took 3200. If your instance has a different instance number, open the port derived from that instance number (32<nn>).
In the next part of the wizard, you need to leave the item Allow Connection checked.
In the next step of the wizard, you need to define whether the rule applies for the Domain, Private, and Public networks. Adjust it if necessary to your needs. However, if you connect with SAP GUI from the outside through the public network, you need to have the rule applied to the public network.
In the last step of the wizard, name the rule and save by pressing Finish.
The rule becomes effective immediately.
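
Instead of using the wizard, the same rule can be created with PowerShell. A minimal sketch for instance number 00 (port 3200) that applies the rule to all network profiles could be:

# Open the SAP dispatcher port 32<nn> (here 3200 for instance 00) for inbound TCP traffic
New-NetFirewallRule -DisplayName "SAP GUI 3200" -Direction Inbound -Protocol TCP -LocalPort 3200 -Action Allow -Profile Domain,Private,Public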

Linux
The Linux images in the Azure Marketplace do not enable the iptables firewall by default and the
connection to your SAP system should work. If you enabled iptables or another firewall, refer to
the documentation of iptables or the used firewall to allow inbound tcp traffic to port 32xx (where
xx is the system number of your SAP system).

Security recommendations
The SAP GUI does not connect immediately to any of the SAP instances (port 32xx) which are running,
but first connects via the port opened to the SAP message server process (port 36xx). In the past, the
same port was used by the message server for the internal communication to the application
instances. To prevent on-premises application servers from inadvertently communicating with a
message server in Azure, the internal communication ports can be changed. It is highly recommended
to change the internal communication between the SAP message server and its application instances
to a different port number on systems that have been cloned from on-premises systems, such as a
clone of development for project testing etc. This can be done with the default profile parameter:
rdisp/msserv_internal

as documented in Security Settings for the SAP Message Server


Single VM with SAP NetWeaver demo/training scenario

In this scenario we are implementing a typical training/demo system scenario where the complete training/demo scenario is contained in a single VM. We assume that the deployment is done through VM image templates. We also assume that multiple of these demo/training VMs need to be deployed with the VMs having the same name. The training systems don't have connectivity to your on-premises assets and are the opposite of a hybrid deployment.
The assumption is that you created a VM Image as described in some sections of chapter Preparing VMs with SAP for Azure in this document.
The sequence of events to implement the scenario looks like this:
PowerShell

Create a new resource group for every training/demo landscape

$rgName = "SAPERPDemo1"
New-AzResourceGroup -Name $rgName -Location "North Europe"

Create a new storage account if you don't want to use Managed Disks

$suffix = Get-Random -Minimum 100000 -Maximum 999999


$account = New-AzStorageAccount -ResourceGroupName $rgName -Name "saperpdemo$suffix" -SkuName
Standard_LRS -Kind "Storage" -Location "North Europe"

Create a new virtual network for every training/demo landscape to enable the usage of the same
hostname and IP addresses. The virtual network is protected by a Network Security Group that only
allows traffic to port 3389 to enable Remote Desktop access and port 22 for SSH.
# Create a new Virtual Network
$rdpRule = New-AzNetworkSecurityRuleConfig -Name SAPERPDemoNSGRDP -Protocol * -SourcePortRange * -
DestinationPortRange 3389 -Access Allow -Direction Inbound -SourceAddressPrefix * -
DestinationAddressPrefix * -Priority 100
$sshRule = New-AzNetworkSecurityRuleConfig -Name SAPERPDemoNSGSSH -Protocol * -SourcePortRange * -
DestinationPortRange 22 -Access Allow -Direction Inbound -SourceAddressPrefix * -
DestinationAddressPrefix * -Priority 101
$nsg = New-AzNetworkSecurityGroup -Name SAPERPDemoNSG -ResourceGroupName $rgName -Location "North
Europe" -SecurityRules $rdpRule,$sshRule

$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name Subnet1 -AddressPrefix 10.0.1.0/24 -NetworkSecurityGroup $nsg
$vnet = New-AzVirtualNetwork -Name SAPERPDemoVNet -ResourceGroupName $rgName -Location "North Europe" -AddressPrefix 10.0.1.0/24 -Subnet $subnetConfig

Create a new public IP address that can be used to access the virtual machine from the internet

# Create a public IP address with a DNS name


$pip = New-AzPublicIpAddress -Name SAPERPDemoPIP -ResourceGroupName $rgName -Location "North
Europe" -DomainNameLabel $rgName.ToLower() -AllocationMethod Dynamic

Create a new network interface for the virtual machine

# Create a new Network Interface


$nic = New-AzNetworkInterface -Name SAPERPDemoNIC -ResourceGroupName $rgName -Location "North
Europe" -Subnet $vnet.Subnets[0] -PublicIpAddress $pip

Create a virtual machine. For this scenario, every VM will have the same name. The SAP SID of the
SAP NetWeaver instances in those VMs will be the same as well. Within the Azure Resource Group,
the name of the VM needs to be unique, but in different Azure Resource Groups you can run VMs
with the same name. The default 'Administrator' account of Windows or 'root' for Linux are not
valid. Therefore, a new administrator user name needs to be defined together with a password. The
size of the VM also needs to be defined.

#####
# Create a new virtual machine with an official image from the Azure Marketplace
#####
$cred=Get-Credential -Message "Type the name and password of the local administrator account."
$vmconfig = New-AzVMConfig -VMName SAPERPDemo -VMSize Standard_D11

# select image
$vmconfig = Set-AzVMSourceImage -VM $vmconfig -PublisherName "MicrosoftWindowsServer" -Offer
"WindowsServer" -Skus "2012-R2-Datacenter" -Version "latest"
$vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Windows -ComputerName "SAPERPDemo" -Credential
$cred -ProvisionVMAgent -EnableAutoUpdate
# $vmconfig = Set-AzVMSourceImage -VM $vmconfig -PublisherName "SUSE" -Offer "SLES-SAP" -Skus "12-
SP1" -Version "latest"
# $vmconfig = Set-AzVMSourceImage -VM $vmconfig -PublisherName "RedHat" -Offer "RHEL" -Skus "7.2"
-Version "latest"
# $vmconfig = Set-AzVMSourceImage -VM $vmconfig -PublisherName "Oracle" -Offer "Oracle-Linux" -
Skus "7.2" -Version "latest"
# $vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Linux -ComputerName "SAPERPDemo" -Credential
$cred

$vmconfig = Add-AzVMNetworkInterface -VM $vmconfig -Id $nic.Id

$vmconfig = Set-AzVMBootDiagnostics -Disable -VM $vmconfig


$vm = New-AzVM -ResourceGroupName $rgName -Location "North Europe" -VM $vmconfig
#####
# Create a new virtual machine with a VHD that contains the private image that you want to use
#####
$cred=Get-Credential -Message "Type the name and password of the local administrator account."
$vmconfig = New-AzVMConfig -VMName SAPERPDemo -VMSize Standard_D11

$vmconfig = Add-AzVMNetworkInterface -VM $vmconfig -Id $nic.Id

$diskName="osfromimage"
$osDiskUri=$account.PrimaryEndpoints.Blob.ToString() + "vhds/" + $diskName + ".vhd"

$vmconfig = Set-AzVMOSDisk -VM $vmconfig -Name $diskName -VhdUri $osDiskUri -CreateOption fromImage -SourceImageUri <path to VHD that contains the OS image> -Windows
$vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Windows -ComputerName "SAPERPDemo" -Credential
$cred
#$vmconfig = Set-AzVMOSDisk -VM $vmconfig -Name $diskName -VhdUri $osDiskUri -CreateOption
fromImage -SourceImageUri <path to VHD that contains the OS image> -Linux
#$vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Linux -ComputerName "SAPERPDemo" -Credential
$cred

$vmconfig = Set-AzVMBootDiagnostics -Disable -VM $vmconfig


$vm = New-AzVM -ResourceGroupName $rgName -Location "North Europe" -VM $vmconfig

#####
# Create a new virtual machine with a Managed Disk Image
#####
$cred=Get-Credential -Message "Type the name and password of the local administrator account."
$vmconfig = New-AzVMConfig -VMName SAPERPDemo -VMSize Standard_D11

$vmconfig = Add-AzVMNetworkInterface -VM $vmconfig -Id $nic.Id

$vmconfig = Set-AzVMSourceImage -VM $vmconfig -Id <Id of Managed Disk Image>


$vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Windows -ComputerName "SAPERPDemo" -Credential
$cred
#$vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Linux -ComputerName "SAPERPDemo" -Credential
$cred

$vmconfig = Set-AzVMBootDiagnostics -Disable -VM $vmconfig


$vm = New-AzVM -ResourceGroupName $rgName -Location "North Europe" -VM $vmconfig

Optionally add additional disks and restore necessary content. All blob names (URLs to the blobs)
must be unique within Azure.

# Optional: Attach additional VHD data disks


$vm = Get-AzVM -ResourceGroupName $rgName -Name SAPERPDemo
$dataDiskUri = $account.PrimaryEndpoints.Blob.ToString() + "vhds/datadisk.vhd"
Add-AzVMDataDisk -VM $vm -Name datadisk -VhdUri $dataDiskUri -DiskSizeInGB 1023 -CreateOption
empty | Update-AzVM

# Optional: Attach additional Managed Disks


$vm = Get-AzVM -ResourceGroupName $rgName -Name SAPERPDemo
Add-AzVMDataDisk -VM $vm -Name datadisk -DiskSizeInGB 1023 -CreateOption empty -Lun 0 | Update-
AzVM

CLI

The following example code can be used on Linux. For Windows, either use PowerShell as described
above or adapt the example to use %rgName% instead of $rgName and set the environment variable
using the Windows command set.
Create a new resource group for every training/demo landscape
rgName=SAPERPDemo1
rgNameLower=saperpdemo1
az group create --name $rgName --location "North Europe"

Create a new storage account

az storage account create --resource-group $rgName --location "North Europe" --kind Storage --sku
Standard_LRS --name $rgNameLower

Create a new virtual network for every training/demo landscape to enable the usage of the same
hostname and IP addresses. The virtual network is protected by a Network Security Group that only
allows traffic to port 3389 to enable Remote Desktop access and port 22 for SSH.

az network nsg create --resource-group $rgName --location "North Europe" --name SAPERPDemoNSG
az network nsg rule create --resource-group $rgName --nsg-name SAPERPDemoNSG --name
SAPERPDemoNSGRDP --protocol \* --source-address-prefix \* --source-port-range \* --destination-
address-prefix \* --destination-port-range 3389 --access Allow --priority 100 --direction Inbound
az network nsg rule create --resource-group $rgName --nsg-name SAPERPDemoNSG --name
SAPERPDemoNSGSSH --protocol \* --source-address-prefix \* --source-port-range \* --destination-
address-prefix \* --destination-port-range 22 --access Allow --priority 101 --direction Inbound

az network vnet create --resource-group $rgName --name SAPERPDemoVNet --location "North Europe" --
address-prefixes 10.0.1.0/24
az network vnet subnet create --resource-group $rgName --vnet-name SAPERPDemoVNet --name Subnet1 -
-address-prefix 10.0.1.0/24 --network-security-group SAPERPDemoNSG

Create a new public IP address that can be used to access the virtual machine from the internet

az network public-ip create --resource-group $rgName --name SAPERPDemoPIP --location "North Europe" --dns-name $rgNameLower --allocation-method Dynamic

Create a new network interface for the virtual machine

az network nic create --resource-group $rgName --location "North Europe" --name SAPERPDemoNIC --
public-ip-address SAPERPDemoPIP --subnet Subnet1 --vnet-name SAPERPDemoVNet

Create a virtual machine. For this scenario, every VM will have the same name. The SAP SID of the
SAP NetWeaver instances in those VMs will be the same as well. Within the Azure Resource Group,
the name of the VM needs to be unique, but in different Azure Resource Groups you can run VMs
with the same name. The default 'Administrator' account of Windows or 'root' for Linux are not
valid. Therefore, a new administrator user name needs to be defined together with a password. The
size of the VM also needs to be defined.
#####
# Create virtual machines using storage accounts
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest --admin-
username <username> --admin-password <password> --size Standard_D11 --use-unmanaged-disk --
storage-account $rgNameLower --storage-container-name vhds --os-disk-name os
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image SUSE:SLES-SAP:12-SP1:latest --admin-username <username> --admin-password
<password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-
container-name vhds --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image RedHat:RHEL:7.2:latest --admin-username <username> --admin-password
<password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-
container-name vhds --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image "Oracle:Oracle-Linux:7.2:latest" --admin-username <username> --admin-
password <password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --
storage-container-name vhds --os-disk-name os --authentication-type password

#####
# Create virtual machines using Managed Disks
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest --admin-
username <username> --admin-password <password> --size Standard_DS11_v2 --os-disk-name os
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image SUSE:SLES-SAP:12-SP1:latest --admin-username <username> --admin-password
<password> --size Standard_DS11_v2 --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image RedHat:RHEL:7.2:latest --admin-username <username> --admin-password
<password> --size Standard_DS11_v2 --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --image "Oracle:Oracle-Linux:7.2:latest" --admin-username <username> --admin-
password <password> --size Standard_DS11_v2 --os-disk-name os --authentication-type password

#####
# Create a new virtual machine with a VHD that contains the private image that you want to use
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --os-type Windows --admin-username <username> --admin-password <password> --size
Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --
os-disk-name os --image <path to image vhd>
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --os-type Linux --admin-username <username> --admin-password <password> --size
Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --
os-disk-name os --image <path to image vhd> --authentication-type password

#####
# Create a new virtual machine with a Managed Disk Image
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --admin-username <username> --admin-password <password> --size Standard_DS11_v2 --
os-disk-name os --image <managed disk image id>
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics
SAPERPDemoNIC --admin-username <username> --admin-password <password> --size Standard_DS11_v2 --
os-disk-name os --image <managed disk image id> --authentication-type password

Optionally add additional disks and restore necessary content. All blob names (URLs to the blobs)
must be unique within Azure.
# Optional: Attach additional VHD data disks
az vm unmanaged-disk attach --resource-group $rgName --vm-name SAPERPDemo --size-gb 1023 --vhd-uri
https://$rgNameLower.blob.core.windows.net/vhds/data.vhd --new

# Optional: Attach additional Managed Disks


az vm disk attach --resource-group $rgName --vm-name SAPERPDemo --size-gb 1023 --disk datadisk --
new

Template

You can use the sample templates on the Azure-quickstart-templates repository on GitHub.
Simple Linux VM
Simple Windows VM
VM from image
Implement a set of VMs that communicate within Azure
This non-hybrid scenario is a typical scenario for training and demo purposes where the software
representing the demo/training scenario is spread over multiple VMs. The different components
installed in the different VMs need to communicate with each other. Again, in this scenario no on-
premises network communication or cross-premises scenario is needed.
This scenario is an extension of the installation described in chapter Single VM with SAP NetWeaver
demo/training scenario of this document. In this case, more virtual machines will be added to an
existing resource group. In the following example, the training landscape consists of an SAP ASCS/SCS
VM, a VM running a DBMS, and an SAP Application Server instance VM.
Before you build this scenario, you need to think about basic settings as already exercised in the
scenario before.
Resource Group and Virtual Machine naming
All resource group names must be unique. Develop your own naming scheme for your resources, such as <rg-name>-suffix.
The virtual machine name has to be unique within the resource group.
Set up Network for communication between the different VMs

To prevent naming collisions with clones of the same training/demo landscapes, you need to create an
Azure Virtual Network for every landscape. DNS name resolution will be provided by Azure or you can
configure your own DNS server outside Azure (not to be further discussed here). In this scenario, we
do not configure our own DNS. For all virtual machines inside one Azure Virtual Network,
communication via hostnames will be enabled.
The reasons to separate training or demo landscapes by virtual networks and not only resource groups
could be:
The SAP landscape as set up needs its own AD/OpenLDAP and a Domain Server needs to be part of
each of the landscapes.
The SAP landscape as set up has components that need to work with fixed IP addresses.
More details about Azure Virtual Networks and how to define them can be found in this article.

Deploying SAP VMs with corporate network connectivity (Cross-


Premises)
You run an SAP landscape and want to divide the deployment between bare-metal for high-end DBMS
servers, on-premises virtualized environments for application layers, and smaller 2-Tier configured
SAP systems and Azure IaaS. The base assumption is that SAP systems within one SAP landscape need
to communicate with each other and with many other software components deployed in the company,
independent of their deployment form. There also should be no differences introduced by the
deployment form for the end user connecting with SAP GUI or other interfaces. These conditions can
only be met when we have the on-premises Active Directory/OpenLDAP and DNS services extended to
the Azure systems through site-to-site/multi-site connectivity or private connections like Azure
ExpressRoute.
Scenario of an SAP landscape
The cross-premises or hybrid scenario can be roughly described like in the graphics below:

The minimum requirement is the use of secure communication protocols such as SSL/TLS for browser
access or VPN-based connections for system access to the Azure services. The assumption is that
companies handle the VPN connection between their corporate network and Azure differently. Some
companies might simply open all the ports. Other companies might want to be precise about which ports they need to open.
The table below lists typical SAP communication ports. Basically, it is sufficient to open the SAP gateway port.

Service | Port Name | Example (<nn> = 01) | Default Range (min - max) | Comment
Dispatcher | sapdp<nn>, see * | 3201 | 3200 - 3299 | SAP Dispatcher, used by SAP GUI for Windows and Java
Message server | sapms<sid>, see ** | 3600 | free sapms<anySID> | sid = SAP-System-ID
Gateway | sapgw<nn>, see * | 3301 | free | SAP gateway, used for CPIC and RFC communication
SAP router | sapdp99 | 3299 | free | Only CI (central instance). Service names can be reassigned in /etc/services to an arbitrary value after installation.

*) nn = SAP Instance Number
**) sid = SAP-System-ID
More detailed information on ports required for different SAP products or services by SAP products can be found here: https://scn.sap.com/docs/DOC-17124. With this document, you should be able to open dedicated ports in the VPN device necessary for specific SAP products and scenarios.
Other security measures when deploying VMs in such a scenario could be to create a Network Security
Group to define access rules.
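
As an illustration, a Network Security Group rule restricting inbound traffic to the typical SAP port range from the corporate address space could be created with PowerShell as sketched below (placeholder names and address prefixes, not a complete network design):

# Allow the SAP port range 3200-3399 only from the corporate network into the SAP subnet
$rule = New-AzNetworkSecurityRuleConfig -Name "Allow-SAP-Ports" -Protocol Tcp -Direction Inbound -Priority 110 -SourceAddressPrefix "<corporate address prefix>" -SourcePortRange * -DestinationAddressPrefix "<SAP subnet prefix>" -DestinationPortRange "3200-3399" -Access Allow
New-AzNetworkSecurityGroup -Name "SAPCrossPremisesNSG" -ResourceGroupName "<resource group>" -Location "<location>" -SecurityRules $rule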
Printing on a local network printer from SAP instance in Azure
Printing over TCP/IP in Cross-Premises scenario

Setting up your on-premises TCP/IP based network printers in an Azure VM is overall the same as in
your corporate network, assuming you do have a VPN Site-To-Site tunnel or ExpressRoute connection
established.

Windows
To do this:
Some network printers come with a configuration wizard which makes it easy to set up your
printer in an Azure VM. If no wizard software has been distributed with the printer, the manual
way to set up the printer is to create a new TCP/IP printer port.
Open Control Panel -> Devices and Printers -> Add a printer
Choose Add a printer using a TCP/IP address or hostname
Type in the IP address of the printer
Printer Port standard 9100
If necessary install the appropriate printer driver manually.

Linux
As for Windows, just follow the standard procedure to install a network printer: follow the public Linux guides for SUSE, Red Hat, or Oracle Linux on how to add a printer.

Host-based printer over SMB (shared printer) in Cross-Premises scenario

Host-based printers are not network-compatible by design. But a host-based printer can be shared
among computers on a network as long as the printer is connected to a powered-on computer.
Connect your corporate network either Site-To-Site or ExpressRoute and share your local printer. The
SMB protocol uses NetBIOS instead of DNS as name service. The NetBIOS host name can be different
from the DNS host name. The standard case is that the NetBIOS host name and the DNS host name are
identical. The DNS domain does not make sense in the NetBIOS name space. Accordingly, the fully
qualified DNS host name consisting of the DNS host name and DNS domain must not be used in the
NetBIOS name space.
The printer share is identified by a unique name in the network:
Host name of the SMB host (always needed).
Name of the share (always needed).
Name of the domain if printer share is not in the same domain as SAP system.
Additionally, a user name and a password may be required to access the printer share.
How to:

Windows
Share your local printer. In the Azure VM, open the Windows Explorer and type in the share name
of the printer. A printer installation wizard will guide you through the installation process.

Linux
Here are some examples of documentation about configuring network printers in Linux or
including a chapter regarding printing in Linux. It will work the same way in an Azure Linux VM as
long as the VM is part of a VPN:
SLES: https://en.opensuse.org/SDB:Printing_via_SMB_(Samba)_Share_or_Windows_Share
RHEL or Oracle Linux: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/system_administrators_guide/index#sec-Starting_Print_Settings_Config

USB Printer (printer forwarding)

In Azure, the ability of Remote Desktop Services to provide users access to their local printer devices in a remote session is not available.
Windows
More details on printing with Windows can be found here:
https://technet.microsoft.com/library/jj590748.aspx.

Integration of SAP Azure Systems into Correction and Transport System (TMS) in Cross-Premises
The SAP Change and Transport System (TMS) needs to be configured to export and import transport
request across systems in the landscape. We assume that the development instances of an SAP system
(DEV) are located in Azure whereas the quality assurance (QA) and productive systems (PRD) are on-
premises. Furthermore, we assume that there is a central transport directory.
Configuring the Transport Domain

Configure your Transport Domain on the system you designated as the Transport Domain Controller
as described in Configuring the Transport Domain Controller. A system user TMSADM will be created
and the required RFC destination will be generated. You may check these RFC connections using the
transaction SM59. Hostname resolution must be enabled across your transport domain.
How to:
In our scenario, we decided that the on-premises QAS system will be the CTS domain controller. Call
transaction STMS. The TMS dialog box appears. A Configure Transport Domain dialog box is
displayed. (This dialog box only appears if you have not yet configured a transport domain.)
Make sure that the automatically created user TMSADM is authorized (SM59 -> ABAP Connection -
> TMSADM@E61.DOMAIN_E61 -> Details -> Utilities(M) -> Authorization Test). The initial screen of
transaction STMS should show that this SAP System is now functioning as the controller of the
transport domain as shown here:

Including SAP Systems in the Transport Domain


The sequence of including an SAP system in a transport domain looks as follows:
On the DEV system in Azure, go to the transport system (Client 000) and call transaction STMS.
Choose Other Configuration from the dialog box and continue with Include System in Domain.
Specify the Domain Controller as target host (Including SAP Systems in the Transport Domain). The
system is now waiting to be included in the transport domain.
For security reasons, you then have to go back to the domain controller to confirm your request.
Choose System Overview and approve the waiting system. Then confirm the prompt and the
configuration will be distributed.
This SAP system now contains the necessary information about all the other SAP systems in the
transport domain. At the same time, the address data of the new SAP system is sent to all the other
SAP systems, and the SAP system is entered in the transport profile of the transport control program.
Check whether RFCs and access to the transport directory of the domain work.
Continue with the configuration of your transport system as usual as described in the documentation
Change and Transport System.
How to:
Make sure your STMS on premises is configured correctly.
Make sure the hostname of the Transport Domain Controller can be resolved by your virtual
machine on Azure and vice versa.
Call transaction STMS -> Other Configuration -> Include System in Domain.
Confirm the connection in the on premises TMS system.
Configure transport routes, groups, and layers as usual.
In site-to-site connected cross-premises scenarios, the latency between on-premises and Azure still can
be substantial. If we follow the sequence of transporting objects through development and test
systems to production or think about applying transports or support packages to the different
systems, you realize that, dependent on the location of the central transport directory, some of the
systems will encounter high latency reading or writing data in the central transport directory. The
situation is similar to SAP landscape configurations where the different systems are spread through
different data centers with substantial distance between the data centers.
In order to work around such latency and have the systems read or write fast to or from
the transport directory, you can set up two STMS transport domains (one for on-premises and one
for the systems in Azure) and link the transport domains. Check this documentation, which explains
the principles behind this concept in the SAP TMS:
https://help.sap.com/saphelp_me60/helpdata/en/c4/6045377b52253de10000009b38f889/content.htm?frameset=/en/57/38dd924eb711d182bf0000e829fbfe/frameset.htm.
How to:
Set up a transport domain on each location (on-premises and Azure) using transaction STMS
https://help.sap.com/saphelp_nw70ehp3/helpdata/en/44/b4a0b47acc11d1899e0000e829fbbd/content.htm
Link the domains with a domain link and confirm the link between the two domains.
https://help.sap.com/saphelp_nw73ehp1/helpdata/en/a3/139838280c4f18e10000009b38f8cf/content.htm
Distribute the configuration to the linked system.
RFC traffic between SAP instances located in Azure and on-premises (Cross-Premises)
RFC traffic between systems that are on-premises and in Azure needs to work. To set up a
connection, call transaction SM59 in the source system where you need to define an RFC connection
towards the target system. The configuration is similar to the standard setup of an RFC Connection.
We assume that in the cross-premises scenario, the VMs that run SAP systems which need to
communicate with each other are in the same domain. Therefore, the setup of an RFC connection
between SAP systems does not differ from the setup steps and inputs in on-premises scenarios.
Accessing local fileshares from SAP instances located in Azure or vice versa
SAP instances located in Azure need to access file shares, which are within the corporate premises. In
addition, on-premises SAP instances need to access file shares, which are located in Azure. To enable
the file shares, you must configure the permissions and sharing options on the local system. Make sure
to open the ports on the VPN or ExpressRoute connection between Azure and your datacenter.
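As an illustration for the Linux case, mounting an on-premises SMB share from an Azure VM could look like the following sketch; the server name, share, mount point, and credentials are placeholders, and TCP port 445 must be allowed on the VPN or ExpressRoute connection.

```bash
# Hypothetical example: mount an on-premises SMB/CIFS share in an Azure Linux VM
# (requires the cifs-utils package); names and credentials are placeholders.
sudo mkdir -p /mnt/corpshare
sudo mount -t cifs //fileserver.contoso.local/sapexchange /mnt/corpshare \
  -o username=sapuser,domain=CONTOSO,vers=3.0
```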

Supportability
Azure Extension for SAP
In order to feed some Azure infrastructure information of mission-critical SAP systems to the SAP Host
Agent instances installed in VMs, an Azure (VM) Extension for SAP needs to be installed on the deployed
VMs. Since the demands by SAP were specific to SAP applications, Microsoft decided not to implement the
required functionality generically in Azure, but to leave it to customers to deploy the necessary VM
extension and configurations to their Virtual Machines running in Azure. However, deployment and
lifecycle management of the Azure VM Extension for SAP is mostly automated by Azure.
Solution design
The solution developed to enable the SAP Host Agent to get the required information is based on the
architecture of the Azure VM Agent and Extension framework. The idea of the Azure VM Agent and
Extension framework is to allow installation of software applications available in the Azure VM
Extension gallery within a VM. The principal idea behind this concept is to allow, in cases like the Azure
Extension for SAP, the deployment of special functionality into a VM and the configuration of such
software at deployment time.
The 'Azure VM Agent' that enables handling of specific Azure VM Extensions within the VM is injected
into Windows VMs by default on VM creation in the Azure portal. In the case of SUSE, Red Hat, or Oracle
Linux, the VM agent is already part of the Azure Marketplace image. If you upload a Linux
VM from on-premises to Azure, the VM agent has to be installed manually.
The basic building blocks of the solution to provide Azure infrastructure information to the SAP Host Agent
in Azure look like this:

As shown in the block diagram above, one part of the solution is hosted in the Azure VM Image and
Azure Extension Gallery, which is a globally replicated repository that is managed by Azure Operations.
It is the responsibility of the joint SAP/MS team working on the Azure implementation of SAP to work
with Azure Operations to publish new versions of the Azure Extension for SAP.
When you deploy a new Windows VM, the Azure VM Agent is automatically added into the VM. The
function of this agent is to coordinate the loading and configuration of the Azure Extensions of the
VMs. For Linux VMs, the Azure VM Agent is already part of the Azure Marketplace OS image.
However, there is a step that still needs to be executed by the customer. This is the enablement and
configuration of the performance collection. The process related to the configuration is automated by a
PowerShell script or CLI command. The PowerShell script can be downloaded in the Microsoft Azure
Script Center as described in the Deployment Guide.
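As one possible shape of that configuration step, and assuming the optional 'aem' extension of the Azure CLI is available in your environment, the commands could look like the sketch below; the resource group and VM names are placeholders, and the Deployment Guide remains the authoritative reference.

```bash
# Hypothetical example: enable and verify the Azure Extension for SAP on a VM
# using the Azure CLI 'aem' extension; names are placeholders.
az extension add --name aem
az vm aem set --resource-group my-sap-rg --name sap-app-vm1
az vm aem verify --resource-group my-sap-rg --name sap-app-vm1
```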
The overall Architecture of the Azure extension for SAP looks like:

For the exact how-to and for detailed steps of using these PowerShell cmdlets or CLI
commands during deployments, follow the instructions given in the Deployment Guide.
Integration of Azure located SAP instance into SAProuter
SAP instances running in Azure need to be accessible from SAProuter as well.

A SAProuter enables the TCP/IP communication between participating systems if there is no direct IP
connection. This provides the advantage that no end-to-end connection between the communication
partners is necessary on network level. The SAProuter is listening on port 3299 by default. To connect
SAP instances through a SAProuter, you need to give the SAProuter string and host name with any
attempt to connect.
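As a sketch, a route string for a connection through a single SAProuter listening on the default port 3299 generally follows this pattern (host names are placeholders):

```
/H/<saprouter-host>/S/3299/H/<target-SAP-instance-host>
```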

SAP NetWeaver AS Java


So far the focus of the document has been SAP NetWeaver in general or the SAP NetWeaver ABAP
stack. In this small section, specific considerations for the SAP Java stack are listed. One of the most
important SAP NetWeaver Java exclusively based applications is the SAP Enterprise Portal. Other SAP
NetWeaver based applications like SAP PI and SAP Solution Manager use both the SAP NetWeaver
ABAP and Java stacks. Therefore, there certainly is a need to consider specific aspects related to the
SAP NetWeaver Java stack as well.
SAP Enterprise Portal
The setup of an SAP Portal in an Azure Virtual Machine does not differ from an on-premises
installation if you are deploying in cross-premises scenarios. Since DNS resolution is done on-premises,
the port settings of the individual instances can be done as configured on-premises. The
recommendations and restrictions described in this document so far apply for an application like SAP
Enterprise Portal or the SAP NetWeaver Java stack in general.

A special deployment scenario by some customers is the direct exposure of the SAP Enterprise Portal
to the Internet while the virtual machine host is connected to the company network via site-to-site VPN
tunnel or ExpressRoute. For such a scenario, you have to make sure that specific ports are open and
not blocked by firewall or network security group.
The initial portal URI is http(s)://<Portalserver>:5XX00/irj where the port is formed as documented by SAP in
https://help.sap.com/saphelp_nw70ehp1/helpdata/de/a2/f9d7fed2adc340ab462ae159d19509/frameset.htm.

If you want to customize the URL and/or ports of your SAP Enterprise Portal, check this
documentation:
Change Portal URL
Change Default port numbers, Portal port numbers
High Availability (HA) and Disaster Recovery (DR) for SAP
NetWeaver running on Azure Virtual Machines
Definition of terminologies
The term high availability (HA) is generally related to a set of technologies that minimizes IT
disruptions by providing business continuity of IT services through redundant, fault-tolerant, or
failover protected components inside the same data center. In our case, within one Azure Region.
Disaster recovery (DR) also targets minimizing IT services disruption and their recovery, but
across different data centers that are usually located hundreds of kilometers away. In our case, this usually
means between different Azure Regions within the same geopolitical region, or as established by you as a
customer.
Overview of High Availability
We can separate the discussion about SAP high availability in Azure into two parts:
Azure infrastructure high availability , for example HA of compute (VMs), network, storage etc.
and its benefits for increasing SAP application availability.
SAP application high availability , for example HA of SAP software components:
SAP application servers
SAP ASCS/SCS instance
DB server
and how it can be combined with Azure infrastructure HA.
SAP High Availability in Azure has some differences compared to SAP High Availability in an on-
premises physical or virtual environment. The following paper from SAP describes standard SAP High
Availability configurations in virtualized environments on Windows: https://scn.sap.com/docs/DOC-44415.
There is no sapinst-integrated SAP-HA configuration for Linux like it exists for Windows.
Regarding SAP HA on-premises for Linux find more information here: https://scn.sap.com/docs/DOC-8541.
Azure Infrastructure High Availability
There is currently a single-VM SLA of 99.9%. To get an idea of how the availability of a single VM might
look, you can build the product of the different available Azure SLAs:
https://azure.microsoft.com/support/legal/sla/.
The basis for the calculation is 30 days per month, or 43200 minutes. Therefore, 0.05% downtime
corresponds to 21.6 minutes. As usual, the availability of the different services will multiply in the
following way:
(Availability Service #1/100) * (Availability Service #2/100) * (Availability Service #3/100)
Like:
(99.95/100) * (99.9/100) * (99.9/100) = 0.9975 or an overall availability of 99.75%.
Virtual Machine (VM) High Availability
There are two types of Azure platform events that can affect the availability of your virtual machines:
planned maintenance and unplanned maintenance.
Planned maintenance events are periodic updates made by Microsoft to the underlying Azure
platform to improve overall reliability, performance, and security of the platform infrastructure that
your virtual machines run on.
Unplanned maintenance events occur when the hardware or physical infrastructure underlying
your virtual machine has faulted in some way. This may include local network failures, local disk
failures, or other rack level failures. When such a failure is detected, the Azure platform will
automatically migrate your virtual machine from the unhealthy physical server hosting your virtual
machine to a healthy physical server. Such events are rare, but may also cause your virtual machine
to reboot.
For more details, see Availability of Windows virtual machines in Azure and Availability of Linux virtual
machines in Azure.
Azure Storage Redundancy
The data in your Microsoft Azure Storage Account is always replicated to ensure durability and high
availability, meeting the Azure Storage SLA even in the face of transient hardware failures.
Since Azure Storage keeps three images of the data by default, RAID5 or RAID1 across multiple
Azure disks is not necessary.
For more details, see Azure Storage redundancy.
Utilizing Azure Infrastructure VM Restart to Achieve Higher Availability of SAP Applications
If you decide not to use functionalities like Windows Server Failover Clustering (WSFC) or Pacemaker
on Linux (currently only supported for SLES 12 and higher), Azure VM Restart is utilized to protect an
SAP System against planned and unplanned downtime of the Azure physical server infrastructure and
overall underlying Azure platform.

NOTE
It is important to mention that Azure VM Restart primarily protects VMs and NOT applications. VM Restart
does not offer high availability for SAP applications, but it does offer a certain level of infrastructure availability
and therefore indirectly higher availability of SAP systems. There is also no SLA for the time it will take to
restart a VM after a planned or unplanned host outage. Therefore, this method of high availability is not
suitable for critical components of an SAP system like (A)SCS or DBMS.

Another important infrastructure element for high availability is storage. For example Azure Storage
SLA is 99.9 % availability. If one deploys all VMs with its disks into a single Azure Storage Account,
potential Azure Storage unavailability will cause unavailability of all VMs that are placed in that Azure
Storage Account, and also all SAP components running inside of those VMs.
Instead of putting all VMs into one single Azure Storage Account, you can also use dedicated storage
accounts for each VM, and in this way increase overall VM and SAP application availability by using
multiple independent Azure Storage Accounts.
Azure managed disks are automatically placed in the Fault Domain of the virtual machine they are
attached to. If you place two virtual machines in an availability set and use Managed Disks, the
platform will take care of distributing the Managed Disks into different Fault Domains as well. If you
plan to use Premium Storage, we highly recommend using Managed Disks as well.
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure HA and storage
accounts could look like this:
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure HA and Managed
Disks could look like this:

For critical SAP components, we achieved the following so far:


High Availability of SAP Application Servers (AS)
SAP application server instances are redundant components. Each SAP AS instance is deployed
on its own VM, that is running in a different Azure Fault and Upgrade Domain (see chapters
Fault Domains and Upgrade Domains). This is ensured by using Azure availability sets (see
chapter Azure Availability Sets). Potential planned or unplanned unavailability of an Azure Fault
or Upgrade Domain will cause unavailability of a restricted number of VMs with their SAP AS
instances.
Each SAP AS instance is placed in its own Azure Storage account - potential unavailability of one
Azure Storage Account will cause unavailability of only one VM with its SAP AS instance.
However, be aware that there is a limit of Azure Storage Accounts within one Azure subscription.
To ensure automatic start of (A)SCS instance after the VM reboot, make sure to set the Autostart
parameter in (A)SCS instance start profile described in chapter Using Autostart for SAP
instances. Also read chapter High Availability for SAP Application Servers for more details.
Even if you use Managed Disks, those disks are also stored in an Azure Storage Account and can
be unavailable in an event of a storage outage.
Higher Availability of SAP (A)SCS instance
Here we utilize Azure VM Restart to protect the VM with installed SAP (A)SCS instance. In the
case of planned or unplanned downtime of Azure servers, VMs will be restarted on another
available server. As mentioned earlier, Azure VM Restart primarily protects VMs and NOT
applications, in this case, the (A)SCS instance. Through the VM Restart, we'll reach indirectly
higher availability of SAP (A)SCS instance. To ensure automatic start of (A)SCS instance after the
VM reboot, make sure to set Autostart parameter in (A)SCS instance start profile described in
chapter Using Autostart for SAP instances. This means the (A)SCS instance as a Single Point of
Failure (SPOF) running in a single VM will be the determinative factor for the availability of the
whole SAP landscape.
Higher Availability of DBMS Server
Here, similar to the SAP (A)SCS instance use case, we utilize Azure VM Restart to protect the VM
with installed DBMS software, and we achieve higher availability of DBMS software through VM
Restart. DBMS running in a single VM is also a SPOF, and it is the determinative factor for the
availability of the whole SAP landscape.
SAP Application High Availability on Azure IaaS
To achieve full SAP system high availability, we need to protect all critical SAP system components, for
example redundant SAP application servers, and unique components (for example Single Point of
Failure) like SAP (A)SCS instance, and DBMS.
High Availability for SAP Application Servers
For the SAP application servers/dialog instances, it's not necessary to think about a specific high
availability solution. High availability is achieved by redundancy and thereby having enough of them in
different virtual machines. They should all be placed in the same Azure availability set to avoid that the
VMs might be updated at the same time during planned maintenance downtime. The basic
functionality, which builds on different Upgrade and Fault Domains within an Azure Scale Unit was
already introduced in chapter Upgrade Domains. Azure availability sets were presented in chapter
Azure Availability Sets of this document.
There is only a finite number of Fault and Upgrade Domains that can be used by an Azure availability set
within an Azure Scale Unit. This means that when putting a number of VMs into one availability set, sooner
or later more than one VM ends up in the same Fault or Upgrade Domain.
Deploying a few SAP application server instances in their dedicated VMs and assuming that we got five
Upgrade Domains, the following picture emerges at the end. The actual max number of fault and
update domains within an availability set might change in the future:
High Availability for SAP Central Services on Azure
For High availability architecture of SAP Central Services on Azure, check the article High-availability
architecture and scenarios for SAP NetWeaver as entry information. The article points to more detailed
descriptions for the particular operating systems.
High Availability for the SAP database instance
The typical SAP DBMS HA setup is based on two DBMS VMs where DBMS high-availability
functionality is used to replicate data from the active DBMS instance to the second VM into a passive
DBMS instance.
High Availability and Disaster recovery functionality for DBMS in general as well as specific DBMS are
described in the DBMS Deployment Guide.
End-to-End High Availability for the Complete SAP System
Here are two examples of a complete SAP NetWeaver HA architecture in Azure - one for Windows and
one for Linux.
Unmanaged disks only: The concepts as explained below may need to be compromised a bit when you
deploy many SAP systems and the number of VMs deployed are exceeding the maximum limit of
Storage Accounts per subscription. In such cases, VHDs of VMs need to be combined within one
Storage Account. Usually you would do so by combining VHDs of SAP application layer VMs of
different SAP systems. We also combined different VHDs of different DBMS VMs of different SAP
systems in one Azure Storage Account. Thereby keeping the IOPS limits of Azure Storage Accounts in
mind (https://azure.microsoft.com/documentation/articles/storage-scalability-targets)
HA on Windows
The following Azure constructs are used for the SAP NetWeaver system, to minimize impact by
infrastructure issues and host patching:
The complete system is deployed on Azure (required - DBMS layer, (A)SCS instance, and complete
application layer need to run in the same location).
The complete system runs within one Azure subscription (required).
The complete system runs within one Azure Virtual Network (required).
The separation of the VMs of one SAP system into three availability sets is possible even with all the
VMs belonging to the same Virtual Network.
Each layer (for example DBMS, ASCS, Application Servers) must use a dedicated availability set.
All VMs running DBMS instances of one SAP system are in one availability set. We assume that
there is more than one VM running DBMS instances per system since native DBMS high availability
features are used, like SQL Server AlwaysOn or Oracle Data Guard.
All VMs running DBMS instances use their own storage account. DBMS data and log files are
replicated from one storage account to another storage account using DBMS high availability
functions that synchronize the data. Unavailability of one storage account will cause unavailability
of one SQL Windows cluster node, but not the whole SQL Server service.
All VMs running (A)SCS instance of one SAP system are in one availability set. A Windows Server
Failover Cluster (WSFC) is configured inside of those VMs to protect the (A)SCS instance.
All VMs running (A)SCS instances use their own storage account. (A)SCS instance files and SAP
global folder are replicated from one storage account to another storage account using SIOS
DataKeeper replication. Unavailability of one storage account will cause unavailability of one (A)SCS
Windows cluster node, but not the whole (A)SCS service.
ALL the VMs representing the SAP application server layer are in a third availability set.
ALL the VMs running SAP application servers use their own storage account. Unavailability of one
storage account will cause unavailability of one SAP application server, where other SAP application
servers continue to run.
The following figure illustrates the same landscape using Managed Disks.
HA on Linux
The architecture for SAP HA on Linux on Azure is basically the same as for Windows as described
above. Refer to SAP Note 1928533 for a list of supported high availability solutions.
Using Autostart for SAP instances
SAP offered the functionality to start SAP instances immediately after the start of the OS within the
VM. The exact steps were documented in SAP Knowledge Base Article 1909114. However, SAP is not
recommending to use the setting anymore because there is no control in the order of instance restarts,
assuming more than one VM got affected or multiple instances ran per VM. Assuming a typical Azure
scenario of one SAP application server instance in a VM and the case of a single VM eventually getting
restarted, the Autostart is not critical and can be enabled by adding this parameter:
Autostart = 1

Into the start profile of the SAP ABAP and/or Java instance.

NOTE
The Autostart parameter can have some downfalls as well. In more detail, the parameter triggers the start of an
SAP ABAP or Java instance when the related Windows/Linux service of the instance is started. That certainly is
the case when the operating system boots up. However, restarts of SAP services are also a common thing for
SAP Software Lifecycle Management functionality like SUM or other updates or upgrades. These functionalities
are not expecting an instance to be restarted automatically at all. Therefore, the Autostart parameter should be
disabled before running such tasks. The Autostart parameter also should not be used for SAP instances that
are clustered, like ASCS/SCS/CI.

See additional information regarding autostart for SAP instances here:


Start/Stop SAP along with your Unix Server Start/Stop
Starting and Stopping SAP NetWeaver Management Agents
How to enable auto Start of HANA Database
Larger 3-Tier SAP systems
High-Availability aspects of 3-Tier SAP configurations got discussed in earlier sections already. But
what about systems where the DBMS server requirements are too large to have it located in Azure, but
the SAP application layer could be deployed into Azure?
Location of 3-Tier SAP configurations
It is not supported to split the application tier itself or the application and DBMS tier between on-
premises and Azure. An SAP system is either completely deployed on-premises OR in Azure. It is also
not supported to have some of the application servers run on-premises and some others in Azure.
That is the starting point of the discussion. We also are not supporting to have the DBMS components
of an SAP system and the SAP application server layer deployed in two different Azure Regions. For
example, DBMS in West US and SAP application layer in Central US. Reason for not supporting such
configurations is the latency sensitivity of the SAP NetWeaver architecture.
However, over the course of the last years, data center partners developed co-locations to Azure Regions.
These co-locations often are in close proximity to the physical Azure data centers within an Azure
Region. The short distance and connection of assets in the co-location through ExpressRoute into Azure
can result in a latency that is less than 2 milliseconds. In such cases, it is possible to locate the DBMS layer
(including storage SAN/NAS) in such a co-location and the SAP application layer in Azure. An example of
such a deployment is HANA Large Instances.
Offline Backup of SAP systems
Dependent on the SAP configuration chosen (2-Tier or 3-Tier), there could be a need to back up the
content of the VM itself plus a backup of the database. The DBMS-related backups are expected
to be done with database methods. A detailed description for the different databases can be found in
the DBMS Guide. On the other hand, the SAP data can be backed up in an offline manner (including the
database content as well) as described in this section, or online as described in the next section.
The offline backup would basically require a shutdown of the VM through the Azure portal and a copy
of the base VM disk plus all attached disks to the VM. This would preserve a point in time image of the
VM and its associated disk. It is recommended to copy the backups into a different Azure Storage
Account. Hence the procedure described in chapter Copying disks between Azure Storage Accounts of
this document would apply.
A restore of that state would consist of deleting the base VM as well as the original disks of the base
VM and mounted disks, copying back the saved disks to the original Storage Account or resource
group for managed disks and then redeploying the system. This article shows an example how to
script this process in PowerShell: https://www.westerndevs.com/_/azure-snapshots/
Make sure to install a new SAP license since restoring a VM backup as described above creates a new
hardware key.
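For VMs using managed disks, snapshots are one way to capture such a point-in-time image before copying it away; a minimal sketch with the Azure CLI could look like this, where resource group, VM, and disk names are placeholders (the PowerShell article above describes an alternative approach).

```bash
# Hypothetical example: stop the VM and snapshot one of its managed disks.
# Repeat the snapshot step for the OS disk and every attached data disk.
az vm deallocate --resource-group my-sap-rg --name sap-app-vm1
az snapshot create \
  --resource-group my-sap-rg \
  --name sap-app-vm1-osdisk-snap \
  --source sap-app-vm1-osdisk
```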
Online backup of an SAP system
Backup of the DBMS is performed with DBMS-specific methods as described in the DBMS Guide.
Other VMs within the SAP system can be backed up using Azure Virtual Machine Backup functionality.
Azure Virtual Machine Backup is a standard method to back up a complete VM in Azure. Azure Backup
stores the backups in Azure and allows a restore of a VM again.
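A minimal sketch of enabling this protection with the Azure CLI could look like the following, assuming a Recovery Services vault already exists; vault, resource group, VM, and policy names are placeholders.

```bash
# Hypothetical example: protect a VM with Azure Backup using an existing vault
# and backup policy; all names are placeholders.
az backup protection enable-for-vm \
  --resource-group my-sap-rg \
  --vault-name my-recovery-vault \
  --vm sap-app-vm1 \
  --policy-name DefaultPolicy
```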

NOTE
As of Dec 2015 using VM Backup does NOT keep the unique VM ID which is used for SAP licensing. This means
that a restore from a VM backup requires installation of a new SAP license key as the restored VM is
considered to be a new VM and not a replacement of the former one which was saved.
Windows
Theoretically, VMs that run databases can be backed up in a consistent manner as well if the DBMS system
supports the Windows VSS (Volume Shadow Copy Service
https://msdn.microsoft.com/library/windows/desktop/bb968832(v=vs.85).aspx) as, for example, SQL Server
does. However, be aware that based on Azure VM backups point-in-time restores of databases are not
possible. Therefore, the recommendation is to perform backups of databases with DBMS functionality instead
of relying on Azure VM Backup.
To get familiar with Azure Virtual Machine Backup start here: /azure/backup/backup-azure-vms.
Other possibilities are to use a combination of Microsoft Data Protection Manager installed in an Azure VM
and Azure Backup to backup/restore databases. More information can be found here: /azure/backup/backup-
azure-dpm-introduction.

Linux
There is no equivalent to Windows VSS in Linux. Therefore only file-consistent backups are possible but not
application-consistent backups. The SAP DBMS backup should be done using DBMS functionality. The file
system which includes the SAP-related data can be saved, for example, using tar as described here:
https://help.sap.com/saphelp_nw70ehp2/helpdata/en/d3/c0da3ccbb04d35b186041ba6ac301f/content.htm
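A simple sketch of such a file-level backup with tar could look like this; the paths and the backup target are placeholders, and database content should still be backed up with DBMS tools as described above.

```bash
# Hypothetical example: file-consistent archive of SAP-related file systems.
# Quiesce or stop the SAP instance first if you need a consistent state.
tar -czvf /backup/sap_filelevel_$(date +%Y%m%d).tar.gz /sapmnt /usr/sap
```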

Azure as DR site for production SAP landscapes


Since Mid 2014, extensions to various components around Hyper-V, System Center, and Azure enable
the usage of Azure as DR site for VMs running on-premises based on Hyper-V.
A blog detailing how to deploy this solution is documented here:
/archive/blogs/saponsqlserver/protecting-sap-solutions-with-azure-site-recovery.

Summary for High Availability for SAP systems


The key points of High Availability for SAP systems in Azure are:
At this point in time, the SAP single point of failure cannot be secured exactly the same way as it can
be done in on-premises deployments. The reason is that Shared Disk clusters can't yet be built in
Azure without the use of 3rd party software.
For the DBMS layer, you need to use DBMS functionality that does not rely on shared disk cluster
technology. Details are documented in the DBMS Guide.
To minimize the impact of problems within Fault Domains in the Azure infrastructure or host
maintenance, you should use Azure availability sets:
It is recommended to have one availability set for the SAP application layer.
It is recommended to have a separate availability set for the SAP DBMS layer.
It is NOT recommended to apply the same availability set for VMs of different SAP systems.
It is recommended to use Premium Managed Disks.
For Backup purposes of the SAP DBMS layer, check the DBMS Guide.
Backing up SAP Dialog instances makes little sense since it is usually faster to redeploy simple
dialog instances.
Backing up the VM, which contains the global directory of the SAP system and with it all the profiles
of the different instances, does make sense and should be performed with Windows Backup or, for
example, tar on Linux. Since there are differences between Windows Server 2008 (R2) and
Windows Server 2012 (R2), which make it easier to back up using the more recent Windows Server
releases, we recommend running Windows Server 2012 (R2) as Windows guest operating system.

Next steps
Read the articles:
Azure Virtual Machines deployment for SAP NetWeaver
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
[SAP HANA infrastructure configurations and operations on Azure](/azure/virtual-machines/workloads/sap/hana-vm-operations)
Azure Storage types for SAP workload

Azure has numerous storage types that differ vastly in capabilities, throughput, latency, and prices. Some of the
storage types are not, or of limited usable for SAP scenarios. Whereas, several Azure storage types are well
suited or optimized for specific SAP workload scenarios. Especially for SAP HANA, some Azure storage types got
certified for the usage with SAP HANA. In this document, we are going through the different types of storage
and describe their capability and usability with SAP workloads and SAP components.
A remark about the units used throughout this article: the public cloud vendors moved to using GiB (Gibibyte) or
TiB (Tebibyte) as size units, instead of Gigabyte or Terabyte. Therefore, all Azure documentation and pricing use
those units. Throughout the document, we reference the size units MiB, GiB, and TiB
exclusively. You might need to plan with MB, GB, and TB. So, be aware of some small differences in the
calculations if you need to size for a 400 MiB/sec throughput, instead of a 250 MiB/sec throughput.

Microsoft Azure Storage resiliency


Microsoft Azure storage of Standard HDD, Standard SSD, Azure premium storage, and Ultra disk keeps the base
VHD (with OS) and VM attached data disks or VHDs in three copies on three different storage nodes. Failing
over to another replica and seeding of a new replica in case of a storage node failure is transparent. As a result
of this redundancy, it is NOT required to use any kind of storage redundancy layer across multiple Azure disks.
This fact is called Local Redundant Storage (LRS). LRS is default for these types of storage in Azure. Azure
NetApp Files provides sufficient redundancy to achieve the same SLAs as other native Azure storage.
There are several more redundancy methods, which are all described in the article Azure Storage replication that
apply to some of the different storage types Azure has to offer.
Also keep in mind that different Azure storage types influence the single VM availability SLAs as released in SLA
for Virtual Machines.
Azure managed disks
Managed disks are a resource type in Azure Resource Manager that can be used instead of VHDs that are stored
in Azure Storage Accounts. Managed Disks automatically align with the availability set of the virtual machine
they are attached to and therefore increase the availability of your
virtual machine and the services that are running on the virtual machine. For more information, read the
overview article.
Related to resiliency, this example demonstrates the advantage of managed disks:
You are deploying your two DBMS VMs for your SAP system in an Azure availability set
As Azure deploys the VMs, the disk with the OS image will be placed in a different storage cluster. This avoids
that both VMs get impacted by an issue of a single Azure storage cluster
As you create new managed disks that you assign to these VMs to store the data and log files of your
database, these new disks for the two VMs are also deployed in separate storage clusters, so, that none of
disks of the first VM are sharing storage clusters with the disks of the second VM
When deploying without managed disks in customer-defined storage accounts, disk allocation is arbitrary and has no
awareness of the fact that VMs are deployed within an availability set (AvSet) for resiliency purposes.
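To illustrate, creating and attaching a managed data disk could look like the following sketch; the resource group, disk, and VM names as well as the size are placeholders, and the distribution across storage clusters is handled by the platform.

```bash
# Hypothetical example: create a managed premium data disk and attach it to a VM
# that is part of an availability set; names and sizes are placeholders.
az disk create \
  --resource-group my-sap-rg \
  --name sap-db-vm1-data01 \
  --size-gb 512 \
  --sku Premium_LRS
az vm disk attach \
  --resource-group my-sap-rg \
  --vm-name sap-db-vm1 \
  --name sap-db-vm1-data01
```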
NOTE
Out of this reason, and because of several other improvements that are exclusively available through managed disks, we require that
new deployments of VMs that use Azure block storage for their disks (all Azure storage except Azure NetApp Files) use
Azure managed disks for the base VHD/OS disks and for data disks that contain SAP database files. This applies
independent of whether you deploy the VMs through an availability set, across Availability Zones, or independent of the sets and zones.
Disks that are used for the purpose of storing backups are not necessarily required to be managed disks.

NOTE
Azure managed disks provide local redundancy (LRS) only.

Storage scenarios with SAP workloads


Persisted storage is needed in SAP workload in various components of the stack that you deploy in Azure. These
scenarios include at minimum the following:
Persisting the base VHD of your VM that holds the operating system and other software you install on that
disk. This disk/VHD is the root of your VM. Any changes made to it, need to be persisted. So, that the next
time, you stop and restart the VM, all the changes made before still exist. Especially in cases where the VM is
getting deployed by Azure onto another host than it was running originally
Persisted data disks. These disks are VHDs you attach to store application data in. This application data could
be data and log/redo files of a database, backup files, or software installations. This means any disk beyond your
base VHD that holds the operating system
File shares or shared disks that contain your global transport directory for NetWeaver or S/4HANA. Content
of those shares is either consumed by software running in multiple VMs or is used to build high-availability
failover cluster scenarios
The /sapmnt directory or common file shares for EDI processes or similar. Content of those shares is either
consumed by software running in multiple VMs or is used to build high-availability failover cluster scenarios
In the next few sections, the different Azure storage types and their usability for SAP workload gets discussed
that apply to the four scenarios above. A general categorization of how the different Azure storage types should
be used is documented in the article What disk types are available in Azure?. The recommendations for using
the different Azure storage types for SAP workload are not going to be majorly different.
For support restrictions on Azure storage types for the SAP NetWeaver/application layer of S/4HANA, read the SAP
support note 2015553. For SAP HANA certified and supported Azure storage types, read the article SAP HANA
Azure virtual machine storage configurations.
The sections describing the different Azure storage types will give you more background about the restrictions
and possibilities using the SAP supported storage.

Storage recommendations for SAP storage scenarios


Before going into the details, we present the summary and recommendations here, whereas the details for the
particular types of Azure storage follow this section of the document. Summarizing the storage recommendations
for the SAP storage scenarios in a table, it looks like:

| Usage scenario | Standard HDD | Standard SSD | Premium Storage | Ultra disk | Azure NetApp Files |
| --- | --- | --- | --- | --- | --- |
| OS disk | not suitable | restricted suitable (non-prod) | recommended | not possible | not possible |
| Global transport directory | not supported | not supported | recommended | recommended | recommended |
| /sapmnt | not suitable | restricted suitable (non-prod) | recommended | recommended | recommended |
| DBMS data volume SAP HANA M/Mv2 VM families | not supported | not supported | recommended | recommended | recommended² |
| DBMS log volume SAP HANA M/Mv2 VM families | not supported | not supported | recommended¹ | recommended | recommended² |
| DBMS data volume SAP HANA Esv3/Edsv4 VM families | not supported | not supported | recommended | recommended | recommended² |
| DBMS log volume SAP HANA Esv3/Edsv4 VM families | not supported | not supported | not supported | recommended | recommended² |
| DBMS data volume non-HANA | not supported | restricted suitable (non-prod) | recommended | recommended | not supported |
| DBMS log volume non-HANA M/Mv2 VM families | not supported | restricted suitable (non-prod) | recommended¹ | recommended | not supported |
| DBMS log volume non-HANA non-M/Mv2 VM families | not supported | restricted suitable (non-prod) | suitable for up to medium workload | recommended | not supported |

¹ With usage of Azure Write Accelerator for M/Mv2 VM families for log/redo log volumes
² Using ANF requires /hana/data as well as /hana/log to be on ANF
The characteristics you can expect from the different storage types are:

| Usage scenario | Standard HDD | Standard SSD | Premium Storage | Ultra disk | Azure NetApp Files |
| --- | --- | --- | --- | --- | --- |
| Throughput/IOPS SLA | no | no | yes | yes | yes |
| Latency reads | high | medium to high | low | sub-millisecond | sub-millisecond |
| Latency writes | high | medium to high | low (sub-millisecond¹) | sub-millisecond | sub-millisecond |
| HANA supported | no | no | yes¹ | yes | yes |
| Disk snapshots possible | yes | yes | yes | no | yes |
| Allocation of disks on different storage clusters when using availability sets | through managed disks | through managed disks | through managed disks | disk type not supported with VMs deployed through availability sets | no³ |
| Aligned with Availability Zones | yes | yes | yes | yes | needs engagement of Microsoft |
| Zonal redundancy | not for managed disks | not for managed disks | not for managed disks | no | no |
| Geo redundancy | not for managed disks | not for managed disks | no | no | no |

¹ With usage of Azure Write Accelerator for M/Mv2 VM families for log/redo log volumes
² Costs depend on provisioned IOPS and throughput
³ Creation of different ANF capacity pools does not guarantee deployment of capacity pools onto different storage units

IMPORTANT
To achieve less than 1 millisecond I/O latency using Azure NetApp Files (ANF), you need to work with Microsoft to arrange
the correct placement between your VMs and the NFS shares based on ANF. So far there is no mechanism in place that
provides an automatic proximity between a VM deployed and the NFS volumes hosted on ANF. Given the different setup
of the different Azure regions, the network latency added could push the I/O latency beyond 1 millisecond if the VM and
the NFS share are not allocated in proximity.

IMPORTANT
None of the currently offered Azure block storage based managed disks, or Azure NetApp Files offer any zonal or
geographical redundancy. As a result, you need to make sure that your high availability and disaster recovery
architectures are not relying on any type of Azure native storage replication for these managed disks, NFS or SMB shares.

Azure premium storage


Azure premium SSD storage got introduced with the goal to provide:
Low I/O latency
SLAs for IOPS and throughput
Less variability in I/O latency
This type of storage is targeting DBMS workloads, storage traffic that requires low single-digit millisecond
latency, and SLAs on IOPS and throughput. The cost basis in the case of Azure premium storage is not the actual
data volume stored in such disks, but the size category of such a disk, independent of the amount of the data
that is stored within the disk. You also can create disks on premium storage that are not directly mapping into
the size categories shown in the article Premium SSD. Conclusions out of this article are:
The storage is organized in ranges. For example, a disk in the range 513 GiB to 1024 GiB capacity shares the
same capabilities and the same monthly costs
The IOPS per GiB are not tracking linear across the size categories. Smaller disks below 32 GiB have higher
IOPS rates per GiB. For disks beyond 32 GiB to 1024 GiB, the IOPS rate per GiB is between 4-5 IOPS per GiB.
For larger disks up to 32,767 GiB, the IOPS rate per GiB is going below 1
The I/O throughput for this storage is not linear with the size of the disk category. For smaller disks, like the
category between 65 GiB and 128 GiB capacity, the throughput is around 780KB/GiB. Whereas for the
extreme large disks like a 32,767 GiB disk, the throughput is around 28KB/GiB
The IOPS and throughput SLAs cannot be changed without changing the capacity of the disk
The capability matrix for SAP workload looks like:

| Capability | Comment | Notes/Links |
| --- | --- | --- |
| OS base VHD | suitable | all systems |
| Data disk | suitable | all systems - specially for SAP HANA |
| SAP global transport directory | YES | Supported |
| SAP sapmnt | suitable | all systems |
| Backup storage | suitable | for short term storage of backups |
| Shares/shared disk | not available | Needs Azure Premium Files or third party |
| Resiliency | LRS | No GRS or ZRS available for disks |
| Latency | low-to medium | - |
| IOPS SLA | YES | - |
| IOPS linear to capacity | semi linear in brackets | Managed Disk pricing |
| Maximum IOPS per disk | 20,000 dependent on disk size | Also consider VM limits |
| Throughput SLA | YES | - |
| Throughput linear to capacity | semi linear in brackets | Managed Disk pricing |
| HANA certified | YES | specially for SAP HANA |
| Disk snapshots possible | YES | - |
| Azure Backup VM snapshots possible | YES | except for Write Accelerator cached disks |
| Costs | MEDIUM | - |

Azure premium storage does not fulfill SAP HANA storage latency KPIs with the common caching types offered
with Azure premium storage. In order to fulfill the storage latency KPIs for SAP HANA log writes, you need to
use Azure Write Accelerator caching as described in the article Enable Write Accelerator. Azure Write Accelerator
benefits all other DBMS systems for their transaction log writes and redo log writes. Therefore, it is
recommended to use it across all the SAP DBMS deployments. For SAP HANA, the usage of Azure Write
Accelerator in conjunction with Azure premium storage is mandatory.
Summary: Azure premium storage is one of the Azure storage types recommended for SAP workload. This
recommendation applies for non-production as well as production systems. Azure premium storage is suited to
handle database workloads. The usage of Azure Write Accelerator is going to improve write latency against
Azure premium disks substantially. However, for DBMS systems with high IOPS and throughput rates, you need
to either over-provision storage capacity or you need to use functionality like Windows Storage Spaces or
logical volume managers in Linux to build stripe sets that give you the desired capacity on the one side, but also
the necessary IOPS or throughput at best cost efficiency.
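A minimal sketch of such a stripe set with the Linux logical volume manager could look like this; the device names, stripe size, and volume names are placeholders and need to match your actual disk layout and DBMS recommendations.

```bash
# Hypothetical example: stripe three premium data disks into one logical volume
# to aggregate IOPS and throughput; device and volume names are placeholders.
pvcreate /dev/sdc /dev/sdd /dev/sde
vgcreate vg_db_data /dev/sdc /dev/sdd /dev/sde
lvcreate -i 3 -I 256 -l 100%FREE -n lv_db_data vg_db_data
mkfs.xfs /dev/vg_db_data/lv_db_data
```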
Azure burst functionality for premium storage
For Azure premium storage disks smaller or equal to 512 GiB in capacity, burst functionality is offered. The exact
way how disk bursting works is described in the article Disk bursting. When you read the article, you
understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the
nominal IOPS and throughput of the disks (for details on the nominal throughput see Managed Disk pricing).
You are going to accrue the delta of IOPS and throughput between your current usage and the nominal values
of the disk. The bursts are limited to a maximum of 30 minutes.
The ideal cases where this burst functionality can be planned in is likely going to be the volumes or disks that
contain data files for the different DBMS. The I/O workload expected against those volumes, especially with
small to mid-ranged systems is expected to look like:
Low to moderate read workload since data ideally is cached in memory, or like in the case of HANA should
be completely in memory
Bursts of write triggered by database checkpoints or savepoints that are issued on a regular basis
Backup workload that reads in a continuous stream in cases where backups are not executed via storage
snapshots
For SAP HANA, load of the data into memory after an instance restart
Especially on smaller DBMS systems where your workload is handling a few hundred transactions per seconds
only, such a burst functionality can make sense as well for the disks or volumes that store the transaction or
redo log. Expected workload against such a disk or volumes looks like:
Regular writes to the disk that are dependent on the workload and the nature of workload since every
commit issued by the application is likely to trigger an I/O operation
Higher workload in throughput for cases of operational tasks, like creating or rebuilding indexes
Read bursts when performing transaction log or redo log backups
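As a simplified illustration of the accrual idea described above (not the exact Azure accounting), and assuming for the sake of the example a small disk with 500 nominal IOPS and a 3,500 IOPS burst level, the credit flow could be pictured like this:

```bash
# Illustration only: credits build while usage stays below the nominal disk values
# and are drained while bursting above them; numbers are example assumptions.
nominal=500; burst=3500; steady=100
echo "Credits accrued per second at steady state: $((nominal - steady)) IOPS"
echo "Credits drained per second while bursting:  $((burst - nominal)) IOPS"
```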
Azure Ultra disk
Azure ultra disks deliver high throughput, high IOPS, and consistent low latency disk storage for Azure IaaS
VMs. Some additional benefits of ultra disks include the ability to dynamically change the IOPS and throughput
of the disk, along with your workloads, without the need to restart your virtual machines (VM). Ultra disks are
suited for data-intensive workloads such as SAP DBMS workload. Ultra disks can only be used as data disks and
can't be used as the base VHD disk that stores the operating system. We would recommend the usage of Azure
premium storage as the base VHD disk.
As you create an ultra disk, you have three dimensions you can define:
The capacity of the disk. Ranges are from 4 GiB to 65,536 GiB
Provisioned IOPS for the disk. Different maximum values apply to the capacity of the disk. Read the article
Ultra disk for more details
Provisioned storage bandwidth. Different maximum bandwidth applies dependent on the capacity of the
disk. Read the article Ultra disk for more details
The cost of a single disk is determined by the three dimensions you can define for the particular disks
separately.
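A minimal sketch of creating such a disk with the Azure CLI could look like the following; all names, the zone, and the provisioned values are placeholders, and the target VM must be enabled for Ultra disks and deployed in a matching Availability Zone.

```bash
# Hypothetical example: create an Ultra disk with explicitly provisioned capacity,
# IOPS, and throughput, and attach it as a data disk; values are placeholders.
az disk create \
  --resource-group my-sap-rg \
  --name sap-db-vm1-log01 \
  --size-gb 512 \
  --sku UltraSSD_LRS \
  --disk-iops-read-write 20000 \
  --disk-mbps-read-write 500 \
  --zone 1
az vm disk attach --resource-group my-sap-rg --vm-name sap-db-vm1 --name sap-db-vm1-log01
```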
The capability matrix for SAP workload looks like:

| Capability | Comment | Notes/Links |
| --- | --- | --- |
| OS base VHD | does not work | - |
| Data disk | suitable | all systems |
| SAP global transport directory | YES | Supported |
| SAP sapmnt | suitable | all systems |
| Backup storage | suitable | for short term storage of backups |
| Shares/shared disk | not available | Needs third party |
| Resiliency | LRS | No GRS or ZRS available for disks |
| Latency | very low | - |
| IOPS SLA | YES | - |
| IOPS linear to capacity | semi linear in brackets | Managed Disk pricing |
| Maximum IOPS per disk | 1,200 to 160,000 | dependent on disk capacity |
| Throughput SLA | YES | - |
| Throughput linear to capacity | semi linear in brackets | Managed Disk pricing |
| HANA certified | YES | - |
| Disk snapshots possible | NO | - |
| Azure Backup VM snapshots possible | NO | - |
| Costs | Higher than Premium storage | - |

Summary: Azure ultra disks are a suitable storage with low latency for all kinds of SAP workload. So far, Ultra
disk can only be used in combination with VMs that have been deployed through Availability Zones (zonal
deployment). Ultra disk is not supporting storage snapshots at this point in time. In contrast to all other storage,
Ultra disk cannot be used for the base VHD disk. Ultra disk is ideal for cases where I/O workload fluctuates a lot
and you want to adapt deployed storage throughput or IOPS to storage workload patterns instead of sizing for
maximum usage of bandwidth and IOPS.

Azure NetApp files (ANF)


Azure NetApp Files is the result of a cooperation between Microsoft and NetApp with the goal to provide
high performing Azure native NFS and SMB shares. The emphasis is to provide high bandwidth and low latency
storage that enables DBMS deployment scenarios, and over time enable typical operational functionality of the
NetApp storage through Azure as well. NFS/SMB shares are offered in three different service levels that
differentiate in storage throughput and in price. The service levels are documented in the article Service levels
for Azure NetApp Files. For the different types of SAP workload the following service levels are highly
recommended:
SAP DBMS workload: Performance, ideally Ultra
SAPMNT share: Performance, ideally Ultra
Global transport directory: Performance, ideally Ultra

NOTE
The minimum provisioning size is a 4 TiB unit that is called capacity pool. You then create volumes out of this capacity
pool. Whereas the smallest volume you can build is 100 GiB. You can expand a capacity pool in TiB steps. For pricing,
check the article Azure NetApp Files Pricing

ANF storage is currently supported for several SAP workload scenarios:


Providing SMB or NFS shares for SAP's global transport directory
The share sapmnt in high availability scenarios as documented in:
High availability for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB) for
SAP applications
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure
NetApp Files for SAP applications
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure
NetApp Files for SAP applications
SAP HANA deployments using NFS v4.1 shares for /hana/data and /hana/log volumes and/or NFS v4.1 or
NFS v3 volumes for /hana/shared volumes as documented in the article SAP HANA Azure virtual machine
storage configurations

NOTE
No other DBMS workload is supported for Azure NetApp Files based NFS or SMB shares. Updates and changes will be
provided if this is going to change.
As already with Azure premium storage, a fixed or linear throughput size per GiB can be a problem when you are
required to adhere to some minimum throughput numbers, as is the case for SAP HANA. With ANF, this
problem can become more pronounced than with Azure premium disk. In the case of Azure premium disk, you can
take several smaller disks with a relatively high throughput per GiB and stripe across them to be cost efficient
and have higher throughput at lower capacity. This kind of striping does not work for NFS or SMB shares hosted
on ANF. This restriction can result in deployment of overcapacity, for example:
To achieve, for example, a throughput of 250 MiB/sec on an NFS volume hosted on ANF, you need to deploy
1.95 TiB capacity of the Ultra service level.
To achieve 400 MiB/sec, you would need to deploy 3.125 TiB capacity. So you may need to over-provision
capacity to achieve the throughput you require of the volume (see the rough sizing sketch after this list).
This over-provisioning of capacity impacts the pricing of smaller HANA instances.
In the space of using NFS on top of ANF for the SAP /sapmnt directory, you usually get by with the
minimum capacity of 100 GiB to 150 GiB that is enforced by Azure NetApp Files. However, customer
experience showed that the related throughput of 12.8 MiB/sec (using the Ultra service level) may not be enough
and may have negative impact on the stability of the SAP system. In such cases, customers could avoid issues
by increasing the size of the /sapmnt volume, so that more throughput is provided to that volume.
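The rough sizing sketch referenced above assumes that the Ultra service level delivers 128 MiB/s of throughput per provisioned TiB, which is consistent with the figures quoted in the examples; verify against the current service-level documentation before sizing.

```bash
# Illustration only: required ANF capacity for a target throughput, assuming
# 128 MiB/s per TiB for the Ultra service level.
for mibps in 250 400; do
  awk -v t="$mibps" 'BEGIN { printf "%d MiB/s -> %.3f TiB to provision\n", t, t/128 }'
done
```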
The capability matrix for SAP workload looks like:

| Capability | Comment | Notes/Links |
| --- | --- | --- |
| OS base VHD | does not work | - |
| Data disk | suitable | SAP HANA only |
| SAP global transport directory | YES | SMB as well as NFS |
| SAP sapmnt | suitable | all systems; SMB (Windows only) or NFS (Linux only) |
| Backup storage | suitable | - |
| Shares/shared disk | YES | SMB 3.0, NFS v3, and NFS v4.1 |
| Resiliency | LRS | No GRS or ZRS available for disks |
| Latency | very low | - |
| IOPS SLA | YES | - |
| IOPS linear to capacity | strictly linear | Dependent on Service Level |
| Throughput SLA | YES | - |
| Throughput linear to capacity | semi linear in brackets | Dependent on Service Level |
| HANA certified | YES | - |
| Disk snapshots possible | YES | - |
| Azure Backup VM snapshots possible | NO | - |
| Costs | Higher than Premium storage | - |

Additional built-in functionality of ANF storage:


Capability to perform snapshots of volume
Cloning of ANF volumes from snapshots
Restore volumes from snapshots (snap-revert)
Summary: Azure NetApp Files is a HANA-certified low latency storage that allows you to deploy NFS and SMB
volumes or shares. The storage comes with three different service levels that provide different throughput and
IOPS in a linear manner per GiB capacity of the volume. The ANF storage enables you to deploy SAP HANA scale-
out scenarios with a standby node. The storage is suitable for providing file shares as needed for /sapmnt or the
SAP global transport directory. ANF storage comes with additional functionality that is available as native
NetApp functionality.

Azure standard SSD storage


Compared to Azure standard HDD storage, Azure standard SSD storage delivers better availability, consistency,
reliability, and latency. It is optimized for workloads that need consistent performance at lower IOPS levels. This
storage is the minimum storage used for non-production SAP systems that have low IOPS and throughput
demands. The capability matrix for SAP workload looks like:

Capability | Comment | Notes/Links
---------- | ------- | -----------
OS base VHD | restricted suitable | non-production systems
Data disk | restricted suitable | some non-production systems with low IOPS and latency demands
SAP global transport directory | NO | Not supported
SAP sapmnt | restricted suitable | non-production systems
Backup storage | suitable | -
Shares/shared disk | not available | Needs third party
Resiliency | LRS, GRS | No ZRS available for disks
Latency | high | too high for SAP Global Transport directory, or production systems
IOPS SLA | NO | -
Maximum IOPS per disk | 500 | Independent of the size of disk
Throughput SLA | NO | -
HANA certified | NO | -
Disk snapshots possible | YES | -
Azure Backup VM snapshots possible | YES | -
Costs | LOW | -

Summary: Azure standard SSD storage is the minimum recommendation for non-production VMs for the base
VHD and for eventual DBMS deployments with relative latency insensitivity and/or low IOPS and throughput rates. This
Azure storage type is no longer supported for hosting the SAP Global Transport Directory.

Azure standard HDD storage


Azure Standard HDD storage was the only storage type when the Azure infrastructure got certified for SAP
NetWeaver workload in the year 2014. In 2014, the Azure virtual machines were small and low in
storage throughput. Therefore, this storage type was able to just keep up with the demands. The storage is intended
for latency-insensitive workloads, which you hardly find in the SAP space. With the increasing throughput
of Azure VMs and the increased workload these VMs produce, this storage type is no longer considered for
use with SAP scenarios. The capability matrix for SAP workload looks like:

Capability | Comment | Notes/Links
---------- | ------- | -----------
OS base VHD | not suitable | -
Data disk | not suitable | -
SAP global transport directory | NO | Not supported
SAP sapmnt | NO | Not supported
Backup storage | suitable | -
Shares/shared disk | not available | Needs Azure Files or third party
Resiliency | LRS, GRS | No ZRS available for disks
Latency | high | too high for DBMS usage, SAP Global Transport directory, or sapmnt/saploc
IOPS SLA | NO | -
Maximum IOPS per disk | 500 | Independent of the size of disk
Throughput SLA | NO | -
HANA certified | NO | -
Disk snapshots possible | YES | -
Azure Backup VM snapshots possible | YES | -
Costs | LOW | -

Summary: Standard HDD is an Azure storage type that should only be used to store SAP backups. It should
only be used as the base VHD for rather inactive systems, like retired systems used for looking up data here and
there. No active development, QA, or production VMs should be based on that storage. Nor should database
files be hosted on that storage.

Azure VM limits in storage traffic


In contrast to on-premises scenarios, the individual VM type you select plays a vital role in the storage
bandwidth you can achieve. For the different storage types, you need to consider:

Storage type | Linux | Windows | Comments
------------ | ----- | ------- | --------
Standard HDD | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Likely hard to touch the storage limits of medium or large VMs
Standard SSD | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Likely hard to touch the storage limits of medium or large VMs
Premium Storage | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Easy to hit IOPS or storage throughput VM limits with storage configuration
Ultra disk storage | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Easy to hit IOPS or storage throughput VM limits with storage configuration
Azure NetApp Files | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Storage traffic is using network throughput bandwidth and not storage bandwidth!

As limitations, you can note that:


The smaller the VM, the fewer disks you can attach. This does not apply to ANF; since you mount NFS or SMB
shares, you don't encounter a limit on the number of shared volumes that can be attached
VMs have I/O throughput and IOPS limits that easily could be exceeded with premium storage disks and
Ultra disks
With ANF, the traffic to the shared volumes is consuming the VM's network bandwidth and not storage
bandwidth
With large NFS volumes in the double digit TiB capacity space, the throughput accessing such a volume out
of a single VM is going to plateau based on limits of Linux for a single session interacting with the shared
volume.
As you up-size Azure VMs in the lifecycle of an SAP system, you should evaluate the IOPS and storage
throughput limits of the new and larger VM type. In some cases, it also could make sense to adjust the storage
configuration to the new capabilities of the Azure VM.
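You can look up the per-VM storage limits programmatically before deciding on a storage configuration. The following is a sketch using Azure CLI; the region and size filter are examples only, and the exact capability names returned can differ between VM families and CLI versions:

```bash
# List storage-related capabilities (for example, maximum data disk count,
# uncached disk IOPS and throughput) of M-series VM sizes in one region.
az vm list-skus --location westeurope --size Standard_M \
    --resource-type virtualMachines \
    --query "[].{name:name, capabilities:capabilities}" --output json
```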

Striping or not striping


Creating a stripe set out of multiple Azure disks into one larger volume allows you to accumulate the IOPS and
throughput of the individual disks into one volume. It is used for Azure standard storage and Azure premium
storage only. Azure Ultra disk, where you can configure the throughput and IOPS independent of the capacity of
a disk, does not require the usage of stripe sets. Shared volumes based on NFS or SMB can't be striped. Due to
the non-linear nature of Azure premium storage throughput and IOPS, you can provision smaller capacity with
the same IOPS and throughput as large single Azure premium storage disks. That is the method to achieve
higher throughput or IOPS at lower cost using Azure premium storage. For example, you can achieve a throughput of:
250 MiB/sec by striping two P15 premium storage disks. Such a volume has 512 GiB capacity. If you want a
single disk that delivers 250 MiB/sec throughput, you would need to pick a P40 disk with 2 TiB capacity.
400 MiB/sec by striping four P10 premium storage disks with an overall capacity of 512 GiB. If you want a
single disk that delivers at least 400 MiB/sec throughput, you would need to pick a P60 premium storage disk
with 8 TiB. Because the cost of premium storage is nearly linear with the capacity, you can see the cost savings
of using striping.
Some rules need to be followed for striping:
No in-VM configured storage redundancy (such as software mirroring) should be used, since Azure storage keeps the data redundant already
The disks the stripe set is applied to need to be of the same size
Striping across multiple smaller disks is the best way to achieve a good price/performance ratio using Azure
premium storage. It is understood that striping has some additional deployment and management overhead.
For specific stripe size recommendations, read the documentation for the different DBMS, like SAP HANA Azure
virtual machine storage configurations.
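On Linux, such a stripe set is typically built with LVM (or mdadm). The following is a minimal sketch only; device names, volume names, and the stripe size are placeholders, and the stripe size should come from the DBMS-specific documentation referenced above:

```bash
# Stripe two equally sized premium disks (/dev/sdc and /dev/sdd are placeholders)
# into one logical volume and create an XFS file system on it.
sudo pvcreate /dev/sdc /dev/sdd
sudo vgcreate vg_db_data /dev/sdc /dev/sdd
# --stripes = number of disks, --stripesize in KiB (256 KiB is a placeholder value)
sudo lvcreate --extents 100%FREE --stripes 2 --stripesize 256 --name lv_db_data vg_db_data
sudo mkfs.xfs /dev/vg_db_data/lv_db_data
```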

Next steps
Read the articles:
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
SAP HANA Azure virtual machine storage configurations
SAP workload on Azure virtual machine supported scenarios

Designing SAP NetWeaver, Business One, Hybris, or S/4HANA system architectures in Azure opens many
opportunities for different architectures and tools to get to a scalable, efficient, and highly available
deployment. Depending on the operating system or DBMS used, there are restrictions. Also, not all
scenarios that are supported on-premises are supported in the same way in Azure. This document leads you
through the supported non-high-availability configurations and high-availability configurations and architectures
using Azure VMs exclusively. For scenarios supported with HANA Large Instances, check the article Supported
scenarios for HANA Large Instances.

2-Tier configuration
An SAP 2-Tier configuration is considered to be built up out of a combined layer of the SAP DBMS and application
layer that run on the same server or VM unit. The second tier is considered to be the user interface layer. In the
case of a 2-Tier configuration, the DBMS and SAP application layer share the resources of the Azure VM. As a
result, you need to configure the different components in a way that those don't compete for resources. You also
need to be careful to not oversubscribe the resources of the VM. Such a configuration does not provide any high
availability, beyond the Azure Service Level agreements of the different Azure components involved.
A graphical representation of such a configuration can look like:
Such configurations are supported with Windows, Red Hat, SUSE, and Oracle Linux for the DBMS systems SQL
Server, Oracle, Db2, maxDB, and SAP ASE for production and non-production cases. For SAP HANA as DBMS, such
types of configurations are supported for non-production cases only. This includes the deployment case of Azure
HANA Large Instances as well. For all OS/DBMS combinations supported on Azure, this type of configuration is
supported. However, it is mandatory that you set the configuration of the DBMS and the SAP components in a
way that the DBMS and SAP components don't compete for memory and CPU resources and thereby exceed the
physically available resources. This needs to be done by restricting the memory the DBMS is allowed to allocate.
You also need to limit the SAP Extended Memory on application instances. You also need to monitor the CPU
consumption of the VM overall to make sure that the components are not maxing out the CPU resources.
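For SAP HANA as DBMS in such a non-production 2-tier setup, capping the memory the database can allocate is typically done through the global_allocation_limit parameter. A hedged sketch only, assuming instance number 00 and a placeholder value in MB; take the actual value from your own sizing:

```bash
# Limit the HANA instance to 128 GiB (131072 MB, placeholder value) so that enough
# memory remains for the SAP application layer on the same VM. Prompts for the password.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('memorymanager', 'global_allocation_limit') = '131072' WITH RECONFIGURE"
```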

NOTE
For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations as
described later in this document

3-Tier configuration
In such configurations, you separate the SAP application layer and the DBMS layer into different VMs. You usually
do that for larger systems and out of reasons of being more flexible on the resources of the SAP application layer.
In the most simple setup, there is no high availability beyond the Azure Service Level agreements of the different
Azure components involved.
The graphical representation looks like:

This type of configuration is supported on Windows, Red Hat, SUSE, and Oracle Linux for the DBMS systems of
SQL Server, Oracle, Db2, SAP HANA, maxDB, and SAP ASE for production and non-production cases. This is the
default deployment configuration for Azure HANA Large Instances. For simplification, we did not distinguish
between SAP Central Services and SAP dialog instances in the SAP application layer. In this simple 3-Tier
configuration, there would be no high availability protection for SAP Central Services.

NOTE
For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations as
described later in this document

Multiple DBMS instances per VM or HANA Large Instance unit


In this configuration type, you host multiple DBMS instances per Azure VM or HANA Large Instance unit. The
motivation can be to have fewer operating systems to maintain and thereby reduced costs. Other motivations can
be to have more flexibility and more efficiency by sharing resources of a larger VM or HANA Large Instance unit
among multiple DBMS instances. So far, these configurations have shown up mostly for non-production
systems.
A configuration like that could look like:
This type of DBMS deployment is supported for:
SQL Server on Windows
IBM Db2. Find details in the article Multiple instances (Linux, UNIX)
For Oracle. For details see SAP support note #1778431 and related SAP notes
For SAP HANA, multiple instances on one VM, a deployment method SAP calls MCOS, are supported. For
details, see the SAP article Multiple SAP HANA Systems on One Host (MCOS)
(https://help.sap.com/viewer/eb3777d5495d46c5b2fa773206bbfb46/2.0.02/b2751fd43bec41a9a14e01913f1edf18.html)
When running multiple database instances on one host, you need to make sure that the different instances are not
competing for resources and thereby exceeding the physical resource limits of the VM. This is especially true for
memory, where you need to cap the memory any one of the instances sharing the VM can allocate. That also might
be true for the CPU resources the different database instances can use. All the DBMS mentioned have
configurations that allow limiting memory allocation and CPU resources on an instance level. In order for such a
configuration to be supported on Azure VMs, the disks or volumes that are used for the data
and log/redo log files of the databases managed by the different instances must be separate. In other words, data or
log/redo log files of databases managed by different DBMS instances are not supposed to share the same disks or
volumes.
The disk configuration for HANA Large Instances is delivered configured and is detailed in Supported scenarios
for HANA Large Instances.
NOTE
For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations as
described later in this document. VMs with multiple DBMS instances are not supported with the high availability
configurations described later in this document.

Multiple SAP Dialog instances in one VM


In a lot of cases, multiple dialog instances were deployed on bare metal servers or in VMs running in private
clouds. The reason for such configurations was to tailor certain SAP dialog instances to certain workloads, business
functionality, or workload types. The reason for not isolating those instances into separate VMs was the effort of
operating system maintenance and operations, or, in numerous cases, the costs when the hoster or operator of
the VM asks for a monthly fee per VM operated and administrated. In Azure, a scenario of hosting multiple
SAP dialog instances within a single VM is supported for production and non-production purposes on the
operating systems Windows, Red Hat, SUSE, and Oracle Linux. The SAP kernel parameter PHYS_MEMSIZE,
available on Windows and modern Linux kernels, should be set if multiple SAP application server instances are
running on a single VM. It is also advised to limit the expansion of SAP Extended Memory on operating systems,
like Windows, where automatic growth of the SAP Extended Memory is implemented. This can be done with the
SAP profile parameter em/max_size_MB.
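As an illustration only, such limits could look like the following excerpt of an SAP instance profile; the values are placeholders and need to come from your own sizing of the VM and the number of instances sharing it:

```
# Hypothetical instance profile excerpt (placeholder values, in MB)
# Limit the main memory this instance considers available
PHYS_MEMSIZE = 32768
# Cap the automatic growth of SAP Extended Memory (Windows)
em/max_size_MB = 16384
```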
A 3-Tier configuration where multiple SAP dialog instances run within Azure VMs can look like:

For simplification, we did not distinguish between SAP Central Services and SAP dialog instances in the SAP
application layer. In this simple 3-Tier configuration, there would be no high availability protection for SAP Central
Services. For production systems, it is not recommended to leave SAP Central Services unprotected. For specifics
on so called multi-SID configurations around SAP Central Instances and high-availability of such multi-SID
configurations, see later sections of this document.

High Availability protection for the SAP DBMS layer


As you look to deploy SAP production systems, you need to consider hot standby type of high availability
configurations. Especially with SAP HANA, where data needs to be loaded into memory before being able to get
the full performance and scalability back, Azure service healing is not an ideal measure for high availability.
In general Microsoft supports only high availability configurations and software packages that are described
under the SAP workload section in docs.microsoft.com. You can read the same statement in SAP note #1928533.
Microsoft will not provide support for other high availability third-party software frameworks that are not
documented by Microsoft in conjunction with SAP workload. In such cases, the third-party supplier of the high
availability framework is the supporting party for the high availability configuration who needs to be engaged by
you as a customer into the support process. Exceptions are going to be mentioned in this article.
In general Microsoft supports a limited set of high availability configurations on Azure VMs or HANA Large
Instances units. For the supported scenarios of HANA Large Instances, read the document Supported scenarios
for HANA Large Instances.
For Azure VMs, the following high availability configurations are supported on DBMS level:
SAP HANA System Replication based on Linux Pacemaker on SUSE and Red Hat. See the detailed articles:
High availability of SAP HANA on Azure VMs on SUSE Linux Enterprise Server
High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux
SAP HANA scale-out n+m configurations using Azure NetApp Files on SUSE and Red Hat. Details are listed in
these articles:
Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on
SUSE Linux Enterprise Server
Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on
Red Hat Enterprise Linux
SQL Server Failover cluster based on Windows Scale-Out File Services. Though recommendation for
production systems is to use SQL Server Always On instead of clustering. SQL Server Always On provides
better availability using separate storage. Details are described in this article:
Configure a SQL Server failover cluster instance on Azure virtual machines
SQL Server Always On is supported with the Windows operating system for SQL Server on Azure. This is the
default recommendation for production SQL Server instances on Azure. Details are described in these articles:
Introducing SQL Server Always On availability groups on Azure virtual machines.
Configure an Always On availability group on Azure virtual machines in different regions.
Configure a load balancer for an Always On availability group in Azure.
Oracle Data Guard for Windows and Oracle Linux. Details for Oracle Linux can be found in this article:
Implement Oracle Data Guard on an Azure Linux virtual machine
IBM Db2 HADR on SUSE and RHEL. Detailed documentation for SUSE and RHEL using Pacemaker is provided
here:
High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server with Pacemaker
High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server
SAP ASE and SAP maxDB configuration as detailed in these documents:
SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
SAP MaxDB, liveCache, and Content Server deployment on Azure VMs
HANA Large Instances high availability scenarios are detailed in:
Supported scenarios for HANA Large Instances- HSR with STONITH for high availability
Supported scenarios for HANA Large Instances - Host auto failover (1+1)

IMPORTANT
For none of the scenarios described above do we support configurations of multiple DBMS instances in one VM. This means
that in each of the cases, only one database instance can be deployed per VM and protected with the described high availability
methods. Protecting multiple DBMS instances under the same Windows or Pacemaker failover cluster is NOT supported at
this point in time. Also, Oracle Data Guard is supported for single-instance-per-VM deployment cases only.

Various database systems allow hosting multiple databases under one DBMS instance. In the case of SAP
HANA, multiple databases can be hosted in multiple database containers (MDC). Cases where these multi-
database configurations work within one failover cluster resource are supported. Configurations that would
require multiple cluster resources are not supported, for example, configurations where you would define
multiple SQL Server Availability Groups under one SQL Server instance.

Depending on the DBMS and/or operating system, components like Azure Load Balancer might or might not be
required as part of the solution architecture.
Specifically for maxDB, the storage configuration needs to be different. In maxDB, the data and log files need to
be located on shared storage for high availability configurations. Only in the case of maxDB is shared storage
supported for high availability. For all other DBMS, separate storage stacks per node are the only supported disk
configurations.
Other high availability frameworks are known to exist and are known to run on Microsoft Azure as well. However,
Microsoft did not test those frameworks. If you want to build your high availability configuration with those
frameworks, you will need to work with the provider of that software to:
Develop a deployment architecture
Deploy the architecture
Support the architecture

IMPORTANT
Microsoft Azure Marketplace offers a variety of soft appliances that provide storage solutions on top of Azure native
storage. These soft appliances can be used to create NFS shares as well that theoretically could be used in the SAP HANA
scale-out deployments where a standby node is required. Due to various reasons, none of these storage soft appliances is
supported for any of the DBMS deployments by Microsoft and SAP on Azure. Deployments of DBMS on SMB shares are not
supported at all at this point in time. Deployments of DBMS on NFS shares are limited to NFS 4.1 shares on Azure NetApp
Files.

High Availability for SAP Central Service


SAP Central Services is a second single point of failure of your SAP configuration. As a result, you need to
protect these Central Services processes as well. The offerings supported and documented for SAP workload
are:
Windows Failover Cluster Server using Windows Scale-out File Services for sapmnt and global transport
directory. Details are described in the article:
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a file share in Azure
Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share
for SAP ASCS/SCS instances
Windows Failover Cluster Server using SMB share based on Azure NetApp Files for sapmnt and global
transport directory. Details are listed in the article:
High availability for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB) for SAP
applications
Windows Failover Cluster Server based on SIOS Datakeeper. Though documented by Microsoft, you need a
support relationship with SIOS so that you can engage with SIOS support when using this solution. Details
are described in the article:
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk in Azure
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for
SAP ASCS/SCS
Pacemaker on the SUSE operating system, creating a highly available NFS share using two SUSE VMs and
DRBD for file replication. Details are documented in the article
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP
applications
High availability for NFS on Azure VMs on SUSE Linux Enterprise Server
Pacemaker on the SUSE operating system leveraging NFS shares provided by Azure NetApp Files. Details are
documented in
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp
Files for SAP applications
Pacemaker on Red Hat operating system with NFS share hosted on a glusterfs cluster. Details can be found in
the articles
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux
GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver
Pacemaker on Red Hat operating system with NFS share hosted on Azure NetApp Files. Details are described in
the article
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure
NetApp Files for SAP applications
Of the listed solutions, you need a support relationship with SIOS to support the Datakeeper product and to
engage with SIOS directly in case of issues. Dependent on the way you licensed the Windows, Red Hat, and/or
SUSE OS, you could also be required to have a support contract with your OS provider to have full support of the
listed high availability configurations.
The configuration can also be displayed like:
On the right-hand side of the graphic, the highly available SAP Central Services is shown. Besides having the SAP
Central Services protected with a failover cluster framework that can fail over in case of an issue, there is a
need for a highly available NFS or SMB share, or a Windows shared disk, to make sure the sapmnt and global
transport directory are available independent of the existence of a single VM. Additionally, some of the solutions,
like Windows Failover Cluster Server and Pacemaker, require an Azure load balancer to direct or redirect
traffic to a healthy node.
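As an illustration of the file-share part of such a configuration, mounting a highly available NFS volume (for example, one hosted on Azure NetApp Files) for sapmnt on Linux could look like the sketch below; the IP address, export path, and SID are placeholders, and the exact mount options should follow the HA guides referenced above:

```bash
# Mount an NFS v4.1 share for sapmnt (placeholder IP address, export path, and SID)
sudo mkdir -p /sapmnt/SID
sudo mount -t nfs -o vers=4.1,hard,rsize=262144,wsize=262144,noatime \
    10.1.0.4:/sapmnt-SID /sapmnt/SID
```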
In the list shown, there is no mention of the Oracle Linux operating system. Oracle Linux does not support
Pacemaker as a cluster framework. If you want to deploy your SAP system on Oracle Linux and you need a high
availability framework for Oracle Linux, you need to work with third-party suppliers. One of the suppliers is SIOS
with their Protection Suite for Linux, which is supported by SAP on Azure. For more information, read SAP note
#1662610 - Support details for SIOS Protection Suite for Linux.
Supported storage with the SAP Central Services scenarios listed above
Since only a subset of Azure storage types provide highly available NFS or SMB shares that qualify for
usage in SAP Central Services cluster scenarios, the supported storage types are:
Windows Failover Cluster Server with Windows Scale-out File Server can be deployed on all native Azure
storage types, except Azure NetApp Files. However, recommendation is to leverage Premium Storage due to
superior service level agreements in throughput and IOPS.
Windows Failover Cluster Server with SMB on Azure NetApp Files is supported on Azure NetApp Files. SMB
shares on Azure File services are NOT supported at this point in time.
Windows Failover Cluster Server with windows shared disk based on SIOS Datakeeper can be deployed on all
native Azure storage types, except Azure NetApp Files. However, recommendation is to leverage Premium
Storage due to superior service level agreements in throughput and IOPS.
SUSE or Red Hat Pacemaker using NFS shares on Azure NetApp Files is supported on Azure NetApp Files.
SUSE Pacemaker using a DRBD configuration between two VMs is supported using native Azure storage types,
except Azure NetApp Files. However, recommendation is to leverage Premium Storage due to superior service
level agreements in throughput and IOPS.
Red Hat Pacemaker using glusterfs for providing NFS share is supported using native Azure storage types,
except Azure NetApp Files. However, recommendation is to leverage Premium Storage due to superior service
level agreements in throughput and IOPS.
IMPORTANT
Microsoft Azure Marketplace offers a variety of soft appliances that provide storage solutions on top of Azure native
storage. These soft appliances can be used to create NFS or SMB shares as well that theoretically could be used in the
failover clustered SAP Central Services as well. These solutions are not directly supported for SAP workload by Microsoft. If
you decide to use such a solution to create your NFS or SMB share, support for the SAP Central Service configuration needs
to be provided by the third-party owning the software in the storage soft appliance.

Multi-SID SAP Central Services failover clusters


To reduce the number of VMs that are needed in large SAP landscapes, SAP allows running SAP Central Services
instances of multiple different SAP systems in one failover cluster configuration. Imagine cases where you have 30 or
more NetWeaver or S/4HANA production systems. Without multi-SID clustering, these configurations would
require 60 or more VMs in 30 or more Windows or Pacemaker failover cluster configurations, in addition to the
necessary DBMS failover clusters. Deploying multiple SAP Central Services across two nodes in a failover cluster
configuration can reduce the number of VMs significantly. However, deploying multiple SAP Central Services
instances on a single two-node cluster configuration also has some disadvantages. Issues around a single VM in
the cluster configuration apply to multiple SAP systems. Maintenance on the guest OS running in the cluster
configuration requires more coordination since multiple production SAP systems are affected. Tools like SAP
LaMa do not support multi-SID clustering in their system cloning process.
On Azure, a multi-SID cluster configuration is supported for the Windows operating system with ENSA1 and
ENSA2. Recommendation is not to combine the older Enqueue Replication Service architecture (ENSA1) with the
new architecture (ENSA2) on one multi-SID cluster. Details about such an architecture are documented in the
articles
SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and shared disk
on Azure
SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and file share on
Azure
For SUSE, a multi-SID cluster based on Pacemaker is supported as well. So far the configuration is supported for:
A maximum of five SAP ASCS/SCS instances
The old Enqueue Replication Service architecture (ENSA1)
Two node Pacemaker cluster configurations
The configuration is documented in High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server for SAP applications multi-SID guide
A multi-SID cluster with Enqueue Replication server schematically looks like
SAP HANA scale-out scenarios
SAP HANA scale-out scenarios are supported for a subset of the HANA certified Azure VMs as listed in the SAP
HANA hardware directory. All the VMs marked with 'Yes' in the column 'Clustering' can be used for either OLAP
or S/4HANA scale-out. Configurations without standby are supported with the Azure Storage types of:
Azure Premium Storage, including Azure Write accelerator for the /hana/log volume
Ultra disk
Azure NetApp Files
SAP HANA scale-out configurations for OLAP or S/4HANA with standby node(s) are exclusively supported with
NFS shares hosted on Azure NetApp Files.
For further information on exact storage configurations with or without standby node, check the articles:
SAP HANA Azure virtual machine storage configurations
Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE
Linux Enterprise Server
Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on Red
Hat Enterprise Linux
SAP support note #2080991
For details of HANA Large Instances supported HANA scale-out configurations, the following documentation
applies:
Supported scenarios for HANA Large Instances scale-out with standby
Supported scenarios for HANA Large Instances scale-out without standby

Disaster Recovery Scenario


There is a variety of disaster recovery scenarios that are supported. We define disaster recovery architectures as
architectures that should compensate for a complete Azure region going off the grid. This means we need the
disaster recovery target to be a different Azure region in which to run your SAP landscape. We separate methods
and configurations into the DBMS layer and the non-DBMS layer.
DBMS layer
For the DBMS layer, configurations using the DBMS native replication mechanisms, like Always On, Oracle Data
Guard, Db2 HADR, SAP ASE Always-On, or HANA System Replication are supported. It is mandatory that the
replication stream in such cases is asynchronous, instead of synchronous as in typical high availability scenarios
that are deployed within a single Azure region. A typical example of such a supported DBMS disaster recovery
configuration is described in the article SAP HANA availability across Azure regions. The second graphic in that
section describes a scenario with HANA as an example. The main databases supported for SAP applications are all
able to be deployed in such a scenario.
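For SAP HANA System Replication as the DR mechanism, the asynchronous replication mode is selected when registering the secondary site. The following is a hedged sketch, run as the <sid>adm user on the DR-side instance; host name, instance number, and site name are placeholders:

```bash
# Register the DR instance against the primary using asynchronous replication
hdbnsutil -sr_register --remoteHost=hana-prod --remoteInstance=00 \
    --replicationMode=async --operationMode=logreplay --name=DRSITE
```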
It is supported to use a smaller VM as the target instance in the disaster recovery region, since that VM does not
experience the full workload traffic. Doing so, you need to keep the following considerations in mind:
Smaller VM types allow fewer attached disks than larger VMs
Smaller VMs have less network and storage throughput
Re-sizing across VM families can be a problem when the different VMs are collected in one Azure Availability
Set or when the re-sizing should happen between the M-Series family and Mv2 family of VMs
The database instance needs enough CPU and memory resources to receive the stream of changes with
minimal delay and to apply these changes to the data with minimal delay
More details on limitations of different VM sizes can be found here
Another supported method of deploying a DR target is to have a second DBMS instance installed on a VM that
runs a non-production DBMS instance of a non-production SAP instance. This can be a bit more challenging, since
you need to figure out how much memory, CPU resources, network bandwidth, and storage bandwidth is needed
for the particular target instance that should function as the main instance in the DR scenario. Especially for HANA, it
is highly recommended that you configure the instance that functions as DR target on a shared host so that
the data is not pre-loaded into the DR target instance.
For HANA Large Instance DR scenarios check these documents:
Single node with DR using storage replication
Single node with DR (multipurpose) using storage replication
High availability with HSR and DR with storage replication
Scale-out with DR using storage replication
Single node with DR using HSR
Single node HSR to DR (cost optimized)
High availability and disaster recovery with HSR
High availability and disaster recovery with HSR (cost optimized)
Scale-out with DR using HSR

NOTE
Usage of Azure Site Recovery has not been tested for DBMS deployments under SAP workload. As a result it is not
supported for the DBMS layer of SAP systems at this point in time. Other methods of replications by Microsoft and SAP
that are not listed are not supported. Using third party software for replicating the DBMS layer of SAP systems between
different Azure Regions, needs to be supported by the vendor of the software and will not be supported through Microsoft
and SAP support channels.

Non-DBMS layer
For the SAP application layer and eventual shares or storage locations that are needed, the two major scenarios
are leveraged by customers:
The disaster recovery targets in the second Azure region are not being used for any production or non-
production purposes. In this scenario, the VMs that function as disaster recovery target are ideally not
deployed and the image and changes to the images of the production SAP application layer are replicated to the
disaster recovery region. A functionality that can perform such a task is Azure Site Recovery, which supports
Azure-to-Azure replication scenarios like this.
The disaster recovery targets are VMs that are actually in use by non-production systems. The whole SAP
landscape is spread across two different Azure regions with production systems usually in one region and
non-production systems in another region. In a lot of customer deployments, the customer has a non-
production system that is equivalent to a production system. The customer has production application
instances pre-installed on the application layer non-production systems. In case of a failover, the non-
production instances would be shut down, the virtual names of the production VMs would be moved to the non-
production VMs (after assigning new IP addresses in DNS), and the pre-installed production instances would be
started
SAP Central Services clusters
SAP Central Services clusters that are using shared disks (Windows), SMB shares (Windows) or NFS shares are a
bit harder to replicate. On the Windows side, Windows Storage Replication is a possible solution. On Linux rsync is
a viable solution.

Non-supported scenario
There is a list of scenarios that are not supported for SAP workload on Azure architectures. Not supported
means SAP and Microsoft will not be able to support these configurations and need to defer to an eventually
involved third party that provided software to establish such architectures. Two of the categories are:
Storage soft appliances: There is a number of storage soft appliances offered in Azure marketplace. Some of
the vendors offer own documentation on how to use those storage soft appliances on Azure related to SAP
software. Support of configurations or deployments involving such storage soft appliances needs to be
provided by the vendor of those storage soft appliances. This fact is also manifested in SAP support note
#2015553
High Availability frameworks: Only Pacemaker and Windows Server Failover Cluster are supported high
availability frameworks for SAP workload on Azure. As mentioned earlier, the solution of SIOS Datakeeper is
described and documented by Microsoft. Nevertheless, the components of SIOS Datakeeper need to be
supported through SIOS as the vendor providing those components. SAP also listed other certified high
availability frameworks in various SAP notes. Some of them were certified by the third-party vendor for Azure
as well. Nevertheless, support for configurations using those products need to be provided by the product
vendor. Different vendors have different integration into the SAP support processes. You should clarify what
support process works best for the particular vendor before deciding to use the product in SAP configurations
deployed on Azure.
Shared disk clusters where database files reside on the shared disks are not supported, with the
exception of maxDB. For all other databases, the supported solution is to have separate storage locations
instead of an SMB or NFS share or shared disk to configure high-availability scenarios
Other scenarios that are not supported include:
Deployment scenarios that introduce a larger network latency between the SAP application tier and the SAP
DBMS tier in SAP's common architecture as represented by NetWeaver, S/4HANA, and, for example, Hybris. This includes:
Deploying one of the tiers on-premises while the other tier is deployed in Azure
Deploying the SAP application tier of a system in a different Azure region than the DBMS tier
Deploying one tier in datacenters that are co-located to Azure and the other tier in Azure, except where
such an architecture patterns are provided by an Azure native service
Deploying network virtual appliances between the SAP application tier and the DBMS layer
Leveraging storage that is hosted in datacenters co-located to Azure datacenter for the SAP DBMS tier
or SAP global transport directory
Deploying the two layers with two different cloud vendors. For example, deploying the DBMS tier in
Oracle Cloud Infrastructure and the application tier in Azure
Multi-Instance HANA Pacemaker cluster configurations
Windows Cluster configurations with shared disks through SOFS or SMB on ANF for SAP databases supported
on Windows. Instead we recommend the usage of native high availability replication of the particular
databases and use separate storage stacks
Deployment of SAP databases supported on Linux with database files located in NFS shares on top of ANF
with the exception of SAP HANA
Deployment of Oracle DBMS on any other guest OS than Windows and Oracle Linux. See also SAP support
note #2039619
Scenarios that we did not test and therefore have no experience with include:
Azure Site Recovery replicating DBMS layer VMs. As a result, we recommend leveraging the database native
asynchronous replication functionality for potential disaster recovery configuration

Next Steps
Read next steps in the Azure Virtual Machines planning and implementation for SAP NetWeaver
What SAP software is supported for Azure deployments

This article describes how you can find out what SAP software is supported for Azure deployments and what the
necessary operating system releases or DBMS releases are.
To evaluate whether your current SAP software is supported, and which OS and DBMS releases are supported with
your SAP software in Azure, you need access to:
SAP support notes
SAP Product availability Matrix

General restrictions for SAP workload


Azure IaaS services that can be used for SAP workload are limited to x86-64 or x64 hardware. There are no SPARC
or Power CPU-based offers that apply to SAP workload. Customers who run their applications on operating
systems proprietary to hardware architectures like IBM mainframe or AS/400, or where the operating systems
HP-UX, Solaris, or AIX are in use, need to move their SAP applications, including the DBMS, to one of the following
operating systems:
Windows Server 64-bit for the x86-64 platform
SUSE Linux 64-bit for the x86-64 platform
Red Hat Linux 64-bit for the x86-64 platform
Oracle Linux 64-bit for the x86-64 platform
In combination with SAP software, no other OS releases or Linux distributions are supported. Exact details on
specific versions and cases are documented later in the document.

You start here


The starting point for you is SAP support note #1928533. As you go through this SAP note from top to bottom,
several areas of supported software and VMs are shown.
The first section lists the minimum requirements for operating system releases that are supported with SAP software in
Azure VMs in general. If you are not reaching those minimum requirements and run older releases of these
operating systems, you need to upgrade your OS release to such a minimum release or even more recent
releases. It is correct that Azure in general would support older releases of some of those operating systems. But
the restrictions or minimum releases as listed are based on tests and qualifications executed and are not going to
be extended further back.

NOTE
There are some specific VM types, HANA Large Instances or SAP workloads that are going to require more recent OS
releases. Cases like that will be mentioned throughout the document. Cases like that are clearly documented either in SAP
notes or other SAP publications.

The following section lists the general SAP platforms and releases that are supported and, more importantly, the
SAP kernels that are supported. It lists the NetWeaver/ABAP or Java stacks that are supported and the minimum
kernel releases they need. More recent ABAP stacks are supported on Azure without specific minimum kernel
releases, since changes for Azure were implemented from the start of the development of those more recent
stacks.
You need to check:
Whether the SAP applications you are running are covered by the minimum releases stated. If not, you need
to define a new target release and check in the SAP Product Availability Matrix which operating system builds and
DBMS combinations are supported with the new target release, so that you can choose the right operating
system release and DBMS release
Whether you need to update your SAP kernels in a move to Azure
Whether you need to update SAP Support Packages. Especially Basis Support Packages that can be required
for cases where you are required to move to a more recent DBMS release
The next section goes into more details on other SAP products and DBMS releases that are supported by SAP on
Azure for Windows and Linux.

NOTE
The minimum releases of the different DBMS are carefully chosen and might not always reflect the whole spectrum of DBMS
releases the different DBMS vendors support on Azure in general. Many SAP workload related considerations were taken
into account to define those minimum releases. There is no effort to test and qualify older DBMS releases.

NOTE
The minimum releases listed represent older versions of operating systems and database releases. We highly
encourage you to use the most recent operating system and database releases. In a lot of cases, more recent operating
system and database releases took the usage case of running in public cloud into consideration and adapted code to
optimize for running in public cloud or more specifically Azure

Oracle DBMS support


Operating system, Oracle DBMS releases and Oracle functionality supported on Azure are specifically listed in
SAP support note #2039619. The essence of that note can be summarized as:
Minimum Oracle release supported on Azure VMs that are certified for NetWeaver is Oracle 11g Release 2
Patchset 3 (11.2.0.4)
As guest operating systems only Windows and Oracle Linux qualify. Exact releases of the OS and related
minimum DBMS releases are listed in the note
The support of Oracle Linux extends to the Oracle DBMS client as well. This means that all SAP components,
like dialog instances of the ABAP or Java Stack need to run on Oracle Linux as well. Only SAP components
within such an SAP system that would not connect to the Oracle DBMS would be allowed to run a different
Linux operating system
Oracle RAC is not supported
Oracle ASM is supported for some of the cases. Details are listed in the note
Non-Unicode SAP systems are only supported with application servers running with Windows guest OS. The
guest operating system of the DBMS can be Oracle Linux or Windows. Reason for this restriction is apparent
when checking the SAP Product Availability Matrix (PAM). For Oracle Linux, SAP never released non-Unicode
SAP kernels
Knowing the DBMS releases that are supported with the targeted Azure infrastructure, you need to check the SAP
Product Availability Matrix to see whether the required OS releases and DBMS releases are supported with the SAP
product releases you intend to run.
Oracle Linux
The most prominently asked question around Oracle Linux is whether SAP supports the Red Hat compatible kernel
that is an integral part of Oracle Linux as well. For details, read SAP support note #1565179.

Other database than SAP HANA


Support of non-HANA databases for SAP workload is documented in SAP support note #1928533.

SAP HANA support


In Azure, there are two services that can be used to run the HANA database:
Azure Virtual Machines
HANA Large Instances
For running SAP HANA, SAP has more and stricter conditions that the infrastructure needs to meet than for running
NetWeaver or other SAP applications and DBMS. As a result, a smaller number of Azure VMs qualify for running
the SAP HANA DBMS. The list of Azure infrastructure supported for SAP HANA can be found in the so-called
SAP HANA hardware directory.

NOTE
The units starting with the letter 'S' are HANA Large Instances units.

NOTE
SAP has no specific certification dependent on the SAP HANA major release. Contrary to common opinion, the column
Certification scenario in the list of HANA certified IaaS platforms makes no statement about the HANA
major or minor release certified. You need to assume that all the units listed can be used for HANA 1.0 and
HANA 2.0, as long as the certified operating system releases for the specific units are supported by HANA 1.0 releases as
well.

For the usage of SAP HANA, different minimum OS releases may apply than for the general NetWeaver cases.
You need to check out the supported operating systems for each unit individually since those might vary. You do
so by clicking on each unit. More details will appear. One of the details listed is the different operating systems
supported for this specific unit.

NOTE
Azure HANA Large Instance units are more restrictive with supported operating systems compared to Azure VMs. On the
other hand, Azure VMs may enforce more recent operating system releases as minimum releases. This is especially true for some
of the larger VM units that required changes to Linux kernels

Knowing the supported OS for the Azure infrastructure, you need to check SAP support note #2235581 for the
exact SAP HANA releases and patch levels that are supported with the Azure units you are targeting.

IMPORTANT
The step of checking the exact SAP HANA releases and patch levels supported is very important. In a lot of cases, support
of a certain OS release is dependent on a specific patch level of the SAP HANA executables.

Once you know the specific HANA releases you can run on the targeted Azure infrastructure, you need to check in
the SAP Product Availability Matrix to find out whether there are restrictions with the SAP product releases that
support the HANA releases you filtered out.

Certified Azure VMs and HANA Large Instance units and business
transaction throughput
Besides evaluating supported operating system releases, DBMS releases, and dependent supported SAP software
releases for Azure infrastructure units, you need to qualify these units by business transaction
throughput, which is expressed in the unit 'SAPS' by SAP. All SAP sizing depends on SAPS calculations.
Evaluating existing SAP systems, you usually can, with the help of your infrastructure provider, calculate the SAPS
of the units, for the DBMS layer as well as for the application layer. In other cases where new functionality is
created, a sizing exercise with SAP can reveal the required SAPS numbers for the application layer and the DBMS
layer. As an infrastructure provider, Microsoft is obliged to provide the SAP throughput characterization of the
different units that are either NetWeaver and/or HANA certified.
For Azure VMs, these SAPS throughput numbers are documented in SAP support note #1928533. For Azure
HANA Large Instance units, the SAPS throughput numbers are documented in SAP support note #2316233
Looking into SAP support note #1928533, the following remarks apply:
For M-Series Azure VMs and Mv2-Series Azure VMs, different minimum OS releases apply than
for other Azure VM types . The requirement for more recent OS releases is based on changes the different
operating system vendors had to provide in their operating system releases to either enable their operating
systems running on the specific Azure VM types or optimize performance and throughput of SAP workload
on those VM types
There are two tables that specify different VM types. The second table specifies SAPS throughput for Azure
VM types that support Azure standard Storage only. DBMS deployment on the units specified in the second
table of the note is not supported

Other SAP products supported on Azure


In general, the assumption is that with the state of hyperscale clouds like Azure, most SAP software should
run without functional problems in Azure. Nevertheless, and in contrast to private cloud virtualization, SAP still
expresses support for the different SAP products explicitly for the different hyperscale cloud providers. As a
result, there are different SAP support notes indicating support for Azure for different SAP products.
For Business Objects BI platform, SAP support note #2145537 gives a list of SAP Business Objects products
supported on Azure. If there are questions around components or combinations of software releases and OS
releases that seem not to be listed or supported and which are more recent than the minimum releases listed,
you need to open an SAP support request against the component you inquire support for.
For Business Objects Data Services, SAP support note #22288344 explains minimum support of SAP Data
Services running on Azure.

NOTE
As indicated in the SAP support note, you need to check in the SAP PAM to identify the correct support package level to
be supported on Azure

SAP Datahub/Vora support in Azure Kubernetes Services (AKS) is detailed in SAP support note #2464722
Support for SAP BPC 10.1 SP08 is described in SAP support note #2451795
Support for SAP Hybris Commerce Platform on Azure is detailed in the Hybris documentation. The supported
DBMS for SAP Hybris Commerce Platform are listed as:
SQL Server and Oracle on the Windows operating system platform. Same minimum releases apply as for
SAP NetWeaver. See SAP support note #1928533 for details
SAP HANA on Red Hat and SUSE Linux. SAP HANA certified VM types are required as documented earlier in
this document. SAP (Hybris) Commerce Platform is considered OLTP workload
SQL Azure DB as of SAP (Hybris) Commerce Platform version 1811

Next Steps
Read next steps in the Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP NetWeaver

NOTE
Azure has two different deployment models you can use to create and work with resources: Azure Resource
Manager and classic. This article covers the use of the Resource Manager deployment model. We recommend the
Resource Manager deployment model for new deployments instead of the classic deployment model.

Azure Virtual Machines is the solution for organizations that need compute and storage resources, in
minimal time, and without lengthy procurement cycles. You can use Azure Virtual Machines to deploy
classical applications, like SAP NetWeaver-based applications, in Azure. Extend an application's reliability
and availability without additional on-premises resources. Azure Virtual Machines supports cross-
premises connectivity, so you can integrate Azure Virtual Machines into your organization's on-premises
domains, private clouds, and SAP system landscape.
In this article, we cover the steps to deploy SAP applications on virtual machines (VMs) in Azure,
including alternate deployment options and troubleshooting. This article builds on the information in
Azure Virtual Machines planning and implementation for SAP NetWeaver. It also complements SAP
installation documentation and SAP Notes, which are the primary resources for installing and deploying
SAP software.

Prerequisites
NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module,
which will continue to receive bug fixes until at least December 2020. To learn more about the new Az module
and AzureRM compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation
instructions, see Install Azure PowerShell.

Setting up an Azure virtual machine for SAP software deployment involves multiple steps and resources.
Before you start, make sure that you meet the prerequisites for installing SAP software on virtual
machines in Azure.
Local computer
To manage Windows or Linux VMs, you can use a PowerShell script and the Azure portal. For both tools,
you need a local computer running Windows 7 or a later version of Windows. If you want to manage
only Linux VMs and you want to use a Linux computer for this task, you can use Azure CLI.
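If you use Azure CLI from such a computer, authenticating and selecting the subscription you plan to deploy into typically looks like the following sketch; the subscription name is a placeholder:

```bash
# Sign in and select the target subscription (placeholder name)
az login
az account set --subscription "My-SAP-Subscription"
```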
Internet connection
To download and run the tools and scripts that are required for SAP software deployment, you must be
connected to the Internet. The Azure VM that is running the Azure Extension for SAP also needs access to
the Internet. If the Azure VM is part of an Azure virtual network or on-premises domain, make sure that
the relevant proxy settings are set, as described in Configure the proxy.
Microsoft Azure subscription
You need an active Azure account.
Topology and networking
You need to define the topology and architecture of the SAP deployment in Azure:
Azure storage accounts to be used
Virtual network where you want to deploy the SAP system
Resource group to which you want to deploy the SAP system
Azure region where you want to deploy the SAP system
SAP configuration (two-tier or three-tier)
VM sizes and the number of additional data disks to be mounted to the VMs
SAP Correction and Transport System (CTS) configuration
Create and configure Azure storage accounts (if required) or Azure virtual networks before you begin
the SAP software deployment process. For information about how to create and configure these
resources, see Azure Virtual Machines planning and implementation for SAP NetWeaver.
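If you prefer scripting over the portal for these preparation steps, a minimal Azure CLI sketch for creating the resource group and virtual network could look like the following; the names, region, and address ranges are placeholders and need to match your own network design:

```bash
# Create a resource group and a virtual network with one subnet for the SAP VMs
az group create --name rg-sap-prod --location westeurope
az network vnet create --resource-group rg-sap-prod --name vnet-sap \
    --address-prefixes 10.1.0.0/16 \
    --subnet-name subnet-sap-app --subnet-prefixes 10.1.1.0/24
```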
SAP sizing
Know the following information, for SAP sizing:
Projected SAP workload, for example, by using the SAP Quick Sizer tool, and the SAP Application
Performance Standard (SAPS) number
Required CPU resource and memory consumption of the SAP system
Required input/output (I/O) operations per second
Required network bandwidth for communication between VMs in Azure
Required network bandwidth between on-premises assets and the Azure-deployed SAP system
Resource groups
In Azure Resource Manager, you can use resource groups to manage all the application resources in
your Azure subscription. For more information, see Azure Resource Manager overview.
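If you prefer to prepare the resource group from a command line rather than the portal, a minimal PowerShell sketch follows. The resource group name and region are placeholders, not values prescribed by this guide.

# Sign in and create a resource group for the SAP deployment (name and region are examples only)
Connect-AzAccount
New-AzResourceGroup -Name 'SAP-DEV-RG' -Location 'westeurope'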

Resources
SAP resources
When you are setting up your SAP software deployment, you need the following SAP resources:
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in
Azure.
SAP Note 1409604 has the required SAP Host Agent version for Windows in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 2002167 has general information about Red Hat Enterprise Linux 7.x.
SAP Note 2069760 has general information about Oracle Linux 7.x.
SAP Note 1999351 has additional troubleshooting information for the Azure Extension for SAP.
SAP Note 1597355 has general information about swap-space for Linux.
SAP on Azure SCN page has news and a collection of useful resources.
SAP Community WIKI has all required SAP Notes for Linux.
SAP-specific PowerShell cmdlets that are part of Azure PowerShell.
SAP-specific Azure CLI commands that are part of Azure CLI.
Windows resources
These Microsoft articles cover SAP deployments in Azure:
Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP NetWeaver (this article)
Azure Virtual Machines DBMS deployment for SAP NetWeaver

Deployment scenarios for SAP software on Azure VMs


You have multiple options for deploying VMs and associated disks in Azure. It's important to understand
the differences between deployment options, because you might take different steps to prepare your
VMs for deployment based on the deployment type you choose.
Scenario 1: Deploying a VM from the Azure Marketplace for SAP
You can use an image provided by Microsoft or by a third party in the Azure Marketplace to deploy your
VM. The Marketplace offers some standard OS images of Windows Server and different Linux
distributions. You also can deploy an image that includes database management system (DBMS) SKUs,
for example, Microsoft SQL Server. For more information about using images with DBMS SKUs, see
Azure Virtual Machines DBMS deployment for SAP NetWeaver.
The following flowchart shows the SAP-specific sequence of steps for deploying a VM from the Azure
Marketplace:

Create a virtual machine by using the Azure portal


The easiest way to create a new virtual machine with an image from the Azure Marketplace is by using
the Azure portal.
1. Go to https://portal.azure.com/#create/hub. Or, in the Azure portal menu, select + New .
2. Select Compute , and then select the type of operating system you want to deploy. For example,
Windows Server 2012 R2, SUSE Linux Enterprise Server 12 (SLES 12), Red Hat Enterprise Linux 7.2
(RHEL 7.2), or Oracle Linux 7.2. The default list view does not show all supported operating systems.
Select see all for a full list. For more information about supported operating systems for SAP
software deployment, see SAP Note 1928533.
3. On the next page, review terms and conditions.
4. In the Select a deployment model box, select Resource Manager .
5. Select Create .
The wizard guides you through setting the required parameters to create the virtual machine, in addition
to all required resources, like network interfaces and storage accounts. Some of these parameters are:
1. Basics :
Name : The name of the resource (the virtual machine name).
VM disk type : Select the disk type of the OS disk. If you want to use Premium Storage for
your data disks, we recommend using Premium Storage for the OS disk as well.
Username and password or SSH public key : Enter the username and password of the
user that is created during the provisioning. For a Linux virtual machine, you can enter the
public Secure Shell (SSH) key that you use to sign in to the machine.
Subscription : Select the subscription that you want to use to provision the new virtual
machine.
Resource group : The name of the resource group for the VM. You can enter either the name
of a new resource group or the name of a resource group that already exists.
Location : Where to deploy the new virtual machine. If you want to connect the virtual
machine to your on-premises network, make sure you select the location of the virtual
network that connects Azure to your on-premises network. For more information, see
Microsoft Azure networking in Azure Virtual Machines planning and implementation for SAP
NetWeaver.
2. Size :
For a list of supported VM types, see SAP Note 1928533. Be sure you select the correct VM type if
you want to use Azure Premium Storage. Not all VM types support Premium Storage. For more
information, see Storage: Microsoft Azure Storage and data disks and Azure storage for SAP
workloads in Azure Virtual Machines planning and implementation for SAP NetWeaver.
3. Settings :
Storage
Disk Type : Select the disk type of the OS disk. If you want to use Premium Storage for
your data disks, we recommend using Premium Storage for the OS disk as well.
Use managed disks : If you want to use Managed Disks, select Yes. For more
information about Managed Disks, see chapter Managed Disks in the planning guide.
Storage account : Select an existing storage account or create a new one. Not all
storage types work for running SAP applications. For more information about storage
types, see Storage structure of a VM for RDBMS Deployments.
Network
Virtual network and Subnet : To integrate the virtual machine with your intranet,
select the virtual network that is connected to your on-premises network.
Public IP address : Select the public IP address that you want to use, or enter
parameters to create a new public IP address. You can use a public IP address to access
your virtual machine over the Internet. Make sure that you also create a network
security group to help secure access to your virtual machine.
Network security group : For more information, see Control network traffic flow with
network security groups.
Extensions : You can install virtual machine extensions by adding them to the deployment.
You do not need to add extensions in this step. The extensions required for SAP support are
installed later. See chapter Configure the Azure Extension for SAP in this guide.
High Availability : Select an availability set, or enter the parameters to create a new
availability set. For more information, see Azure availability sets.
Monitoring
Boot diagnostics : You can select Disable for boot diagnostics.
Guest OS diagnostics : You can select Disable for monitoring diagnostics.
4. Summary :
Review your selections, and then select OK .
Your virtual machine is deployed in the resource group you selected.
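If you prefer to script the same Marketplace deployment instead of walking through the portal wizard, the following minimal sketch uses the simplified New-AzVM cmdlet of the Az module. All names, the VM size, and the image reference are placeholders; choose an image and VM size that are supported per SAP Note 1928533.

# Create a VM from a Marketplace image (all values are placeholders)
New-AzVM `
  -ResourceGroupName 'SAP-DEV-RG' `
  -Name 'sapapp01' `
  -Location 'westeurope' `
  -Image '<publisher>:<offer>:<sku>:latest' `
  -Size 'Standard_E16s_v3' `
  -VirtualNetworkName 'SAP-VNET' `
  -SubnetName 'SAP-SUBNET' `
  -Credential (Get-Credential)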
Create a virtual machine by using a template
You can create a virtual machine by using one of the SAP templates published in the azure-quickstart-
templates GitHub repository. You also can manually create a virtual machine by using the Azure portal,
PowerShell, or Azure CLI.
Two-tier configuration (only one vir tual machine) template (sap-2-tier-marketplace-
image)
To create a two-tier system by using only one virtual machine, use this template.
Two-tier configuration (only one vir tual machine) template - Managed Disks (sap-2-
tier-marketplace-image-md)
To create a two-tier system by using only one virtual machine and Managed Disks, use this
template.
Three-tier configuration (multiple vir tual machines) template (sap-3-tier-marketplace-
image)
To create a three-tier system by using multiple virtual machines, use this template.
Three-tier configuration (multiple vir tual machines) template - Managed Disks (sap-3-
tier-marketplace-image-md)
To create a three-tier system by using multiple virtual machines and Managed Disks, use this
template.
In the Azure portal, enter the following parameters for the template:
1. Basics :
Subscription : The subscription to use to deploy the template.
Resource group : The resource group to use to deploy the template. You can create a new
resource group, or you can select an existing resource group in the subscription.
Location : Where to deploy the template. If you selected an existing resource group, the
location of that resource group is used.
2. Settings :
SAP System ID : The SAP System ID (SID).
OS type : The operating system you want to deploy, for example, Windows Server 2012
R2, SUSE Linux Enterprise Server 12 (SLES 12), Red Hat Enterprise Linux 7.2 (RHEL 7.2), or
Oracle Linux 7.2.
The list view does not show all supported operating systems. For more information about
supported operating systems for SAP software deployment, see SAP Note 1928533.
SAP system size : The size of the SAP system.
The number of SAPS the new system provides. If you are not sure how many SAPS the
system requires, ask your SAP Technology Partner or System Integrator.
System availability (three-tier template only): The system availability.
Select HA for a configuration that is suitable for a high-availability installation. Two
database servers and two servers for ABAP SAP Central Services (ASCS) are created.
Storage type (two-tier template only): The type of storage to use.
For larger systems, we highly recommend using Azure Premium Storage. For more
information about storage types, see these resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Storage structure of a VM for RDBMS Deployments
Premium Storage: High-performance storage for Azure Virtual Machine workloads
Introduction to Microsoft Azure Storage
Admin username and Admin password : A username and password. A new user is
created, for signing in to the virtual machine.
New or existing subnet : Determines whether a new virtual network and subnet are
created or an existing subnet is used. If you already have a virtual network that is
connected to your on-premises network, select Existing .
Subnet ID : If you want to deploy the VM into an existing VNet that has a subnet
defined to which the VM should be assigned, specify the ID of that specific subnet. The ID usually
looks like this: /subscriptions/<subscription id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network
name>/subnets/<subnet name>
3. Terms and conditions :
Review and accept the legal terms.
4. Select Purchase .
The Azure VM Agent is deployed by default when you use an image from the Azure Marketplace.
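Instead of filling in the template parameters in the portal, you can also deploy one of these quickstart templates with PowerShell. The following is only a sketch: the template path and the parameter names are placeholders, so check the azuredeploy.json of the template you select for the exact parameters it expects.

# Deploy a quickstart template (template URI and parameters are placeholders)
New-AzResourceGroupDeployment `
  -ResourceGroupName 'SAP-DEV-RG' `
  -TemplateUri 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/<template folder>/azuredeploy.json' `
  -TemplateParameterObject @{ '<parameter name>' = '<parameter value>' }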
Configure proxy settings
Depending on how your on-premises network is configured, you might need to set up the proxy on your
VM. If your VM is connected to your on-premises network via VPN or ExpressRoute, the VM might not
be able to access the Internet, and won't be able to download the required VM extensions or collect
Azure infrastructure information for the SAP Host agent via the SAP extension for Azure. For more
information, see Configure the proxy.
Join a domain (Windows only )
If your Azure deployment is connected to an on-premises Active Directory or DNS instance via an Azure
site-to-site VPN connection or ExpressRoute (this is called cross-premises in Azure Virtual Machines
planning and implementation for SAP NetWeaver), the VM is expected to join an on-premises
domain. For more information about considerations for this task, see Join a VM to an on-premises
domain (Windows only).
Configure VM Extension
To be sure SAP supports your environment, set up the Azure Extension for SAP as described in
Configure the Azure Extension for SAP. Check the prerequisites for SAP, and required minimum versions
of SAP Kernel and SAP Host Agent, in the resources listed in SAP resources.
VM extension for SAP check
Check whether the VM Extension for SAP is working, as described in Checks and Troubleshooting.
Post-deployment steps
After you create the VM and the VM is deployed, you need to install the required software components
in the VM. Because of the deployment/software installation sequence in this type of VM deployment, the
software to be installed must already be available, either in Azure, on another VM, or as a disk that can
be attached. Or, consider using a cross-premises scenario, in which connectivity to the on-premises
assets (installation shares) is given.
After you deploy your VM in Azure, follow the same guidelines and tools to install the SAP software on
your VM as you would in an on-premises environment. To install SAP software on an Azure VM, both
SAP and Microsoft recommend that you upload and store the SAP installation media on Azure VHDs or
Managed Disks, or that you create an Azure VM that works as a file server that has all the required SAP
installation media.
Scenario 2: Deploying a VM with a custom image for SAP
Because different versions of an operating system or DBMS have different patch requirements, the
images you find in the Azure Marketplace might not meet your needs. You might instead want to create
a VM by using your own OS/DBMS VM image, which you can deploy again later. You use different steps
to create a private image for Linux than to create one for Windows.

Windows
To prepare a Windows image that you can use to deploy multiple virtual machines, the Windows
settings (like Windows SID and hostname) must be abstracted or generalized on the on-premises
VM. You can use sysprep to do this.

Linux
To prepare a Linux image that you can use to deploy multiple virtual machines, some Linux settings
must be abstracted or generalized on the on-premises VM. You can use waagent -deprovision to do
this. For more information, see Capture a Linux virtual machine running on Azure and the Azure
Linux agent user guide.

You can prepare and create a custom image, and then use it to create multiple new VMs. This is
described in Azure Virtual Machines planning and implementation for SAP NetWeaver. Set up your
database content either by using SAP Software Provisioning Manager to install a new SAP system
(restores a database backup from a disk that's attached to the virtual machine) or by directly restoring a
database backup from Azure storage, if your DBMS supports it. For more information, see Azure Virtual
Machines DBMS deployment for SAP NetWeaver. If you have already installed an SAP system on your
on-premises VM (especially for two-tier systems), you can adapt the SAP system settings after the
deployment of the Azure VM by using the System Rename procedure supported by SAP Software
Provisioning Manager (SAP Note 1619720). Otherwise, you can install the SAP software after you
deploy the Azure VM.
The following flowchart shows the SAP-specific sequence of steps for deploying a VM from a custom
image:

Create a virtual machine by using the Azure portal


The easiest way to create a new virtual machine from a Managed Disk image is by using the Azure
portal. For more information on how to create a Managed Disk image, read Capture a managed image of
a generalized VM in Azure.
1. Go to
https://ms.portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.Compute%2Fimages.
Or, in the Azure portal menu, select Images .
2. Select the Managed Disk image you want to deploy and click on Create VM
The wizard guides you through setting the required parameters to create the virtual machine, in addition
to all required resources, like network interfaces and storage accounts. Some of these parameters are:
1. Basics :
Name : The name of the resource (the virtual machine name).
VM disk type : Select the disk type of the OS disk. If you want to use Premium Storage for
your data disks, we recommend using Premium Storage for the OS disk as well.
Username and password or SSH public key : Enter the username and password of the
user that is created during the provisioning. For a Linux virtual machine, you can enter the
public Secure Shell (SSH) key that you use to sign in to the machine.
Subscription : Select the subscription that you want to use to provision the new virtual
machine.
Resource group : The name of the resource group for the VM. You can enter either the name
of a new resource group or the name of a resource group that already exists.
Location : Where to deploy the new virtual machine. If you want to connect the virtual
machine to your on-premises network, make sure you select the location of the virtual
network that connects Azure to your on-premises network. For more information, see
Microsoft Azure networking in Azure Virtual Machines planning and implementation for SAP
NetWeaver.
2. Size :
For a list of supported VM types, see SAP Note 1928533. Be sure you select the correct VM type if
you want to use Azure Premium Storage. Not all VM types support Premium Storage. For more
information, see Storage: Microsoft Azure Storage and data disks and Azure storage for SAP
workloads in Azure Virtual Machines planning and implementation for SAP NetWeaver.
3. Settings :
Storage
Disk Type : Select the disk type of the OS disk. If you want to use Premium Storage for
your data disks, we recommend using Premium Storage for the OS disk as well.
Use managed disks : If you want to use Managed Disks, select Yes. For more
information about Managed Disks, see chapter Managed Disks in the planning guide.
Network
Virtual network and Subnet : To integrate the virtual machine with your intranet,
select the virtual network that is connected to your on-premises network.
Public IP address : Select the public IP address that you want to use, or enter
parameters to create a new public IP address. You can use a public IP address to access
your virtual machine over the Internet. Make sure that you also create a network
security group to help secure access to your virtual machine.
Network security group : For more information, see Control network traffic flow with
network security groups.
Extensions : You can install virtual machine extensions by adding them to the deployment.
You do not need to add extensions in this step. The extensions required for SAP support are
installed later. See chapter Configure the Azure Extension for SAP in this guide.
High Availability : Select an availability set, or enter the parameters to create a new
availability set. For more information, see Azure availability sets.
Monitoring
Boot diagnostics : You can select Disable for boot diagnostics.
Guest OS diagnostics : You can select Disable for monitoring diagnostics.
4. Summary :
Review your selections, and then select OK .
Your virtual machine is deployed in the resource group you selected.
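As an alternative to the portal, the following sketch creates a VM from a captured Managed Disk image with the Az module. The image, the network interface, and all other names are placeholders for resources that already exist in your subscription.

# Look up the captured image and an existing network interface (names are placeholders)
$image = Get-AzImage -ResourceGroupName 'SAP-IMAGES-RG' -ImageName 'sap-os-image'
$nic   = Get-AzNetworkInterface -ResourceGroupName 'SAP-DEV-RG' -Name 'sapapp02-nic'

# Build the VM configuration from the custom image and attach the existing NIC
$vmConfig = New-AzVMConfig -VMName 'sapapp02' -VMSize 'Standard_E16s_v3' |
    Set-AzVMSourceImage -Id $image.Id |
    Set-AzVMOperatingSystem -Linux -ComputerName 'sapapp02' -Credential (Get-Credential) |
    Add-AzVMNetworkInterface -Id $nic.Id

New-AzVM -ResourceGroupName 'SAP-DEV-RG' -Location 'westeurope' -VM $vmConfig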
Create a virtual machine by using a template
To create a deployment by using a private OS image from the Azure portal, use one of the following SAP
templates. These templates are published in the azure-quickstart-templates GitHub repository. You also
can manually create a virtual machine, by using PowerShell.
Two-tier configuration (only one vir tual machine) template (sap-2-tier-user-image)
To create a two-tier system by using only one virtual machine, use this template.
Two-tier configuration (only one vir tual machine) template - Managed Disk Image
(sap-2-tier-user-image-md)
To create a two-tier system by using only one virtual machine and a Managed Disk image, use
this template.
Three-tier configuration (multiple vir tual machines) template (sap-3-tier-user-image)
To create a three-tier system by using multiple virtual machines or your own OS image, use this
template.
Three-tier configuration (multiple vir tual machines) template - Managed Disk Image
(sap-3-tier-user-image-md)
To create a three-tier system by using multiple virtual machines or your own OS image and a
Managed Disk image, use this template.
In the Azure portal, enter the following parameters for the template:
1. Basics :
Subscription : The subscription to use to deploy the template.
Resource group : The resource group to use to deploy the template. You can create a new
resource group or select an existing resource group in the subscription.
Location : Where to deploy the template. If you selected an existing resource group, the
location of that resource group is used.
2. Settings :
SAP System ID : The SAP System ID.
OS type : The operating system type you want to deploy (Windows or Linux).
SAP system size : The size of the SAP system.
The number of SAPS the new system provides. If you are not sure how many SAPS the
system requires, ask your SAP Technology Partner or System Integrator.
System availability (three-tier template only): The system availability.
Select HA for a configuration that is suitable for a high-availability installation. Two
database servers and two servers for ASCS are created.
Storage type (two-tier template only): The type of storage to use.
For larger systems, we highly recommend using Azure Premium Storage. For more
information about storage types, see the following resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Storage structure of a VM for RDBMS Deployments
Premium Storage: High-performance storage for Azure virtual machine workloads
Introduction to Microsoft Azure Storage
User image VHD URI (unmanaged disk image template only): The URI of the private OS
image VHD, for example,
https://<accountname>.blob.core.windows.net/vhds/userimage.vhd.
User image storage account (unmanaged disk image template only): The name of the
storage account where the private OS image is stored, for example, <accountname> in
https://<accountname>.blob.core.windows.net/vhds/userimage.vhd.
userImageId (managed disk image template only): ID of the Managed Disk image you
want to use
Admin username and Admin password : The username and password.
A new user is created, for signing in to the virtual machine.
New or existing subnet : Determines whether a new virtual network and subnet is
created or an existing subnet is used. If you already have a virtual network that is
connected to your on-premises network, select Existing .
Subnet ID : If you want to deploy the VM into an existing VNet that has a subnet
defined to which the VM should be assigned, specify the ID of that specific subnet. The ID usually
looks like this: /subscriptions/<subscription id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network
name>/subnets/<subnet name>
3. Terms and conditions :
Review and accept the legal terms.
4. Select Purchase .
Install the VM Agent (Linux only )
To use the templates described in the preceding section, the Linux Agent must already be installed in the
user image, or the deployment will fail. Download and install the VM Agent in the user image as
described in Download, install, and enable the Azure VM Agent. If you don't use the templates, you also
can install the VM Agent later.
Join a domain (Windows only )
If your Azure deployment is connected to an on-premises Active Directory or DNS instance via an Azure
site-to-site VPN connection or Azure ExpressRoute (this is called cross-premises in Azure Virtual
Machines planning and implementation for SAP NetWeaver), the VM is expected to join an on-premises
domain. For more information about considerations for this step, see Join a VM to an on-premises
domain (Windows only).
Configure proxy settings
Depending on how your on-premises network is configured, you might need to set up the proxy on your
VM. If your VM is connected to your on-premises network via VPN or ExpressRoute, the VM might not
be able to access the Internet, and won't be able to download the required VM extensions or collect
Azure infrastructure information for the SAP Host agent via the SAP extension for Azure. For more
information, see Configure the proxy.
Configure Azure VM Extension for SAP
To be sure SAP supports your environment, set up the Azure Extension for SAP as described in
Configure the Azure Extension for SAP. Check the prerequisites for SAP, and required minimum versions
of SAP Kernel and SAP Host Agent, in the resources listed in SAP resources.
SAP VM Extension check
Check whether the VM Extension for SAP is working, as described in Checks and Troubleshooting.
Scenario 3: Moving an on-premises VM by using a non-generalized Azure VHD with SAP
In this scenario, you plan to move a specific SAP system from an on-premises environment to Azure. You
can do this by uploading the VHD that has the OS, the SAP binaries, and, if applicable, the DBMS binaries,
plus the VHDs with the data and log files of the DBMS, to Azure. Unlike the scenario described in
Scenario 2: Deploying a VM with a custom image for SAP, in this case, you keep the hostname, SAP SID,
and SAP user accounts in the Azure VM, because they were configured in the on-premises environment.
You do not need to generalize the OS. This scenario applies most often to cross-premises scenarios
where part of the SAP landscape runs on-premises and part of it runs on Azure.
In this scenario, the VM Agent is not automatically installed during deployment. Because the VM Agent
and the Azure Extension for SAP are required to run SAP NetWeaver on Azure, you need to download,
install, and enable both components manually after you create the virtual machine.
For more information about the Azure VM Agent, see the following resources.

Windows
Azure Virtual Machine Agent overview

Linux
Azure Linux Agent User Guide

The following flowchart shows the sequence of steps for moving an on-premises VM by using a non-
generalized Azure VHD:

If the disk is already uploaded and defined in Azure (see Azure Virtual Machines planning and
implementation for SAP NetWeaver), do the tasks described in the next few sections.
Create a virtual machine
To create a deployment by using a private OS disk through the Azure portal, use the SAP template
published in the azure-quickstart-templates GitHub repository. You also can manually create a virtual
machine, by using PowerShell.
Two-tier configuration (only one vir tual machine) template (sap-2-tier-user-disk)
To create a two-tier system by using only one virtual machine, use this template.
Two-tier configuration (only one vir tual machine) template - Managed Disk (sap-2-tier-
user-disk-md)
To create a two-tier system by using only one virtual machine and a Managed Disk, use this
template.
In the Azure portal, enter the following parameters for the template:
1. Basics :
Subscription : The subscription to use to deploy the template.
Resource group : The resource group to use to deploy the template. You can create a new
resource group or select an existing resource group in the subscription.
Location : Where to deploy the template. If you selected an existing resource group, the
location of that resource group is used.
2. Settings :
SAP System ID : The SAP System ID.
OS type : The operating system type you want to deploy (Windows or Linux).
SAP system size : The size of the SAP system.
The number of SAPS the new system provides. If you are not sure how many SAPS the
system requires, ask your SAP Technology Partner or System Integrator.
Storage type (two-tier template only): The type of storage to use.
For larger systems, we highly recommend using Azure Premium Storage. For more
information about storage types, see the following resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Storage structure of a VM for RDBMS Deployments
Premium Storage: High-performance storage for Azure Virtual Machine workloads
Introduction to Microsoft Azure Storage
OS disk VHD URI (unmanaged disk template only): The URI of the private OS disk, for
example, https://<accountname>.blob.core.windows.net/vhds/osdisk.vhd.
OS disk Managed Disk ID (managed disk template only): The ID of the Managed Disk
OS disk, /subscriptions/92d102f7-81a5-4df7-9877-
54987ba97dd9/resourceGroups/group/providers/Microsoft.Compute/disks/WIN
New or existing subnet : Determines whether a new virtual network and subnet are
created, or an existing subnet is used. If you already have a virtual network that is
connected to your on-premises network, select Existing .
Subnet ID : If you want to deploy the VM into an existing VNet that has a subnet
defined to which the VM should be assigned, specify the ID of that specific subnet. The ID usually
looks like this: /subscriptions/<subscription id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network
name>/subnets/<subnet name>
3. Terms and conditions :
Review and accept the legal terms.
4. Select Purchase .
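If you need to look up the Managed Disk ID or the subnet ID that the template asks for, you can query them with PowerShell. The resource names below are placeholders for the disk and virtual network that already exist in your subscription.

# ID of the uploaded OS disk (Managed Disk)
(Get-AzDisk -ResourceGroupName 'SAP-DEV-RG' -DiskName 'sapapp03-osdisk').Id

# ID of the subnet inside an existing virtual network
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'SAP-NET-RG' -Name 'SAP-VNET'
($vnet.Subnets | Where-Object { $_.Name -eq 'SAP-SUBNET' }).Id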
Install the VM Agent
To use the templates described in the preceding section, the VM Agent must be installed on the OS disk,
or the deployment will fail. Download and install the VM Agent in the VM, as described in Download,
install, and enable the Azure VM Agent.
If you don't use the templates described in the preceding section, you can also install the VM Agent
afterwards.
Join a domain (Windows only )
If your Azure deployment is connected to an on-premises Active Directory or DNS instance via an Azure
site-to-site VPN connection or ExpressRoute (this is called cross-premises in Azure Virtual Machines
planning and implementation for SAP NetWeaver), the VM is expected to join an on-premises
domain. For more information about considerations for this task, see Join a VM to an on-premises
domain (Windows only).
Configure proxy settings
Depending on how your on-premises network is configured, you might need to set up the proxy on your
VM. If your VM is connected to your on-premises network via VPN or ExpressRoute, the VM might not
be able to access the Internet, and won't be able to download the required VM extensions or collect
Azure infrastructure information for the SAP Host agent via the SAP extension for Azure. For more
information, see Configure the proxy.
Configure Azure VM Extension for SAP
To be sure SAP supports your environment, set up the Azure Extension for SAP as described in
Configure the Azure Extension for SAP. Check the prerequisites for SAP, and required minimum versions
of SAP Kernel and SAP Host Agent, in the resources listed in SAP resources.
SAP VM check
Check whether the VM extension for SAP is working, as described in Checks and Troubleshooting.

Update the configuration of Azure Extension for SAP


Update the configuration of Azure Extension for SAP in any of the following scenarios:
The joint Microsoft/SAP team extends the capabilities of the VM extension and requests more or
fewer counters.
Microsoft introduces a new version of the underlying Azure infrastructure that delivers the data, and
the Azure Extension for SAP needs to be adapted to those changes.
You mount additional data disks to your Azure VM or you remove a data disk. In this scenario, update
the collection of storage-related data. Changing your configuration by adding or deleting endpoints
or by assigning IP addresses to a VM does not affect the extension configuration.
You change the size of your Azure VM, for example, from size A5 to any other VM size.
You add new network interfaces to your Azure VM.
To update settings, update configuration of Azure Extension for SAP by following the steps in Configure
the Azure Extension for SAP.
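For example, after you attach an additional data disk to the VM, re-running the configuration cmdlet is usually sufficient to pick up the change. The resource names in the following sketch are placeholders.

Set-AzVMAEMExtension -ResourceGroupName 'SAP-DEV-RG' -VMName 'sapapp01'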

Detailed tasks for SAP software deployment


This section has detailed steps for doing specific tasks in the configuration and deployment process.
Deploy Azure PowerShell cmdlets
Follow the steps described in the article Install the Azure PowerShell module
Check frequently for updates to the PowerShell cmdlets, which usually are updated monthly. Follow the
steps described in this article. Unless stated otherwise in SAP Note 1928533 or SAP Note 2015553, we
recommend that you work with the latest version of Azure PowerShell cmdlets.
To check the version of the Azure PowerShell cmdlets that are installed on your computer, run this
PowerShell command:

(Get-Module Az.Compute).Version
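To update the installed Az modules to the latest released version, you can, for example, run the following PowerShellGet command. Depending on how the module was installed, an elevated prompt might be required.

Update-Module -Name Az -Force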
Deploy Azure CLI
Follow the steps described in the article Install the Azure CLI
Check frequently for updates to Azure CLI, which usually is updated monthly.
To check the version of Azure CLI that is installed on your computer, run this command:

az --version

Join a VM to an on-premises domain (Windows only)


If you deploy SAP VMs in a cross-premises scenario, where on-premises Active Directory and DNS are
extended in Azure, the VMs are expected to join an on-premises domain. The detailed steps you
take to join a VM to an on-premises domain, and the additional software required to be a member of an
on-premises domain, vary by customer. Usually, to join a VM to an on-premises domain, you need to
install additional software, like antimalware software, and backup or monitoring software.
In this scenario, you also need to make sure that if Internet proxy settings are forced when a VM joins a
domain in your environment, the Windows Local System Account (S-1-5-18) in the Guest VM has the
same proxy settings. The easiest option is to force the proxy by using a domain Group Policy, which
applies to systems in the domain.
Download, install, and enable the Azure VM Agent
For virtual machines that are deployed from an OS image that is not generalized (for example, an image
that doesn't originate in the Windows System Preparation, or sysprep, tool), you need to manually
download, install, and enable the Azure VM Agent.
If you deploy a VM from the Azure Marketplace, this step is not required. Images from the Azure
Marketplace already have the Azure VM Agent.
Windows
1. Download the Azure VM Agent:
a. Download the Azure VM Agent installer package.
b. Store the VM Agent MSI package locally on a personal computer or server.
2. Install the Azure VM Agent:
a. Connect to the deployed Azure VM by using Remote Desktop Protocol (RDP).
b. Open a Windows Explorer window on the VM and select the target directory for the MSI file of
the VM Agent.
c. Drag the Azure VM Agent Installer MSI file from your local computer/server to the target
directory of the VM Agent on the VM.
d. Double-click the MSI file on the VM.
3. For VMs that are joined to on-premises domains, make sure that any Internet proxy settings also
apply to the Windows Local System account (S-1-5-18) in the VM, as described in Configure the
proxy. The VM Agent runs in this context and needs to be able to connect to Azure.
No user interaction is required to update the Azure VM Agent. The VM Agent is automatically updated,
and does not require a VM restart.
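To verify on the Windows VM that the agent was installed and is running, you can check its services from PowerShell. This is only a sketch; the service names can differ between agent versions.

Get-Service -Name 'WindowsAzureGuestAgent', 'RdAgent' -ErrorAction SilentlyContinue |
    Select-Object Name, Status, StartType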
Linux
Use the following commands to install the VM Agent for Linux:
SUSE Linux Enterprise Ser ver (SLES)

sudo zypper install WALinuxAgent


Red Hat Enterprise Linux (RHEL) or Oracle Linux

sudo yum install WALinuxAgent

If the agent is already installed, to update the Azure Linux Agent, do the steps described in Update the
Azure Linux Agent on a VM to the latest version from GitHub.
Configure the proxy
The steps you take to configure the proxy in Windows are different from the way you configure the
proxy in Linux.
Windows
Proxy settings must be set up correctly for the Local System account to access the Internet. If your proxy
settings are not set by Group Policy, you can configure the settings for the Local System account.
1. Go to Start , enter gpedit.msc , and then press Enter .
2. Select Computer Configuration > Administrative Templates > Windows Components >
Internet Explorer . Make sure that the setting Make proxy settings per-machine (rather than
per-user) is disabled or not configured.
3. In Control Panel , go to Network and Sharing Center > Internet Options .
4. On the Connections tab, select the LAN settings button.
5. Clear the Automatically detect settings check box.
6. Select the Use a proxy server for your LAN check box, and then enter the proxy address and
port.
7. Select the Advanced button.
8. In the Exceptions box, enter the IP address 168.63.129.16 . Select OK .
Linux
Configure the correct proxy in the configuration file of the Microsoft Azure Guest Agent, which is located
at /etc/waagent.conf.
Set the following parameters:
1. HTTP proxy host . For example, set it to proxy.corp.local .

HttpProxy.Host=<proxy host>

2. HTTP proxy por t . For example, set it to 80 .

HttpProxy.Port=<port of the proxy host>

3. Restart the agent.

sudo service waagent restart

The proxy settings in /etc/waagent.conf also apply to the required VM extensions. If you want to use the
Azure repositories, make sure that the traffic to these repositories is not going through your on-
premises intranet. If you created user-defined routes to enable forced tunneling, make sure that you add
a route that routes traffic to the repositories directly to the Internet, and not through your site-to-site
VPN connection.
SLES
You also need to add routes for the IP addresses listed in /etc/regionserverclnt.cfg. The following
figure shows an example:

RHEL
You also need to add routes for the IP addresses of the hosts listed in
/etc/yum.repos.d/rhui-load-balancers. For an example, see the preceding figure.
Oracle Linux
There are no repositories for Oracle Linux on Azure. You need to configure your own repositories
for Oracle Linux or use the public repositories.
For more information about user-defined routes, see User-defined routes and IP forwarding.
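As an illustration of such a route, the following sketch adds a user-defined route that sends traffic for one repository address directly to the Internet instead of through the forced tunnel. The route table name and the address prefix are placeholders; use the addresses taken from the files mentioned above.

# Add a direct-to-Internet route for a repository address to an existing route table
$rt = Get-AzRouteTable -ResourceGroupName 'SAP-NET-RG' -Name 'SAP-UDR'
Add-AzRouteConfig -RouteTable $rt -Name 'to-update-repository' -AddressPrefix '<repository IP address>/32' -NextHopType Internet
Set-AzRouteTable -RouteTable $rt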
Configure the Azure Extension for SAP

NOTE
General Support Statement: Please always open an incident with SAP on component BC-OP-NT-AZR for
Windows or BC-OP-LNX-AZR if you need support for the Azure Extension for SAP. There are dedicated Microsoft
support engineers working in the SAP support system to help our joint customers.
When you've prepared the VM as described in Deployment scenarios of VMs for SAP on Azure, the
Azure VM Agent is installed on the virtual machine. The next step is to deploy the Azure Extension for
SAP, which is available in the Azure Extension Repository in the global Azure datacenters. For more
information, see Azure Virtual Machines planning and implementation for SAP NetWeaver.
We are in the process of releasing a new version of the Azure Extension for SAP. The new extension uses
the system assigned identity of the virtual machine to get information about the attached disks, network
interfaces and the virtual machine itself. To be able to access these resources, the system identity of the
virtual machine needs Reader permission for the virtual machine, OS disk, data disks and network
interfaces. We currently recommend installing the new extension only in the following scenarios:
1. You want to install the extension with Terraform, Azure Resource Manager Templates or with other
means than Azure CLI or Azure PowerShell
2. You want to install the extension on SUSE SLES 15 or higher.
3. Microsoft or SAP support asks you to install the new extension
4. You want to use Azure Ultra Disk or Standard Managed Disks
For these scenarios, follow the steps in chapter Configure the new Azure Extension for SAP with Azure
PowerShell for Azure PowerShell or Configure the new Azure Extension for SAP with Azure CLI for Azure
CLI.
Follow Azure PowerShell or Azure CLI to install and configure the standard version of the Azure
Extension for SAP.
Azure PowerShell for Linux and Windows VMs
To install the Azure Extension for SAP by using PowerShell:
1. Make sure that you have installed the latest version of the Azure PowerShell cmdlet. For more
information, see Deploying Azure PowerShell cmdlets.
2. Run the following PowerShell cmdlet. For a list of available environments, run cmdlet
Get-AzEnvironment . If you want to use global Azure, your environment is AzureCloud . For Azure
China 21Vianet, select AzureChinaCloud .

$env = Get-AzEnvironment -Name <name of the environment>


Connect-AzAccount -Environment $env
Set-AzContext -SubscriptionName <subscription name>

Set-AzVMAEMExtension -ResourceGroupName <resource group name> -VMName <virtual machine name>

After you enter your account data, the script deploys the required extensions and enables the required
features. This can take several minutes. For more information about Set-AzVMAEMExtension , see Set-
AzVMAEMExtension.

The Set-AzVMAEMExtension configuration does all the steps to configure host data collection for SAP.
The script output includes the following information:
Confirmation that data collection for the OS disk and all additional data disks has been configured.
The next two messages confirm the configuration of Storage Metrics for a specific storage account.
One line of output gives the status of the actual update of the VM Extension for SAP configuration.
Another line of output confirms that the configuration has been deployed or updated.
The last line of output is informational. It shows your options for testing the VM Extension for SAP
configuration.
To check that all steps of Azure VM Extension for SAP configuration have been executed successfully,
and that the Azure Infrastructure provides the necessary data, proceed with the readiness check for
the Azure Extension for SAP, as described in Readiness check for Azure Extension for SAP.
Wait 15-30 minutes for Azure Diagnostics to collect the relevant data.
Azure CLI for Linux VMs
To install the Azure Extension for SAP by using Azure CLI:
1. Install Azure classic CLI, as described in Install the Azure classic CLI.
2. Sign in with your Azure account:

azure login

3. Switch to Azure Resource Manager mode:

azure config mode arm

4. Enable Azure Extension for SAP:

azure vm enable-aem <resource-group-name> <vm-name>

5. Install using Azure CLI 2.0


a. Install Azure CLI 2.0, as described in Install Azure CLI 2.0.
b. Sign in with your Azure account:

az login

c. Install Azure CLI AEM Extension

az extension add --name aem

d. Install the extension with

az vm aem set -g <resource-group-name> -n <vm name>

6. Verify that the Azure Extension for SAP is active on the Azure Linux VM. Check whether the file
/var/lib/AzureEnhancedMonitor/PerfCounters exists. If it exists, at a command prompt, run this
command to display information collected by the Azure Extension for SAP:
cat /var/lib/AzureEnhancedMonitor/PerfCounters

The output looks like this:

...
2;cpu;Current Hw Frequency;;0;2194.659;MHz;60;1444036656;saplnxmon;
2;cpu;Max Hw Frequency;;0;2194.659;MHz;0;1444036656;saplnxmon;
...

Configure the new Azure Extension for SAP with Azure PowerShell
The new VM Extension for SAP uses a Managed Identity assigned to the VM to access monitoring and
configuration data of the VM. To install the new Azure Extension for SAP by using PowerShell, you first
have to assign such an identity to the VM and grant that identity access to all resources that are in use
by that VM, for example disks and network interfaces.

NOTE
The following steps require Owner privileges over the resource group or individual resources (virtual machine,
data disks etc.)

1. Make sure to use SAP Host Agent 7.21 PL 47 or higher.


2. Make sure to uninstall the current version of the VM Extension for SAP. It is not supported to
install both versions of the VM Extension for SAP on the same virtual machine.
3. Make sure that you have installed the latest version of the Azure PowerShell cmdlet (at least
4.3.0). For more information, see Deploying Azure PowerShell cmdlets.
4. Run the following PowerShell cmdlet. For a list of available environments, run cmdlet
Get-AzEnvironment . If you want to use global Azure, your environment is AzureCloud . For Azure
China 21Vianet, select AzureChinaCloud .

$env = Get-AzEnvironment -Name <name of the environment>

Connect-AzAccount -Environment $env
Set-AzContext -SubscriptionName <subscription name>

Set-AzVMAEMExtension -ResourceGroupName <resource group name> -VMName <virtual machine name> -InstallNewExtension
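If the system-assigned identity and the Reader role assignment described above are not yet in place for the VM, you can prepare them with PowerShell before running the cmdlet. This is a sketch under the assumption that a resource-group-level Reader assignment is acceptable, analogous to the Azure CLI example later in this guide; the resource names are placeholders.

# Enable the system-assigned managed identity on the VM
$vm = Get-AzVM -ResourceGroupName 'SAP-DEV-RG' -Name 'sapapp01'
Update-AzVM -ResourceGroupName 'SAP-DEV-RG' -VM $vm -IdentityType SystemAssigned

# Grant that identity Reader permissions on the resource group of the VM
$vm = Get-AzVM -ResourceGroupName 'SAP-DEV-RG' -Name 'sapapp01'   # re-read to pick up the principal ID
$rg = Get-AzResourceGroup -Name 'SAP-DEV-RG'
New-AzRoleAssignment -ObjectId $vm.Identity.PrincipalId -RoleDefinitionName 'Reader' -Scope $rg.ResourceId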

Configure the new Azure Extension for SAP with Azure CLI
The new VM Extension for SAP uses a Managed Identity assigned to the VM to access monitoring and
configuration data of the VM. To install the new Azure Extension for SAP by using Azure CLI, you first
have to assign such an identity to the VM and grant that identity access to all resources that are in use
by that VM, for example disks and network interfaces.

NOTE
The following steps require Owner privileges over the resource group or individual resources (virtual machine,
data disks etc.)

1. Make sure to use SAP Host Agent 7.21 PL 47 or higher.


2. Make sure to uninstall the current version of the VM Extension for SAP. It is not supported to
install both versions of the VM Extension for SAP on the same virtual machine.
3. Install Azure CLI 2.0, as described in Install Azure CLI 2.0.
4. Sign in with your Azure account:

az login

5. Follow the steps in the Configure managed identities for Azure resources on an Azure VM using
Azure CLI article to enable a System-Assigned Managed Identity to the VM. User-Assigned
Managed Identities are not supported by the VM extension for SAP. However, you can enable
both, a system-assigned and a user-assigned identity.
Example:

az vm identity assign -g <resource-group-name> -n <vm name>

6. Assign the Managed Identity access to the resource group of the VM or to all network interfaces,
managed disks and the VM itself as described in Assign a managed identity access to a resource
using Azure CLI
Example:

spID=$(az resource show -g <resource-group-name> -n <vm name> --query identity.principalId --out tsv --resource-type Microsoft.Compute/virtualMachines)
rgId=$(az group show -g <resource-group-name> --query id --out tsv)
az role assignment create --assignee $spID --role 'Reader' --scope $rgId

7. Run the following Azure CLI command to install the Azure Extension for SAP. The extension is
currently only supported in AzureCloud. Azure China 21Vianet, Azure Government or any of the
other special environments are not yet supported.

# For Linux machines
az vm extension set --publisher Microsoft.AzureCAT.AzureEnhancedMonitoring --name MonitorX64Linux --version 1.0 -g <resource-group-name> --vm-name <vm name> --settings '{"system":"SAP"}'

# For Windows machines
az vm extension set --publisher Microsoft.AzureCAT.AzureEnhancedMonitoring --name MonitorX64Windows --version 1.0 -g <resource-group-name> --vm-name <vm name> --settings '{"system":"SAP"}'

Checks and Troubleshooting


After you have deployed your Azure VM and set up the relevant Azure Extension for SAP, check whether
all the components of the extension are working as expected.
Run the readiness check for the Azure Extension for SAP as described in Readiness check for the Azure
Extension for SAP. If all readiness check results are positive and all relevant performance counters
appear OK, Azure Extension for SAP has been set up successfully. You can proceed with the installation of
SAP Host Agent as described in the SAP Notes in SAP resources. If the readiness check indicates that
counters are missing, run the health check for the Azure Extension for SAP, as described in Health check
for Azure Extension for SAP configuration. For more troubleshooting options, see Troubleshooting Azure
Extension for SAP.
Readiness check for the Azure Extension for SAP
NOTE
There are two versions of the VM extension. This chapter covers the default VM extension. If you have installed
the new VM extension, please see chapter Readiness check for the new Azure Extension for SAP

This check makes sure that all performance metrics that appear inside your SAP application are provided
by the underlying Azure Extension for SAP.
Run the readiness check on a Windows VM
1. Sign in to the Azure virtual machine (using an admin account is not necessary).
2. Open a Command Prompt window.
3. At the command prompt, change the directory to the installation folder of the Azure Extension for
SAP:
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\
<version>\drop
The version in the path to the extension might vary. If you see folders for multiple versions of the
extension in the installation folder, check the configuration of the AzureEnhancedMonitoring
Windows service, and then switch to the folder indicated as Path to executable.

4. At the command prompt, run azperflib.exe without any parameters.

NOTE
Azperflib.exe runs in a loop and updates the collected counters every 60 seconds. To end the loop, close
the Command Prompt window.

If the Azure Extension for SAP is not installed, or the AzureEnhancedMonitoring service is not running,
the extension has not been configured correctly. For detailed information about how to deploy the
extension, see Troubleshooting the Azure Extension for SAP.

NOTE
Azperflib.exe is not intended for your own use. It is a component that exclusively delivers Azure
infrastructure data related to the VM to the SAP Host Agent.
Check the output of azperflib.exe

Azperflib.exe output shows all populated Azure performance counters for SAP. At the bottom of the list
of collected counters, a summary and health indicator show the status of Azure Extension for SAP.

Check the result returned for the Counters total output, which is reported as empty, and for Health
status , shown in the preceding figure.
Interpret the resulting values as follows:

AZPERFLIB.EXE RESULT VALUES and their meaning for the Azure Extension for SAP health status:

API Calls - not available : Counters that are not available might be either not applicable to the virtual machine configuration, or are errors. See Health status .

Counters total - empty : The following two Azure storage counters can be empty: Storage Read Op Latency Server msec and Storage Read Op Latency E2E msec. All other counters must have values.

Health status : Only OK if the return status shows OK .

Diagnostics : Detailed information about health status.

If the Health status value is not OK , follow the instructions in Health check for Azure Extension for SAP
configuration.
Run the readiness check on a Linux VM
1. Connect to the Azure Virtual Machine by using SSH.
2. Check the output of the Azure Extension for SAP.
a. Run more /var/lib/AzureEnhancedMonitor/PerfCounters

Expected result : Returns list of performance counters. The file should not be empty.
b. Run cat /var/lib/AzureEnhancedMonitor/PerfCounters | grep Error
Expected result : Returns one line where the error is none , for example,
3;config;Error;;0;0;none;0;1456416792;tst-servercs;
c. Run more /var/lib/AzureEnhancedMonitor/LatestErrorRecord

Expected result : Returns as empty or does not exist.


If the preceding check was not successful, run these additional checks:
1. Make sure that the waagent is installed and enabled.
a. Run sudo ls -al /var/lib/waagent/

Expected result : Lists the content of the waagent directory.


b. Run ps -ax | grep waagent

Expected result : Displays one entry similar to: python /usr/sbin/waagent -daemon

2. Make sure that the Azure Extension for SAP is installed and running.
a. Run
sudo sh -c 'ls -al /var/lib/waagent/Microsoft.OSTCExtensions.AzureEnhancedMonitorForLinux-*/'

Expected result : Lists the content of the Azure Extension for SAP directory.
b. Run ps -ax | grep AzureEnhanced

Expected result : Displays one entry similar to:


python /var/lib/waagent/Microsoft.OSTCExtensions.AzureEnhancedMonitorForLinux-
2.0.0.2/handler.py daemon

3. Install SAP Host Agent as described in SAP Note 1031096, and check the output of saposcol .
a. Run /usr/sap/hostctrl/exe/saposcol -d

b. Run dump ccm

c. Check whether the Virtualization_Configuration\Enhanced Monitoring Access metric is true .
If you already have an SAP NetWeaver ABAP application server installed, open transaction ST06 and
check whether monitoring is enabled.
If any of these checks fail, and for detailed information about how to redeploy the extension, see
Troubleshooting the Azure Extension for SAP.
Readiness check for the new Azure Extension for SAP

NOTE
There are two versions of the VM extension. This chapter covers the new VM extension. If you have installed the
default VM extension, please see chapter Readiness check for Azure Extension for SAP.

This check makes sure that all performance metrics that appear inside your SAP application are provided
by the underlying Azure Extension for SAP.
Run the readiness check on a Windows VM
1. Sign in to the Azure virtual machine (using an admin account is not necessary).
2. Open a web browser and navigate to http://127.0.0.1:11812/azure4sap/metrics
3. The browser should display or download an XML file that contains the monitoring data of your
virtual machine. If that is not the case, make sure that the Azure Extension for SAP is installed.
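You can also fetch the same document without a browser, for example from a PowerShell prompt on the VM; the endpoint is the one named in the step above.

Invoke-WebRequest -Uri 'http://127.0.0.1:11812/azure4sap/metrics' -UseBasicParsing |
    Select-Object -ExpandProperty Content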
Check the content of the XML file

The XML file that you can access at http://127.0.0.1:11812/azure4sap/metrics contains all populated
Azure performance counters for SAP. It also contains a summary and health indicator of the status of
Azure Extension for SAP.
Check the value of the Provider Health Description element. If the value is not OK , follow the
instructions in Health check for new Azure Extension for SAP configuration.
Run the readiness check on a Linux VM
1. Connect to the Azure Virtual Machine by using SSH.
2. Check the output of the following command

curl http://127.0.0.1:11812/azure4sap/metrics

Expected result : Returns an XML document that contains the monitoring information of the
virtual machine, its disks and network interfaces.
If the preceding check was not successful, run these additional checks:
1. Make sure that the waagent is installed and enabled.
a. Run sudo ls -al /var/lib/waagent/

Expected result : Lists the content of the waagent directory.


b. Run ps -ax | grep waagent

Expected result : Displays one entry similar to: python /usr/sbin/waagent -daemon

2. Make sure that the Azure Extension for SAP is installed and running.
a. Run
sudo sh -c 'ls -al
/var/lib/waagent/Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Linux-*/'

Expected result : Lists the content of the Azure Extension for SAP directory.
b. Run ps -ax | grep AzureEnhanced

Expected result : Displays one entry similar to:


/var/lib/waagent/Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Linux-
1.0.0.82/AzureEnhancedMonitoring -monitor

3. Install SAP Host Agent as described in SAP Note 1031096, and check the output of saposcol .
a. Run /usr/sap/hostctrl/exe/saposcol -d

b. Run dump ccm

c. Check whether the Virtualization_Configuration\Enhanced Monitoring Access metric is true .
If you already have an SAP NetWeaver ABAP application server installed, open transaction ST06 and
check whether monitoring is enabled.
If any of these checks fail, and for detailed information about how to redeploy the extension, see
Troubleshooting the new Azure Extension for SAP.
Health check for the Azure Extension for SAP configuration
NOTE
There are two versions of the VM extension. This chapter covers the default VM extension. If you have installed
the new VM extension, please see chapter Health check for the new Azure Extension for SAP configuration.

If some of the infrastructure data is not delivered correctly as indicated by the test described in
Readiness check for Azure Extension for SAP, run the Test-AzVMAEMExtension cmdlet to check whether
the Azure infrastructure and the Azure Extension for SAP are configured correctly.
1. Make sure that you have installed the latest version of the Azure PowerShell cmdlet, as described
in Deploying Azure PowerShell cmdlets.
2. Run the following PowerShell cmdlet. For a list of available environments, run the cmdlet
Get-AzEnvironment . To use global Azure, select the AzureCloud environment. For Azure China
21Vianet, select AzureChinaCloud .

$env = Get-AzEnvironment -Name <name of the environment>


Connect-AzAccount -Environment $env
Set-AzContext -SubscriptionName <subscription name>
Test-AzVMAEMExtension -ResourceGroupName <resource group name> -VMName <virtual machine name>

3. The script tests the configuration of the virtual machine you select.

Make sure that every health check result is OK . If some checks do not display OK , run the update cmdlet
as described in Configure the Azure Extension for SAP. Wait 15 minutes, and repeat the checks described
in Readiness check for Azure Extension for SAP and Health check for Azure Extension for SAP
configuration. If the checks still indicate a problem with some or all counters, see Troubleshooting the
Azure Extension for SAP.

NOTE
You might see warnings in cases where you use Standard Managed Disks. The warnings are displayed
instead of the tests returning "OK". This behavior is normal and intended for that disk type. See also
Troubleshooting the Azure Extension for SAP.

Health check for the new Azure Extension for SAP configuration

NOTE
There are two versions of the VM extension. This chapter covers the new VM extension. If you have installed the
default VM extension, please see chapter Health check for the Azure Extension for SAP configuration.
If some of the infrastructure data is not delivered correctly as indicated by the test described in
Readiness check for Azure Extension for SAP, run the Get-AzVMExtension cmdlet to check whether the
Azure Extension for SAP is installed. The Test-AzVMAEMExtension does not yet support the new extension.
Once the cmdlet supports the new extension, we will update this article.
1. Make sure that you have installed the latest version of the Azure PowerShell cmdlet, as described
in Deploying Azure PowerShell cmdlets.
2. Run the following PowerShell cmdlet. For a list of available environments, run the cmdlet
Get-AzEnvironment . To use global Azure, select the AzureCloud environment. For Azure China
21Vianet, select AzureChinaCloud .

$env = Get-AzEnvironment -Name <name of the environment>


Connect-AzAccount -Environment $env
Set-AzContext -SubscriptionName <subscription name>
Test-AzVMAEMExtension -ResourceGroupName <resource group name> -VMName <virtual machine name>

3. The cmdlet tests the configuration of the VM extension for SAP on the virtual machine you select.
Troubleshooting Azure Extension for SAP

NOTE
There are two versions of the VM extension. This chapter covers the default VM extension. If you have installed
the new VM extension, please see chapter Troubleshooting the new Azure Extension for SAP.

Azure performance counters do not show up at all


The AzureEnhancedMonitoring Windows service collects performance metrics in Azure. If the service
has not been installed correctly or if it is not running in your VM, no performance metrics can be
collected.
The installation directory of the Azure Extension for SAP is empty
Issue

The installation directory


C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\
<version>\drop is empty.
Solution

The extension is not installed. Determine whether this is a proxy issue (as described earlier). You might
need to restart the machine or rerun the Set-AzVMAEMExtension configuration script.
Service for Azure Extension for SAP does not exist
Issue

The AzureEnhancedMonitoring Windows service does not exist.


Azperflib.exe output throws an error.

Solution

If the service does not exist, the Azure Extension for SAP has not been installed correctly. Redeploy the
extension by using the steps described for your deployment scenario in Deployment scenarios of VMs
for SAP in Azure.
After you deploy the extension, wait one hour, and then check again whether the Azure performance counters
are provided in the Azure VM.
Service for Azure Extension for SAP exists, but fails to start
Issue

The AzureEnhancedMonitoring Windows service exists and is enabled, but fails to start. For more
information, check the application event log.
Solution

The configuration is incorrect. Restart the Azure Extension for SAP in the VM, as described in Configure
the Azure Extension for SAP.
Some Azure performance counters are missing
The AzureEnhancedMonitoring Windows service collects performance metrics in Azure. The service gets
data from several sources. Some configuration data is collected locally, and some performance metrics
are read from Azure Diagnostics. Storage counters come from the logging enabled at the storage
subscription level.
If troubleshooting by using SAP Note 1999351 doesn't resolve the issue, rerun the
Set-AzVMAEMExtension configuration script. You might have to wait an hour because storage analytics or
diagnostics counters might not be created immediately after they are enabled. If the problem persists,
open an SAP customer support message on the component BC-OP-NT-AZR for Windows or BC-OP-
LNX-AZR for a Linux virtual machine.

Azure performance counters do not show up at all


Performance metrics in Azure are collected by a daemon. If the daemon is not running, no performance
metrics can be collected.
The installation directory of the Azure Extension for SAP is empty
Issue

The directory /var/lib/waagent/ does not have a subdirectory for the Azure Extension for SAP.
Solution

The extension is not installed. Determine whether this is a proxy issue (as described earlier). You might
need to restart the machine and/or rerun the Set-AzVMAEMExtension configuration script.
The execution of Set-AzVMAEMExtension and Test-AzVMAEMExtension shows warning messages stating that Standard Managed
Disks are not supported
Issue

When executing Set-AzVMAEMExtension or Test-AzVMAEMExtension messages like these are shown:

WARNING: [WARN] Standard Managed Disks are not supported. Extension will be installed but no disk
metrics will be available.
WARNING: [WARN] Standard Managed Disks are not supported. Extension will be installed but no disk
metrics will be available.
WARNING: [WARN] Standard Managed Disks are not supported. Extension will be installed but no disk
metrics will be available.

When you run azperflib.exe as described earlier, the result can indicate a non-healthy state.
Solution

The messages are caused by the fact that Standard Managed Disks do not deliver the APIs used by
the Azure Extension for SAP to check on statistics of Standard Azure Storage Accounts. This is not a
matter of concern. The collection of data for Standard Disk Storage accounts was originally introduced
because of frequent throttling of inputs and outputs. Managed disks avoid such throttling by
limiting the number of disks in a storage account. Therefore, not having that type of data is not
critical.

Some Azure performance counters are missing


Performance metrics in Azure are collected by a daemon, which gets data from several sources. Some
configuration data is collected locally, and some performance metrics are read from Azure Diagnostics.
Storage counters come from the logs in your storage subscription.
For a complete and up-to-date list of known issues, see SAP Note 1999351, which has additional
troubleshooting information for Azure Extension for SAP.
If troubleshooting by using SAP Note 1999351 does not resolve the issue, rerun the
Set-AzVMAEMExtension configuration script as described in Configure the Azure Extension for SAP. You
might have to wait for an hour because storage analytics or diagnostics counters might not be created
immediately after they are enabled. If the problem persists, open an SAP customer support message on
the component BC-OP-NT-AZR for Windows or BC-OP-LNX-AZR for a Linux virtual machine.
Troubleshooting the new Azure Extension for SAP

NOTE
There are two versions of the VM extension. This chapter covers the new VM extension. If you have installed the
default VM extension, please see chapter Troubleshooting the Azure Extension for SAP.

Azure performance counters do not show up at all


The AzureEnhancedMonitoring process collects performance metrics in Azure. If the process is not
running in your VM, no performance metrics can be collected.
The installation directory of the Azure Extension for SAP is empty
Issue

The installation directory


C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Windows\<version> is
empty.
Solution

The extension is not installed. Determine whether this is a proxy issue (as described earlier). You might
need to restart the machine or install the VM extension again.
Some Azure performance counters are missing
The AzureEnhancedMonitoring Windows process collects performance metrics in Azure. The process
gets data from several sources. Some configuration data is collected locally, and some performance
metrics are read from Azure Monitor.
If troubleshooting by using SAP Note 1999351 does not resolve the issue, open an SAP customer
support message on the component BC-OP-NT-AZR for Windows or BC-OP-LNX-AZR for a Linux
virtual machine. Please attach the log file
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Windows\
<version>\logapp.txt to the incident.

Azure performance counters do not show up at all


Performance metrics in Azure are collected by a daemon. If the daemon is not running, no performance
metrics can be collected.
The installation directory of the Azure Extension for SAP is empty
Issue

The directory /var/lib/waagent/ does not have a subdirectory for the Azure Extension for SAP.
Solution

The extension is not installed. Determine whether this is a proxy issue (as described earlier). You might
need to restart the machine and/or install the VM extension again.

Some Azure performance counters are missing


Performance metrics in Azure are collected by a daemon, which gets data from several sources. Some
configuration data is collected locally, and some performance metrics are read from Azure Monitor.
For a complete and up-to-date list of known issues, see SAP Note 1999351, which has additional
troubleshooting information for Azure Extension for SAP.
If troubleshooting by using SAP Note 1999351 does not resolve the issue, install the extension again as
described in Configure the Azure Extension for SAP. If the problem persists, open an SAP customer
support message on the component BC-OP-NT-AZR for Windows or BC-OP-LNX-AZR for a Linux
virtual machine. Please attach the log file
/var/lib/waagent/Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Linux-<version>/logapp.txt
to the incident.

Azure Extension Error Codes


ERROR ID | ERROR DESCRIPTION | SOLUTION
cfg/018 | App configuration is missing. | Run setup script
cfg/019 | No deployment ID in app config. | Contact support
cfg/020 | No RoleInstanceId in app config. | Contact support
cfg/022 | No RoleInstanceId in app config. | Contact support
cfg/031 | Cannot read Azure configuration. | Contact support
cfg/021 | App configuration file is missing. | Run setup script
cfg/015 | No VM size in app config. | Run setup script
cfg/016 | GlobalMemoryStatusEx counter failed. | Contact support
cfg/023 | MaxHwFrequency counter failed. | Contact support
cfg/024 | NIC counters failed. | Contact support
cfg/025 | Disk mapping counter failed. | Contact support
cfg/026 | Processor name counter failed. | Contact support
cfg/027 | Disk mapping counter failed. | Contact support
cfg/038 | The metric 'Disk type' is missing in the extension configuration file config.xml. 'Disk type', along with some other counters, was introduced in v2.2.0.68 on 12/16/2015. If you deployed the extension prior to 12/16/2015, it uses the old configuration file. The Azure extension framework automatically upgrades the extension to a newer version, but the config.xml remains unchanged. To update the configuration, download and execute the latest PowerShell setup script. | Run setup script
cfg/039 | No disk caching. | Run setup script
cfg/036 | No disk SLA throughput. | Run setup script
cfg/037 | No disk SLA IOPS. | Run setup script
cfg/028 | Disk mapping counter failed. | Contact support
cfg/029 | Last hardware change counter failed. | Contact support
cfg/030 | NIC counters failed. | Contact support
cfg/017 | Due to sysprep of the VM, your Windows SID has changed. | Redeploy after sysprep
str/007 | Access to the storage analytics failed. Because population of storage analytics data on a newly created VM may need up to half an hour, the error might disappear after some time. If the error still appears, re-run the setup script. | Run setup script
str/010 | No Storage Analytics counters. | Run setup script
str/009 | Storage Analytics failed. | Run setup script
wad/004 | Bad WAD configuration. | Run setup script
wad/002 | Unexpected WAD format. | Contact support
wad/001 | No WAD counters found. | Run setup script
wad/040 | Stale WAD counters found. | Contact support
wad/003 | Cannot read WAD table. There is no connection to the WAD table. There can be several causes: 1) outdated configuration, 2) no network connection to Azure, 3) issues with WAD setup. | Run setup script, fix internet connection, or contact support
prf/011 | Perfmon NIC metrics failed. | Contact support
prf/012 | Perfmon disk metrics failed. | Contact support
prf/013 | Some Perfmon metrics failed. | Contact support
prf/014 | Perfmon failed to create a counter. | Contact support
cfg/035 | No metric providers configured. | Contact support
str/006 | Bad Storage Analytics config. | Run setup script
str/032 | Storage Analytics metrics failed. | Run setup script
cfg/033 | One of the metric providers failed. | Run setup script
str/034 | Provider thread failed. | Contact support

Detailed Guidelines on Solutions Provided


Run the setup script
Follow the steps in chapter Configure the Azure Extension for SAP in this guide to install the extension
again. Note that some counters might need up to 30 minutes for provisioning.
If the errors do not disappear, contact support.
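As an illustration only, the following is a minimal PowerShell sketch of re-running the extension setup; the subscription, resource group, and VM names are placeholders, and the sketch assumes the Az module is installed and you are already signed in.

# Re-run the Azure Extension for SAP setup for the affected VM.
# <subscription name>, <resource group name>, and <virtual machine name> are placeholders.
Set-AzContext -SubscriptionName "<subscription name>"
Set-AzVMAEMExtension -ResourceGroupName "<resource group name>" -VMName "<virtual machine name>"

# Allow roughly 15-30 minutes for counters to be provisioned, then re-check the configuration.
Test-AzVMAEMExtension -ResourceGroupName "<resource group name>" -VMName "<virtual machine name>"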
Contact Support
If you hit an unexpected error, or there is no known solution, collect the AzureEnhancedMonitoring_service.log file
located in the folder
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\
<version>\drop (Windows) or
/var/log/azure/Microsoft.OSTCExtensions.AzureEnhancedMonitorForLinux (Linux) and contact SAP
support for further assistance.
Redeploy after sysprep
If you plan to build a generalized sysprepped OS image (which can include SAP software), it is
recommended that this image does not include the Azure extension for SAP. You should install the Azure
extension for SAP after the new instance of the generalized OS image has been deployed.
However, if your generalized and sysprepped OS image already contains the Azure Extension for SAP,
you can apply the following workaround to reconfigure the extension, on the newly deployed VM
instance:
On the newly deployed VM instance delete the content of the following folders:
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\
<version>\RuntimeSettings
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\
<version>\Status
Follow the steps in chapter Configure the Azure Extension for SAP in this guide to install the
extension again.
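The cleanup of the two folders can be scripted. The following is a minimal sketch in PowerShell, run from an elevated session inside the newly deployed VM instance; the <version> placeholder must be replaced with the installed extension version.

# Clear the RuntimeSettings and Status folders of the Azure Extension for SAP.
# <version> is a placeholder for the installed extension version.
$base = "C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\<version>"
Remove-Item -Path "$base\RuntimeSettings\*" -Recurse -Force
Remove-Item -Path "$base\Status\*" -Recurse -Force
# Afterwards, install the extension again as described in Configure the Azure Extension for SAP.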
Fix internet connection
The Microsoft Azure Virtual Machine running the Azure extension for SAP requires access to the
Internet. If this Azure VM is part of an Azure Virtual Network or of an on-premises domain, make sure
that the relevant proxy settings are set. These settings must also be valid for the LocalSystem account to
access the Internet. Follow chapter Configure the proxy in this guide.
In addition, if you need to set a static IP address for your Azure VM, do not set it manually inside the
Azure VM. Instead, set it by using Azure PowerShell, the Azure CLI, or the Azure portal. The static IP is
propagated via the Azure DHCP service.
Manually setting a static IP address inside the Azure VM is not supported, and might lead to problems
with the Azure extension for SAP.
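As an illustration of setting the static IP through Azure rather than inside the guest OS, the following is a hedged PowerShell sketch; the NIC name, resource group, and IP address are placeholders.

# Switch the first IP configuration of the NIC to a static private IP address.
# All names and the IP address are placeholders.
$nic = Get-AzNetworkInterface -ResourceGroupName "<resource group name>" -Name "<nic name>"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "10.1.0.10"
Set-AzNetworkInterface -NetworkInterface $nic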
Considerations for Azure Virtual Machines DBMS
deployment for SAP workload

This guide is part of the documentation on how to implement and deploy SAP software on Microsoft Azure.
Before you read this guide, read the Planning and implementation guide and articles the planning guide
points you to. This document covers the generic deployment aspects of SAP-related DBMS systems on
Microsoft Azure virtual machines (VMs) by using the Azure infrastructure as a service (IaaS) capabilities.
The paper complements the SAP installation documentation and SAP Notes, which represent the primary
resources for installations and deployments of SAP software on given platforms.
In this document, considerations of running SAP-related DBMS systems in Azure VMs are introduced. There
are few references to specific DBMS systems in this chapter. Instead, the specific DBMS systems are handled
within this paper, after this document.

Definitions
Throughout the document, these terms are used:
IaaS : Infrastructure as a service.
PaaS : Platform as a service.
SaaS : Software as a service.
SAP component : An individual SAP application such as ERP Central Component (ECC), Business
Warehouse (BW), Solution Manager, or Enterprise Portal (EP). SAP components can be based on
traditional ABAP or Java technologies or on a non-NetWeaver-based application such as Business
Objects.
SAP environment : One or more SAP components logically grouped to perform a business function
such as development, quality assurance, training, disaster recovery, or production.
SAP landscape : This term refers to the entire SAP assets in a customer's IT landscape. The SAP
landscape includes all production and nonproduction environments.
SAP system : The combination of a DBMS layer and an application layer of, for example, an SAP ERP
development system, an SAP Business Warehouse test system, or an SAP CRM production system. In
Azure deployments, dividing these two layers between on-premises and Azure isn't supported. As a
result, an SAP system is either deployed on-premises or it's deployed in Azure. You can deploy the
different systems of an SAP landscape in Azure or on-premises. For example, you could deploy the
SAP CRM development and test systems in Azure but deploy the SAP CRM production system on-
premises.
Cross-premises : Describes a scenario where VMs are deployed to an Azure subscription that has
site-to-site, multisite, or Azure ExpressRoute connectivity between the on-premises data centers and
Azure. In common Azure documentation, these kinds of deployments are also described as cross-
premises scenarios.
The reason for the connection is to extend on-premises domains, on-premises Active Directory, and
on-premises DNS into Azure. The on-premises landscape is extended to the Azure assets of the
subscription. With this extension, the VMs can be part of the on-premises domain. Domain users of
the on-premises domain can access the servers and run services on those VMs, like DBMS services.
Communication and name resolution between VMs deployed on-premises and VMs deployed in
Azure is possible. This scenario is the most common scenario in use to deploy SAP assets on Azure.
For more information, see Planning and design for VPN gateway.

NOTE
Cross-premises deployments of SAP systems are where Azure virtual machines that run SAP systems are members of
an on-premises domain and are supported for production SAP systems. Cross-premises configurations are supported
for deploying parts or complete SAP landscapes into Azure. Even running the complete SAP landscape in Azure
requires those VMs to be part of an on-premises domain and Active Directory/LDAP.
In previous versions of the documentation, hybrid-IT scenarios were mentioned. The term hybrid is rooted in the fact
that there's a cross-premises connectivity between on-premises and Azure. In this case, hybrid also means that the
VMs in Azure are part of the on-premises Active Directory.

Some Microsoft documentation describes cross-premises scenarios a bit differently, especially for DBMS
high-availability configurations. In the case of the SAP-related documents, the cross-premises scenario boils
down to site-to-site or private ExpressRoute connectivity and an SAP landscape that's distributed between
on-premises and Azure.

Resources
There are other articles available on SAP workload on Azure. Start with SAP workload on Azure: Get started
and then choose your area of interest.
The following SAP Notes are related to SAP on Azure in regard to the area covered in this document.

NOTE NUMBER | TITLE
1928533 | SAP applications on Azure: Supported products and Azure VM types
2015553 | SAP on Microsoft Azure: Support prerequisites
1999351 | Troubleshooting enhanced Azure monitoring for SAP
2178632 | Key monitoring metrics for SAP on Microsoft Azure
1409604 | Virtualization on Windows: Enhanced monitoring
2191498 | SAP on Linux with Azure: Enhanced monitoring
2039619 | SAP applications on Microsoft Azure using the Oracle database: Supported products and versions
2233094 | DB6: SAP applications on Azure using IBM DB2 for Linux, UNIX, and Windows: Additional information
2243692 | Linux on Microsoft Azure (IaaS) VM: SAP license issues
1984787 | SUSE LINUX Enterprise Server 12: Installation notes
2002167 | Red Hat Enterprise Linux 7.x: Installation and upgrade
2069760 | Oracle Linux 7.x SAP installation and upgrade
1597355 | Swap-space recommendation for Linux
2171857 | Oracle Database 12c: File system support on Linux
1114181 | Oracle Database 11g: File system support on Linux

For information on all the SAP Notes for Linux, see the SAP community wiki.
You need a working knowledge of Microsoft Azure architecture and how Microsoft Azure virtual machines
are deployed and operated. For more information, see Azure documentation.
In general, the Windows, Linux, and DBMS installation and configuration are essentially the same as any
virtual machine or bare metal machine you install on-premises. There are some architecture and system
management implementation decisions that are different when you use Azure IaaS. This document explains
the specific architectural and system management differences to be prepared for when you use Azure IaaS.

Storage structure of a VM for RDBMS deployments


To follow this chapter, read and understand the information presented in:
Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Storage types for SAP workload
What SAP software is supported for Azure deployments
SAP workload on Azure virtual machine supported scenarios
You need to understand and know about the different VM-Series and the differences between standard and
premium storage before you read this chapter.
For Azure block storage, the usage of Azure managed disks is highly recommended. For details about Azure
managed disks read the article Introduction to managed disks for Azure VMs.
In a basic configuration, we usually recommend a deployment structure where the operating system, DBMS,
and eventual SAP binaries are separate from the database files. Changing earlier recommendations, we
recommend having separate Azure disks for:
The operating system (base VHD or OS VHD)
Database management system executables
SAP executables like /usr/sap
A configuration that separates these components in three different Azure disks can result in higher resiliency
since excessive log or dump writes by the DBMS or SAP executables are not interfering with the disk quotas
of the OS disk.
The DBMS data and transaction/redo log files are stored in Azure supported block storage or Azure NetApp
Files. They're stored in separate disks and attached as logical disks to the original Azure operating system
image VM. For Linux deployments, different recommendations are documented, especially for SAP HANA.
Read the article Azure Storage types for SAP workload for the capabilities and the support of the different
storage types for your scenario.
When you plan your disk layout, find the best balance between these items:
The number of data files.
The number of disks that contain the files.
The IOPS quotas of a single disk or NFS share.
The data throughput per disk or NFS share.
The number of additional data disks possible per VM size.
The overall storage or network throughput a VM can provide.
The latency different Azure Storage types can provide.
VM SLAs.
Azure enforces an IOPS quota per data disk or NFS share. These quotas are different for disks hosted on the
different Azure block storage solutions or shares. I/O latency also differs between these storage
types.
Each of the different VM types has a limited number of data disks that you can attach. Another restriction is
that only certain VM types can use, for example, premium storage. Typically, you decide to use a certain VM
type based on CPU and memory requirements. You also might consider the IOPS, latency, and disk
throughput requirements that usually are scaled with the number of disks or the type of premium storage
disks. The number of IOPS and the throughput to be achieved by each disk might dictate disk size, especially
with premium storage.

NOTE
For DBMS deployments, we recommend Azure premium storage, Ultra disk or Azure NetApp Files based NFS shares
(exclusively for SAP HANA) for any data, transaction log, or redo files. It doesn't matter whether you want to deploy
production or nonproduction systems.

NOTE
To benefit from Azure's single VM SLA, all disks that are attached must be Azure premium storage or Azure Ultra disk
type, which includes the base VHD (Azure premium storage).

NOTE
Hosting main database files, such as data and log files, of SAP databases on storage hardware that's located in co-
located third-party data centers adjacent to Azure data centers isn't supported. Storage provided through software
appliances hosted in Azure VMs, are also not supported for this use case. For SAP DBMS workloads, only storage
that's represented as native Azure service is supported for the data and transaction log files of SAP databases in
general. Different DBMS might support different Azure storage types. For more details check the article Azure Storage
types for SAP workload

The placement of the database files and the log and redo files and the type of Azure Storage you use, is
defined by IOPS, latency, and throughput requirements. Specifically for Azure premium storage to achieve
enough IOPS, you might be forced to use multiple disks or use a larger premium storage disk. If you use
multiple disks, build a software stripe across the disks that contain the data files or the log and redo files. In
such cases, the IOPS and the disk throughput SLAs of the underlying premium storage disks or the
maximum achievable IOPS of standard storage disks are accumulative for the resulting stripe set.
If your IOPS requirement exceeds what a single VHD can provide, balance the number of IOPS that are
needed for the database files across a number of VHDs. The easiest way to distribute the IOPS load across
disks is to build a software stripe over the different disks. Then place a number of data files of the SAP DBMS
on the LUNs carved out of the software stripe. The number of disks in the stripe is driven by IOPS demands,
disk throughput demands, and volume demands.
Windows
We recommend that you use Windows Storage Spaces to create stripe sets across multiple Azure VHDs.
Use at least Windows Server 2012 R2 or Windows Server 2016.
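As an illustration only, the following PowerShell sketch creates a simple (striped, non-redundant) storage space across all attached data disks that can be pooled; the pool and disk names are placeholders, and the stripe size and column count should be validated against your DBMS requirements.

# Create a storage pool from all poolable data disks and a striped virtual disk on top of it.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SapDataPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "SapDataPool" -FriendlyName "SapDataDisk" `
    -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns $disks.Count -Interleave 65536
# Initialize, partition, and format the resulting disk afterwards (64 KB NTFS for SQL Server data and log volumes).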

Linux
Only MDADM and Logical Volume Manager (LVM) are supported to build a software RAID on Linux. For
more information, see:
Configure software RAID on Linux using MDADM
Configure LVM on a Linux VM in Azure using LVM

For Azure Ultra disk, striping is not necessary since you can define IOPS and disk throughput independent of
the size of the disk.

NOTE
Because Azure Storage keeps three images of the VHDs, it doesn't make sense to configure a redundancy when you
stripe. You only need to configure striping so that the I/Os are distributed over the different VHDs.

Managed or nonmanaged disks


An Azure storage account is an administrative construct and also a subject of limitations. Limitations differ
between standard storage accounts and premium storage accounts. For information on capabilities and
limitations, see Azure Storage scalability and performance targets.
For standard storage, remember that there's a limit on the IOPS per storage account. See the row that
contains Total Request Rate in the article Azure Storage scalability and performance targets. There's also
an initial limit on the number of storage accounts per Azure subscription. Balance VHDs for the larger SAP
landscape across different storage accounts to avoid hitting the limits of these storage accounts. This is
tedious work when you're talking about a few hundred virtual machines with more than a thousand VHDs.
Using standard storage for DBMS deployments in conjunction with an SAP workload isn't recommended.
Therefore, references and recommendations to standard storage are limited to this short article.
To avoid the administrative work of planning and deploying VHDs across different Azure storage accounts,
Microsoft introduced Azure Managed Disks in 2017. Managed disks are available for standard storage and
premium storage. The major advantages of managed disks compared to nonmanaged disks are:
For managed disks, Azure distributes the different VHDs across different storage accounts automatically
at deployment time. In this way, storage account limits for data volume, I/O throughput, and IOPS aren’t
hit.
Using managed disks, Azure Storage honors the concepts of Azure availability sets. If the VM is part of an
Azure availability set, the base VHD and attached disk of a VM are deployed into different fault and
update domains.

IMPORTANT
Given the advantages of Azure Managed Disks, we highly recommend that you use Azure Managed Disks for your
DBMS deployments and SAP deployments in general.

To convert from unmanaged to managed disks, see:


Convert a Windows virtual machine from unmanaged disks to managed disks.
Convert a Linux virtual machine from unmanaged disks to managed disks.
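As an illustration of such a conversion, the following is a minimal PowerShell sketch; resource group and VM names are placeholders, and the VM must be deallocated before the conversion.

# Deallocate the VM and convert all of its disks from unmanaged to managed disks.
Stop-AzVM -ResourceGroupName "<resource group name>" -Name "<virtual machine name>" -Force
ConvertTo-AzVMManagedDisk -ResourceGroupName "<resource group name>" -VMName "<virtual machine name>"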
Caching for VMs and data disks
When you mount disks to VMs, you can choose whether the I/O traffic between the VM and those disks
located in Azure storage is cached.
The following recommendations assume these I/O characteristics for standard DBMS:
It's mostly a read workload against data files of a database. These reads are performance critical for the
DBMS system.
Writing against the data files occurs in bursts based on checkpoints or a constant stream. Averaged over
a day, there are fewer writes than reads. Opposite to reads from data files, these writes are asynchronous
and don't hold up any user transactions.
There are hardly any reads from the transaction log or redo files. Exceptions are large I/Os when you
perform transaction log backups.
The main load against transaction or redo log files is writes. Dependent on the nature of the workload,
you can have I/Os as small as 4 KB or, in other cases, I/O sizes of 1 MB or more.
All writes must be persisted on disk in a reliable fashion.
For standard storage, the possible cache types are:
None
Read
Read/Write
To get consistent and deterministic performance, set the caching on standard storage for all disks that
contain DBMS-related data files, log and redo files, and table space to NONE . The caching of the base VHD
can remain with the default.
For Azure premium storage, the following caching options exist:
None
Read
Read/write
None + Write Accelerator, which is only for Azure M-Series VMs
Read + Write Accelerator, which is only for Azure M-Series VMs
For premium storage, we recommend that you use Read caching for data files of the SAP database and
choose No caching for the disks that contain the log file(s).
For M-Series deployments, we recommend that you use Azure Write Accelerator for your DBMS
deployment. For details, restrictions, and deployment of Azure Write Accelerator, see Enable Write
Accelerator.
For Ultra disk and Azure NetApp Files, no caching options are offered.
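As an illustration of applying these caching recommendations with PowerShell, the following hedged sketch sets Read caching on a premium storage data disk that holds DBMS data files and no caching on the disk that holds the log or redo files; disk and VM names are placeholders.

# Adjust host caching of two existing data disks and apply the change to the VM.
$vm = Get-AzVM -ResourceGroupName "<resource group name>" -Name "<virtual machine name>"
Set-AzVMDataDisk -VM $vm -Name "<data file disk name>" -Caching ReadOnly
Set-AzVMDataDisk -VM $vm -Name "<log file disk name>" -Caching None
Update-AzVM -ResourceGroupName "<resource group name>" -VM $vm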
Azure nonpersistent disks
Azure VMs offer nonpersistent disks after a VM is deployed. In the case of a VM reboot, all content on those
drives is wiped out. It's a given that data files and log and redo files of databases should under no
circumstances be located on those nonpersisted drives. There might be exceptions for some databases,
where these nonpersisted drives could be suitable for tempdb and temp tablespaces. Avoid using those
drives for A-Series VMs because those nonpersisted drives are limited in throughput with that VM family.
For more information, see Understand the temporary drive on Windows VMs in Azure.
Windows
Drive D in an Azure VM is a nonpersisted drive, which is backed by some local disks on the Azure
compute node. Because it's nonpersisted, any changes made to the content on drive D are lost when the
VM is rebooted. Changes include files that were stored, directories that were created, and applications
that were installed.

Linux
Linux Azure VMs automatically mount a drive at /mnt/resource that's a nonpersisted drive backed by
local disks on the Azure compute node. Because it's nonpersisted, any changes made to content in
/mnt/resource are lost when the VM is rebooted. Changes include files that were stored, directories that
were created, and applications that were installed.

Microsoft Azure Storage resiliency


Microsoft Azure Storage stores the base VHD, with OS and attached disks or blobs, on at least three separate
storage nodes. This type of storage is called locally redundant storage (LRS). LRS is the default for all types
of storage in Azure.
There are other redundancy methods. For more information, see Azure Storage replication.

NOTE
Azure premium storage, Ultra disk and Azure NetApp Files (exclusively for SAP HANA) are the recommended type of
storage for DBMS VMs and disks that store database and log and redo files. The only available redundancy method
for these storage types is LRS. As a result, you need to configure database methods to enable database data
replication into another Azure region or availability zone. Database methods include SQL Server Always On, Oracle
Data Guard, and HANA System Replication.

NOTE
For DBMS deployments, the use of geo-redundant storage (GRS) isn't recommended for standard storage. GRS
severely affects performance and doesn't honor the write order across different VHDs that are attached to a VM. Not
honoring the write order across different VHDs potentially leads to inconsistent databases on the replication target
side. This situation occurs if database and log and redo files are spread across multiple VHDs, as is generally the case,
on the source VM side.

VM node resiliency
Azure offers several different SLAs for VMs. For more information, see the most recent release of SLA for
Virtual Machines. Because the DBMS layer is critical to availability in an SAP system, you need to understand
availability sets, Availability Zones, and maintenance events. For more information on these concepts, see
Manage the availability of Windows virtual machines in Azure and Manage the availability of Linux virtual
machines in Azure.
The minimum recommendation for production DBMS scenarios with an SAP workload is to:
Deploy two VMs in a separate availability set in the same Azure region.
Run these two VMs in the same Azure virtual network and have NICs attached out of the same subnets.
Use database methods to keep a hot standby with the second VM. Methods can be SQL Server Always
On, Oracle Data Guard, or HANA System Replication.
You also can deploy a third VM in another Azure region and use the same database methods to supply an
asynchronous replica in another Azure region.
For information on how to set up Azure availability sets, see this tutorial.

Azure network considerations


In large-scale SAP deployments, use the blueprint of Azure Virtual Datacenter. Use it for your virtual
network configuration and permissions and role assignments to different parts of your organization.
These best practices are the result of hundreds of customer deployments:
The virtual networks the SAP application is deployed into don't have access to the internet.
The database VMs run in the same virtual network as the application layer, separated in a different subnet
from the SAP application layer.
The VMs within the virtual network have a static allocation of the private IP address. For more
information, see IP address types and allocation methods in Azure.
Routing restrictions to and from the DBMS VMs are not set with firewalls installed on the local DBMS
VMs. Instead, traffic routing is defined with network security groups (NSGs).
To separate and isolate traffic to the DBMS VM, assign different NICs to the VM. Every NIC gets a different
IP address, and every NIC is assigned to a different virtual network subnet. Every subnet has different
NSG rules. The isolation or separation of network traffic is a measure for routing. It's not used to set
quotas for network throughput.

NOTE
Assigning static IP addresses through Azure means to assign them to individual virtual NICs. Don't assign static IP
addresses within the guest OS to a virtual NIC. Some Azure services like Azure Backup rely on the fact that at least
the primary virtual NIC is set to DHCP and not to static IP addresses. For more information, see Troubleshoot Azure
virtual machine backup. To assign multiple static IP addresses to a VM, assign multiple virtual NICs to a VM.

WARNING
Configuring network virtual appliances in the communication path between the SAP application and the DBMS layer
of a SAP NetWeaver-, Hybris-, or S/4HANA-based SAP system isn't supported. This restriction is for functionality and
performance reasons. The communication path between the SAP application layer and the DBMS layer must be a
direct one. The restriction doesn't include application security group (ASG) and NSG rules if those ASG and NSG rules
allow a direct communication path.
Other scenarios where network virtual appliances aren't supported are in:
Communication paths between Azure VMs that represent Linux Pacemaker cluster nodes and SBD devices as
described in High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP
Applications.
Communication paths between Azure VMs and Windows Server Scale-Out File Server (SOFS) set up as described
in Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a file share in Azure.
Network virtual appliances in communication paths can easily double the network latency between two
communication partners. They also can restrict throughput in critical paths between the SAP application layer and the
DBMS layer. In some customer scenarios, network virtual appliances can cause Pacemaker Linux clusters to fail. These
are cases where communications between the Linux Pacemaker cluster nodes communicate to their SBD device
through a network virtual appliance.
IMPORTANT
Another design that's not supported is the segregation of the SAP application layer and the DBMS layer into different
Azure virtual networks that aren't peered with each other. We recommend that you segregate the SAP application
layer and DBMS layer by using subnets within an Azure virtual network instead of by using different Azure virtual
networks.
If you decide not to follow the recommendation and instead segregate the two layers into different virtual networks,
the two virtual networks must be peered.
Be aware that network traffic between two peered Azure virtual networks is subject to transfer costs. Huge data
volume that consists of many terabytes is exchanged between the SAP application layer and the DBMS layer. You can
accumulate substantial costs if the SAP application layer and DBMS layer are segregated between two peered Azure
virtual networks.

Use two VMs for your production DBMS deployment within an Azure availability set or between two Azure
Availability Zones. Also use separate routing for the SAP application layer and the management and
operations traffic to the two DBMS VMs. See the following image:

Use Azure Load Balancer to redirect traffic


The use of private virtual IP addresses used in functionalities like SQL Server Always On or HANA System
Replication requires the configuration of an Azure load balancer. The load balancer uses probe ports to
determine the active DBMS node and route the traffic exclusively to that active database node.
If there's a failover of the database node, there's no need for the SAP application to reconfigure. Instead, the
most common SAP application architectures reconnect against the private virtual IP address. Meanwhile, the
load balancer reacts to the node failover by redirecting the traffic against the private virtual IP address to the
second node.
Azure offers two different load balancer SKUs: a basic SKU and a standard SKU. Based on the advantages in
setup and functionality, you should use the Standard SKU of the Azure load balancer. One of the large
advantages of the Standard version of the load balancer is that the data traffic is not routed through the load
balancer itself.
An example of how you can configure an internal load balancer can be found in the article Tutorial: Configure a
SQL Server availability group on Azure Virtual Machines manually

NOTE
There are differences in behavior of the basic and standard SKU related to the access of public IP addresses. The way
how to work around the restrictions of the Standard SKU to access public IP addresses is described in the document
Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability
scenarios

Azure Accelerated Networking


To further reduce network latency between Azure VMs, we recommend that you choose Azure Accelerated
Networking. Use it when you deploy Azure VMs for an SAP workload, especially for the SAP application
layer and the SAP DBMS layer.

NOTE
Not all VM types support Accelerated Networking. The previous article lists the VM types that support Accelerated
Networking.

Windows
To learn how to deploy VMs with Accelerated Networking for Windows, see Create a Windows virtual
machine with Accelerated Networking.

Linux
For more information on Linux distribution, see Create a Linux virtual machine with Accelerated
Networking.

NOTE
In the case of SUSE, Red Hat, and Oracle Linux, Accelerated Networking is supported with recent releases. Older
releases like SLES 12 SP2 or RHEL 7.2 don't support Azure Accelerated Networking.
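As an illustration, the following hedged PowerShell sketch creates a NIC with Accelerated Networking enabled; the VM size you attach it to must support the feature, and all resource names are placeholders.

# Create a network interface with Accelerated Networking in an existing subnet.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "<resource group name>" -Name "<vnet name>"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "<subnet name>"
New-AzNetworkInterface -ResourceGroupName "<resource group name>" -Name "<nic name>" `
    -Location $vnet.Location -SubnetId $subnet.Id -EnableAcceleratedNetworking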

Deployment of host monitoring


For production use of SAP applications in Azure virtual machines, SAP requires the ability to get host
monitoring data from the physical hosts that run the Azure virtual machines. A specific SAP Host Agent
patch level is required that enables this capability in SAPOSCOL and SAP Host Agent. The exact patch level is
documented in SAP Note 1409604.
For more information on the deployment of components that deliver host data to SAPOSCOL and SAP Host
Agent and the life-cycle management of those components, see the Deployment guide.

Next steps
For more information on a particular DBMS, see:
SQL Server Azure Virtual Machines DBMS deployment for SAP workload
Oracle Azure Virtual Machines DBMS deployment for SAP workload
IBM DB2 Azure Virtual Machines DBMS deployment for SAP workload
SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
SAP maxDB, Live Cache, and Content Server deployment on Azure
SAP HANA on Azure operations guide
SAP HANA high availability for Azure virtual machines
Backup guide for SAP HANA on Azure virtual machines
SQL Server Azure Virtual Machines DBMS
deployment for SAP NetWeaver

This document covers several different areas to consider when deploying SQL Server for SAP workload in Azure
IaaS. As a precondition to this document, you should have read the document Considerations for Azure Virtual
Machines DBMS deployment for SAP workload as well as other guides in the SAP workload on Azure
documentation.

IMPORTANT
The scope of this document is the Windows version on SQL Server. SAP is not supporting the Linux version of SQL Server
with any of the SAP software. The document is not discussing Microsoft Azure SQL Database, which is a Platform as a
Service offer of the Microsoft Azure Platform. The discussion in this paper is about running the SQL Server product as it is
known for on-premises deployments in Azure Virtual Machines, leveraging the Infrastructure as a Service capability of Azure.
Database capabilities and functionality between these two offers are different and should not be mixed up with each other.
See also: https://azure.microsoft.com/services/sql-database/

In general, you should consider using the most recent SQL Server releases to run SAP workload in Azure IaaS. The
latest SQL Server releases offer better integration into some of the Azure services and functionality. Or have
changes that optimize operations in an Azure IaaS infrastructure.
It is recommended to review the article What is SQL Server on Azure Virtual Machines (Windows)
(https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)
before continuing.
In the following sections, pieces of parts of the documentation under the link above are aggregated and
mentioned. Specifics around SAP are mentioned as well and some concepts are described in more detail. However,
it is highly recommended to work through the documentation above first before reading the SQL Server-specific
documentation.
There is some SQL Server in IaaS specific information you should know before continuing:
SQL Version Support: For SAP customers, SQL Server 2008 R2 and higher is supported on Microsoft Azure
Virtual Machine. Earlier editions are not supported. Review this general Support Statement for more details. In
general, SQL Server 2008 is supported by Microsoft as well. However due to significant functionality for SAP,
which was introduced with SQL Server 2008 R2, SQL Server 2008 R2 is the minimum release for SAP. In
general, you should consider using the most recent SQL Server releases to run SAP workload in Azure IaaS. The
latest SQL Server releases offer better integration into some of the Azure services and functionality. Or have
changes that optimize operations in an Azure IaaS infrastructure. Therefore, the paper is restricted to SQL
Server 2016 and SQL Server 2017.
SQL Performance: Microsoft Azure hosted Virtual Machines perform well in comparison to other public cloud
virtualization offerings, but individual results may vary. Check out the article Performance best practices for
SQL Server in Azure Virtual Machines.
Using Images from Azure Marketplace: The fastest way to deploy a new Microsoft Azure VM is to use an
image from the Azure Marketplace. There are images in the Azure Marketplace, which contain the most recent
SQL Server releases. The images where SQL Server already is installed can't be immediately used for SAP
NetWeaver applications. The reason is the default SQL Server collation is installed within those images and not
the collation required by SAP NetWeaver systems. In order to use such images, check the steps documented in
chapter Using a SQL Server image out of the Microsoft Azure Marketplace.

Recommendations on VM/VHD structure for SAP-related SQL Server deployments
In accordance with the general description, the operating system, the SQL Server executables, and, in the case of SAP 2-tier
systems, the SAP executables should be installed on separate Azure disks. Typically, most of the SQL Server
system databases are not utilized at a high level by SAP NetWeaver workload. Nevertheless, the system databases
of SQL Server (master, msdb, and model) should be placed, together with the other SQL Server directories, on a separate
Azure disk. SQL Server tempdb should be located either on the nonpersistent D:\ drive or on a separate disk.
With all SAP certified VM types (see SAP Note 1928533), except A-Series VMs, tempdb data, and log files can
be placed on the non-persisted D:\ drive.
For older SQL Server releases, where SQL Server installs tempdb with one data file by default, it is
recommended to use multiple tempdb data files. Be aware D:\ drive volumes are different based on the VM
type. For exact sizes of the D:\ drive of the different VMs, check the article Sizes for Windows virtual machines
in Azure.
These configurations enable tempdb to consume more space and, more importantly, more IOPS and storage
bandwidth than the system drive is able to provide. The nonpersistent D:\ drive also offers better I/O latency and
throughput (with the exception of A-Series VMs). In order to determine the proper tempdb size, you can check the
tempdb sizes on existing systems.

NOTE
If you place tempdb data files and the tempdb log file into a folder on the D:\ drive that you created, you need to make sure
that the folder exists after a VM reboot. Because the D:\ drive is freshly initialized after a VM reboot, all file and directory
structures are wiped out. A possibility to recreate eventual directory structures on the D:\ drive before the start of the SQL
Server service is documented in this article.
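A minimal sketch of such a startup task is shown below, assuming the SQL Server service is configured so that it starts only after the task has run (for example, set to manual start and started by the task); the folder path and service name are placeholders.

# Recreate the tempdb folder on the nonpersistent D:\ drive and then start SQL Server.
$tempdbPath = "D:\SQLTEMP"
if (-not (Test-Path $tempdbPath)) {
    New-Item -Path $tempdbPath -ItemType Directory | Out-Null
}
Start-Service -Name "MSSQLSERVER"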

A VM configuration, which runs SQL Server with an SAP database and where tempdb data and tempdb logfile are
placed on the D:\ drive would look like:
The diagram above displays a simple case. As alluded to in the article Considerations for Azure Virtual Machines
DBMS deployment for SAP workload, the Azure storage type, number, and size of disks depend on different
factors. But in general we recommend:
Using one large volume that contains the SQL Server data files. The reason behind this configuration is that in
real life there are numerous SAP databases with differently sized database files and different I/O workloads.
Using the D:\ drive for tempdb as long as performance is good enough. If the overall workload is limited in
performance by tempdb being located on the D:\ drive, consider moving tempdb to separate
Azure premium storage or Ultra disk disks as recommended in this article.
Special considerations for M-Series VMs
For Azure M-Series VMs, the latency of writes into the transaction log can be reduced by factors, compared to Azure
Premium Storage performance, when you use Azure Write Accelerator. Hence, you should deploy Azure Write
Accelerator for the VHD(s) that form the volume for the SQL Server transaction log. Details can be read in the
document Write Accelerator.
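As an illustration, the following hedged PowerShell sketch attaches an existing premium storage managed disk intended for the SQL Server transaction log with Write Accelerator enabled on an M-Series VM; the disk name, LUN, and other names are placeholders.

# Attach a managed disk for the transaction log with Write Accelerator enabled.
$vm = Get-AzVM -ResourceGroupName "<resource group name>" -Name "<virtual machine name>"
$disk = Get-AzDisk -ResourceGroupName "<resource group name>" -DiskName "<log disk name>"
Add-AzVMDataDisk -VM $vm -Name $disk.Name -ManagedDiskId $disk.Id -Lun 2 `
    -CreateOption Attach -Caching None -WriteAccelerator
Update-AzVM -ResourceGroupName "<resource group name>" -VM $vm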
Formatting the disks
For SQL Server, the NTFS block size for disks containing SQL Server data and log files should be 64 KB. There is no
need to format the D:\ drive. This drive comes pre-formatted.
In order to make sure that the restore or creation of databases is not initializing the data files by zeroing the
content of the files, you should make sure that the user context the SQL Server service is running in has a certain
permission. Usually users in the Windows Administrator group have these permissions. If the SQL Server service
is run in the user context of non-Windows Administrator user, you need to assign that user the User Right
Perform volume maintenance tasks . See the details in this Microsoft Knowledge Base Article:
https://support.microsoft.com/kb/2574695
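As an illustration, the following hedged PowerShell sketch initializes a freshly attached data disk and formats it with a 64 KB NTFS allocation unit size; the disk number, drive letter, and label are placeholders.

# Initialize a raw data disk and format it with a 64 KB allocation unit size.
Get-Disk -Number 2 | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter S -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData"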
Impact of database compression
In configurations where I/O bandwidth can become a limiting factor, every measure, which reduces IOPS might
help to stretch the workload one can run in an IaaS scenario like Azure. Therefore, if not yet done, applying SQL
Server PAGE compression is recommended by both SAP and Microsoft before uploading an existing SAP database
to Azure.
The recommendation to perform database compression before uploading to Azure is given for these reasons:
The amount of data to be uploaded is lower.
The duration of the compression run is shorter, assuming that you can use stronger hardware with more
CPUs, higher I/O bandwidth, or lower I/O latency on-premises.
Smaller database sizes might lead to lower costs for disk allocation.
Database compression works in an Azure Virtual Machine just as it does on-premises. For more details on
how to compress existing SAP NetWeaver SQL Server databases, check the article Improved SAP compression tool
MSSCOMPRESS.

SQL Server 2014 and more recent - Storing Database Files directly on
Azure Blob Storage
SQL Server 2014 and later releases open the possibility to store database files directly on Azure Blob Store without
the 'wrapper' of a VHD around them. Especially when you use Standard Azure Storage or smaller VM types, this type
of deployment enables scenarios where you can overcome the IOPS limits that would be enforced by the limited
number of disks that can be mounted to some smaller VM types. This way of deployment works for user databases,
but not for the system databases of SQL Server. It works for both data and log files of SQL Server. If you'd like to
deploy an SAP SQL Server database this way instead of 'wrapping' it into VHDs, keep in mind:
The Storage Account used needs to be in the same Azure Region as the one that is used to deploy the VM SQL
Server is running in.
Considerations listed earlier regarding the distribution of VHDs over different Azure Storage Accounts apply to
this method of deployment as well. This means the I/O operations count against the limits of the Azure Storage
Account.
Instead of accounting against the VM's storage I/O quota, the traffic against storage blobs representing the SQL
Server data and log files, will be accounted into the VM's network bandwidth of the specific VM type. For
network and storage bandwidth of a particular VM type, consult the article Sizes for Windows virtual machines
in Azure.
As a result of pushing file I/O through the network quota, the storage quota mostly goes unused, and you
use only part of the overall bandwidth of the VM.
The IOPS and I/O throughput Performance targets that Azure Premium Storage has for the different disk sizes
do not apply anymore. Even if the blobs you created are located on Azure Premium Storage. The targets are
documented in the article High-performance Premium Storage and managed disks for VMs. As a result of placing
SQL Server data files and log files directly on blobs that are stored on Azure Premium Storage, the performance
characteristics can be different compared to VHDs on Azure Premium Storage.
Host based caching as available for Azure Premium Storage disks is not available when placing SQL Server data
files directly on Azure blobs.
On M-Series VMs, Azure Write Accelerator can't be used to support sub-millisecond writes against the SQL
Server transaction log file.
Details of this functionality can be found in the article SQL Server data files in Microsoft Azure.
The recommendation for production systems is to avoid this configuration and to place SQL
Server data and log files on Azure Premium Storage VHDs instead of directly on Azure blobs.

SQL Server 2014 Buffer Pool Extension


SQL Server 2014 introduced a new feature, which is called Buffer Pool Extension. This functionality extends the
buffer pool of SQL Server, which is kept in memory with a second-level cache that is backed by local SSDs of a
server or VM. The buffer pool extension enables keeping a larger working set of data 'in memory'. Compared to
accessing Azure Standard Storage the access into the extension of the buffer pool, which is stored on local SSDs of
an Azure VM is many factors faster. Comparing Buffer Pool Extension to Azure Premium Storage Read Cache, as
recommended for SQL Server data files, no significant advantages are expected for Buffer Pool Extensions. Reason
is that both caches (SQL Server Buffer Pool Extension and Premium Storage Read Cache) are using the local disks
of the Azure compute node.
Experience gained in the meantime with SQL Server Buffer Pool Extension under SAP workload is mixed and still
does not allow clear recommendations on whether to use it in all cases. The ideal case is that the working set the
SAP application requires fits into main memory. With Azure meanwhile offering VMs that come with up to 4 TB of
memory, it should be achievable to keep the working set in memory. Hence the usage of Buffer Pool Extension is
limited to some rare cases and should not be a mainstream case.

Backup/Recovery considerations for SQL Server


When deploying SQL Server into Azure, your backup methodology must be reviewed. Even if the system is not a
production system, the SAP database hosted by SQL Server must be backed up periodically. Since Azure Storage
keeps three images, a backup is now less critical with respect to compensating for a storage crash. The main
reason for maintaining a proper backup and recovery plan is that you can compensate for logical or manual
errors by providing point-in-time recovery capabilities. So the goal is to either use backups to restore the database
back to a certain point in time or to use the backups in Azure to seed another system by copying the existing
database.
In order to look at different SQL Server backup possibilities in Azure read the article Backup and Restore for SQL
Server in Azure Virtual Machines. The article covers several different possibilities.
Manual backups
You have several possibilities to perform 'manual' backups by:
1. Performing conventional SQL Server backups onto direct attached Azure disks. This method has the advantage
that you have the backups available swiftly for system refreshes and build up of new systems as copies of
existing SAP systems
2. SQL Server 2012 CU4 and higher can back up databases to an Azure storage URL.
3. File-Snapshot Backups for Database Files in Azure Blob Storage. This method only works when your SQL Server
data and log files are located on Azure blob storage
The first method is well known and applied in many cases in the on-premises world as well. Nevertheless, it leaves
you with the task of solving the longer-term backup location. Since you don't want to keep your backups for 30 or
more days in the locally attached Azure Storage, you either need to use Azure Backup Services or another
third-party backup/recovery tool that includes access and retention management for your backups, or you need to
build out a large file server in Azure using Windows Storage Spaces.
The second method is described in more detail in the article SQL Server Backup to URL. Different releases of SQL
Server have some variations in this functionality, so you should check the documentation for your particular
SQL Server release. Note that the article lists a number of restrictions. You can perform the backup against either
of the following (a short sketch of the Azure-side storage preparation follows this list):
One single Azure page blob, which then limits the backup size to 1000 GB. This restriction also limits the
throughput you can achieve.
Multiple (up to 64) Azure block blobs, which enable a theoretical backup size of 12 TB. However, tests with
customer databases revealed that the maximum backup size can be smaller than its theoretical limit. In this
case, you are responsible for managing retention of backups and access to the backups as well.
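As a rough illustration of the Azure-side preparation for backup to URL, the following Azure CLI sketch creates a storage account, a blob container, and a shared access signature that the SQL Server backup credential can use. The resource group, account name, container name, region, and expiry date are placeholder assumptions; the credential itself is created afterwards with the T-SQL steps described in the SQL Server Backup to URL article.

# Create a storage account and a blob container for SQL Server backups (names are examples)
az storage account create --resource-group my-sap-rg --name sapsqlbackupsa --location westeurope --sku Standard_LRS --kind StorageV2
az storage container create --account-name sapsqlbackupsa --name sqlbackups

# Generate a SAS token for the container; SQL Server uses it in the credential for backup to URL
az storage container generate-sas --account-name sapsqlbackupsa --name sqlbackups --permissions rwdl --expiry 2026-01-01T00:00Z --https-only --output tsv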
Automated Backup for SQL Server
Automated Backup provides an automatic backup service for SQL Server Standard and Enterprise editions running
in a Windows VM in Azure. This service is provided by the SQL Server IaaS Agent Extension, which is automatically
installed on SQL Server Windows virtual machine images in the Azure portal. If you deploy your own OS images
with SQL Server installed, you need to install the VM extensions separately. The steps necessary are documented
in this article.
More details about the capabilities of this method can be found in these articles:
SQL Server 2014: Automated Backup for SQL Server 2014 Virtual Machines (Resource Manager)
SQL Server 2016/2017: Automated Backup v2 for Azure Virtual Machines (Resource Manager)
Looking into the documentation, you can see that the functionality improved with the more recent SQL Server
releases. More details on SQL Server automated backups are available in the article SQL Server Managed
Backup to Microsoft Azure. The theoretical backup size limit is 12 TB, so automated backups can be a good
method for backup sizes of up to 12 TB. Since multiple blobs are written to in parallel, you can expect a throughput
of more than 100 MB/sec.
Azure Backup for SQL Server VMs
This method of SQL Server backups has been offered by the Azure Backup service as public preview since June 2018.
The method to back up SQL Server is the same as other third-party tools use, namely the SQL Server
VSS/VDI interface to stream backups to a target location. In this case, the target location is an Azure Recovery
Services vault.
A detailed description of this backup method, which adds numerous advantages of central backup
configuration, monitoring, and administration, is available here.
Third-party backup solutions
For quite a number of SAP customers, there was no possibility to start over and introduce completely new backup
solutions for the part of their SAP landscape that was running on Azure. As a result, the existing backup solutions
needed to be used and extended into Azure. Extending existing backup solutions into Azure usually worked well
with most of the main vendors in this space.

Using a SQL Server image out of the Microsoft Azure Marketplace


Microsoft offers VMs in the Azure Marketplace, which already contain versions of SQL Server. For SAP customers
who require licenses for SQL Server and Windows, using these images might be an opportunity to cover the need
for licenses by spinning up VMs with SQL Server already installed. In order to use such images for SAP, the
following considerations need to be made:
The SQL Server non-Evaluation versions incur higher costs than a 'Windows-only' VM deployed from the Azure
Marketplace. See these articles to compare prices: https://azure.microsoft.com/pricing/details/virtual-machines/windows/ and https://azure.microsoft.com/pricing/details/virtual-machines/sql-server-enterprise/.
You can only use SQL Server releases that are supported by SAP.
The collation of the SQL Server instance that is installed in the VMs offered in the Azure Marketplace is not
the collation SAP NetWeaver requires the SQL Server instance to run with. You can change the collation, though,
with the directions in the following section.
Changing the SQL Server Collation of a Microsoft Windows/SQL Server VM
Since the SQL Server images in the Azure Marketplace are not set up to use the collation required by SAP
NetWeaver applications, it needs to be changed immediately after the deployment. For SQL Server, this change of
collation can be done with the following steps as soon as the VM has been deployed and an administrator is able
to log into the deployed VM:
Open a Windows command window as administrator.
Change the directory to C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\SQLServer2012.
Execute the command: Setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS=<local_admin_account_name> /SQLCOLLATION=SQL_Latin1_General_Cp850_BIN2
<local_admin_account_name> is the account that was defined as the administrator account when
deploying the VM for the first time through the gallery.
The process should only take a few minutes. To make sure that the step ended up with the correct
result, perform the following steps:
Open SQL Server Management Studio.
Open a query window.
Execute the command sp_helpsort in the SQL Server master database.
The desired result should look like:

Latin1-General, binary code point comparison sort for Unicode Data, SQL Server Sort Order 40 on Code Page 850
for non-Unicode Data

If the result is different, STOP deploying SAP and investigate why the setup command did not work as expected.
Deployment of SAP NetWeaver applications onto a SQL Server instance with a SQL Server codepage other than
the one mentioned above is NOT supported.

SQL Server High-Availability for SAP in Azure


Using SQL Server in Azure IaaS deployments for SAP, you have several possibilities to deploy the
DBMS layer in a highly available way. As already discussed in Considerations for Azure Virtual Machines DBMS deployment for SAP
workload, Azure provides different up-time SLAs for a single VM and for a pair of VMs deployed in an Azure
Availability Set. The assumption is that for your production deployments you drive towards the up-time SLA that
requires deployment in Azure Availability Sets. In such a case, you need to deploy a minimum of two VMs in
such an Availability Set. One VM runs the active SQL Server instance, the other VM runs the passive instance.
SQL Server Clustering using Windows Scale-out File Server or Azure shared disk
With Windows Server 2016, Microsoft introduced Storage Spaces Direct. Based on a Storage Spaces Direct
deployment, SQL Server FCI clustering is supported in general. Azure also offers Azure shared disks that could be
used for Windows clustering. For SAP workload, these HA options are not supported.
SQL Server Log Shipping
One of the methods of high availability (HA) is SQL Server Log Shipping. If the VMs participating in the HA
configuration have working name resolution, there is no problem, and with regard to setting up log shipping and
the principles around it, the setup in Azure does not differ from any setup that is done on-premises.
Details of SQL Server Log Shipping can be found in the article About Log Shipping (SQL Server).
The SQL Server log shipping functionality was hardly used in Azure to achieve high availability within one Azure
region. However, in the following scenarios SAP customers used log shipping successfully in conjunction with
Azure:
Disaster Recovery scenarios from one Azure region into another Azure region
Disaster Recovery configuration from on-premises into an Azure region
Cut-over scenarios from on-premises to Azure. In those cases, log shipping is used to synchronize the new
DBMS deployment in Azure with the ongoing production system on-premises. At the time of cutting over,
production is shut down and it is made sure that the last and latest transaction log backups got transferred to
the Azure DBMS deployment. Then the Azure DBMS deployment is opened up for production.
Database Mirroring
Database Mirroring as supported by SAP (see SAP Note 965908) relies on defining a failover partner in the SAP
connection string. For the cross-premises cases, we assume that the two VMs are in the same domain and that the
two SQL Server instances run under a domain user context with sufficient privileges
in the two SQL Server instances involved. Therefore, the setup of Database Mirroring in Azure does not differ
from a typical on-premises setup/configuration.
For cloud-only deployments, the easiest method is to have another domain set up in Azure that contains those DBMS
VMs (and ideally dedicated SAP VMs) within one domain.
If a domain is not possible, one can also use certificates for the database mirroring endpoints as described here:
/sql/database-engine/database-mirroring/use-certificates-for-a-database-mirroring-endpoint-transact-sql
A tutorial to set up Database Mirroring in Azure can be found here: /sql/database-engine/database-
mirroring/database-mirroring-sql-server
SQL Server Always On
As Always On is supported for SAP on-premises (see SAP Note 1772688), it is supported in combination with SAP
in Azure. There are some special considerations around deploying the SQL Server Availability Group Listener (not
to be confused with the Azure Availability Set) since Azure at this point in time does not allow creating an AD/DNS
object as it is possible on-premises. Therefore, some different installation steps are necessary to overcome the
specific behavior of Azure.
Some considerations using an Availability Group Listener are:
Using an Availability Group Listener is only possible with Windows Server 2012 or higher as guest OS of the
VM. For Windows Server 2012 you need to make sure that this patch is applied:
https://support.microsoft.com/kb/2854082
For Windows Server 2008 R2, this patch does not exist and Always On would need to be used in the same
manner as Database Mirroring by specifying a failover partner in the connections string (done through the SAP
default.pfl parameter dbs/mss/server - see SAP Note 965908).
When using an Availability Group Listener, the database VMs need to be connected to a dedicated load
balancer. To avoid Azure assigning new IP addresses in cases where both VMs are incidentally
shut down, you should assign static IP addresses to the network interfaces of those VMs in the Always On
configuration (defining a static IP address is described in this article).
There are special steps required when building the WSFC cluster configuration where the cluster needs a
special IP address assigned, because Azure with its current functionality would assign the cluster name the
same IP address as the node the cluster is created on. This behavior means a manual step must be performed
to assign a different IP address to the cluster.
The Availability Group Listener is going to be created in Azure with TCP/IP endpoints, which are assigned to the
VMs running the primary and secondary replicas of the Availability group.
There might be a need to secure these endpoints with ACLs.
Detailed documentation on deploying Always On with SQL Server in Azure VMs includes:
Introducing SQL Server Always On availability groups on Azure virtual machines.
Configure an Always On availability group on Azure virtual machines in different regions.
Configure a load balancer for an Always On availability group in Azure.

NOTE
If you are configuring the Azure load balancer for the virtual IP address of the Availability Group listener, make sure that
Direct Server Return is configured. Configuring this option reduces the network round-trip latency between the SAP
application layer and the DBMS layer.
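In the Azure CLI, Direct Server Return corresponds to the floating IP option of the load-balancing rule. The following is a minimal sketch of a health probe and a load-balancing rule on an existing internal load balancer; the resource group, load balancer, frontend, backend pool, and port values are placeholder assumptions and need to match your Availability Group setup.

# Health probe that the cluster answers on the probe port (example port)
az network lb probe create --resource-group my-sap-rg --lb-name sql-ag-ilb --name ag-probe --protocol tcp --port 59999

# Load-balancing rule for the listener port with floating IP (Direct Server Return) enabled
az network lb rule create --resource-group my-sap-rg --lb-name sql-ag-ilb --name ag-listener-rule --protocol tcp --frontend-port 1433 --backend-port 1433 --frontend-ip-name sql-ag-frontend --backend-pool-name sql-ag-backend --probe-name ag-probe --floating-ip true --idle-timeout 30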

SQL Server Always On is the most commonly used high availability and disaster recovery functionality used in
Azure for SAP workload deployments. Most customers use Always On for high availability within a single Azure
region. If the deployment is restricted to two nodes only, you have two choices for connectivity:
Using the Availability Group Listener. With the Availability Group Listener, you are required to deploy an Azure
load balancer. This is the default method of deployment. SAP applications are configured to connect
against the Availability Group listener and not against a single node.
Using the connectivity parameters of SQL Server Database Mirroring. In this case, you need to configure the
connectivity of the SAP applications in a way where both node names are named. Exact details of such an SAP-side
configuration are documented in SAP Note #965908. By using this option, you have no need to
configure an Availability Group listener, and with that no Azure load balancer for the SQL Server high
availability. As a result, the network latency between the SAP application layer and the DBMS layer is lower,
since the incoming traffic to the SQL Server instance is not routed through the Azure load balancer. But recall,
this option only works if you restrict your Availability Group to span two instances.
Quite a few customers are leveraging the SQL Server Always On functionality for additional disaster recovery
functionality between Azure regions. Several customers also use the ability to perform backups from a secondary
replica.

SQL Server Transparent Data Encryption


A number of customers use SQL Server Transparent Data Encryption (TDE) when deploying
their SAP SQL Server databases in Azure. The SQL Server TDE functionality is fully supported by SAP (see SAP
Note #1380493).
Applying SQL Server TDE
In cases where you perform a heterogeneous migration from another DBMS running on-premises to
Windows/SQL Server running in Azure, you should create your empty target database in SQL Server ahead of
time. As the next step, you would apply the SQL Server TDE functionality while you are still running your production
system on-premises. The reason for performing the steps in this sequence is that the process of encrypting the empty
database can take quite a while. The SAP import processes would then import the data into the encrypted
database during the downtime phase. The overhead of importing into an encrypted database has a much smaller
impact on the downtime window than encrypting the database after the import during the downtime phase.
Negative experiences were made when trying to apply TDE with SAP workload running on top of the database.
Therefore, the recommendation is to treat the deployment of TDE as an activity that needs to be done without SAP
workload on the particular database.
In cases where you move SAP SQL Server databases from on-premises into Azure, we recommend testing on
which infrastructure you can get the encryption applied fastest. For this, keep these facts in mind:
You can't define how many threads are used to apply data encryption to the database. The number of threads
mainly depends on the number of disk volumes the SQL Server data and log files are distributed over.
This means the more distinct volumes (drive letters), the more threads are engaged in parallel to perform the
encryption. Such a configuration contradicts a bit the earlier disk configuration suggestion of building one or
a smaller number of storage spaces for the SQL Server database files in Azure VMs. A configuration with a
small number of volumes leads to a small number of threads executing the encryption. A single encrypting thread
reads a 64 KB extent, encrypts it, and then writes a record into the transaction log file stating that
the extent got encrypted. As a result, the load on the transaction log is moderate.
In older SQL Server releases, backup compression was no longer efficient once you encrypted your
SQL Server database. This behavior could develop into an issue when your plan was to encrypt your SQL
Server database on-premises and then copy a backup into Azure to restore the database in Azure, because SQL
Server backup compression usually achieves a compression ratio of around a factor of 4.
With SQL Server 2016, SQL Server introduced new functionality that allows compressing encrypted databases
as well in an efficient manner. See this blog for some details.
Because the application of TDE should happen with no or only little SAP workload, you should test in your specific
configuration whether it is better to apply TDE to your SAP database on-premises or to do so in Azure. In Azure,
you certainly have more flexibility in terms of over-provisioning infrastructure and shrinking the infrastructure after
TDE has been applied.
Using Azure Key Vault
Azure offers the Key Vault service to store encryption keys. SQL Server, on the other side, offers a connector to
leverage Azure Key Vault as the store for the TDE certificates.
More details on using Azure Key Vault for SQL Server TDE can be found in the articles below (a short Azure CLI sketch follows the list):
Extensible Key Management Using Azure Key Vault (SQL Server).
SQL Server TDE Extensible Key Management Using Azure Key Vault - Setup Steps.
SQL Server Connector Maintenance & Troubleshooting.
More Questions From Customers About SQL Server Transparent Data Encryption – TDE + Azure Key Vault.
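As an illustration only, the following Azure CLI sketch creates a key vault and an asymmetric key that the SQL Server Connector for Azure Key Vault could use. The vault name, resource group, region, and the connector's service principal ID are placeholder assumptions; the connector registration and the T-SQL setup steps follow the articles listed above.

# Create a key vault and an RSA key for SQL Server TDE (names and region are examples)
az keyvault create --resource-group my-sap-rg --name sap-sql-tde-kv --location westeurope
az keyvault key create --vault-name sap-sql-tde-kv --name sap-sql-tde-key --kty RSA --size 2048

# Grant the SQL Server Connector's service principal the required key permissions (placeholder app ID)
az keyvault set-policy --name sap-sql-tde-kv --spn 00000000-0000-0000-0000-000000000000 --key-permissions get list wrapKey unwrapKey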

IMPORTANT
When using SQL Server TDE, especially with Azure Key Vault, it is recommended to use the latest patches of SQL Server 2014, SQL
Server 2016, and SQL Server 2017. The reason is that, based on customer feedback, optimizations and fixes were applied to the
code. As an example, check KBA #4058175.

General SQL Server for SAP on Azure Summary


There are many recommendations in this guide and we recommend you read it more than once before planning
your Azure deployment. In general, though, be sure to follow the top general DBMS on Azure-specific
recommendations:
1. Use the latest DBMS release, like SQL Server 2017, that has the most advantages in Azure.
2. Carefully plan your SAP system landscape in Azure to balance the data file layout and Azure restrictions:
Don't have too many disks, but have enough to ensure you can reach your required IOPS.
If you don't use Managed Disks, remember that IOPS are also limited per Azure Storage Account and
that Storage Accounts are limited within each Azure subscription (more details).
Only stripe across disks if you need to achieve a higher throughput.
3. Never install software or put any files that require persistence on the D:\ drive as it is non-permanent and
anything on this drive is lost at a Windows reboot.
4. Don't use disk caching for Azure Standard Storage.
5. Don't use Azure geo-replicated Azure Standard Storage Accounts. Use Locally Redundant for DBMS workloads.
6. Use your DBMS vendor's HA/DR solution to replicate database data.
7. Always use Name Resolution, don't rely on IP addresses.
8. Using SQL Server TDE, apply the latest SQL Server patches.
9. Use the highest database compression possible, which is page compression for SQL Server.
10. Be careful using SQL Server images from the Azure Marketplace. If you use the SQL Server one, you must
change the instance collation before installing any SAP NetWeaver system on it.
11. Install and configure the SAP Host Monitoring for Azure as described in Deployment Guide.

Next steps
Read the article
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
Azure Virtual Machines Oracle DBMS deployment
for SAP workload

This document covers several different areas to consider when you're deploying Oracle Database for SAP workload
in Azure IaaS. Before you read this document, we recommend you read Considerations for Azure Virtual Machines
DBMS deployment for SAP workload. We also recommend that you read other guides in the SAP workload on
Azure documentation.
You can find information about Oracle versions and corresponding OS versions that are supported for running
SAP on Oracle on Azure in SAP Note 2039619.
General information about running SAP Business Suite on Oracle can be found at SAP on Oracle. Oracle software
is supported by Oracle to run on Microsoft Azure. For more information about general support for Windows
Hyper-V and Azure, check the Oracle and Microsoft Azure FAQ.

SAP Notes relevant for Oracle, SAP, and Azure


The following SAP Notes are related to SAP on Azure.

Note number | Title
1928533 | SAP Applications on Azure: Supported products and Azure VM types
2015553 | SAP on Microsoft Azure: Support prerequisites
1999351 | Troubleshooting enhanced Azure monitoring for SAP
2178632 | Key monitoring metrics for SAP on Microsoft Azure
2191498 | SAP on Linux with Azure: Enhanced monitoring
2039619 | SAP applications on Microsoft Azure using the Oracle database: Supported products and versions
2243692 | Linux on Microsoft Azure (IaaS) VM: SAP license issues
2069760 | Oracle Linux 7.x SAP installation and upgrade
1597355 | Swap-space recommendation for Linux
2171857 | Oracle Database 12c - file system support on Linux
1114181 | Oracle Database 11g - file system support on Linux

The exact configurations and functionality that are supported by Oracle and SAP on Azure are documented in SAP
Note #2039619.
Windows and Oracle Linux are the only operating systems that are supported by Oracle and SAP on Azure. The
widely used SLES and RHEL Linux distributions aren't supported for deploying Oracle components in Azure. Oracle
components include the Oracle Database client, which is used by SAP applications to connect against the Oracle
DBMS.
Exceptions, according to SAP Note #2039619, are SAP components that don't use the Oracle Database client. Such
SAP components are SAP's stand-alone enqueue, message server, Enqueue replication services, WebDispatcher,
and SAP Gateway.
Even if you're running your Oracle DBMS and SAP application instances on Oracle Linux, you can run your SAP
Central Services on SLES or RHEL and protect it with a Pacemaker-based cluster. Pacemaker as a high-availability
framework isn't supported on Oracle Linux.

Specifics for Oracle Database on Windows


Oracle Configuration guidelines for SAP installations in Azure VMs on Windows
In accordance with the SAP installation manual, Oracle-related files shouldn't be installed or located in the OS disk
of the VM (drive c:). Virtual machines of varying sizes can support a varying number of attached disks. Smaller
virtual machine types can support a smaller number of attached disks.
If you have smaller VMs and would hit the limit of the number of disks you can attach to the VM, you can
install/locate Oracle home, stage, saptrace, saparch, sapbackup, sapcheck, or sapreorg into the OS disk. These
parts of Oracle DBMS components aren't too intense on I/O and I/O throughput. This means that the OS disk can
handle the I/O requirements. The default size of the OS disk should be 127 GB.
Oracle Database and redo log files need to be stored on separate data disks. There's an exception for the Oracle
temporary tablespace. Tempfiles can be created on D:/ (non-persistent drive). The non-persistent D:\ drive also
offers better I/O latency and throughput (with the exception of A-Series VMs).
To determine the right amount of space for the tempfiles , you can check the sizes of the tempfiles on existing
systems.
Storage configuration
Only single-instance Oracle using NTFS formatted disks is supported. All database files must be stored on the
NTFS file system on Managed Disks (recommended) or on VHDs. These disks are mounted to the Azure VM and
are based on Azure page blob storage or Azure Managed Disks.
Check out the article Azure Storage types for SAP workload to get more details of the specific Azure block storage
types suitable for DBMS workload.
We strongly recommend using Azure Managed Disks. We also strongly recommend using Azure premium storage
or Azure Ultra disk for your Oracle Database deployments.
Network drives or remote shares like Azure file services aren't supported for Oracle Database files. For more
information, see:
Introducing Microsoft Azure File Service
Persisting connections to Microsoft Azure Files
If you're using disks that are based on Azure page blob storage or Managed Disks, the statements in
Considerations for Azure Virtual Machines DBMS deployment for SAP workload apply to deployments with Oracle
Database as well.
Quotas on IOPS throughput for Azure disks exist. This concept is explained in Considerations for Azure Virtual
Machines DBMS deployment for SAP workload. The exact quotas depend on the VM type that you use. A list of VM
types with their quotas can be found at Sizes for Windows virtual machines in Azure.
To identify the supported Azure VM types, see SAP Note 1928533.
The minimum configuration is as follows:

Component | Disk | Caching | Storage pool
\oracle\<SID>\origlogaA & mirrlogB | Premium or Ultra disk | None | Not needed
\oracle\<SID>\origlogaB & mirrlogA | Premium or Ultra disk | None | Not needed
\oracle\<SID>\sapdata1...n | Premium or Ultra disk | Read-only | Can be used for Premium
\oracle\<SID>\oraarch | Standard | None | Not needed
Oracle Home, saptrace, ... | OS disk (Premium) | - | Not needed

Disks selection for hosting online redo logs should be driven by IOPS requirements. It's possible to store all
sapdata1...n (tablespaces) on one single mounted disk as long as the size, IOPS, and throughput satisfy the
requirements.
The performance configuration is as follows:

Component | Disk | Caching | Storage pool
\oracle\<SID>\origlogaA | Premium or Ultra disk | None | Can be used for Premium
\oracle\<SID>\origlogaB | Premium or Ultra disk | None | Can be used for Premium
\oracle\<SID>\mirrlogAB | Premium or Ultra disk | None | Can be used for Premium
\oracle\<SID>\mirrlogBA | Premium or Ultra disk | None | Can be used for Premium
\oracle\<SID>\sapdata1...n | Premium or Ultra disk | Read-only | Recommended for premium
\oracle\<SID>\sapdata(n+1)* | Premium or Ultra disk | None | Can be used for Premium
\oracle\<SID>\oraarch* | Premium or Ultra disk | None | Not needed
Oracle Home, saptrace, ... | OS disk (Premium) | - | Not needed

*(n+1): hosting SYSTEM, TEMP, and UNDO tablespaces. The I/O pattern of the System and Undo tablespaces is
different from that of other tablespaces hosting application data. No caching is the best option for performance of the
System and Undo tablespaces.
*oraarch: a storage pool isn't necessary from a performance point of view. It can be used to get more space.
If more IOPS are required in case of Azure premium storage, we recommend using Windows Storage Pools (only
available in Windows Server 2012 and later) to create one large logical device over multiple mounted disks. This
approach simplifies the administration overhead for managing the disk space, and helps you avoid the effort of
manually distributing files across multiple mounted disks.
Write Accelerator
For Azure M-Series VMs, the latency writing into the online redo logs can be reduced by factors when compared to
Azure premium storage. Enable Azure Write Accelerator for the disks (VHDs) based on Azure Premium Storage
that are used for online redo log files. For more information, see Write Accelerator. Or use Azure Ultra disk for the
online redo log volume.
Backup/restore
For backup/restore functionality, the SAP BR*Tools for Oracle are supported in the same way as they are on
standard Windows Server operating systems. Oracle Recovery Manager (RMAN) is also supported for backups to
disk and restores from disk.
You can also use Azure Backup to run an application-consistent VM backup. The article Plan your VM backup
infrastructure in Azure explains how Azure Backup uses the Windows VSS functionality for executing application-
consistent backups. The Oracle DBMS releases that are supported on Azure by SAP can leverage the VSS
functionality for backups. For more information, see the Oracle documentation Basic concepts of database backup
and recovery with VSS.
High availability
Oracle Data Guard is supported for high availability and disaster recovery purposes. To achieve automatic failover
in Data Guard, your need to use Fast-Start Failover (FSFA). The Observer (FSFA) triggers the failover. If you don't
use FSFA, you can only use a manual failover configuration.
For more information about disaster recovery for Oracle databases in Azure, see Disaster recovery for an Oracle
Database 12c database in an Azure environment.
Accelerated networking
For Oracle deployments on Windows, we strongly recommend accelerated networking as described in Azure
accelerated networking. Also consider the recommendations that are made in Considerations for Azure Virtual
Machines DBMS deployment for SAP workload.
Other
Considerations for Azure Virtual Machines DBMS deployment for SAP workload describes other important
concepts related to deployments of VMs with Oracle Database, including Azure availability sets and SAP
monitoring.

Specifics for Oracle Database on Oracle Linux


Oracle software is supported by Oracle to run on Microsoft Azure with Oracle Linux as the guest OS. For more
information about general support for Windows Hyper-V and Azure, see the Azure and Oracle FAQ.
The specific scenario of SAP applications leveraging Oracle Databases is supported as well. Details are discussed in
the next part of the document.
Oracle version support
For information about which Oracle versions and corresponding OS versions are supported for running SAP on
Oracle on Azure Virtual Machines, see SAP Note 2039619.
General information about running SAP Business Suite on Oracle can be found in the SAP on Oracle community
page.
Oracle configuration guidelines for SAP installations in Azure VMs on Linux
In accordance with SAP installation manuals, Oracle-related files shouldn't be installed or located into system
drivers for a VM's boot disk. Varying sizes of virtual machines support a varying number of attached disks. Smaller
virtual machine types can support a smaller number of attached disks.
In this case, we recommend installing/locating Oracle home, stage, saptrace , saparch , sapbackup , sapcheck , or
sapreorg to boot disk. These parts of Oracle DBMS components aren't intense on I/O and I/O throughput. This
means that the OS disk can handle the I/O requirements. The default size of the OS disk is 30 GB. You can expand
the boot disk by using the Azure portal, PowerShell, or CLI. After the boot disk has been expanded, you can add an
additional partition for Oracle binaries.
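A minimal Azure CLI sketch of expanding the boot disk is shown below, assuming placeholder resource group, VM, and disk names; after the resize, the partition and file system inside the guest still need to be grown (for example with growpart and xfs_growfs).

# Deallocate the VM, grow the OS disk to 64 GB, and start the VM again (names are examples)
az vm deallocate --resource-group my-sap-rg --name oradb-vm
az disk update --resource-group my-sap-rg --name oradb-vm-osdisk --size-gb 64
az vm start --resource-group my-sap-rg --name oradb-vm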
Storage configuration
The filesystems of ext4, xfs, or Oracle ASM are supported for Oracle Database files on Azure. All database files
must be stored on these file systems based on VHDs or Managed Disks. These disks are mounted to the Azure VM
and are based on Azure page blob storage or Azure Managed Disks.
For Oracle Linux UEK kernels, a minimum of UEK version 4 is required to support Azure premium SSDs.
Check out the article Azure Storage types for SAP workload to get more details on the specific Azure block storage
types suitable for DBMS workload.
We highly recommend using Azure Managed Disks. We also highly recommend using Azure premium SSDs
for your Oracle Database deployments.
Network drives or remote shares like Azure file services aren't supported for Oracle Database files. For more
information, see the following:
Introducing Microsoft Azure File Service
Persisting connections to Microsoft Azure Files
If you're using disks based on Azure page blob storage or Managed Disks, the statements made in Considerations
for Azure Virtual Machines DBMS deployment for SAP workload apply to deployments with Oracle Database as
well.
Quotas on IOPS throughput for Azure disks exist. This concept is explained in Considerations for Azure Virtual
Machines DBMS deployment for SAP workload. The exact quotas depend on the VM type that's used. For a list of
VM types with their quotas, see Sizes for Linux virtual machines in Azure.
To identify the supported Azure VM types, see SAP Note 1928533.
Minimum configuration:

Component | Disk | Caching | Striping*
/oracle/<SID>/origlogaA & mirrlogB | Premium or Ultra disk | None | Not needed
/oracle/<SID>/origlogaB & mirrlogA | Premium or Ultra disk | None | Not needed
/oracle/<SID>/sapdata1...n | Premium or Ultra disk | Read-only | Can be used for Premium
/oracle/<SID>/oraarch | Standard | None | Not needed
Oracle Home, saptrace, ... | OS disk (Premium) | - | Not needed

*Striping: LVM stripe or MDADM using RAID0


The disk selection for hosting Oracle's online redo logs should be driven by IOPS requirements. It's possible to
store all sapdata1...n (tablespaces) on a single mounted disk as long as the volume, IOPS, and throughput satisfy
the requirements.
Performance configuration:

Component | Disk | Caching | Striping*
/oracle/<SID>/origlogaA | Premium or Ultra disk | None | Can be used for Premium
/oracle/<SID>/origlogaB | Premium or Ultra disk | None | Can be used for Premium
/oracle/<SID>/mirrlogAB | Premium or Ultra disk | None | Can be used for Premium
/oracle/<SID>/mirrlogBA | Premium or Ultra disk | None | Can be used for Premium
/oracle/<SID>/sapdata1...n | Premium or Ultra disk | Read-only | Recommended for Premium
/oracle/<SID>/sapdata(n+1)* | Premium or Ultra disk | None | Can be used for Premium
/oracle/<SID>/oraarch* | Premium or Ultra disk | None | Not needed
Oracle Home, saptrace, ... | OS disk (Premium) | - | Not needed

*Striping: LVM stripe or MDADM using RAID0


*(n+1): hosting SYSTEM, TEMP, and UNDO tablespaces. The I/O pattern of the System and Undo tablespaces is
different from that of other tablespaces hosting application data. No caching is the best option for performance of the
System and Undo tablespaces.
*oraarch: a storage pool isn't necessary from a performance point of view.
If more IOPS are required when using Azure premium storage, we recommend using LVM (Logical Volume
Manager) or MDADM to create one large logical volume over multiple mounted disks. For more information, see
Considerations for Azure Virtual Machines DBMS deployment for SAP workload regarding guidelines and pointers
on how to leverage LVM or MDADM. This approach simplifies the administration overhead of managing the disk
space and helps you avoid the effort of manually distributing files across multiple mounted disks.
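A minimal sketch of such an LVM setup on three data disks is shown below, assuming example device names, a 256 KB stripe size, and a placeholder SID; adjust the devices, stripe size, and mount point to your layout.

# Create a striped logical volume across three data disks and mount it for sapdata (example values)
sudo pvcreate /dev/sdc /dev/sdd /dev/sde
sudo vgcreate vg_sapdata /dev/sdc /dev/sdd /dev/sde
sudo lvcreate --name lv_sapdata --extents 100%FREE --stripes 3 --stripesize 256 vg_sapdata
sudo mkfs.xfs /dev/vg_sapdata/lv_sapdata
sudo mkdir -p /oracle/ORA/sapdata1          # replace ORA with your SID
echo '/dev/vg_sapdata/lv_sapdata /oracle/ORA/sapdata1 xfs defaults,nofail 0 2' | sudo tee -a /etc/fstab
sudo mount /oracle/ORA/sapdata1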
Write Accelerator
For Azure M-Series VMs, when you use Azure Write Accelerator, the latency writing into the online redo logs can
be reduced by factors when using Azure premium storage. Enable Azure Write Accelerator for the disks (VHDs)
based on Azure Premium Storage that are used for online redo log files. For more information, see Write
Accelerator. Or use Azure Ultra disk for the online redo log volume.
Backup/restore
For backup/restore functionality, the SAP BR*Tools for Oracle are supported in the same way as they are on bare
metal and Hyper-V. Oracle Recovery Manager (RMAN) is also supported for backups to disk and restores from
disk.
For more information about how you can use Azure Backup and Recovery services for backing up and recovering
Oracle databases, see Back up and recover an Oracle Database 12c database on an Azure Linux virtual machine.
High availability
Oracle Data Guard is supported for high availability and disaster recovery purposes. To achieve automatic failover
in Data Guard, you need to use Fast-Start Failover (FSFA). The Observer functionality (FSFA) triggers the failover. If
you don't use FSFA, you can only use a manual failover configuration. For more information, see Implement Oracle
Data Guard on an Azure Linux virtual machine.
Disaster Recovery aspects for Oracle databases in Azure are presented in the article Disaster recovery for an
Oracle Database 12c database in an Azure environment.
Accelerated networking
Support for Azure Accelerated Networking in Oracle Linux is provided with Oracle Linux 7 Update 5 (Oracle Linux
7.5). If you can't upgrade to the latest Oracle Linux 7.5 release, there might be a workaround by using the Red Hat
Compatible Kernel (RHCK) instead of the Oracle UEK kernel.
Using the RHEL kernel within Oracle Linux is supported according to SAP Note #1565179. For Azure Accelerated
Networking, the minimum RHCK kernel release needs to be 3.10.0-862.13.1.el7. If you're using the UEK kernel in
Oracle Linux in conjunction with Azure Accelerated Networking, you need to use Oracle UEK kernel version 5.
If you're deploying VMs from an image that's not based on Azure Marketplace, then you need to copy additional
configuration files to the VM by running the following code:

# Copy settings from GitHub to the correct place in the VM


sudo curl -so /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules https://raw.githubusercontent.com/LIS/lis-
next/master/hv-rhel7.x/hv/tools/68-azure-sriov-nm-unmanaged.rules

Next steps
Read the article
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
IBM Db2 Azure Virtual Machines DBMS deployment
for SAP workload

With Microsoft Azure, you can migrate your existing SAP application running on IBM Db2 for Linux, UNIX, and
Windows (LUW) to Azure virtual machines. With SAP on IBM Db2 for LUW, administrators and developers can still
use the same development and administration tools, which are available on-premises. General information about
running SAP Business Suite on IBM Db2 for LUW can be found in the SAP Community Network (SCN) at
https://www.sap.com/community/topic/db2-for-linux-unix-and-windows.html.
For more information and updates about SAP on Db2 for LUW on Azure, see SAP Note 2233094.
There are various articles available on SAP workload on Azure. We recommend starting with SAP workload on Azure
- Get Started and then picking the area of interest.
The following SAP Notes are related to SAP on Azure regarding the area covered in this document:

Note number | Title
1928533 | SAP Applications on Azure: Supported Products and Azure VM types
2015553 | SAP on Microsoft Azure: Support Prerequisites
1999351 | Troubleshooting Enhanced Azure Monitoring for SAP
2178632 | Key Monitoring Metrics for SAP on Microsoft Azure
1409604 | Virtualization on Windows: Enhanced Monitoring
2191498 | SAP on Linux with Azure: Enhanced Monitoring
2233094 | DB6: SAP Applications on Azure Using IBM DB2 for Linux, UNIX, and Windows - Additional Information
2243692 | Linux on Microsoft Azure (IaaS) VM: SAP license issues
1984787 | SUSE LINUX Enterprise Server 12: Installation notes
2002167 | Red Hat Enterprise Linux 7.x: Installation and Upgrade
1597355 | Swap-space recommendation for Linux

As a pre-read to this document, you should have read the document Considerations for Azure Virtual Machines
DBMS deployment for SAP workload as well as other guides in the SAP workload on Azure documentation.

IBM Db2 for Linux, UNIX, and Windows Version Support


SAP on IBM Db2 for LUW on Microsoft Azure Virtual Machine Services is supported as of Db2 version 10.5.
For information about supported SAP products and Azure VM types, refer to SAP Note 1928533.
IBM Db2 for Linux, UNIX, and Windows Configuration Guidelines for
SAP Installations in Azure VMs
Storage Configuration
For an overview of Azure storage types for SAP workload, consult the article Azure Storage types for SAP
workload. All database files must be stored on mounted disks of Azure block storage (Windows: NTFS, Linux: xfs,
ext4, or ext3). Any kind of network drives or remote shares like the following Azure services are NOT supported
for database files:
Microsoft Azure File Service
Azure NetApp Files
Using disks based on Azure Page BLOB Storage or Managed Disks, the statements made in Considerations for
Azure Virtual Machines DBMS deployment for SAP workload apply to deployments with the Db2 DBMS as well.
As explained earlier in the general part of the document, quotas on IOPS throughput for Azure disks exist. The
exact quotas depend on the VM type used. A list of VM types with their quotas can be found here (Linux)
and here (Windows).
As long as the current IOPS quota per disk is sufficient, it is possible to store all the database files on one single
mounted disk. However, you should always separate the data files and transaction log files onto different
disks/VHDs.
For performance considerations, also refer to chapter 'Data Safety and Performance Considerations for Database
Directories' in SAP installation guides.
Alternatively, you can use Windows Storage Pools (only available in Windows Server 2012 and higher), as
described in Considerations for Azure Virtual Machines DBMS deployment for SAP workload, or LVM or mdadm on
Linux to create one large logical device over multiple disks.
For the disks containing the Db2 storage paths for your sapdata and saptmp directories, you must specify a
physical disk sector size of 512 KB. When using Windows Storage Pools, you must create the storage pools
manually via the command-line interface using the parameter -LogicalSectorSizeDefault. For more information, see
https://technet.microsoft.com/itpro/powershell/windows/storage/new-storagepool.
For Azure M-Series VM, the latency writing into the transaction logs can be reduced by factors, compared to Azure
Premium Storage performance, when using Azure Write Accelerator. Hence, you should deploy Azure Write
Accelerator for the VHD(s) that form the volume for the Db2 transaction logs. Details can be read in the document
Write Accelerator.
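A minimal Azure CLI sketch of enabling Write Accelerator on an already attached data disk of an M-series VM follows; the resource group and VM name are placeholders, and the index in the data disk array has to match the log disk's position as shown by az vm show.

# Enable Write Accelerator on the data disk at index 0 of the VM's data disk list (example names)
az vm update --resource-group my-sap-rg --name db2-m128s-vm --set storageProfile.dataDisks[0].writeAcceleratorEnabled=true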

Recommendation on VM and disk structure for IBM Db2 deployment


IBM Db2 for SAP NetWeaver Applications is supported on any VM type listed in SAP support note 1928533.
Recommended VM families for running IBM Db2 database are Esd_v4/Eas_v4/Es_v3 and M/M_v2-series for large
multi-terabyte databases. The IBM Db2 transaction log disk write performance can be improved by enabling the
M-series Write Accelerator.
Following is a baseline configuration for various sizes and uses of SAP on Db2 deployments from small to large.
The list is based on Azure premium storage. However, Azure Ultra disk is fully supported with Db2 and can
be used as well. Simply use the values for capacity, burst throughput, and burst IOPS to define the Ultra disk
configuration. You can limit the IOPS for /db2/<SID>/log_dir to around 5,000 IOPS.
Extra small SAP system: database size 50 - 200 GB: example Solution Manager
VM name/size: E4ds_v4 (4 vCPU, 32 GiB RAM)

Db2 mount point | Azure Premium disk | Number of disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst throughput [MB/s] | Stripe size | Caching
/db2 | P6 | 1 | 240 | 50 | 64 | 3,500 | 170 | - | -
/db2/<SID>/sapdata | P10 | 2 | 1,000 | 200 | 256 | 7,000 | 340 | 256 KB | ReadOnly
/db2/<SID>/saptmp | P6 | 1 | 240 | 50 | 128 | 3,500 | 170 | - | -
/db2/<SID>/log_dir | P6 | 2 | 480 | 100 | 128 | 7,000 | 340 | 64 KB | -
/db2/<SID>/offline_log_dir | P10 | 1 | 500 | 100 | 128 | 3,500 | 170 | - | -

Small SAP system: database size 200 - 750 GB: small Business Suite

VM name/size: E16ds_v4 (16 vCPU, 128 GiB RAM)

Db2 mount point | Azure Premium disk | Number of disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst throughput [MB/s] | Stripe size | Caching
/db2 | P6 | 1 | 240 | 50 | 64 | 3,500 | 170 | - | -
/db2/<SID>/sapdata | P15 | 4 | 4,400 | 500 | 1,024 | 14,000 | 680 | 256 KB | ReadOnly
/db2/<SID>/saptmp | P6 | 2 | 480 | 100 | 128 | 7,000 | 340 | 128 KB | -
/db2/<SID>/log_dir | P15 | 2 | 2,200 | 250 | 512 | 7,000 | 340 | 64 KB | -
/db2/<SID>/offline_log_dir | P10 | 1 | 500 | 100 | 128 | 3,500 | 170 | - | -

Medium SAP system: database size 500 - 1000 GB: small Business Suite

VM name/size: E32ds_v4 (32 vCPU, 256 GiB RAM)

Db2 mount point | Azure Premium disk | Number of disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst throughput [MB/s] | Stripe size | Caching
/db2 | P6 | 1 | 240 | 50 | 64 | 3,500 | 170 | - | -
/db2/<SID>/sapdata | P30 | 2 | 10,000 | 400 | 2,048 | 10,000 | 400 | 256 KB | ReadOnly
/db2/<SID>/saptmp | P10 | 2 | 1,000 | 200 | 256 | 7,000 | 340 | 128 KB | -
/db2/<SID>/log_dir | P20 | 2 | 4,600 | 300 | 1,024 | 7,000 | 340 | 64 KB | -
/db2/<SID>/offline_log_dir | P15 | 1 | 1,100 | 125 | 256 | 3,500 | 170 | - | -

Large SAP system: database size 750 - 2000 GB: Business Suite

VM name/size: E64ds_v4 (64 vCPU, 504 GiB RAM)

Db2 mount point | Azure Premium disk | Number of disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst throughput [MB/s] | Stripe size | Caching
/db2 | P6 | 1 | 240 | 50 | 64 | 3,500 | 170 | - | -
/db2/<SID>/sapdata | P30 | 4 | 20,000 | 800 | 4,096 | 20,000 | 800 | 256 KB | ReadOnly
/db2/<SID>/saptmp | P15 | 2 | 2,200 | 250 | 512 | 7,000 | 340 | 128 KB | -
/db2/<SID>/log_dir | P20 | 4 | 9,200 | 600 | 2,048 | 14,000 | 680 | 64 KB | -
/db2/<SID>/offline_log_dir | P20 | 1 | 2,300 | 150 | 512 | 3,500 | 170 | - | -

Large multi-terabyte SAP system: database size 2 TB+: Global Business Suite system

VM name/size: M128s (128 vCPU, 2,048 GiB RAM)

Db2 mount point | Azure Premium disk | Number of disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst throughput [MB/s] | Stripe size | Caching
/db2 | P10 | 1 | 500 | 100 | 128 | 3,500 | 170 | - | -
/db2/<SID>/sapdata | P40 | 4 | 30,000 | 1,000 | 8,192 | 30,000 | 1,000 | 256 KB | ReadOnly
/db2/<SID>/saptmp | P20 | 2 | 4,600 | 300 | 1,024 | 7,000 | 340 | 128 KB | -
/db2/<SID>/log_dir | P30 | 4 | 20,000 | 800 | 4,096 | 20,000 | 800 | 64 KB | WriteAccelerator
/db2/<SID>/offline_log_dir | P30 | 1 | 5,000 | 200 | 1,024 | 5,000 | 200 | - | -

Backup/Restore
The backup/restore functionality for IBM Db2 for LUW is supported in the same way as on standard Windows
Server Operating Systems and Hyper-V.
Make sure that you have a valid database backup strategy in place.
As in bare-metal deployments, backup/restore performance depends on how many volumes can be read in
parallel and what the throughput of those volumes might be. In addition, the CPU consumption used by backup
compression may play a significant role on VMs with up to eight CPU threads. Therefore, one can assume:
The fewer the number of disks used to store the database devices, the smaller the overall throughput in reading
The smaller the number of CPU threads in the VM, the more severe the impact of backup compression
The fewer targets (Stripe Directories, disks) to write the backup to, the lower the throughput
To increase the number of targets to write to, two options can be used/combined depending on your needs:
Striping the backup target volume over multiple disks in order to improve the IOPS throughput on that striped
volume
Using more than one target directory to write the backup to

NOTE
Db2 on Windows does not support the Windows VSS technology. As a result, the application-consistent VM backup of the
Azure Backup service can't be leveraged for VMs in which the Db2 DBMS is deployed.

High Availability and Disaster Recovery


Linux Pacemaker
Db2 high availability disaster recovery (HADR) with pacemaker is supported. Both SLES and RHEL operating
systems are supported. This configuration enables high availability of IBM Db2 for SAP. Deployment guides:
SLES: High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server with Pacemaker
RHEL: High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server
Windows Cluster Server
Microsoft Cluster Server (MSCS) is not supported.
Db2 high availability disaster recovery (HADR) is supported. If the virtual machines of the HA configuration have
working name resolution, the setup in Azure does not differ from any setup that is done on-premises. It is not
recommended to rely on IP resolution only.
Do not use Geo-Replication for the storage accounts that store the database disks. For more information, see the
document Considerations for Azure Virtual Machines DBMS deployment for SAP workload.
Accelerated Networking
For Db2 deployments on Windows, it is highly recommended to use the Azure functionality of Accelerated
Networking as described in the document Azure Accelerated Networking. Also consider recommendations made
in Considerations for Azure Virtual Machines DBMS deployment for SAP workload.
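Enabling accelerated networking itself happens on the network interface of the VM, independent of the guest OS. A minimal Azure CLI sketch follows, with placeholder resource group and NIC names; the VM size must support accelerated networking, and the VM typically has to be deallocated before the setting is changed.

# Turn on accelerated networking for the VM's NIC (names are examples)
az network nic update --resource-group my-sap-rg --name db2-vm-nic0 --accelerated-networking true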
Specifics for Linux deployments
As long as the current IOPS quota per disk is sufficient, it is possible to store all the database files on one single
disk. Whereas you always should separate the data files and transaction log files on different disks/VHDs.
Alternatively, if the IOPS or I/O throughput of a single Azure VHD is not sufficient, you can use LVM (Logical
Volume Manager) or MDADM as described in the document Considerations for Azure Virtual Machines DBMS
deployment for SAP workload to create one large logical device over multiple disks. For the disks containing the
Db2 storage paths for your sapdata and saptmp directories, you must specify a physical disk sector size of 512 KB.
Other
All other general areas like Azure Availability Sets or SAP monitoring apply as described in the document
Considerations for Azure Virtual Machines DBMS deployment for SAP workload for deployments of VMs with the
IBM Database as well.

Next steps
Read the article
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
High availability of IBM Db2 LUW on Azure VMs on
SUSE Linux Enterprise Server with Pacemaker

IBM Db2 for Linux, UNIX, and Windows (LUW) in high availability and disaster recovery (HADR) configuration
consists of one node that runs a primary database instance and at least one node that runs a secondary database
instance. Changes to the primary database instance are replicated to a secondary database instance
synchronously or asynchronously, depending on your configuration.

NOTE
This article contains references to the terms master and slave, terms that Microsoft no longer uses. When these terms are
removed from the software, we’ll remove them from this article.

This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster
framework, and install the IBM Db2 LUW with HADR configuration.
The article doesn't cover how to install and configure IBM Db2 LUW with HADR or SAP software installation. To
help you accomplish these tasks, we provide references to SAP and IBM installation manuals. This article focuses
on parts that are specific to the Azure environment.
The supported IBM Db2 versions are 10.5 and later, as documented in SAP note 1928533.
Before you begin an installation, see the following SAP notes and documentation:

SAP note | Description
1928533 | SAP applications on Azure: Supported products and Azure VM types
2015553 | SAP on Azure: Support prerequisites
2178632 | Key monitoring metrics for SAP on Azure
2191498 | SAP on Linux with Azure: Enhanced monitoring
2243692 | Linux on Azure (IaaS) VM: SAP license issues
1984787 | SUSE LINUX Enterprise Server 12: Installation notes
1999351 | Troubleshooting enhanced Azure monitoring for SAP
2233094 | DB6: SAP applications on Azure that use IBM Db2 for Linux, UNIX, and Windows - additional information
1612105 | DB6: FAQ on Db2 with HADR


Documentation
SAP Community Wiki: Has all of the required SAP Notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux guide
Azure Virtual Machines deployment for SAP on Linux (this article)
Azure Virtual Machines database management system (DBMS) deployment for SAP on Linux guide
SAP workload on Azure planning and deployment checklist
SUSE Linux Enterprise Server for SAP Applications 12 SP4 best practices guides
SUSE Linux Enterprise High Availability Extension 12 SP4
IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload
IBM Db2 HADR 11.1
IBM Db2 HADR R 10.5

Overview
To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure virtual machines, which are
deployed in an Azure availability set or across Azure Availability Zones.
The following graphics display a setup of two database server Azure VMs. Both database server Azure VMs have
their own storage attached and are up and running. In HADR, one database instance in one of the Azure VMs has
the role of the primary instance. All clients are connected to this primary instance. All changes in database
transactions are persisted locally in the Db2 transaction log. As the transaction log records are persisted locally,
the records are transferred via TCP/IP to the database instance on the second database server, the standby server,
or standby instance. The standby instance updates the local database by rolling forward the transferred
transaction log records. In this way, the standby server is kept in sync with the primary server.
HADR is only a replication functionality. It has no failure detection and no automatic takeover or failover facilities.
A takeover or transfer to the standby server must be initiated manually by a database administrator. To achieve an
automatic takeover and failure detection, you can use the Linux Pacemaker clustering feature. Pacemaker monitors
the two database server instances. When the primary database server instance crashes, Pacemaker initiates an
automatic HADR takeover by the standby server. Pacemaker also ensures that the virtual IP address is assigned to
the new primary server.
To have SAP application servers connect to primary database, you need a virtual host name and a virtual IP
address. In the event of a failover, the SAP application servers will connect to new primary database instance. In an
Azure environment, an Azure load balancer is required to use a virtual IP address in the way that's required for
HADR of IBM Db2.
To help you fully understand how IBM Db2 LUW with HADR and Pacemaker fits into a highly available SAP system
setup, the following image presents an overview of a highly available setup of an SAP system based on IBM Db2
database. This article covers only IBM Db2, but it provides references to other articles about how to set up other
components of an SAP system.
High-level overview of the required steps
To deploy an IBM Db2 configuration, you need to follow these steps:
Plan your environment.
Deploy the VMs.
Update SUSE Linux and configure file systems.
Install and configure Pacemaker.
Install highly available NFS.
Install ASCS/ERS on a separate cluster.
Install IBM Db2 database with Distributed/High Availability option (SWPM).
Install and create a secondary database node and instance, and configure HADR.
Confirm that HADR is working.
Apply the Pacemaker configuration to control IBM Db2.
Configure Azure Load Balancer.
Install primary and dialog application servers.
Check and adapt the configuration of SAP application servers.
Perform failover and takeover tests.

Plan Azure infrastructure for hosting IBM Db2 LUW with HADR
Complete the planning process before you execute the deployment. Planning builds the foundation for deploying
a configuration of Db2 with HADR in Azure. Key elements that need to be part of planning for IBM Db2 LUW
(the database part of the SAP environment) are listed in the following table:

Topic | Short description
Define Azure resource groups | Resource groups where you deploy VM, VNet, Azure Load Balancer, and other resources. Can be existing or new.
Virtual network / subnet definition | Where VMs for IBM Db2 and Azure Load Balancer are being deployed. Can be existing or newly created.
Virtual machines hosting IBM Db2 LUW | VM size, storage, networking, IP address.
Virtual host name and virtual IP for IBM Db2 database | The virtual IP or host name that's used for connection of SAP application servers. db-virt-hostname, db-virt-ip.
Azure fencing | Azure fencing or SBD fencing (highly recommended). Method to avoid split-brain situations.
SBD VM | SBD virtual machine size, storage, network.
Azure Load Balancer | Usage of Basic or Standard (recommended), probe port for the Db2 database (our recommendation: 62500), probe-port.
Name resolution | How name resolution works in the environment. DNS service is highly recommended. A local hosts file can be used.

For more information about Linux Pacemaker in Azure, see Set up Pacemaker on SUSE Linux Enterprise Server in
Azure.

Deployment on SUSE Linux


The resource agent for IBM Db2 LUW is included in SUSE Linux Enterprise Server for SAP Applications. For the
setup that's described in this document, you must use SUSE Linux Enterprise Server for SAP Applications. The Azure
Marketplace contains an image for SUSE Linux Enterprise Server for SAP Applications 12 that you can use to deploy
new Azure virtual machines. Be aware of the various support or service models that are offered by SUSE through
the Azure Marketplace when you choose a VM image in the Azure VM Marketplace.
Hosts: DNS updates
Make a list of all host names, including virtual host names, and update your DNS servers to enable proper IP
address to host-name resolution. If a DNS server doesn't exist or you can't update and create DNS entries, you
need to use the local host files of the individual VMs that are participating in this scenario. If you're using host file
entries, make sure that the entries are applied to all VMs in the SAP system environment. However, we
recommend that you use your DNS that, ideally, extends into Azure.
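For example, a minimal sketch of such host file entries, using the node names from this article and hypothetical IP addresses (10.100.0.10 is the virtual IP that is assigned to the Azure Load Balancer later in this article):

# /etc/hosts - example entries; replace the addresses with your own values
10.100.0.11   azibmdb01
10.100.0.12   azibmdb02
10.100.0.10   db-virt-hostname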
Manual deployment
Make sure that the selected OS is supported by IBM/SAP for IBM Db2 LUW. The list of supported OS versions for
Azure VMs and Db2 releases is available in SAP note 1928533. The list of OS releases by individual Db2 release is
available in the SAP Product Availability Matrix. We highly recommend a minimum of SLES 12 SP4 because of
Azure-related performance improvements in this or later SUSE Linux versions.
1. Create or select a resource group.
2. Create or select a virtual network and subnet.
3. Create an Azure availability set or deploy an availability zone.
For the availability set, set the maximum update domains to 2.
4. Create Virtual Machine 1.
Use SLES for SAP image in the Azure Marketplace.
Select the Azure availability set you created in step 3, or select Availability Zone.
5. Create Virtual Machine 2.
Use SLES for SAP image in the Azure Marketplace.
Select the Azure availability set you created in step 3, or select Availability Zone (not the same zone as
in step 3).
6. Add data disks to the VMs, and then check the recommendation of a file system setup in the article IBM Db2
Azure Virtual Machines DBMS deployment for SAP workload.

Create the Pacemaker cluster


To create a basic Pacemaker cluster for this IBM Db2 server, see Set up Pacemaker on SUSE Linux Enterprise
Server in Azure.

Install the IBM Db2 LUW and SAP environment


Before you start the installation of an SAP environment based on IBM Db2 LUW, review the following
documentation:
Azure documentation
SAP documentation
IBM documentation
Links to this documentation are provided in the introductory section of this article.
Check the SAP installation manuals about installing NetWeaver-based applications on IBM Db2 LUW.
You can find the guides on the SAP Help portal by using the SAP Installation Guide Finder.
You can reduce the number of guides displayed in the portal by setting the following filters:
I want to: "Install a new system"
My Database: "IBM Db2 for Linux, Unix, and Windows"
Additional filters for SAP NetWeaver versions, stack configuration, or operating system
Installation hints for setting up IBM Db2 LUW with HADR
To set up the primary IBM Db2 LUW database instance:
Use the high availability or distributed option.
Install the SAP ASCS/ERS and Database instance.
Take a backup of the newly installed database.

IMPORTANT
Write down the "Database Communication port" that's set during installation. It must be the same port number for both
database instances.
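If you need to look up the port later, one possible way (a sketch, assuming the demo instance user db2ptr and SID PTR used in this article) is to query the database manager configuration and the service definition as the instance user:

su - db2ptr
db2 get dbm cfg | grep -i svcename
# TCP/IP Service name (SVCENAME) = sapdb2PTR   <- example output
grep sapdb2PTR /etc/services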

To set up the Standby database server by using the SAP homogeneous system copy procedure, execute these
steps:
1. Select the System copy option > Target systems > Distributed > Database instance .
2. As a copy method, select Homogeneous System so that you can use backup to restore a backup on the
standby server instance.
3. When you reach the exit step to restore the database for homogeneous system copy, exit the installer.
Restore the database from a backup of the primary host. All subsequent installation phases have already
been executed on the primary database server.
4. Set up HADR for IBM Db2.

NOTE
For installation and configuration that's specific to Azure and Pacemaker: During the installation procedure through
SAP Software Provisioning Manager, there is an explicit question about high availability for IBM Db2 LUW:
Do not select IBM Db2 pureScale .
Do not select Install IBM Tivoli System Automation for Multiplatforms .
Do not select Generate cluster configuration files .

When you use an SBD device for Linux Pacemaker, set the following Db2 HADR parameters:
HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 300
HADR timeout value (HADR_TIMEOUT) = 60
When you use an Azure Pacemaker fencing agent, set the following parameters:
HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 900
HADR timeout value (HADR_TIMEOUT) = 60
We recommend the preceding parameters based on initial failover/takeover testing. It is mandatory that you test
for proper functionality of failover and takeover with these parameter settings. Because individual configurations
can vary, the parameters might require adjustment.
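A minimal sketch of applying the SBD-based values above, assuming the demo SID PTR and instance user db2ptr used in this article (changed HADR parameters generally take effect only after the database has been deactivated and reactivated):

su - db2ptr
db2 update db cfg for PTR using HADR_PEER_WINDOW 300
db2 update db cfg for PTR using HADR_TIMEOUT 60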

IMPORTANT
Specific to IBM Db2 with HADR configuration with normal startup: The secondary or standby database instance must be up
and running before you can start the primary database instance.

For demonstration purposes and the procedures described in this article, the database SID is PTR .
IBM Db2 HADR check
After you've configured HADR and the status is PEER and CONNECTED on the primary and standby nodes,
perform the following check:

Execute the following command as user db2<sid>: db2pd -hadr -db <SID>

#Primary output:
# Database Member 0 -- Database PTR -- Active -- Up 1 days 01:51:38 -- Date 2019-02-06-15.35.28.505451
#
# HADR_ROLE = PRIMARY
# REPLAY_TYPE = PHYSICAL
# HADR_SYNCMODE = NEARSYNC
# STANDBY_ID = 1
# LOG_STREAM_ID = 0
# HADR_STATE = PEER
# HADR_FLAGS = TCP_PROTOCOL
# PRIMARY_MEMBER_HOST = azibmdb02
# PRIMARY_INSTANCE = db2ptr
# PRIMARY_MEMBER = 0
# STANDBY_MEMBER_HOST = azibmdb01
# STANDBY_INSTANCE = db2ptr
# STANDBY_MEMBER = 0
# HADR_CONNECT_STATUS = CONNECTED
# HADR_CONNECT_STATUS_TIME = 02/05/2019 13:51:47.170561 (1549374707)
# HEARTBEAT_INTERVAL(seconds) = 15
# HEARTBEAT_MISSED = 0
# HEARTBEAT_EXPECTED = 6137
# HADR_TIMEOUT(seconds) = 60
# TIME_SINCE_LAST_RECV(seconds) = 13
# PEER_WAIT_LIMIT(seconds) = 0
# LOG_HADR_WAIT_CUR(seconds) = 0.000
# LOG_HADR_WAIT_RECENT_AVG(seconds) = 0.000025
# LOG_HADR_WAIT_ACCUMULATED(seconds) = 434.595
# LOG_HADR_WAIT_COUNT = 223713
# SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
# SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 374400
# PRIMARY_LOG_FILE,PAGE,POS = S0000280.LOG, 15571, 27902548040
# STANDBY_LOG_FILE,PAGE,POS = S0000280.LOG, 15571, 27902548040
# HADR_LOG_GAP(bytes) = 0
# STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000280.LOG, 15571, 27902548040
# STANDBY_RECV_REPLAY_GAP(bytes) = 0
# PRIMARY_LOG_TIME = 02/06/2019 15:34:39.000000 (1549467279)
# STANDBY_LOG_TIME = 02/06/2019 15:34:39.000000 (1549467279)
# STANDBY_REPLAY_LOG_TIME = 02/06/2019 15:34:39.000000 (1549467279)
# STANDBY_RECV_BUF_SIZE(pages) = 2048
# STANDBY_RECV_BUF_PERCENT = 0
# STANDBY_SPOOL_LIMIT(pages) = 0
# STANDBY_SPOOL_PERCENT = NULL
# STANDBY_ERROR_TIME = NULL
# PEER_WINDOW(seconds) = 300
# PEER_WINDOW_END = 02/06/2019 15:40:25.000000 (1549467625)
# READS_ON_STANDBY_ENABLED = N

#Secondary output:
# Database Member 0 -- Database PTR -- Standby -- Up 1 days 01:46:43 -- Date 2019-02-06-15.38.25.644168
#
# HADR_ROLE = STANDBY
# REPLAY_TYPE = PHYSICAL
# HADR_SYNCMODE = NEARSYNC
# STANDBY_ID = 0
# LOG_STREAM_ID = 0
# HADR_STATE = PEER
# HADR_FLAGS = TCP_PROTOCOL
# PRIMARY_MEMBER_HOST = azibmdb02
# PRIMARY_INSTANCE = db2ptr
# PRIMARY_MEMBER = 0
# STANDBY_MEMBER_HOST = azibmdb01
# STANDBY_INSTANCE = db2ptr
# STANDBY_MEMBER = 0
# HADR_CONNECT_STATUS = CONNECTED
# HADR_CONNECT_STATUS_TIME = 02/05/2019 13:51:47.205067 (1549374707)
# HEARTBEAT_INTERVAL(seconds) = 15
# HEARTBEAT_MISSED = 0
# HEARTBEAT_EXPECTED = 6186
# HADR_TIMEOUT(seconds) = 60
# TIME_SINCE_LAST_RECV(seconds) = 5
# PEER_WAIT_LIMIT(seconds) = 0
# LOG_HADR_WAIT_CUR(seconds) = 0.000
# LOG_HADR_WAIT_RECENT_AVG(seconds) = 0.000023
# LOG_HADR_WAIT_ACCUMULATED(seconds) = 434.595
# LOG_HADR_WAIT_COUNT = 223725
# SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
# SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 372480
# PRIMARY_LOG_FILE,PAGE,POS = S0000280.LOG, 15574, 27902562173
# STANDBY_LOG_FILE,PAGE,POS = S0000280.LOG, 15574, 27902562173
# HADR_LOG_GAP(bytes) = 0
# STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000280.LOG, 15574, 27902562173
# STANDBY_RECV_REPLAY_GAP(bytes) = 155
# PRIMARY_LOG_TIME = 02/06/2019 15:37:34.000000 (1549467454)
# STANDBY_LOG_TIME = 02/06/2019 15:37:34.000000 (1549467454)
# STANDBY_REPLAY_LOG_TIME = 02/06/2019 15:37:34.000000 (1549467454)
# STANDBY_RECV_BUF_SIZE(pages) = 2048
# STANDBY_RECV_BUF_PERCENT = 0
# STANDBY_SPOOL_LIMIT(pages) = 0
# STANDBY_SPOOL_PERCENT = NULL
# STANDBY_ERROR_TIME = NULL
# PEER_WINDOW(seconds) = 300
# PEER_WINDOW_END = 02/06/2019 15:43:19.000000 (1549467799)
# READS_ON_STANDBY_ENABLED = N

Db2 Pacemaker configuration


When you use Pacemaker for automatic failover in the event of a node failure, you need to configure your Db2
instances and Pacemaker accordingly. This section describes this type of configuration.
The following items are prefixed with either:
[A] : Applicable to all nodes
[1] : Applicable only to node 1
[2] : Applicable only to node 2
[A] Prerequisites for Pacemaker configuration:
1. Shut down both database servers as user db2<sid> by using db2stop.
2. Change the shell environment for the db2<sid> user to /bin/ksh. We recommend that you use the YaST tool.
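If you prefer the command line over YaST, a sketch of the shell change, assuming the demo instance user db2ptr and that the ksh package is not yet installed, could look like this:

# Install the Korn shell and change the login shell of the Db2 instance user
sudo zypper install ksh
sudo usermod -s /bin/ksh db2ptr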
Pacemaker configuration

IMPORTANT
Recent testing revealed situations where netcat stops responding to requests due to its backlog and its limitation of handling
only one connection. The netcat resource then stops listening to the Azure Load Balancer requests, and the floating IP becomes
unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently, we recommend using the
azure-lb resource agent, which is part of the package resource-agents, with the following package version requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change requires brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure Load-Balancer
Detection Hardening, there is no requirement to switch immediately to azure-lb resource agent.
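To check which resource-agents package version is installed on a node, you can query the RPM database, for example:

rpm -q resource-agents
# Example output: resource-agents-4.3.018.a7fb5035-3.30.1.x86_64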

[1] IBM Db2 HADR-specific Pacemaker configuration:

# Put Pacemaker into maintenance mode


sudo crm configure property maintenance-mode=true

[1] Create IBM Db2 resources:


# Replace the placeholder values below with your instance name (db2<sid>), your database SID, and the virtual IP address of the Azure Load Balancer.

sudo crm configure primitive rsc_Db2_db2ptr_PTR db2 \


params instance="db2ptr" dblist="PTR" \
op start interval="0" timeout="130" \
op stop interval="0" timeout="120" \
op promote interval="0" timeout="120" \
op demote interval="0" timeout="120" \
op monitor interval="30" timeout="60" \
op monitor interval="31" role="Master" timeout="60"

# Configure virtual IP - same as Azure Load Balancer IP


sudo crm configure primitive rsc_ip_db2ptr_PTR IPaddr2 \
op monitor interval="10s" timeout="20s" \
params ip="10.100.0.10"

# Configure probe port for Azure load Balancer


sudo crm configure primitive rsc_nc_db2ptr_PTR azure-lb port=62500

sudo crm configure group g_ip_db2ptr_PTR rsc_ip_db2ptr_PTR rsc_nc_db2ptr_PTR

sudo crm configure ms msl_Db2_db2ptr_PTR rsc_Db2_db2ptr_PTR \


meta target-role="Started" notify="true"

sudo crm configure colocation col_db2_db2ptr_PTR inf: g_ip_db2ptr_PTR:Started msl_Db2_db2ptr_PTR:Master

sudo crm configure order ord_db2_ip_db2ptr_PTR inf: msl_Db2_db2ptr_PTR:promote g_ip_db2ptr_PTR:start

sudo crm configure rsc_defaults resource-stickiness=1000


sudo crm configure rsc_defaults migration-threshold=5000

[1] Start IBM Db2 resources:


Put Pacemaker out of maintenance mode.

# Put Pacemaker out of maintenance mode - this starts IBM Db2


sudo crm configure property maintenance-mode=false

[1] Make sure that the cluster status is OK and that all of the resources are started. It's not important which node
the resources are running on.

sudo crm status

# 2 nodes configured
# 5 resources configured

# Online: [ azibmdb01 azibmdb02 ]

# Full list of resources:

# stonith-sbd (stonith:external/sbd): Started azibmdb02


# Resource Group: g_ip_db2ptr_PTR
# rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
# rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
# Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
# Masters: [ azibmdb02 ]
# Slaves: [ azibmdb01 ]
IMPORTANT
You must manage the Pacemaker clustered Db2 instance by using Pacemaker tools. If you use db2 commands such as
db2stop, Pacemaker detects the action as a resource failure. If you're performing maintenance, you can put the nodes or
resources in maintenance mode. Pacemaker then suspends monitoring of the resources, and you can use normal db2
administration commands.
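For example, a typical maintenance window on this SLES cluster could be bracketed with the same maintenance-mode property used earlier in this article:

# Suspend cluster monitoring before manual Db2 administration
sudo crm configure property maintenance-mode=true
# ... run normal db2 administration commands as db2<sid> ...
# Return control of the resources to Pacemaker
sudo crm configure property maintenance-mode=false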

Configure Azure Load Balancer


To configure Azure Load Balancer, we recommend that you use the Azure Standard Load Balancer SKU and then
do the following:

NOTE
The Standard Load Balancer SKU has restrictions accessing public IP addresses from the nodes underneath the Load
Balancer. The article Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-
availability scenarios describes how to enable those nodes to access public IP addresses.

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure Load
Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.

1. Create a front-end IP pool:


a. In the Azure portal, open the Azure Load Balancer, select frontend IP pool , and then select Add .
b. Enter the name of the new front-end IP pool (for example, Db2-connection ).
c. Set the Assignment to Static , and enter the IP address Virtual-IP defined at the beginning.
d. Select OK .
e. After the new front-end IP pool is created, note the pool IP address.
2. Create a back-end pool:
a. In the Azure portal, open the Azure Load Balancer, select backend pools , and then select Add .
b. Enter the name of the new back-end pool (for example, Db2-backend ).
c. Select Add a virtual machine .
d. Select the availability set or the virtual machines hosting IBM Db2 database created in the preceding
step.
e. Select the virtual machines of the IBM Db2 cluster.
f. Select OK .
3. Create a health probe:
a. In the Azure portal, open the Azure Load Balancer, select health probes , and select Add .
b. Enter the name of the new health probe (for example, Db2-hp ).
c. Select TCP as the protocol and port 62500 . Keep the Interval value set to 5 , and keep the Unhealthy
threshold value set to 2 .
d. Select OK .
4. Create the load-balancing rules:
a. In the Azure portal, open the Azure Load Balancer, select Load balancing rules , and then select Add .
b. Enter the name of the new Load Balancer rule (for example, Db2-SID ).
c. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for
example, Db2-frontend ).
d. Keep the Protocol set to TCP , and enter the Database Communication port.
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP .
g. Select OK .
Make changes to SAP profiles to use virtual IP for connection
To connect to the primary instance of the HADR configuration, the SAP application layer needs to use the virtual IP
address that you defined and configured for the Azure Load Balancer. The following changes are required:
/sapmnt/<SID>/profile/DEFAULT.PFL

SAPDBHOST = db-virt-hostname
j2ee/dbhost = db-virt-hostname

/sapmnt/<SID>/global/db6/db2cli.ini

Hostname=db-virt-hostname

Install primary and dialog application servers


When you install primary and dialog application servers against a Db2 HADR configuration, use the virtual host
name that you picked for the configuration.
If you performed the installation before you created the Db2 HADR configuration, make the changes as described
in the preceding section and as follows for SAP Java stacks.
ABAP+Java or Java stack systems JDBC URL check
Use the J2EE Config tool to check or update the JDBC URL. Because the J2EE Config tool is a graphical tool, you
need to have an X server installed:
1. Sign in to the primary application server of the J2EE instance and execute:
sudo /usr/sap/*SID*/*Instance*/j2ee/configtool/configtool.sh
2. In the left frame, choose security store .
3. In the right frame, choose the key jdbc/pool/<SAPSID>/url.
4. Change the host name in the JDBC URL to the virtual host name.
jdbc:db2://db-virt-hostname:5912/TSP:deferPrepares=0
5. Select Add .
6. To save your changes, select the disk icon at the upper left.
7. Close the configuration tool.
8. Restart the Java instance.
Configure log archiving for HADR setup
To configure the Db2 log archiving for HADR setup, we recommend that you configure both the primary and the
standby database to have automatic log retrieval capability from all log archive locations. Both the primary and
standby database must be able to retrieve log archive files from all the log archive locations to which either one of
the database instances might archive log files.
The log archiving is performed only by the primary database. If you change the HADR roles of the database
servers or if a failure occurs, the new primary database is responsible for log archiving. If you've set up multiple
log archive locations, your logs might be archived twice. In the event of a local or remote catch-up, you might also
have to manually copy the archived logs from the old primary server to the active log location of the new primary
server.
We recommend configuring a common NFS share where logs are written from both nodes. The NFS share has to
be highly available.
You can use existing highly available NFS shares for transports or a profile directory. For more information, see:
High availability for NFS on Azure VMs on SUSE Linux Enterprise Server
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files
for SAP Applications
Azure NetApp Files (to create NFS shares)
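As an illustration only, assuming such a highly available NFS share is mounted at the hypothetical path /db2/PTR/log_archive on both nodes (PTR is the demo SID of this article), pointing log archiving at it could look like this; run the command as user db2<sid> against both the primary and the standby database:

db2 update db cfg for PTR using LOGARCHMETH1 "DISK:/db2/PTR/log_archive/"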

Test the cluster setup


This section describes how you can test your Db2 HADR setup. Every test assumes that you are logged in as user
root and the IBM Db2 primary is running on the azibmdb01 virtual machine.
The initial status for all test cases is explained here: (crm_mon -r or crm status)
crm status is a snapshot of Pacemaker status at execution time
crm_mon -r is continuous output of Pacemaker status

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Stopped
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Stopped
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
rsc_Db2_db2ptr_PTR (ocf::heartbeat:db2): Promoting azibmdb01
Slaves: [ azibmdb02 ]

The original status in an SAP system is documented in Transaction DBACOCKPIT > Configuration > Overview, as
shown in the following image:
Test takeover of IBM Db2

IMPORTANT
Before you start the test, make sure that:
Pacemaker doesn't have any failed actions (crm status).
There are no location constraints (leftovers of a migration test).
The IBM Db2 HADR synchronization is working. Check as user db2<sid>:

db2pd -hadr -db <DBSID>

Migrate the node that's running the primary Db2 database by executing the following command:

crm resource migrate msl_Db2_db2ptr_PTR azibmdb02

After the migration is done, the crm status output looks like:

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ]
Slaves: [ azibmdb01 ]

The original status in an SAP system is documented in Transaction DBACOCKPIT > Configuration > Overview, as
shown in the following image:
Resource migration with "crm resource migrate" creates location constraints. Location constraints should be
deleted. If location constraints aren't deleted, the resource can't fail back, or you might experience unwanted
takeovers.
Migrate the resource back to azibmdb01 and clear the location constraints:

crm resource migrate msl_Db2_db2ptr_PTR azibmdb01


crm resource clear msl_Db2_db2ptr_PTR

crm resource migrate <res_name> <host>: Creates location constraints and can cause issues with
takeover
crm resource clear <res_name> : Clears location constraints
crm resource cleanup <res_name> : Clears all errors of the resource
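To verify that no constraints created by a migration remain (crm resource migrate typically names them with a cli-prefer prefix), you can, for example, inspect the cluster configuration:

sudo crm configure show | grep cli-prefer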
Test the fencing agent
In this case, we test SBD fencing, which we recommend when you use SUSE Linux.

azibmdb01:~ # ps -ef|grep sbd


root 2374 1 0 Feb05 ? 00:00:17 sbd: inquisitor
root 2378 2374 0 Feb05 ? 00:00:40 sbd: watcher: /dev/disk/by-id/scsi-
36001405fbbaab35ee77412dacb77ae36 - slot: 0 - uuid: 27cad13a-0bce-4115-891f-43b22cfabe65
root 2379 2374 0 Feb05 ? 00:01:51 sbd: watcher: Pacemaker
root 2380 2374 0 Feb05 ? 00:00:18 sbd: watcher: Cluster

azibmdb01:~ # kill -9 2374

Cluster node azibmdb01 should be rebooted. The IBM Db2 primary HADR role is going to be moved to
azibmdb02. When azibmdb01 is back online, the Db2 instance is going to take on the role of a secondary
database instance.
If the Pacemaker service doesn't start automatically on the rebooted former primary, be sure to start it manually
with:

sudo service pacemaker start

Test a manual takeover


You can test a manual takeover by stopping the Pacemaker service on azibmdb01 node:

service pacemaker stop

Status on azibmdb02:

2 nodes configured
5 resources configured

Online: [ azibmdb02 ]
OFFLINE: [ azibmdb01 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ]
Stopped: [ azibmdb01 ]

After the failover, you can start the service again on azibmdb01.

service pacemaker start

Kill the Db2 process on the node that runs the HADR primary database

#Kill main db2 process - db2sysc


azibmdb01:~ # ps -ef|grep db2s
db2ptr 34598 34596 8 14:21 ? 00:00:07 db2sysc 0

azibmdb01:~ # kill -9 34598

The Db2 instance is going to fail, and Pacemaker will report the following status:

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb01


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Stopped
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Stopped
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Slaves: [ azibmdb02 ]
Stopped: [ azibmdb01 ]

Failed Actions:
* rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=157, status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:28:19 2019', queued=40ms, exec=223ms

Pacemaker will restart the Db2 primary database instance on the same node, or it will fail over to the node that's
running the secondary database instance and an error is reported.

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb01


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
Slaves: [ azibmdb02 ]

Failed Actions:
* rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=157, status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:28:19 2019', queued=40ms, exec=223ms

Kill the Db2 process on the node that runs the secondary database instance

azibmdb02:~ # ps -ef|grep db2s


db2ptr 65250 65248 0 Feb11 ? 00:09:27 db2sysc 0

azibmdb02:~ # kill -9 65250

The node enters a failed state, and an error is reported:

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb01


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
rsc_Db2_db2ptr_PTR (ocf::heartbeat:db2): FAILED azibmdb02
Masters: [ azibmdb01 ]

Failed Actions:
* rsc_Db2_db2ptr_PTR_monitor_30000 on azibmdb02 'not running' (7): call=144, status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:36:59 2019', queued=0ms, exec=0ms

The Db2 instance gets restarted in the secondary role it had been assigned before.
2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb01


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
Slaves: [ azibmdb02 ]

Failed Actions:
* rsc_Db2_db2ptr_PTR_monitor_30000 on azibmdb02 'not running' (7): call=144, status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:36:59 2019', queued=0ms, exec=0ms

Stop DB via db2stop force on the node that runs the HADR primary database instance

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb01


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
Slaves: [ azibmdb02 ]

As user db2<sid>, execute the command db2stop force:

azibmdb01:~ # su - db2ptr
azibmdb01:db2ptr> db2stop force

Failure detected

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb01


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Stopped
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Stopped
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
rsc_Db2_db2ptr_PTR (ocf::heartbeat:db2): FAILED azibmdb01
Slaves: [ azibmdb02 ]

Failed Actions:
* rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=201, status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:45:25 2019', queued=1ms, exec=150ms
The Db2 HADR secondary database instance gets promoted into the primary role:

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb01


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ]
Stopped: [ azibmdb01 ]

Failed Actions:
* rsc_Db2_db2ptr_PTR_start_0 on azibmdb01 'unknown error' (1): call=205, status=complete, exitreason='',
    last-rc-change='Tue Feb 12 14:45:27 2019', queued=0ms, exec=865ms

Crash VM with restart on the node that runs the HADR primary database instance

#Linux kernel panic - with OS restart


azibmdb01:~ # echo b > /proc/sysrq-trigger

Pacemaker will promote the secondary instance to the primary instance role. The old primary instance will move
into the secondary role after the VM and all services are fully restored following the reboot:

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
Slaves: [ azibmdb02 ]

Crash the VM that runs the HADR primary database instance with "halt"

#Linux kernel panic - halts OS


azibmdb01:~ # echo b > /proc/sysrq-trigger

In such a case, Pacemaker will detect that the node that's running the primary database instance isn't responding.
2 nodes configured
5 resources configured

Node azibmdb01: UNCLEAN (online)


Online: [ azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
Slaves: [ azibmdb02 ]

The next step is to check for a split-brain situation. After the surviving node has determined that the node that last
ran the primary database instance is down, a failover of resources is executed.

2 nodes configured
5 resources configured

Online: [ azibmdb02 ]
OFFLINE: [ azibmdb01 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ]
Stopped: [ azibmdb01 ]

If the node was halted, the failed node has to be restarted via Azure management tools (in the
Azure portal, PowerShell, or the Azure CLI). After the failed node is back online, it starts the Db2 instance in the
secondary role.

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ]
Slaves: [ azibmdb01 ]

Next steps
High-availability architecture and scenarios for SAP NetWeaver
Set up Pacemaker on SUSE Linux Enterprise Server in Azure
High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server

IBM Db2 for Linux, UNIX, and Windows (LUW) in high availability and disaster recovery (HADR) configuration
consists of one node that runs a primary database instance and at least one node that runs a secondary database
instance. Changes to the primary database instance are replicated to a secondary database instance
synchronously or asynchronously, depending on your configuration.

NOTE
This article contains references to the terms master and slave, terms that Microsoft no longer uses. When these terms are
removed from the software, we’ll remove them from this article.

This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster framework,
and install the IBM Db2 LUW with HADR configuration.
The article doesn't cover how to install and configure IBM Db2 LUW with HADR or SAP software installation. To
help you accomplish these tasks, we provide references to SAP and IBM installation manuals. This article focuses
on parts that are specific to the Azure environment.
The supported IBM Db2 versions are 10.5 and later, as documented in SAP note 1928533.
Before you begin an installation, see the following SAP notes and documentation:

SAP NOTE | DESCRIPTION

1928533 | SAP applications on Azure: Supported products and Azure VM types

2015553 | SAP on Azure: Support prerequisites

2178632 | Key monitoring metrics for SAP on Azure

2191498 | SAP on Linux with Azure: Enhanced monitoring

2243692 | Linux on Azure (IaaS) VM: SAP license issues

2002167 | Red Hat Enterprise Linux 7.x: Installation and Upgrade

2694118 | Red Hat Enterprise Linux HA Add-On on Azure

1999351 | Troubleshooting enhanced Azure monitoring for SAP

2233094 | DB6: SAP applications on Azure that use IBM Db2 for Linux, UNIX, and Windows - additional information

1612105 | DB6: FAQ on Db2 with HADR


DOCUMENTATION

SAP Community Wiki: Has all of the required SAP Notes for Linux

Azure Virtual Machines planning and implementation for SAP on Linux guide

Azure Virtual Machines deployment for SAP on Linux (this article)

Azure Virtual Machines database management system(DBMS) deployment for SAP on Linux guide

SAP workload on Azure planning and deployment checklist

Overview of the High Availability Add-On for Red Hat Enterprise Linux 7

High Availability Add-On Administration

High Availability Add-On Reference

Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members

Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure

IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload

IBM Db2 HADR 11.1

IBM Db2 HADR 10.5

Support Policy for RHEL High Availability Clusters - Management of IBM Db2 for Linux, Unix, and Windows in a Cluster

Overview
To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure virtual machines, which are
deployed in an Azure availability set or across Azure Availability Zones.
The following graphics display a setup of two database server Azure VMs. Both database server Azure VMs have
their own storage attached and are up and running. In HADR, one database instance in one of the Azure VMs has
the role of the primary instance. All clients are connected to the primary instance. All changes in database transactions
are persisted locally in the Db2 transaction log. As the transaction log records are persisted locally, the records are
transferred via TCP/IP to the database instance on the second database server, the standby server, or standby
instance. The standby instance updates the local database by rolling forward the transferred transaction log
records. In this way, the standby server is kept in sync with the primary server.
HADR is only a replication functionality. It has no failure detection and no automatic takeover or failover facilities.
A takeover or transfer to the standby server must be initiated manually by a database administrator. To achieve an
automatic takeover and failure detection, you can use the Linux Pacemaker clustering feature. Pacemaker monitors
the two database server instances. When the primary database server instance crashes, Pacemaker initiates an
automatic HADR takeover by the standby server. Pacemaker also ensures that the virtual IP address is assigned to
the new primary server.
To have SAP application servers connect to the primary database, you need a virtual host name and a virtual IP
address. In the event of a failover, the SAP application servers connect to the new primary database instance. In an
Azure environment, an Azure load balancer is required to use a virtual IP address in the way that's required for
HADR of IBM Db2.
To help you fully understand how IBM Db2 LUW with HADR and Pacemaker fits into a highly available SAP system
setup, the following image presents an overview of a highly available setup of an SAP system based on an IBM Db2
database. This article covers only IBM Db2, but it provides references to other articles about how to set up other
components of an SAP system.

High-level overview of the required steps


To deploy an IBM Db2 configuration, you need to follow these steps:
Plan your environment.
Deploy the VMs.
Update RHEL Linux and configure file systems.
Install and configure Pacemaker.
Set up a GlusterFS cluster or Azure NetApp Files.
Install ASCS/ERS on a separate cluster.
Install IBM Db2 database with Distributed/High Availability option (SWPM).
Install and create a secondary database node and instance, and configure HADR.
Confirm that HADR is working.
Apply the Pacemaker configuration to control IBM Db2.
Configure Azure Load Balancer.
Install primary and dialog application servers.
Check and adapt the configuration of SAP application servers.
Perform failover and takeover tests.

Plan Azure infrastructure for hosting IBM Db2 LUW with HADR
Complete the planning process before you execute the deployment. Planning builds the foundation for deploying
a configuration of Db2 with HADR in Azure. Key elements that need to be part of planning for IBM Db2 LUW
(the database part of the SAP environment) are listed in the following table:

TOPIC | SHORT DESCRIPTION

Define Azure resource groups | Resource groups where you deploy VM, VNet, Azure Load Balancer, and other resources. Can be existing or new.

Virtual network / Subnet definition | Where VMs for IBM Db2 and Azure Load Balancer are being deployed. Can be existing or newly created.

Virtual machines hosting IBM Db2 LUW | VM size, storage, networking, IP address.

Virtual host name and virtual IP for IBM Db2 database | The virtual IP or host name that's used for connection of SAP application servers. db-virt-hostname, db-virt-ip.

Azure fencing | Method to avoid split-brain situations.

Azure Load Balancer | Usage of Basic or Standard (recommended), probe port for the Db2 database (our recommendation: 62500), probe-port.

Name resolution | How name resolution works in the environment. A DNS service is highly recommended. A local hosts file can be used.

For more information about Linux Pacemaker in Azure, see Setting up Pacemaker on Red Hat Enterprise Linux in
Azure.

Deployment on Red Hat Enterprise Linux


The resource agent for IBM Db2 LUW is included in the Red Hat Enterprise Linux Server HA Add-On. For the setup that's
described in this document, you should use Red Hat Enterprise Linux for SAP. The Azure Marketplace contains an
image for Red Hat Enterprise Linux 7.4 for SAP or higher that you can use to deploy new Azure virtual machines.
Be aware of the various support or service models that are offered by Red Hat through the Azure Marketplace
when you choose a VM image in the Azure VM Marketplace.
Hosts: DNS updates
Make a list of all host names, including virtual host names, and update your DNS servers to enable proper IP
address to host-name resolution. If a DNS server doesn't exist or you can't update and create DNS entries, you
need to use the local host files of the individual VMs that are participating in this scenario. If you're using host file
entries, make sure that the entries are applied to all VMs in the SAP system environment. However, we
recommend that you use your DNS that, ideally, extends into Azure.
Manual deployment
Make sure that the selected OS is supported by IBM/SAP for IBM Db2 LUW. The list of supported OS versions for
Azure VMs and Db2 releases is available in SAP note 1928533. The list of OS releases by individual Db2 release is
available in the SAP Product Availability Matrix. We highly recommend a minimum of Red Hat Enterprise Linux 7.4
for SAP because of Azure-related performance improvements in this or later Red Hat Enterprise Linux versions.
1. Create or select a resource group.
2. Create or select a virtual network and subnet.
3. Create an Azure availability set or deploy an availability zone.
For the availability set, set the maximum update domains to 2.
4. Create Virtual Machine 1.
Use Red Hat Enterprise Linux for SAP image in the Azure Marketplace.
Select the Azure availability set you created in step 3, or select Availability Zone.
5. Create Virtual Machine 2.
Use Red Hat Enterprise Linux for SAP image in the Azure Marketplace.
Select the Azure availability set you created in step 3, or select Availability Zone (not the same zone as
in step 3).
6. Add data disks to the VMs, and then check the recommendation of a file system setup in the article IBM Db2
Azure Virtual Machines DBMS deployment for SAP workload.

Create the Pacemaker cluster


To create a basic Pacemaker cluster for this IBM Db2 server, see Setting up Pacemaker on Red Hat Enterprise Linux
in Azure.

Install the IBM Db2 LUW and SAP environment


Before you start the installation of an SAP environment based on IBM Db2 LUW, review the following
documentation:
Azure documentation
SAP documentation
IBM documentation
Links to this documentation are provided in the introductory section of this article.
Check the SAP installation manuals about installing NetWeaver-based applications on IBM Db2 LUW. You can find
the guides on the SAP Help portal by using the SAP Installation Guide Finder.
You can reduce the number of guides displayed in the portal by setting the following filters:
I want to: "Install a new system"
My Database: "IBM Db2 for Linux, Unix, and Windows"
Additional filters for SAP NetWeaver versions, stack configuration, or operating system
Red Hat firewall rules
Red Hat Enterprise Linux has the firewall enabled by default.
#Allow access to SWPM tool. Rule is not permanent.
sudo firewall-cmd --add-port=4237/tcp

Installation hints for setting up IBM Db2 LUW with HADR


To set up the primary IBM Db2 LUW database instance:
Use the high availability or distributed option.
Install the SAP ASCS/ERS and Database instance.
Take a backup of the newly installed database.

IMPORTANT
Write down the "Database Communication port" that's set during installation. It must be the same port number for both
database instances.

IBM Db2 HADR settings for Azure


When you use an Azure Pacemaker fencing agent, set the following parameters:
HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 240
HADR timeout value (HADR_TIMEOUT) = 45
We recommend the preceding parameters based on initial failover/takeover testing. It is mandatory that you test
for proper functionality of failover and takeover with these parameter settings. Because individual configurations
can vary, the parameters might require adjustment.
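A minimal sketch of applying these values, assuming the demo SID ID2 and instance user db2id2 used later in this article (changed HADR parameters generally take effect only after the database has been deactivated and reactivated):

su - db2id2
db2 update db cfg for ID2 using HADR_PEER_WINDOW 240
db2 update db cfg for ID2 using HADR_TIMEOUT 45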

NOTE
Specific to IBM Db2 with HADR configuration with normal startup: The secondary or standby database instance must be up
and running before you can start the primary database instance.
NOTE
For installation and configuration that's specific to Azure and Pacemaker: During the installation procedure through SAP
Software Provisioning Manager, there is an explicit question about high availability for IBM Db2 LUW:
Do not select IBM Db2 pureScale .
Do not select Install IBM Tivoli System Automation for Multiplatforms .
Do not select Generate cluster configuration files .

To set up the Standby database server by using the SAP homogeneous system copy procedure, execute these
steps:
1. Select the System copy option > Target systems > Distributed > Database instance .
2. As a copy method, select Homogeneous System so that you can use backup to restore a backup on the
standby server instance.
3. When you reach the exit step to restore the database for homogeneous system copy, exit the installer. Restore
the database from a backup of the primary host. All subsequent installation phases have already been executed
on the primary database server.
Red Hat firewall rules for DB2 HADR
Add firewall rules to allow incoming traffic to Db2 and traffic between the Db2 nodes, so that HADR works:
Database communication port. If using partitions, add those ports too.
HADR port (value of DB2 parameter HADR_LOCAL_SVC)
Azure probe port

sudo firewall-cmd --add-port=<port>/tcp --permanent


sudo firewall-cmd --reload
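For example, with the database communication port 5912 used in the JDBC URL of this article, the recommended probe port 62500, and a hypothetical HADR port (HADR_LOCAL_SVC) of 51012, the rules could look like this:

sudo firewall-cmd --add-port=5912/tcp --permanent
sudo firewall-cmd --add-port=51012/tcp --permanent
sudo firewall-cmd --add-port=62500/tcp --permanent
sudo firewall-cmd --reload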

IBM Db2 HADR check


For demonstration purposes and the procedures described in this article, the database SID is ID2 .
After you've configured HADR and the status is PEER and CONNECTED on the primary and standby nodes,
perform the following check:

Execute the following command as user db2<sid>: db2pd -hadr -db <SID>

#Primary output:
Database Member 0 -- Database ID2 -- Active -- Up 1 days 15:45:23 -- Date 2019-06-25-10.55.25.349375

HADR_ROLE = PRIMARY
REPLAY_TYPE = PHYSICAL
HADR_SYNCMODE = NEARSYNC
STANDBY_ID = 1
LOG_STREAM_ID = 0
HADR_STATE = PEER
HADR_FLAGS =
PRIMARY_MEMBER_HOST = az-idb01
PRIMARY_INSTANCE = db2id2
PRIMARY_MEMBER = 0
STANDBY_MEMBER_HOST = az-idb02
STANDBY_INSTANCE = db2id2
STANDBY_MEMBER = 0
HADR_CONNECT_STATUS = CONNECTED
HADR_CONNECT_STATUS_TIME = 06/25/2019 10:55:05.076494 (1561460105)
HEARTBEAT_INTERVAL(seconds) = 7
HEARTBEAT_MISSED = 5
HEARTBEAT_EXPECTED = 52
HADR_TIMEOUT(seconds) = 30
TIME_SINCE_LAST_RECV(seconds) = 5
PEER_WAIT_LIMIT(seconds) = 0
LOG_HADR_WAIT_CUR(seconds) = 0.000
LOG_HADR_WAIT_RECENT_AVG(seconds) = 598.000027
LOG_HADR_WAIT_ACCUMULATED(seconds) = 598.000
LOG_HADR_WAIT_COUNT = 1
SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 369280
PRIMARY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
HADR_LOG_GAP(bytes) = 132242668
STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_RECV_REPLAY_GAP(bytes) = 0
PRIMARY_LOG_TIME = 06/25/2019 10:45:42.000000 (1561459542)
STANDBY_LOG_TIME = 06/25/2019 10:45:42.000000 (1561459542)
STANDBY_REPLAY_LOG_TIME = 06/25/2019 10:45:42.000000 (1561459542)
STANDBY_RECV_BUF_SIZE(pages) = 2048
STANDBY_RECV_BUF_PERCENT = 0
STANDBY_SPOOL_LIMIT(pages) = 1000
STANDBY_SPOOL_PERCENT = 0
STANDBY_ERROR_TIME = NULL
PEER_WINDOW(seconds) = 300
PEER_WINDOW_END = 06/25/2019 11:12:03.000000 (1561461123)
READS_ON_STANDBY_ENABLED = N

#Secondary output:
Database Member 0 -- Database ID2 -- Standby -- Up 1 days 15:45:18 -- Date 2019-06-25-10.56.19.820474

HADR_ROLE = STANDBY
REPLAY_TYPE = PHYSICAL
HADR_SYNCMODE = NEARSYNC
STANDBY_ID = 0
LOG_STREAM_ID = 0
HADR_STATE = PEER
HADR_FLAGS =
PRIMARY_MEMBER_HOST = az-idb01
PRIMARY_INSTANCE = db2id2
PRIMARY_MEMBER = 0
STANDBY_MEMBER_HOST = az-idb02
STANDBY_INSTANCE = db2id2
STANDBY_MEMBER = 0
HADR_CONNECT_STATUS = CONNECTED
HADR_CONNECT_STATUS_TIME = 06/25/2019 10:55:05.078116 (1561460105)
HEARTBEAT_INTERVAL(seconds) = 7
HEARTBEAT_MISSED = 0
HEARTBEAT_EXPECTED = 10
HADR_TIMEOUT(seconds) = 30
TIME_SINCE_LAST_RECV(seconds) = 1
PEER_WAIT_LIMIT(seconds) = 0
LOG_HADR_WAIT_CUR(seconds) = 0.000
LOG_HADR_WAIT_RECENT_AVG(seconds) = 598.000027
LOG_HADR_WAIT_ACCUMULATED(seconds) = 598.000
LOG_HADR_WAIT_COUNT = 1
SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 367360
PRIMARY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
HADR_LOG_GAP(bytes) = 0
STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_RECV_REPLAY_GAP(bytes) = 0
PRIMARY_LOG_TIME = 06/25/2019 10:45:42.000000 (1561459542)
STANDBY_LOG_TIME = 06/25/2019 10:45:42.000000 (1561459542)
STANDBY_REPLAY_LOG_TIME = 06/25/2019 10:45:42.000000 (1561459542)
STANDBY_RECV_BUF_SIZE(pages) = 2048
STANDBY_RECV_BUF_PERCENT = 0
STANDBY_SPOOL_LIMIT(pages) = 1000
STANDBY_SPOOL_PERCENT = 0
STANDBY_ERROR_TIME = NULL
PEER_WINDOW(seconds) = 1000
PEER_WINDOW_END = 06/25/2019 11:12:59.000000 (1561461179)
READS_ON_STANDBY_ENABLED = N

Db2 Pacemaker configuration


When you use Pacemaker for automatic failover in the event of a node failure, you need to configure your Db2
instances and Pacemaker accordingly. This section describes this type of configuration.
The following items are prefixed with either:
[A] : Applicable to all nodes
[1] : Applicable only to node 1
[2] : Applicable only to node 2
[A] Prerequisite for Pacemaker configuration:
1. Shut down both database servers as user db2<sid> by using db2stop.
2. Change the shell environment for db2<sid> user to /bin/ksh:

# Install korn shell:


sudo yum install ksh
# Change users shell:
sudo usermod -s /bin/ksh db2<sid>

Pacemaker configuration
[1] IBM Db2 HADR-specific Pacemaker configuration:

# Put Pacemaker into maintenance mode


sudo pcs property set maintenance-mode=true
[1] Create IBM Db2 resources:

# Replace the placeholder values with your instance name (db2<sid>), your database SID, and the virtual IP address of the Azure Load Balancer.
sudo pcs resource create Db2_HADR_ID2 db2 instance='db2id2' dblist='ID2' master meta notify=true resource-stickiness=5000

#Configure resource stickiness and correct cluster notifications for master resource
sudo pcs resource update Db2_HADR_ID2-master meta notify=true resource-stickiness=5000

# Configure virtual IP - same as Azure Load Balancer IP


sudo pcs resource create vip_db2id2_ID2 IPaddr2 ip='10.100.0.40'

# Configure probe port for Azure load Balancer


sudo pcs resource create nc_db2id2_ID2 azure-lb port=62500

#Create a group for ip and Azure loadbalancer probe port


sudo pcs resource group add g_ipnc_db2id2_ID2 vip_db2id2_ID2 nc_db2id2_ID2

#Create colocation constraint - keep Db2 HADR Master and Group on same node
sudo pcs constraint colocation add g_ipnc_db2id2_ID2 with master Db2_HADR_ID2-master

#Create start order constraint


sudo pcs constraint order promote Db2_HADR_ID2-master then g_ipnc_db2id2_ID2

[1] Start IBM Db2 resources:


Put Pacemaker out of maintenance mode.

# Put Pacemaker out of maintenance mode - this starts IBM Db2


sudo pcs property set maintenance-mode=false

[1] Make sure that the cluster status is OK and that all of the resources are started. It's not important which node
the resources are running on.

sudo pcs status

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb01


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb01 ]
Slaves: [ az-idb02 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb01
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01

Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
IMPORTANT
You must manage the Pacemaker clustered Db2 instance by using Pacemaker tools. If you use db2 commands such as
db2stop, Pacemaker detects the action as a resource failure. If you're performing maintenance, you can put the nodes or
resources in maintenance mode. Pacemaker then suspends monitoring of the resources, and you can use normal db2
administration commands.
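For example, a typical maintenance window on this RHEL cluster could be bracketed with the same maintenance-mode property used earlier in this article:

# Suspend cluster monitoring before manual Db2 administration
sudo pcs property set maintenance-mode=true
# ... run normal db2 administration commands as db2<sid> ...
# Return control of the resources to Pacemaker
sudo pcs property set maintenance-mode=false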

Configure Azure Load Balancer


To configure Azure Load Balancer, we recommend that you use the Azure Standard Load Balancer SKU and then do
the following:

NOTE
The Standard Load Balancer SKU has restrictions accessing public IP addresses from the nodes underneath the Load
Balancer. The article Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-
availability scenarios describes how to enable those nodes to access public IP addresses.

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure Load
Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.

1. Create a front-end IP pool:


a. In the Azure portal, open the Azure Load Balancer, select frontend IP pool , and then select Add .
b. Enter the name of the new front-end IP pool (for example, Db2-connection ).
c. Set the Assignment to Static , and enter the IP address Virtual-IP defined at the beginning.
d. Select OK .
e. After the new front-end IP pool is created, note the pool IP address.
2. Create a back-end pool:
a. In the Azure portal, open the Azure Load Balancer, select backend pools , and then select Add .
b. Enter the name of the new back-end pool (for example, Db2-backend ).
c. Select Add a virtual machine .
d. Select the availability set or the virtual machines hosting IBM Db2 database created in the preceding step.
e. Select the virtual machines of the IBM Db2 cluster.
f. Select OK .
3. Create a health probe:
a. In the Azure portal, open the Azure Load Balancer, select health probes , and select Add .
b. Enter the name of the new health probe (for example, Db2-hp ).
c. Select TCP as the protocol and port 62500 . Keep the Interval value set to 5 , and keep the Unhealthy
threshold value set to 2 .
d. Select OK .
4. Create the load-balancing rules:
a. In the Azure portal, open the Azure Load Balancer, select Load balancing rules , and then select Add .
b. Enter the name of the new Load Balancer rule (for example, Db2-SID ).
c. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for
example, Db2-frontend ).
d. Keep the Protocol set to TCP , and enter the Database Communication port.
e. Increase the idle timeout to 30 minutes.
f. Make sure to enable Floating IP .
g. Select OK .
[A] Add firewall rule for probe port:

sudo firewall-cmd --add-port=<probe-port>/tcp --permanent


sudo firewall-cmd --reload

Make changes to SAP profiles to use virtual IP for connection


To connect to the primary instance of the HADR configuration, the SAP application layer needs to use the virtual IP
address that you defined and configured for the Azure Load Balancer. The following changes are required:
/sapmnt/<SID>/profile/DEFAULT.PFL

SAPDBHOST = db-virt-hostname
j2ee/dbhost = db-virt-hostname

/sapmnt/<SID>/global/db6/db2cli.ini

Hostname=db-virt-hostname

Install primary and dialog application servers


When you install primary and dialog application servers against a Db2 HADR configuration, use the virtual host
name that you picked for the configuration.
If you performed the installation before you created the Db2 HADR configuration, make the changes as described
in the preceding section and as follows for SAP Java stacks.
ABAP+Java or Java stack systems JDBC URL check
Use the J2EE Config tool to check or update the JDBC URL. Because the J2EE Config tool is a graphical tool, you
need to have an X server installed:
1. Sign in to the primary application server of the J2EE instance and execute:

sudo /usr/sap/*SID*/*Instance*/j2ee/configtool/configtool.sh

2. In the left frame, choose security store .


3. In the right frame, choose the key jdbc/pool/\<SAPSID>/url .
4. Change the host name in the JDBC URL to the virtual host name.
jdbc:db2://db-virt-hostname:5912/TSP:deferPrepares=0

5. Select Add .
6. To save your changes, select the disk icon at the upper left.
7. Close the configuration tool.
8. Restart the Java instance.

Configure log archiving for HADR setup


To configure the Db2 log archiving for HADR setup, we recommend that you configure both the primary and the
standby database to have automatic log retrieval capability from all log archive locations. Both the primary and
standby database must be able to retrieve log archive files from all the log archive locations to which either one of
the database instances might archive log files.
The log archiving is performed only by the primary database. If you change the HADR roles of the database
servers or if a failure occurs, the new primary database is responsible for log archiving. If you've set up multiple
log archive locations, your logs might be archived twice. In the event of a local or remote catch-up, you might also
have to manually copy the archived logs from the old primary server to the active log location of the new primary
server.
We recommend configuring a common NFS share or GlusterFS, where logs are written from both nodes. The NFS
share or GlusterFS has to be highly available.
You can use existing highly available NFS shares or GlusterFS for transports or a profile directory. For more
information, see:
GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver
High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux with Azure NetApp Files for
SAP Applications
Azure NetApp Files (to create NFS shares)

Test the cluster setup


This section describes how you can test your Db2 HADR setup. Every test assumes that the IBM Db2 primary is running on
the az-idb01 virtual machine. A user with sudo privileges or root (not recommended) must be used.
The initial status for all test cases is explained here: (crm_mon -r or pcs status)
pcs status is a snapshot of Pacemaker status at execution time
crm_mon -r is continuous output of Pacemaker status
2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb01


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb01 ]
Slaves: [ az-idb02 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb01
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01

Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

The original status in an SAP system is documented in Transaction DBACOCKPIT > Configuration > Overview, as
shown in the following image:
Test takeover of IBM Db2

IMPORTANT
Before you start the test, make sure that:
Pacemaker doesn't have any failed actions (pcs status).
There are no location constraints (leftovers of a previous migration test).
The IBM Db2 HADR synchronization is working. Check as user db2<sid>:

db2pd -hadr -db <DBSID>
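As an orientation only (field names can differ slightly between Db2 releases; verify against your Db2 documentation), the db2pd output of a healthy pair typically shows values such as:

# Expected values in the db2pd -hadr -db <DBSID> output of a healthy HADR pair:
#   HADR_ROLE            = PRIMARY (on the primary node) / STANDBY (on the standby node)
#   HADR_STATE           = PEER
#   HADR_CONNECT_STATUS  = CONNECTED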

Migrate the node that's running the primary Db2 database by executing the following command:

sudo pcs resource move Db2_HADR_ID2-master

After the migration is done, the crm status output looks like:

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb01


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Stopped: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

After the migration, the status in the SAP system shown in Transaction DBACOCKPIT > Configuration > Overview reflects the new primary, as shown in the following image:

Resource migration with "pcs resource move" creates location constraints. Location constraints in this case are
preventing running IBM Db2 instance on az-idb01. If location constraints are not deleted, the resource cannot fail
back.
Remove the location constrain and standby node will be started on az-idb01.

sudo pcs resource clear Db2_HADR_ID2-master

And cluster status changes to:

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb01


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Slaves: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

Migrate the resource back to az-idb01 and clear the location constraints:

sudo pcs resource move Db2_HADR_ID2-master az-idb01


sudo pcs resource clear Db2_HADR_ID2-master

pcs resource move <res_name> : Creates location constraints and can cause issues with takeover
pcs resource clear <res_name> : Clears location constraints
pcs resource cleanup <res_name> : Clears all errors of the resource
Test a manual takeover
You can test a manual takeover by stopping the Pacemaker service on the az-idb01 node:

systemctl stop pacemaker

Status on az-idb02:
2 nodes configured
5 resources configured

Node az-idb01: pending


Online: [ az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Stopped: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

After the failover, you can start the service again on az-idb01.

systemctl start pacemaker

Kill the Db2 process on the node that runs the HADR primary database

#Kill main db2 process - db2sysc


[sapadmin@az-idb02 ~]$ sudo ps -ef|grep db2sysc
db2ptr 34598 34596 8 14:21 ? 00:00:07 db2sysc 0
[sapadmin@az-idb02 ~]$ sudo kill -9 34598

The Db2 instance is going to fail, and Pacemaker will move the master resource and report the following status:

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Stopped: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=49, status=complete, exitreason='none',
last-rc-change='Wed Jun 26 09:57:35 2019', queued=0ms, exec=362ms

Pacemaker either restarts the Db2 primary database instance on the same node or fails it over to the node that's
running the secondary database instance, and an error is reported.
Kill the Db2 process on the node that runs the secondary database instance
[sapadmin@az-idb02 ~]$ sudo ps -ef|grep db2sysc
db2id2 23144 23142 2 09:53 ? 00:00:13 db2sysc 0
[sapadmin@az-idb02 ~]$ sudo kill -9 23144

The Db2 instance on the secondary node enters a failed state, and an error is reported:

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb01 ]
Slaves: [ az-idb02 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb01
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01

Failed Actions:
* Db2_HADR_ID2_monitor_20000 on az-idb02 'not running' (7): call=144, status=complete, exitreason='none',
last-rc-change='Wed Jun 26 10:02:09 2019', queued=0ms, exec=0ms

The Db2 instance is restarted in the secondary role that it had before.
Stop DB via db2stop force on the node that runs the HADR primary database instance
As user db2<sid> execute command db2stop force:

az-idb01:db2ptr> db2stop force

Failure detected:

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Slaves: [ az-idb02 ]
Stopped: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Stopped
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Stopped

Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=110, status=complete, exitreason='none',
last-rc-change='Wed Jun 26 14:03:12 2019', queued=0ms, exec=355ms

The Db2 HADR secondary database instance is promoted to the primary role:
2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Slaves: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=110, status=complete, exitreason='none',
last-rc-change='Wed Jun 26 14:03:12 2019', queued=0ms, exec=355ms

Crash the VM that runs the HADR primary database instance with "halt"

#Linux kernel panic. Note that the write to /proc/sysrq-trigger must run with root privileges.

echo b | sudo tee /proc/sysrq-trigger

In such a case, Pacemaker will detect that the node that's running the primary database instance isn't responding.

2 nodes configured
5 resources configured

Node az-idb01: UNCLEAN (online)


Online: [ az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb01 ]
Slaves: [ az-idb02 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb01
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01

The next step is to check for a split-brain situation. After the surviving node has determined that the node that last
ran the primary database instance is down, a failover of resources is executed.

2 nodes configured
5 resources configured

Online: [ az-idb02 ]
OFFLINE: [ az-idb01 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Stopped: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02
In the event of a kernel panic, the failed node will be restarted by the fencing agent. After the failed node is back online,
you must start the Pacemaker cluster by running:

sudo pcs cluster start

Starting the cluster brings up the Db2 instance in the secondary role.

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Slaves: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

Next steps
High-availability architecture and scenarios for SAP NetWeaver
Setting up Pacemaker on Red Hat Enterprise Linux in Azure
SAP ASE Azure Virtual Machines DBMS deployment
for SAP workload

This document covers several different areas to consider when deploying SAP ASE in Azure IaaS. As a
precondition to this document, you should have read the document Considerations for Azure Virtual Machines
DBMS deployment for SAP workload and other guides in the SAP workload on Azure documentation. This
document covers SAP ASE running on Linux and on Windows operating systems. The minimum supported
release on Azure is SAP ASE 16.0.02 (Release 16 Support Pack 2). It is recommended to deploy the latest version of
SAP ASE and the latest patch level. As a minimum, SAP ASE 16.0.03.07 (Release 16 Support Pack 3 Patch Level 7) is
recommended. The most recent version of SAP ASE can be found in Targeted ASE 16.0 Release Schedule and CR list
Information.
Additional information about release support with SAP applications and the installation media location can be found,
besides in the SAP Product Availability Matrix, in these locations:
SAP support note #2134316
SAP support note #1941500
SAP support note #1590719
SAP support note #1973241
Remark: Throughout documentation within and outside the SAP world, the name of the product is referenced as
Sybase ASE or SAP ASE or in some cases both. In order to stay consistent, we use the name SAP ASE in this
documentation.

Operating system support


The SAP Product Availability Matrix contains the supported Operating System and SAP Kernel combinations for
each SAP application. Linux distributions SUSE 12.x, SUSE 15.x, Red Hat 7.x are fully supported. Oracle Linux as
operating system for SAP ASE is not supported. It is recommended to use the most recent Linux releases available.
Windows customers should use Windows Server 2016 or Windows Server 2019 releases. Older releases of
Windows such as Windows 2012 are technically supported but the latest Windows version is always
recommended.

Specifics to SAP ASE on Windows


Starting with Microsoft Azure, you can migrate your existing SAP ASE applications to Azure Virtual Machines. SAP
ASE in an Azure Virtual Machine enables you to reduce the total cost of ownership of deployment, management,
and maintenance of enterprise breadth applications by easily migrating these applications to Microsoft Azure.
With SAP ASE in an Azure Virtual Machine, administrators and developers can still use the same development and
administration tools that are available on-premises.
Microsoft Azure offers numerous different virtual machine types that allow you to run the smallest SAP systems and
landscapes up to large SAP systems and landscapes with thousands of users. SAP sizing SAPS numbers of the
different SAP certified VM SKUs is provided in SAP support note #1928533.
Documentation to install SAP ASE on Windows can be found in the SAP ASE Installation Guide for Windows
Lock Pages in Memory is a setting that will prevent the SAP ASE database buffer from being paged out. This
setting is useful for large busy systems with a lot of memory. Contact BC-DB-SYB for more information.
Linux operating system specific settings
On Linux VMs, run saptune with the profile SAP-ASE. Linux Huge Pages should be enabled by default and can be
verified with the following command:
cat /proc/meminfo

The page size is typically 2048 KB. For details see the article Huge Pages on Linux
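A minimal sketch of both steps follows. The saptune solution name SAP-ASE matches the profile mentioned above, but verify the solutions available on your SLES release with saptune solution list before applying.

# Apply the SAP ASE tuning solution (SLES with saptune installed)
sudo saptune solution apply SAP-ASE
# Check the Huge Pages configuration
grep Huge /proc/meminfo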

Recommendations on VM and disk structure for SAP ASE deployments


SAP ASE for SAP NetWeaver Applications is supported on any VM type listed in SAP support note #1928533
Typical VM types used for medium size SAP ASE database servers include Esv3. Large multi-terabyte databases
can leverage M-series VM types. The SAP ASE transaction log disk write performance may be improved by
enabling the M-series Write Accelerator. Write Accelerator should be tested carefully with SAP ASE due to the way
that SAP ASE performs Log Writes. Review SAP support note #2816580 and consider running a performance test.
Write Accelerator is designed for transaction log disk only. The disk level cache should be set to NONE. Don't be
surprised if Azure Write Accelerator does not show similar improvements as with other DBMS. Based on the way
SAP ASE writes into the transaction log, it could be that there is little to no acceleration by Azure Write Accelerator.
Separate disks are recommended for Data devices and Log Devices. The system databases sybsecurity and
saptools do not require dedicated disks and can be placed on the disks containing the SAP database data and log
devices

File systems, stripe size & IO balancing


SAP ASE writes data sequentially into disk storage devices unless configured otherwise. This means an empty SAP
ASE database with four devices will write data into the first device only. The other disk devices will only be written
to when the first device is full. The amount of READ and WRITE IO to each SAP ASE device is likely to be different.
To balance disk IO across all available Azure disks either Windows Storage Spaces or Linux LVM2 needs to be used.
On Linux, it is recommended to use XFS file system to format the disks. The LVM stripe size should be tested with a
performance test. 128 KB stripe size is a good starting point. On Windows, the NTFS Allocation Unit Size (AUS)
should be tested. 64 KB can be used as a starting value.
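The following is a minimal Linux sketch of such a striped layout. It assumes four data disks attached as /dev/sdc through /dev/sdf and uses example volume group, logical volume, and mount point names; adjust the device names, stripe size, and paths to your own environment and performance test results.

# Create physical volumes and a volume group across the four data disks
sudo pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo vgcreate vg_asedata /dev/sdc /dev/sdd /dev/sde /dev/sdf
# Stripe the logical volume across all four disks with a 128 KB stripe size (starting point)
sudo lvcreate --extents 100%FREE --stripes 4 --stripesize 128 --name lv_asedata vg_asedata
# Format with XFS and mount (example mount point)
sudo mkfs.xfs /dev/vg_asedata/lv_asedata
sudo mkdir -p /sybase/ASE/sapdata
sudo mount /dev/vg_asedata/lv_asedata /sybase/ASE/sapdata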
It is recommended to configure Automatic Database Expansion as described in the article Configuring Automatic
Database Space Expansion in SAP Adaptive Server Enterprise and SAP support note #1815695.
Sample SAP ASE on Azure virtual machine, disk and file system configurations
The templates below show sample configurations for both Linux and Windows. Before confirming the virtual
machine and disk configuration ensure that the network and storage bandwidth quotas of the individual VM are
sufficient to meet the business requirement. Also keep in mind that different Azure VM types have different
maximum numbers of disks that can be attached to the VM. For example, an E4s_v3 VM has a limit of 48 MB/sec
storage IO throughput. If the storage throughput required by database backup activity demands more than 48
MB/sec then a larger VM type with more storage bandwidth throughput is unavoidable. When configuring Azure
storage, you also need to keep in mind that especially with Azure Premium storage the throughput and IOPS per
GB of capacity do change. See more on this topic in the article What disk types are available in Azure?. The quotas
for specific Azure VM types are documented in the article Memory optimized virtual machine sizes and articles
linked to it.

NOTE
If a DBMS system is being moved from on-premises to Azure, it is recommended to perform monitoring on the VM and
assess the CPU, memory, IOPS and storage throughput. Compare the peak values observed with the VM quota limits
documented in the articles mentioned above

The examples given below are for illustrative purposes and can be modified based on individual needs. Due to the
design of SAP ASE, the number of data devices is not as critical as with other databases. The number of data
devices detailed in this document is a guide only.
An example of a configuration for a small SAP ASE DB Server with a database size between 50 GB – 250 GB, such
as SAP Solution Manager, could look like the following:

Configuration | Windows | Linux | Comments
VM Type | E4s_v3 (4 vCPU/32 GB RAM) | E4s_v3 (4 vCPU/32 GB RAM) | ---
Accelerated Networking | Enable | Enable | ---
SAP ASE version | 16.0.03.07 or higher | 16.0.03.07 or higher | ---
# of data devices | 4 | 4 | ---
# of log devices | 1 | 1 | ---
# of temp devices | 1 | 1 | more for SAP BW workload
Operating system | Windows Server 2019 | SUSE 12 SP4/15 SP1 or RHEL 7.6 | ---
Disk aggregation | Storage Spaces | LVM2 | ---
File system | NTFS | XFS
Format block size | needs workload testing | needs workload testing | ---
# and type of data disks | Premium storage: 2 x P10 (RAID0) | Premium storage: 2 x P10 (RAID0) | Cache = Read Only
# and type of log disks | Premium storage: 1 x P20 | Premium storage: 1 x P20 | Cache = NONE
ASE MaxMemory parameter | 90% of Physical RAM | 90% of Physical RAM | assuming single instance
# of backup devices | 4 | 4 | ---
# and type of backup disks | 1 | 1 | ---

An example of a configuration for a medium SAP ASE DB Server with a database size between 250 GB – 750 GB,
such as a smaller SAP Business Suite system, could look like the following:

Configuration | Windows | Linux | Comments
VM Type | E16s_v3 (16 vCPU/128 GB RAM) | E16s_v3 (16 vCPU/128 GB RAM) | ---
Accelerated Networking | Enable | Enable | ---
SAP ASE version | 16.0.03.07 or higher | 16.0.03.07 or higher | ---
# of data devices | 8 | 8 | ---
# of log devices | 1 | 1 | ---
# of temp devices | 1 | 1 | more for SAP BW workload
Operating system | Windows Server 2019 | SUSE 12 SP4/15 SP1 or RHEL 7.6 | ---
Disk aggregation | Storage Spaces | LVM2 | ---
File system | NTFS | XFS
Format block size | needs workload testing | needs workload testing | ---
# and type of data disks | Premium storage: 4 x P20 (RAID0) | Premium storage: 4 x P20 (RAID0) | Cache = Read Only
# and type of log disks | Premium storage: 1 x P20 | Premium storage: 1 x P20 | Cache = NONE
ASE MaxMemory parameter | 90% of Physical RAM | 90% of Physical RAM | assuming single instance
# of backup devices | 4 | 4 | ---
# and type of backup disks | 1 | 1 | ---

An example of a configuration for a larger SAP ASE DB Server with a database size between 750 GB – 2000 GB, such
as a larger SAP Business Suite system, could look like the following:

Configuration | Windows | Linux | Comments
VM Type | E64s_v3 (64 vCPU/432 GB RAM) | E64s_v3 (64 vCPU/432 GB RAM) | ---
Accelerated Networking | Enable | Enable | ---
SAP ASE version | 16.0.03.07 or higher | 16.0.03.07 or higher | ---
# of data devices | 16 | 16 | ---
# of log devices | 1 | 1 | ---
# of temp devices | 1 | 1 | more for SAP BW workload
Operating system | Windows Server 2019 | SUSE 12 SP4/15 SP1 or RHEL 7.6 | ---
Disk aggregation | Storage Spaces | LVM2 | ---
File system | NTFS | XFS
Format block size | needs workload testing | needs workload testing | ---
# and type of data disks | Premium storage: 4 x P30 (RAID0) | Premium storage: 4 x P30 (RAID0) | Cache = Read Only
# and type of log disks | Premium storage: 1 x P20 | Premium storage: 1 x P20 | Cache = NONE
ASE MaxMemory parameter | 90% of Physical RAM | 90% of Physical RAM | assuming single instance
# of backup devices | 4 | 4 | ---
# and type of backup disks | 1 | 1 | ---

An example of a configuration for a large SAP ASE DB Server with a database size of 2 TB+, such as a larger
globally used SAP Business Suite system, could look like the following:

Configuration | Windows | Linux | Comments
VM Type | M-Series (1.0 to 4.0 TB RAM) | M-Series (1.0 to 4.0 TB RAM) | ---
Accelerated Networking | Enable | Enable | ---
SAP ASE version | 16.0.03.07 or higher | 16.0.03.07 or higher | ---
# of data devices | 32 | 32 | ---
# of log devices | 1 | 1 | ---
# of temp devices | 1 | 1 | more for SAP BW workload
Operating system | Windows Server 2019 | SUSE 12 SP4/15 SP1 or RHEL 7.6 | ---
Disk aggregation | Storage Spaces | LVM2 | ---
File system | NTFS | XFS
Format block size | needs workload testing | needs workload testing | ---
# and type of data disks | Premium storage: 4+ x P30 (RAID0) | Premium storage: 4+ x P30 (RAID0) | Cache = Read Only, Consider Azure Ultra disk
# and type of log disks | Premium storage: 1 x P20 | Premium storage: 1 x P20 | Cache = NONE, Consider Azure Ultra disk
ASE MaxMemory parameter | 90% of Physical RAM | 90% of Physical RAM | assuming single instance
# of backup devices | 16 | 16 | ---
# and type of backup disks | 4 | 4 | Use LVM2/Storage Spaces

Backup & restore considerations for SAP ASE on Azure


Increasing the number of data and backup devices increases backup and restore performance. It is recommended
to stripe the Azure disks that are hosting the SAP ASE backup devices, as shown in the tables earlier. Care
should be taken to balance the number of backup devices and disks, and to ensure that the backup throughput does
not exceed 40%-50% of the total VM throughput quota. It is recommended to use SAP backup compression as a
default. More details can be found in the articles:
SAP support note #1588316
SAP support note #1801984
SAP support note #1585981
Do not use drive D:\ or /temp space as database or log dump destination.
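As a hedged illustration of striping a dump across multiple backup devices with compression, the following sketch could be used. The device paths, database name, login, and compression level are examples only, not values from this guide; check the SAP ASE dump database documentation and the SAP notes above for the options valid for your release.

isql -Usapsa -S<SID> -X
dump database <SID> to "/sybase/<SID>/backup/stripe1.dmp"
  stripe on "/sybase/<SID>/backup/stripe2.dmp"
  stripe on "/sybase/<SID>/backup/stripe3.dmp"
  stripe on "/sybase/<SID>/backup/stripe4.dmp"
  with compression = 101
go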
Impact of database compression
In configurations where I/O bandwidth can become a limiting factor, measures that reduce IOPS might help to
stretch the workload one can run in an IaaS scenario like Azure. Therefore, it is recommended to make sure that
SAP ASE compression is used before uploading an existing SAP database to Azure.
The recommendation to apply compression before uploading to Azure is given for several reasons:
The amount of data to be uploaded to Azure is lower
The duration of the compression execution is shorter assuming that one can use stronger hardware with more
CPUs or higher I/O bandwidth or less I/O latency on-premises
Smaller database sizes might lead to less costs for disk allocation
Data- and LOB-Compression work in a VM hosted in Azure Virtual Machines as it does on-premises. For more
details on how to check if compression is already in use in an existing SAP ASE database, check SAP support note
1750510. For more details on SAP ASE database compression check SAP support note #2121797

High availability of SAP ASE on Azure


The HADR Users Guide details the setup and configuration of a 2 node SAP ASE “Always-on” solution. In addition,
a third disaster recovery node is also supported. SAP ASE supports many highly available configurations, including
shared disk and native OS clustering (floating IP). The only supported configuration on Azure is using Fault
Manager without Floating IP; the Floating IP address method will not work on Azure. The SAP kernel is an "HA
aware" application and knows about the primary and secondary SAP ASE servers. There is no close integration
between SAP ASE and Azure, and the Azure internal load balancer is not used. Therefore, the standard SAP ASE
documentation should be followed starting with SAP ASE HADR Users Guide

NOTE
The only supported configuration on Azure is using Fault Manager without Floating IP. The Floating IP Address method will
not work on Azure.

Third node for disaster recovery


Beyond using SAP ASE Always-On for local high availability, you might want to extend the configuration to an
asynchronously replicated node in another Azure region. Documentation for such a scenario can be found here.

SAP ASE database encryption & SSL


SAP Software Provisioning Manager (SWPM) gives you the option to encrypt the database during installation. If you
want to use encryption, it is recommended to use SAP Full Database Encryption. See details documented in:
SAP support note #2556658
SAP support note #2224138
SAP support note #2401066
SAP support note #2593925

NOTE
If an SAP ASE database is encrypted, then backup dump compression will not work. See also SAP support note #2680905.

SAP ASE on Azure deployment checklist


Deploy SAP ASE 16.0.03.07 or higher
Update to latest version and patches of FaultManager and SAPHostAgent
Deploy on latest certified OS available such as Windows 2019, Suse 15.1 or Redhat 7.6 or higher
Use SAP Certified VMs – high memory Azure VM SKUs such as Es_v3 or for x-large systems M-Series VM SKUs
are recommended
Match the disk IOPS and total VM aggregate throughput quota of the VM with the disk design. Deploy
sufficient number of disks
Aggregate disks using Windows Storage Spaces or Linux LVM2 with correct stripe size and file system
Create sufficient number of devices for data, log, temp, and backup purposes
Consider using UltraDisk for x-large systems
Run saptune with the SAP-ASE solution on the Linux OS
Secure the database with DB Encryption – manually store keys in Azure Key Vault
Complete the SAP on Azure Checklist
Configure log backup and full backup
Test HA/DR, backup and restore and perform stress & volume test
Confirm Automatic Database Extension is working

Using DBACockpit to monitor database instances


For SAP systems, which are using SAP ASE as database platform, the DBACockpit is accessible as embedded
browser windows in transaction DBACockpit or as Webdynpro. However, the full functionality for monitoring and
administering the database is available in the Webdynpro implementation of the DBACockpit only.
As with on-premises systems, several steps are required to enable all SAP NetWeaver functionality used by the
Webdynpro implementation of the DBACockpit. Follow SAP support note #1245200 to enable the usage of
Webdynpros and generate the required ones. When following the instructions in the above note, you also
configure the Internet Communication Manager (ICM) along with the ports to be used for HTTP and HTTPS
connections. The default setting for HTTP looks like:

icm/server_port_0 = PROT=HTTP,PORT=8000,PROCTIMEOUT=600,TIMEOUT=600
icm/server_port_1 = PROT=HTTPS,PORT=443$$,PROCTIMEOUT=600,TIMEOUT=600

and the links generated in transaction DBACockpit look similar to:

https://<fullyqualifiedhostname>:44300/sap/bc/webdynpro/sap/dba_cockpit
http://<fullyqualifiedhostname>:8000/sap/bc/webdynpro/sap/dba_cockpit

Depending on how the Azure Virtual Machine hosting the SAP system is connected to your AD and DNS, you need
to make sure that ICM is using a fully qualified hostname that can be resolved on the machine where you are
opening the DBACockpit from. See SAP support note #773830 to understand how ICM determines the fully
qualified host name based on profile parameters and set parameter icm/host_name_full explicitly if necessary.
If you deployed the VM in a Cloud-Only scenario without cross-premises connectivity between on-premises and
Azure, you need to define a public IP address and a domain label. The format of the public DNS name of the VM
looks like:

<custom domainlabel>.<azure region>.cloudapp.azure.com

More details related to the DNS name can be found in the documentation comparing Azure Resource Manager and classic deployment models.


If you set the SAP profile parameter icm/host_name_full to the DNS name of the Azure VM, the links might look
similar to:

https://mydomainlabel.westeurope.cloudapp.net:44300/sap/bc/webdynpro/sap/dba_cockpit
http://mydomainlabel.westeurope.cloudapp.net:8000/sap/bc/webdynpro/sap/dba_cockpit
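The corresponding profile entry would then look similar to the following line; the host name is the illustrative domain label used in the links above, not a value from the original guidance.

icm/host_name_full = mydomainlabel.westeurope.cloudapp.net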

In this case you need to make sure to:


Add Inbound rules to the Network Security Group in the Azure portal for the TCP/IP ports used to
communicate with ICM
Add Inbound rules to the Windows Firewall configuration for the TCP/IP ports used to communicate with the
ICM
For an automated import of all corrections available, it is recommended to periodically apply the correction
collection SAP Note applicable to your SAP version:
SAP support note #1558958
SAP support note #1619967
SAP support note #1882376
Further information about DBA Cockpit for SAP ASE can be found in the following SAP Notes:
SAP support note #1605680
SAP support note #1757924
SAP support note #1757928
SAP support note #1758182
SAP support note #1758496
SAP support note #1814258
SAP support note #1922555
SAP support note #1956005

Useful links, notes & whitepapers for SAP ASE


The starting page for SAP ASE 16.0.03.07 Documentation gives links to various documents of which the
documents of:
SAP ASE Learning Journey - Administration & Monitoring
SAP ASE Learning Journey - Installation & Upgrade
are helpful. Another useful document is SAP Applications on SAP Adaptive Server Enterprise Best Practices for
Migration and Runtime.
Other helpful SAP support notes are:
SAP support note #2134316
SAP support note #1748888
SAP support note #2588660
SAP support note #1680803
SAP support note #1724091
SAP support note #1775764
SAP support note #2162183
SAP support note #1928533
SAP support note #2015553
SAP support note #1750510
SAP support note #1752266
SAP support note #1588316
Other information is published on
SAP Applications on SAP Adaptive Server Enterprise
SAP ASE infocenter
SAP ASE Always-on with 3rd DR Node Setup
A Monthly newsletter is published through SAP support note #2381575

Next steps
Check the article SAP workloads on Azure: planning and deployment checklist
SAP MaxDB, liveCache, and Content Server
deployment on Azure VMs

This document covers several different areas to consider when deploying MaxDB, liveCache, and Content Server in
Azure IaaS. As a precondition to this document, you should have read the document Considerations for Azure
Virtual Machines DBMS deployment for SAP workload as well as other guides in the SAP workload on Azure
documentation.

Specifics for the SAP MaxDB deployments on Windows


SAP MaxDB Version Support on Azure
SAP currently supports SAP MaxDB version 7.9 or higher for use with SAP NetWeaver-based products in Azure. All
updates for SAP MaxDB server, or JDBC and ODBC drivers to be used with SAP NetWeaver-based products are
provided solely through the SAP Service Marketplace at https://support.sap.com/swdc. General information on
running SAP NetWeaver on SAP MaxDB can be found at https://www.sap.com/community/topic/maxdb.html.
Supported Microsoft Windows Versions and Azure VM types for SAP MaxDB DBMS
To find the supported Microsoft Windows version for SAP MaxDB DBMS on Azure, see:
SAP Product Availability Matrix (PAM)
SAP Note 1928533
It is highly recommended to use the newest version of the operating system Microsoft Windows, which is
Microsoft Windows 2016.
Available SAP MaxDB Documentation for MaxDB
You can find the updated list of SAP MaxDB documentation in the following SAP Note 767598
SAP MaxDB Configuration Guidelines for SAP Installations in Azure VMs
Storage configuration
Azure storage best practices for SAP MaxDB follow the general recommendations mentioned in chapter Storage
structure of a VM for RDBMS Deployments.

IMPORTANT
Like other databases, SAP MaxDB also has data and log files. However, in SAP MaxDB terminology the correct term is
"volume" (not "file"). For example, there are SAP MaxDB data volumes and log volumes. Do not confuse these with OS disk
volumes.

In short you have to:


If you use Azure Storage accounts, set the Azure storage account that holds the SAP MaxDB data and log
volumes (data and log files) to Local Redundant Storage (LRS) as specified in Considerations for Azure
Virtual Machines DBMS deployment for SAP workload.
Separate the IO path for SAP MaxDB data volumes (data files) from the IO path for log volumes (log files). It
means that SAP MaxDB data volumes (data files) have to be installed on one logical drive and SAP MaxDB log
volumes (log files) have to be installed on another logical drive.
Set the proper caching type for each disk, depending on whether you use it for SAP MaxDB data or log volumes
(data and log files), and whether you use Azure Standard or Azure Premium Storage, as described in
Considerations for Azure Virtual Machines DBMS deployment for SAP workload.
As long as the current IOPS quota per disk satisfies the requirements, it is possible to store all the data volumes
on a single mounted disk, and also store all database log volumes on another single mounted disk.
If more IOPS and/or space are required, it is recommended to use Microsoft Windows Storage Pools (only
available in Microsoft Windows Server 2012 and higher) to create one large logical device over multiple
mounted disks. For more details, see also Considerations for Azure Virtual Machines DBMS deployment for SAP
workload. This approach simplifies the administration overhead to manage the disk space and avoids the effort
of manually distributing files across multiple mounted disks.
It is highly recommended to use Azure Premium Storage for MaxDB deployments.

Backup and Restore


When deploying SAP MaxDB into Azure, you must review your backup methodology. Even if the system is not a
productive system, the SAP database hosted by SAP MaxDB must be backed up periodically. Since Azure Storage
keeps three images, a backup is now less important in terms of protecting your system against storage failure and
more important operational or administrative failures. The primary reason for maintaining a proper backup and
restore plan is so that you can compensate for logical or manual errors by providing point-in-time recovery
capabilities. So the goal is to either use backups to restore the database to a certain point in time or to use the
backups in Azure to seed another system by copying the existing database.
Backing up and restoring a database in Azure works the same way as it does for on-premises systems, so you can
use standard SAP MaxDB backup/restore tools, which are described in one of the SAP MaxDB documentation
documents listed in SAP Note 767598.
Performance Considerations for Backup and Restore
As in bare-metal deployments, backup and restore performance are dependent on how many volumes can be read
in parallel and the throughput of those volumes. Therefore, one can assume:
The fewer the number of disks used to store the database devices, the lower the overall read throughput
The fewer targets (Stripe Directories, disks) to write the backup to, the lower the throughput
To increase the number of targets to write to, there are two options that you can use, possibly in combination,
depending on your needs:
Dedicating separate volumes for backup
Striping the backup target volume over multiple mounted disks in order to improve the IOPS throughput on
that striped disk volume
Having separate dedicated logical disk devices for:
SAP MaxDB backup volumes (i.e. files)
SAP MaxDB data volumes (i.e. files)
SAP MaxDB log volumes (i.e. files)
Striping a volume over multiple mounted disks has been discussed earlier in Considerations for Azure Virtual
Machines DBMS deployment for SAP workload.
Other considerations
All other general areas, such as Azure Availability Sets or SAP monitoring, also apply to deployments of VMs with
the SAP MaxDB database, as described in Considerations for Azure Virtual Machines DBMS deployment for SAP
workload. Other SAP MaxDB-specific settings are transparent to Azure VMs and are described in different
documents listed in SAP Note 767598 and in these SAP Notes:
826037
1139904
1173395

Specifics for SAP liveCache deployments on Windows


SAP liveCache Version Support
The minimal version of SAP liveCache supported in Azure Virtual Machines is SAP LC/LCAPPS 10.0 SP 25, including
liveCache 7.9.08.31 and LCA-Build 25, released for EhP 2 for SAP SCM 7.0 and later releases.
Supported Microsoft Windows Versions and Azure VM types for SAP liveCache DBMS
To find the supported Microsoft Windows version for SAP liveCache on Azure, see:
SAP Product Availability Matrix (PAM)
SAP Note 1928533
It is highly recommended to use the newest version of the operating system Microsoft Windows Server.
SAP liveCache Configuration Guidelines for SAP Installations in Azure VMs
Recommended Azure VM Types for liveCache
As SAP liveCache is an application that performs huge calculations, the amount and speed of RAM and CPU have a
major influence on SAP liveCache performance.
For the Azure VM types supported by SAP (SAP Note 1928533), all virtual CPU resources allocated to the VM are
backed by dedicated physical CPU resources of the hypervisor. No overprovisioning (and therefore no competition
for CPU resources) takes place.
Similarly, for all Azure VM instance types supported by SAP, the VM memory is 100% mapped to the physical
memory - over-provisioning (over-commitment), for example, is not used.
From this perspective, it is highly recommended to use the most recent Dv2, Dv3, Ev3, and M-series VMs. The
choice of the different VM types depends on the memory you need for liveCache and the CPU resources you need.
As with all other DBMS deployments it is advisable to leverage Azure Premium Storage for performance critical
volumes.
Storage Configuration for liveCache in Azure
As SAP liveCache is based on SAP MaxDB technology, all the Azure storage best practice recommendations
mentioned for SAP MaxDB described in this document are also valid for SAP liveCache.
Dedicated Azure VM for liveCache scenario
As SAP liveCache intensively uses computational power, for productive usage it is highly recommended to deploy
on a dedicated Azure Virtual Machine.

Backup and Restore for liveCache in Azure


Backup and restore, including performance considerations, are already described in the relevant SAP MaxDB
chapters in this document.
Other considerations
All other general areas are already described in the relevant SAP MaxDB chapter.

Specifics for the SAP Content Server deployment on Windows in Azure


The SAP Content Server is a separate, server-based component to store content such as electronic documents in
different formats. The SAP Content Server is provided as a technology component and can be used cross-application
by any SAP application. It is installed on a separate system. Typical content is training material and
documentation from Knowledge Warehouse or technical drawings originating from the mySAP PLM Document
Management System.
SAP Content Server Version Support for Azure VMs
SAP currently supports:
SAP Content Server with version 6.50 (and higher)
SAP MaxDB version 7.9
Microsoft IIS (Internet Information Server) version 8.0 (and higher)
It is highly recommended to use the newest version of SAP Content Server, and the newest version of Microsoft
IIS.
Check the latest supported versions of SAP Content Server and Microsoft IIS in the SAP Product Availability Matrix
(PAM).
Supported Microsoft Windows and Azure VM types for SAP Content Server
To find out supported Windows version for SAP Content Server on Azure, see:
SAP Product Availability Matrix (PAM)
SAP Note 1928533
It is highly recommended to use the newest version of Microsoft Windows Server.
SAP Content Server Configuration Guidelines for SAP Installations in Azure VMs
Storage Configuration for Content Server in Azure
If you configure SAP Content Server to store files in the SAP MaxDB database, all Azure storage best practice
recommendations mentioned for SAP MaxDB in this document are also valid for the SAP Content Server scenario.
If you configure SAP Content Server to store files in the file system, it is recommended to use a dedicated logical
drive. Using Windows Storage Spaces enables you to also increase logical disk size and IOPS throughput, as
described in Considerations for Azure Virtual Machines DBMS deployment for SAP workload.
SAP Content Server Location
SAP Content Server has to be deployed in the same Azure region and Azure VNET where the SAP system is
deployed. You are free to decide whether you want to deploy SAP Content Server components on a dedicated
Azure VM or on the same VM where the SAP system is running.

SAP Cache Server Location


The SAP Cache Server is an additional server-based component to provide access to (cached) documents locally.
The SAP Cache Server caches the documents of an SAP Content Server. This is to optimize network traffic if
documents have to be retrieved more than once from different locations. The general rule is that the SAP Cache
Server has to be physically close to the client that accesses the SAP Cache Server.
Here you have two options:
1. Client is a backend SAP system. If a backend SAP system is configured to access SAP Content Server, that
SAP system is a client. As both SAP system and SAP Content Server are deployed in the same Azure region, in
the same Azure datacenter, they are physically close to each other. Therefore, there is no need to have a
dedicated SAP Cache Server. SAP UI clients (SAP GUI or web browser) access the SAP system directly, and the
SAP system retrieves documents from the SAP Content Server.
2. Client is an on-premises web browser. The SAP Content Server can be configured to be accessed directly
by the web browser. In this case, a web browser running on-premises is a client of the SAP Content Server. On-
premises datacenter and Azure datacenter are placed in different physical locations (ideally close to each other).
Your on-premises datacenter is connected to Azure via Azure Site-to-Site VPN or ExpressRoute. Although both
options offer secure VPN network connection to Azure, site-to-site network connection does not offer a
network bandwidth and latency SLA between the on-premises datacenter and the Azure datacenter. To speed up
access to documents, you can do one of the following:
a. Install SAP Cache Server on-premises, close to the on-premises web browser (option in figure below)
b. Configure Azure ExpressRoute, which offers a high-speed and low-latency dedicated network connection
between on-premises datacenter and Azure datacenter.

Backup / Restore
If you configure the SAP Content Server to store files in the SAP MaxDB database, the backup/restore procedure
and performance considerations are already described in SAP MaxDB chapters of this document.
If you configure the SAP Content Server to store files in the file system, one option is to execute manual
backup/restore of the whole file structure where the documents are located. Similar to SAP MaxDB backup/restore,
it is recommended to have a dedicated disk volume for backup purpose.
Other
Other SAP Content Server-specific settings are transparent to Azure VMs and are described in various documents
and SAP Notes:
https://service.sap.com/contentserver
SAP Note 1619726
SAP HANA high availability for Azure virtual
machines

You can use numerous Azure capabilities to deploy mission-critical databases like SAP HANA on Azure VMs. This
article provides guidance on how to achieve availability for SAP HANA instances that are hosted in Azure VMs.
The article describes several scenarios that you can implement by using the Azure infrastructure to increase
availability of SAP HANA in Azure.

Prerequisites
This article assumes that you are familiar with infrastructure as a service (IaaS) basics in Azure, including:
How to deploy virtual machines or virtual networks via the Azure portal or PowerShell.
Using the Azure cross-platform command-line interface (Azure CLI), including the option to use JavaScript
Object Notation (JSON) templates.
This article also assumes that you are familiar with installing SAP HANA instances, and with administrating and
operating SAP HANA instances. It's especially important to be familiar with the setup and operations of HANA
system replication. This includes tasks like backup and restore for SAP HANA databases.
These articles provide a good overview of using SAP HANA in Azure:
Manual installation of single-instance SAP HANA on Azure VMs
Set up SAP HANA system replication in Azure VMs
Back up SAP HANA on Azure VMs
It's also a good idea to be familiar with these articles about SAP HANA:
High availability for SAP HANA
FAQ: High availability for SAP HANA
Perform system replication for SAP HANA
SAP HANA 2.0 SPS 01 What’s new: High availability
Network recommendations for SAP HANA system replication
SAP HANA system replication
SAP HANA service auto-restart
Configure SAP HANA system replication
Beyond being familiar with deploying VMs in Azure, before you define your availability architecture in Azure, we
recommend that you read Manage the availability of Windows virtual machines in Azure.

Service level agreements for Azure components


Azure has different availability SLAs for different components, like networking, storage, and VMs. All SLAs are
documented. For more information, see Microsoft Azure Service Level Agreements.
SLA for Virtual Machines describes three different SLAs, for three different configurations:
A single VM that uses Azure premium SSDs for the OS disk and all data disks. This option provides a monthly
uptime of 99.9 percent.
Multiple (at least two) VMs that are organized in an Azure availability set. This option provides a monthly
uptime of 99.95 percent.
Multiple (at least two) VMs that are deployed across Azure Availability Zones. This option provides a monthly
uptime of 99.99 percent.
Measure your availability requirement against the SLAs that Azure components can provide. Then, choose your
scenarios for SAP HANA to achieve your required level of availability.

Next steps
Learn about SAP HANA availability within one Azure region.
Learn about SAP HANA availability across Azure regions.
SAP HANA availability within one Azure region

This article describes several availability scenarios within one Azure region. Azure has many regions, spread
throughout the world. For the list of Azure regions, see Azure regions. For deploying SAP HANA on VMs within
one Azure region, Microsoft offers deployment of a single VM with a HANA instance. For increased availability, you
can deploy two VMs with two HANA instances within an Azure availability set that uses HANA system replication
for availability.
Currently, Azure is offering Azure Availability Zones. This article does not describe Availability Zones in detail. But,
it includes a general discussion about using Availability Sets versus Availability Zones.
Azure regions where Availability Zones are offered have multiple datacenters. The datacenters are independent in
the supply of power source, cooling, and network. The reason for offering different zones within a single Azure
region is to enable you to deploy applications across two or three of the Availability Zones that are offered. When
you deploy across zones, issues in power and networking that affect only one Azure Availability Zone's infrastructure
leave your application deployment within the Azure region still functional. Some reduced capacity might occur. For example, VMs in one
zone might be lost, but VMs in the other two zones would still be up and running.
An Azure Availability Set is a logical grouping capability that helps ensure that the VM resources that you place
within the Availability Set are failure-isolated from each other when they are deployed within an Azure datacenter.
Azure ensures that the VMs you place within an Availability Set run across multiple physical servers, compute
racks, storage units, and network switches. In some Azure documentation, this configuration is referred to as
placements in different update and fault domains. These placements usually are within an Azure datacenter.
If power source and network issues were to affect the datacenter into which you are deploying, all your
capacity in that Azure region would be affected.
The placement of datacenters that represent Azure Availability Zones is a compromise between delivering
acceptable network latency between services deployed in different zones, and a distance between datacenters.
Natural catastrophes ideally wouldn't affect the power, network supply, and infrastructure for all Availability Zones
in this region. However, as monumental natural catastrophes have shown, Availability Zones might not always
provide the availability that you want within one region. Think about Hurricane Maria that hit the island of Puerto
Rico on September 20, 2017. The hurricane basically caused a nearly 100 percent blackout on the 90-mile-wide
island.

Single-VM scenario
In a single-VM scenario, you create an Azure VM for the SAP HANA instance. You use Azure Premium Storage to
host the operating system disk and all your data disks. The Azure uptime SLA of 99.9 percent and the SLAs of
other Azure components are sufficient for you to fulfill your availability SLAs for your customers. In this scenario,
you have no need to leverage an Azure Availability Set for VMs that run the DBMS layer. In this scenario, you rely
on two different features:
Azure VM auto-restart (also referred to as Azure service healing)
SAP HANA auto-restart
Azure VM auto restart, or service healing, is a functionality in Azure that works on two levels:
The Azure server host checks the health of a VM that's hosted on the server host.
The Azure fabric controller monitors the health and availability of the server host.
A health check functionality monitors the health of every VM that's hosted on an Azure server host. If a VM falls
into a non-healthy state, a reboot of the VM can be initiated by the Azure host agent that checks the health of the
VM. The fabric controller checks the health of the host by checking many different parameters that might indicate
issues with the host hardware. It also checks on the accessibility of the host via the network. An indication of
problems with the host can lead to the following events:
If the host signals a bad health state, a reboot of the host and a restart of the VMs that were running on the
host is triggered.
If the host is not in a healthy state after successful reboot, a redeployment of the VMs that were originally on
the now unhealthy node onto a healthy host server is initiated. In this case, the original host is marked as not
healthy. It won't be used for further deployments until it's cleared or replaced.
If the unhealthy host has problems during the reboot process, an immediate restart of the VMs on a healthy
host is triggered.
With the host and VM monitoring provided by Azure, Azure VMs that experience host issues are automatically
restarted on a healthy Azure host.

IMPORTANT
Azure service healing will not restart Linux VMs where the guest OS is in a kernel panic state. The default settings of the
commonly used Linux releases do not automatically restart VMs or servers where the Linux kernel is in a panic state.
Instead, the default is to keep the OS in the kernel panic state so that a kernel debugger can be attached for analysis. Azure
honors that behavior by not automatically restarting a VM with the guest OS in such a state. The assumption is that such
occurrences are extremely rare. You could override the default behavior to enable a restart of the VM. To change the
default behavior, enable the parameter 'kernel.panic' in /etc/sysctl.conf. The time you set for this parameter is in seconds.
Common recommended values are to wait 20-30 seconds before triggering the reboot through this parameter. See also
https://gitlab.com/procps-ng/procps/blob/master/sysctl.conf.
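If you decide to override the default, a minimal sketch of the change looks like the following; the 30-second value is only an example in the recommended 20-30 second range.

# Reboot automatically 30 seconds after a kernel panic
echo "kernel.panic = 30" | sudo tee -a /etc/sysctl.conf
# Activate the setting without a reboot
sudo sysctl -p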

The second feature that you rely on in this scenario is the fact that the HANA service that runs in a restarted VM
starts automatically after the VM reboots. You can set up HANA service auto-restart through the watchdog
services of the various HANA services.
You might improve this single-VM scenario by adding a cold failover node to an SAP HANA configuration. In the
SAP HANA documentation, this setup is called host auto-failover. This configuration might make sense in an on-
premises deployment situation where the server hardware is limited, and you dedicate a single-server node as the
host auto-failover node for a set of production hosts. But in Azure, where the underlying infrastructure of Azure
provides a healthy target server for a successful VM restart, it doesn't make sense to deploy SAP HANA host auto-
failover. Because of Azure service healing, there is no reference architecture that foresees a standby node for
HANA host auto-failover.
Special case of SAP HANA scale -out configurations in Azure
High availability for SAP HANA scale-out configurations relies on service healing of Azure VMs and the restart
of the SAP HANA instance once the VM is up and running again. High availability architectures based on HANA
System Replication are going to be introduced at a later time.

Availability scenarios for two different VMs


If you use two Azure VMs within an Azure Availability Set, you can increase the uptime between these two VMs if
they're placed in an Azure Availability Set within one Azure region. The base setup in Azure would look like:
To illustrate the different availability scenarios, a few of the layers in the diagram are omitted. The diagram shows
only layers that depict VMs, hosts, Availability Sets, and Azure regions. Azure Virtual Network instances, resource
groups, and subscriptions don't play a role in the scenarios described in this section.
Replicate backups to a second virtual machine
One of the most rudimentary setups is to use backups. In particular, you might have transaction log backups
shipped from one VM to another Azure VM. You can choose the Azure Storage type. In this setup, you are
responsible for scripting the copy of scheduled backups that are conducted on the first VM to the second VM. If
you need to use the second VM instance, you must restore the full, incremental/differential, and transaction log
backups to the point that you need.
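A minimal sketch of such a copy job is shown below. It assumes the backups land under /hana/backup on the first VM, the second VM is reachable as hana-vm2 with key-based SSH for the executing user, and the job is scheduled through cron; all of these names are examples, not part of the original guidance.

# Copy new backup files to the second VM (schedule on the first VM, for example every 15 minutes via crontab)
# */15 * * * * rsync -az /hana/backup/ hana-vm2:/hana/backup/
rsync -az /hana/backup/ hana-vm2:/hana/backup/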
The architecture looks like:
This setup is not well suited to achieving great Recovery Point Objective (RPO) and Recovery Time Objective (RTO)
times. RTO times especially would suffer due to the need to fully restore the complete database by using the
copied backups. However, this setup is useful for recovering from unintended data deletion on the main instances.
With this setup, at any time, you can restore to a certain point in time, extract the data, and import the deleted data
into your main instance. Hence, it might make sense to use a backup copy method in combination with other high-
availability functionality.
While backups are being copied, you might be able to use a smaller VM than the main VM that the SAP HANA
instance is running on. Keep in mind that you can attach a smaller number of VHDs to smaller VMs. For
information about the limits of individual VM types, see Sizes for Linux virtual machines in Azure.
SAP HANA system replication without automatic failover
The scenarios described in this section use SAP HANA system replication. For the SAP documentation, see System
replication. Scenarios without automatic failover are not common for configurations within one Azure region. A
configuration without automatic failover, though avoiding a Pacemaker setup, obligates you to monitor and fail
over manually. Since this takes time and effort as well, most customers rely on Azure service healing instead.
There are some edge cases where this configuration might help in terms of failure scenarios. Or, in some cases, a
customer might want to realize more efficiency.
SAP HANA system replication without auto failover and without data preload
In this scenario, you use SAP HANA system replication to move data in a synchronous manner to achieve an RPO
of 0. On the other hand, you have a long enough RTO that you don't need either failover or data preloading into
the HANA instance cache. In this case, it's possible to achieve further economy in your configuration by taking the
following actions:
Run another SAP HANA instance in the second VM. The SAP HANA instance in the second VM takes most of the
memory of the virtual machine. In case of a failover to the second VM, you need to shut down the running SAP
HANA instance that has the data fully loaded in the second VM, so that the replicated data can be loaded into
the cache of the targeted HANA instance in the second VM.
Use a smaller VM size on the second VM. If a failover occurs, you have an additional step before the manual
failover. In this step, you resize the VM to the size of the source VM.
The scenario looks like:
NOTE
Even if you don't use data preload in the HANA system replication target, you need at least 64 GB of memory. You also need
enough memory in addition to 64 GB to keep the rowstore data in the memory of the target instance.

SAP HANA system replication without auto failover and with data preload
In this scenario, data that's replicated to the HANA instance in the second VM is preloaded. This eliminates the two
advantages of not preloading data. In this case, you can't run another SAP HANA system on the second VM. You
also can't use a smaller VM size. Hence, customers rarely implement this scenario.
SAP HANA system replication with automatic failover
In the standard and most common availability configuration within one Azure region, two Azure VMs running
SLES Linux have a failover cluster defined. The SLES Linux cluster is based on the Pacemaker framework, in
conjunction with a STONITH device.
From an SAP HANA perspective, the replication mode that's used is synchronous and an automatic failover is
configured. In the second VM, the SAP HANA instance acts as a hot standby node. The standby node receives a
synchronous stream of change records from the primary SAP HANA instance. As transactions are committed by
the application at the HANA primary node, the primary HANA node waits to confirm the commit to the application
until the secondary SAP HANA node confirms that it received the commit record. SAP HANA offers two
synchronous replication modes. For details and for a description of differences between these two synchronous
replication modes, see the SAP article Replication modes for SAP HANA system replication.
The overall configuration looks like:
You might choose this solution because it enables you to achieve an RPO=0 and a low RTO. Configure the SAP
HANA client connectivity so that the SAP HANA clients use the virtual IP address to connect to the HANA system
replication configuration. Such a configuration eliminates the need to reconfigure the application if a failover to
the secondary node occurs. In this scenario, the Azure VM SKUs for the primary and secondary VMs must be the
same.
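As an illustration of how such a configuration is typically verified, the following commands are a sketch for checking HANA system replication and Pacemaker status on SLES; the SID HN1 and instance number 00 are examples, and the SAPHanaSR package is assumed to be installed.

# On the primary, as hn1adm: show the replication status of all registered secondaries.
python /usr/sap/HN1/HDB00/exe/python_support/systemReplicationStatus.py
# Show the local system replication state (primary/secondary role, site name, sync mode).
hdbnsutil -sr_state
# On a cluster node, as root: check Pacemaker resources and the SAPHanaSR attributes.
crm status
SAPHanaSR-showAttr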

Next steps
For step-by-step guidance on setting up these configurations in Azure, see:
Set up SAP HANA system replication in Azure VMs
High availability for SAP HANA by using system replication
For more information about SAP HANA availability across Azure regions, see:
SAP HANA availability across Azure regions
SAP HANA availability across Azure regions
12/22/2020 • 5 minutes to read • Edit Online

This article describes scenarios related to SAP HANA availability across different Azure regions. Because of the
distance between Azure regions, setting up SAP HANA availability in multiple Azure regions involves special
considerations.

Why deploy across multiple Azure regions


Azure regions often are separated by large distances. Depending on the geopolitical region, the distance between
Azure regions might be hundreds of miles, or even several thousand miles, like in the United States. Because of
the distance, network traffic between assets that are deployed in two different Azure regions experiences significant
network roundtrip latency. The latency is significant enough to exclude synchronous data exchange between two
SAP HANA instances under typical SAP workloads.
On the other hand, organizations often have a distance requirement between the location of the primary
datacenter and a secondary datacenter. A distance requirement helps provide availability if a natural disaster
occurs in a wider geographic location. Examples include the hurricanes that hit the Caribbean and Florida in
September and October 2017. Your organization might have at least a minimum distance requirement. For most
Azure customers, a minimum distance definition requires you to design for availability across Azure regions.
Because the distance between two Azure regions is too large to use the HANA synchronous replication mode, RTO
and RPO requirements might force you to deploy availability configurations in one region, and then supplement
with additional deployments in a second region.
Another aspect to consider in this scenario is failover and client redirect. The assumption is that a failover between
SAP HANA instances in two different Azure regions always is a manual failover. Because the replication mode of
SAP HANA system replication is set to asynchronous, there's a potential that data committed in the primary HANA
instance hasn't yet made it to the secondary HANA instance. Therefore, automatic failover isn't an option for
configurations where the replication is asynchronous. Even with manually controlled failover, as in a failover
exercise, you need to take measures to ensure that all the committed data on the primary side made it to the
secondary instance before you manually move over to the other Azure region.
The Azure virtual network in the second Azure region uses a different IP address range. So, you either need to
change the SAP HANA client configuration, or preferably, you need to create steps to change the name resolution.
This way, the clients are redirected to the new secondary site's server IP address.
For more information, see the SAP article Client connection recovery after takeover.
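If name resolution is handled through an Azure private DNS zone, the redirect can be scripted as part of the failover runbook. The following Azure CLI commands are a hedged sketch; the resource group, zone, record name, and IP addresses are hypothetical examples.

# Repoint the HANA hostname from the old primary (region 1) to the new primary (region 2).
az network private-dns record-set a remove-record --resource-group rg-dns \
  --zone-name sap.contoso.internal --record-set-name hn1-db --ipv4-address 10.1.0.10
az network private-dns record-set a add-record --resource-group rg-dns \
  --zone-name sap.contoso.internal --record-set-name hn1-db --ipv4-address 10.2.0.10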

Simple availability between two Azure regions


You might choose not to put any availability configuration in place within a single region, but still have the
demand to have the workload served if a disaster occurs. Typical cases for such scenarios are nonproduction
systems. Although having the system down for half a day or even a day is sustainable, you can't allow the system
to be unavailable for 48 hours or more. To make the setup less costly, you can run another, even less important
system in the VM that functions as the system replication target. You can also size the VM in the secondary
region to be smaller, and choose not to preload the data. Because the failover is manual and entails many more
steps to fail over the complete application stack, the additional time to shut down the VM, resize it, and then
restart the VM is acceptable.
If you are using the scenario of sharing the DR target with a QA system in one VM, you need to take these
considerations into account:
Two operation modes, delta_datashipping and logreplay, are available for such a scenario
Both operation modes have different memory requirements without preloading data
Delta_datashipping might require drastically less memory without the preload option than logreplay could
require. See chapter 4.3 of the SAP document How To Perform System Replication for SAP HANA
The memory requirement of the logreplay operation mode without preload is not deterministic and depends on
the columnstore structures loaded. In extreme cases, you might require 50% of the memory of the primary
instance. The memory requirement for the logreplay operation mode is independent of whether you chose to
have the data preload set or not.

NOTE
In this configuration, you can't provide an RPO=0 because your HANA system replication mode is asynchronous. If you
need to provide an RPO=0, this configuration isn't the configuration of choice.

A small change that you can make in the configuration is to configure data preloading. However, given
the manual nature of failover and the fact that application layers also need to move to the second region, it might
not make sense to preload data.

Combine availability within one region and across regions


A combination of availability within and across regions might be driven by these factors:
A requirement of RPO=0 within an Azure region.
The organization isn't willing or able to have global operations affected by a major natural catastrophe that
affects a larger region. This was the case for some hurricanes that hit the Caribbean over the past few years.
Regulations that demand distances between primary and secondary sites that are clearly beyond what Azure
availability zones can provide.
In these cases, you can set up what SAP calls an SAP HANA multitier system replication configuration by using
HANA system replication. The architecture would look like:
SAP introduced multi-target system replication with HANA 2.0 SPS3. Multi-target system replication brings some
advantages in update scenarios. For example, the DR site (Region 2) is not impacted when the secondary HA site is
down for maintenance or updates. You can find out more about HANA multi-target system replication here.
Possible architecture with multi-target replication would look like:

If the organization has requirements for high availability readiness in the second (DR) Azure region, then the
architecture would look like:
Using logreplay as operation mode, this configuration provides an RPO=0, with low RTO, within the primary
region. The configuration also provides decent RPO if a move to the second region is involved. The RTO times in
the second region are dependent on whether data is preloaded. Many customers use the VM in the secondary
region to run a test system. In that use case, the data can't be preloaded.

IMPORTANT
The operation modes between the different tiers need to be homogeneous. You can't use logreplay as operation mode
between tier 1 and tier 2 and delta_datashipping to supply tier 3. You can only choose one or the other operation mode,
and it needs to be consistent for all tiers. Since delta_datashipping is not suitable to give you an RPO=0, the only reasonable
operation mode for such a multi-tier configuration remains logreplay. For details about operation modes and some
restrictions, see the SAP article Operation modes for SAP HANA system replication.
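For illustration, the following minimal sketch registers the HANA instance in the DR region as an additional secondary using asynchronous replication and the logreplay operation mode. The hostnames, instance number, and site name are examples; in a classic multitier chain you register against the tier-2 secondary, while with multi-target replication you register against the primary.

# On the HANA instance in the DR region, as <sid>adm (example names).
HDB stop
hdbnsutil -sr_register --remoteHost=hn1-region1-secondary --remoteInstance=00 --replicationMode=async --operationMode=logreplay --name=SITE3
HDB start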

Next steps
For step-by-step guidance on setting up these configurations in Azure, see:
Set up SAP HANA system replication in Azure VMs
High availability for SAP HANA by using system replication
SAP Business One on Azure Virtual Machines
12/22/2020 • 7 minutes to read • Edit Online

This document provides guidance for deploying SAP Business One on Azure Virtual Machines. The documentation is
not a replacement for the SAP installation documentation for Business One. Instead, it covers basic
planning and deployment guidelines for the Azure infrastructure that runs Business One applications.
Business One supports two different databases:
SQL Server - see SAP Note #928839 - Release Planning for Microsoft SQL Server
SAP HANA - for exact SAP Business One support matrix for SAP HANA, checkout the SAP Product Availability
Matrix
For SQL Server, the basic deployment considerations documented in Azure Virtual Machines DBMS
deployment for SAP NetWeaver apply. For SAP HANA, considerations are mentioned in this document.

Prerequisites
To use this guide, you need basic knowledge of the following Azure components:
Azure virtual machines on Windows
Azure virtual machines on Linux
Azure networking and virtual networks management with PowerShell
Azure networking and virtual networks with CLI
Manage Azure disks with the Azure CLI
Even if you are interested in Business One only, the document Azure Virtual Machines planning and
implementation for SAP NetWeaver can be a good source of information.
The assumption is that you, as the person deploying SAP Business One, are:
Familiar with installing SAP HANA on a given infrastructure like a VM
Familiar with installing the SAP Business One application on an infrastructure like Azure VMs
Familiar with operating SAP Business One and the DBMS system chosen
Familiar with deploying infrastructure in Azure
None of these areas are covered in this document.
Besides the Azure documentation, you should be aware of the main SAP Notes that refer to Business One or that
are central SAP Notes for Business One:
528296 - General Overview Note for SAP Business One Releases and Related Products
2216195 - Release Updates Note for SAP Business One 9.2, version for SAP HANA
2483583 - Central Note for SAP Business One 9.3
2483615 - Release Updates Note for SAP Business One 9.3
2483595 - Collective Note for SAP Business One 9.3 General Issues
2027458 - Collective Consulting Note for SAP HANA-Related Topics of SAP Business One, version for SAP HANA

Business One Architecture


Business One is an application that has two tiers:
A client tier with a 'fat' client
A database tier that contains the database schema for a tenant
An overview of which components run in the client tier and which run in the server tier
is documented in the SAP Business One Administrator's Guide.
Since there is heavy, latency-critical interaction between the client tier and the DBMS tier, both tiers need to be
located in Azure when deploying in Azure. Usually, the users then connect via RDS to one or multiple VMs running an
RDS service for the Business One client components.
Sizing VMs for SAP Business One
Regarding the sizing of the client VM(s), the resource requirements are documented by SAP in the document SAP
Business One Hardware Requirements Guide. For Azure, you need to focus and calculate with the requirements
stated in chapter 2.4 of the document.
For Azure virtual machines that host the Business One client components and the DBMS, only VMs that are
supported for SAP NetWeaver are allowed. To find the list of SAP NetWeaver supported Azure VMs, read SAP Note
#1928533.
When running SAP HANA as the DBMS backend for Business One, only VMs that are listed for Business One on HANA in the
HANA certified IaaS platform list are supported for HANA. The Business One client components are not affected by
this stronger restriction for SAP HANA as the DBMS system.
Operating system releases to use for SAP Business One
In principle, it is always best to use the most recent operating system releases. Especially in the Linux space, new
Azure functionality was introduced with the more recent minor releases of SUSE and Red Hat. On the Windows
side, using Windows Server 2016 is highly recommended.

Deploying infrastructure in Azure for SAP Business One


The next few chapters describe the infrastructure pieces that matter for deploying SAP Business One in Azure.
Azure network infrastructure
The network infrastructure you need to deploy in Azure depends on whether you deploy a single Business One
system for yourself, or whether you are a hoster who hosts dozens of Business One systems for customers. There
also might be slight changes in the design depending on how you connect to Azure. One possible design is a
configuration where you have VPN connectivity into Azure and where you extend your Active Directory
through VPN or ExpressRoute into Azure.
The simplified configuration presented introduces several security instances that allow you to control and limit routing.
It starts with:
The router/firewall on the customer on-premises side.
The next instance is the Azure network security group (NSG) that you can use to introduce routing and security rules
for the Azure VNet that you run your SAP Business One configuration in.
To avoid that users of the Business One client can also see the server that runs the Business One server and
the database, you should separate the VM hosting the Business One client and the Business One
server into two different subnets within the VNet.
You would use Azure NSGs assigned to the two different subnets again in order to limit access to the Business
One server (see the sketch after this list).
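As a hedged sketch of such a restriction, the following Azure CLI commands allow only the client subnet to reach the database port on the server subnet and deny other inbound traffic from the virtual network. The resource group, NSG name, address ranges, and port (1433 for SQL Server) are hypothetical examples.

# Allow the Business One client subnet to reach the database port on the server subnet.
az network nsg rule create --resource-group rg-b1 --nsg-name nsg-b1-server \
  --name allow-b1-client-to-db --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes 10.0.1.0/24 \
  --destination-address-prefixes 10.0.2.0/24 --destination-port-ranges 1433
# Deny all other inbound traffic from the virtual network to the server subnet.
az network nsg rule create --resource-group rg-b1 --nsg-name nsg-b1-server \
  --name deny-vnet-to-db --priority 4000 --direction Inbound --access Deny \
  --protocol '*' --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes 10.0.2.0/24 --destination-port-ranges '*'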
A more sophisticated version of an Azure network configuration is based on the Azure documented best practices
of hub and spoke architecture. The architecture pattern of hub and spoke would change the first simplified
configuration to one like this:

For cases where the users are connecting through the internet without any private connectivity into Azure, the
design of the network in Azure should be aligned with the principles documented in the Azure reference
architecture for DMZ between Azure and the Internet.
Business One database server
For the database type, SQL Server and SAP HANA are available. Independent of the DBMS, you should read the
document Considerations for Azure Virtual Machines DBMS deployment for SAP workload to get a general
understanding of DBMS deployments in Azure VMs and the related networking and storage topics.
Though emphasized in the specific and generic database documents already, you should make yourself familiar
with:
Manage the availability of Windows virtual machines in Azure and Manage the availability of Linux virtual
machines in Azure
SLA for Virtual Machines
These documents should help you to decide on the selection of storage types and high availability configuration.
In principle you should:
Use Premium SSDs over Standard HDDs. To learn more about the available disk types, see our article Select a
disk type
Use Azure Managed disks over unmanaged disks
Make sure that you have sufficient IOPS and I/O throughput configured with your disk configuration
Combine /hana/data and /hana/log volumes in order to have a cost-efficient storage configuration
SQL Server as DBMS
For deploying SQL Server as DBMS for Business One, go along the document SQL Server Azure Virtual Machines
DBMS deployment for SAP NetWeaver.
Rough sizing estimates for the DBMS side for SQL Server are:

NUMBER OF USERS    VCPUS    MEMORY     EXAMPLE VM TYPES

up to 20           4        16 GB      D4s_v3, E4s_v3

up to 40           8        32 GB      D8s_v3, E8s_v3

up to 80           16       64 GB      D16s_v3, E16s_v3

up to 150          32       128 GB     D32s_v3, E32s_v3

The sizing listed above should give you an idea of where to start. You may need fewer or more resources, in
which case adapting on Azure is easy. A change between VM types is possible with just a restart of the VM, as sketched below.
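For example, resizing the database VM can be scripted with the Azure CLI; the resource group, VM name, and target size below are placeholders. The deallocate/start cycle is only needed if the target size is not available on the current hardware cluster.

az vm deallocate --resource-group rg-b1 --name b1-db-vm
az vm resize --resource-group rg-b1 --name b1-db-vm --size Standard_E16s_v3
az vm start --resource-group rg-b1 --name b1-db-vm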
SAP HANA as DBMS
When using SAP HANA as the DBMS, you should follow the considerations of the document SAP HANA
on Azure operations guide.
For high availability and disaster recovery configurations around SAP HANA as database for Business One in Azure,
you should read the documentation SAP HANA high availability for Azure virtual machines and the documentation
pointed to from that document.
For SAP HANA backup and restore strategies, you should read the document Backup guide for SAP HANA on Azure
Virtual Machines and the documentation pointed to from that document.
Business One client server
For these components, storage considerations are not the primary concern. Nevertheless, you want to have a
reliable platform. Therefore, you should use Azure Premium Storage for this VM, even for the base VHD. Size the
VM with the data given in the SAP Business One Hardware Requirements Guide. For Azure, you need to focus on and
calculate with the requirements stated in chapter 2.4 of the document. As you calculate the requirements, you need
to compare them against the following documents to find the ideal VM for you:
Sizes for Windows virtual machines in Azure
SAP Note #1928533
Compare number of CPUs and memory needed to what is documented by Microsoft. Also keep network
throughput in mind when choosing the VMs.
Deploy SAP IDES EHP7 SP3 for SAP ERP 6.0 on
Azure
12/22/2020 • 4 minutes to read • Edit Online

This article describes how to deploy an SAP IDES system running with SQL Server and the Windows operating
system on Azure via the SAP Cloud Appliance Library (SAP CAL) 3.0. The screenshots show the step-by-step
process. To deploy a different solution, follow the same steps.
To start with the SAP CAL, go to the SAP Cloud Appliance Library website. SAP also has a blog about the new SAP
Cloud Appliance Library 3.0.

NOTE
As of May 29, 2017, you can use the Azure Resource Manager deployment model in addition to the less-preferred classic
deployment model to deploy the SAP CAL. We recommend that you use the new Resource Manager deployment model and
disregard the classic deployment model.

If you already created an SAP CAL account that uses the classic model, you need to create another SAP CAL
account. This account needs to exclusively deploy into Azure by using the Resource Manager model.
After you sign in to the SAP CAL, the first page usually leads you to the Solutions page. The solutions offered on
the SAP CAL are steadily increasing, so you might need to scroll quite a bit to find the solution you want. The
highlighted Windows-based SAP IDES solution that is available exclusively on Azure demonstrates the deployment
process:

Create an account in the SAP CAL


1. To sign in to the SAP CAL for the first time, use your SAP S-User or other user registered with SAP. Then
define an SAP CAL account that is used by the SAP CAL to deploy appliances on Azure. In the account
definition, you need to:
a. Select the deployment model on Azure (Resource Manager or classic).
b. Enter your Azure subscription. An SAP CAL account can be assigned to one subscription only. If you need
more than one subscription, you need to create another SAP CAL account.
c. Give the SAP CAL permission to deploy into your Azure subscription.

NOTE
The next steps show how to create an SAP CAL account for Resource Manager deployments. If you already have an
SAP CAL account that is linked to the classic deployment model, you need to follow these steps to create a new SAP
CAL account. The new SAP CAL account needs to deploy in the Resource Manager model.

2. To create a new SAP CAL account, the Accounts page shows two choices for Azure:
a. Microsoft Azure (classic) is the classic deployment model and is no longer preferred.
b. Microsoft Azure is the new Resource Manager deployment model.

To deploy in the Resource Manager model, select Microsoft Azure .

3. Enter the Azure Subscription ID that can be found on the Azure portal.
4. To authorize the SAP CAL to deploy into the Azure subscription you defined, click Authorize . The following
page appears in the browser tab:

5. If more than one user is listed, choose the Microsoft account that is linked to be the coadministrator of the
Azure subscription you selected. The following page appears in the browser tab:

6. Click Accept . If the authorization is successful, the SAP CAL account definition displays again. After a short
time, a message confirms that the authorization process was successful.
7. To assign the newly created SAP CAL account to your user, enter your User ID in the text box on the right
and click Add .
8. To associate your account with the user that you use to sign in to the SAP CAL, click Review .
9. To create the association between your user and the newly created SAP CAL account, click Create .

You successfully created an SAP CAL account that is able to:


Use the Resource Manager deployment model.
Deploy SAP systems into your Azure subscription.

NOTE
Before you can deploy the SAP IDES solution based on Windows and SQL Server, you might need to sign up for an SAP CAL
subscription. Otherwise, the solution might show up as Locked on the overview page.

Deploy a solution
1. After you set up an SAP CAL account, select the SAP IDES solution on Windows and SQL Server
solution. Click Create Instance, and confirm the usage and terms and conditions.
2. On the Basic Mode: Create Instance page, you need to:
a. Enter an instance Name .
b. Select an Azure Region . You might need an SAP CAL subscription to get multiple Azure regions offered.
c. Enter the master Password for the solution, as shown:
3. Click Create . After some time, depending on the size and complexity of the solution (the SAP CAL provides
an estimate), the status is shown as active and ready for use:

4. To find the resource group and all its objects that were created by the SAP CAL, go to the Azure portal. The
virtual machine can be found starting with the same instance name that was given in the SAP CAL.
5. On the SAP CAL portal, go to the deployed instances and click Connect . The following pop-up window
appears:

6. Before you can use one of the options to connect to the deployed systems, click Getting Started Guide.
The documentation names the users for each of the connectivity methods. The passwords for those users are
set to the master password you defined at the beginning of the deployment process. In the documentation,
other more functional users are listed with their passwords, which you can use to sign in to the deployed
system.
Within a few hours, a healthy SAP IDES system is deployed in Azure.
If you bought an SAP CAL subscription, SAP fully supports deployments through the SAP CAL on Azure. The
support queue is BC-VCM-CAL.
SAP LaMa connector for Azure
12/22/2020 • 24 minutes to read • Edit Online

NOTE
General Support Statement: Please always open an incident with SAP on component BC-VCM-LVM-HYPERV if you need support for SAP LaMa or
the Azure connector.

SAP LaMa is used by many customers to operate and monitor their SAP landscape. Since SAP LaMa 3.0 SP05, it ships with a
connector to Azure by default. You can use this connector to deallocate and start virtual machines, copy and relocate managed disks,
and delete managed disks. With these basic operations, you can relocate, copy, clone, and refresh SAP systems using SAP LaMa.
This guide describes how you set up the Azure connector for SAP LaMa, create virtual machines that can be used to install adaptive
SAP systems and how to configure them.

NOTE
The connector is only available in the SAP LaMa Enterprise Edition

Resources
The following SAP Notes are related to the topic of SAP LaMa on Azure:

NOTE NUMBER    TITLE

2343511        Microsoft Azure connector for SAP Landscape Management (LaMa)

2350235        SAP Landscape Management 3.0 - Enterprise edition

Also read the SAP Help Portal for SAP LaMa.

General remarks
Make sure to enable Automatic Mountpoint Creation in Setup -> Settings -> Engine
If SAP LaMa mounts volumes using the SAP Adaptive Extensions on a virtual machine, the mount point must exist if this setting
is not enabled.
Use separate subnet and don't use dynamic IP addresses to prevent IP address "stealing" when deploying new VMs and SAP
instances are unprepared
If you use dynamic IP address allocation in the subnet, which is also used by SAP LaMa, preparing an SAP system with SAP
LaMa might fail. If an SAP system is unprepared, the IP addresses are not reserved and might get allocated to other virtual
machines.
If you sign in to managed hosts, make sure to not block file systems from being unmounted
If you sign in to a Linux virtual machine and change the working directory to a directory in a mount point, for example
/usr/sap/AH1/ASCS00/exe, the volume cannot be unmounted and a relocate or unprepare fails.
Make sure to disable CLOUD_NETCONFIG_MANAGE on SUSE SLES Linux virtual machines. For more details, see SUSE KB
7023633.
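As a sketch of the last item, per SUSE KB 7023633 the setting is typically changed in the network interface configuration file; the interface name eth0 is an example.

# /etc/sysconfig/network/ifcfg-eth0 on the SLES virtual machine
CLOUD_NETCONFIG_MANAGE="no"
# Apply the change, for example by restarting the network service:
# systemctl restart network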

Set up Azure connector for SAP LaMa


The Azure connector is shipped as of SAP LaMa 3.0 SP05. We recommend always installing the latest support package and patch for
SAP LaMa 3.0.
The Azure connector uses the Azure Resource Manager API to manage your Azure resources. SAP LaMa can use a Service Principal or
a Managed Identity to authenticate against this API. If your SAP LaMa is running on an Azure VM, we recommend using a Managed
Identity as described in chapter Use a Managed Identity to get access to the Azure API. If you want to use a Service Principal, follow
the steps in chapter Use a Service Principal to get access to the Azure API.
Use a Service Principal to get access to the Azure API
The Azure connector can use a Service Principal to authorize against Microsoft Azure. Follow these steps to create a Service Principal
for SAP Landscape Management (LaMa).
1. Go to https://portal.azure.com
2. Open the Azure Active Directory blade
3. Click on App registrations
4. Click on New registration
5. Enter a name and click on Register
6. Select the new App and click on Certificates & secrets in the Settings tab
7. Create a new client secret, enter a description for a new key, select when the secret should expire and click on Save
8. Write down the Value. It is used as the password for the Service Principal
9. Write down the Application ID. It is used as the username of the Service Principal
The Service Principal does not have permissions to access your Azure resources by default. You need to give the Service Principal
permissions to access them.
1. Go to https://portal.azure.com
2. Open the Resource groups blade
3. Select the resource group you want to use
4. Click Access control (IAM)
5. Click on Add role assignment
6. Select the role Contributor
7. Enter the name of the application you created above
8. Click Save
9. Repeat step 3 to 8 for all resource groups you want to use in SAP LaMa
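If you prefer scripting over the portal, the following Azure CLI sketch creates a comparable Service Principal and role assignment; the application name, subscription ID, and resource group are placeholders.

# Create the Service Principal; the output contains appId (user name) and password (client secret).
az ad sp create-for-rbac --name "sap-lama-connector"
# Grant it Contributor on each resource group that SAP LaMa should manage.
az role assignment create --assignee <appId> --role Contributor \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>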
Use a Managed Identity to get access to the Azure API
To be able to use a Managed Identity, your SAP LaMa instance has to run on an Azure VM that has a system or user assigned identity.
For more information about Managed Identities, read What is managed identities for Azure resources? and Configure managed
identities for Azure resources on a VM using the Azure portal.
The Managed Identity does not have permissions to access your Azure resources by default. You need to give it permissions to access
them.
1. Go to https://portal.azure.com
2. Open the Resource groups blade
3. Select the resource group you want to use
4. Click Access control (IAM)
5. Click on Add -> Add Role assignment
6. Select the role Contributor
7. Select 'Virtual Machine' for 'Assign access to'
8. Select the virtual machine where your SAP LaMa instance is running on
9. Click Save
10. Repeat the steps for all resource groups you want to use in SAP LaMa
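Alternatively, the identity and role assignment can be scripted with the Azure CLI. The following is a hedged sketch for a system-assigned identity; the resource group and VM names are placeholders, and the output query path may differ between CLI versions.

# Enable the system-assigned identity on the VM that runs SAP LaMa and capture its principal ID.
principalId=$(az vm identity assign --resource-group rg-lama --name lama-vm \
  --query systemAssignedIdentity -o tsv)
# Grant the identity Contributor on each resource group that SAP LaMa should manage.
az role assignment create --assignee "$principalId" --role Contributor \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>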
In your SAP LaMa Azure connector configuration, select 'Use Managed Identity' to enable the usage of the Managed Identity. If you
want to use a system assigned identity, make sure to leave the User Name field empty. If you want to use a user assigned identity,
enter the user assigned identity Id into the User Name field.
Create a new connector in SAP LaMa
Open the SAP LaMa website and navigate to Infrastructure. Go to tab Cloud Managers and click on Add. Select the Microsoft Azure
Cloud Adapter and click Next. Enter the following information:
Label: Choose a name for the connector instance
User Name: Service Principal Application ID or ID of the user assigned identity of the virtual machine. See chapter Use a
Managed Identity to get access to the Azure API in this guide for more information
Password: Service Principal key/password. You can leave this field empty if you use a system or user assigned identity.
URL: Keep default https://management.azure.com/
Monitoring Interval (Seconds): Should be at least 300
Use Managed Identity: SAP LaMa can use a system or user assigned identity to authenticate against the Azure API. See chapter Use
a Managed Identity to get access to the Azure API in this guide.
Subscription ID: Azure subscription ID
Azure Active Directory Tenant ID: ID of the Active Directory tenant
Proxy host: Hostname of the proxy if SAP LaMa needs a proxy to connect to the internet
Proxy port: TCP port of the proxy
Change Storage Type to save costs: Enable this setting if the Azure Adapter should change the storage type of the Managed Disks
to save costs when the disks are not in use. For data disks that are referenced in an SAP instance configuration, the adapter will
change the disk type to Standard Storage during an instance unprepare and back to the original storage type during an instance
prepare. If you stop a virtual machine in SAP LaMa, the adapter will change the storage type of all attached disks, including the OS
disk to Standard Storage. If you start a virtual machine in SAP LaMa, the adapter will change the storage type back to the original
storage type.
Click on Test Configuration to validate your input. You should see
Connection successful: Connection to Microsoft cloud was successful. 7 resource groups found (only 10 groups requested)
at the bottom of the website.

Provision a new adaptive SAP system


You can manually deploy a new virtual machine or use one of the Azure templates in the quickstart repository. It contains templates
for SAP NetWeaver ASCS, SAP NetWeaver application servers, and the database. You can also use these templates to provision new
hosts as part of a system copy/clone etc.
We recommend using a separate subnet for all virtual machines that you want to manage with SAP LaMa and don’t use dynamic IP
addresses to prevent IP address "stealing" when deploying new virtual machines and SAP instances are unprepared.

NOTE
If possible, remove all virtual machine extensions as they might cause long runtimes for detaching disks from a virtual machine.

Make sure that user <hanasid>adm, <sapsid>adm and group sapsys exist on the target machine with the same ID and gid or use
LDAP. Enable and start the NFS server on the virtual machines that should be used to run the SAP NetWeaver (A)SCS.
Manual Deployment
SAP LaMa communicates with the virtual machine using the SAP Host Agent. If you deploy the virtual machines manually or not
using the Azure Resource Manager template from the quickstart repository, make sure to install the latest SAP Host Agent and the
SAP Adaptive Extensions. For more information about the required patch levels for Azure, see SAP Note 2343511.
Manual deployment of a Linux Virtual Machine
Create a new virtual machine with one of the supported operating systems listed in SAP Note 2343511. Add additional IP
configurations for the SAP instances. Each instance needs at least one IP address and must be installed using a virtual hostname.
The SAP NetWeaver ASCS instance needs disks for /sapmnt/<SAPSID>, /usr/sap/<SAPSID>, /usr/sap/trans, and
/usr/sap/<sapsid>adm. The SAP NetWeaver application servers do not need additional disks. Everything related to the SAP instance
must be stored on the ASCS and exported via NFS. Otherwise, it is currently not possible to add additional application servers using
SAP LaMa.
Manual deployment for SAP HANA
Create a new virtual machine with one of the supported operating systems for SAP HANA as listed in SAP Note 2343511. Add one
additional IP configuration for SAP HANA and one per HANA tenant.
SAP HANA needs disks for /hana/shared, /hana/backup, /hana/data, and /hana/log

Manual deployment for Oracle Database on Linux


Create a new virtual machine with one of the supported operating systems for Oracle databases as listed in SAP Note 2343511. Add
one additional IP configuration for the Oracle database.
The Oracle database needs disks for /oracle, /home/oraod1, and /home/oracle
Manual deployment for Microsoft SQL Server
Create a new virtual machine with one of the supported operating systems for Microsoft SQL Server as listed in SAP Note 2343511.
Add one additional IP configuration for the SQL Server instance.
The SQL Server database server needs disks for the database data and log files and disks for c:\usr\sap.

Make sure to install a supported Microsoft ODBC driver for SQL Server on a virtual machine that you want to use to relocate an SAP
NetWeaver application server to or as a system copy/clone target.
SAP LaMa cannot relocate SQL Server itself so a virtual machine that you want to use to relocate a database instance to or as a
system copy/clone target needs SQL Server preinstalled.
Deploy Virtual Machine Using an Azure Template
Download the following latest available archives from the SAP Software Marketplace for the operating system of the virtual machines:
1. SAPCAR 7.21
2. SAP HOST AGENT 7.21
3. SAP ADAPTIVE EXTENSION 1.0 EXT
Also download the following components from the Microsoft Download Center
1. Microsoft Visual C++ 2010 Redistributable Package (x64) (Windows only)
2. Microsoft ODBC Driver for SQL Server (SQL Server only)
The components are required to deploy the template. The easiest way to make them available to the template is to upload them to an
Azure storage account and create a Shared Access Signature (SAS).
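One hedged way to stage the archives is shown below with the Azure CLI; the storage account, container, and file names are examples. The SAS URL you pass to the template parameters is the blob URL with the returned token appended.

# Upload an archive and create a read-only SAS token for it (example names).
az storage container create --account-name mysapartifacts --name lama-artifacts
az storage blob upload --account-name mysapartifacts --container-name lama-artifacts \
  --name SAPHOSTAGENT.SAR --file ./SAPHOSTAGENT.SAR
az storage blob generate-sas --account-name mysapartifacts --container-name lama-artifacts \
  --name SAPHOSTAGENT.SAR --permissions r --expiry 2022-12-31T00:00Z --output tsv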
The templates have the following parameters:
sapSystemId: The SAP system ID. It is used to create the disk layout (for example /usr/sap/<sapsid>).
computerName: The computer name of the new virtual machine. This parameter is also used by SAP LaMa. When you use this
template to provision a new virtual machine as part of a system copy, SAP LaMa waits until the host with this computer name
can be reached.
osType: The type of the operating system you want to deploy.
dbtype: The type of the database. This parameter is used to determine how many additional IP configurations need to be added
and what the disk layout should look like.
sapSystemSize: The size of the SAP System you want to deploy. It is used to determine the virtual machine instance type and
size.
adminUsername: Username for the virtual machine.
adminPassword: Password for the virtual machine. You can also provide a public key for SSH.
sshKeyData: Public SSH key for the virtual machines. Only supported for Linux operating systems.
subnetId: The ID of the subnet you want to use.
deployEmptyTarget: You can deploy an empty target if you want to use the virtual machine as a target for an instance relocate
or similar. In this case, no additional disks or IP configurations are attached.
sapcarLocation: The location for the sapcar application that matches the operating system you deploy. sapcar is used to extract
the archives you provide in other parameters.
sapHostAgentArchiveLocation: The location of the SAP Host Agent archive. SAP Host Agent is deployed as part of this template
deployment.
sapacExtLocation: The location of the SAP Adaptive Extensions. SAP Note 2343511 lists the minimum patch level required for
Azure.
vcRedistLocation: The location of the VC Runtime that is required to install the SAP Adaptive Extensions. This parameter is only
required for Windows.
odbcDriverLocation: The location of the ODBC driver you want to install. Only Microsoft ODBC driver for SQL Server is
supported.
sapadmPassword: The password for the sapadm user.
sapadmId: The Linux User ID of the sapadm user. Not required for Windows.
sapsysGid: The Linux group ID of the sapsys group. Not required for Windows.
_artifactsLocation: The base URI, where artifacts required by this template are located. When the template is deployed using the
accompanying scripts, a private location in the subscription will be used and this value will be automatically generated. Only
needed if you do not deploy the template from GitHub.
_artifactsLocationSasToken: The sasToken required to access _artifactsLocation. When the template is deployed using the
accompanying scripts, a sasToken will be automatically generated. Only needed if you do not deploy the template from GitHub.
SAP HANA
In the examples below, we assume that you install SAP HANA with system ID HN1 and the SAP NetWeaver system with system ID
AH1. The virtual hostnames are hn1-db for the HANA instance, ah1-db for the HANA tenant used by the SAP NetWeaver system, ah1-
ascs for the SAP NetWeaver ASCS and ah1-di-0 for the first SAP NetWeaver application server.
Install SAP NetWeaver ASCS for SAP HANA using Azure Managed Disks
Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of virtual hostname of the ASCS.
The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the IP address after a
reboot.

Linux
# /usr/sap/hostctrl/exe/sapacext -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h ah1-ascs -n 255.255.255.128

Windows

# C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet


mask>
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h ah1-ascs -n 255.255.255.128

Run SWPM and use ah1-ascs for the ASCS Instance Host Name.

Linux
Add the following profile parameter to the SAP Host Agent profile, which is located at /usr/sap/hostctrl/exe/host_profile. For more
information, see SAP Note 2628497.

acosprep/nfs_paths=/home/ah1adm,/usr/sap/trans,/sapmnt/AH1,/usr/sap/AH1

Install SAP NetWeaver ASCS for SAP HANA on Azure NetAppFiles (ANF) BETA

NOTE
This functionality is not GA yet. For more information refer to SAP Note 2815988 (only visible to preview customers). Open an SAP incident on
component BC-VCM-LVM-HYPERV and request to join the LaMa storage adapter for Azure NetApp Files preview

ANF provides NFS for Azure. In the context of SAP LaMa this simplifies the creation of the ABAP Central Services (ASCS) instances and
the subsequent installation of application servers. Previously the ASCS instance had to act as NFS server as well and the parameter
acosprep/nfs_paths had to be added to the host_profile of the SAP Host Agent.
ANF is currently available in these regions:
Australia East, Central US, East US, East US 2, North Europe, South Central US, West Europe and West US 2.
Network Requirements
ANF requires a delegated subnet which must be part of the same VNET as the SAP servers. Here’s an example for such a
configuration. This screen shows the creation of the VNET and the first subnet:
The next step creates the delegated subnet for Microsoft.NetApp/volumes.
Now a NetApp account needs to be created within the Azure portal:

Within the NetApp account the capacity pool specifies the size and type of disks for each pool:

The NFS volumes can now be defined. Since there will be volumes for multiple systems in one pool, a self-explaining naming scheme
should be chosen. Adding the SID helps to group related volumes together. For the ASCS and the AS instance the following mounts
are needed: /sapmnt/<SID>, /usr/sap/<SID>, and /home/<sid>adm. Optionally, /usr/sap/trans is needed for the central transport
directory, which is at least used by all systems of one landscape.
NOTE
During the BETA phase the name of the volumes must be unique within the subscription.
These steps need to be repeated for the other volumes as well.
Now these volumes need to be mounted to the systems where the initial installation with the SAP SWPM will be performed.
First the mount points need to be created. In this case the SID is AN1 so the following commands need to be executed:

mkdir -p /home/an1adm
mkdir -p /sapmnt/AN1
mkdir -p /usr/sap/AN1
mkdir -p /usr/sap/trans

Next the ANF volumes will be mounted with the following commands:

# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-home-sidadm /home/an1adm


# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-sapmnt-sid /sapmnt/AN1
# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-usr-sap-sid /usr/sap/AN1
# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/global-usr-sap-trans /usr/sap/trans

The mount commands can also be derived from the portal. The local mount points need to be adjusted.
Use the df -h command to verify.

Now the installation with SWPM must be performed.


The same steps must be performed for at least one AS instance.
After the successful installation the system must be discovered within SAP LaMa.
The mount points should look like this for the ASCS and the AS instance:

(This is an example. The IP addresses and export path are different from the ones used before)
Install SAP HANA
If you install SAP HANA using the command-line tool hdblcm, use the parameter --hostname to provide a virtual hostname (a minimal example follows the sapacext commands below). You need to
add the IP address of the virtual hostname of the database to a network interface. The recommended way is to use sapacext. If you
mount the IP address using sapacext, make sure to remount the IP address after a reboot.
Add another virtual hostname and IP address for the name that is used by the application servers to connect to the HANA tenant.
# /usr/sap/hostctrl/exe/sapacext -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h hn1-db -n 255.255.255.128
/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h ah1-db -n 255.255.255.128
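For reference, a minimal hdblcm sketch that passes the virtual hostname is shown below; the SID, instance number, and action are examples, and further required parameters (passwords, components) are omitted here.

# Run from the extracted SAP HANA installation medium (example values).
./hdblcm --sid=HN1 --number=00 --hostname=hn1-db --action=install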

Run the database instance installation of SWPM on the application server virtual machine, not on the HANA virtual machine. Use ah1-
db for the Database Host in dialog Database for SAP System.
Install SAP NetWeaver Application Server for SAP HANA
Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of virtual hostname of the
application server. The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the IP
address after a reboot.

Linux

# /usr/sap/hostctrl/exe/sapacext -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>


/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h ah1-di-0 -n 255.255.255.128

Windows

# C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet


mask>
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h ah1-di-0 -n 255.255.255.128

It is recommended to use SAP NetWeaver profile parameter dbs/hdb/hdb_use_ident to set the identity that is used to find the key in
the HDB userstore. You can add this parameter manually after the database instance installation with SWPM or run SWPM with

# from https://blogs.sap.com/2015/04/14/sap-hana-client-software-different-ways-to-set-the-connectivity-data/
/sapdb/DVDs/IM_LINUX_X86_64/sapinst HDB_USE_IDENT=SYSTEM_COO

If you set it manually, you also need to create new HDB userstore entries.

# run as <sapsid>adm
/usr/sap/AH1/hdbclient/hdbuserstore LIST
# reuse the port that was listed from the command above, in this example 35041
/usr/sap/AH1/hdbclient/hdbuserstore SET DEFAULT ah1-db:35041@AH1 SAPABAP1 <password>

Use ah1-di-0 for the PAS Instance Host Name in dialog Primary Application Server Instance.
Post-Installation Steps for SAP HANA
Make sure to back up the SYSTEMDB and all tenant databases before you try to do a tenant copy, tenant move or create a system
replication.
Microsoft SQL Server
In the examples below, we assume that you install the SAP NetWeaver system with system ID AS1. The virtual hostnames are as1-db
for the SQL Server instance used by the SAP NetWeaver system, as1-ascs for the SAP NetWeaver ASCS and as1-di-0 for the first SAP
NetWeaver application server.
Install SAP NetWeaver ASCS for SQL Server
Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of virtual hostname of the ASCS.
The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the IP address after a
reboot.

# C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet


mask>
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h as1-ascs -n 255.255.255.128

Run SWPM and use as1-ascs for the ASCS Instance Host Name.
Install SQL Server
You need to add the IP address of the virtual hostname of the database to a network interface. The recommended way is to use
sapacext. If you mount the IP address using sapacext, make sure to remount the IP address after a reboot.
# C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet
mask>
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h as1-db -n 255.255.255.128

Run the database instance installation of SWPM on the SQL server virtual machine. Use SAPINST_USE_HOSTNAME=as1-db to
override the hostname used to connect to SQL Server. If you deployed the virtual machine using the Azure Resource Manager
template, make sure to set the directory used for the database data files to C:\sql\data and database log file to C:\sql\log.
Make sure that the user NT AUTHORITY\SYSTEM has access to the SQL Server and has the server role sysadmin. For more
information, see SAP Note 1877727 and 2562184.
Install SAP NetWeaver Application Server
Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of virtual hostname of the
application server. The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the IP
address after a reboot.

# C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet


mask>
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h as1-di-0 -n 255.255.255.128

Use as1-di-0 for the PAS Instance Host Name in dialog Primary Application Server Instance.

Troubleshooting
Errors and Warnings during Discover
The SELECT permission was denied
[Microsoft][ODBC SQL Server Driver][SQL Server]The SELECT permission was denied on the object
'log_shipping_primary_databases', database 'msdb', schema 'dbo'. [SOAPFaultException]
The SELECT permission was denied on the object 'log_shipping_primary_databases', database 'msdb', schema 'dbo'.
Solution
Make sure that NT AUTHORITY\SYSTEM can access the SQL Server. See SAP Note 2562184
Errors and Warnings for Instance Validation
An exception was raised in validation of the HDB userstore
see Log Viewer
com.sap.nw.lm.aci.monitor.api.validation.RuntimeValidationException: Exception in validator with ID
'RuntimeHDBConnectionValidator' (Validation: 'VALIDATION_HDB_USERSTORE'): Could not retrieve the hdbuserstore
HANA userstore is not in the correct location
Solution
Make sure that /usr/sap/AH1/hdbclient/install/installation.ini is correct
Errors and Warnings during a System Copy
An error occurred when validating the system provisioning step
Caused by: com.sap.nw.lm.aci.engine.base.api.util.exception.HAOperationException Calling '/usr/sap/hostctrl/exe/sapacext -a
ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0;status=5;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T
dev_lvminfo -u SYSTEM -p hook -r' | /usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o
level=0;status=5;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r
Solution
Take backup of all databases in source HANA system
System Copy Step Start of database instance
Host Agent Operation '000D3A282BC91EE8A1D76CF1F92E2944' failed (OperationException. FaultCode: '127', Message:
'Command execution failed. : [Microsoft][ODBC SQL Server Driver][SQL Server]User does not have permission to alter
database 'AS2', the database does not exist, or the database is not in a state that allows access checks.')
Solution
Make sure that NT AUTHORITY\SYSTEM can access the SQL Server. See SAP Note 2562184
Errors and Warnings during a System Clone
Error occurred when trying to register instance agent in step Forced Register and Start Instance Agent of application server or
ASCS
Error occurred when trying to register instance agent. (RemoteException: 'Failed to load instance data from profile '\as1-
ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0': Cannot access profile '\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-
di-0': No such file or directory.')
Solution
Make sure that the sapmnt share on the ASCS/SCS has Full Access for SAP_AS1_GlobalAdmin
Error in step Enable Startup Protection for Clone
Failed to open file '\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0' Cause: No such file or directory
Solution
The computer account of the application server needs write access to the profile
Errors and Warnings during Create System Replication
Exception when clicking on Create System Replication
Caused by: com.sap.nw.lm.aci.engine.base.api.util.exception.HAOperationException Calling '/usr/sap/hostctrl/exe/sapacext -a
ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0;status=5;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T
dev_lvminfo -u SYSTEM -p hook -r' | /usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o
level=0;status=5;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r
Solution
Test if sapacext can be executed as <hanasid >adm
Error when full copy is not enabled in Storage Step
An error occurred when reporting a context attribute message for path IStorageCopyData.storageVolumeCopyList:1 and
field targetStorageSystemId
Solution
Ignore Warnings in step and try again. This issue will be fixed in a new support package/patch of SAP LaMa.
Errors and Warnings during Relocate
Path '/usr/sap/AH1' is not allowed for nfs reexports.
Check SAP Note 2628497 for details.
Solution
Add ASCS exports to ASCS HostAgent Profile. See SAP Note 2628497
Function not implemented when relocating ASCS
Command Output: exportfs: host:/usr/sap/AX1: Function not implemented
Solution
Make sure that the NFS server service is enabled on the relocate target virtual machine
Errors and Warnings during Application Server Installation
Error executing SAPinst step: getProfileDir
ERROR: (Last error reported by the step: Caught ESAPinstException in module call: Validator of step
'|NW_DI|ind|ind|ind|ind|0|0|NW_GetSidFromProfiles|ind|ind|ind|ind|getSid|0|NW_readProfileDir|ind|ind|ind|ind|readProfile|0|getProfileDir'
reported an error: Node \\as1-ascs\sapmnt\AS1\SYS\profile does not exist. Start SAPinst in interactive mode to solve this
problem)
Solution
Make sure that SWPM is running with a user that has access to the profile. This user can be configured in the Application
Server Installation wizard
Error executing SAPinst step: askUnicode
ERROR: (Last error reported by the step: Caught ESAPinstException in module call: Validator of step
'|NW_DI|ind|ind|ind|ind|0|0|NW_GetSidFromProfiles|ind|ind|ind|ind|getSid|0|NW_getUnicode|ind|ind|ind|ind|unicode|0|askUnicode'
reported an error: Start SAPinst in interactive mode to solve this problem)
Solution
If you use a recent SAP kernel, SWPM cannot determine whether the system is a unicode system anymore using the
message server of the ASCS. See SAP Note 2445033 for more details.
This issue will be fixed in a new support package/patch of SAP LaMa.
Set profile parameter OS_UNICODE=uc in the default profile of your SAP system to work around this issue.
Error executing SAPinst step: dCheckGivenServer
Error executing SAPinst step: dCheckGivenServer" version="1.0" ERROR: (Last error reported by the step: <p> Installation
was canceled by user. </p>
Solution
Make sure that SWPM is running with a user that has access to the profile. This user can be configured in the Application
Server Installation wizard
Error executing SAPinst step: checkClient
Error executing SAPinst step: checkClient" version="1.0" ERROR: (Last error reported by the step: <p> Installation was
canceled by user. </p>)
Solution
Make sure that the Microsoft ODBC driver for SQL Server is installed on the virtual machine on which you want to install
the application server
Error executing SAPinst step: copyScripts
Last error reported by the step: System call failed. DETAILS: Error 13 (0x0000000d) (Permission denied) in execution of
system call 'fopenU' with parameter (\\as1-ascs/sapmnt/AS1/SYS/exe/uc/NTAMD64/strdbs.cmd, w), line (494) in file
(\bas/bas/749_REL/bc_749_REL/src/ins/SAPINST/impl/src/syslib/filesystem/syxxcfstrm2.cpp), stack trace:
CThrThread.cpp: 85: CThrThread::threadFunction()
CSiServiceSet.cpp: 63: CSiServiceSet::executeService()
CSiStepExecute.cpp: 913: CSiStepExecute::execute()
EJSController.cpp: 179: EJSControllerImpl::executeScript()
JSExtension.hpp: 1136: CallFunctionBase::call()
iaxxcfile.cpp: 183: iastring CIaOsFileConnect::callMemberFunction(iastring const& name, args_t const& args)
iaxxcfile.cpp: 1849: iastring CIaOsFileConnect::newFileStream(args_t const& _args)
iaxxbfile.cpp: 773: CIaOsFile::newFileStream_impl(4)
syxxcfile.cpp: 233: CSyFileImpl::openStream(ISyFile::eFileOpenMode)
syxxcfstrm.cpp: 29: CSyFileStreamImpl::CSyFileStreamImpl(CSyFileStream*,iastring,ISyFile::eFileOpenMode)
syxxcfstrm.cpp: 265: CSyFileStreamImpl::open()
syxxcfstrm2.cpp: 58: CSyFileStream2Impl::CSyFileStream2Impl(const CSyPath & \\aw1-
ascs/sapmnt/AW1/SYS/exe/uc/NTAMD64/strdbs.cmd, 0x4)
syxxcfstrm2.cpp: 456: CSyFileStream2Impl::open()
Solution
Make sure that SWPM is running with a user that has access to the profile. This user can be configured in the Application
Server Installation wizard
Error executing SAPinst step: askPasswords
Last error reported by the step: System call failed. DETAILS: Error 5 (0x00000005) (Access is denied.) in execution of system
call 'NetValidatePasswordPolicy' with parameter (...), line (359) in file
(\bas/bas/749_REL/bc_749_REL/src/ins/SAPINST/impl/src/syslib/account/synxcaccmg.cpp), stack trace:
CThrThread.cpp: 85: CThrThread::threadFunction()
CSiServiceSet.cpp: 63: CSiServiceSet::executeService()
CSiStepExecute.cpp: 913: CSiStepExecute::execute()
EJSController.cpp: 179: EJSControllerImpl::executeScript()
JSExtension.hpp: 1136: CallFunctionBase::call()
CSiStepExecute.cpp: 764: CSiStepExecute::invokeDialog()
DarkModeGuiEngine.cpp: 56: DarkModeGuiEngine::showDialogCalledByJs()
DarkModeDialog.cpp: 85: DarkModeDialog::submit()
EJSController.cpp: 179: EJSControllerImpl::executeScript()
JSExtension.hpp: 1136: CallFunctionBase::call()
iaxxcaccount.cpp: 107: iastring CIaOsAccountConnect::callMemberFunction(iastring const& name, args_t const& args)
iaxxcaccount.cpp: 1186: iastring CIaOsAccountConnect::validatePasswordPolicy(args_t const& _args)
iaxxbaccount.cpp: 430: CIaOsAccount::validatePasswordPolicy_impl()
synxcaccmg.cpp: 297: ISyAccountMgt::PasswordValidationMessage
CSyAccountMgtImpl::validatePasswordPolicy(saponazure,*****) const )
Solution
Make sure to add a Host rule in step Isolation to allow communication from the VM to the domain controller.

Next steps
SAP HANA on Azure operations guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
Azure Virtual Machines high availability for SAP
NetWeaver
12/22/2020 • 2 minutes to read

Azure Virtual Machines is the solution for organizations that need compute, storage, and network resources, in
minimal time, and without lengthy procurement cycles. You can use Azure Virtual Machines to deploy classic
applications such as SAP NetWeaver-based ABAP, Java, and an ABAP+Java stack. Extend reliability and availability
without additional on-premises resources. Azure Virtual Machines supports cross-premises connectivity, so you
can integrate Azure Virtual Machines into your organization's on-premises domains, private clouds, and SAP
system landscape.
This series of articles covers:
Architecture and scenarios.
Infrastructure preparation.
SAP installation steps for deploying high-availability SAP systems in Azure by using the Azure Resource
Manager deployment model.

IMPORTANT
We strongly recommend that you use the Azure Resource Manager deployment model for your SAP installations. It
offers many benefits that are not available in the classic deployment model. Learn more about Azure deployment
models.

SAP high availability on:


Windows, using Windows Server Failover Clustering (WSFC)
Linux, using a Linux cluster framework
In these articles, you learn how to help protect single point of failure (SPOF) components, such as SAP Central
Services (ASCS/SCS) and database management systems (DBMS). You also learn about redundant components in
Azure, such as SAP application server.

High-availability architecture and scenarios for SAP NetWeaver


Summary: In this article, we discuss the high-availability architecture of an SAP system in Azure. We discuss how to
solve high availability of SAP single point of failure (SPOF) and redundant components and the specifics of Azure
infrastructure high availability. We also cover how these parts relate to SAP system components. Additionally, the
discussion is broken out for Windows and Linux specifics. Various SAP high-availability scenarios are covered as
well.
Updated: October 2017
Azure Virtual Machines high availability architecture and scenarios for SAP NetWeaver

The article covers both Windows and Linux.

Azure infrastructure preparation for SAP NetWeaver high-availability deployment


Summary: In the articles listed here, we cover the steps that you can take to deploy Azure infrastructure in
preparation for SAP installation. To simplify Azure infrastructure deployment, SAP Azure Resource Manager
templates are used to automate the whole process.
Updated: March 2019
Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and shared
disk for SAP ASCS/SCS instances
Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share
for SAP ASCS/SCS instances

Prepare Azure infrastructure for SAP high availability by using a SUSE Linux Enterprise Server cluster
framework for SAP ASCS/SCS instances

Prepare Azure infrastructure for SAP high availability by using a SUSE Linux Enterprise Server cluster
framework for SAP ASCS/SCS instances with Azure NetApp Files

Prepare Azure infrastructure for SAP ASCS/SCS high availability - set up GlusterFS on RHEL

Prepare Azure infrastructure for SAP ASCS/SCS high availability - set up Pacemaker on RHEL

Installation of an SAP NetWeaver high availability system in Azure


Summary: The articles listed here present step-by-step examples of the installation and configuration of a high-
availability SAP system in a Windows Server Failover Clustering cluster and Linux cluster framework in Azure.
Updated: March 2019
Install SAP NetWeaver high availability by using a Windows failover cluster and shared disk for SAP
ASCS/SCS instances
Install SAP NetWeaver high availability by using a Windows failover cluster and file share for SAP
ASCS/SCS instances

Install SAP NetWeaver high availability by using a SUSE Linux Enterprise Server cluster framework for
SAP ASCS/SCS instances

Install SAP NetWeaver high availability by using a SUSE Linux Enterprise Server cluster framework for
SAP ASCS/SCS instances with Azure NetApp Files

Install SAP NetWeaver ASCS/SCS in high availability configuration on RHEL

Install SAP NetWeaver ASCS/SCS in high availability configuration on RHEL with Azure NetApp Files
High-availability architecture and scenarios for SAP
NetWeaver
12/22/2020 • 11 minutes to read

Terminology definitions
High availability: Refers to a set of technologies that minimize IT disruptions by providing business continuity
of IT services through redundant, fault-tolerant, or failover-protected components inside the same data center. In
our case, the data center resides within one Azure region.
Disaster recovery: Also refers to the minimizing of IT services disruption and their recovery, but across various
data centers that might be hundreds of miles away from one another. In our case, the data centers might reside in
various Azure regions within the same geopolitical region or in locations as established by you as a customer.

Overview of high availability


SAP high availability in Azure can be separated into three types:
Azure infrastructure high availability:
For example, high availability can include compute (VMs), network, or storage and its benefits for
increasing the availability of SAP applications.
Utilizing Azure infrastructure VM restart to achieve higher availability of SAP applications:
If you decide not to use functionalities such as Windows Server Failover Clustering (WSFC) or Pacemaker
on Linux, Azure VM restart is utilized. It protects SAP systems against planned and unplanned downtime
of the Azure physical server infrastructure and overall underlying Azure platform.
SAP application high availability:
To achieve full SAP system high availability, you must protect all critical SAP system components. For
example:
Redundant SAP application servers.
Unique components. An example might be a single point of failure (SPOF) component, such as an SAP
ASCS/SCS instance or a database management system (DBMS).
SAP high availability in Azure differs from SAP high availability in an on-premises physical or virtual
environment. The following paper SAP NetWeaver high availability and business continuity in virtual
environments with VMware and Hyper-V on Microsoft Windows describes standard SAP high-availability
configurations in virtualized environments on Windows.
There is no sapinst-integrated SAP high-availability configuration for Linux as there is for Windows. For
information about SAP high availability on-premises for Linux, see High availability partner information.

Azure infrastructure high availability


SLA for single-instance virtual machines
There is currently a single-VM SLA of 99.9% with premium storage. To get an idea about what the availability of a
single VM might be, you can build the product of the various available Azure Service Level Agreements.
The basis for the calculation is 30 days per month, or 43,200 minutes. For example, a 0.05% downtime
corresponds to 21.6 minutes. As usual, the availability of the various services is calculated in the following way:
(Availability Service #1/100) * (Availability Service #2/100) * (Availability Service #3/100) *…
For example:
(99.95/100) * (99.9/100) * (99.9/100) = 0.9975 or an overall availability of 99.75%.
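As a small illustration of this arithmetic, the following Python sketch multiplies the individual service SLAs and converts the result into expected downtime per 30-day month (43,200 minutes). The SLA values are only the example figures from above, not a statement about any specific Azure service.

def composite_availability(slas_percent):
    # Multiply the individual availabilities, for example [99.95, 99.9, 99.9]
    result = 1.0
    for sla in slas_percent:
        result *= sla / 100.0
    return result * 100.0

minutes_per_month = 30 * 24 * 60        # 43,200 minutes
availability = composite_availability([99.95, 99.9, 99.9])
downtime = (1 - availability / 100.0) * minutes_per_month

print(f"Composite availability: {availability:.2f}%")          # ~99.75%
print(f"Expected downtime per month: {downtime:.1f} minutes")  # ~108 minutes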
Multiple instances of virtual machines in the same availability set
For all virtual machines that have two or more instances deployed in the same availability set, we guarantee that
you will have virtual machine connectivity to at least one instance at least 99.95% of the time.
When two or more VMs are part of the same availability set, each virtual machine in the availability set is
assigned an update domain and a fault domain by the underlying Azure platform.
Update domains guarantee that multiple VMs are not rebooted at the same time during the planned
maintenance of an Azure infrastructure. Only one VM is rebooted at a time.
Fault domains guarantee that VMs are deployed on hardware components that do not share a common
power source and network switch. When servers, a network switch, or a power source undergo an
unplanned downtime, only one VM is affected.
For more information, see Manage the availability of Windows virtual machines in Azure.
An availability set is used for achieving high availability of:
Redundant SAP application servers.
Clusters with two or more nodes (VMs, for example) that protect SPOFs such as an SAP ASCS/SCS instance or
a DBMS.
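To make the update-domain and fault-domain concept concrete, here is a minimal, hedged sketch that creates such an availability set with the azure-mgmt-compute Python SDK. The resource names, region, and domain counts are placeholder assumptions for illustration; check the SDK version and your region's domain limits before relying on it.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder values - adjust to your environment.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"

compute_client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# The 'Aligned' SKU is used when the VMs in the set use managed disks.
avset = compute_client.availability_sets.create_or_update(
    resource_group,
    "sap-app-avset",
    {
        "location": "westeurope",
        "platform_fault_domain_count": 2,    # region-dependent maximum (2 or 3)
        "platform_update_domain_count": 5,
        "sku": {"name": "Aligned"},
    },
)
print(avset.name, avset.platform_update_domain_count)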
Azure Availability Zones
Azure is in the process of rolling out the concept of Azure Availability Zones throughout different Azure regions. In
Azure regions where Availability Zones are offered, the region has multiple data centers that are independent in
their supply of power, cooling, and network. The reason for offering different zones within a single Azure region is
to enable you to deploy applications across two or three of the Availability Zones offered. Assuming that issues in
power sources and/or network affect only one Availability Zone's infrastructure, your application deployment within
an Azure region remains functional, possibly with reduced capacity because some VMs in one zone might be lost,
while the VMs in the other zones are still up and running. The Azure regions that offer zones are listed in Azure
Availability Zones.
When you use Availability Zones, there are some things to consider:
You can't deploy Azure availability sets within an Availability Zone. You need to choose either an Availability
Zone or an availability set as the deployment frame for a VM.
You can't use the Basic Load Balancer to create failover cluster solutions based on Windows Server Failover
Clustering or Linux Pacemaker. Instead, you need to use the Azure Standard Load Balancer SKU.
Azure Availability Zones don't give any guarantees about the distance between the different zones within
one region.
The network latency between Azure Availability Zones differs from one Azure region to another. There will be
cases where you can reasonably run the SAP application layer deployed across different zones, because the
network latency from one zone to the active DBMS VM is still acceptable from a business-process point of view.
And there will be customer scenarios where the latency between the active DBMS VM in one zone and an SAP
application instance in a VM in another zone is too intrusive and not acceptable for the SAP business processes.
As a result, the deployment architectures need to be different: an active/active architecture for the application,
or an active/passive architecture if latency is too high.
Using Azure managed disks is mandatory for deploying into Azure Availability Zones.
Planned and unplanned maintenance of virtual machines
Two types of Azure platform events can affect the availability of your virtual machines:
Planned maintenance events are periodic updates made by Microsoft to the underlying Azure platform.
The updates improve overall reliability, performance, and security of the platform infrastructure that your
virtual machines run on.
Unplanned maintenance events occur when the hardware or physical infrastructure underlying your
virtual machine has failed in some way. It might include local network failures, local disk failures, or other
rack level failures. When such a failure is detected, the Azure platform automatically migrates your virtual
machine from the unhealthy physical server that hosts your virtual machine to a healthy physical server.
Such events are rare, but they might also cause your virtual machine to reboot.
For more information, see Manage the availability of Windows virtual machines in Azure.
Azure Storage redundancy
The data in your storage account is always replicated to ensure durability and high availability, meeting the Azure
Storage SLA even in the face of transient hardware failures.
Because Azure Storage keeps three images of the data by default, the use of RAID 5 or RAID 1 across multiple
Azure disks is unnecessary.
For more information, see Azure Storage replication.
Azure Managed Disks
Managed Disks is a resource type in Azure Resource Manager that we recommend using instead of virtual
hard disks (VHDs) that are stored in Azure storage accounts. Managed disks automatically align with the
availability set of the virtual machine they are attached to. They increase the availability of your virtual machine
and the services that are running on it.
For more information, see Azure Managed Disks overview.
We recommend that you use managed disks because they simplify the deployment and management of your
virtual machines.

Utilizing Azure infrastructure high availability to achieve higher availability of SAP applications


If you decide not to use functionalities such as WSFC or Pacemaker on Linux (currently supported only for SUSE
Linux Enterprise Server [SLES] 12 and later), Azure VM restart is utilized. It protects SAP systems against planned
and unplanned downtime of the Azure physical server infrastructure and overall underlying Azure platform.
For more information about this approach, see Utilize Azure infrastructure VM restart to achieve higher
availability of the SAP system.

High availability of SAP applications on Azure IaaS


To achieve full SAP system high availability, you must protect all critical SAP system components. For example:
Redundant SAP application servers.
Unique components. An example might be a single point of failure (SPOF) component, such as an SAP
ASCS/SCS instance or a database management system (DBMS).
The next sections discuss how to achieve high availability for all three critical SAP system components.
High-availability architecture for SAP application servers
This section applies to:

Windows and Linux

You usually don't need a specific high-availability solution for the SAP application server and dialog instances.
You achieve high availability by redundancy, and you configure multiple dialog instances in various instances of
Azure virtual machines. You should have at least two SAP application instances installed in two instances of Azure
virtual machines.

Figure 1: High-availability SAP application server


You must place all virtual machines that host SAP application server instances in the same Azure availability set.
An Azure availability set ensures that:
All virtual machines are not part of the same update domain.
An update domain ensures that the virtual machines aren't updated at the same time during planned
maintenance downtime.
The basic functionality, which builds on different update and fault domains within an Azure scale unit, was
already introduced in the update domains section.
All virtual machines are not part of the same fault domain.
A fault domain ensures that virtual machines are deployed so that no single point of failure affects the
availability of all virtual machines.
The number of update and fault domains that can be used by an Azure availability set within an Azure scale unit
is finite. If you keep adding VMs to a single availability set, two or more VMs will eventually end up in the same
fault or update domain.
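To illustrate that pigeonhole effect, the following hedged Python sketch mimics a simple round-robin placement of VMs into a fixed number of update and fault domains. The real placement logic is internal to the Azure platform, and the domain counts used here are illustrative assumptions only.

def assign_domains(vm_count, update_domains=5, fault_domains=3):
    # Simplified round-robin placement; Azure's actual algorithm is platform-internal.
    placement = {}
    for i in range(vm_count):
        placement[f"sapapp-vm{i}"] = {
            "update_domain": i % update_domains,
            "fault_domain": i % fault_domains,
        }
    return placement

for name, domains in assign_domains(7).items():
    print(name, domains)
# With 7 VMs and 5 update domains, at least two VMs inevitably share an update domain.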
If you deploy a few SAP application server instances in their dedicated VMs, assuming that we have five update
domains, the following picture emerges. The actual maximum number of update and fault domains within an
availability set might change in the future:
Figure 2: High availability of SAP application servers in an Azure availability set
For more information, see Manage the availability of Windows virtual machines in Azure.
For more information, see the Azure availability sets section of the Azure virtual machines planning and
implementation for SAP NetWeaver document.
Unmanaged disks only: Because the Azure storage account is a potential single point of failure, it's important
to have at least two Azure storage accounts, in which at least two virtual machines are distributed. In an ideal
setup, the disks of each virtual machine that is running an SAP dialog instance would be deployed in a different
storage account.

IMPORTANT
We strongly recommend that you use Azure managed disks for your SAP high-availability installations. Because managed
disks automatically align with the availability set of the virtual machine they are attached to, they increase the availability of
your virtual machine and the services that are running on it.

High-availability architecture for an SAP ASCS/SCS instance on Windows

Windows

You can use a WSFC solution to protect the SAP ASCS/SCS instance. The solution has the following variants:
Cluster the SAP ASCS/SCS instance by using clustered shared disks: For more information about
this architecture, see Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster
shared disk.
Cluster the SAP ASCS/SCS instance by using file share: For more information about this
architecture, see Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using file share.
Cluster the SAP ASCS/SCS instance by using ANF SMB share: For more information about this
architecture, see Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using ANF
SMB file share.
High-availability architecture for an SAP ASCS/SCS instance on Linux

Linux
For more information about clustering the SAP ASCS/SCS instance by using the SLES cluster framework, see
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications. For an
alternative HA architecture on SLES that doesn't require highly available NFS, see High-availability guide
for SAP NetWeaver on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications.

For more information about clustering the SAP ASCS/SCS instance by using the Red Hat cluster framework, see
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux
SAP NetWeaver multi-SID configuration for a clustered SAP ASCS/SCS instance

Windows
Multi-SID is supported with WSFC, using file share and shared disk.
For more information about multi-SID high-availability architecture on Windows, see:

SAP ASCS/SCS instance multi-SID high availability for Windows Server Failover Clustering and file share
SAP ASCS/SCS instance multi-SID high availability for Windows Server Failover Clustering and shared
disk

Linux
Multi-SID clustering is supported on Linux Pacemaker clusters for SAP ASCS/ERS, limited to five SAP SIDs on
the same cluster. For more information about multi-SID high-availability architecture on Linux, see:

HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide
High-availability DBMS instance
The DBMS is also a single point of failure in an SAP system. You need to protect it by using a high-availability
solution. The following figure shows a SQL Server AlwaysOn high-availability solution in Azure, with Windows
Server Failover Clustering and the Azure internal load balancer. SQL Server AlwaysOn replicates DBMS data and
log files by using its own DBMS replication. In this case, you don't need cluster shared disk, which simplifies the
entire setup.
Figure 3: Example of a high-availability SAP DBMS, with SQL Server AlwaysOn
For more information about clustering SQL Server DBMS in Azure by using the Azure Resource Manager
deployment model, see these articles:
Configure an AlwaysOn availability group in Azure virtual machines manually by using Resource Manager
Configure an Azure internal load balancer for an AlwaysOn availability group in Azure
For more information about clustering SAP HANA DBMS in Azure by using the Azure Resource Manager
deployment model, see High availability of SAP HANA on Azure virtual machines (VMs).
Utilize Azure infrastructure VM restart to achieve
“higher availability” of an SAP system
12/22/2020 • 5 minutes to read

This section applies to:

Windows and Linux

If you decide not to use functionalities such as Windows Server Failover Clustering (WSFC) or Pacemaker on Linux
(currently supported only for SUSE Linux Enterprise Server [SLES] 12 and later), Azure VM restart is utilized. It
protects SAP systems against planned and unplanned downtime of the Azure physical server infrastructure and
overall underlying Azure platform.

NOTE
Azure VM restart primarily protects VMs and not applications. Although VM restart doesn't offer high availability for SAP
applications, it does offer a certain level of infrastructure availability. It also indirectly offers “higher availability” of SAP
systems. There is also no SLA for the time it takes to restart a VM after a planned or unplanned host outage, which makes
this method of high availability unsuitable for the critical components of an SAP system. Examples of critical components
might be an ASCS/SCS instance or a database management system (DBMS).

Another important infrastructure element for high availability is storage. For example, the Azure Storage SLA is
99.9% availability. If you deploy all VMs and their disks in a single Azure storage account, potential Azure Storage
unavailability will cause the unavailability of all VMs that are placed in that storage account and all SAP
components that are running inside of the VMs.
Instead of putting all VMs into a single Azure storage account, you can use dedicated storage accounts for each VM.
By using multiple independent Azure storage accounts, you increase overall VM and SAP application availability.
Azure managed disks are automatically placed in the fault domain of the virtual machine they are attached to. If
you place two virtual machines in an availability set and use managed disks, the platform takes care of distributing
the managed disks into different fault domains as well. If you plan to use a premium storage account, we highly
recommend using managed disks.
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure high availability and storage
accounts might look like this:
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure high availability and managed
disks might look like this:

For critical SAP components, you have achieved the following so far:
High availability of SAP application servers
SAP application server instances are redundant components. Each SAP application server instance is
deployed on its own VM, which is running in a different Azure fault and upgrade domain. For more
information, see the Fault domains and Upgrade domains sections.
You can ensure this configuration by using Azure availability sets. For more information, see the Azure
availability sets section.
Potential planned or unplanned unavailability of an Azure fault or upgrade domain will cause unavailability
of a restricted number of VMs with their SAP application server instances.
Each SAP application server instance is placed in its own Azure storage account. The potential unavailability
of one Azure storage account will cause the unavailability of only one VM with its SAP application server
instance. However, be aware that there is a limit on the number of Azure storage accounts within one Azure
subscription. To ensure automatic start of an ASCS/SCS instance after the VM reboot, set the Autostart
parameter in the ASCS/SCS instance start profile that is described in the Using Autostart for SAP instances
section.
For more information, see High availability for SAP application servers.
Even if you use managed disks, the disks are stored in an Azure storage account and might be unavailable in
the event of a storage outage.
Higher availability of SAP ASCS/SCS instances
In this scenario, utilize Azure VM restart to protect the VM with the installed SAP ASCS/SCS instance. In the
case of planned or unplanned downtime of Azure servers, VMs are restarted on another available server. As
mentioned earlier, Azure VM restart primarily protects VMs and not applications, in this case the ASCS/SCS
instance. Through the VM restart, you indirectly reach “higher availability” of the SAP ASCS/SCS instance.
To ensure an automatic start of ASCS/SCS instance after the VM reboot, set the Autostart parameter in the
ASCS/SCS instance start profile, as described in the Using Autostart for SAP instances section. This setting
means that the ASCS/SCS instance as a single point of failure (SPOF) running in a single VM will determine
the availability of the whole SAP landscape.
Higher availability of the DBMS server
As in the preceding SAP ASCS/SCS instance use case, you utilize Azure VM restart to protect the VM with
installed DBMS software, and you achieve “higher availability” of DBMS software through VM restart.
A DBMS that's running in a single VM is also a SPOF, and it is the determinative factor for the availability of
the whole SAP landscape.

Using Autostart for SAP instances


SAP offers a setting that lets you start SAP instances immediately after the start of the OS within the VM. The
instructions are documented in SAP Knowledge Base Article 1909114. However, SAP no longer recommends the
use of the setting, because it does not allow control of the order of instance restarts if more than one VM is
affected or if multiple instances are running per VM.
Assuming a typical Azure scenario of one SAP application server instance in a VM and a single VM eventually
getting restarted, Autostart is not critical. But you can enable it by adding the following parameter into the start
profile of the SAP Advanced Business Application Programming (ABAP) or Java instance:
Autostart = 1

NOTE
The Autostart parameter has certain shortcomings as well. Specifically, the parameter triggers the start of an SAP ABAP or
Java instance when the related Windows or Linux service of the instance is started. That sequence occurs when the operating
system boots up. However, restarts of SAP services are also a common occurrence for SAP Software Lifecycle Management
functionality such as Software Update Manager (SUM) or other updates or upgrades. These functionalities are not expecting
an instance to be restarted automatically. Therefore, the Autostart parameter should be disabled before you run such tasks.
The Autostart parameter also should not be used for SAP instances that are clustered, such as ASCS/SCS/CI.

For more information about Autostart for SAP instances, see the following articles:
Start or stop SAP along with your Unix Server Start/Stop
Starting and stopping SAP NetWeaver management agents

Next steps
For information about full SAP NetWeaver application-aware high availability, see SAP application high availability
on Azure IaaS.
SAP workload configurations with Azure Availability
Zones
12/22/2020 • 15 minutes to read

Azure Availability Zones is one of the high-availability features that Azure provides. Using Availability Zones
improves the overall availability of SAP workloads on Azure. This feature is already available in some Azure
regions. In the future, it will be available in more regions.
This graphic shows the basic architecture of SAP high availability:

The SAP application layer is deployed across one Azure availability set. For high availability of SAP Central
Services, you can deploy two VMs in a separate availability set. Use Windows Server Failover Clustering or
Pacemaker (Linux) as a high-availability framework with automatic failover in case of an infrastructure or software
problem. To learn more about these deployments, see:
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using file share
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux
A similar architecture applies for the DBMS layer of SAP NetWeaver, S/4HANA, or Hybris systems. You deploy the
DBMS layer in an active/passive mode with a failover cluster solution to protect from infrastructure or software
failure. The failover cluster solution could be a DBMS-specific failover framework, Windows Server Failover
Clustering, or Pacemaker.
To deploy the same architecture by using Azure Availability Zones, you need to make some changes to the
architecture outlined earlier. This article describes these changes.

Considerations for deploying across Availability Zones


Consider the following when you use Availability Zones:
There are no guarantees regarding the distances between various Availability Zones within an Azure region.
Availability Zones are not an ideal DR solution. Natural disasters can cause widespread damage in world
regions, including heavy damage to power infrastructures. The distances between various zones might not be
large enough to constitute a proper DR solution.
The network latency across Availability Zones is not the same in all Azure regions. In some cases, you can
deploy and run the SAP application layer across different zones because the network latency from one zone to
the active DBMS VM is acceptable. But in some Azure regions, the latency between the active DBMS VM and the
SAP application instance, when deployed in different zones, might not be acceptable for SAP business
processes. In these cases, the deployment architecture needs to be different, with an active/active architecture
for the application or an active/passive architecture where cross-zone network latency is too high.
When deciding where to use Availability Zones, base your decision on the network latency between the zones.
Network latency plays an important role in two areas:
Latency between the two DBMS instances that need to have synchronous replication. The higher the
network latency, the more likely it will affect the scalability of your workload.
The difference in network latency between a VM running an SAP dialog instance in-zone with the active
DBMS instance and a similar VM in another zone. As this difference increases, the influence on the
running time of business processes and batch jobs also increases, dependent on whether they run in-
zone with the DBMS or in a different zone.
When you deploy Azure VMs across Availability Zones and establish failover solutions within the same Azure
region, some restrictions apply:
You must use Azure Managed Disks when you deploy to Azure Availability Zones.
The mapping of zone enumerations to the physical zones is fixed on an Azure subscription basis. If you're using
different subscriptions to deploy your SAP systems, you need to define the ideal zones for each subscription.
You can't deploy Azure availability sets within an Azure Availability Zone unless you use an Azure proximity
placement group. How you can deploy the SAP DBMS layer and the central services across zones while
deploying the SAP application layer in availability sets, and still achieve close proximity of the VMs, is
documented in the article Azure Proximity Placement Groups for optimal network latency with SAP
applications. If you don't use Azure proximity placement groups, you need to choose either zones or
availability sets as the deployment framework for virtual machines.
You can't use an Azure Basic Load Balancer to create failover cluster solutions based on Windows Server
Failover Clustering or Linux Pacemaker. Instead, you need to use the Azure Standard Load Balancer SKU.
The ideal Availability Zones combination
Before you decide how to use Availability Zones, you need to determine:
The network latency among the three zones of an Azure region. This will enable you to choose the zones with
the least network latency in cross-zone network traffic.
The difference between VM-to-VM latency within one of the zones, of your choosing, and the network latency
across two zones of your choosing.
Whether the VM types that you need to deploy are available in the two zones that you
selected. With some VMs, especially M-Series VMs, you might encounter situations in which some SKUs are
available in only two of the three zones.

Network latency between and within zones


To determine the latency between the different zones, you need to:
Deploy the VM SKU you want to use for your DBMS instance in all three zones. Make sure Azure Accelerated
Networking is enabled when you take this measurement.
When you find the two zones with the least network latency, deploy another three VMs of the VM SKU that you
want to use as the application layer VM across the three Availability Zones. Measure the network latency
against the two DBMS VMs in the two DBMS zones that you selected.
Use niping as a measuring tool. This tool, from SAP, is described in SAP support notes #500235 and
#1100926. Focus on the commands documented for latency measurements. Because ping doesn't work
through the Azure Accelerated Networking code paths, we don't recommend that you use it.
You don't need to perform these tests manually. You can find a PowerShell procedure Availability Zone Latency Test
that automates the latency tests described.
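niping remains the SAP-recommended measuring tool. If you want an additional rough cross-zone check before running the full procedure, the following hedged Python sketch measures TCP round-trip times between two test VMs: run the server part on the VM in one zone and the client part on the VM in another zone. The port and sample count are arbitrary assumptions, and this is only an approximation, not a replacement for the niping measurements described in the SAP notes.

import socket, statistics, time

def run_echo_server(port=5201):
    # Run this on the VM in the first zone.
    with socket.create_server(("0.0.0.0", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(64):
                conn.sendall(data)

def measure_rtt(host, port=5201, samples=200):
    # Run this on the VM in the second zone, pointing at the echo server VM.
    rtts = []
    with socket.create_connection((host, port)) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(samples):
            start = time.perf_counter()
            sock.sendall(b"x" * 10)          # small payload, similar to a latency probe
            sock.recv(64)
            rtts.append((time.perf_counter() - start) * 1000.0)
    print(f"median RTT: {statistics.median(rtts):.3f} ms, "
          f"p95: {statistics.quantiles(rtts, n=20)[18]:.3f} ms")

# Example: measure_rtt("10.1.0.4")  # private IP of the echo server VM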
Based on your measurements and the availability of your VM SKUs in the Availability Zones, you need to make
some decisions:
Define the ideal zones for the DBMS layer.
Determine whether you want to distribute your active SAP application layer across one, two, or all three zones,
based on differences of network latency in-zone versus across zones.
Determine whether you want to deploy an active/passive configuration or an active/active configuration, from
an application point of view. (These configurations are explained later in this article.)
In making these decisions, also take into account SAP's network latency recommendations, as documented in SAP
note #1100926.

IMPORTANT
The measurements and decisions you make are valid for the Azure subscription you used when you took the measurements.
If you use another Azure subscription, you need to repeat the measurements. The mapping of enumerated zones might be
different for another Azure subscription.
IMPORTANT
It's expected that the measurements described earlier will provide different results in every Azure region that supports
Availability Zones. Even if your network latency requirements are the same, you might need to adopt different deployment
strategies in different Azure regions because the network latency between zones can be different. In some Azure regions, the
network latency among the three different zones can be vastly different. In other regions, the network latency among the
three different zones might be more uniform. The claim that there is always a network latency between 1 and 2 milliseconds
is not correct. The network latency across Availability Zones in Azure regions can't be generalized.

Active/Active deployment
This deployment architecture is called active/active because you deploy your active SAP application servers across
two or three zones. The SAP Central Services instance that uses enqueue replication will be deployed between two
zones. The same is true for the DBMS layer, which will be deployed across the same zones as SAP Central Service.
When considering this configuration, you need to find the two Availability Zones in your region that offer cross-
zone network latency that's acceptable for your workload and your synchronous DBMS replication. You also want
to be sure the delta between network latency within the zones you selected and the cross-zone network latency
isn't too large. This is because you don't want large variations, depending on whether a job runs in-zone with the
DBMS server or across zones, in the running times of your business processes or batch jobs. Some variations are
acceptable, but not factors of difference.
A simplified schema of an active/active deployment across two zones could look like this:

The following considerations apply for this configuration:


If you aren't using an Azure proximity placement group, you treat the Azure Availability Zones as fault and
update domains for all the VMs, because availability sets can't be deployed in Azure Availability Zones.
If you want to combine zonal deployments for the DBMS layer and central services, but want to use Azure
availability sets for the application layer, you need to use Azure proximity groups as described in the article
Azure Proximity Placement Groups for optimal network latency with SAP applications.
For the load balancers of the failover clusters of SAP Central Services and the DBMS layer, you need to use the
Standard SKU Azure Load Balancer. The Basic Load Balancer won't work across zones.
The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched
across zones. You don't need separate virtual networks for each zone.
For all virtual machines you deploy, you need to use Azure Managed Disks. Unmanaged disks aren't supported
for zonal deployments.
Azure Premium Storage and Ultra SSD storage don't support any type of storage replication across zones. The
application (DBMS or SAP Central Services) must replicate important data.
The same is true for the shared sapmnt directory, which is a shared disk (Windows), a CIFS share (Windows), or
an NFS share (Linux). You need to use a technology that replicates these shared disks or shares between the
zones. These technologies are supported:
For Windows, a cluster solution that uses SIOS DataKeeper, as documented in Cluster an SAP
ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk in Azure.
For SUSE Linux, an NFS share that's built as documented in High availability for NFS on Azure VMs
on SUSE Linux Enterprise Server.
Currently, the solution that uses Microsoft Scale-Out File Server, as documented in Prepare Azure
infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP
ASCS/SCS instances, is not supported across zones.
The third zone is used to host the SBD device in case you build a SUSE Linux Pacemaker cluster or additional
application instances.
To achieve run time consistency for critical business processes, you can try to direct certain batch jobs and
users to application instances that are in-zone with the active DBMS instance by using SAP batch server groups,
SAP logon groups, or RFC groups. However, in the case of a zonal failover, you would need to manually move
these groups to instances running on VMs that are in-zone with the active DB VM.
You might want to deploy dormant dialog instances in each of the zones. This is to enable an immediate return
to the former resource capacity if a zone used by part of your application instances is out of service.

IMPORTANT
In this active/active scenario, additional charges for cross-zone bandwidth apply, as announced by Microsoft effective
04/01/2020. Check the document Bandwidth Pricing Details. The data transfer between the SAP application layer and the
SAP DBMS layer is quite intensive, so the active/active scenario can add noticeably to costs. Keep checking this article
for the exact costs.

Active/Passive deployment
If you can't find an acceptable delta between the network latency within one zone and the latency of cross-zone
network traffic, you can deploy an architecture that has an active/passive character from the SAP application layer
point of view. You define an active zone, which is the zone where you deploy the complete application layer and
where you attempt to run both the active DBMS and the SAP Central Services instance. With such a configuration,
you need to make sure you don't have extreme run time variations, depending on whether a job runs in-zone with
the active DBMS instance or not, in business transactions and batch jobs.
The basic layout of the architecture looks like this:
The following considerations apply for this configuration:
Availability sets can't be deployed in Azure Availability Zones. To compensate for that, you can use Azure
proximity placement groups as documented in the article Azure Proximity Placement Groups for optimal
network latency with SAP applications.
When you use this architecture, you need to monitor the status closely and try to keep the active DBMS and
SAP Central Services instances in the same zone as your deployed application layer. In case of a failover of
SAP Central Service or the DBMS instance, you want to make sure that you can manually fail back into the
zone with the SAP application layer deployed as quickly as possible.
For the load balancers of the failover clusters of SAP Central Services and the DBMS layer, you need to use
the Standard SKU Azure Load Balancer. The Basic Load Balancer won't work across zones.
The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched
across zones. You don't need separate virtual networks for each zone.
For all virtual machines you deploy, you need to use Azure Managed Disks. Unmanaged disks aren't
supported for zonal deployments.
Azure Premium Storage and Ultra SSD storage don't support any type of storage replication across zones.
The application (DBMS or SAP Central Services) must replicate important data.
The same is true for the shared sapmnt directory, which is a shared disk (Windows), a CIFS share
(Windows), or an NFS share (Linux). You need to use a technology that replicates these shared disks or
shares between the zones. These technologies are supported:
For Windows, a cluster solution that uses SIOS DataKeeper, as documented in Cluster an SAP ASCS/SCS
instance on a Windows failover cluster by using a cluster shared disk in Azure.
For SUSE Linux, an NFS share that's built as documented in High availability for NFS on Azure VMs on
SUSE Linux Enterprise Server.
Currently, the solution that uses Microsoft Scale-Out File Server, as documented in Prepare Azure
infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS
instances, is not supported across zones.
The third zone is used to host the SBD device in case you build a SUSE Linux Pacemaker cluster or
additional application instances.
You should deploy dormant VMs in the passive zone (from a DBMS point of view) so you can start
application resources in case of a zone failure.
Azure Site Recovery is currently unable to replicate active VMs to dormant VMs between zones.
You should invest in automation that allows you, in case of a zone failure, to automatically start the SAP
application layer in the second zone.

Combined high availability and disaster recovery configuration


Microsoft doesn't share any information about geographical distances between the facilities that host different
Azure Availability Zones in an Azure region. Still, some customers are using zones for a combined HA and DR
configuration that promises a recovery point objective (RPO) of zero. This means that you shouldn't lose any
committed database transactions even in the case of disaster recovery.

NOTE
We recommend that you use a configuration like this only in certain circumstances. For example, you might use it when data
can't leave the Azure region for security or compliance reasons.

Here's one example of how such a configuration might look:


The following considerations apply for this configuration:
You're either assuming that there's a significant distance between the facilities hosting an Availability Zone
or you're forced to stay within a certain Azure region. Availability sets can't be deployed in Azure Availability
Zones. To compensate for that, you can use Azure proximity placement groups as documented in the article
Azure Proximity Placement Groups for optimal network latency with SAP applications.
When you use this architecture, you need to monitor the status closely and try to keep the active DBMS and
SAP Central Services instances in the same zone as your deployed application layer. In case of a failover of
SAP Central Service or the DBMS instance, you want to make sure that you can manually fail back into the
zone with the SAP application layer deployed as quickly as possible.
You should have production application instances pre-installed in the VMs that run the active QA
application instances.
In case of a zone failure, shut down the QA application instances and start the production instances instead.
Note that you need to use virtual names for the application instances to make this work.
For the load balancers of the failover clusters of SAP Central Services and the DBMS layer, you need to use
the Standard SKU Azure Load Balancer. The Basic Load Balancer won't work across zones.
The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched
across zones. You don't need separate virtual networks for each zone.
For all virtual machines you deploy, you need to use Azure Managed Disks. Unmanaged disks aren't
supported for zonal deployments.
Azure Premium Storage and Ultra SSD storage don't support any type of storage replication across zones.
The application (DBMS or SAP Central Services) must replicate important data.
The same is true for the shared sapmnt directory, which is a shared disk (Windows), a CIFS share
(Windows), or an NFS share (Linux). You need to use a technology that replicates these shared disks or
shares between the zones. These technologies are supported:
For Windows, a cluster solution that uses SIOS DataKeeper, as documented in Cluster an SAP ASCS/SCS
instance on a Windows failover cluster by using a cluster shared disk in Azure.
For SUSE Linux, an NFS share that's built as documented in High availability for NFS on Azure VMs on
SUSE Linux Enterprise Server.
Currently, the solution that uses Microsoft Scale-Out File Server, as documented in Prepare Azure
infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS
instances, is not supported across zones.
The third zone is used to host the SBD device in case you build a SUSE Linux Pacemaker cluster or
additional application instances.

Next steps
Here are some next steps for deploying across Azure Availability Zones:
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk in Azure
Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP
ASCS/SCS instances
Cluster an SAP ASCS/SCS instance on a Windows
failover cluster by using a cluster shared disk in
Azure
12/22/2020 • 8 minutes to read

Windows

Windows Server failover clustering is the foundation of a high-availability SAP ASCS/SCS installation and DBMS
in Windows.
A failover cluster is a group of 1+n independent servers (nodes) that work together to increase the availability of
applications and services. If a node failure occurs, Windows Server failover clustering calculates the number of
failures that can occur and still maintain a healthy cluster to provide applications and services. You can choose
from different quorum modes to achieve failover clustering.

Prerequisites
Before you begin the tasks in this article, review the following article:
Azure Virtual Machines high-availability architecture and scenarios for SAP NetWeaver

Windows Server failover clustering in Azure


Windows Server failover clustering with Azure Virtual Machines requires additional configuration steps. When
you build a cluster, you need to set several IP addresses and virtual host names for the SAP ASCS/SCS instance.
Name resolution in Azure and the cluster virtual host name
The Azure cloud platform doesn't offer the option to configure virtual IP addresses, such as floating IP addresses.
You need an alternative solution to set up a virtual IP address to reach the cluster resource in the cloud.
The Azure Load Balancer service provides an internal load balancer for Azure. With the internal load balancer,
clients reach the cluster over the cluster virtual IP address.
Deploy the internal load balancer in the resource group that contains the cluster nodes. Then, configure all
necessary port forwarding rules by using the probe ports of the internal load balancer. Clients can connect via
the virtual host name. The DNS server resolves the cluster IP address, and the internal load balancer handles
port forwarding to the active node of the cluster.

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.
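As a hedged sketch of what such an internal load balancer could look like when created programmatically, the following Python example uses the azure-mgmt-network SDK to define a Standard internal load balancer with a static frontend IP (the cluster virtual IP), a health probe on a cluster probe port, and an HA-ports rule with floating IP enabled. All names, resource IDs, IP addresses, and the probe port are placeholder assumptions; the probe port must match what you configure in the cluster, and the cluster nodes' NICs still have to be added to the backend pool.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB = "<subscription-id>"
RG = "<resource-group>"
LB_NAME = "lb-ascs"
LB_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
         f"/providers/Microsoft.Network/loadBalancers/{LB_NAME}")

network_client = NetworkManagementClient(DefaultAzureCredential(), SUB)

poller = network_client.load_balancers.begin_create_or_update(
    RG, LB_NAME,
    {
        "location": "westeurope",
        "sku": {"name": "Standard"},            # Standard SKU is required for these cluster scenarios
        "frontend_ip_configurations": [{
            "name": "ascs-frontend",
            "private_ip_allocation_method": "Static",
            "private_ip_address": "10.0.0.10",  # the ASCS/SCS cluster virtual IP
            "subnet": {"id": "<subnet-resource-id>"},
        }],
        "backend_address_pools": [{"name": "ascs-backend"}],
        "probes": [{
            "name": "ascs-probe",
            "protocol": "Tcp",
            "port": 62000,                      # cluster probe port, must match the cluster configuration
            "interval_in_seconds": 5,
            "number_of_probes": 2,
        }],
        "load_balancing_rules": [{
            "name": "ascs-ha-ports",
            "frontend_ip_configuration": {"id": f"{LB_ID}/frontendIPConfigurations/ascs-frontend"},
            "backend_address_pool": {"id": f"{LB_ID}/backendAddressPools/ascs-backend"},
            "probe": {"id": f"{LB_ID}/probes/ascs-probe"},
            "protocol": "All", "frontend_port": 0, "backend_port": 0,   # HA ports: forward all ports
            "enable_floating_ip": True,
            "idle_timeout_in_minutes": 30,
        }],
    },
)
lb = poller.result()
print("Provisioned:", lb.name)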
Windows Server failover clustering configuration in Azure without a shared disk
SAP ASCS/SCS HA with cluster shared disks
In Windows, an SAP ASCS/SCS instance contains SAP central services, the SAP message server, enqueue server
processes, and SAP global host files. SAP global host files store central files for the entire SAP system.
An SAP ASCS/SCS instance has the following components:
SAP central services:
Two processes, a message and enqueue server, and an <ASCS/SCS virtual host name>, which is used
to access these two processes.
File structure: S:\usr\sap\<SID>\ASCS/SCS<instance number>
SAP global host files:
File structure: S:\usr\sap\<SID>\SYS...
The sapmnt file share, which enables access to these global S:\usr\sap\<SID>\SYS... files by using
the following UNC path:
\\<ASCS/SCS virtual host name>\sapmnt\<SID>\SYS...
Processes, file structure, and global host sapmnt file share of an SAP ASCS/SCS instance
In a high-availability setting, you cluster SAP ASCS/SCS instances. We use clustered shared disks (drive S, in our
example) to place the SAP ASCS/SCS and SAP global host files.

SAP ASCS/SCS HA architecture with shared disk


With Enqueue server replication 1 architecture:
The same <ASCS/SCS virtual host name> is used to access the SAP message and enqueue server processes,
and the SAP global host files via the sapmnt file share.
The same cluster shared disk drive S is shared between them.
With Enqueue server replication 2 architecture:
The same <ASCS/SCS virtual host name> is used to access the SAP message server process, and the SAP
global host files via the sapmnt file share.
The same cluster shared disk drive S is shared between them.
There is a separate <ERS virtual host name> to access the enqueue server process.
SAP ASCS/SCS HA architecture with shared disk
Shared Disk and Enqueue Replication Server
1. Shared disk is supported with the Enqueue server replication 1 architecture, where the Enqueue Replication
Server (ERS) instance:
is not clustered
uses the local host name
is deployed on local disks on each of the cluster nodes
2. Shared disk is also supported with the Enqueue server replication 2 architecture, where the Enqueue
Replication Server 2 (ERS2) instance:
is clustered
uses a dedicated virtual/network host name
needs the IP address of the ERS virtual host name to be configured on the Azure internal load balancer, in
addition to the (A)SCS IP address
is deployed on local disks on each of the clustered nodes, so there is no need for a shared disk

TIP
You can find more information about Enqueue Replication Server 1 and 2 (ERS1 and ERS2) here:
Enqueue Replication Server in a Microsoft Failover Cluster
New Enqueue Replicator in Failover Cluster environments

Options for shared disk in Azure for SAP workloads


There are two options for a shared disk in a Windows failover cluster in Azure:
Azure shared disks - a feature that allows you to attach an Azure managed disk to multiple VMs simultaneously.
Using third-party software SIOS DataKeeper Cluster Edition to create a mirrored storage that simulates cluster
shared storage.
When selecting the technology for the shared disk, keep in mind the following considerations:
Azure shared disk for SAP workloads
Allows you to attach an Azure managed disk to multiple VMs simultaneously without the need for additional
software to maintain and operate.
You will be operating with a single Azure shared disk on one storage cluster. That has an impact on the
reliability of your SAP solution.
Currently the only supported deployment is with an Azure shared Premium disk in an availability set. Azure
shared disk is not supported in zonal deployments.
Make sure to provision the Azure Premium disk with a minimum disk size as specified in Premium SSD ranges
to be able to attach it to the required number of VMs simultaneously (typically 2 for an SAP ASCS Windows
failover cluster).
Azure shared Ultra disk is not supported for SAP workloads, as it doesn't support deployment in an availability
set or zonal deployment.
SIOS
The SIOS solution provides real-time synchronous data replication between two disks.
With the SIOS solution, you operate with two managed disks, and if you use either availability sets or
Availability Zones, the managed disks will land on different storage clusters.
Deployment in Availability Zones is supported.
Requires installing and operating third-party software, which you need to purchase additionally.
Shared Disk using Azure shared disk
Microsoft is offering Azure shared disks, which can be used to implement SAP ASCS/SCS High Availability with a
shared disk option.
Prerequisites and limitations
Currently you can use Azure Premium SSD disks as an Azure shared disk for the SAP ASCS/SCS instance. The
following limitations are currently in place:
Azure Ultra disk is not supported as an Azure shared disk for SAP workloads. Currently it isn't possible to place
Azure VMs that use Azure Ultra disk in an availability set.
Azure shared disk with Premium SSD disks is only supported with VMs in an availability set. It is not supported
in an Availability Zones deployment.
The Azure shared disk value maxShares determines how many cluster nodes can use the shared disk. Typically,
for the SAP ASCS/SCS instance you configure two nodes in the Windows failover cluster, so the value for
maxShares must be set to two (see the sketch after this list).
All SAP ASCS/SCS cluster VMs must be deployed in the same Azure proximity placement group.
Although you can deploy Windows cluster VMs in an availability set with an Azure shared disk without a PPG,
a PPG ensures close physical proximity of the Azure shared disk and the cluster VMs, therefore achieving lower
latency between the VMs and the storage layer.
For further details on limitations for Azure shared disk, review carefully the Limitations section of the Azure
shared disk documentation.
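The following hedged Python sketch shows how such a shared Premium SSD with maxShares set to two could be provisioned with the azure-mgmt-compute SDK. The resource names, region, and disk size are placeholder assumptions for illustration; pick a size from the Premium SSD ranges that supports the required number of shares.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"

compute_client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

poller = compute_client.disks.begin_create_or_update(
    resource_group,
    "sapascs-shared-disk",
    {
        "location": "westeurope",
        "sku": {"name": "Premium_LRS"},
        "disk_size_gb": 256,                   # choose a size from the ranges that support shared disks
        "max_shares": 2,                       # two Windows failover cluster nodes attach the disk
        "creation_data": {"create_option": "Empty"},
    },
)
disk = poller.result()
print(disk.name, disk.max_shares)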

IMPORTANT
When deploying SAP ASCS/SCS Windows Failover cluster with Azure shared disk, be aware that your deployment will be
operating with a single shared disk in one storage cluster. Your SAP ASCS/SCS instance would be impacted in case of
issues with the storage cluster where the Azure shared disk is deployed.

TIP
Review the SAP NetWeaver on Azure planning guide and the Azure Storage guide for SAP workloads for important
considerations when planning your SAP deployment.

Supported OS versions
Both Windows Server 2016 and 2019 are supported (use the latest data center images).
We strongly recommend using Windows Server 2019 Datacenter, as:
Windows 2019 Failover Cluster Service is Azure aware
There is added integration and awareness of Azure Host Maintenance and an improved experience through
monitoring for Azure scheduled events.
It is possible to use a distributed network name (it is the default option). Therefore, there is no need to have a
dedicated IP address for the cluster network name. Also, there is no need to configure this IP address on the
Azure internal load balancer.
Shared disks in Azure with SIOS DataKeeper
Another option for shared disk is to use third-party software SIOS DataKeeper Cluster Edition to create a
mirrored storage that simulates cluster shared storage. The SIOS solution provides real-time synchronous data
replication.
To create a shared disk resource for a cluster:
1. Attach an additional disk to each of the virtual machines in a Windows cluster configuration.
2. Run SIOS DataKeeper Cluster Edition on both virtual machine nodes.
3. Configure SIOS DataKeeper Cluster Edition so that it mirrors the content of the additional disk attached
volume from the source virtual machine to the additional disk attached volume of the target virtual machine.
SIOS DataKeeper abstracts the source and target local volumes, and then presents them to Windows Server
failover clustering as one shared disk.
Get more information about SIOS DataKeeper.

Windows failover clustering configuration in Azure with SIOS DataKeeper

NOTE
You don't need shared disks for high availability with some DBMS products, like SQL Server. SQL Server AlwaysOn
replicates DBMS data and log files from the local disk of one cluster node to the local disk of another cluster node. In this
case, the Windows cluster configuration doesn't need a shared disk.

Next steps
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for an
SAP ASCS/SCS instance
Install SAP NetWeaver HA on a Windows failover cluster and shared disk for an SAP ASCS/SCS instance
Cluster an SAP ASCS/SCS instance on a Windows
failover cluster by using a file share in Azure

Windows

Windows Server failover clustering is the foundation of a high-availability SAP ASCS/SCS installation and DBMS
in Windows.
A failover cluster is a group of 1+n independent servers (nodes) that work together to increase the availability of
applications and services. If a node failure occurs, Windows Server failover clustering calculates the number of
failures that can occur and still maintain a healthy cluster to provide applications and services. You can choose
from different quorum modes to achieve failover clustering.

Prerequisites
Before you begin the tasks that are described in this article, review this article:
Azure Virtual Machines high-availability architecture and scenarios for SAP NetWeaver

IMPORTANT
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel
7.49 (and later).

Windows Server failover clustering in Azure


Compared to bare-metal or private cloud deployments, Azure Virtual Machines requires additional steps to
configure Windows Server failover clustering. When you build a cluster, you need to set several IP addresses and
virtual host names for the SAP ASCS/SCS instance.
Name resolution in Azure and the cluster virtual host name
The Azure cloud platform doesn't offer the option to configure virtual IP addresses, such as floating IP addresses.
You need an alternative solution to set up a virtual IP address to reach the cluster resource in the cloud.
The Azure Load Balancer service provides an internal load balancer for Azure. With the internal load balancer,
clients reach the cluster over the cluster virtual IP address.
Deploy the internal load balancer in the resource group that contains the cluster nodes. Then, configure all
necessary port forwarding rules by using the probe ports of the internal load balancer. The clients can connect via
the virtual host name. The DNS server resolves the cluster IP address. The internal load balancer handles port
forwarding to the active node of the cluster.
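As a minimal Azure CLI sketch (the resource group, names, frontend IP address, and probe port are placeholder assumptions; instead of an HA-ports rule you can also create individual rules per SAP port):

# Internal Standard load balancer in the resource group that contains the cluster nodes
az network lb create --resource-group MyResourceGroup --name ascs-ilb --sku Standard \
    --vnet-name MyVNet --subnet MySubnet --frontend-ip-name ascs-frontend \
    --private-ip-address 10.0.0.10 --backend-pool-name ascs-backend

# Health probe on the probe port that only the active cluster node answers
az network lb probe create --resource-group MyResourceGroup --lb-name ascs-ilb \
    --name ascs-hp --protocol Tcp --port 62000

# Load-balancing rule with floating IP so the cluster virtual IP is reachable on the active node
az network lb rule create --resource-group MyResourceGroup --lb-name ascs-ilb --name ascs-lbrule \
    --protocol All --frontend-port 0 --backend-port 0 --frontend-ip-name ascs-frontend \
    --backend-pool-name ascs-backend --probe-name ascs-hp --floating-ip true --idle-timeout 30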
Figure 1: Windows Server failover clustering configuration in Azure without a shared disk

SAP ASCS/SCS HA with file share


SAP developed a new approach, and an alternative to cluster shared disks, for clustering an SAP ASCS/SCS
instance on a Windows failover cluster. Instead of using cluster shared disks, you can use an SMB file share to
deploy SAP global host files.

NOTE
An SMB file share is an alternative to using cluster shared disks for clustering SAP ASCS/SCS instances.

This architecture is specific in the following ways:


SAP central services (with its own file structure and message and enqueue processes) are separate from the
SAP global host files.
SAP central services run under an SAP ASCS/SCS instance.
SAP ASCS/SCS instance is clustered and is accessible by using the <ASCS/SCS virtual host name> virtual host
name.
SAP global files are placed on the SMB file share and are accessed by using the <SAP global host> host name:
\\<SAP global host>\sapmnt\<SID>\SYS...
The SAP ASCS/SCS instance is installed on a local disk on both cluster nodes.
The <ASCS/SCS virtual host name> network name is different from <SAP global host>.
Figure 2: New SAP ASCS/SCS HA architecture with an SMB file share
Prerequisites for an SMB file share:
SMB 3.0 (or later) protocol.
Ability to set Active Directory access control lists (ACLs) for Active Directory user groups and the computer$
computer object.
The file share must be HA-enabled:
Disks used to store files must not be a single point of failure.
Server or VM downtime does not cause downtime on the file share.
The SAP <SID> cluster role does not contain cluster shared disks or a generic file share cluster resource.

Figure 3: SAP <SID> cluster role resources for using a file share
Scale-out file shares with Storage Spaces Direct in Azure as an
SAPMNT file share
You can use a scale-out file share to host and protect SAP global host files. A scale-out file share also offers a
highly available SAPMNT file share service.

Figure 4: A scale-out file share used to protect SAP global host files

IMPORTANT
Scale-out file shares are fully supported in the Microsoft Azure cloud, and in on-premises environments.

A scale-out file share offers a highly available and horizontally scalable SAPMNT file share.
Storage Spaces Direct is used as a shared disk for a scale-out file share. You can use Storage Spaces Direct to build
highly available and scalable storage using servers with local storage. Shared storage that is used for a scale-out
file share, like for SAP global host files, is not a single point of failure.
When choosing Storage Spaces Direct, consider these use cases:
The virtual machines used to build the Storage Spaces Direct cluster need to be deployed in an Azure
availability set.
For disaster recovery of a Storage Spaces Direct Cluster, you can use Azure Site Recovery Services.
It is not supported to stretch the Storage Space Direct Cluster across different Azure Availability Zones.
SAP prerequisites for scale-out file shares in Azure
To use a scale-out file share, your system must meet the following requirements:
At least two cluster nodes for a scale-out file share.
Each node must have at least two local disks.
For performance reasons, you must use mirroring resiliency:
Two-way mirroring for a scale-out file share with two cluster nodes.
Three-way mirroring for a scale-out file share with three (or more) cluster nodes.
We recommend three (or more) cluster nodes for a scale-out file share, with three-way mirroring. This setup
offers more scalability and more storage resiliency than the scale-out file share setup with two cluster nodes
and two-way mirroring.
You must use Azure Premium disks.
We recommend that you use Azure Managed Disks.
We recommend that you format volumes by using Resilient File System (ReFS).
For more information, see SAP Note 1869038 - SAP support for ReFs filesystem and the Choosing the
file system chapter of the article Planning volumes in Storage Spaces Direct.
Be sure that you install Microsoft KB4025334 cumulative update.
You can use DS-Series or DSv2-Series Azure VM sizes.
For good network performance between VMs, which is needed for Storage Spaces Direct disk sync, use a VM
type that has at least a “high” network bandwidth. For more information, see the DSv2-Series and DS-Series
specifications.
We recommend that you reserve some unallocated capacity in the storage pool. Leaving some unallocated
capacity in the storage pool gives volumes space to repair "in place" if a drive fails. This improves data safety
and performance. For more information, see Choosing volume size.
You don't need to configure the Azure internal load balancer for the scale-out file share network name, such as
for <SAP global host>. This is done for the <ASCS/SCS virtual host name> of the SAP ASCS/SCS instance or
for the DBMS. A scale-out file share scales out the load across all cluster nodes. <SAP global host> uses the
local IP address for all cluster nodes.

IMPORTANT
You cannot rename the SAPMNT file share, which points to <SAP global host>. SAP supports only the share name
"sapmnt."
For more information, see SAP Note 2492395 - Can the share name sapmnt be changed?

Configure SAP ASCS/SCS instances and a scale-out file share in two clusters
You can deploy SAP ASCS/SCS instances in one cluster, with their own SAP <SID> cluster role. In this case, you
configure the scale-out file share on another cluster, with another cluster role.

IMPORTANT
In this scenario, the SAP ASCS/SCS instance is configured to access the SAP global host by using UNC path \\<SAP global
host>\sapmnt\<SID>\SYS.
Figure 5: An SAP ASCS/SCS instance and a scale-out file share deployed in two clusters

IMPORTANT
In the Azure cloud, each cluster that is used for SAP and scale-out file shares must be deployed in its own Azure availability
set or across Azure Availability Zones. This ensures distributed placement of the cluster VMs across the underlying Azure
infrastructure. Availability Zone deployments are supported with this technology.

Generic file share with SIOS DataKeeper as cluster shared disks


A generic file share is another option for achieving a highly available file share.
In this case, you can use a third-party SIOS solution as a cluster shared disk.

Next steps
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and file share for an SAP
ASCS/SCS instance
Install SAP NetWeaver HA on a Windows failover cluster and file share for an SAP ASCS/SCS instance
Deploy a two-node Storage Spaces Direct scale-out file server for UPD storage in Azure
Storage Spaces Direct in Windows Server 2016
Deep dive: Volumes in Storage Spaces Direct
High availability for SAP NetWeaver on Azure VMs
on Windows with Azure NetApp Files (SMB) for SAP
applications

This article describes how to deploy and configure the virtual machines, install the cluster framework, and install a
highly available SAP NetWeaver 7.50 system on Windows VMs, using SMB on Azure NetApp Files.
The database layer isn't covered in detail in this article. We assume that the Azure virtual network has already been
created.
Read the following SAP Notes and papers first:
Azure NetApp Files documentation
SAP Note 1928533, which contains:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension for
SAP.
SAP Note 2287140 lists prerequisites for SAP-supported CA feature of SMB 3.x protocol.
SAP Note 2802770 has troubleshooting information for the slow running SAP transaction AL11 on Windows
2012 and 2016.
SAP Note 1911507 has information about transparent failover feature for a file share on Windows Server with
the SMB 3.0 protocol.
SAP Note 662452 has a recommendation (deactivating 8.3 name generation) to address poor file system
performance/errors during data access.
Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS instances
on Azure
Azure Virtual Machines high-availability architecture and scenarios for SAP NetWeaver
Add probe port in ASCS cluster configuration
Installation of an (A)SCS Instance on a Failover Cluster
Create an SMB volume for Azure NetApp Files
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
SAP developed a new approach, and an alternative to cluster shared disks, for clustering an SAP ASCS/SCS instance
on a Windows failover cluster. Instead of using cluster shared disks, one can use an SMB file share to deploy SAP
global host files. Azure NetApp Files supports SMBv3 (along with NFS) with NTFS ACL using Active Directory. Azure
NetApp Files is automatically highly available (as it is a PaaS service). These features make Azure NetApp Files great
option for hosting the SMB file share for SAP global.
Both Azure Active Directory (AD) Domain Services and Active Directory Domain Services (AD DS) are supported.
You can use existing Active Directory domain controllers with Azure NetApp Files. Domain controllers can be in
Azure as virtual machines, or on-premises via ExpressRoute or S2S VPN. In this article, we will use a domain
controller in an Azure VM.
High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving that on
Windows required building either a Scale-Out File Server (SOFS) cluster or using cluster shared disk software like
SIOS. Now it is possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files. Using
Azure NetApp Files for the shared storage eliminates the need for either SOFS or SIOS.

NOTE
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel
7.49 (and later).

The prerequisites for an SMB file share are:


SMB 3.0 (or later) protocol.
Ability to set Active Directory access control lists (ACLs) for Active Directory user groups and the computer$
computer object.
The file share must be HA-enabled.
The share for the SAP Central services in this reference architecture is offered by Azure NetApp Files:
Create and mount SMB volume for Azure NetApp Files
Perform the following steps as preparation for using Azure NetApp Files (a CLI sketch of steps 2 to 4 follows the list).
1. Follow the steps to Register for Azure NetApp Files
2. Create an Azure NetApp account, following the steps described in Create a NetApp account
3. Set up a capacity pool, following the instructions in Set up a capacity pool
4. Azure NetApp Files resources must reside in a delegated subnet. Follow the instructions in Delegate a subnet
to Azure NetApp Files to create the delegated subnet.

IMPORTANT
You need to create Active Directory connections before creating an SMB volume. Review the requirements for Active
Directory connections.

5. Create Active Directory connection, as described in Create an Active Directory connection


6. Create the Azure NetApp Files SMB volume, following the instructions in Add an SMB volume
7. Mount the SMB volume on your Windows Virtual Machine.
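A minimal Azure CLI sketch of steps 2 to 4 above (the account name, pool name, subnet, address prefix, location, and pool size are placeholder assumptions; the Active Directory connection and the SMB volume still need to be created as described in the linked instructions):

# Delegate a subnet to Azure NetApp Files (step 4)
az network vnet subnet create --resource-group MyResourceGroup --vnet-name MyVNet \
    --name anf-subnet --address-prefixes 10.0.2.0/24 --delegations "Microsoft.NetApp/volumes"

# Create the NetApp account and a capacity pool (steps 2 and 3)
az netappfiles account create --resource-group MyResourceGroup --name mynetappaccount --location westeurope

az netappfiles pool create --resource-group MyResourceGroup --account-name mynetappaccount \
    --name sapcap --location westeurope --size 4 --service-level Premium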

TIP
You can find the instructions for mounting the Azure NetApp Files volume by navigating in the Azure portal to the Azure
NetApp Files object, selecting the Volumes blade, and then Mount Instructions .
Prepare the infrastructure for SAP HA by using a Windows failover
cluster
1. Set the ASCS/SCS load balancing rules for the Azure internal load balancer.
2. Add Windows virtual machines to the domain.
3. Add registry entries on both cluster nodes of the SAP ASCS/SCS instance
4. Set up a Windows Server failover cluster for an SAP ASCS/SCS instance
5. If you are using Windows Server 2016, we recommend that you configure Azure Cloud Witness.

Install SAP ASCS instance on both nodes


You need the following software from SAP:
SAP Software Provisioning Manager (SWPM) installation tool version SPS25 or later.
SAP Kernel 7.49 or later
Create a virtual host name (cluster network name) for the clustered SAP ASCS/SCS instance, as described in
Create a virtual host name for the clustered SAP ASCS/SCS instance.

NOTE
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel
7.49 (and later).

Install an ASCS/SCS instance on the first ASCS/SCS cluster node


1. Install an SAP ASCS/SCS instance on the first cluster node. Start the SAP SWPM installation tool, then
navigate to: Product > DBMS > Installation > Application Server ABAP (or Java) > High-Availability System
> ASCS/SCS instance > First cluster node.
2. Select File Share Cluster as the Cluster share Configuration in SWPM.
3. When prompted at step SAP System Cluster Parameters , enter the host name for the Azure NetApp Files
SMB share you already created as File Share Host Name . In this example, the SMB share host name is
anfsmb-9562 .

IMPORTANT
If Pre-requisite checker Results in SWPM shows Continuous availability feature condition not met, it can be addressed
by following the instructions in Delayed error message when you try to access a shared folder that no longer exists in
Windows.

TIP
If Pre-requisite checker Results in SWPM shows Swap Size condition not met, you can adjust the SWAP size by
navigating to My Computer>System Properties>Performance Settings> Advanced> Virtual memory> Change.

4. Configure an SAP cluster resource, the SAP-SID-IP probe port, by using PowerShell. Execute this
configuration on one of the SAP ASCS/SCS cluster nodes, as described in Configure probe port.
Install an ASCS/SCS instance on the second ASCS/SCS cluster node
1. Install an SAP ASCS/SCS instance on the second cluster node. Start the SAP SWPM installation tool, then
navigate to Product > DBMS > Installation > Application Server ABAP (or Java) > High-Availability System >
ASCS/SCS instance > Additional cluster node.
Install a DBMS instance and SAP application servers
Complete your SAP installation, by installing:
A DBMS instance
A primary SAP application server
An additional SAP application server

Test the SAP ASCS/SCS instance failover


Fail over from cluster node A to cluster node B and back
In this test scenario we will refer to cluster node sapascs1 as node A, and to cluster node sapascs2 as node B.
1. Verify that the cluster resources are running on node A.

2. Restart cluster node A. The SAP cluster resources will move to cluster node B.

Lock entry test


1. Verify that the SAP Enqueue Replication Server (ERS) is active.
2. Log on to the SAP system, execute transaction SU01, and open a user ID in change mode. That will generate an SAP
lock entry.
3. While you are logged on to the SAP system, display the lock entry by navigating to transaction SM12.
4. Fail over the ASCS resources from cluster node A to cluster node B.
5. Verify that the lock entry generated before the SAP ASCS/SCS cluster resources failover is retained.
For more information, see Troubleshooting for Enqueue Failover in ASCS with ERS

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure (large instances), see SAP HANA (large instances) high availability and disaster recovery on
Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
High availability for SAP NetWeaver on Azure VMs
on SUSE Linux Enterprise Server for SAP applications

This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system. In the example configurations and installation
commands, ASCS instance number 00, ERS instance number 02, and SAP system ID NW1 are used. The names of
the resources (for example virtual machines, virtual networks) in the example assume that you have used the
converged template with SAP system ID NW1 to create the resources.
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides: The guides contain all required information to set up NetWeaver HA and
SAP HANA System Replication on-premises. Use these guides as a general baseline. They provide much
more detailed information.
SUSE High Availability Extension 12 SP3 Release Notes

Overview
To achieve high availability, SAP NetWeaver requires an NFS server. The NFS server is configured in a separate
cluster and can be used by multiple SAP systems.
The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use
virtual hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using Standard load balancer. The following list shows the configuration of the (A)SCS and ERS load
balancer.
(A)SCS
Frontend configuration
IP address 10.0.0.7
Probe Port
Port 620<nr>
Load balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.0.0.8
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster

Setting up a highly available NFS server


SAP NetWeaver requires shared storage for the transport and profile directory. Read High availability for NFS on
Azure VMs on SUSE Linux Enterprise Server on how to set up an NFS server for SAP NetWeaver.

Setting up (A)SCS
You can either use an Azure template from GitHub to deploy all required Azure resources, including the virtual
machines, availability set, and load balancer, or you can deploy the resources manually.
Deploy Linux via Azure Template
The Azure Marketplace contains an image for SUSE Linux Enterprise Server for SAP Applications 12 that you can
use to deploy new virtual machines. The marketplace image contains the resource agent for SAP NetWeaver.
You can use one of the quickstart templates on GitHub to deploy all required resources. The template deploys the
virtual machines, the load balancer, availability set etc. Follow these steps to deploy the template:
1. Open the ASCS/SCS Multi SID template or the converged template on the Azure portal. The ASCS/SCS template
only creates the load-balancing rules for the SAP NetWeaver ASCS/SCS and ERS (Linux only) instances whereas
the converged template also creates the load-balancing rules for a database (for example Microsoft SQL Server
or SAP HANA). If you plan to install an SAP NetWeaver based system and you also want to install the database
on the same machines, use the converged template.
2. Enter the following parameters
a. Resource Prefix (ASCS/SCS Multi SID template only)
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. Sap System ID (converged template only)
Enter the SAP system ID of the SAP system you want to install. The ID is used as a prefix for the resources
that are deployed.
c. Stack Type
Select the SAP NetWeaver stack type
d. Os Type
Select one of the Linux distributions. For this example, select SLES 12 BYOS
e. Db Type
Select HANA
f. Sap System Size.
The amount of SAPS the new system provides. If you are not sure how many SAPS the system requires,
ask your SAP Technology Partner or System Integrator
g. System Availability
Select HA
h. Admin Username and Admin Password
A new user is created that can be used to log on to the machine.
i. Subnet ID
If you want to deploy the VM into an existing VNet where you have a subnet defined that the VM should be
assigned to, name the ID of that specific subnet. The ID usually looks like /subscriptions/<subscription
ID>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet
name>. You can retrieve the ID with Azure CLI, as shown in the sketch after this list.
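A minimal sketch for retrieving the subnet ID with Azure CLI (the resource group, virtual network, and subnet names are placeholders):

az network vnet subnet show --resource-group <resource group name> --vnet-name <virtual network name> \
    --name <subnet name> --query id --output tsv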
Deploy Linux manually via Azure portal
You first need to create the virtual machines for this (A)SCS cluster. Afterwards, you create a load balancer and use the
virtual machines in the backend pool.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set
Set max update domain
4. Create Virtual Machine 1
Use at least SLES4SAP 12 SP1, in this example the SLES4SAP 12 SP1 image
https://portal.azure.com/#create/SUSE.SUSELinuxEnterpriseServerforSAPApplications12SP1PremiumImage-
ARM
SLES For SAP Applications 12 SP1 is used
Select Availability Set created earlier
5. Create Virtual Machine 2
Use at least SLES4SAP 12 SP1, in this example the SLES4SAP 12 SP1 image
https://portal.azure.com/#create/SUSE.SUSELinuxEnterpriseServerforSAPApplications12SP1PremiumImage-
ARM
SLES For SAP Applications 12 SP1 is used
Select Availability Set created earlier
6. Add at least one data disk to both virtual machines
The data disks are used for the /usr/sap/<SAPSID> directory
7. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and nw1-
aers-frontend )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select Virtual Machine
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and nw1-
aers-hp )
d. Load-balancing rules
a. Load-balancing rules for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-ascs )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend , nw1-backend and nw1-ascs-hp )
d. Select HA ports
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example nw1-lb-ers )
8. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and nw1-
aers-frontend )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and nw1-
aers-hp )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-3200 )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above for ports 3600, 3900, 8100, 50013, 50014, 50016 and TCP for the
ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above for ports 3302, 50213, 50214, 50216 and TCP for the ASCS ERS

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.

NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load
balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to
public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for Virtual Machines
using Azure Standard Load Balancer in SAP high-availability scenarios.

IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
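A minimal sketch for applying and persisting this setting on the cluster VMs (the file name under /etc/sysctl.d is an arbitrary assumption):

# Disable TCP timestamps for the running system
sudo sysctl net.ipv4.tcp_timestamps=0
# Persist the setting across reboots
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/98-sap-loadbalancer.conf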

Create Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to create a basic Pacemaker
cluster for this (A)SCS server.
Installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
1. [A] Install SUSE Connector

sudo zypper install sap-suse-cluster-connector

NOTE
The known issue with using a dash in host names is fixed with version 3.1.1 of package sap-suse-cluster-
connector . Make sure that you are using at least version 3.1.1 of package sap-suse-cluster-connector, if using cluster
nodes with dash in the host name. Otherwise your cluster will not work.

Make sure that you installed the new version of the SAP SUSE cluster connector. The old one was called
sap_suse_cluster_connector and the new one is called sap-suse-cluster-connector .

sudo zypper info sap-suse-cluster-connector

Information for package sap-suse-cluster-connector:


---------------------------------------------------
Repository : SLE-12-SP3-SAP-Updates
Name : sap-suse-cluster-connector
Version : 3.0.0-2.2
Arch : noarch
Vendor : SUSE LLC <https://www.suse.com/>
Support Level : Level 3
Installed Size : 41.6 KiB
Installed : Yes
Status : up-to-date
Source package : sap-suse-cluster-connector-3.0.0-2.2.src
Summary : SUSE High Availability Setup for SAP Products

2. [A] Update SAP resource agents


A patch for the resource-agents package is required to use the new configuration that is described in this
article. You can check if the patch is already installed with the following command

sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance

The output should be similar to

<parameter name="IS_ERS" unique="0" required="0">

If the grep command does not find the IS_ERS parameter, you need to install the patch listed on the SUSE
download page

# example for patch for SLES 12 SP1


sudo zypper in -t patch SUSE-SLE-HA-12-SP1-2017-885=1
# example for patch for SLES 12 SP2
sudo zypper in -t patch SUSE-SLE-HA-12-SP2-2017-886=1

3. [A] Setup host name resolution


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP address and the hostname in the following commands
sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP address of the load balancer frontend configuration for NFS


10.0.0.4 nw1-nfs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
10.0.0.8 nw1-aers
# IP address of the load balancer frontend configuration for database
10.0.0.13 nw1-db

Prepare for SAP NetWeaver installation


1. [A] Create the shared directories

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/NW1/SYS
sudo mkdir -p /usr/sap/NW1/ASCS00
sudo mkdir -p /usr/sap/NW1/ERS02

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/NW1/SYS
sudo chattr +i /usr/sap/NW1/ASCS00
sudo chattr +i /usr/sap/NW1/ERS02

2. [A] Configure autofs

sudo vi /etc/auto.master

# Add the following line to the file, save and exit


+auto.master
/- /etc/auto.direct

Create a file with

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit


/sapmnt/NW1 -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/sapmntsid
/usr/sap/trans -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/trans
/usr/sap/NW1/SYS -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/sidsys

Restart autofs to mount the new shares

sudo systemctl enable autofs


sudo service autofs restart

3. [A] Configure SWAP file


sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value
that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

Installing SAP NetWeaver ASCS/ERS


1. [1] Create a virtual IP resource and health-probe for the ASCS instance

IMPORTANT
Recent testing revealed situations, where netcat stops responding to requests due to backlog and its limitation of
handling only one connection. The netcat resource stops listening to the Azure Load balancer requests and the
floating IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we recommend
using azure-lb resource agent, which is part of package resource-agents, with the following package version
requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change will require brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure Load-
Balancer Detection Hardening, there is no requirement to switch immediately to azure-lb resource agent.
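Before configuring the azure-lb resource, you can check the installed resource-agents package version against the requirements above, for example:

sudo zypper info resource-agents | grep -i version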

sudo crm node standby nw1-cl-1

sudo crm configure primitive fs_NW1_ASCS Filesystem device='nw1-nfs:/NW1/ASCS' \
directory='/usr/sap/NW1/ASCS00' fstype='nfs4' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW1_ASCS IPaddr2 \


params ip=10.0.0.7 cidr_netmask=24 \
op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW1_ASCS azure-lb port=62000

sudo crm configure group g-NW1_ASCS fs_NW1_ASCS nc_NW1_ASCS vip_NW1_ASCS \


meta resource-stickiness=3000

Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo crm_mon -r

# Node nw1-cl-1: standby


# Online: [ nw1-cl-0 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started nw1-cl-0
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0

2. [1] Install SAP NetWeaver ASCS


Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP address
of the load balancer frontend configuration for the ASCS, for example nw1-ascs , 10.0.0.7 and the instance
number that you used for the probe of the load balancer, for example 00 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00, try setting the owner and group of
the ASCS00 folder and retry.

chown nw1adm /usr/sap/NW1/ASCS00


chgrp sapsys /usr/sap/NW1/ASCS00

3. [1] Create a virtual IP resource and health-probe for the ERS instance

sudo crm node online nw1-cl-1


sudo crm node standby nw1-cl-0

sudo crm configure primitive fs_NW1_ERS Filesystem device='nw1-nfs:/NW1/ASCSERS' \
directory='/usr/sap/NW1/ERS02' fstype='nfs4' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW1_ERS IPaddr2 \


params ip=10.0.0.8 cidr_netmask=24 \
op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW1_ERS azure-lb port=62102

sudo crm configure group g-NW1_ERS fs_NW1_ERS nc_NW1_ERS vip_NW1_ERS

Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo crm_mon -r

# Node nw1-cl-0: standby


# Online: [ nw1-cl-1 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started nw1-cl-1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
# Resource Group: g-NW1_ERS
# fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1

4. [2] Install SAP NetWeaver ERS


Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP address
of the load balancer frontend configuration for the ERS, for example nw1-aers , 10.0.0.8 and the instance
number that you used for the probe of the load balancer, for example 02 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

NOTE
Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will fail.

If the installation fails to create a subfolder in /usr/sap/NW1/ERS02, try setting the owner and group of the
ERS02 folder and retry.

chown nw1adm /usr/sap/NW1/ERS02


chgrp sapsys /usr/sap/NW1/ERS02

5. [1] Adapt the ASCS/SCS and ERS instance profiles


ASCS/SCS profile

sudo vi /sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# Add the keep alive parameter, if using ENSA1


enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP note
1410736.
ERS profile

sudo vi /sapmnt/NW1/profile/NW1_ERS02_nw1-aers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# remove Autostart from ERS profile


# Autostart = 1

6. [A] Configure Keep Alive


The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a
software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To
prevent this you need to set a parameter in the SAP NetWeaver ASCS/SCS profile, if using ENSA1, and
change the Linux system keepalive settings on all SAP servers for both ENSA1/ENSA2. Read SAP Note
1410736 for more information.

# Change the Linux system configuration


sudo sysctl net.ipv4.tcp_keepalive_time=300

7. [A] Configure the SAP users after the installation

# Add sidadm to the haclient group


sudo usermod -aG haclient nw1adm

8. [1] Add the ASCS and ERS SAP services to the sapservice file
Add the ASCS service entry to the second node and copy the ERS service entry to the first node.

cat /usr/sap/sapservices | grep ASCS00 | sudo ssh nw1-cl-1 "cat >>/usr/sap/sapservices"


sudo ssh nw1-cl-1 "cat /usr/sap/sapservices" | grep ERS02 | sudo tee -a /usr/sap/sapservices

9. [1] Create the SAP cluster resources


If using enqueue server 1 architecture (ENSA1), define the resources as follows:
sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \


operations \$id=rsc_sap_NW1_ASCS00-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10

sudo crm configure primitive rsc_sap_NW1_ERS02 SAPInstance \


operations \$id=rsc_sap_NW1_ERS02-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers"
AUTOMATIC_RECOVER=false IS_ERS=true \
meta priority=1000

sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00


sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS02

sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS


sudo crm configure location loc_sap_NW1_failover_to_ers rsc_sap_NW1_ASCS00 rule 2000: runs_ers_NW1 eq 1
sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start
rsc_sap_NW1_ERS02:stop symmetrical=false

sudo crm node online nw1-cl-0


sudo crm configure property maintenance-mode="false"

SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support. If
using enqueue server 2 architecture (ENSA2), define the resources as follows:

sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \


operations \$id=rsc_sap_NW1_ASCS00-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000

sudo crm configure primitive rsc_sap_NW1_ERS02 SAPInstance \


operations \$id=rsc_sap_NW1_ERS02-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers"
AUTOMATIC_RECOVER=false IS_ERS=true

sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00


sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS02

sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS


sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start
rsc_sap_NW1_ERS02:stop symmetrical=false

sudo crm node online nw1-cl-0


sudo crm configure property maintenance-mode="false"

If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641019.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.
sudo crm_mon -r

# Online: [ nw1-cl-0 nw1-cl-1 ]


#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started nw1-cl-1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
# Resource Group: g-NW1_ERS
# fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
# rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

SAP NetWeaver application server preparation


Some databases require that the database instance installation is executed on an application server. Prepare the
application server virtual machines to be able to use them in these cases.
The steps below assume that you install the application server on a server different from the ASCS/SCS and HANA
servers. Otherwise some of the steps below (like configuring host name resolution) are not needed.
1. Configure operating system
Reduce the size of the dirty cache. For more information, see Low write performance on SLES 11/12 servers
with large RAM.

sudo vi /etc/sysctl.conf

# Change/set the following settings


vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800

2. Setup host name resolution


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP address of the load balancer frontend configuration for NFS


10.0.0.4 nw1-nfs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.0.0.8 nw1-aers
# IP address of the load balancer frontend configuration for database
10.0.0.13 nw1-db
# IP address of all application servers
10.0.0.20 nw1-di-0
10.0.0.21 nw1-di-1
3. Create the sapmnt directory

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans

4. Configure autofs

sudo vi /etc/auto.master

# Add the following line to the file, save and exit


+auto.master
/- /etc/auto.direct

Create a new file with

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit


/sapmnt/NW1 -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/sapmntsid
/usr/sap/trans -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/trans

Restart autofs to mount the new shares

sudo systemctl enable autofs


sudo service autofs restart

5. Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value
that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported database for this
installation. For more information on how to install SAP HANA in Azure, see High Availability of SAP HANA on
Azure Virtual Machines (VMs). For a list of supported databases, see SAP Note 1928533.
1. Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP address of
the load balancer frontend configuration for the database for example nw1-db and 10.0.0.13 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.
1. Prepare application server
Follow the steps in the chapter SAP NetWeaver application server preparation above to prepare the
application server.
2. Install SAP NetWeaver application server
Install a primary or additional SAP NetWeaver applications server.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. Update SAP HANA secure store


Update the SAP HANA secure store to point to the virtual name of the SAP HANA System Replication setup.
Run the following command to list the entries

hdbuserstore List

This should list all entries and should look similar to

DATA FILE : /home/nw1adm/.hdb/nw1-di-0/SSFS_HDB.DAT


KEY FILE : /home/nw1adm/.hdb/nw1-di-0/SSFS_HDB.KEY

KEY DEFAULT
ENV : 10.0.0.14:30313
USER: SAPABAP1
DATABASE: HN1

The output shows that the IP address of the default entry is pointing to the virtual machine and not to the
load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the load
balancer. Make sure to use the same port (30313 in the output above) and database name (HN1 in the
output above)!

su - nw1adm
hdbuserstore SET DEFAULT nw1-db:30313@HN1 SAPABAP1 <password of ABAP schema>

Test the cluster setup


The following tests are a copy of the test cases in the best practices guides of SUSE. They are copied for your
convenience. Always also read the best practices guides and perform all additional tests that might have been
added.
1. Test HAGetFailoverConfig, HACheckConfig and HACheckFailoverConfig
Run the following commands as <sapsid>adm on the node where the ASCS instance is currently running. If
the commands fail with FAIL: Insufficient memory, it might be caused by dashes in your hostname. This is a
known issue and will be fixed by SUSE in the sap-suse-cluster-connector package.
nw1-cl-0:nw1adm 54> sapcontrol -nr 00 -function HAGetFailoverConfig

# 15.08.2018 13:50:36
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: Toolchain Module
# HASAPInterfaceVersion: Toolchain Module (sap_suse_cluster_connector 3.0.1)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode:
# HANodes: nw1-cl-0, nw1-cl-1

nw1-cl-0:nw1adm 55> sapcontrol -nr 00 -function HACheckConfig

# 15.08.2018 14:00:04
# HACheckConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, Redundant ABAP instance configuration, 2 ABAP instances detected
# SUCCESS, SAP CONFIGURATION, Redundant Java instance configuration, 0 Java instances detected
# SUCCESS, SAP CONFIGURATION, Enqueue separation, All Enqueue server separated from application server
# SUCCESS, SAP CONFIGURATION, MessageServer separation, All MessageServer separated from application
server
# SUCCESS, SAP CONFIGURATION, ABAP instances on multiple hosts, ABAP instances on multiple hosts
detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP SPOOL service configuration, 2 ABAP instances with SPOOL
service detected
# SUCCESS, SAP STATE, Redundant ABAP SPOOL service state, 2 ABAP instances with active SPOOL service
detected
# SUCCESS, SAP STATE, ABAP instances with ABAP SPOOL service on multiple hosts, ABAP instances with
active ABAP SPOOL service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP BATCH service configuration, 2 ABAP instances with BATCH
service detected
# SUCCESS, SAP STATE, Redundant ABAP BATCH service state, 2 ABAP instances with active BATCH service
detected
# SUCCESS, SAP STATE, ABAP instances with ABAP BATCH service on multiple hosts, ABAP instances with
active ABAP BATCH service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP DIALOG service configuration, 2 ABAP instances with DIALOG
service detected
# SUCCESS, SAP STATE, Redundant ABAP DIALOG service state, 2 ABAP instances with active DIALOG service
detected
# SUCCESS, SAP STATE, ABAP instances with ABAP DIALOG service on multiple hosts, ABAP instances with
active ABAP DIALOG service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP UPDATE service configuration, 2 ABAP instances with UPDATE
service detected
# SUCCESS, SAP STATE, Redundant ABAP UPDATE service state, 2 ABAP instances with active UPDATE service
detected
# SUCCESS, SAP STATE, ABAP instances with ABAP UPDATE service on multiple hosts, ABAP instances with
active ABAP UPDATE service on multiple hosts detected
# SUCCESS, SAP STATE, SCS instance running, SCS instance status ok
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version (nw1-ascs_NW1_00), SAPInstance includes
is-ers patch
# SUCCESS, SAP CONFIGURATION, Enqueue replication (nw1-ascs_NW1_00), Enqueue replication enabled
# SUCCESS, SAP STATE, Enqueue replication state (nw1-ascs_NW1_00), Enqueue replication active

nw1-cl-0:nw1adm 56> sapcontrol -nr 00 -function HACheckFailoverConfig

# 15.08.2018 14:04:08
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch

2. Manually migrate the ASCS instance


Resource state before starting the test:
stonith-sbd (stonith:external/sbd): Started nw1-cl-0
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root to migrate the ASCS instance.

nw1-cl-0:~ # crm resource migrate rsc_sap_NW1_ASCS00 force


# INFO: Move constraint created for rsc_sap_NW1_ASCS00

nw1-cl-0:~ # crm resource unmigrate rsc_sap_NW1_ASCS00


# INFO: Removed migration constraints for rsc_sap_NW1_ASCS00

# Remove failed actions for the ERS that occurred as part of the migration
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

3. Test HAFailoverToNode
Resource state before starting the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following commands as <sapsid>adm to migrate the ASCS instance.


nw1-cl-0:nw1adm 55> sapcontrol -nr 00 -host nw1-ascs -user nw1adm <password> -function HAFailoverToNode
""

# run as root
# Remove failed actions for the ERS that occurred as part of the migration
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02
# Remove migration constraints
nw1-cl-0:~ # crm resource clear rsc_sap_NW1_ASCS00
#INFO: Removed migration constraints for rsc_sap_NW1_ASCS00

Resource state after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

4. Simulate node crash


Resource state before starting the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following command as root on the node where the ASCS instance is running

nw1-cl-0:~ # echo b > /proc/sysrq-trigger

If you use SBD, Pacemaker should not automatically start on the killed node. The status after the node is
started again should look like this.
Online: [ nw1-cl-1 ]
OFFLINE: [ nw1-cl-0 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Failed Actions:
* rsc_sap_NW1_ERS02_monitor_11000 on nw1-cl-1 'not running' (7): call=219, status=complete,
exitreason='none',
last-rc-change='Wed Aug 15 14:38:38 2018', queued=0ms, exec=0ms

Use the following commands to start Pacemaker on the killed node, clean the SBD messages, and clean the
failed resources.

# run as root
# list the SBD device(s)
nw1-cl-0:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"

nw1-cl-0:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 -d /dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1 -d /dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3 message nw1-cl-0 clear

nw1-cl-0:~ # systemctl start pacemaker


nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ASCS00
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02
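Whether Pacemaker and SBD are allowed to start automatically after the node was fenced depends on your SBD configuration, in particular the SBD_STARTMODE setting in /etc/sysconfig/sbd. The check below is an optional addition to the test; the value shown is only an example, verify what your own setup uses.

# run as root
nw1-cl-0:~ # grep SBD_STARTMODE /etc/sysconfig/sbd
# e.g. SBD_STARTMODE=clean
# With "clean", sbd refuses to start while a fence message is still pending on the node's slot,
# which is why the SBD message is cleared above before Pacemaker is started.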

Resource state after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

5. Test manual restart of ASCS instance


Resource state before starting the test:
stonith-sbd (stonith:external/sbd): Started nw1-cl-1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Create an enqueue lock by, for example, editing a user in transaction su01. Run the following commands as
<sapsid>adm on the node where the ASCS instance is running. The commands stop the ASCS instance
and start it again. If the enqueue server 1 architecture (ENSA1) is used, the enqueue lock is expected to be lost in this test. If
the enqueue server 2 architecture (ENSA2) is used, the enqueue lock is retained. An optional way to check the locks is sketched below.
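Optionally, you can compare the enqueue statistics before and after the restart to see whether locks survived. This is an addition to the original test procedure; the EnqGetStatistic web method is assumed to be available in your SAP kernel/sapcontrol release, and transaction SM12 can be used in the SAP GUI instead.

nw1-cl-1:nw1adm 53> sapcontrol -nr 00 -function EnqGetStatistic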

nw1-cl-1:nw1adm 54> sapcontrol -nr 00 -function StopWait 600 2

The ASCS instance should now be disabled in Pacemaker

rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Stopped (disabled)

Start the ASCS instance again on the same node.

nw1-cl-1:nw1adm 54> sapcontrol -nr 00 -function StartWait 600 2

The enqueue lock of transaction su01 should be lost and the back-end should have been reset. Resource
state after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

6. Kill message server process


Resource state before starting the test:
stonith-sbd (stonith:external/sbd): Started nw1-cl-1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following commands as root to identify the process of the message server and kill it.

nw1-cl-1:~ # pgrep ms.sapNW1 | xargs kill -9

If you kill the message server only once, it will be restarted by sapstart. If you kill it often enough, Pacemaker
will eventually move the ASCS instance to the other node; a minimal loop for repeating the kill is sketched below. After the failover, run the cleanup commands shown after the sketch as root to clean up
the resource state of the ASCS and ERS instance.
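The loop below is an illustration only and not part of the original SUSE test cases; adjust the repetition count and the sleep interval to your environment.

# run as root on the node where the ASCS instance is running
nw1-cl-1:~ # for i in 1 2 3 4 5; do pgrep ms.sapNW1 | xargs -r kill -9; sleep 30; done
# watch the cluster status while the loop runs
nw1-cl-1:~ # crm_mon -1 -r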

nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ASCS00


nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

7. Kill enqueue server process


Resource state before starting the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following command as root on the node where the ASCS instance is running to kill the enqueue
server.
nw1-cl-0:~ # pgrep en.sapNW1 | xargs kill -9

The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over
after the ASCS instance is started. Run the following commands as root to clean up the resource state of the
ASCS and ERS instance after the test.

nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ASCS00


nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

8. Kill enqueue replication server process


Resource state before starting the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.

nw1-cl-0:~ # pgrep er.sapNW1 | xargs kill -9

If you only run the command once, sapstart will restart the process. If you run it often enough, sapstart will
not restart the process and the resource will be in a stopped state. Run the following commands as root to
clean up the resource state of the ERS instance after the test.

nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:


stonith-sbd (stonith:external/sbd): Started nw1-cl-1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

9. Kill enqueue sapstartsrv process


Resource state before starting the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following commands as root on the node where the ASCS is running.

nw1-cl-1:~ # pgrep -fl ASCS00.*sapstartsrv


# 59545 sapstartsrv

nw1-cl-1:~ # kill -9 59545
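The PID lookup and kill can also be combined in one line. This one-liner is a sketch and assumes the pattern matches only the sapstartsrv process of the ASCS00 instance.

nw1-cl-1:~ # pgrep -f 'ASCS00.*sapstartsrv' | xargs -r kill -9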

The sapstartsrv process should always be restarted by the Pacemaker resource agent. Resource state after
the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications
12/22/2020 • 40 minutes to read

This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system using Azure NetApp Files. In the example
configurations, installation commands, and so on, the ASCS instance is number 00, the ERS instance is number 01, the
primary application server instance (PAS) is 02, and the additional application server instance (AAS) is 03. The SAP system ID QAS is used.
This article explains how to achieve high availability for the SAP NetWeaver application with Azure NetApp Files. The
database layer isn't covered in detail in this article.
Read the following SAP Notes and papers first:
Azure NetApp Files documentation
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension for
SAP.
SAP Community WIKI (https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required SAP
Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides. The guides contain all required information to set up NetWeaver HA and SAP
HANA System Replication on-premises. Use these guides as a general baseline; they provide much more
detailed information.
SUSE High Availability Extension 12 SP3 Release Notes
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving this on SUSE
Linux required building a separate highly available NFS cluster.
Now it is possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files. Using
Azure NetApp Files for the shared storage eliminates the need for an additional NFS cluster. Pacemaker is still needed
for HA of the SAP NetWeaver central services (ASCS/SCS).

SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual
hostnames and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using Standard load balancer. The following list shows the configuration of the (A)SCS and ERS load
balancer.
(A)SCS
Frontend configuration
IP address 10.1.1.20
Probe Port
Port 620<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.1.1.21
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster

Setting up the Azure NetApp Files infrastructure


SAP NetWeaver requires shared storage for the transport and profile directory. Before proceeding with the setup
for Azure NetApp files infrastructure, familiarize yourself with the Azure NetApp Files documentation. Check if your
selected Azure region offers Azure NetApp Files. The following link shows the availability of Azure NetApp Files by
Azure region: Azure NetApp Files Availability by Azure Region.
Azure NetApp Files is available in several Azure regions. Before deploying Azure NetApp Files, request onboarding
to Azure NetApp Files by following the Register for Azure NetApp Files instructions.
Deploy Azure NetApp Files resources
The steps assume that you have already deployed Azure Virtual Network. The Azure NetApp Files resources and the
VMs, where the Azure NetApp Files resources will be mounted must be deployed in the same Azure Virtual
Network or in peered Azure Virtual Networks.
1. If you haven't done that already, request onboarding to Azure NetApp Files.
2. Create the NetApp account in the selected Azure region, following the instructions to create NetApp Account.
3. Set up Azure NetApp Files capacity pool, following the instructions on how to set up Azure NetApp Files
capacity pool.
The SAP NetWeaver architecture presented in this article uses a single Azure NetApp Files capacity pool with
the Premium SKU. We recommend the Azure NetApp Files Premium SKU for SAP NetWeaver application workloads
on Azure.
4. Delegate a subnet to Azure NetApp Files as described in the instructions Delegate a subnet to Azure NetApp
Files.
5. Deploy Azure NetApp Files volumes, following the instructions to create a volume for Azure NetApp Files.
Deploy the volumes in the designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp
volumes are assigned automatically. Keep in mind that the Azure NetApp Files resources and the Azure VMs
must be in the same Azure Virtual Network or in peered Azure Virtual Networks. In this example we use two
Azure NetApp Files volumes: sapQAS and trans. The file paths that are mounted to the corresponding mount
points are /usrsapqas/sapmntQAS, /usrsapqas/usrsapQASsys, and so on.
a. volume sapQAS (nfs://10.1.0.4/usrsapqas/sapmntQAS)
b. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASascs)
c. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASsys)
d. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASers)
e. volume trans (nfs://10.1.0.4/trans)
f. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASpas)
g. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASaas)
In this example, we used Azure NetApp Files for all SAP NetWeaver file systems to demonstrate how Azure NetApp
Files can be used. The SAP file systems that don't need to be mounted via NFS can also be deployed as Azure disk
storage. In this example, a-e must be on Azure NetApp Files and f-g (that is, /usr/sap/QAS/D02 and
/usr/sap/QAS/D03) could be deployed as Azure disk storage.
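The Azure NetApp Files resources can also be created with the Azure CLI. The following is a minimal sketch; the resource group, location, NetApp account, capacity pool, virtual network, and delegated subnet names are placeholders, and the sizes and file path are examples only. Parameter names can differ slightly between Azure CLI versions.

az netappfiles account create --resource-group MyResourceGroup --location westeurope --account-name MyNetAppAccount
az netappfiles pool create --resource-group MyResourceGroup --location westeurope --account-name MyNetAppAccount --pool-name sap-pool --service-level Premium --size 4
# example: the sapQAS volume; repeat the volume create for the trans volume with its own file path and size
az netappfiles volume create --resource-group MyResourceGroup --location westeurope --account-name MyNetAppAccount --pool-name sap-pool --name sapQAS --service-level Premium --usage-threshold 500 --file-path usrsapqas --protocol-types NFSv3 --vnet MyVNet --subnet anf-subnet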
Important considerations
When considering Azure NetApp Files for the SAP Netweaver on SUSE High Availability architecture, be aware of
the following important considerations:
The minimum capacity pool is 4 TiB. The capacity pool size can be increased in 1 TiB increments.
The minimum volume size is 100 GiB.
Azure NetApp Files and all virtual machines, where Azure NetApp Files volumes will be mounted, must be in the
same Azure Virtual Network or in peered virtual networks in the same region. Azure NetApp Files access over
VNet peering in the same region is supported. Azure NetApp Files access over global VNet peering is not yet
supported.
The selected virtual network must have a subnet, delegated to Azure NetApp Files.
Azure NetApp Files offers export policy: you can control the allowed clients and the access type (read and write, read
only, and so on).
The Azure NetApp Files feature isn't zone aware yet. Currently, the feature isn't deployed in all
availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions.
Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both protocols are supported for
the SAP application layer (ASCS/ERS, SAP application servers).

Deploy Linux VMs manually via Azure portal


First you need to create the Azure NetApp Files volumes. Deploy the VMs. Afterwards, you create a load balancer
and use the virtual machines in the backend pools.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set for ASCS
Set max update domain
4. Create Virtual Machine 1
Use at least SLES4SAP 12 SP3, in this example the SLES4SAP 12 SP3 image is used
Select Availability Set created earlier for ASCS
5. Create Virtual Machine 2
Use at least SLES4SAP 12 SP3, in this example the SLES4SAP 12 SP3 image is used
Select Availability Set created earlier for ASCS
6. Create an Availability Set for the SAP application instances (PAS, AAS)
Set max update domain
7. Create Virtual Machine 3
Use at least SLES4SAP 12 SP3, in this example the SLES4SAP 12 SP3 image is used
Select Availability Set created earlier for PAS/AAS
8. Create Virtual Machine 4
Use at least SLES4SAP 12 SP3, in this example the SLES4SAP 12 SP3 image is used
Select Availability Set created earlier for PAS/AAS

Disable ID mapping (if using NFSv4.1)


The instructions in this section are only applicable, if using Azure NetApp Files volumes with NFSv4.1 protocol.
Perform the configuration on all VMs, where Azure NetApp Files NFSv4.1 volumes will be mounted.
1. Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files
domain, i.e. defaultv4iddomain.com, and that the mapping is set to nobody.

IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure
NetApp volumes that are mounted on the VMs will be displayed as nobody .

sudo cat /etc/idmapd.conf


# Example
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
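If the domain isn't set to the expected value, one way to adjust it is shown below. The sed expression is a sketch (it assumes a single Domain line, possibly commented out); review the file before and after the change.

sudo sed -i 's/^[#[:space:]]*Domain *=.*/Domain = defaultv4iddomain.com/' /etc/idmapd.conf
sudo grep Domain /etc/idmapd.conf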

2. [A] Verify nfs4_disable_idmapping. It should be set to Y . To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create the
directory under /sys/modules, because access is reserved for the kernel / drivers.

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.1.0.4:/sapmnt/qas /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

Setting up (A)SCS
In this example, the resources were deployed manually via the Azure portal.
Deploy Azure Load Balancer manually via Azure portal
First you need to create the Azure NetApp Files volumes. Deploy the VMs. Afterwards, you create a load balancer
and use the virtual machines in the backend pool.
1. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 10.1.1.20 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 10.1.1.20 )
d. Click OK
b. IP address 10.1.1.21 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example 10.1.1.21
and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select Virtual machine
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example 62101
and health.QAS.ERS )
d. Load-balancing rules
a. Create a backend pool for the ASCS
a. Open the load balancer, select Load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier
(for example frontend.QAS.ASCS , backend.QAS and health.QAS.ASCS )
d. Select HA ports
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example lb.QAS.ERS )
2. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 10.1.1.20 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 10.1.1.20 )
d. Click OK
b. IP address 10.1.1.21 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example 10.1.1.21
and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier for ASCS
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example 62101
and health.QAS.ERS )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select Load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS.3200 )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier
(for example frontend.QAS.ASCS )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above under "d" for ports 3600, 3900, 8100, 50013, 50014, 50016 and
protocol TCP for the ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above under "d" for ports 3201, 3301, 50113, 50114, 50116 and protocol TCP for
the ASCS ERS
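The portal steps above can also be scripted. The following Azure CLI sketch creates the Standard internal load balancer variant with HA ports rules; the resource group, virtual network, subnet, and resource names are placeholders, and parameter names can vary slightly between Azure CLI versions.

az network lb create --resource-group MyResourceGroup --name lb-QAS --sku Standard --vnet-name MyVNet --subnet MySubnet --frontend-ip-name frontend.QAS.ASCS --private-ip-address 10.1.1.20 --backend-pool-name backend.QAS
az network lb frontend-ip create --resource-group MyResourceGroup --lb-name lb-QAS --name frontend.QAS.ERS --vnet-name MyVNet --subnet MySubnet --private-ip-address 10.1.1.21
az network lb probe create --resource-group MyResourceGroup --lb-name lb-QAS --name health.QAS.ASCS --protocol Tcp --port 62000
az network lb probe create --resource-group MyResourceGroup --lb-name lb-QAS --name health.QAS.ERS --protocol Tcp --port 62101
# HA ports rules (protocol All, ports 0) with floating IP and 30 minute idle timeout
az network lb rule create --resource-group MyResourceGroup --lb-name lb-QAS --name lb.QAS.ASCS --protocol All --frontend-port 0 --backend-port 0 --frontend-ip-name frontend.QAS.ASCS --backend-pool-name backend.QAS --probe-name health.QAS.ASCS --floating-ip true --idle-timeout 30
az network lb rule create --resource-group MyResourceGroup --lb-name lb-QAS --name lb.QAS.ERS --protocol All --frontend-port 0 --backend-port 0 --frontend-ip-name frontend.QAS.ERS --backend-pool-name backend.QAS --probe-name health.QAS.ERS --floating-ip true --idle-timeout 30
# afterwards add the primary NIC IP configurations of the cluster VMs to the backend pool (for example with az network nic ip-config address-pool add)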

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see
Azure Load balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address)
Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration
is performed to allow routing to public end points. For details on how to achieve outbound connectivity see
Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability
scenarios.

IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps
will cause the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load
Balancer health probes.

Create Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to create a basic Pacemaker
cluster for this (A)SCS server.
Installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
1. [A] Install SUSE Connector

sudo zypper install sap-suse-cluster-connector

NOTE
The known issue with using a dash in host names is fixed with version 3.1.1 of package sap-suse-cluster-
connector . Make sure that you are using at least version 3.1.1 of package sap-suse-cluster-connector, if using cluster
nodes with dash in the host name. Otherwise your cluster will not work.

Make sure that you installed the new version of the SAP SUSE cluster connector. The old one was called
sap_suse_cluster_connector and the new one is called sap-suse-cluster-connector .

sudo zypper info sap-suse-cluster-connector

# Information for package sap-suse-cluster-connector:


# ---------------------------------------------------
# Repository : SLE-12-SP3-SAP-Updates
# Name : sap-suse-cluster-connector
# Version : 3.1.0-8.1
# Arch : noarch
# Vendor : SUSE LLC <https://www.suse.com/>
# Support Level : Level 3
# Installed Size : 45.6 KiB
# Installed : Yes
# Status : up-to-date
# Source package : sap-suse-cluster-connector-3.1.0-8.1.src
# Summary : SUSE High Availability Setup for SAP Products

2. [A] Update SAP resource agents


A patch for the resource-agents package is required to use the new configuration that is described in this
article. You can check whether the patch is already installed with the following command

sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance

The output should be similar to

<parameter name="IS_ERS" unique="0" required="0">

If the grep command does not find the IS_ERS parameter, you need to install the patch listed on the SUSE
download page

# example for patch for SLES 12 SP1


sudo zypper in -t patch SUSE-SLE-HA-12-SP1-2017-885=1
# example for patch for SLES 12 SP2
sudo zypper in -t patch SUSE-SLE-HA-12-SP2-2017-886=1

3. [A] Setup host name resolution


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP address of cluster node 1


10.1.1.18 anftstsapcl1
# IP address of cluster node 2
10.1.1.6 anftstsapcl2
# IP address of the load balancer frontend configuration for SAP Netweaver ASCS
10.1.1.20 anftstsapvh
# IP address of the load balancer frontend configuration for SAP Netweaver ERS
10.1.1.21 anftstsapers

4. [1] Create SAP directories in the Azure NetApp Files volume.


Temporarily mount the Azure NetApp Files volume on one of the VMs and create the SAP directories (file
paths).
# mount temporarily the volume
sudo mkdir -p /saptmp
# If using NFSv3
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 10.1.0.4:/sapQAS /saptmp
# If using NFSv4.1
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys,tcp 10.1.0.4:/sapQAS /saptmp
# create the SAP directories (absolute paths are used because "sudo cd" does not change the working directory)
sudo mkdir -p /saptmp/sapmntQAS
sudo mkdir -p /saptmp/usrsapQASascs
sudo mkdir -p /saptmp/usrsapQASers
sudo mkdir -p /saptmp/usrsapQASsys
sudo mkdir -p /saptmp/usrsapQASpas
sudo mkdir -p /saptmp/usrsapQASaas
# unmount the volume and delete the temporary directory
sudo umount /saptmp
sudo rmdir /saptmp

Prepare for SAP NetWeaver installation


1. [A] Create the shared directories

sudo mkdir -p /sapmnt/QAS


sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/QAS/SYS
sudo mkdir -p /usr/sap/QAS/ASCS00
sudo mkdir -p /usr/sap/QAS/ERS01

sudo chattr +i /sapmnt/QAS


sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/QAS/SYS
sudo chattr +i /usr/sap/QAS/ASCS00
sudo chattr +i /usr/sap/QAS/ERS01

2. [A] Configure autofs

sudo vi /etc/auto.master
# Add the following line to the file, save and exit
/- /etc/auto.direct

If using NFSv3, create a file with:

sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/SYS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASsys

If using NFSv4.1, create a file with:


sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/SYS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASsys

NOTE
Make sure to match the NFS protocol version of the Azure NetApp Files volumes, when mounting the volumes. If the
Azure NetApp Files volumes are created as NFSv3 volumes, use the corresponding NFSv3 configuration. If the Azure
NetApp Files volumes are created as NFSv4.1 volumes, follow the instructions to disable ID mapping and make sure to
use the corresponding NFSv4.1 configuration. In this example the Azure NetApp Files volumes were created as NFSv3
volumes.

Restart autofs to mount the new shares

sudo systemctl enable autofs


sudo service autofs restart

3. [A] Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value
that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

Installing SAP NetWeaver ASCS/ERS


1. [1] Create a virtual IP resource and health-probe for the ASCS instance
IMPORTANT
Recent testing revealed situations, where netcat stops responding to requests due to backlog and its limitation of
handling only one connection. The netcat resource stops listening to the Azure Load balancer requests and the
floating IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we recommend
using azure-lb resource agent, which is part of package resource-agents, with the following package version
requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change will require brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure Load-
Balancer Detection Hardening, there is no requirement to switch immediately to azure-lb resource agent.
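To verify which resource-agents package version is installed, a quick check on each cluster node is:

sudo rpm -q resource-agents
# or
sudo zypper info resource-agents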

sudo crm node standby anftstsapcl2


# If using NFSv3
sudo crm configure primitive fs_QAS_ASCS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASascs'
directory='/usr/sap/QAS/ASCS00' fstype='nfs' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

# If using NFSv4.1
sudo crm configure primitive fs_QAS_ASCS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASascs'
directory='/usr/sap/QAS/ASCS00' fstype='nfs' options='sec=sys,vers=4.1' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

sudo crm configure primitive vip_QAS_ASCS IPaddr2 \


params ip=10.1.1.20 cidr_netmask=24 \
op monitor interval=10 timeout=20

sudo crm configure primitive nc_QAS_ASCS azure-lb port=62000

sudo crm configure group g-QAS_ASCS fs_QAS_ASCS nc_QAS_ASCS vip_QAS_ASCS \


meta resource-stickiness=3000

Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.

sudo crm_mon -r

# Node anftstsapcl2: standby


# Online: [ anftstsapcl1 ]
#
# Full list of resources:
#
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
# stonith-sbd (stonith:external/sbd): Started anftstsapcl2

2. [1] Install SAP NetWeaver ASCS


Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP address
of the load balancer frontend configuration for the ASCS, for example anftstsapvh , 10.1.1.20 and the
instance number that you used for the probe of the load balancer, for example 00 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual hostname.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

If the installation fails to create a subfolder in /usr/sap/QAS/ASCS00, try setting the owner and group of the
ASCS00 folder and retry.

chown qasadm /usr/sap/QAS/ASCS00


chgrp sapsys /usr/sap/QAS/ASCS00

3. [1] Create a virtual IP resource and health-probe for the ERS instance

sudo crm node online anftstsapcl2


sudo crm node standby anftstsapcl1
# If using NFSv3
sudo crm configure primitive fs_QAS_ERS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASers'
directory='/usr/sap/QAS/ERS01' fstype='nfs' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

# If using NFSv4.1
sudo crm configure primitive fs_QAS_ERS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASers'
directory='/usr/sap/QAS/ERS01' fstype='nfs' options='sec=sys,vers=4.1' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

sudo crm configure primitive vip_QAS_ERS IPaddr2 \


params ip=10.1.1.21 cidr_netmask=24 \
op monitor interval=10 timeout=20

sudo crm configure primitive nc_QAS_ERS azure-lb port=62101

sudo crm configure group g-QAS_ERS fs_QAS_ERS nc_QAS_ERS vip_QAS_ERS

Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.

sudo crm_mon -r

# Node anftstsapcl1: standby


# Online: [ anftstsapcl2 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
# Resource Group: g-QAS_ERS
# fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
4. [2] Install SAP NetWeaver ERS
Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP address
of the load balancer frontend configuration for the ERS, for example anftstsapers , 10.1.1.21 and the
instance number that you used for the probe of the load balancer, for example 01 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual hostname.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

NOTE
Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will fail.

If the installation fails to create a subfolder in /usr/sap/QAS/ERS01, try setting the owner and group of the
ERS01 folder and retry.

chown qasadm /usr/sap/QAS/ERS01


chgrp sapsys /usr/sap/QAS/ERS01

5. [1] Adapt the ASCS/SCS and ERS instance profiles


ASCS/SCS profile

sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# Add the keep alive parameter, if using ENSA1


enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP note
1410736.
ERS profile
sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# remove Autostart from ERS profile


# Autostart = 1

6. [A] Configure Keep Alive


The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a
software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To
prevent this you need to set a parameter in the SAP NetWeaver ASCS/SCS profile, if using ENSA1, and
change the Linux system keepalive settings on all SAP servers for both ENSA1/ENSA2. Read SAP Note
1410736 for more information.

# Change the Linux system configuration


sudo sysctl net.ipv4.tcp_keepalive_time=300
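
# The sysctl command above changes the setting only for the running system. One way to make it
# persistent across reboots is a drop-in file (the file name below is an example, not from the original guide):
echo "net.ipv4.tcp_keepalive_time = 300" | sudo tee /etc/sysctl.d/91-sap-keepalive.conf
sudo sysctl --system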

7. [A] Configure the SAP users after the installation

# Add sidadm to the haclient group


sudo usermod -aG haclient qasadm

8. [1] Add the ASCS and ERS SAP services to the sapservices file
Add the ASCS service entry to the second node and copy the ERS service entry to the first node.

cat /usr/sap/sapservices | grep ASCS00 | sudo ssh anftstsapcl2 "cat >>/usr/sap/sapservices"


sudo ssh anftstsapcl2 "cat /usr/sap/sapservices" | grep ERS01 | sudo tee -a /usr/sap/sapservices

9. [1] Create the SAP cluster resources


If using enqueue server 1 architecture (ENSA1), define the resources as follows:
sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \


operations \$id=rsc_sap_QAS_ASCS00-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10

sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \


operations \$id=rsc_sap_QAS_ERS01-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers"
AUTOMATIC_RECOVER=false IS_ERS=true \
meta priority=1000

sudo crm configure modgroup g-QAS_ASCS add rsc_sap_QAS_ASCS00


sudo crm configure modgroup g-QAS_ERS add rsc_sap_QAS_ERS01

sudo crm configure colocation col_sap_QAS_no_both -5000: g-QAS_ERS g-QAS_ASCS


sudo crm configure location loc_sap_QAS_failover_to_ers rsc_sap_QAS_ASCS00 rule 2000: runs_ers_QAS eq 1
sudo crm configure order ord_sap_QAS_first_start_ascs Optional: rsc_sap_QAS_ASCS00:start
rsc_sap_QAS_ERS01:stop symmetrical=false

sudo crm node online anftstsapcl1


sudo crm configure property maintenance-mode="false"

SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support. If
using enqueue server 2 architecture (ENSA2), define the resources as follows:

sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \


operations \$id=rsc_sap_QAS_ASCS00-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000

sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \


operations \$id=rsc_sap_QAS_ERS01-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers"
AUTOMATIC_RECOVER=false IS_ERS=true

sudo crm configure modgroup g-QAS_ASCS add rsc_sap_QAS_ASCS00


sudo crm configure modgroup g-QAS_ERS add rsc_sap_QAS_ERS01

sudo crm configure colocation col_sap_QAS_no_both -5000: g-QAS_ERS g-QAS_ASCS


sudo crm configure order ord_sap_QAS_first_start_ascs Optional: rsc_sap_QAS_ASCS00:start
rsc_sap_QAS_ERS01:stop symmetrical=false

sudo crm node online anftstsapcl1


sudo crm configure property maintenance-mode="false"

If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641019.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.
sudo crm_mon -r
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
# rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
# Resource Group: g-QAS_ERS
# fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
# rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

SAP NetWeaver application server preparation


Some databases require that the database instance installation is executed on an application server. Prepare the
application server virtual machines to be able to use them in these cases.
The steps below assume that you install the application server on a server different from the ASCS/SCS and HANA
servers. Otherwise some of the steps below (like configuring host name resolution) are not needed.
The following items are prefixed with either [A] - applicable to both PAS and AAS, [P] - only applicable to PAS or [S]
- only applicable to AAS.
1. [A] Configure operating system
Reduce the size of the dirty cache. For more information, see Low write performance on SLES 11/12 servers
with large RAM.

sudo vi /etc/sysctl.conf
# Change/set the following settings
vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800

2. [A] Setup host name resolution


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.1.1.20 anftstsapvh
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.1.1.21 anftstsapers
# IP address of all application servers
10.1.1.15 anftstsapa01
10.1.1.16 anftstsapa02

3. [A] Create the sapmnt directory


sudo mkdir -p /sapmnt/QAS
sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/QAS


sudo chattr +i /usr/sap/trans

4. [P] Create the PAS directory

sudo mkdir -p /usr/sap/QAS/D02


sudo chattr +i /usr/sap/QAS/D02

5. [S] Create the AAS directory

sudo mkdir -p /usr/sap/QAS/D03


sudo chattr +i /usr/sap/QAS/D03

6. [P] Configure autofs on PAS

sudo vi /etc/auto.master

# Add the following line to the file, save and exit


/- /etc/auto.direct

If using NFSv3, create a new file with:

sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/D02 -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASpas

If using NFSv4.1, create a new file with:

sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/D02 -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASpas

Restart autofs to mount the new shares

sudo systemctl enable autofs


sudo service autofs restart

7. [S] Configure autofs on AAS


sudo vi /etc/auto.master

# Add the following line to the file, save and exit


/- /etc/auto.direct

If using NFSv3, create a new file with:

sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/D03 -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASaas

If using NFSv4.1, create a new file with:

sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/D03 -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASaas

Restart autofs to mount the new shares

sudo systemctl enable autofs


sudo service autofs restart

8. [A] Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value
that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported database for this
installation. For more information on how to install SAP HANA in Azure, see High Availability of SAP HANA on
Azure Virtual Machines (VMs). For a list of supported databases, see SAP Note 1928533.
Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP address of
the load balancer frontend configuration for the database.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.
1. [A] Prepare application server
Follow the steps in the chapter SAP NetWeaver application server preparation above to prepare the application server.
2. [A] Install SAP NetWeaver application server
Install a primary or additional SAP NetWeaver application server.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. [A] Update SAP HANA secure store


Update the SAP HANA secure store to point to the virtual name of the SAP HANA System Replication setup.
Run the following command to list the entries

hdbuserstore List

This should list all entries and should look similar to

DATA FILE : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.DAT


KEY FILE : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.KEY

KEY DEFAULT
ENV : 10.1.1.5:30313
USER: SAPABAP1
DATABASE: QAS

The output shows that the IP address of the default entry is pointing to the virtual machine and not to the
load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the load
balancer. Make sure to use the same port (30313 in the output above) and database name (QAS in the
output above)!

su - qasadm
hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP schema>
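
After the change, you can list the entries again to confirm that the DEFAULT key now points to the virtual hostname of the load balancer (same command as shown above):

hdbuserstore List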

Test the cluster setup


The following tests are a copy of the test cases in the best practices guides of SUSE. They are copied for your
convenience. Always also read the best practices guides and perform all additional tests that might have been
added.
1. Test HAGetFailoverConfig, HACheckConfig, and HACheckFailoverConfig
Run the following commands as <sapsid>adm on the node where the ASCS instance is currently running. If
the commands fail with FAIL: Insufficient memory, it might be caused by dashes in your hostname. This is a
known issue and will be fixed by SUSE in the sap-suse-cluster-connector package.
anftstsapcl1:qasadm 52> sapcontrol -nr 00 -function HAGetFailoverConfig
07.03.2019 20:08:59
HAGetFailoverConfig
OK
HAActive: TRUE
HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP3
HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP3
(sap_suse_cluster_connector 3.1.0)
HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
HAActiveNode: anftstsapcl1
HANodes: anftstsapcl1, anftstsapcl2

anftstsapcl1:qasadm 54> sapcontrol -nr 00 -function HACheckConfig


07.03.2019 23:28:29
HACheckConfig
OK
state, category, description, comment
SUCCESS, SAP CONFIGURATION, Redundant ABAP instance configuration, 2 ABAP instances detected
SUCCESS, SAP CONFIGURATION, Redundant Java instance configuration, 0 Java instances detected
SUCCESS, SAP CONFIGURATION, Enqueue separation, All Enqueue server separated from application server
SUCCESS, SAP CONFIGURATION, MessageServer separation, All MessageServer separated from application
server
SUCCESS, SAP CONFIGURATION, ABAP instances on multiple hosts, ABAP instances on multiple hosts detected
SUCCESS, SAP CONFIGURATION, Redundant ABAP SPOOL service configuration, 2 ABAP instances with SPOOL
service detected
SUCCESS, SAP STATE, Redundant ABAP SPOOL service state, 2 ABAP instances with active SPOOL service
detected
SUCCESS, SAP STATE, ABAP instances with ABAP SPOOL service on multiple hosts, ABAP instances with active
ABAP SPOOL service on multiple hosts detected
SUCCESS, SAP CONFIGURATION, Redundant ABAP BATCH service configuration, 2 ABAP instances with BATCH
service detected
SUCCESS, SAP STATE, Redundant ABAP BATCH service state, 2 ABAP instances with active BATCH service
detected
SUCCESS, SAP STATE, ABAP instances with ABAP BATCH service on multiple hosts, ABAP instances with active
ABAP BATCH service on multiple hosts detected
SUCCESS, SAP CONFIGURATION, Redundant ABAP DIALOG service configuration, 2 ABAP instances with DIALOG
service detected
SUCCESS, SAP STATE, Redundant ABAP DIALOG service state, 2 ABAP instances with active DIALOG service
detected
SUCCESS, SAP STATE, ABAP instances with ABAP DIALOG service on multiple hosts, ABAP instances with
active ABAP DIALOG service on multiple hosts detected
SUCCESS, SAP CONFIGURATION, Redundant ABAP UPDATE service configuration, 2 ABAP instances with UPDATE
service detected
SUCCESS, SAP STATE, Redundant ABAP UPDATE service state, 2 ABAP instances with active UPDATE service
detected
SUCCESS, SAP STATE, ABAP instances with ABAP UPDATE service on multiple hosts, ABAP instances with
active ABAP UPDATE service on multiple hosts detected
SUCCESS, SAP STATE, SCS instance running, SCS instance status ok
SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version (anftstsapvh_QAS_00), SAPInstance includes
is-ers patch
SUCCESS, SAP CONFIGURATION, Enqueue replication (anftstsapvh_QAS_00), Enqueue replication enabled
SUCCESS, SAP STATE, Enqueue replication state (anftstsapvh_QAS_00), Enqueue replication active

anftstsapcl1:qasadm 55> sapcontrol -nr 00 -function HACheckFailoverConfig


07.03.2019 23:30:48
HACheckFailoverConfig
OK
state, category, description, comment
SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch

2. Manually migrate the ASCS instance


Resource state before starting the test:
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Starting anftstsapcl1

Run the following commands as root to migrate the ASCS instance.

anftstsapcl1:~ # crm resource migrate rsc_sap_QAS_ASCS00 force


INFO: Move constraint created for rsc_sap_QAS_ASCS00

anftstsapcl1:~ # crm resource unmigrate rsc_sap_QAS_ASCS00


INFO: Removed migration constraints for rsc_sap_QAS_ASCS00

# Remove failed actions for the ERS that occurred as part of the migration
anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

3. Test HAFailoverToNode
Resource state before starting the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Run the following commands as <sapsid>adm to migrate the ASCS instance.


anftstsapcl1:qasadm 53> sapcontrol -nr 00 -host anftstsapvh -user qasadm <password> -function
HAFailoverToNode ""

# run as root
# Remove failed actions for the ERS that occurred as part of the migration
anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
# Remove migration constraints
anftstsapcl1:~ # crm resource clear rsc_sap_QAS_ASCS00
#INFO: Removed migration constraints for rsc_sap_QAS_ASCS00

Resource state after the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

4. Simulate node crash


Resource state before starting the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Run the following command as root on the node where the ASCS instance is running

anftstsapcl2:~ # echo b > /proc/sysrq-trigger

If you use SBD, Pacemaker should not automatically start on the killed node. The status after the node is
started again should look like this.
Online: [ anftstsapcl1 ]
OFFLINE: [ anftstsapcl2 ]

Full list of resources:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Failed Actions:
* rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=166, status=complete,
exitreason='',
last-rc-change='Fri Mar 8 18:26:10 2019', queued=0ms, exec=0ms
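Whether Pacemaker starts again automatically after the node was fenced is typically governed by the SBD_STARTMODE setting in /etc/sysconfig/sbd. The excerpt below is only an illustration of that setting, not a value taken from this example; check what your deployment actually uses.

# /etc/sysconfig/sbd (excerpt)
# With "clean", the sbd service refuses to start while a fencing message is still present in the
# node's slot, so Pacemaker does not rejoin the cluster automatically after being fenced.
SBD_STARTMODE=clean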

Use the following commands to start Pacemaker on the killed node, clean the SBD messages, and clean the
failed resources.

# run as root
# list the SBD device(s)
anftstsapcl2:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405b730e31e7d5a4516a2a697dcf;/dev/disk/by-id/scsi-36001405f69d7ed91ef54461a442c676e;/dev/disk/by-id/scsi-360014058e5f335f2567488882f3a2c3a"

anftstsapcl2:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e11 \
   -d /dev/disk/by-id/scsi-36001405f69d7ed91ef54461a442c676e \
   -d /dev/disk/by-id/scsi-360014058e5f335f2567488882f3a2c3a \
   message anftstsapcl2 clear

anftstsapcl2:~ # systemctl start pacemaker


anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ASCS00
anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ERS01
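To verify that the fencing message for the node was cleared, you can list the slots on one of the SBD devices. This is only a sketch; the device path is one of the example values shown above and the output is illustrative.

anftstsapcl2:~ # sbd -d /dev/disk/by-id/scsi-36001405f69d7ed91ef54461a442c676e list
# 0   anftstsapcl1   clear
# 1   anftstsapcl2   clear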

Resource state after the test:

Full list of resources:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

5. Test manual restart of ASCS instance


Resource state before starting the test:
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Create an enqueue lock, for example by editing a user in transaction su01. Run the following commands as
<sapsid>adm on the node where the ASCS instance is running. The commands stop the ASCS instance and
start it again. If you use the enqueue server 1 architecture (ENSA1), the enqueue lock is expected to be lost in
this test. If you use the enqueue server 2 architecture (ENSA2), the enqueue lock is retained.

anftstsapcl2:qasadm 51> sapcontrol -nr 00 -function StopWait 600 2

The ASCS instance should now be disabled in Pacemaker

rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Stopped (disabled)

Start the ASCS instance again on the same node.

anftstsapcl2:qasadm 52> sapcontrol -nr 00 -function StartWait 600 2
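To check whether the enqueue lock survived, you can query the enqueue server statistics before and after the restart. This is a sketch; the EnqGetStatistic web method is only available with sufficiently recent SAP kernels, so confirm that it is listed by sapcontrol -nr 00 -function help before relying on it.

sapcontrol -nr 00 -function EnqGetStatistic
# Compare the reported number of current locks before StopWait and after StartWait.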

If you use the enqueue server 1 architecture, the enqueue lock of transaction su01 should be lost and the
back end should have been reset. Resource state after the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

6. Kill message server process


Resource state before starting the test:
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Run the following commands as root to identify the process of the message server and kill it.

anftstsapcl2:~ # pgrep ms.sapQAS | xargs kill -9
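To watch how the cluster reacts while you repeat the kill, you can keep the resource state on screen in a second shell. This is only a convenience sketch; crm_mon refreshes the output until you interrupt it with Ctrl+C.

anftstsapcl1:~ # crm_mon -r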

If you only kill the message server once, it will be restarted by sapstart . If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as root
to clean up the resource state of the ASCS and ERS instance after the test.

anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ASCS00


anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

7. Kill enqueue server process


Resource state before starting the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Run the following commands as root on the node where the ASCS instance is running to kill the enqueue
server.

anftstsapcl1:~ # pgrep en.sapQAS | xargs kill -9

The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over
after the ASCS instance is started. Run the following commands as root to clean up the resource state of the
ASCS and ERS instance after the test.

anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ASCS00


anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

8. Kill enqueue replication server process


Resource state before starting the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.

anftstsapcl1:~ # pgrep er.sapQAS | xargs kill -9

If you only run the command once, sapstart will restart the process. If you run it often enough, sapstart
will not restart the process and the resource will be in a stopped state. Run the following commands as root
to clean up the resource state of the ERS instance after the test.

anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

9. Kill enqueue sapstartsrv process


Resource state before starting the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Run the following commands as root on the node where the ASCS is running.

anftstsapcl2:~ # pgrep -fl ASCS00.*sapstartsrv


#67625 sapstartsrv

anftstsapcl2:~ # kill -9 67625
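After the next monitor interval, the resource agent should have respawned the process. A quick check follows; the PID shown is only an example and will differ.

anftstsapcl2:~ # pgrep -fl ASCS00.*sapstartsrv
# 67893 sapstartsrv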

The sapstartsrv process should always be restarted by the Pacemaker resource agent. Resource state after
the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
Azure Virtual Machines high availability for SAP
NetWeaver on Red Hat Enterprise Linux
12/22/2020 • 28 minutes to read

This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system. The example configurations and
installation commands use ASCS instance number 00, ERS instance number 02, and SAP system ID NW1. The
names of the resources (for example virtual machines, virtual networks) in the example assume that you have
used the ASCS/SCS template with Resource Prefix NW1 to create the resources.
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
Product Documentation for Red Hat Gluster Storage
SAP Netweaver in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP Netweaver with standalone resources in RHEL 7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on RHEL
Azure specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft
Azure

Overview
To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is configured in a separate cluster
and can be used by multiple SAP systems.

SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual
hostnames and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using the Standard load balancer. The following list shows the configuration of the (A)SCS and
ERS load balancer; a command-line sketch for creating the Standard load balancer follows the list.
(A)SCS
Frontend configuration
IP address 10.0.0.7
Probe Port
Port 620<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.0.0.8
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS cluster
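If you prefer the command line over the Azure portal steps described later in this article, the following Azure CLI sketch creates the Standard internal load balancer objects for the ASCS frontend. The resource group, load balancer, virtual network, and subnet names (MyResourceGroup, nw1-lb, nw1-vnet, nw1-subnet) are placeholders, not values prescribed by this guide; the frontend, backend pool, probe, and rule names match the examples used in the portal steps below. Adapt the values to your environment and repeat the frontend, probe, and rule commands with the corresponding ERS values.

# internal Standard load balancer with the ASCS frontend IP and a backend pool
az network lb create --resource-group MyResourceGroup --name nw1-lb --sku Standard \
  --vnet-name nw1-vnet --subnet nw1-subnet \
  --frontend-ip-name nw1-ascs-frontend --private-ip-address 10.0.0.7 \
  --backend-pool-name nw1-backend

# health probe on the ASCS probe port 620<nr> (00 in this example)
az network lb probe create --resource-group MyResourceGroup --lb-name nw1-lb \
  --name nw1-ascs-hp --protocol tcp --port 62000

# HA-ports rule (protocol All, ports 0) with floating IP and a 30 minute idle timeout
az network lb rule create --resource-group MyResourceGroup --lb-name nw1-lb \
  --name nw1-lb-ascs --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name nw1-ascs-frontend --backend-pool-name nw1-backend \
  --probe-name nw1-ascs-hp --floating-ip true --idle-timeout 30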

Setting up GlusterFS
SAP NetWeaver requires shared storage for the transport and profile directory. Read GlusterFS on Azure VMs on
Red Hat Enterprise Linux for SAP NetWeaver to learn how to set up GlusterFS for SAP NetWeaver.

Setting up (A)SCS
You can either use an Azure Template from GitHub to deploy all required Azure resources, including the virtual
machines, availability set, and load balancer, or you can deploy the resources manually.
Deploy Linux via Azure Template
The Azure Marketplace contains an image for Red Hat Enterprise Linux that you can use to deploy new virtual
machines. You can use one of the quickstart templates on GitHub to deploy all required resources. The template
deploys the virtual machines, the load balancer, availability set etc. Follow these steps to deploy the template:
1. Open the ASCS/SCS template on the Azure portal
2. Enter the following parameters
a. Resource Prefix
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. Stack Type
Select the SAP NetWeaver stack type
c. Os Type
Select one of the Linux distributions. For this example, select RHEL 7
d. Db Type
Select HANA
e. Sap System Count
The number of SAP systems that run in this cluster. Select 1.
f. System Availability
Select HA
g. Admin Username, Admin Password or SSH key
A new user is created that can be used to sign in to the machine.
h. Subnet ID
If you want to deploy the VM into an existing VNet where you have a subnet defined that the VM should be
assigned to, specify the ID of that specific subnet. The ID usually looks like /subscriptions/<subscription
ID>/resourceGroups/<resource group name>/providers/Microsoft.Network/virtualNetworks/<virtual network
name>/subnets/<subnet name>
Deploy Linux manually via Azure portal
You first need to create the virtual machines for this cluster. Afterwards, you create a load balancer and use the
virtual machines in the backend pool.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set
Set max update domain
4. Create Virtual Machine 1
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
5. Create Virtual Machine 2
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
6. Add at least one data disk to both virtual machines
The data disks are used for the /usr/sap/<SAPSID> directory
7. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and nw1-aers-frontend)
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select Virtual machine.
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and nw1-aers-hp)
d. Load-balancing rules
a. Load-balancing rules for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-ascs )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend , nw1-backend and nw1-ascs-hp )
d. Select HA ports
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example nw1-lb-ers )
8. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and nw1-aers-frontend)
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and nw1-aers-hp)
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-3200 )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above for ports 3600, 3900, 8100, 50013, 50014, 50016 and protocol TCP for the ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above for ports 3302, 50213, 50214, 50216 and protocol TCP for the ASCS ERS

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need an additional IP address for the VM, deploy a second NIC.

NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load
balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to
public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for Virtual Machines
using Azure Standard Load Balancer in SAP high-availability scenarios.

IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
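One way to make this setting persistent across reboots is to place it in a sysctl configuration file and reload; this is a minimal sketch and the file name is only a suggestion.

echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/98-sap-lb.conf
sudo sysctl --system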

Create Pacemaker cluster


Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create a basic Pacemaker cluster
for this (A)SCS server.
Prepare for SAP NetWeaver installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
1. [A] Setup host name resolution
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP address and the hostname in the following commands
sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP addresses of the GlusterFS nodes


10.0.0.40 glust-0
10.0.0.41 glust-1
10.0.0.42 glust-2
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
10.0.0.8 nw1-aers

2. [A] Create the shared directories

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/NW1/SYS
sudo mkdir -p /usr/sap/NW1/ASCS00
sudo mkdir -p /usr/sap/NW1/ERS02

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/NW1/SYS
sudo chattr +i /usr/sap/NW1/ASCS00
sudo chattr +i /usr/sap/NW1/ERS02

3. [A] Install GlusterFS client and other requirements

sudo yum -y install glusterfs-fuse resource-agents resource-agents-sap

4. [A] Check version of resource-agents-sap


Make sure that the version of the installed resource-agents-sap package is at least 3.9.5-124.el7

sudo yum info resource-agents-sap

# Loaded plugins: langpacks, product-id, search-disabled-repos


# Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
# Installed Packages
# Name : resource-agents-sap
# Arch : x86_64
# Version : 3.9.5
# Release : 124.el7
# Size : 100 k
# Repo : installed
# From repo : rhel-sap-for-rhel-7-server-rpms
# Summary : SAP cluster resource agents and connector script
# URL : https://github.com/ClusterLabs/resource-agents
# License : GPLv2+
# Description : The SAP resource agents and connector script interface with
# : Pacemaker to allow SAP instances to be managed in a cluster
# : environment.

5. [A] Add mount entries


sudo vi /etc/fstab

# Add the following lines to fstab, save and exit


glust-0:/NW1-sapmnt /sapmnt/NW1 glusterfs backup-volfile-servers=glust-1:glust-2 0 0
glust-0:/NW1-trans /usr/sap/trans glusterfs backup-volfile-servers=glust-1:glust-2 0 0
glust-0:/NW1-sys /usr/sap/NW1/SYS glusterfs backup-volfile-servers=glust-1:glust-2 0 0

Mount the new shares

sudo mount -a

6. [A] Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value
# that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

7. [A] RHEL configuration


Configure RHEL as described in SAP Note 2002167
Installing SAP NetWeaver ASCS/ERS
1. [1] Create a virtual IP resource and health-probe for the ASCS instance

sudo pcs node standby nw1-cl-1

sudo pcs resource create fs_NW1_ASCS Filesystem device='glust-0:/NW1-ascs' \


directory='/usr/sap/NW1/ASCS00' fstype='glusterfs' \
options='backup-volfile-servers=glust-1:glust-2' \
--group g-NW1_ASCS

sudo pcs resource create vip_NW1_ASCS IPaddr2 \


ip=10.0.0.7 cidr_netmask=24 \
--group g-NW1_ASCS

sudo pcs resource create nc_NW1_ASCS azure-lb port=62000 \


--group g-NW1_ASCS

Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo pcs status

# Node nw1-cl-1: standby


# Online: [ nw1-cl-0 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0

2. [1] Install SAP NetWeaver ASCS


Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP address
of the load balancer frontend configuration for the ASCS, for example nw1-ascs , 10.0.0.7 and the instance
number that you used for the probe of the load balancer, for example 00 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
# command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
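The virtual hostname is usually passed to the installer with the sapinst parameter SAPINST_USE_HOSTNAME. This is a sketch under the assumption that your SWPM version supports the parameter (check the SWPM documentation for your release); nw1-ascs is the virtual hostname from this example.

sudo <swpm>/sapinst SAPINST_USE_HOSTNAME=nw1-ascs SAPINST_REMOTE_ACCESS_USER=sapadmin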

If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00, try setting the owner and group of
the ASCS00 folder and retry.

sudo chown nw1adm /usr/sap/NW1/ASCS00


sudo chgrp sapsys /usr/sap/NW1/ASCS00

3. [1] Create a virtual IP resource and health-probe for the ERS instance

sudo pcs node unstandby nw1-cl-1


sudo pcs node standby nw1-cl-0

sudo pcs resource create fs_NW1_AERS Filesystem device='glust-0:/NW1-aers' \


directory='/usr/sap/NW1/ERS02' fstype='glusterfs' \
options='backup-volfile-servers=glust-1:glust-2' \
--group g-NW1_AERS

sudo pcs resource create vip_NW1_AERS IPaddr2 \


ip=10.0.0.8 cidr_netmask=24 \
--group g-NW1_AERS

sudo pcs resource create nc_NW1_AERS azure-lb port=62102 \


--group g-NW1_AERS

Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo pcs status

# Node nw1-cl-0: standby


# Online: [ nw1-cl-1 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1

4. [2] Install SAP NetWeaver ERS


Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP address
of the load balancer frontend configuration for the ERS, for example nw1-aers , 10.0.0.8 and the instance
number that you used for the probe of the load balancer, for example 02 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
# command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

If the installation fails to create a subfolder in /usr/sap/NW1/ERS02, try setting the owner and group of the
ERS02 folder and retry.

sudo chown nw1adm /usr/sap/NW1/ERS02


sudo chgrp sapsys /usr/sap/NW1/ERS02

5. [1] Adapt the ASCS/SCS and ERS instance profiles


ASCS/SCS profile

sudo vi /sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the keep alive parameter, if using ENSA1


enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP note
1410736.
ERS profile
sudo vi /sapmnt/NW1/profile/NW1_ERS02_nw1-aers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# remove Autostart from ERS profile


# Autostart = 1

6. [A] Configure Keep Alive


The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a
software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To
prevent this, you need to set a parameter in the SAP NetWeaver ASCS/SCS profile, if using ENSA1, and
change the Linux system keepalive settings on all SAP servers for both ENSA1/ENSA2. Read SAP Note
1410736 for more information.

# Change the Linux system configuration


sudo sysctl net.ipv4.tcp_keepalive_time=300

7. [A] Update the /usr/sap/sapservices file


To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must
be commented out from /usr/sap/sapservices file. Do not comment out the SAP HANA instance if it will be
used with HANA SR.

sudo vi /usr/sap/sapservices

# On the node where you installed the ASCS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/NW1/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW1/ASCS00/exe/sapstartsrv pf=/usr/sap/NW1/SYS/profile/NW1_ASCS00_nw1-ascs -D -u nw1adm

# On the node where you installed the ERS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/NW1/ERS02/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW1/ERS02/exe/sapstartsrv pf=/usr/sap/NW1/ERS02/profile/NW1_ERS02_nw1-aers -D -u nw1adm

8. [1] Create the SAP cluster resources


If using enqueue server 1 architecture (ENSA1), define the resources as follows:
sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \


InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-NW1_ASCS

sudo pcs resource create rsc_sap_NW1_ERS02 SAPInstance \


InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0
timeout=600 \
--group g-NW1_AERS

sudo pcs constraint colocation add g-NW1_AERS with g-NW1_ASCS -5000


sudo pcs constraint location rsc_sap_NW1_ASCS00 rule score=2000 runs_ers_NW1 eq 1
sudo pcs constraint order g-NW1_ASCS then g-NW1_AERS kind=Optional symmetrical=false

sudo pcs node unstandby nw1-cl-0


sudo pcs property set maintenance-mode=false

SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support. If
using enqueue server 2 architecture (ENSA2), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or
newer and define the resources as follows:

sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \


InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-NW1_ASCS

sudo pcs resource create rsc_sap_NW1_ERS02 SAPInstance \


InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0
timeout=600 \
--group g-NW1_AERS

sudo pcs constraint colocation add g-NW1_AERS with g-NW1_ASCS -5000


sudo pcs constraint order g-NW1_ASCS then g-NW1_AERS kind=Optional symmetrical=false
sudo pcs constraint order start g-NW1_ASCS then stop g-NW1_AERS symmetrical=false

sudo pcs node unstandby nw1-cl-0


sudo pcs property set maintenance-mode=false

If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641322.

NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.

Make sure that the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.

sudo pcs status

# Online: [ nw1-cl-0 nw1-cl-1 ]


#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
# rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

1. [A] Add firewall rules for ASCS and ERS on both nodes

# Probe Port of ASCS


sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62000/tcp
sudo firewall-cmd --zone=public --add-port=3200/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3200/tcp
sudo firewall-cmd --zone=public --add-port=3600/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3600/tcp
sudo firewall-cmd --zone=public --add-port=3900/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3900/tcp
sudo firewall-cmd --zone=public --add-port=8100/tcp --permanent
sudo firewall-cmd --zone=public --add-port=8100/tcp
sudo firewall-cmd --zone=public --add-port=50013/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50013/tcp
sudo firewall-cmd --zone=public --add-port=50014/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50014/tcp
sudo firewall-cmd --zone=public --add-port=50016/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50016/tcp
# Probe Port of ERS
sudo firewall-cmd --zone=public --add-port=62102/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62102/tcp
sudo firewall-cmd --zone=public --add-port=3302/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3302/tcp
sudo firewall-cmd --zone=public --add-port=50213/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50213/tcp
sudo firewall-cmd --zone=public --add-port=50214/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50214/tcp
sudo firewall-cmd --zone=public --add-port=50216/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50216/tcp

SAP NetWeaver application server preparation


Some databases require that the database instance installation is executed on an application server. Prepare the
application server virtual machines to be able to use them in these cases.
The steps below assume that you install the application server on a server different from the ASCS/SCS and HANA
servers. Otherwise, some of the steps below (like configuring host name resolution) are not needed.
1. Setup host name resolution
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP addresses of the GlusterFS nodes


10.0.0.40 glust-0
10.0.0.41 glust-1
10.0.0.42 glust-2
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
10.0.0.8 nw1-aers
# IP address of the load balancer frontend configuration for database
10.0.0.13 nw1-db

2. Create the sapmnt directory

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans

3. Install GlusterFS client and other requirements

sudo yum -y install glusterfs-fuse uuidd

4. Add mount entries

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit


glust-0:/NW1-sapmnt /sapmnt/NW1 glusterfs backup-volfile-servers=glust-1:glust-2 0 0
glust-0:/NW1-trans /usr/sap/trans glusterfs backup-volfile-servers=glust-1:glust-2 0 0

Mount the new shares

sudo mount -a

5. Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value
# that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported database for this
installation. For more information on how to install SAP HANA in Azure, see High availability of SAP HANA on
Azure VMs on Red Hat Enterprise Linux. For a list of supported databases, see SAP Note 1928533.
1. Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP address of
the load balancer frontend configuration for the database for example nw1-db and 10.0.0.13 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.
1. Prepare application server
Follow the steps in the chapter SAP NetWeaver application server preparation above to prepare the
application server.
2. Install SAP NetWeaver application server
Install a primary or additional SAP NetWeaver application server.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. Update SAP HANA secure store


Update the SAP HANA secure store to point to the virtual name of the SAP HANA System Replication setup.
Run the following command to list the entries as <sapsid>adm

hdbuserstore List

This should list all entries and should look similar to


DATA FILE : /home/nw1adm/.hdb/nw1-di-0/SSFS_HDB.DAT
KEY FILE : /home/nw1adm/.hdb/nw1-di-0/SSFS_HDB.KEY

KEY DEFAULT
ENV : 10.0.0.14:30313
USER: SAPABAP1
DATABASE: NW1

The output shows that the IP address of the default entry is pointing to the virtual machine and not to the
load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the load
balancer. Make sure to use the same port (30313 in the output above) and database name (NW1 in the
output above)!

su - nw1adm
hdbuserstore SET DEFAULT nw1-db:30313@NW1 SAPABAP1 <password of ABAP schema>
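To confirm the change, list the entries again as <sapsid>adm. The ENV field of the DEFAULT key should now show the virtual hostname and port; the output below is illustrative.

hdbuserstore List

# KEY DEFAULT
#   ENV : nw1-db:30313
#   USER: SAPABAP1
#   DATABASE: NW1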

Test the cluster setup


1. Manually migrate the ASCS instance
Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root to migrate the ASCS instance.

[root@nw1-cl-0 ~]# pcs resource move rsc_sap_NW1_ASCS00

[root@nw1-cl-0 ~]# pcs resource clear rsc_sap_NW1_ASCS00

# Remove failed actions for the ERS that occurred as part of the migration
[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
2. Simulate node crash
Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following command as root on the node where the ASCS instance is running

[root@nw1-cl-1 ~]# echo b > /proc/sysrq-trigger

The status after the node is started again should look like this.

Online: [ nw1-cl-0 nw1-cl-1 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Failed Actions:
* rsc_sap_NW1_ERS02_monitor_11000 on nw1-cl-0 'not running' (7): call=45, status=complete,
exitreason='',
last-rc-change='Tue Aug 21 13:52:39 2018', queued=0ms, exec=0ms

Use the following command to clean the failed resources.

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:


rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

3. Kill message server process


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root to identify the process of the message server and kill it.

[root@nw1-cl-0 ~]# pgrep ms.sapNW1 | xargs kill -9
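To follow the failover from a second shell while you repeat the kill, you can keep the cluster status on screen; this is only a convenience sketch.

[root@nw1-cl-1 ~]# watch -n 5 pcs status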

If you only kill the message server once, it will be restarted by sapstart . If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as root
to clean up the resource state of the ASCS and ERS instance after the test.

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ASCS00


[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

4. Kill enqueue server process


Resource state before starting the test:
rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following commands as root on the node where the ASCS instance is running to kill the enqueue
server.

[root@nw1-cl-1 ~]# pgrep en.sapNW1 | xargs kill -9

The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over
after the ASCS instance is started. Run the following commands as root to clean up the resource state of the
ASCS and ERS instance after the test.

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ASCS00


[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

5. Kill enqueue replication server process


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.
[root@nw1-cl-1 ~]# pgrep er.sapNW1 | xargs kill -9

If you only run the command once, sapstart will restart the process. If you run it often enough, sapstart
will not restart the process and the resource will be in a stopped state. Run the following commands as root
to clean up the resource state of the ERS instance after the test.

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

6. Kill enqueue sapstartsrv process


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root on the node where the ASCS is running.

[root@nw1-cl-0 ~]# pgrep -fl ASCS00.*sapstartsrv


# 59545 sapstartsrv

[root@nw1-cl-0 ~]# kill -9 59545

The sapstartsrv process should always be restarted by the Pacemaker resource agent as part of the
monitoring. Resource state after the test:
rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Next steps
HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications

This article describes how to deploy and configure the virtual machines, install the cluster framework, and install a
highly available SAP NetWeaver 7.50 system, using Azure NetApp Files. In the example configurations and
installation commands, the ASCS instance number is 00, the ERS instance number is 01, the Primary Application
Server (PAS) instance number is 02, and the Additional Application Server (AAS) instance number is 03. The SAP
system ID QAS is used.
The database layer isn't covered in detail in this article.
Read the following SAP Notes and papers first:
Azure NetApp Files documentation
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP Netweaver in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP Netweaver with standalone resources in RHEL 7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on RHEL
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft
Azure
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving this on Red
Hat Linux required building a separate, highly available GlusterFS cluster.
Now it is possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files. Using
Azure NetApp Files for the shared storage eliminates the need for an additional GlusterFS cluster. Pacemaker is still
needed for HA of the SAP NetWeaver central services (ASCS/SCS).
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual
hostnames and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using Standard load balancer. The following list shows the configuration of the load balancer with
separate front-end IPs for (A)SCS and ERS, where <nr> is the SAP instance number (00 for the ASCS and 01 for the ERS in this example).
(A)SCS
Frontend configuration
IP address 192.168.14.9
Probe Port
Port 620<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create load-balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 192.168.14.10
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create load-balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster
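
If you prefer to script this configuration instead of using the Azure portal steps shown later, the following Azure CLI sketch creates an equivalent Standard internal load balancer with HA-port rules. It is only an illustration: the resource group, load balancer, VNet, and subnet names are placeholders and must be adapted to your environment. With instance numbers 00 and 01, the probe ports are 62000 and 62101.

# Example only - MyResourceGroup, lb-QAS, MyVNet, and MySubnet are placeholder names
az network lb create --resource-group MyResourceGroup --name lb-QAS --sku Standard \
  --vnet-name MyVNet --subnet MySubnet --frontend-ip-name frontend.QAS.ASCS \
  --private-ip-address 192.168.14.9 --backend-pool-name backend.QAS
# Second frontend IP for the ERS
az network lb frontend-ip create --resource-group MyResourceGroup --lb-name lb-QAS \
  --name frontend.QAS.ERS --vnet-name MyVNet --subnet MySubnet --private-ip-address 192.168.14.10
# Health probes (instance numbers 00 and 01 -> probe ports 62000 and 62101)
az network lb probe create --resource-group MyResourceGroup --lb-name lb-QAS \
  --name health.QAS.ASCS --protocol tcp --port 62000
az network lb probe create --resource-group MyResourceGroup --lb-name lb-QAS \
  --name health.QAS.ERS --protocol tcp --port 62101
# HA-port load-balancing rules with floating IP and 30 minute idle timeout
az network lb rule create --resource-group MyResourceGroup --lb-name lb-QAS --name lb.QAS.ASCS \
  --protocol All --frontend-port 0 --backend-port 0 --frontend-ip-name frontend.QAS.ASCS \
  --backend-pool-name backend.QAS --probe-name health.QAS.ASCS --floating-ip true --idle-timeout 30
az network lb rule create --resource-group MyResourceGroup --lb-name lb-QAS --name lb.QAS.ERS \
  --protocol All --frontend-port 0 --backend-port 0 --frontend-ip-name frontend.QAS.ERS \
  --backend-pool-name backend.QAS --probe-name health.QAS.ERS --floating-ip true --idle-timeout 30

The cluster VM network interfaces still have to be added to the backend pool, for example with az network nic ip-config address-pool add.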

Setting up the Azure NetApp Files infrastructure


SAP NetWeaver requires shared storage for the transport and profile directory. Before proceeding with the setup
of the Azure NetApp Files infrastructure, familiarize yourself with the Azure NetApp Files documentation. Check if
your selected Azure region offers Azure NetApp Files. The following link shows the availability of Azure NetApp
Files by Azure region: Azure NetApp Files Availability by Azure Region.
Azure NetApp Files is available in several Azure regions. Before deploying Azure NetApp Files, request onboarding
to Azure NetApp Files by following the Register for Azure NetApp Files instructions.
Deploy Azure NetApp Files resources
The steps assume that you have already deployed an Azure virtual network. The Azure NetApp Files resources and
the VMs, where the Azure NetApp Files resources will be mounted, must be deployed in the same Azure virtual
network or in peered Azure virtual networks.
1. If you haven't done that already, request onboarding to Azure NetApp Files.
2. Create the NetApp account in the selected Azure region, following the instructions to create NetApp Account.
3. Set up Azure NetApp Files capacity pool, following the instructions on how to set up Azure NetApp Files
capacity pool.
The SAP NetWeaver architecture presented in this article uses a single Azure NetApp Files capacity pool with the
Premium SKU. We recommend the Azure NetApp Files Premium SKU for the SAP NetWeaver application workload
on Azure.
4. Delegate a subnet to Azure NetApp files as described in the instructions Delegate a subnet to Azure NetApp
Files.
5. Deploy Azure NetApp Files volumes, following the instructions to create a volume for Azure NetApp Files.
Deploy the volumes in the designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp
volumes are assigned automatically. Keep in mind that the Azure NetApp Files resources and the Azure VMs
must be in the same Azure Virtual Network or in peered Azure Virtual Networks. In this example we use two
Azure NetApp Files volumes: sapQAS and transSAP. The file paths that are mounted to the corresponding
mount points are /usrsapqas/sapmntQAS, /usrsapqas/usrsapQASsys, and so on.
a. volume sapQAS (nfs://192.168.24.5/usrsapqas/sapmntQAS)
b. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASascs)
c. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASsys)
d. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASers)
e. volume transSAP (nfs://192.168.24.4/transSAP)
f. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASpas)
g. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASaas)
In this example, we used Azure NetApp Files for all SAP NetWeaver file systems to demonstrate how Azure NetApp
Files can be used. The SAP file systems that don't need to be mounted via NFS can also be deployed as Azure disk
storage. In this example, a-e must be on Azure NetApp Files and f-g (that is, /usr/sap/QAS/D02 and
/usr/sap/QAS/D03) could be deployed as Azure disk storage.
Important considerations
When considering Azure NetApp Files for the SAP NetWeaver on Red Hat Enterprise Linux High Availability
architecture, be aware of the following important considerations:
The minimum capacity pool is 4 TiB. The capacity pool size can be increased in 1 TiB increments.
The minimum volume is 100 GiB
Azure NetApp Files and all virtual machines where Azure NetApp Files volumes will be mounted must be in the
same Azure Virtual Network or in peered virtual networks in the same region. Azure NetApp Files access over
VNet peering in the same region is supported. Azure NetApp Files access over global peering is not yet
supported.
The selected virtual network must have a subnet delegated to Azure NetApp Files.
Azure NetApp Files offers export policies: you can control the allowed clients and the access type (Read & Write,
Read Only, and so on).
The Azure NetApp Files feature isn't zone aware yet. Currently, Azure NetApp Files isn't deployed in all
availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions.
Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both protocols are supported for
the SAP application layer (ASCS/ERS, SAP application servers).

Setting up (A)SCS
In this example, the resources were deployed manually via the Azure portal.
Deploy Linux manually via Azure portal
First you need to create the Azure NetApp Files volumes. Deploy the VMs. Afterwards, you create a load balancer
and use the virtual machines in the backend pool.
1. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 192.168.14.9 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 192.168.14.9 )
d. Click OK
b. IP address 192.168.14.10 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example
192.168.14.10 and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select Virtual machine.
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example 62101
and health.QAS.ERS )
d. Load-balancing rules
a. Load-balancing rules for ASCS
a. Open the load balancer, select Load-balancing rules, and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier
(for example frontend.QAS.ASCS , backend.QAS and health.QAS.ASCS )
d. Select HA por ts
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example lb.QAS.ERS )
2. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 192.168.14.9 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 192.168.14.9 )
d. Click OK
b. IP address 192.168.14.10 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example
192.168.14.10 and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier for ASCS
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example 62101
and health.QAS.ERS )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select Load-balancing rules, and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS.3200 )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier
(for example frontend.QAS.ASCS )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above under "d" for ports 3600 , 3900 , 8100 , 50013 , 50014 , 50016 and
TCP for the ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above under "d" for ports 3201 , 3301 , 50113 , 50114 , 50116 and TCP for
the ASCS ERS

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see
Azure Load balancer Limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address)
Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration
is performed to allow routing to public end points. For details on how to achieve outbound connectivity see
Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability
scenarios.

IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps
will cause the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load
Balancer health probes.
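
A minimal sketch of setting net.ipv4.tcp_timestamps to 0 persistently on the cluster VMs is shown below; the drop-in file name is only an example.

# Example only - disable TCP timestamps persistently (file name is an assumption)
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/97-azure-lb-timestamps.conf
sudo sysctl -p /etc/sysctl.d/97-azure-lb-timestamps.conf
# Verify the running value
sysctl net.ipv4.tcp_timestamps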

Disable ID mapping (if using NFSv4.1)


The instructions in this section are only applicable, if using Azure NetApp Files volumes with NFSv4.1 protocol.
Perform the configuration on all VMs, where Azure NetApp Files NFSv4.1 volumes will be mounted.
1. Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files
domain, i.e. defaultv4iddomain.com and the mapping is set to nobody .

IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure
NetApp volumes that are mounted on the VMs will be displayed as nobody .

sudo cat /etc/idmapd.conf


# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

2. [A] Verify nfs4_disable_idmapping. It should be set to Y. To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create the
directory under /sys/modules, because access is reserved for the kernel and drivers.

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 192.168.24.5:/sapQAS /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

For more details on how to change the nfs4_disable_idmapping parameter, see
https://access.redhat.com/solutions/1749883.
Create Pacemaker cluster
Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create a basic Pacemaker cluster
for this (A)SCS server.
Prepare for SAP NetWeaver installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
1. [A] Set up host name resolution
You can either use a DNS server or modify the /etc/hosts file on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP addresses and the hostnames in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP address of cluster node 1


192.168.14.5 anftstsapcl1
# IP address of cluster node 2
192.168.14.6 anftstsapcl2
# IP address of the load balancer frontend configuration for SAP Netweaver ASCS
192.168.14.9 anftstsapvh
# IP address of the load balancer frontend configuration for SAP Netweaver ERS
192.168.14.10 anftstsapers
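
To verify that the entries resolve as expected on each node, you can run a quick check such as the following (a verification sketch, not part of the original procedure):

# Verify that the virtual hostnames resolve on every cluster node
getent hosts anftstsapvh anftstsapers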

2. [1] Create SAP directories in the Azure NetApp Files volume.


Temporarily mount the Azure NetApp Files volume on one of the VMs and create the SAP directories (file
paths).

# mount temporarily the volume


sudo mkdir -p /saptmp
# If using NFSv3
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 192.168.24.5:/sapQAS /saptmp
# If using NFSv4.1
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys,tcp 192.168.24.5:/sapQAS /saptmp
# create the SAP directories
cd /saptmp
sudo mkdir -p sapmntQAS
sudo mkdir -p usrsapQASascs
sudo mkdir -p usrsapQASers
sudo mkdir -p usrsapQASsys
sudo mkdir -p usrsapQASpas
sudo mkdir -p usrsapQASaas
# unmount the volume and delete the temporary directory
cd ..
sudo umount /saptmp
sudo rmdir /saptmp

3. [A] Create the shared directories


sudo mkdir -p /sapmnt/QAS
sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/QAS/SYS
sudo mkdir -p /usr/sap/QAS/ASCS00
sudo mkdir -p /usr/sap/QAS/ERS01

sudo chattr +i /sapmnt/QAS


sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/QAS/SYS
sudo chattr +i /usr/sap/QAS/ASCS00
sudo chattr +i /usr/sap/QAS/ERS01

4. [A] Install NFS client and other requirements

sudo yum -y install nfs-utils resource-agents resource-agents-sap

5. [A] Check version of resource-agents-sap


Make sure that the version of the installed resource-agents-sap package is at least 3.9.5-124.el7

sudo yum info resource-agents-sap

# Loaded plugins: langpacks, product-id, search-disabled-repos


# Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
# Installed Packages
# Name : resource-agents-sap
# Arch : x86_64
# Version : 3.9.5
# Release : 124.el7
# Size : 100 k
# Repo : installed
# From repo : rhel-sap-for-rhel-7-server-rpms
# Summary : SAP cluster resource agents and connector script
# URL : https://github.com/ClusterLabs/resource-agents
# License : GPLv2+
# Description : The SAP resource agents and connector script interface with
# : Pacemaker to allow SAP instances to be managed in a cluster
# : environment.

6. [A] Add mount entries


If using NFSv3:

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit


192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=3
192.168.24.5:/sapQAS/usrsapQASsys /usr/sap/QAS/SYS nfs rw,hard,rsize=65536,wsize=65536,vers=3
192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=3

If using NFSv4.1:

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit


192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
192.168.24.5:/sapQAS/usrsapQASsys /usr/sap/QAS/SYS nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
NOTE
Make sure to match the NFS protocol version of the Azure NetApp Files volumes, when mounting the volumes. If the
Azure NetApp Files volumes are created as NFSv3 volumes, use the corresponding NFSv3 configuration. If the Azure
NetApp Files volumes are created as NFSv4.1 volumes, follow the instructions to disable ID mapping and make sure to
use the corresponding NFSv4.1 configuration. In this example the Azure NetApp Files volumes were created as NFSv3
volumes.

Mount the new shares

sudo mount -a
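
To confirm that the shares are mounted with the intended NFS protocol version, you can check the active NFS mounts, for example:

# Show the mounted NFS file systems and their negotiated options (including vers=)
nfsstat -m
# Or inspect a single mount point
findmnt /sapmnt/QAS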

7. [A] Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value
that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart
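
After the agent restart, you can verify that the swap file on the resource disk is active, for example:

# Verify the configured swap space
sudo swapon -s
free -m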

8. [A] RHEL configuration


Configure RHEL as described in SAP Note 2002167
Installing SAP NetWeaver ASCS/ERS
1. [1] Create a virtual IP resource and health-probe for the ASCS instance

sudo pcs node standby anftstsapcl2


# If using NFSv3
sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_ASCS

# If using NFSv4.1
sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_ASCS

sudo pcs resource create vip_QAS_ASCS IPaddr2 \


ip=192.168.14.9 cidr_netmask=24 \
--group g-QAS_ASCS

sudo pcs resource create nc_QAS_ASCS azure-lb port=62000 \


--group g-QAS_ASCS
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.

sudo pcs status

# Node anftstsapcl2: standby


# Online: [ anftstsapcl1 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1

2. [1] Install SAP NetWeaver ASCS


Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP address
of the load balancer frontend configuration for the ASCS, for example anftstsapvh , 192.168.14.9 and the
instance number that you used for the probe of the load balancer, for example 00 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname>

If the installation fails to create a subfolder in /usr/sap/QAS/ASCS00, try setting the owner and group of the
ASCS00 folder and retry.

sudo chown qasadm /usr/sap/QAS/ASCS00


sudo chgrp sapsys /usr/sap/QAS/ASCS00

3. [1] Create a virtual IP resource and health-probe for the ERS instance
sudo pcs node unstandby anftstsapcl2
sudo pcs node standby anftstsapcl1

# If using NFSv3
sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_AERS

# If using NFSv4.1
sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_AERS

sudo pcs resource create vip_QAS_AERS IPaddr2 \


ip=192.168.14.10 cidr_netmask=24 \
--group g-QAS_AERS

sudo pcs resource create nc_QAS_AERS azure-lb port=62101 \


--group g-QAS_AERS

Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.

sudo pcs status

# Node anftstsapcl1: standby


# Online: [ anftstsapcl2 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2<
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
# Resource Group: g-QAS_AERS
# fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2

4. [2] Install SAP NetWeaver ERS


Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP address
of the load balancer frontend configuration for the ERS, for example anftstsapers , 192.168.14.10 and the
instance number that you used for the probe of the load balancer, for example 01 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname>

If the installation fails to create a subfolder in /usr/sap/QAS/ERS01, try setting the owner and group of the
ERS01 folder and retry.
sudo chown qasadm /usr/sap/QAS/ERS01
sudo chgrp sapsys /usr/sap/QAS/ERS01

5. [1] Adapt the ASCS/SCS and ERS instance profiles


ASCS/SCS profile

sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the keep alive parameter, if using ENSA1


enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP note
1410736.
ERS profile

sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# remove Autostart from ERS profile


# Autostart = 1

6. [A] Configure Keep Alive


The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a
software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To
prevent this, you need to set a parameter in the SAP NetWeaver ASCS/SCS profile, if using ENSA1, and
change the Linux system keepalive settings on all SAP servers for both ENSA1/ENSA2. Read SAP Note
1410736 for more information.

# Change the Linux system configuration


sudo sysctl net.ipv4.tcp_keepalive_time=300

7. [A] Update the /usr/sap/sapservices file


To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must
be commented out from /usr/sap/sapservices file. Do not comment out the SAP HANA instance if it will be
used with HANA SR.

sudo vi /usr/sap/sapservices

# On the node where you installed the ASCS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/QAS/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/QAS/ASCS00/exe/sapstartsrv pf=/usr/sap/QAS/SYS/profile/QAS_ASCS00_anftstsapvh -D -u qasadm

# On the node where you installed the ERS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/QAS/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/QAS/ERS01/exe/sapstartsrv pf=/usr/sap/QAS/ERS01/profile/QAS_ERS01_anftstsapers -D -u qasadm
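
If you prefer to script the change, the following sed sketch comments out the matching sapstartsrv lines; the patterns assume instance numbers 00 and 01 and must be adjusted to your system. Always verify the result in /usr/sap/sapservices afterwards.

# Example only - run the appropriate command on each node
# On the node where the ASCS was installed:
sudo sed -i '/ASCS00.*sapstartsrv/ s/^/# /' /usr/sap/sapservices
# On the node where the ERS was installed:
sudo sed -i '/ERS01.*sapstartsrv/ s/^/# /' /usr/sap/sapservices
# Verify the result
grep sapstartsrv /usr/sap/sapservices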
8. [1] Create the SAP cluster resources
If using enqueue server 1 architecture (ENSA1), define the resources as follows:

sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \


InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-QAS_ASCS

sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \


InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0
timeout=600 \
--group g-QAS_AERS

sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000


sudo pcs constraint location rsc_sap_QAS_ASCS00 rule score=2000 runs_ers_QAS eq 1
sudo pcs constraint order g-QAS_ASCS then g-QAS_AERS kind=Optional symmetrical=false

sudo pcs node unstandby anftstsapcl1


sudo pcs property set maintenance-mode=false

SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2
support. If using enqueue server 2 architecture (ENSA2), install resource agent
resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as follows:

sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \


InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-QAS_ASCS

sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \


InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0
timeout=600 \
--group g-QAS_AERS

sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000


sudo pcs constraint order g-QAS_ASCS then g-QAS_AERS kind=Optional symmetrical=false
sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS symmetrical=false

sudo pcs node unstandby anftstsapcl1


sudo pcs property set maintenance-mode=false

If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641322.

NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.

sudo pcs status

# Online: [ anftstsapcl1 anftstsapcl2 ]


#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
# rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
# Resource Group: g-QAS_AERS
# fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
# nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
# vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
# rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

9. [A] Add firewall rules for ASCS and ERS on both nodes Add the firewall rules for ASCS and ERS on both
nodes.

# Probe Port of ASCS


sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62000/tcp
sudo firewall-cmd --zone=public --add-port=3200/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3200/tcp
sudo firewall-cmd --zone=public --add-port=3600/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3600/tcp
sudo firewall-cmd --zone=public --add-port=3900/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3900/tcp
sudo firewall-cmd --zone=public --add-port=8100/tcp --permanent
sudo firewall-cmd --zone=public --add-port=8100/tcp
sudo firewall-cmd --zone=public --add-port=50013/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50013/tcp
sudo firewall-cmd --zone=public --add-port=50014/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50014/tcp
sudo firewall-cmd --zone=public --add-port=50016/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50016/tcp
# Probe Port of ERS
sudo firewall-cmd --zone=public --add-port=62101/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62101/tcp
sudo firewall-cmd --zone=public --add-port=3301/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3301/tcp
sudo firewall-cmd --zone=public --add-port=50113/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50113/tcp
sudo firewall-cmd --zone=public --add-port=50114/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50114/tcp
sudo firewall-cmd --zone=public --add-port=50116/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50116/tcp
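
You can confirm that the rules are active in both the running and the permanent configuration with the following checks:

# Ports currently open in the running configuration
sudo firewall-cmd --zone=public --list-ports
# Ports stored in the permanent configuration
sudo firewall-cmd --permanent --zone=public --list-ports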

SAP NetWeaver application server preparation


Some databases require that the database instance installation is executed on an application server. Prepare the
application server virtual machines to be able to use them in these cases.
The steps below assume that you install the application server on a server different from the ASCS/SCS and HANA
servers. Otherwise, some of the steps below (like configuring host name resolution) are not needed.
The following items are prefixed with either [A] - applicable to both PAS and AAS, [P] - only applicable to PAS or [S]
- only applicable to AAS.
1. [A] Set up host name resolution. You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP addresses and the hostnames in the following
commands:

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment.

# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
192.168.14.9 anftstsapvh
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
192.168.14.10 anftstsapers
192.168.14.7 anftstsapa01
192.168.14.8 anftstsapa02

2. [A] Create the sapmnt directory.

sudo mkdir -p /sapmnt/QAS


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/QAS


sudo chattr +i /usr/sap/trans

3. [A] Install NFS client and other requirements

sudo yum -y install nfs-utils uuidd

4. [A] Add mount entries


If using NFSv3:

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit


192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=3
192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=3

If using NFSv4.1:

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit


192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys

Mount the new shares

sudo mount -a

5. [P] Create and mount the PAS directory


If using NFSv3:
sudo mkdir -p /usr/sap/QAS/D02
sudo chattr +i /usr/sap/QAS/D02

sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,vers=3

# Mount
sudo mount -a

If using NFSv4.1:

sudo mkdir -p /usr/sap/QAS/D02


sudo chattr +i /usr/sap/QAS/D02

sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys

# Mount
sudo mount -a

6. [S] Create and mount the AAS directory


If using NFSv3:

sudo mkdir -p /usr/sap/QAS/D03


sudo chattr +i /usr/sap/QAS/D03

sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,vers=3

# Mount
sudo mount -a

If using NFSv4.1:

sudo mkdir -p /usr/sap/QAS/D03


sudo chattr +i /usr/sap/QAS/D03

sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys

# Mount
sudo mount -a

7. [A] Configure SWAP file


sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value
that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported database for this
installation. For more information on how to install SAP HANA in Azure, see High availability of SAP HANA on
Azure VMs on Red Hat Enterprise Linux. For a list of supported databases, see SAP Note 1928533.
1. Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP address of
the load balancer frontend configuration for the database.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.
1. Prepare application server
Follow the steps in the chapter SAP NetWeaver application server preparation above to prepare the
application server.
2. Install SAP NetWeaver application server
Install a primary or additional SAP NetWeaver application server.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. Update SAP HANA secure store


Update the SAP HANA secure store to point to the virtual name of the SAP HANA System Replication setup.
Run the following command to list the entries as <sapsid>adm
hdbuserstore List

This should list all entries and should look similar to

DATA FILE : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.DAT


KEY FILE : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.KEY

KEY DEFAULT
ENV : 192.168.14.4:30313
USER: SAPABAP1
DATABASE: QAS

The output shows that the IP address of the default entry is pointing to the virtual machine and not to the
load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the load
balancer. Make sure to use the same port (30313 in the output above) and database name (QAS in the
output above)!

su - qasadm
hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP schema>
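
To confirm the change, list the entries again as <sapsid>adm; the DEFAULT entry should now point to the virtual hostname of the load balancer instead of the IP address of an individual virtual machine:

# Verify the updated secure store entry
hdbuserstore List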

Test the cluster setup


1. Manually migrate the ASCS instance
Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Run the following commands as root to migrate the ASCS instance.

[root@anftstsapcl1 ~]# pcs resource move rsc_sap_QAS_ASCS00

[root@anftstsapcl1 ~]# pcs resource clear rsc_sap_QAS_ASCS00

# Remove failed actions for the ERS that occurred as part of the migration
[root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:


rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

2. Simulate node crash


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Run the following command as root on the node where the ASCS instance is running

[root@anftstsapcl2 ~]# echo b > /proc/sysrq-trigger

The status after the node is started again should look like this.

Online: [ anftstsapcl1 anftstsapcl2 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Failed Actions:
* rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=45, status=complete,
exitreason='',

Use the following command to clean the failed resources.

[root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:


rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

3. Kill message server process


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Run the following commands as root to identify the process of the message server and kill it.

[root@anftstsapcl1 ~]# pgrep ms.sapQAS | xargs kill -9

If you only kill the message server once, it will be restarted by sapstart . If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as root
to clean up the resource state of the ASCS and ERS instance after the test.

[root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ASCS00


[root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

4. Kill enqueue server process


Resource state before starting the test:
rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Run the following commands as root on the node where the ASCS instance is running to kill the enqueue
server.

[root@anftstsapcl2 ~]# pgrep en.sapQAS | xargs kill -9

The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over
after the ASCS instance is started. Run the following commands as root to clean up the resource state of the
ASCS and ERS instance after the test.

[root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ASCS00


[root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

5. Kill enqueue replication server process


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.
[root@anftstsapcl2 ~]# pgrep er.sapQAS | xargs kill -9

If you only run the command once, sapstart will restart the process. If you run it often enough, sapstart
will not restart the process and the resource will be in a stopped state. Run the following commands as root
to clean up the resource state of the ERS instance after the test.

[root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

6. Kill enqueue sapstartsrv process


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Run the following commands as root on the node where the ASCS is running.

[root@anftstsapcl1 ~]# pgrep -fl ASCS00.*sapstartsrv


# 59545 sapstartsrv

[root@anftstsapcl1 ~]# kill -9 59545

The sapstartsrv process should always be restarted by the Pacemaker resource agent as part of the
monitoring. Resource state after the test:
rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Next steps
HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for SAP ASCS/SCS

Windows

This article describes the steps you take to prepare the Azure infrastructure for installing and configuring a high-
availability SAP ASCS/SCS instance on a Windows failover cluster, by using a cluster shared disk as an option for
clustering an SAP ASCS instance. Two alternatives for the cluster shared disk are presented in the documentation:
Azure shared disks
Using SIOS DataKeeper Cluster Edition to create mirrored storage that simulates a clustered shared disk
The presented configuration relies on Azure proximity placement groups (PPGs) to achieve optimal network
latency for SAP workloads. The documentation doesn't cover the database layer.

NOTE
Azure proximity placement groups are prerequisite for using Azure Shared Disk.

Prerequisites
Before you begin the installation, review this article:
Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared
disk

Create the ASCS VMs


For the SAP ASCS/SCS cluster, deploy two VMs in an Azure availability set. Deploy the VMs in the same proximity
placement group. Once the VMs are deployed:
Create an Azure internal load balancer for the SAP ASCS/SCS instance
Add the Windows VMs to the AD domain
The host names and the IP addresses for the presented scenario are:

Host name role | Host name | Static IP address | Availability set | Proximity placement group
1st cluster node ASCS/SCS cluster | pr1-ascs-10 | 10.0.0.4 | pr1-ascs-avset | PR1PPG
2nd cluster node ASCS/SCS cluster | pr1-ascs-11 | 10.0.0.5 | pr1-ascs-avset | PR1PPG
Cluster network name | pr1clust | 10.0.0.42 (only for Win 2016 cluster) | n/a | n/a
ASCS cluster network name | pr1-ascscl | 10.0.0.43 | n/a | n/a
ERS cluster network name (only for ERS2) | pr1-erscl | 10.0.0.44 | n/a | n/a

Create Azure internal load balancer


SAP ASCS, SAP SCS, and the new SAP ERS2 use virtual hostnames and virtual IP addresses. On Azure, a load
balancer is required to use a virtual IP address. We strongly recommend using Standard load balancer.

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need an additional IP address for the VM, deploy a second NIC.

The following list shows the configuration of the (A)SCS/ERS load balancer, where <nr> is the SAP instance number.
The configuration for both SAP ASCS and ERS2 is performed in the same Azure load balancer.
(A)SCS
Frontend configuration
Static ASCS/SCS IP address 10.0.0.43
Backend configuration
Add all virtual machines that should be part of the (A)SCS/ERS cluster. In this example VMs pr1-ascs-10 and
pr1-ascs-11.
Probe Port
Port 620<nr>. Leave the default option for Protocol (TCP), Interval (5), Unhealthy threshold (2)
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create load-balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Make sure that Idle timeout (minutes) is set to the max value 30, and that Floating IP (direct server
return) is Enabled.
ERS2
As Enqueue Replication Server 2 (ERS2) is also clustered, the ERS2 virtual IP address must also be configured on the
Azure ILB in addition to the SAP ASCS/SCS IP above. This section only applies if using the Enqueue Replication
Server 2 architecture.
2nd Frontend configuration
Static SAP ERS2 IP address 10.0.0.44
Backend configuration
The VMs were already added to the ILB backend pool.
2nd Probe Port
Port 621<nr>
Leave the default option for Protocol (TCP), Interval (5), Unhealthy threshold (2)
2nd Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create load-balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Make sure that Idle timeout (minutes) is set to the max value 30, and that Floating IP (direct server
return) is Enabled.

TIP
With the Azure Resource Manager template for WSFC for SAP ASCS/SCS instance with Azure shared disk, you can
automate the infrastructure preparation, using an Azure shared disk for one SAP SID with ERS1.
The Azure ARM template creates two Windows 2019 or 2016 VMs, creates the Azure shared disk, and attaches it to the VMs.
The Azure internal load balancer is created and configured as well. For details, see the ARM template.

Add registry entries on both cluster nodes of the ASCS/SCS instance


Azure Load Balancer may close connections, if the connections are idle for a period and exceed the idle timeout.
The SAP work processes open connections to the SAP enqueue process as soon as the first enqueue/dequeue
request needs to be sent. To avoid interrupting these connections, change the TCP/IP KeepAliveTime and
KeepAliveInterval values on both cluster nodes. If using ERS1, it is also necessary to add SAP profile parameters,
as described later in this article. The following registry entries must be changed on both cluster nodes:
KeepAliveTime
KeepAliveInterval

Path | Variable name | Variable type | Value | Documentation
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters | KeepAliveTime | REG_DWORD | 120000 (Decimal) | KeepAliveTime
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters | KeepAliveInterval | REG_DWORD | 120000 (Decimal) | KeepAliveInterval

To apply the changes, restart both cluster nodes.
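
For example, a minimal PowerShell sketch that sets both values on the local node (run it on each cluster node and then restart; the path and values follow the table above):

# TCP/IP parameters key on the local node
$tcpParams = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

# 120000 ms, as listed in the table above
Set-ItemProperty -Path $tcpParams -Name "KeepAliveTime" -Type DWord -Value 120000
Set-ItemProperty -Path $tcpParams -Name "KeepAliveInterval" -Type DWord -Value 120000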

Add the Windows VMs to the domain


After you assign static IP addresses to the virtual machines, add the virtual machines to the domain.
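
For example, a sketch that joins a node to a hypothetical domain contoso.local and restarts it (run on each VM; the domain name is a placeholder for your environment):

# Join the local VM to the domain and reboot to complete the join
Add-Computer -DomainName "contoso.local" -Credential (Get-Credential) -Restart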
Install and configure Windows failover cluster
Install the Windows failover cluster feature
Run this command on one of the cluster nodes:

# Hostnames of the Win cluster for SAP ASCS/SCS


$SAPSID = "PR1"
$ClusterNodes = ("pr1-ascs-10","pr1-ascs-11")
$ClusterName = $SAPSID.ToLower() + "clust"

# Install Windows features.


# After the feature installs, manually reboot both nodes
Invoke-Command $ClusterNodes {Install-WindowsFeature Failover-Clustering, FS-FileServer -IncludeAllSubFeature -IncludeManagementTools }

Once the feature installation has completed, reboot both cluster nodes.
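
For example, from a management machine you can restart both nodes remotely and wait for them to come back (a sketch, assuming $ClusterNodes from the previous snippet and that PowerShell remoting is enabled):

# Restart both cluster nodes and wait until PowerShell remoting is available again
Restart-Computer -ComputerName $ClusterNodes -Wait -For PowerShell -Force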
Test and configure Windows failover cluster
On Windows Server 2019, the cluster automatically recognizes that it is running in Azure and, as the default option for
the cluster management IP, uses a Distributed Network Name. Therefore, it uses any of the cluster nodes' local IP
addresses. As a result, there is no need for a dedicated (virtual) network name for the cluster, and there is no need
to configure this IP address on the Azure Internal Load Balancer.
For more information, see Windows Server 2019 Failover Clustering new features. Run this command on one of
the cluster nodes:

# Hostnames of the Win cluster for SAP ASCS/SCS


$SAPSID = "PR1"
$ClusterNodes = ("pr1-ascs-10","pr1-ascs-11")
$ClusterName = $SAPSID.ToLower() + "clust"

# IP address for cluster network name is needed ONLY on Windows Server 2016 cluster
$ClusterStaticIPAddress = "10.0.0.42"

# Test cluster
Test-Cluster -Node $ClusterNodes -Verbose

$ComputerInfo = Get-ComputerInfo

$WindowsVersion = $ComputerInfo.WindowsProductName

if($WindowsVersion -eq "Windows Server 2019 Datacenter"){


write-host "Configuring Windows Failover Cluster on Windows Server 2019 Datacenter..."
New-Cluster –Name $ClusterName –Node $ClusterNodes -Verbose
}elseif($WindowsVersion -eq "Windows Server 2016 Datacenter"){
write-host "Configuring Windows Failover Cluster on Windows Server 2016 Datacenter..."
New-Cluster –Name $ClusterName –Node $ClusterNodes –StaticAddress $ClusterStaticIPAddress -Verbose
}else{
Write-Error "Not supported Windows version!"
}

Configure cluster cloud quorum


As you use Windows Server 2016 or 2019, we recommend configuring Azure Cloud Witness as the cluster
quorum.
Run this command on one of the cluster nodes:
$AzureStorageAccountName = "cloudquorumwitness"
Set-ClusterQuorum -CloudWitness -AccountName $AzureStorageAccountName -AccessKey <YourAzureStorageAccessKey> -Verbose

Tuning the Windows failover cluster thresholds


After you successfully install the Windows failover cluster, you need to adjust some thresholds, to be suitable for
clusters deployed in Azure. The parameters to be changed are documented in Tuning failover cluster network
thresholds. Assuming that your two VMs that make up the Windows cluster configuration for ASCS/SCS are in
the same subnet, change the following parameters to these values:
SameSubNetDelay = 2000
SameSubNetThreshold = 15
RouteHistoryLength = 30
These settings were tested with customers and offer a good compromise. They are resilient enough, but they also
provide failover that is fast enough for real error conditions in SAP workloads or VM failure.
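
For example, you can apply these values with PowerShell on one of the cluster nodes (a minimal sketch; the properties are exposed by the failover clustering cmdlets):

# Adjust heartbeat thresholds for nodes in the same subnet
$cluster = Get-Cluster
$cluster.SameSubnetDelay = 2000      # milliseconds between heartbeats
$cluster.SameSubnetThreshold = 15    # missed heartbeats before a node is marked down
$cluster.RouteHistoryLength = 30     # route history kept for troubleshooting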

Configure Azure shared disk


This section is only applicable if you're using an Azure shared disk.
Create and attach Azure shared disk with PowerShell
Run this command on one of the cluster nodes. You will need to adjust the values for your resource group, Azure
region, SAPSID, and so on.
#############################
# Create Azure Shared Disk
#############################

$ResourceGroupName = "MyResourceGroup"
$location = "MyAzureRegion"
$SAPSID = "PR1"

$DiskSizeInGB = 512
$DiskName = "$($SAPSID)ASCSSharedDisk"

# With parameter '-MaxSharesCount', we define the maximum number of cluster nodes to attach the shared disk
$NumberOfWindowsClusterNodes = 2

$diskConfig = New-AzDiskConfig -Location $location -SkuName Premium_LRS -CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount $NumberOfWindowsClusterNodes
$dataDisk = New-AzDisk -ResourceGroupName $ResourceGroupName -DiskName $DiskName -Disk $diskConfig

##################################
## Attach the disk to cluster VMs
##################################
# ASCS Cluster VM1
$ASCSClusterVM1 = "$SAPSID-ascs-10"

# ASCS Cluster VM2


$ASCSClusterVM2 = "$SAPSID-ascs-11"

# Add the Azure Shared Disk to Cluster Node 1


$vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $ASCSClusterVM1
$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0
Update-AzVm -VM $vm -ResourceGroupName $ResourceGroupName -Verbose

# Add the Azure Shared Disk to Cluster Node 2


$vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $ASCSClusterVM2
$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0
Update-AzVm -VM $vm -ResourceGroupName $ResourceGroupName -Verbose

Format the shared disk with PowerShell


1. Get the disk number. Run these PowerShell commands on one of the cluster nodes:

Get-Disk | Where-Object PartitionStyle -Eq "RAW" | Format-Table -AutoSize


# Example output
# Number Friendly Name Serial Number HealthStatus OperationalStatus Total Size Partition Style
# ------ ------------- ------------- ------------ ----------------- ---------- ---------------
# 2 Msft Virtual Disk Healthy Online 512 GB RAW

2. Format the disk. In this example, it is disk number 2.


# Format SAP ASCS Disk number '2', with drive letter 'S'
$SAPSID = "PR1"
$DiskNumber = 2
$DriveLetter = "S"
$DiskLabel = "$SAPSID" + "SAP"

Get-Disk -Number $DiskNumber | Where-Object PartitionStyle -Eq "RAW" | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter $DriveLetter -UseMaximumSize | Format-Volume -FileSystem ReFS -NewFileSystemLabel $DiskLabel -Force -Verbose

# Example output
# DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining      Size
# ----------- --------------- ---------- --------- ------------ ----------------- ------------- ---------
# S           PR1SAP          ReFS       Fixed     Healthy      OK                    504.98 GB 511.81 GB

3. Verify that the disk is now visible as a cluster disk.

# List all disks


Get-ClusterAvailableDisk -All
# Example output
# Cluster : pr1clust
# Id : 88ff1d94-0cf1-4c70-89ae-cbbb2826a484
# Name : Cluster Disk 1
# Number : 2
# Size : 549755813888
# Partitions : {\\?\GLOBALROOT\Device\Harddisk2\Partition2\}

4. Register the disk in the cluster.

# Add the disk to cluster


Get-ClusterAvailableDisk -All | Add-ClusterDisk
# Example output
# Name State OwnerGroup ResourceType
# ---- ----- ---------- ------------
# Cluster Disk 1 Online Available Storage Physical Disk

SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share
disk
This section is only applicable if you're using the third-party software SIOS DataKeeper Cluster Edition to create
mirrored storage that simulates a cluster shared disk.
Now you have a working Windows Server failover clustering configuration in Azure. To install an SAP ASCS/SCS
instance, you need a shared disk resource. One option is SIOS DataKeeper Cluster Edition, a third-party
solution that you can use to create shared disk resources.
Installing SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share disk involves these tasks:
Add Microsoft .NET Framework, if needed. See the SIOS documentation
(https://us.sios.com/products/datakeeper-cluster/) for the most up-to-date .NET Framework requirements.
Install SIOS DataKeeper
Configure SIOS DataKeeper
Install SIOS DataKeeper
Install SIOS DataKeeper Cluster Edition on each node in the cluster. To create virtual shared storage with SIOS
DataKeeper, create a synced mirror and then simulate cluster shared storage.
Before you install the SIOS software, create the DataKeeperSvc domain user.

NOTE
Add the DataKeeperSvc domain user to the Local Administrator group on both cluster nodes.
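
A minimal PowerShell sketch of these two steps, assuming the ActiveDirectory RSAT module is available and CONTOSO is a placeholder for your domain:

# Create the domain user for the DataKeeper service (run once, on a domain-joined machine)
New-ADUser -Name "DataKeeperSvc" -SamAccountName "DataKeeperSvc" `
    -AccountPassword (Read-Host -AsSecureString "Enter password") -Enabled $true

# Run on both cluster nodes: add the service account to the local Administrators group
Add-LocalGroupMember -Group "Administrators" -Member "CONTOSO\DataKeeperSvc"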

1. Install the SIOS software on both cluster nodes.

First page of the SIOS DataKeeper installation


2. In the dialog box, select Yes .

DataKeeper informs you that a service will be disabled


3. In the dialog box, we recommend that you select Domain or Server account.
User selection for SIOS DataKeeper
4. Enter the domain account user name and password that you created for SIOS DataKeeper.

Enter the domain user name and password for the SIOS DataKeeper installation
5. Install the license key for your SIOS DataKeeper instance, as shown in Figure 35.
Enter your SIOS DataKeeper license key
6. When prompted, restart the virtual machine.
Configure SIOS DataKeeper
After you install SIOS DataKeeper on both nodes, start the configuration. The goal of the configuration is to have
synchronous data replication between the additional disks that are attached to each of the virtual machines.
1. Start the DataKeeper Management and Configuration tool, and then select Connect Ser ver .

SIOS DataKeeper Management and Configuration tool


2. Enter the name or TCP/IP address of the first node the Management and Configuration tool should connect
to, and, in a second step, the second node.
Insert the name or TCP/IP address of the first node the Management and Configuration tool should
connect to, and in a second step, the second node
3. Create the replication job between the two nodes.

Create a replication job


A wizard guides you through the process of creating a replication job.
4. Define the name of the replication job.

Define the name of the replication job

Define the base data for the node, which should be the current source node
5. Define the name, TCP/IP address, and disk volume of the target node.

Define the name, TCP/IP address, and disk volume of the current target node
6. Define the compression algorithms. In our example, we recommend that you compress the replication
stream. Especially in resynchronization situations, the compression of the replication stream dramatically
reduces resynchronization time. Compression uses the CPU and RAM resources of a virtual machine. As
the compression rate increases, so does the volume of CPU resources that are used. You can adjust this
setting later.
7. Another setting you need to check is whether the replication occurs asynchronously or synchronously.
When you protect SAP ASCS/SCS configurations, you must use synchronous replication.

Define replication details


8. Define whether the volume that is replicated by the replication job should be represented to a Windows
Server failover cluster configuration as a shared disk. For the SAP ASCS/SCS configuration, select Yes so
that the Windows cluster sees the replicated volume as a shared disk that it can use as a cluster volume.
Select Yes to set the replicated volume as a cluster volume
After the volume is created, the DataKeeper Management and Configuration tool shows that the replication
job is active.

DataKeeper synchronous mirroring for the SAP ASCS/SCS share disk is active
Failover Cluster Manager now shows the disk as a DataKeeper disk, as shown in Figure 45:

Failover Cluster Manager shows the disk that DataKeeper replicated

Next steps
Install SAP NetWeaver HA by using a Windows failover cluster and shared disk for an SAP ASCS/SCS instance
Prepare Azure infrastructure for SAP high availability
by using a Windows failover cluster and file share for
SAP ASCS/SCS instances
12/22/2020 • 3 minutes to read

This article describes the Azure infrastructure preparation steps that are needed to install and configure high-
availability SAP systems on a Windows Server Failover Clustering (WSFC) cluster, using a scale-out file share as an
option for clustering SAP ASCS/SCS instances.

Prerequisite
Before you start the installation, review the following article:
Architecture guide: Cluster SAP ASCS/SCS instances on a Windows failover cluster by using file share

Host names and IP addresses


VIRTUAL HOSTNAME ROLE                   VIRTUAL HOSTNAME   STATIC IP ADDRESS   AVAILABILITY SET

First cluster node ASCS/SCS cluster     ascs-1             10.0.6.4            ascs-as

Second cluster node ASCS/SCS cluster    ascs-2             10.0.6.5            ascs-as

Cluster network name                    ascs-cl            10.0.6.6            n/a

SAP PR1 ASCS cluster network name       pr1-ascs           10.0.6.7            n/a

Table 1: ASCS/SCS cluster

SAP <SID>   SAP ASCS/SCS INSTANCE NUMBER

PR1         00

Table 2: SAP ASCS/SCS instance details

VIRTUAL HOSTNAME ROLE    VIRTUAL HOSTNAME   STATIC IP ADDRESS              AVAILABILITY SET

First cluster node       sofs-1             10.0.6.10                      sofs-as

Second cluster node      sofs-2             10.0.6.11                      sofs-as

Third cluster node       sofs-3             10.0.6.12                      sofs-as

Cluster network name     sofs-cl            10.0.6.13                      n/a

SAP global host name     sapglobal          Use IPs of all cluster nodes   n/a

Table 3: Scale-Out File Server cluster

Deploy VMs for an SAP ASCS/SCS cluster, a Database Management System (DBMS) cluster, and SAP Application Server instances

To prepare the Azure infrastructure, complete the following steps:
Deploy the VMs.
Create and configure Azure Load balancer for SAP ASCS.
If using Enqueue replication server 2 (ERS2), perform the Azure Load Balancer configuration for ERS2 .
Add Windows virtual machines to the domain.
Add registry entries on both cluster nodes of the SAP ASCS/SCS instance.
As you use Windows Server 2016, we recommend that you configure Azure Cloud Witness.

Deploy the Scale-Out File Server cluster manually


You can deploy the Microsoft Scale-Out File Server cluster manually, as described in the blog Storage Spaces
Direct in Azure, by executing the following code:

# Set an execution policy - all cluster nodes


Set-ExecutionPolicy Unrestricted

# Define Scale-Out File Server cluster nodes


$nodes = ("sofs-1", "sofs-2", "sofs-3")

# Add cluster and Scale-Out File Server features


Invoke-Command $nodes {Install-WindowsFeature Failover-Clustering, FS-FileServer -IncludeAllSubFeature -IncludeManagementTools -Verbose}

# Test cluster
Test-Cluster -node $nodes -Verbose

# Install cluster
$ClusterNetworkName = "sofs-cl"
$ClusterIP = "10.0.6.13"
New-Cluster -Name $ClusterNetworkName -Node $nodes -NoStorage -StaticAddress $ClusterIP -Verbose

# Set Azure Quorum


Set-ClusterQuorum -CloudWitness -AccountName gorcloudwitness -AccessKey <YourAzureStorageAccessKey>

# Enable Storage Spaces Direct


Enable-ClusterS2D

# Create Scale-Out File Server with an SAP global host name


# SAPGlobalHostName
$SAPGlobalHostName = "sapglobal"
Add-ClusterScaleOutFileServerRole -Name $SAPGlobalHostName

Deploy Scale-Out File Server automatically


You can also automate the deployment of Scale-Out File Server by using Azure Resource Manager templates in
an existing virtual network and Active Directory environment.

IMPORTANT
We recommend that you have three or more cluster nodes for Scale-Out File Server with three-way mirroring.
In the Scale-Out File Server Resource Manager template UI, you must specify the VM count.

Use managed disks


The Azure Resource Manager template for deploying Scale-Out File Server with Storage Spaces Direct and Azure
Managed Disks is available on GitHub.
We recommend that you use Managed Disks.

Figure 1 : UI screen for Scale-Out File Server Resource Manager template with managed disks
In the template, do the following:
1. In the Vm Count box, enter a minimum count of 2 .
2. In the Vm Disk Count box, enter a minimum disk count of 3 (2 disks + 1 spare disk = 3 disks).
3. In the Sofs Name box, enter the SAP global host network name, sapglobalhost .
4. In the Share Name box, enter the file share name, sapmnt .
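
As an illustration, such a template can also be deployed non-interactively with Azure PowerShell. This is only a sketch: the template URI and the parameter names (vmCount, vmDiskCount, sofsName, shareName) are hypothetical stand-ins that mirror the UI fields above and must be taken from the actual template on GitHub.

# Hypothetical template URI and parameter names - check the template on GitHub for the real ones
New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" `
    -TemplateUri "https://raw.githubusercontent.com/<path-to-sofs-template>/azuredeploy.json" `
    -TemplateParameterObject @{
        vmCount     = 3
        vmDiskCount = 3
        sofsName    = "sapglobalhost"
        shareName   = "sapmnt"
    }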
Use unmanaged disks
The Azure Resource Manager template for deploying Scale-Out File Server with Storage Spaces Direct and Azure
Unmanaged Disks is available on GitHub.
Figure 2 : UI screen for the Scale-Out File Server Azure Resource Manager template without managed disks
In the Storage Account Type box, select Premium Storage . All other settings are the same as the settings for
managed disks.

Adjust cluster timeout settings


After you successfully install the Windows Scale-Out File Server cluster, adapt timeout thresholds for failover
detection to conditions in Azure. The parameters to be changed are documented in Tuning failover cluster
network thresholds. Assuming that your clustered VMs are in the same subnet, change the following parameters
to these values:
SameSubNetDelay = 2000
SameSubNetThreshold = 15
RouteHistoryLength = 30
These settings were tested with customers, and offer a good compromise. They are resilient enough, but they
also provide fast enough failover in real error conditions or VM failure.

Next steps
Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS
instances
High availability for NFS on Azure VMs on SUSE
Linux Enterprise Server
12/22/2020 • 14 minutes to read

This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available NFS server that can be used to store the shared data of a highly
available SAP system. This guide describes how to set up a highly available NFS server that is used by two SAP
systems, NW1 and NW2. The names of the resources (for example virtual machines, virtual networks) in the
example assume that you have used the SAP file server template with resource prefix prod .

NOTE
This article contains references to the terms slave and master, terms that Microsoft no longer uses. When the terms are
removed from the software, we’ll remove them from this article.

Read the following SAP Notes and papers first


SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux (this article)
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE Linux Enterprise High Availability Extension 12 SP3 best practices guides
Highly Available NFS Storage with DRBD and Pacemaker
SUSE Linux Enterprise Server for SAP Applications 12 SP3 best practices guides
SUSE High Availability Extension 12 SP3 Release Notes
Overview
To achieve high availability, SAP NetWeaver requires an NFS server. The NFS server is configured in a separate
cluster and can be used by multiple SAP systems.

The NFS server uses a dedicated virtual hostname and virtual IP addresses for every SAP system that uses this
NFS server. On Azure, a load balancer is required to use a virtual IP address. The following list shows the
configuration of the load balancer.
Frontend configuration
IP address 10.0.0.4 for NW1
IP address 10.0.0.5 for NW2
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the NFS cluster
Probe Port
Port 61000 for NW1
Port 61001 for NW2
Load balancing rules (if using basic load balancer)
2049 TCP for NW1
2049 UDP for NW1
2049 TCP for NW2
2049 UDP for NW2

Set up a highly available NFS server


You can either use an Azure Template from GitHub to deploy all required Azure resources, including the virtual
machines, availability set, and load balancer or you can deploy the resources manually.
Deploy Linux via Azure Template
The Azure Marketplace contains an image for SUSE Linux Enterprise Server for SAP Applications 12 that you can
use to deploy new virtual machines. You can use one of the quickstart templates on GitHub to deploy all
required resources. The template deploys the virtual machines, the load balancer, availability set etc. Follow
these steps to deploy the template:
1. Open the SAP file server template in the Azure portal
2. Enter the following parameters
a. Resource Prefix
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. SAP System Count
Enter the number of SAP systems that will use this file server. This will deploy the required amount of
frontend configurations, load-balancing rules, probe ports, disks etc.
c. Os Type
Select one of the Linux distributions. For this example, select SLES 12
d. Admin Username and Admin Password
A new user is created that can be used to log on to the machine.
e. Subnet ID
If you want to deploy the VM into an existing VNet where you have a subnet defined the VM should
be assigned to, name the ID of that specific subnet. The ID usually looks like
/subscriptions/<subscription ID> /resourceGroups/<resource group
name> /providers/Microsoft.Network/virtualNetworks/<vir tual network
name> /subnets/<subnet name>
Deploy Linux manually via Azure portal
You first need to create the virtual machines for this NFS cluster. Afterwards, you create a load balancer and use
the virtual machines in the backend pools.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set
Set max update domain
4. Create Virtual Machine 1 Use at least SLES4SAP 12 SP3, in this example the SLES4SAP 12 SP3 BYOS image
SLES For SAP Applications 12 SP3 (BYOS) is used
Select Availability Set created earlier
5. Create Virtual Machine 2 Use at least SLES4SAP 12 SP3, in this example the SLES4SAP 12 SP3 BYOS image
SLES For SAP Applications 12 SP3 (BYOS) is used
Select Availability Set created earlier
6. Add one data disk for each SAP system to both virtual machines.
7. Create a Load Balancer (internal). We recommend standard load balancer.
a. Follow these instructions to create standard Load balancer:
a. Create the frontend IP addresses
a. IP address 10.0.0.4 for NW1
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.4 )
d. Click OK
b. IP address 10.0.0.5 for NW2
Repeat the steps above for NW2
b. Create the backend pools
a. Connected to primary network interfaces of all virtual machines that should be part of
the NFS cluster
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw-backend )
c. Select Virtual Network
d. Click Add a virtual machine
e. Select the virtual machines of the NFS cluster and their IP addresses.
f. Click Add.
c. Create the health probes
a. Port 61000 for NW1
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-hp )
c. Select TCP as protocol, port 61000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 61001 for NW2
Repeat the steps above to create a health probe for NW2
d. Load balancing rules
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-frontend, nw-backend, and nw1-hp)
d. Select HA Ports.
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rule for NW2
b. Alternatively, if your scenario requires basic load balancer, follow these instructions:
a. Create the frontend IP addresses
a. IP address 10.0.0.4 for NW1
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.4 )
d. Click OK
b. IP address 10.0.0.5 for NW2
Repeat the steps above for NW2
b. Create the backend pools
a. Connected to primary network interfaces of all virtual machines that should be part of
the NFS cluster
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw-backend )
c. Click Add a virtual machine
d. Select the Availability Set you created earlier
e. Select the virtual machines of the NFS cluster
f. Click OK
c. Create the health probes
a. Port 61000 for NW1
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-hp )
c. Select TCP as protocol, port 61000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 61001 for NW2
Repeat the steps above to create a health probe for NW2
d. Load balancing rules
a. 2049 TCP for NW1
a. Open the load balancer, select load balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-2049 )
c. Select the frontend IP address, backend pool, and health probe you created earlier
(for example nw1-frontend )
d. Keep protocol TCP , enter port 2049
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. 2049 UDP for NW1
Repeat the steps above for port 2049 and UDP for NW1
c. 2049 TCP for NW2
Repeat the steps above for port 2049 and TCP for NW2
d. 2049 UDP for NW2
Repeat the steps above for port 2049 and UDP for NW2

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.

NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow
routing to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for
Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.

IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.

Create Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to create a basic Pacemaker
cluster for this NFS server.
Configure NFS server
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
1. [A] Setup host name resolution
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use
the /etc/hosts file. Replace the IP address and the hostname in the following commands
sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP address of the load balancer frontend configuration for NFS


10.0.0.4 nw1-nfs
10.0.0.5 nw2-nfs

2. [A] Enable NFS server


Create the root NFS export entry

sudo sh -c 'echo /srv/nfs/ *\(rw,no_root_squash,fsid=0\)>/etc/exports'

sudo mkdir /srv/nfs/

3. [A] Install drbd components

sudo zypper install drbd drbd-kmp-default drbd-utils

4. [A] Create a partition for the drbd devices


List all available data disks

sudo ls /dev/disk/azure/scsi1/

Example output

lun0 lun1

Create partitions for every data disk

sudo sh -c 'echo -e "n\n\n\n\n\nw\n" | fdisk /dev/disk/azure/scsi1/lun0'


sudo sh -c 'echo -e "n\n\n\n\n\nw\n" | fdisk /dev/disk/azure/scsi1/lun1'

5. [A] Create LVM configurations


List all available partitions

ls /dev/disk/azure/scsi1/lun*-part*

Example output

/dev/disk/azure/scsi1/lun0-part1 /dev/disk/azure/scsi1/lun1-part1

Create LVM volumes for every partition


sudo pvcreate /dev/disk/azure/scsi1/lun0-part1
sudo vgcreate vg-NW1-NFS /dev/disk/azure/scsi1/lun0-part1
sudo lvcreate -l 100%FREE -n NW1 vg-NW1-NFS

sudo pvcreate /dev/disk/azure/scsi1/lun1-part1


sudo vgcreate vg-NW2-NFS /dev/disk/azure/scsi1/lun1-part1
sudo lvcreate -l 100%FREE -n NW2 vg-NW2-NFS

6. [A] Configure drbd

sudo vi /etc/drbd.conf

Make sure that the drbd.conf file contains the following two lines

include "drbd.d/global_common.conf";
include "drbd.d/*.res";

Change the global drbd configuration

sudo vi /etc/drbd.d/global_common.conf

Add the following entries to the handlers and net sections.

global {
usage-count no;
}
common {
handlers {
fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
split-brain "/usr/lib/drbd/notify-split-brain.sh root";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-
emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
}
startup {
wfc-timeout 0;
}
options {
}
disk {
md-flushes yes;
disk-flushes yes;
c-plan-ahead 1;
c-min-rate 100M;
c-fill-target 20M;
c-max-rate 4G;
}
net {
after-sb-0pri discard-younger-primary;
after-sb-1pri discard-secondary;
after-sb-2pri call-pri-lost-after-sb;
protocol C;
tcp-cork yes;
max-buffers 20000;
max-epoch-size 20000;
sndbuf-size 0;
rcvbuf-size 0;
}
}
7. [A] Create the NFS drbd devices

sudo vi /etc/drbd.d/NW1-nfs.res

Insert the configuration for the new drbd device and exit

resource NW1-nfs {
protocol C;
disk {
on-io-error detach;
}
on prod-nfs-0 {
address 10.0.0.6:7790;
device /dev/drbd0;
disk /dev/vg-NW1-NFS/NW1;
meta-disk internal;
}
on prod-nfs-1 {
address 10.0.0.7:7790;
device /dev/drbd0;
disk /dev/vg-NW1-NFS/NW1;
meta-disk internal;
}
}

sudo vi /etc/drbd.d/NW2-nfs.res

Insert the configuration for the new drbd device and exit

resource NW2-nfs {
protocol C;
disk {
on-io-error detach;
}
on prod-nfs-0 {
address 10.0.0.6:7791;
device /dev/drbd1;
disk /dev/vg-NW2-NFS/NW2;
meta-disk internal;
}
on prod-nfs-1 {
address 10.0.0.7:7791;
device /dev/drbd1;
disk /dev/vg-NW2-NFS/NW2;
meta-disk internal;
}
}

Create the drbd device and start it

sudo drbdadm create-md NW1-nfs


sudo drbdadm create-md NW2-nfs
sudo drbdadm up NW1-nfs
sudo drbdadm up NW2-nfs

8. [1] Skip initial synchronization


sudo drbdadm new-current-uuid --clear-bitmap NW1-nfs
sudo drbdadm new-current-uuid --clear-bitmap NW2-nfs

9. [1] Set the primary node

sudo drbdadm primary --force NW1-nfs


sudo drbdadm primary --force NW2-nfs

10. [1] Wait until the new drbd devices are synchronized

sudo drbdsetup wait-sync-resource NW1-nfs


sudo drbdsetup wait-sync-resource NW2-nfs

11. [1] Create file systems on the drbd devices

sudo mkfs.xfs /dev/drbd0


sudo mkdir /srv/nfs/NW1
sudo chattr +i /srv/nfs/NW1
sudo mount -t xfs /dev/drbd0 /srv/nfs/NW1
sudo mkdir /srv/nfs/NW1/sidsys
sudo mkdir /srv/nfs/NW1/sapmntsid
sudo mkdir /srv/nfs/NW1/trans
sudo mkdir /srv/nfs/NW1/ASCS
sudo mkdir /srv/nfs/NW1/ASCSERS
sudo mkdir /srv/nfs/NW1/SCS
sudo mkdir /srv/nfs/NW1/SCSERS
sudo umount /srv/nfs/NW1

sudo mkfs.xfs /dev/drbd1


sudo mkdir /srv/nfs/NW2
sudo chattr +i /srv/nfs/NW2
sudo mount -t xfs /dev/drbd1 /srv/nfs/NW2
sudo mkdir /srv/nfs/NW2/sidsys
sudo mkdir /srv/nfs/NW2/sapmntsid
sudo mkdir /srv/nfs/NW2/trans
sudo mkdir /srv/nfs/NW2/ASCS
sudo mkdir /srv/nfs/NW2/ASCSERS
sudo mkdir /srv/nfs/NW2/SCS
sudo mkdir /srv/nfs/NW2/SCSERS
sudo umount /srv/nfs/NW2

12. [A] Setup drbd split-brain detection


When using drbd to synchronize data from one host to another, a so-called split brain can occur. A split
brain is a scenario where both cluster nodes promoted the drbd device to be the primary and went out of
sync. It might be a rare situation, but you still want to handle and resolve a split brain as quickly as possible. It
is therefore important to be notified when a split brain has happened.
Read the official drbd documentation on how to set up a split brain notification.
It is also possible to automatically recover from a split brain scenario. For more information, read
Automatic split brain recovery policies
Configure Cluster Framework
1. [1] Add the NFS drbd devices for SAP system NW1 to the cluster configuration
IMPORTANT
Recent testing revealed situations, where netcat stops responding to requests due to backlog and its limitation of
handling only one connection. The netcat resource stops listening to the Azure Load balancer requests and the
floating IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we
recommend using azure-lb resource agent, which is part of package resource-agents, with the following package
version requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change will require brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure Load-
Balancer Detection Hardening, there is no requirement to switch immediately to azure-lb resource agent.

sudo crm configure rsc_defaults resource-stickiness="200"

# Enable maintenance mode
sudo crm configure property maintenance-mode=true

sudo crm configure primitive drbd_NW1_nfs \
  ocf:linbit:drbd \
  params drbd_resource="NW1-nfs" \
  op monitor interval="15" role="Master" \
  op monitor interval="30" role="Slave"

sudo crm configure ms ms-drbd_NW1_nfs drbd_NW1_nfs \
  meta master-max="1" master-node-max="1" clone-max="2" \
  clone-node-max="1" notify="true" interleave="true"

sudo crm configure primitive fs_NW1_sapmnt \
  ocf:heartbeat:Filesystem \
  params device=/dev/drbd0 \
  directory=/srv/nfs/NW1 \
  fstype=xfs \
  op monitor interval="10s"

sudo crm configure primitive nfsserver systemd:nfs-server \
  op monitor interval="30s"
sudo crm configure clone cl-nfsserver nfsserver

sudo crm configure primitive exportfs_NW1 \
  ocf:heartbeat:exportfs \
  params directory="/srv/nfs/NW1" \
  options="rw,no_root_squash,crossmnt" clientspec="*" fsid=1 wait_for_leasetime_on_stop=true op monitor interval="30s"

sudo crm configure primitive vip_NW1_nfs \
  IPaddr2 \
  params ip=10.0.0.4 cidr_netmask=24 op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW1_nfs azure-lb port=61000

sudo crm configure group g-NW1_nfs \
  fs_NW1_sapmnt exportfs_NW1 nc_NW1_nfs vip_NW1_nfs

sudo crm configure order o-NW1_drbd_before_nfs inf: \
  ms-drbd_NW1_nfs:promote g-NW1_nfs:start

sudo crm configure colocation col-NW1_nfs_on_drbd inf: \
  g-NW1_nfs ms-drbd_NW1_nfs:Master
2. [1] Add the NFS drbd devices for SAP system NW2 to the cluster configuration

# Enable maintenance mode
sudo crm configure property maintenance-mode=true

sudo crm configure primitive drbd_NW2_nfs \
  ocf:linbit:drbd \
  params drbd_resource="NW2-nfs" \
  op monitor interval="15" role="Master" \
  op monitor interval="30" role="Slave"

sudo crm configure ms ms-drbd_NW2_nfs drbd_NW2_nfs \
  meta master-max="1" master-node-max="1" clone-max="2" \
  clone-node-max="1" notify="true" interleave="true"

sudo crm configure primitive fs_NW2_sapmnt \
  ocf:heartbeat:Filesystem \
  params device=/dev/drbd1 \
  directory=/srv/nfs/NW2 \
  fstype=xfs \
  op monitor interval="10s"

sudo crm configure primitive exportfs_NW2 \
  ocf:heartbeat:exportfs \
  params directory="/srv/nfs/NW2" \
  options="rw,no_root_squash,crossmnt" clientspec="*" fsid=2 wait_for_leasetime_on_stop=true op monitor interval="30s"

sudo crm configure primitive vip_NW2_nfs \
  IPaddr2 \
  params ip=10.0.0.5 cidr_netmask=24 op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW2_nfs azure-lb port=61001

sudo crm configure group g-NW2_nfs \
  fs_NW2_sapmnt exportfs_NW2 nc_NW2_nfs vip_NW2_nfs

sudo crm configure order o-NW2_drbd_before_nfs inf: \
  ms-drbd_NW2_nfs:promote g-NW2_nfs:start

sudo crm configure colocation col-NW2_nfs_on_drbd inf: \
  g-NW2_nfs ms-drbd_NW2_nfs:Master

The crossmnt option in the exportfs cluster resources is present in our documentation for backward
compatibility with older SLES versions.
3. [1] Disable maintenance mode

sudo crm configure property maintenance-mode=false

Next steps
Install the SAP ASCS and database
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
GlusterFS on Azure VMs on Red Hat Enterprise
Linux for SAP NetWeaver
12/22/2020 • 9 minutes to read

This article describes how to deploy the virtual machines, configure the virtual machines, and install a GlusterFS
cluster that can be used to store the shared data of a highly available SAP system. This guide describes how to set
up GlusterFS that is used by two SAP systems, NW1 and NW2. The names of the resources (for example virtual
machines, virtual networks) in the example assume that you have used the SAP file server template with resource
prefix glust .
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux (this article)
Azure Virtual Machines DBMS deployment for SAP on Linux
Product Documentation for Red Hat Gluster Storage
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Azure specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure
Overview
To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is configured in a separate cluster
and can be used by multiple SAP systems.

Set up GlusterFS
You can either use an Azure template from GitHub to deploy all required Azure resources, including the virtual
machines, availability set, and network interfaces, or you can deploy the resources manually.
Deploy Linux via Azure Template
The Azure Marketplace contains an image for Red Hat Enterprise Linux that you can use to deploy new virtual
machines. You can use one of the quickstart templates on github to deploy all required resources. The template
deploys the virtual machines, availability set etc. Follow these steps to deploy the template:
1. Open the SAP file server template in the Azure portal
2. Enter the following parameters
a. Resource Prefix
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. SAP System Count Enter the number of SAP systems that will use this file server. This will deploy the
required number of disks etc.
c. Os Type
Select one of the Linux distributions. For this example, select RHEL 7
d. Admin Username, Admin Password or SSH key
A new user is created that can be used to log on to the machine.
e. Subnet ID
If you want to deploy the VM into an existing VNet where you have a subnet defined the VM should be
assigned to, name the ID of that specific subnet. The ID usually looks like
/subscriptions/<subscription ID> /resourceGroups/<resource group
name> /providers/Microsoft.Network/virtualNetworks/<vir tual network name> /subnets/<subnet
name>
Deploy Linux manually via Azure portal
You first need to create the virtual machines for this cluster. Afterwards, you create a load balancer and use the
virtual machines in the backend pools. We recommend standard load balancer.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set
Set max update domain
4. Create Virtual Machine 1
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
5. Create Virtual Machine 2
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
6. Add one data disk for each SAP system to both virtual machines.
Configure GlusterFS
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1, [2] -
only applicable to node 2, [3] - only applicable to node 3.
1. [A] Setup host name resolution
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP addresses of the Gluster nodes


10.0.0.40 glust-0
10.0.0.41 glust-1
10.0.0.42 glust-2

2. [A] Register
Register your virtual machines and attach them to a pool that contains repositories for RHEL 7 and GlusterFS

sudo subscription-manager register


sudo subscription-manager attach --pool=<pool id>

3. [A] Enable GlusterFS repos


In order to install the required packages, enable the following repositories.
sudo subscription-manager repos --disable "*"
sudo subscription-manager repos --enable=rhel-7-server-rpms
sudo subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms

4. [A] Install GlusterFS packages


Install these packages on all GlusterFS nodes

sudo yum -y install redhat-storage-server

Reboot the nodes after the installation.


5. [A] Modify Firewall
Add firewall rules to allow client traffic to the GlusterFS nodes.

# list the available zones


firewall-cmd --get-active-zones

sudo firewall-cmd --zone=public --add-service=glusterfs --permanent


sudo firewall-cmd --zone=public --add-service=glusterfs

6. [A] Enable and start GlusterFS service


Start the GlusterFS service on all nodes.

sudo systemctl start glusterd


sudo systemctl enable glusterd

7. [1] Create GlusterFS


Run the following commands to create the GlusterFS cluster

sudo gluster peer probe glust-1


sudo gluster peer probe glust-2

# Check gluster peer status


sudo gluster peer status

# Number of Peers: 2
#
# Hostname: glust-1
# Uuid: 10d43840-fee4-4120-bf5a-de9c393964cd
# State: Accepted peer request (Connected)
#
# Hostname: glust-2
# Uuid: 9e340385-12fe-495e-ab0f-4f851b588cba
# State: Accepted peer request (Connected)

8. [2] Test peer status


Test the peer status on the second node
sudo gluster peer status
# Number of Peers: 2
#
# Hostname: glust-0
# Uuid: 6bc6927b-7ee2-461b-ad04-da123124d6bd
# State: Peer in Cluster (Connected)
#
# Hostname: glust-2
# Uuid: 9e340385-12fe-495e-ab0f-4f851b588cba
# State: Peer in Cluster (Connected)

9. [3] Test peer status


Test the peer status on the third node

sudo gluster peer status


# Number of Peers: 2
#
# Hostname: glust-0
# Uuid: 6bc6927b-7ee2-461b-ad04-da123124d6bd
# State: Peer in Cluster (Connected)
#
# Hostname: glust-1
# Uuid: 10d43840-fee4-4120-bf5a-de9c393964cd
# State: Peer in Cluster (Connected)

10. [A] Create LVM


In this example, the GlusterFS is used for two SAP systems, NW1 and NW2. Use the following commands
to create LVM configurations for these SAP systems.
Use these commands for NW1
sudo pvcreate --dataalignment 1024K /dev/disk/azure/scsi1/lun0
sudo pvscan
sudo vgcreate --physicalextentsize 256K rhgs-NW1 /dev/disk/azure/scsi1/lun0
sudo vgscan
sudo lvcreate -l 50%FREE -n rhgs-NW1/sapmnt
sudo lvcreate -l 20%FREE -n rhgs-NW1/trans
sudo lvcreate -l 10%FREE -n rhgs-NW1/sys
sudo lvcreate -l 50%FREE -n rhgs-NW1/ascs
sudo lvcreate -l 100%FREE -n rhgs-NW1/aers
sudo lvscan

sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/sapmnt


sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/trans
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/sys
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/ascs
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/aers

sudo mkdir -p /rhs/NW1/sapmnt


sudo mkdir -p /rhs/NW1/trans
sudo mkdir -p /rhs/NW1/sys
sudo mkdir -p /rhs/NW1/ascs
sudo mkdir -p /rhs/NW1/aers

sudo chattr +i /rhs/NW1/sapmnt


sudo chattr +i /rhs/NW1/trans
sudo chattr +i /rhs/NW1/sys
sudo chattr +i /rhs/NW1/ascs
sudo chattr +i /rhs/NW1/aers

echo -e "/dev/rhgs-NW1/sapmnt\t/rhs/NW1/sapmnt\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" |


sudo tee -a /etc/fstab
echo -e "/dev/rhgs-NW1/trans\t/rhs/NW1/trans\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" |
sudo tee -a /etc/fstab
echo -e "/dev/rhgs-NW1/sys\t/rhs/NW1/sys\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo
tee -a /etc/fstab
echo -e "/dev/rhgs-NW1/ascs\t/rhs/NW1/ascs\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo
tee -a /etc/fstab
echo -e "/dev/rhgs-NW1/aers\t/rhs/NW1/aers\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo
tee -a /etc/fstab
sudo mount -a

Use these commands for NW2


sudo pvcreate --dataalignment 1024K /dev/disk/azure/scsi1/lun1
sudo pvscan
sudo vgcreate --physicalextentsize 256K rhgs-NW2 /dev/disk/azure/scsi1/lun1
sudo vgscan
sudo lvcreate -l 50%FREE -n rhgs-NW2/sapmnt
sudo lvcreate -l 20%FREE -n rhgs-NW2/trans
sudo lvcreate -l 10%FREE -n rhgs-NW2/sys
sudo lvcreate -l 50%FREE -n rhgs-NW2/ascs
sudo lvcreate -l 100%FREE -n rhgs-NW2/aers

sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/sapmnt


sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/trans
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/sys
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/ascs
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/aers

sudo mkdir -p /rhs/NW2/sapmnt


sudo mkdir -p /rhs/NW2/trans
sudo mkdir -p /rhs/NW2/sys
sudo mkdir -p /rhs/NW2/ascs
sudo mkdir -p /rhs/NW2/aers

sudo chattr +i /rhs/NW2/sapmnt


sudo chattr +i /rhs/NW2/trans
sudo chattr +i /rhs/NW2/sys
sudo chattr +i /rhs/NW2/ascs
sudo chattr +i /rhs/NW2/aers
sudo lvscan

echo -e "/dev/rhgs-NW2/sapmnt\t/rhs/NW2/sapmnt\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" |


sudo tee -a /etc/fstab
echo -e "/dev/rhgs-NW2/trans\t/rhs/NW2/trans\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" |
sudo tee -a /etc/fstab
echo -e "/dev/rhgs-NW2/sys\t/rhs/NW2/sys\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo
tee -a /etc/fstab
echo -e "/dev/rhgs-NW2/ascs\t/rhs/NW2/ascs\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo
tee -a /etc/fstab
echo -e "/dev/rhgs-NW2/aers\t/rhs/NW2/aers\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo
tee -a /etc/fstab
sudo mount -a

11. [1] Create the distributed volume


Use the following commands to create the GlusterFS volume for NW1 and start it.

sudo gluster vol create NW1-sapmnt replica 3 glust-0:/rhs/NW1/sapmnt glust-1:/rhs/NW1/sapmnt glust-2:/rhs/NW1/sapmnt force
sudo gluster vol create NW1-trans replica 3 glust-0:/rhs/NW1/trans glust-1:/rhs/NW1/trans glust-2:/rhs/NW1/trans force
sudo gluster vol create NW1-sys replica 3 glust-0:/rhs/NW1/sys glust-1:/rhs/NW1/sys glust-2:/rhs/NW1/sys force
sudo gluster vol create NW1-ascs replica 3 glust-0:/rhs/NW1/ascs glust-1:/rhs/NW1/ascs glust-2:/rhs/NW1/ascs force
sudo gluster vol create NW1-aers replica 3 glust-0:/rhs/NW1/aers glust-1:/rhs/NW1/aers glust-2:/rhs/NW1/aers force

sudo gluster volume start NW1-sapmnt


sudo gluster volume start NW1-trans
sudo gluster volume start NW1-sys
sudo gluster volume start NW1-ascs
sudo gluster volume start NW1-aers

Use the following commands to create the GlusterFS volume for NW2 and start it.
sudo gluster vol create NW2-sapmnt replica 3 glust-0:/rhs/NW2/sapmnt glust-1:/rhs/NW2/sapmnt glust-2:/rhs/NW2/sapmnt force
sudo gluster vol create NW2-trans replica 3 glust-0:/rhs/NW2/trans glust-1:/rhs/NW2/trans glust-2:/rhs/NW2/trans force
sudo gluster vol create NW2-sys replica 3 glust-0:/rhs/NW2/sys glust-1:/rhs/NW2/sys glust-2:/rhs/NW2/sys force
sudo gluster vol create NW2-ascs replica 3 glust-0:/rhs/NW2/ascs glust-1:/rhs/NW2/ascs glust-2:/rhs/NW2/ascs force
sudo gluster vol create NW2-aers replica 3 glust-0:/rhs/NW2/aers glust-1:/rhs/NW2/aers glust-2:/rhs/NW2/aers force

sudo gluster volume start NW2-sapmnt


sudo gluster volume start NW2-trans
sudo gluster volume start NW2-sys
sudo gluster volume start NW2-ascs
sudo gluster volume start NW2-aers

Next steps
Install the SAP ASCS and database
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
Setting up Pacemaker on SUSE Linux Enterprise
Server in Azure
12/22/2020 • 19 minutes to read

There are two options to set up a Pacemaker cluster in Azure. You can either use a fencing agent, which takes
care of restarting a failed node via the Azure APIs, or you can use an SBD device.
The SBD device requires at least one additional virtual machine that acts as an iSCSI target server and
provides an SBD device. These iSCSI target servers can, however, be shared with other Pacemaker clusters. The
advantage of using an SBD device is that, if you are already using SBD devices on-premises, it doesn't require any
changes to how you operate the Pacemaker cluster. You can use up to three SBD devices for a Pacemaker
cluster to allow an SBD device to become unavailable, for example during OS patching of the iSCSI target
server. If you want to use more than one SBD device per Pacemaker, make sure to deploy multiple iSCSI target
servers and connect one SBD from each iSCSI target server. We recommend using either one SBD device or
three. Pacemaker will not be able to automatically fence a cluster node if you only configure two SBD devices
and one of them is not available. If you want to be able to fence when one iSCSI target server is down, you
have to use three SBD devices and therefore three iSCSI target servers, which is the most resilient
configuration when using SBDs.
Azure Fence agent doesn't require deploying additional virtual machine(s).

IMPORTANT
When planning and deploying Linux Pacemaker clustered nodes and SBD devices, it is essential for the overall reliability
of the complete cluster configuration that the routing between the VMs involved and the VM(s) hosting the SBD
device(s) is not passing through any other devices like NVAs. Otherwise, issues and maintenance events with the NVA
can have a negative impact on the stability and reliability of the overall cluster configuration. In order to avoid such
obstacles, don't define routing rules of NVAs or User Defined Routing rules that route traffic between clustered nodes
and SBD devices through NVAs and similar devices when planning and deploying Linux Pacemaker clustered nodes and
SBD devices.
SBD fencing
Follow these steps if you want to use an SBD device for fencing.
Set up iSCSI target servers
You first need to create the iSCSI target virtual machines. iSCSI target servers can be shared with multiple
Pacemaker clusters.
1. Deploy new SLES 12 SP1 or higher virtual machines and connect to them via ssh. The machines don't need
to be large. A virtual machine size like Standard_E2s_v3 or Standard_D2s_v3 is sufficient. Make sure to use
Premium storage for the OS disk.
Run the following commands on all iSCSI target vir tual machines .
1. Update SLES

sudo zypper update

NOTE
You might need to reboot the OS after you upgrade or update the OS.

2. Remove packages
To avoid a known issue with targetcli and SLES 12 SP3, uninstall the following packages. You can ignore
errors about packages that cannot be found

sudo zypper remove lio-utils python-rtslib python-configshell targetcli

3. Install iSCSI target packages

sudo zypper install targetcli-fb dbus-1-python

4. Enable the iSCSI target service

sudo systemctl enable targetcli


sudo systemctl start targetcli

Create iSCSI device on iSCSI target server


Run the following commands on all iSCSI target vir tual machines to create the iSCSI disks for the clusters
used by your SAP systems. In the following example, SBD devices for multiple clusters are created. It shows
you how you would use one iSCSI target server for multiple clusters. The SBD devices are placed on the OS
disk. Make sure that you have enough space.
nfs is used to identify the NFS cluster, ascsnw1 is used to identify the ASCS cluster of NW1 , dbnw1 is used
to identify the database cluster of NW1 , nfs-0 and nfs-1 are the hostnames of the NFS cluster nodes, nw1-
xscs-0 and nw1-xscs-1 are the hostnames of the NW1 ASCS cluster nodes, and nw1-db-0 and nw1-db-1
are the hostnames of the database cluster nodes. Replace them with the hostnames of your cluster nodes and
the SID of your SAP system.
# Create the root folder for all SBD devices
sudo mkdir /sbd

# Create the SBD device for the NFS server


sudo targetcli backstores/fileio create sbdnfs /sbd/sbdnfs 50M write_back=false
sudo targetcli iscsi/ create iqn.2006-04.nfs.local:nfs
sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/luns/ create /backstores/fileio/sbdnfs
sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.nfs-0.local:nfs-0
sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.nfs-1.local:nfs-1

# Create the SBD device for the ASCS server of SAP System NW1
sudo targetcli backstores/fileio create sbdascsnw1 /sbd/sbdascsnw1 50M write_back=false
sudo targetcli iscsi/ create iqn.2006-04.ascsnw1.local:ascsnw1
sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/luns/ create /backstores/fileio/sbdascsnw1
sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/acls/ create iqn.2006-04.nw1-xscs-0.local:nw1-xscs-0
sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/acls/ create iqn.2006-04.nw1-xscs-1.local:nw1-xscs-1

# Create the SBD device for the database cluster of SAP System NW1
sudo targetcli backstores/fileio create sbddbnw1 /sbd/sbddbnw1 50M write_back=false
sudo targetcli iscsi/ create iqn.2006-04.dbnw1.local:dbnw1
sudo targetcli iscsi/iqn.2006-04.dbnw1.local:dbnw1/tpg1/luns/ create /backstores/fileio/sbddbnw1
sudo targetcli iscsi/iqn.2006-04.dbnw1.local:dbnw1/tpg1/acls/ create iqn.2006-04.nw1-db-0.local:nw1-db-0
sudo targetcli iscsi/iqn.2006-04.dbnw1.local:dbnw1/tpg1/acls/ create iqn.2006-04.nw1-db-1.local:nw1-db-1

# save the targetcli changes


sudo targetcli saveconfig

You can check if everything was set up correctly with

sudo targetcli ls

o- / ......................................................................... [...]
  o- backstores .............................................................. [...]
  | o- block .................................................. [Storage Objects: 0]
  | o- fileio ................................................. [Storage Objects: 3]
  | | o- sbdascsnw1 ............... [/sbd/sbdascsnw1 (50.0MiB) write-thru activated]
  | | | o- alua ................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
  | | o- sbddbnw1 ................... [/sbd/sbddbnw1 (50.0MiB) write-thru activated]
  | | | o- alua ................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
  | | o- sbdnfs ....................... [/sbd/sbdnfs (50.0MiB) write-thru activated]
  | |   o- alua ................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
  | o- pscsi .................................................. [Storage Objects: 0]
  | o- ramdisk ................................................ [Storage Objects: 0]
  o- iscsi ............................................................ [Targets: 3]
  | o- iqn.2006-04.ascsnw1.local:ascsnw1 ................................. [TPGs: 1]
  | | o- tpg1 ................................................ [no-gen-acls, no-auth]
  | |   o- acls .......................................................... [ACLs: 2]
  | |   | o- iqn.2006-04.nw1-xscs-0.local:nw1-xscs-0 ............... [Mapped LUNs: 1]
  | |   | | o- mapped_lun0 ............................ [lun0 fileio/sbdascsnw1 (rw)]
  | |   | o- iqn.2006-04.nw1-xscs-1.local:nw1-xscs-1 ............... [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ............................ [lun0 fileio/sbdascsnw1 (rw)]
  | |   o- luns .......................................................... [LUNs: 1]
  | |   | o- lun0 ............ [fileio/sbdascsnw1 (/sbd/sbdascsnw1) (default_tg_pt_gp)]
  | |   o- portals .................................................... [Portals: 1]
  | |     o- 0.0.0.0:3260 ................................................... [OK]
  | o- iqn.2006-04.dbnw1.local:dbnw1 ..................................... [TPGs: 1]
  | | o- tpg1 ................................................ [no-gen-acls, no-auth]
  | |   o- acls .......................................................... [ACLs: 2]
  | |   | o- iqn.2006-04.nw1-db-0.local:nw1-db-0 ................... [Mapped LUNs: 1]
  | |   | | o- mapped_lun0 .............................. [lun0 fileio/sbddbnw1 (rw)]
  | |   | o- iqn.2006-04.nw1-db-1.local:nw1-db-1 ................... [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 .............................. [lun0 fileio/sbddbnw1 (rw)]
  | |   o- luns .......................................................... [LUNs: 1]
  | |   | o- lun0 ................ [fileio/sbddbnw1 (/sbd/sbddbnw1) (default_tg_pt_gp)]
  | |   o- portals .................................................... [Portals: 1]
  | |     o- 0.0.0.0:3260 ................................................... [OK]
  | o- iqn.2006-04.nfs.local:nfs ......................................... [TPGs: 1]
  |   o- tpg1 ................................................ [no-gen-acls, no-auth]
  |     o- acls .......................................................... [ACLs: 2]
  |     | o- iqn.2006-04.nfs-0.local:nfs-0 ......................... [Mapped LUNs: 1]
  |     | | o- mapped_lun0 ................................ [lun0 fileio/sbdnfs (rw)]
  |     | o- iqn.2006-04.nfs-1.local:nfs-1 ......................... [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ................................ [lun0 fileio/sbdnfs (rw)]
  |     o- luns .......................................................... [LUNs: 1]
  |     | o- lun0 .................... [fileio/sbdnfs (/sbd/sbdnfs) (default_tg_pt_gp)]
  |     o- portals .................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ................................................... [OK]
  o- loopback ......................................................... [Targets: 0]
  o- vhost ............................................................ [Targets: 0]
  o- xen-pvscsi ....................................................... [Targets: 0]

Set up SBD device


Connect to the iSCSI device that was created in the last step from the cluster. Run the following commands on
the nodes of the new cluster you want to create. The following items are prefixed with either [A] - applicable to
all nodes, [1] - only applicable to node 1 or [2] - only applicable to node 2.
1. [A] Connect to the iSCSI devices
First, enable the iSCSI and SBD services.

sudo systemctl enable iscsid


sudo systemctl enable iscsi
sudo systemctl enable sbd

2. [1] Change the initiator name on the first node

sudo vi /etc/iscsi/initiatorname.iscsi

Change the content of the file to match the ACLs you used when creating the iSCSI device on the iSCSI
target server, for example for the NFS server.

InitiatorName=iqn.2006-04.nfs-0.local:nfs-0

3. [2] Change the initiator name on the second node

sudo vi /etc/iscsi/initiatorname.iscsi

Change the content of the file to match the ACLs you used when creating the iSCSI device on the iSCSI
target server

InitiatorName=iqn.2006-04.nfs-1.local:nfs-1

4. [A] Restart the iSCSI service


Now restart the iSCSI service to apply the change

sudo systemctl restart iscsid


sudo systemctl restart iscsi

Connect the iSCSI devices. In the example below, 10.0.0.17 is the IP address of the iSCSI target server
and 3260 is the default port. iqn.2006-04.nfs.local:nfs is one of the target names that is listed when
you run the first command below (iscsiadm -m discovery).
sudo iscsiadm -m discovery --type=st --portal=10.0.0.17:3260
sudo iscsiadm -m node -T iqn.2006-04.nfs.local:nfs --login --portal=10.0.0.17:3260
sudo iscsiadm -m node -p 10.0.0.17:3260 -T iqn.2006-04.nfs.local:nfs --op=update --name=node.startup --value=automatic

# If you want to use multiple SBD devices, also connect to the second iSCSI target server
sudo iscsiadm -m discovery --type=st --portal=10.0.0.18:3260
sudo iscsiadm -m node -T iqn.2006-04.nfs.local:nfs --login --portal=10.0.0.18:3260
sudo iscsiadm -m node -p 10.0.0.18:3260 -T iqn.2006-04.nfs.local:nfs --op=update --name=node.startup --value=automatic

# If you want to use multiple SBD devices, also connect to the third iSCSI target server
sudo iscsiadm -m discovery --type=st --portal=10.0.0.19:3260
sudo iscsiadm -m node -T iqn.2006-04.nfs.local:nfs --login --portal=10.0.0.19:3260
sudo iscsiadm -m node -p 10.0.0.19:3260 -T iqn.2006-04.nfs.local:nfs --op=update --name=node.startup --value=automatic
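As an optional sanity check (not part of the original procedure), you can list the active iSCSI sessions on each cluster node to confirm that the discovery and login commands above succeeded:

# Optional: confirm that the iSCSI sessions to the target server(s) are established
sudo iscsiadm -m session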

Make sure that the iSCSI devices are available and note down the device name (in the following
example /dev/sde)

lsscsi

# [2:0:0:0] disk Msft Virtual Disk 1.0 /dev/sda


# [3:0:1:0] disk Msft Virtual Disk 1.0 /dev/sdb
# [5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc
# [5:0:0:1] disk Msft Virtual Disk 1.0 /dev/sdd
# [6:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sdd
# [7:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sde
# [8:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sdf

Now, retrieve the IDs of the iSCSI devices.

ls -l /dev/disk/by-id/scsi-* | grep sdd

# lrwxrwxrwx 1 root root 9 Aug 9 13:20 /dev/disk/by-id/scsi-1LIO-ORG_sbdnfs:afb0ba8d-3a3c-413b-8cc2-cca03e63ef42 -> ../../sdd
# lrwxrwxrwx 1 root root 9 Aug 9 13:20 /dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03 -> ../../sdd
# lrwxrwxrwx 1 root root 9 Aug 9 13:20 /dev/disk/by-id/scsi-SLIO-ORG_sbdnfs_afb0ba8d-3a3c-413b-8cc2-cca03e63ef42 -> ../../sdd

ls -l /dev/disk/by-id/scsi-* | grep sde

# lrwxrwxrwx 1 root root 9 Feb 7 12:39 /dev/disk/by-id/scsi-1LIO-ORG_cl1:3fe4da37-1a5a-4bb6-9a41-9a4df57770e4 -> ../../sde
# lrwxrwxrwx 1 root root 9 Feb 7 12:39 /dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df -> ../../sde
# lrwxrwxrwx 1 root root 9 Feb 7 12:39 /dev/disk/by-id/scsi-SLIO-ORG_cl1_3fe4da37-1a5a-4bb6-9a41-9a4df57770e4 -> ../../sde

ls -l /dev/disk/by-id/scsi-* | grep sdf

# lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-1LIO-ORG_sbdnfs:f88f30e7-c968-4678-bc87-fe7bfcbdb625 -> ../../sdf
# lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf -> ../../sdf
# lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-SLIO-ORG_sbdnfs_f88f30e7-c968-4678-bc87-fe7bfcbdb625 -> ../../sdf

The command lists three device IDs for every SBD device. We recommend using the ID that starts with
scsi-3. In the example above, these are
/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03
/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df
/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf
5. [1] Create the SBD device
Use the device ID of the iSCSI devices to create the new SBD devices on the first cluster node.

sudo sbd -d /dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03 -1 60 -4 120 create

# Also create the second and third SBD devices if you want to use more than one.
sudo sbd -d /dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df -1 60 -4 120 create
sudo sbd -d /dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf -1 60 -4 120 create
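Optionally, you can verify the SBD metadata that was just written. The sketch below uses the first device ID from the example above; adjust it to the device IDs of your environment.

# Optional: dump the SBD header of a device to confirm the create step succeeded
sudo sbd -d /dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03 dump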

6. [A] Adapt the SBD config


Open the SBD config file

sudo vi /etc/sysconfig/sbd

Change the property of the SBD device, enable the pacemaker integration, and change the start mode
of SBD.

[...]
SBD_DEVICE="/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df;/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf"
[...]
SBD_PACEMAKER="yes"
[...]
SBD_STARTMODE="always"
[...]

Create the softdog configuration file

echo softdog | sudo tee /etc/modules-load.d/softdog.conf

Now load the module

sudo modprobe -v softdog
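As an optional check, you can confirm that the softdog module is loaded and that a watchdog device is present. This assumes the default watchdog device path /dev/watchdog.

# Optional: verify the softdog module and the watchdog device
lsmod | grep softdog
ls -l /dev/watchdog*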

Cluster installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2]
- only applicable to node 2.
1. [A] Update SLES

sudo zypper update

2. [A] Install the socat component, needed for cluster resources

sudo zypper in socat


3. [A] Install the azure-lb component, needed for cluster resources

sudo zypper in resource-agents

NOTE
Check the version of package resource-agents and make sure the minimum version requirements are met:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.

4. [A] Configure the operating system


In some cases, Pacemaker creates many processes and thereby exhausts the allowed number of
processes. In such a case, a heartbeat between the cluster nodes might fail and lead to failover of your
resources. We recommend increasing the maximum allowed processes by setting the following
parameter.

# Edit the configuration file


sudo vi /etc/systemd/system.conf

# Change the DefaultTasksMax


#DefaultTasksMax=512
DefaultTasksMax=4096

#and to activate this setting


sudo systemctl daemon-reload

# test if the change was successful


sudo systemctl --no-pager show | grep DefaultTasksMax

Reduce the size of the dirty cache. For more information, see Low write performance on SLES 11/12
servers with large RAM.

sudo vi /etc/sysctl.conf

# Change/set the following settings


vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800
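To apply and verify the new values without a reboot, you can run the following quick check, assuming the settings were added to /etc/sysctl.conf as shown above:

# Apply the sysctl settings and confirm the effective values
sudo sysctl -p /etc/sysctl.conf
sudo sysctl vm.dirty_bytes vm.dirty_background_bytes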

5. [A] Configure cloud-netconfig-azure for HA Cluster

NOTE
Check the installed version of package cloud-netconfig-azure by running zypper info cloud-netconfig-azure. If the version in your environment is 1.3 or higher, it is no longer necessary to suppress the
management of network interfaces by the cloud network plugin. If the version is lower than 1.3, we suggest updating package cloud-netconfig-azure to the latest available version.

Change the configuration file for the network interface as shown below to prevent the cloud network
plugin from removing the virtual IP address (Pacemaker must control the VIP assignment). For more
information, see SUSE KB 7023633.
# Edit the configuration file
sudo vi /etc/sysconfig/network/ifcfg-eth0

# Change CLOUD_NETCONFIG_MANAGE
# CLOUD_NETCONFIG_MANAGE="yes"
CLOUD_NETCONFIG_MANAGE="no"

6. [1] Enable ssh access

sudo ssh-keygen

# Enter file in which to save the key (/root/.ssh/id_rsa): -> Press ENTER
# Enter passphrase (empty for no passphrase): -> Press ENTER
# Enter same passphrase again: -> Press ENTER

# copy the public key


sudo cat /root/.ssh/id_rsa.pub

7. [2] Enable ssh access

sudo ssh-keygen

# Enter file in which to save the key (/root/.ssh/id_rsa): -> Press ENTER
# Enter passphrase (empty for no passphrase): -> Press ENTER
# Enter same passphrase again: -> Press ENTER

# insert the public key you copied in the last step into the authorized keys file on the second
server
sudo vi /root/.ssh/authorized_keys

# copy the public key


sudo cat /root/.ssh/id_rsa.pub

8. [1] Enable ssh access

# insert the public key you copied in the last step into the authorized keys file on the first
server
sudo vi /root/.ssh/authorized_keys

9. [A] Install Fence agents package, if using STONITH device, based on Azure Fence Agent.

sudo zypper install fence-agents

IMPORTANT
The installed version of package fence-agents must be at least 4.4.0 to benefit from the faster failover times
with Azure Fence Agent, if a cluster nodes needs to be fenced. We recommend that you update the package, if
running a lower version.
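For example, to check the installed version and update the package from the standard SLES repositories (package and repository names below are the defaults; adjust if your environment differs):

# Check the installed fence-agents version
sudo zypper info fence-agents
# Update the package if the installed version is lower than 4.4.0
sudo zypper update fence-agents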

10. [A] Install Azure Python SDK


On SLES 12 SP4 or SLES 12 SP5
# You may need to activate the Public cloud extension first
SUSEConnect -p sle-module-public-cloud/12/x86_64
sudo zypper install python-azure-mgmt-compute

On SLES 15 and higher

# You may need to activate the Public cloud extension first. In this example the SUSEConnect command is for SLES 15 SP1
SUSEConnect -p sle-module-public-cloud/15.1/x86_64
sudo zypper install python3-azure-mgmt-compute

IMPORTANT
Depending on your version and image type, you may need to activate the Public cloud extension for your OS
release before you can install the Azure Python SDK. You can check the extension by running SUSEConnect --list-extensions.
To achieve the faster failover times with Azure Fence Agent:
on SLES 12 SP4 or SLES 12 SP5 install version 4.6.2 or higher of package python-azure-mgmt-compute
on SLES 15 install version 4.6.2 or higher of package python3-azure-mgmt-compute

11. [A] Setup host name resolution


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use
the /etc/hosts file. Replace the IP address and the hostname in the following commands.

IMPORTANT
If using host names in the cluster configuration, it is vital to have reliable host name resolution. The cluster
communication will fail, if the names are not available and that can lead to cluster failover delays. The benefit of
using /etc/hosts is that your cluster becomes independent of DNS, which could be a single point of failure too.

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your
environment

# IP address of the first cluster node


10.0.0.6 prod-cl1-0
# IP address of the second cluster node
10.0.0.7 prod-cl1-1

12. [1] Install Cluster


if using SBD devices for fencing
sudo ha-cluster-init -u

# ! NTP is not configured to start at system boot.


# Do you want to continue anyway (y/n)? y
# /root/.ssh/id_rsa already exists - overwrite (y/n)? n
# Address for ring0 [10.0.0.6] Press ENTER
# Port for ring0 [5405] Press ENTER
# SBD is already configured to use /dev/disk/by-id/scsi-
36001405639245768818458b930abdf69;/dev/disk/by-id/scsi-
36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf -
overwrite (y/n)? n
# Do you wish to configure an administration IP (y/n)? n

if not using SBD devices for fencing

sudo ha-cluster-init -u

# ! NTP is not configured to start at system boot.


# Do you want to continue anyway (y/n)? y
# /root/.ssh/id_rsa already exists - overwrite (y/n)? n
# Address for ring0 [10.0.0.6] Press ENTER
# Port for ring0 [5405] Press ENTER
# Do you wish to use SBD (y/n)? n
#WARNING: Not configuring SBD - STONITH will be disabled.
# Do you wish to configure an administration IP (y/n)? n

13. [2] Add node to cluster

sudo ha-cluster-join

# ! NTP is not configured to start at system boot.


# Do you want to continue anyway (y/n)? y
# IP address or hostname of existing node (e.g.: 192.168.1.1) []10.0.0.6
# /root/.ssh/id_rsa already exists - overwrite (y/n)? n

14. [A] Change hacluster password to the same password

sudo passwd hacluster

15. [A] Adjust corosync settings.

sudo vi /etc/corosync/corosync.conf

Add the following content to the file if the values are not there or are different. Make sure to change
the token to 30000 to allow memory-preserving maintenance. For more information, see this article for
Linux or Windows.
[...]
token: 30000
token_retransmits_before_loss_const: 10
join: 60
consensus: 36000
max_messages: 20

interface {
[...]
}
transport: udpu
}
nodelist {
node {
ring0_addr:10.0.0.6
}
node {
ring0_addr:10.0.0.7
}
}
logging {
[...]
}
quorum {
# Enable and configure quorum subsystem (default: off)
# see also corosync.conf.5 and votequorum.5
provider: corosync_votequorum
expected_votes: 2
two_node: 1
}

Then restart the corosync service

sudo service corosync restart
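Optionally, verify that corosync picked up the new token value. This check assumes the corosync-cmapctl utility, which ships with corosync 2.x on SLES:

# Optional: confirm the effective totem token value
sudo corosync-cmapctl | grep totem.token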

Default Pacemaker configuration for SBD


The configuration in this section is only applicable, if using SBD STONITH.
1. [1] Enable the use of a STONITH device and set the fence delay

sudo crm configure property stonith-timeout=144


sudo crm configure property stonith-enabled=true

# List the resources to find the name of the SBD device


sudo crm resource list
sudo crm resource stop stonith-sbd
sudo crm configure delete stonith-sbd
sudo crm configure primitive stonith-sbd stonith:external/sbd \
params pcmk_delay_max="15" \
op monitor interval="15" timeout="15"

Create Azure Fence agent STONITH device


This section of the documentation is only applicable, if using STONITH, based on Azure Fence agent. The
STONITH device uses a Service Principal to authorize against Microsoft Azure. Follow these steps to create a
Service Principal.
1. Go to https://portal.azure.com
2. Open the Azure Active Directory blade
Go to Properties and write down the Directory ID. This is the tenant ID .
3. Click App registrations
4. Click New Registration
5. Enter a Name, select "Accounts in this organization directory only"
6. Select Application Type "Web", enter a sign-on URL (for example http://localhost) and click Add
The sign-on URL is not used and can be any valid URL
7. Select Certificates and Secrets, then click New client secret
8. Enter a description for a new key, select "Never expires" and click Add
9. Write down the Value. It is used as the password for the Service Principal
10. Select Overview. Write down the Application ID. It is used as the username (login ID in the steps below) of
the Service Principal
[1] Create a custom role for the fence agent
The Service Principal doesn't have permissions to access your Azure resources by default. You need to give the
Service Principal permissions to start and stop (deallocate) all virtual machines of the cluster. If you did not
already create the custom role, you can create it using PowerShell or Azure CLI.
Use the following content for the input file. You need to adapt the content to your subscriptions, that is, replace
c276fc76-9cd4-44c9-99a7-4fd71546436e and e91d47c4-76f3-4271-a796-21b4ecfe3624 with the IDs of your
subscriptions. If you only have one subscription, remove the second entry in AssignableScopes.

{
"properties": {
"roleName": "Linux Fence Agent Role",
"description": "Allows to power-off and start virtual machines",
"assignableScopes": [
"/subscriptions/c276fc76-9cd4-44c9-99a7-4fd71546436e",
"/subscriptions/e91d47c4-76f3-4271-a796-21b4ecfe3624"
],
"permissions": [
{
"actions": [
"Microsoft.Compute/*/read",
"Microsoft.Compute/virtualMachines/powerOff/action",
"Microsoft.Compute/virtualMachines/start/action"
],
"notActions": [],
"dataActions": [],
"notDataActions": []
}
]
}
}
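For example, assuming you saved the JSON definition above to a file named fence-agent-role.json (the file name is only an example), a sketch of creating the role with Azure CLI would be:

# Create the custom role from the JSON definition file
az role definition create --role-definition @fence-agent-role.json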

[A] Assign the custom role to the Service Principal


Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the Service Principal.
Don't use the Owner role anymore!
1. Go to https://portal.azure.com
2. Open the All resources blade
3. Select the virtual machine of the first cluster node
4. Click Access control (IAM)
5. Click Add role assignment
6. Select the role "Linux Fence Agent Role"
7. Enter the name of the application you created above
8. Click Save
Repeat the steps above for the second cluster node.
[1] Create the STONITH devices
After you edited the permissions for the virtual machines, you can configure the STONITH devices in the
cluster.

sudo crm configure property stonith-enabled=true


crm configure property concurrent-fencing=true
# Replace the placeholder values with your subscription ID, resource group, tenant ID, service principal ID (login), and password
sudo crm configure primitive rsc_st_azure stonith:fence_azure_arm \
  params subscriptionId="subscription ID" resourceGroup="resource group" tenantId="tenant ID" login="login ID" passwd="password" \
  pcmk_monitor_retries=4 pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900 \
  op monitor interval=3600 timeout=120

sudo crm configure property stonith-timeout=900

IMPORTANT
The monitoring and fencing operations are de-serialized. As a result, if there is a longer running monitoring operation
and simultaneous fencing event, there is no delay to the cluster failover, due to the already running monitoring
operation.

TIP
Azure Fence Agent requires outbound connectivity to public end points as documented, along with possible solutions,
in Public endpoint connectivity for VMs using standard ILB.

Pacemaker configuration for Azure scheduled events


Azure offers scheduled events. Scheduled events are provided via meta-data service and allow time for the
application to prepare for events like VM shutdown, VM redeployment, etc. Resource agent azure-events
monitors for scheduled Azure events. If events are detected, the agent will attempt to stop all resources on the
impacted VM and move them to another node in the cluster. To achieve that additional Pacemaker resources
must be configured.
1. [A] Make sure the package for the azure-events agent is already installed and up to date.

sudo zypper info resource-agents

2. [1] Configure the resources in Pacemaker.


#Place the cluster in maintenance mode
sudo crm configure property maintenance-mode=true

#Create Pacemaker resources for the Azure agent


sudo crm configure primitive rsc_azure-events ocf:heartbeat:azure-events op monitor interval=10s
sudo crm configure clone cln_azure-events rsc_azure-events

#Take the cluster out of maintenance mode


sudo crm configure property maintenance-mode=false

NOTE
After you configure the Pacemaker resources for azure-events agent, when you place the cluster in or out of
maintenance mode, you may get warning messages like:
WARNING: cib-bootstrap-options: unknown attribute 'hostName_ hostname '
WARNING: cib-bootstrap-options: unknown attribute 'azure-events_globalPullState'
WARNING: cib-bootstrap-options: unknown attribute 'hostName_ hostname '
These warning messages can be ignored.

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
High availability for NFS on Azure VMs on SUSE Linux Enterprise Server
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
Setting up Pacemaker on Red Hat Enterprise Linux in Azure
12/22/2020 • 9 minutes to read

Read the following SAP Notes and papers first:


SAP Note 1928533, which has:
The list of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
The supported SAP software, and operating system (OS) and database combinations.
The required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA system replication in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Support Policies for RHEL High Availability Clusters - sbd and fence_sbd
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure
Considerations in adopting RHEL 8 - High availability and clusters
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on
RHEL 7.6
RHEL for SAP Offerings on Azure

Cluster installation
NOTE
Red Hat doesn't support software-emulated watchdog. Red Hat doesn't support SBD on cloud platforms. For details see
Support Policies for RHEL High Availability Clusters - sbd and fence_sbd. The only supported fencing mechanism for
Pacemaker Red Hat Enterprise Linux clusters on Azure, is Azure fence agent.

The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
1. [A] Register. This step is not required if you're using RHEL SAP HA-enabled images.
Register your virtual machines and attach them to a pool that contains repositories for RHEL 7.

sudo subscription-manager register


# List the available pools
sudo subscription-manager list --available --matches '*SAP*'
sudo subscription-manager attach --pool=<pool id>

By attaching a pool to an Azure Marketplace PAYG RHEL image, you will be effectively double-billed for
your RHEL usage: once for the PAYG image, and once for the RHEL entitlement in the pool you attach. To
mitigate this, Azure now provides BYOS RHEL images. More information is available here.
2. [A] Enable RHEL for SAP repos. This step is not required, if using RHEL SAP HA-enabled images.
In order to install the required packages, enable the following repositories.

sudo subscription-manager repos --disable "*"


sudo subscription-manager repos --enable=rhel-7-server-rpms
sudo subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
sudo subscription-manager repos --enable=rhel-sap-for-rhel-7-server-rpms
sudo subscription-manager repos --enable=rhel-ha-for-rhel-7-server-eus-rpms

3. [A] Install RHEL HA Add-On

sudo yum install -y pcs pacemaker fence-agents-azure-arm nmap-ncat


IMPORTANT
We recommend the following versions of Azure Fence agent (or later) for customers to benefit from a faster
failover time, if a resource stop fails or the cluster nodes cannot communicate with each other anymore:
RHEL 7.7 or higher use the latest available version of fence-agents package
RHEL 7.6: fence-agents-4.2.1-11.el7_6.8
RHEL 7.5: fence-agents-4.0.11-86.el7_5.8
RHEL 7.4: fence-agents-4.0.11-66.el7_4.12
For more information, see Azure VM running as a RHEL High Availability cluster member take a very long time to
be fenced, or fencing fails / times-out before the VM shuts down.

Check the version of the Azure fence agent. If necessary, update it to a version equal to or later than the
versions stated above.

# Check the version of the Azure Fence Agent


sudo yum info fence-agents-azure-arm
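If the installed version is lower than the minimum stated above, a simple way to update it (assuming the standard RHEL High Availability repositories are enabled) is:

# Update the Azure fence agent package to the latest available version
sudo yum update -y fence-agents-azure-arm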

IMPORTANT
If you need to update the Azure Fence agent, and if using custom role, make sure to update the custom role to
include action powerOff . For details see Create a custom role for the fence agent.

4. [A] Setup host name resolution


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use
the /etc/hosts file. Replace the IP address and the hostname in the following commands.

IMPORTANT
If using host names in the cluster configuration, it is vital to have reliable host name resolution. The cluster
communication will fail, if the names are not available and that can lead to cluster failover delays. The benefit of
using /etc/hosts is that your cluster becomes independent of DNS, which could be a single point of failure too.

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP address of the first cluster node


10.0.0.6 prod-cl1-0
# IP address of the second cluster node
10.0.0.7 prod-cl1-1

5. [A] Change hacluster password to the same password

sudo passwd hacluster

6. [A] Add firewall rules for pacemaker


Add the following firewall rules to all cluster communication between the cluster nodes.
sudo firewall-cmd --add-service=high-availability --permanent
sudo firewall-cmd --add-service=high-availability

7. [A] Enable basic cluster services


Run the following commands to enable the Pacemaker service and start it.

sudo systemctl start pcsd.service


sudo systemctl enable pcsd.service

8. [1] Create Pacemaker cluster


Run the following commands to authenticate the nodes and create the cluster. Set the token to 30000 to
allow Memory preserving maintenance. For more information, see this article for Linux.
If building a cluster on RHEL 7.x , use the following commands:

sudo pcs cluster auth prod-cl1-0 prod-cl1-1 -u hacluster


sudo pcs cluster setup --name nw1-azr prod-cl1-0 prod-cl1-1 --token 30000
sudo pcs cluster start --all

If building a cluster on RHEL 8.X , use the following commands:

sudo pcs host auth prod-cl1-0 prod-cl1-1 -u hacluster


sudo pcs cluster setup nw1-azr prod-cl1-0 prod-cl1-1 totem token=30000
sudo pcs cluster start --all

Verify the cluster status, by executing the following command:

# Run the following command until the status of both nodes is online
sudo pcs status
# Cluster name: nw1-azr
# WARNING: no stonith devices and stonith-enabled is not false
# Stack: corosync
# Current DC: prod-cl1-1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
# Last updated: Fri Aug 17 09:18:24 2018
# Last change: Fri Aug 17 09:17:46 2018 by hacluster via crmd on prod-cl1-1
#
# 2 nodes configured
# 0 resources configured
#
# Online: [ prod-cl1-0 prod-cl1-1 ]
#
# No resources
#
# Daemon Status:
# corosync: active/disabled
# pacemaker: active/disabled
# pcsd: active/enabled

9. [A] Set Expected Votes.

# Check the quorum votes


pcs quorum status
# If the quorum votes are not set to 2, execute the next command
sudo pcs quorum expected-votes 2
TIP
If building a multi-node cluster, that is, a cluster with more than two nodes, don't set the votes to 2.

10. [1] Allow concurrent fence actions

sudo pcs property set concurrent-fencing=true

Create STONITH device


The STONITH device uses a Service Principal to authorize against Microsoft Azure. Follow these steps to create a
Service Principal.
1. Go to https://portal.azure.com
2. Open the Azure Active Directory blade
Go to Properties and make a note of the Directory ID. This is the tenant ID .
3. Click App registrations
4. Click New Registration
5. Enter a Name, select "Accounts in this organization directory only"
6. Select Application Type "Web", enter a sign-on URL (for example http://localhost) and click Add
The sign-on URL is not used and can be any valid URL
7. Select Certificates and Secrets, then click New client secret
8. Enter a description for a new key, select "Never expires" and click Add
9. Make a note of the Value. It is used as the password for the Service Principal
10. Select Overview. Make a note of the Application ID. It is used as the username (login ID in the steps below) of
the Service Principal
[1] Create a custom role for the fence agent
The Service Principal does not have permissions to access your Azure resources by default. You need to give the
Service Principal permissions to start and stop (power-off) all virtual machines of the cluster. If you did not
already create the custom role, you can create it using PowerShell or Azure CLI.
Use the following content for the input file. You need to adapt the content to your subscriptions, that is, replace
c276fc76-9cd4-44c9-99a7-4fd71546436e and e91d47c4-76f3-4271-a796-21b4ecfe3624 with the IDs of your
subscriptions. If you only have one subscription, remove the second entry in AssignableScopes.
{
"properties": {
"roleName": "Linux Fence Agent Role",
"description": "Allows to power-off and start virtual machines",
"assignableScopes": [
"/subscriptions/c276fc76-9cd4-44c9-99a7-4fd71546436e",
"/subscriptions/e91d47c4-76f3-4271-a796-21b4ecfe3624"
],
"permissions": [
{
"actions": [
"Microsoft.Compute/*/read",
"Microsoft.Compute/virtualMachines/powerOff/action",
"Microsoft.Compute/virtualMachines/start/action"
],
"notActions": [],
"dataActions": [],
"notDataActions": []
}
]
}
}
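As in the SUSE guide earlier in this document, assuming the JSON above is saved to a file such as fence-agent-role.json (example name), the custom role can be created with Azure CLI:

# Create the custom role from the JSON definition file
az role definition create --role-definition @fence-agent-role.json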

[A] Assign the custom role to the Service Principal


Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the Service Principal. Do
not use the Owner role anymore!
1. Go to https://portal.azure.com
2. Open the All resources blade
3. Select the virtual machine of the first cluster node
4. Click Access control (IAM)
5. Click Add role assignment
6. Select the role "Linux Fence Agent Role"
7. Enter the name of the application you created above
8. Click Save
Repeat the steps above for the second cluster node.
[1] Create the STONITH devices
After you edited the permissions for the virtual machines, you can configure the STONITH devices in the cluster.

sudo pcs property set stonith-timeout=900

NOTE
Option 'pcmk_host_map' is ONLY required in the command, if the RHEL host names and the Azure node names are NOT
identical. Refer to the pcmk_host_map parameter in the command below.

For RHEL 7.X, use the following command to configure the fence device:

sudo pcs stonith create rsc_st_azure fence_azure_arm login="login ID" passwd="password" \
  resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \
  pcmk_host_map="prod-cl1-0:10.0.0.6;prod-cl1-1:10.0.0.7" \
  power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \
  op monitor interval=3600

For RHEL 8.X , use the following command to configure the fence device:

sudo pcs stonith create rsc_st_azure fence_azure_arm username="login ID" password="password" \
  resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \
  pcmk_host_map="prod-cl1-0:10.0.0.6;prod-cl1-1:10.0.0.7" \
  power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \
  op monitor interval=3600

IMPORTANT
The monitoring and fencing operations are de-serialized. As a result, if there is a longer running monitoring operation
and simultaneous fencing event, there is no delay to the cluster failover, due to the already running monitoring
operation.

[1] Enable the use of a STONITH device

sudo pcs property set stonith-enabled=true

TIP
Azure Fence Agent requires outbound connectivity to public end points as documented, along with possible solutions, in
Public endpoint connectivity for VMs using standard ILB.
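As an optional verification step, you can list the configured fence device and the overall cluster status. The commands below assume the RHEL 7.x pcs syntax; on RHEL 8.x, use pcs stonith config instead of pcs stonith show.

# Optional: verify the STONITH device and cluster status
sudo pcs stonith show rsc_st_azure
sudo pcs status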

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios
12/22/2020 • 10 minutes to read

The scope of this article is to describe configurations that enable outbound connectivity to public end
point(s). The configurations are mainly in the context of High Availability with Pacemaker for SUSE / RHEL.
If you are using Pacemaker with Azure fence agent in your high availability solution, then the VMs must have
outbound connectivity to the Azure management API. The article presents several options to enable you to
select the option that is best suited for your scenario.

Overview
When implementing high availability for SAP solutions via clustering, one of the necessary components is
Azure Load Balancer. Azure offers two load balancer SKUs: standard and basic.
Standard Azure load balancer offers some advantages over the Basic load balancer. For instance, it works
across Azure Availability zones, and it has better monitoring and logging capabilities for easier troubleshooting, as well as
reduced latency. The “HA ports” feature covers all ports, that is, it is no longer necessary to list all individual
ports.
There are some important differences between the basic and the standard SKU of Azure load balancer. One of
them is the handling of outbound traffic to public end point. For full Basic versus Standard SKU load balancer
comparison, see Load Balancer SKU comparison.
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address)
Standard Azure load balancer, there is no outbound connectivity to public end points, unless additional
configuration is done.
If a VM is assigned a public IP address, or the VM is in the backend pool of a load balancer with public IP
address, it will have outbound connectivity to public end points.
SAP systems often contain sensitive business data. It is rarely acceptable for VMs hosting SAP systems to be
accessible via public IP addresses. At the same time, there are scenarios, which would require outbound
connectivity from the VM to public end points.
Examples of scenarios, requiring access to Azure public end point are:
Azure Fence Agent requires access to management.azure.com and login.microsoftonline.com
Azure Backup
Azure Site Recovery
Using public repository for patching the Operating system
The SAP application data flow may require outbound connectivity to public end point
If your SAP deployment doesn’t require outbound connectivity to public end points, you don’t need to
implement the additional configuration. It is sufficient to create internal standard SKU Azure Load Balancer for
your high availability scenario, assuming that there is also no need for inbound connectivity from public end
points.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard
Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to
allow routing to public end points.
If the VMs have either public IP addresses or are already in the backend pool of Azure Load balancer with public IP
address, the VM will already have outbound connectivity to public end points.

Read the following papers first:


Azure Standard Load Balancer
Azure Standard Load Balancer overview - comprehensive overview of Azure Standard Load
balancer, important principles, concepts, and tutorials
Outbound connections in Azure - scenarios on how to achieve outbound connectivity in Azure
Load balancer outbound rules- explains the concepts of load balancer outbound rules and how to
create outbound rules
Azure Firewall
Azure Firewall Overview- overview of Azure Firewall
Tutorial: Deploy and configure Azure Firewall - instructions on how to configure Azure Firewall via
Azure portal
Virtual Networks -User defined rules - Azure routing concepts and rules
Security Groups Service Tags - how to simplify your Network Security Groups and Firewall configuration
with service tags

Option 1: Additional external Azure Standard Load Balancer for


outbound connections to internet
One option to achieve outbound connectivity to public end points, without allowing inbound connectivity to
the VM from public end point, is to create a second load balancer with public IP address, add the VMs to the
backend pool of the second load balancer and define only outbound rules.
Use Network Security Groups to control the public end points, that are accessible for outbound calls from the
VM.
For more information, see Scenario 2 in document Outbound connections.
The configuration would look like:
Important considerations
You can use one additional Public Load Balancer for multiple VMs in the same subnet to achieve outbound
connectivity to public end point and optimize cost
Use Network Security Groups to control which public end points are accessible from the VMs. You can
assign the Network Security Group either to the subnet, or to each VM. Where possible, use Service tags
to reduce the complexity of the security rules.
Azure standard Load balancer with public IP address and outbound rules allows direct access to public end
point. If you have corporate security requirements to have all outbound traffic pass via centralized
corporate solution for auditing and logging, you may not be able to fulfill the requirement with this
scenario.

TIP
Where possible, use Service tags to reduce the complexity of the Network Security Group .

Deployment steps
1. Create Load Balancer
a. In the Azure portal , click All resources, Add, then search for Load Balancer
b. Click Create
c. Load Balancer Name MyPublicILB
d. Select Public as a Type, Standard as SKU
e. Select Create Public IP address and specify as a name MyPublicILBFrondEndIP
f. Select Zone Redundant as Availability zone
g. Click Review and Create, then click Create
2. Create Backend pool MyBackendPoolOfPublicILB and add the VMs.
a. Select the Virtual network
b. Select the VMs and their IP addresses and add them to the backend pool
3. Create outbound rules. Currently it is not possible to create outbound rules from the Azure portal. You
can create outbound rules with Azure CLI.

az network lb outbound-rule create --address-pool MyBackendPoolOfPublicILB --frontend-ip-configs MyPublicILBFrondEndIP \
  --idle-timeout 30 --lb-name MyPublicILB --name MyOutBoundRules --outbound-ports 10000 --enable-tcp-reset true \
  --protocol All --resource-group MyResourceGroup

4. Create Network Security group rules to restrict access to specific Public End Points. If there is existing
Network Security Group, you can adjust it. The example below shows how to enable access to the
Azure management API:
a. Navigate to the Network Security Group
b. Click Outbound Security Rules
c. Add a rule to Deny all outbound access to Internet.
d. Add a rule to Allow access to AzureCloud, with a priority number lower than the priority of the rule that denies
all internet access (lower priority numbers are evaluated first).
The outbound security rules would look like:

For more information on Azure Network security groups, see Security Groups .
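The portal steps above can also be scripted. The following Azure CLI sketch assumes an existing network security group named MyNSG in resource group MyResourceGroup (both names are examples):

# Deny all outbound traffic to the Internet service tag
az network nsg rule create --resource-group MyResourceGroup --nsg-name MyNSG --name DenyInternetOutBound \
  --direction Outbound --access Deny --priority 200 --protocol '*' \
  --source-address-prefixes '*' --source-port-ranges '*' \
  --destination-address-prefixes Internet --destination-port-ranges '*'

# Allow outbound access to the AzureCloud service tag with a lower priority number (evaluated first)
az network nsg rule create --resource-group MyResourceGroup --nsg-name MyNSG --name AllowAzureCloudOutBound \
  --direction Outbound --access Allow --priority 100 --protocol '*' \
  --source-address-prefixes '*' --source-port-ranges '*' \
  --destination-address-prefixes AzureCloud --destination-port-ranges '*'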

Option 2: Azure Firewall for outbound connections to internet


Another option to achieve outbound connectivity to public end points, without allowing inbound connectivity
to the VM from public end points, is with Azure Firewall. Azure Firewall is a managed service, with built-in
High Availability and it can span multiple Availability Zones.
You will also need to deploy User Defined Route, associated with subnet where VMs and the Azure load
balancer are deployed, pointing to the Azure firewall, to route traffic through the Azure Firewall.
For details on how to deploy Azure Firewall, see Deploy And Configure Azure Firewall.
The architecture would look like:
Important considerations
Azure Firewall is a cloud-native service, with built-in High Availability, and it supports zonal deployment.
It requires an additional subnet that must be named AzureFirewallSubnet.
If transferring large data sets outbound of the virtual network where the SAP VMs are located, to a VM in
another virtual network, or to a public end point, it may not be a cost-effective solution. One such example is
copying large backups across virtual networks. For details see Azure Firewall pricing.
If the corporate Firewall solution is not Azure Firewall, and you have security requirements to have all
outbound traffic pass through a centralized corporate solution, this solution may not be practical.

TIP
Where possible, use Service tags to reduce the complexity of the Azure Firewall rules.

Deployment steps
1. The deployment steps assume that you already have Virtual network and subnet defined for your VMs.
2. Create Subnet AzureFirewallSubnet in the same Virtual Network, where the VMs and the Standard
Load Balancer are deployed.
a. In Azure portal, Navigate to the Virtual Network: Click All Resources, Search for the Virtual
Network, Click on the Virtual Network, Select Subnets.
b. Click Add Subnet. Enter AzureFirewallSubnet as Name. Enter appropriate Address Range. Save.
3. Create Azure Firewall.
a. In Azure portal select All resources, click Add, Firewall, Create. Select Resource group (select the
same resource group, where the Virtual Network is).
b. Enter name for the Azure Firewall resource. For instance, MyAzureFirewall .
c. Select Region and select at least two Availability zones, aligned with the Availability zones where
your VMs are deployed.
d. Select your Virtual Network, where the SAP VMs and Azure Standard Load balancer are deployed.
e. Public IP Address: Click create and enter a name. For Instance MyFirewallPublicIP .
4. Create Azure Firewall Rule to allow outbound connectivity to specified public end points. The example
shows how to allow access to the Azure Management API public endpoint.
a. Select Rules, Network Rule Collection, then click Add network rule collection.
b. Name: MyOutboundRule , enter Priority, Select Action Allow .
c. Service: Name ToAzureAPI . Protocol: Select Any . Source Address: enter the range for your subnet,
where the VMs and Standard Load Balancer are deployed for instance: 11.97.0.0/24 . Destination
ports: enter * .
d. Save
e. As you are still positioned on the Azure Firewall, Select Overview. Note down the Private IP Address
of the Azure Firewall.
5. Create route to Azure Firewall
a. In Azure portal select All resources, then click Add, Route Table, Create.
b. Enter Name MyRouteTable, select Subscription, Resource group, and Location (matching the
location of your Virtual network and Firewall).
c. Save
The firewall rule would look like:

6. Create User Defined Route from the subnet of your VMs to the private IP of MyAzureFirewall .
a. As you are positioned on the Route Table, click Routes. Select Add.
b. Route name: ToMyAzureFirewall, Address prefix: 0.0.0.0/0 . Next hop type: Select Virtual Appliance.
Next hop address: enter the private IP address of the firewall you configured: 11.97.1.4 .
c. Save
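The route table and route in steps 5 and 6 can also be created with Azure CLI. The sketch below assumes resource group MyResourceGroup, virtual network MyVNet, and subnet MySAPSubnet (all example names), and the firewall private IP address from the example above:

# Create the route table and the default route to the Azure Firewall private IP
az network route-table create --resource-group MyResourceGroup --name MyRouteTable
az network route-table route create --resource-group MyResourceGroup --route-table-name MyRouteTable \
  --name ToMyAzureFirewall --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address 11.97.1.4
# Associate the route table with the subnet where the SAP VMs and load balancer are deployed
az network vnet subnet update --resource-group MyResourceGroup --vnet-name MyVNet --name MySAPSubnet \
  --route-table MyRouteTable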

Option 3: Using Proxy for Pacemaker calls to Azure Management


API
You could use proxy to allow Pacemaker calls to the Azure management API public end point.
Important considerations
If there is already corporate proxy in place, you could route outbound calls to public end points through it.
Outbound calls to public end points will go through the corporate control point.
Make sure the proxy configuration allows outbound connectivity to Azure management API:
https://management.azure.com and https://login.microsoftonline.com
Make sure there is a route from the VMs to the Proxy
The proxy will handle only HTTP/HTTPS calls. If there is an additional need to make outbound calls to a public end
point over different protocols (like RFC), an alternative solution will be needed
The Proxy solution must be highly available, to avoid instability in the Pacemaker cluster
Depending on the location of the proxy, it may introduce additional latency in the calls from the Azure
Fence Agent to the Azure Management API. If your corporate proxy is still on premises, while your
Pacemaker cluster is in Azure, measure the latency and consider whether this solution is suitable for you
If there isn’t already highly available corporate proxy in place, we do not recommend this option as the
customer would be incurring extra cost and complexity. Nevertheless, if you decide to deploy additional
proxy solution, for the purpose of allowing outbound connectivity from Pacemaker to Azure Management
public API, make sure the proxy is highly available, and the latency from the VMs to the proxy is low.
Pacemaker configuration with Proxy
There are many different Proxy options available in the industry. Step-by-step instructions for the proxy
deployment are outside of the scope of this document. In the example below, we assume that your proxy is
responding to MyProxyService and listening to port MyProxyPort.
To allow pacemaker to communicate with the Azure management API, perform the following steps on all
cluster nodes:
1. Edit the pacemaker configuration file /etc/sysconfig/pacemaker and add the following lines (all cluster
nodes):

sudo vi /etc/sysconfig/pacemaker
# Add the following lines
http_proxy=http://MyProxyService:MyProxyPort
https_proxy=http://MyProxyService:MyProxyPort

2. Restart the pacemaker service on all cluster nodes.


SUSE

# Place the cluster in maintenance mode


sudo crm configure property maintenance-mode=true
#Restart on all nodes
sudo systemctl restart pacemaker
# Take the cluster out of maintenance mode
sudo crm configure property maintenance-mode=false

Red Hat

# Place the cluster in maintenance mode


sudo pcs property set maintenance-mode=true
#Restart on all nodes
sudo systemctl restart pacemaker
# Take the cluster out of maintenance mode
sudo pcs property set maintenance-mode=false

Other options
If outbound traffic is routed via a third-party, URL-based firewall proxy:
if using Azure fence agent make sure the firewall configuration allows outbound connectivity to the Azure
management API: https://management.azure.com and https://login.microsoftonline.com
if using SUSE's Azure public cloud update infrastructure for applying updates and patches, see Azure
Public Cloud Update Infrastructure 101

Next steps
Learn how to configure Pacemaker on SUSE in Azure
Learn how to configure Pacemaker on Red Hat in Azure
Install SAP NetWeaver HA on a Windows failover cluster and shared disk for an SAP ASCS/SCS instance in Azure
12/22/2020 • 9 minutes to read

This article describes how to install and configure a high-availability SAP system in Azure by using a Windows
Server failover cluster and cluster shared disk for clustering an SAP ASCS/SCS instance. As described in
Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared
disk, there are two alternatives for cluster shared disk:
Azure shared disks
Using SIOS DataKeeper Cluster Edition to create mirrored storage, that will simulate clustered shared disk

Prerequisites
Before you begin the installation, review these documents:
Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster
shared disk
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for an
SAP ASCS/SCS instance
We don't describe the DBMS setup in this article because setups vary depending on the DBMS system you use.
We assume that high-availability concerns with the DBMS are addressed with the functionalities that different
DBMS vendors support for Azure. Examples are AlwaysOn or database mirroring for SQL Server and Oracle Data
Guard for Oracle databases. The high availability scenarios for the DBMS are not covered in this article.
There are no special considerations when different DBMS services interact with a clustered SAP ASCS or SCS
configuration in Azure.

NOTE
The installation procedures of SAP NetWeaver ABAP systems, Java systems, and ABAP+Java systems are almost identical.
The most significant difference is that an SAP ABAP system has one ASCS instance. The SAP Java system has one SCS
instance. The SAP ABAP+Java system has one ASCS instance and one SCS instance running in the same Microsoft failover
cluster group. Any installation differences for each SAP NetWeaver installation stack are explicitly mentioned. You can
assume that the rest of the steps are the same.

Install SAP with a high-availability ASCS/SCS instance


IMPORTANT
If you use SIOS to present shared disk, don't place your page file on the SIOS DataKeeper mirrored volumes. You can leave
your page file on the temporary drive D of an Azure virtual machine, which is the default. If it's not already there, move the
Windows page file to drive D of your Azure virtual machine.

Installing SAP with a high-availability ASCS/SCS instance involves these tasks:


Create a virtual host name for the clustered SAP ASCS/SCS instance.
Install SAP on the first cluster node.
Modify the SAP profile of the ASCS/SCS instance.
Add a probe port.
Open the Windows firewall probe port.
Create a virtual host name for the clustered SAP ASCS/SCS instance
1. In the Windows DNS manager, create a DNS entry for the virtual host name of the ASCS/SCS instance.

IMPORTANT
The IP address that you assign to the virtual host name of the ASCS/SCS instance must be the same as the IP
address that you assigned to Azure Load Balancer.

Define the DNS entry for the SAP ASCS/SCS cluster virtual name and TCP/IP address
2. If you are using the new SAP Enqueue Replication Server 2, which is also a clustered instance, then you need to
reserve a virtual host name for ERS2 in DNS as well.

IMPORTANT
The IP address that you assign to the virtual host name of the ERS2 instance must be the second IP address
that you assigned to Azure Load Balancer.

Define the DNS entry for the SAP ERS2 cluster virtual name and TCP/IP address
3. To define the IP address that's assigned to the virtual host name, select DNS Manager > Domain .
New virtual name and TCP/IP address for SAP ASCS/SCS cluster configuration
Install the SAP first cluster node
1. Execute the first cluster node option on cluster node A. Select:
ABAP system : ASCS instance number 00
Java system : SCS instance number 01
ABAP+Java system : ASCS instance number 00 and SCS instance number 01

IMPORTANT
Keep in mind that the configuration in the Azure internal load balancer load balancing rules (if using Basic SKU) and
the selected SAP instance numbers must match.

2. Follow the SAP described installation procedure. Make sure that in the start installation option “First Cluster
Node”, you choose “Cluster Shared Disk” as the configuration option.

TIP
The SAP installation documentation describes how to install the first ASCS/SCS cluster node.

Modify the SAP profile of the ASCS/SCS instance


If you have Enqueue Replication Server 1, add SAP profile parameter enque/encni/set_so_keepalive as described
below. The profile parameter prevents connections between SAP work processes and the enqueue server from
closing when they are idle for too long. The SAP parameter is not required for ERS2.
1. Add this profile parameter to the SAP ASCS/SCS instance profile, if using ERS1.

enque/encni/set_so_keepalive = true

For both ERS1 and ERS2, make sure that the keepalive OS parameters are set as described in SAP note
1410736.
2. To apply the SAP profile parameter changes, restart the SAP ASCS/SCS instance.
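For example, the instance can be restarted with sapcontrol, executed as the <sid>adm user on the node where the ASCS/SCS instance is currently running (instance number 00 is an assumption and must match your system):

sapcontrol -nr 00 -function RestartInstance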
Add a probe port
Use the internal load balancer's probe functionality to make the entire cluster configuration work with Azure Load
Balancer. The Azure internal load balancer usually distributes the incoming workload equally between
participating virtual machines.
However, this won't work in some cluster configurations because only one instance is active. The other instance is
passive and can't accept any of the workload. A probe functionality helps the Azure internal load balancer
detect which instance is active, and target only the active instance.
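For reference, a matching TCP health probe on the Azure internal load balancer could be created with the Az PowerShell module as sketched below. The load balancer and probe names are placeholders, and the probe port must be the same value that you later set on the cluster IP resource:

$lb = Get-AzLoadBalancer -Name "pr1-lb-ascs" -ResourceGroupName "PR1-RG"
Add-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "pr1-ascs-hp" -Protocol Tcp -Port 62000 -IntervalInSeconds 5 -ProbeCount 2
Set-AzLoadBalancer -LoadBalancer $lb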

IMPORTANT
In this example configuration, the ProbePort is set to 620<nr>. For an SAP ASCS instance with instance number 00, it is 62000. You will
need to adjust the configuration to match your SAP instance numbers and your SAP SID.

To add a probe port, run this PowerShell function on one of the cluster VMs:
In the case of an SAP ASCS/SCS instance:

Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID SID -ProbePort 62000

If you are using ERS2, which is clustered, also run the command below. There is no need to configure a probe port for ERS1, as it is not clustered.

Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID SID -ProbePort 62001 -IsSAPERSClusteredInstance $True

The code for function Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource would look like:

function Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource {

<#
.SYNOPSIS
Set-AzureLoadBalancerHealthProbePortOnSAPClusterIPResource will set a new Azure Load Balancer Health Probe
Port on 'SAP $SAPSID IP' cluster resource.

.DESCRIPTION
Set-AzureLoadBalancerHealthProbePortOnSAPClusterIPResource will set a new Azure Load Balancer Health Probe
Port on 'SAP $SAPSID IP' cluster resource.
It will also restart SAP Cluster group (default behavior), to activate the changes.

You need to run it on one of the SAP ASCS/SCS Windows cluster nodes.

Expectation is that SAP group is installed with official SWPM installation tool, which will set default
expected naming convention for:
- SAP Cluster Group: 'SAP $SAPSID'
- SAP Cluster IP Address Resource: 'SAP $SAPSID IP'

.PARAMETER SAPSID
SAP SID - 3 characters starting with a letter.

.PARAMETER ProbePort
Azure Load Balancer Health Check Probe Port.

.PARAMETER RestartSAPClusterGroup
Optional parameter. Default value is '$True', so SAP cluster group will be restarted to activate the
changes.

.PARAMETER IsSAPERSClusteredInstance
Optional parameter.Default value is '$False'.
If set to $True, then handle the clustered new SAP ERS2 instance.

.EXAMPLE
# Set probe port to 62000, on SAP cluster resource 'SAP AB1 IP', and restart the SAP cluster group 'SAP
AB1', to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62000

.EXAMPLE
# Set probe port to 62000, on SAP cluster resource 'SAP AB1 IP'. SAP cluster group 'SAP AB1' IS NOT
restarted, therefore changes are NOT active.
# To activate the changes you need to manually restart 'SAP AB1' cluster group.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62000 -
RestartSAPClusterGroup $False

.EXAMPLE
# Set probe port to 62001, on SAP cluster resource 'SAP AB1 ERS IP'. SAP cluster group 'SAP AB1 ERS' IS restarted, to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62001 -IsSAPERSClusteredInstance $True

#>

[CmdletBinding()]
param(

[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[ValidateLength(3,3)]
[string]$SAPSID,

[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[int] $ProbePort,

[Parameter(Mandatory=$False)]
[bool] $RestartSAPClusterGroup = $True,

[Parameter(Mandatory=$False)]
[bool] $IsSAPERSClusteredInstance = $False
)

BEGIN{}

PROCESS{
try{

if($IsSAPERSClusteredInstance){
#Handle clustered SAP ERS Instance
$SAPClusterRoleName = "SAP $SAPSID ERS"
$SAPIPresourceName = "SAP $SAPSID ERS IP"
}else{
#Handle clustered SAP ASCS/SCS Instance
$SAPClusterRoleName = "SAP $SAPSID"
$SAPIPresourceName = "SAP $SAPSID IP"
}

$SAPIPResourceClusterParameters = Get-ClusterResource $SAPIPresourceName | Get-ClusterParameter


$IPAddress = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Address" }).Value
$NetworkName = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Network" }).Value
$SubnetMask = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "SubnetMask" }).Value
$OverrideAddressMatch = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "OverrideAddressMatch" }).Value
$EnableDhcp = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "EnableDhcp" }).Value
$OldProbePort = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "ProbePort" }).Value

$var = Get-ClusterResource | Where-Object { $_.name -eq $SAPIPresourceName }


Write-Output "Current configuration parameters for SAP IP cluster resource '$SAPIPresourceName'
are:"

Get-ClusterResource -Name $SAPIPresourceName | Get-ClusterParameter

Write-Output " "


Write-Output "Current probe port property of the SAP cluster resource '$SAPIPresourceName' is
'$OldProbePort'."
Write-Output " "
Write-Output "Setting the new probe port property of the SAP cluster resource
'$SAPIPresourceName' to '$ProbePort' ..."
Write-Output " "
$var | Set-ClusterParameter -Multiple
@{"Address"=$IPAddress;"ProbePort"=$ProbePort;"Subnetmask"=$SubnetMask;"Network"=$NetworkName;"OverrideAddres
sMatch"=$OverrideAddressMatch;"EnableDhcp"=$EnableDhcp}

Write-Output " "

if($RestartSAPClusterGroup){
Write-Output ""
Write-Output "Activating changes..."

Write-Output " "


Write-Output "Taking SAP cluster IP resource '$SAPIPresourceName' offline ..."
Stop-ClusterResource -Name $SAPIPresourceName
sleep 5

Write-Output "Starting SAP cluster role '$SAPClusterRoleName' ..."


Start-ClusterGroup -Name $SAPClusterRoleName

Write-Output "New ProbePort parameter is active."


Write-Output " "

Write-Output "New configuration parameters for SAP IP cluster resource


'$SAPIPresourceName':"
Write-Output " "
Get-ClusterResource -Name $SAPIPresourceName | Get-ClusterParameter
}else
{
Write-Output "SAP cluster role '$SAPClusterRoleName' is not restarted, therefore changes are
not activated."
}
}
catch{
Write-Error $_.Exception.Message
}
}
END {}
}

Open the Windows firewall probe port


Open the Windows firewall probe port on both cluster nodes. Use the following script to open a Windows firewall
probe port. Update the PowerShell variables for your environment.
If you are using ERS2, you will also need to open the firewall port for the ERS2 probe port, as shown after the following script.

$ProbePort = 62000 # ProbePort of the Azure internal load balancer


New-NetFirewallRule -Name AzureProbePort -DisplayName "Rule for Azure Probe Port" -Direction Inbound -Action Allow -Protocol TCP -LocalPort $ProbePort
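If you use ERS2, open the ERS2 probe port as well. The following sketch assumes probe port 62001, matching the earlier ERS2 example; adjust the value to your configuration:

$ProbePortERS = 62001 # ProbePort of the Azure internal load balancer for the clustered ERS2 instance

New-NetFirewallRule -Name AzureProbePortERS -DisplayName "Rule for Azure Probe Port ERS2" -Direction Inbound -Action Allow -Protocol TCP -LocalPort $ProbePortERS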

Install the database instance


To install the database instance, follow the process that's described in the SAP installation documentation.

Install the second cluster node


To install the second cluster node, follow the steps that are described in the SAP installation guide.

Install the SAP Primary Application Server


Install the Primary Application Server (PAS) instance <SID>-di-0 on the virtual machine that you've designated to
host the PAS. There are no dependencies on Azure. If using SIOS, there are no DataKeeper-specific settings.
Install the SAP Additional Application Server
Install an SAP Additional Application Server (AAS) on all the virtual machines that you've designated to host an
SAP Application Server instance.

Test the SAP ASCS/SCS instance failover


For the outlined failover tests, we assume that SAP ASCS is active on node A.
1. Verify that the SAP system can successfully fail over from node A to node B. Choose one of these options to
initiate a failover of the SAP <SID> cluster group from cluster node A to cluster node B:
Failover Cluster Manager
Failover Cluster PowerShell

$SAPSID = "PR1" # SAP <SID>

$SAPClusterGroup = "SAP $SAPSID"


Move-ClusterGroup -Name $SAPClusterGroup

2. Restart cluster node A within the Windows guest operating system. This initiates an automatic failover of
the SAP <SID> cluster group from node A to node B.
3. Restart cluster node A from the Azure portal. This initiates an automatic failover of the SAP <SID> cluster
group from node A to node B.
4. Restart cluster node A by using Azure PowerShell (see the sketch after this list). This initiates an automatic failover of the SAP <SID>
cluster group from node A to node B.
5. Verification
After failover, verify that the SAP <SID> cluster group is running on cluster node B.

In Failover Cluster Manager, the SAP <SID> cluster group is running on cluster node B
After failover, verify that the shared disk is now mounted on cluster node B.
After failover, if using SIOS, verify that SIOS DataKeeper is replicating data from source volume drive
S on cluster node B to target volume drive S on cluster node A.

SIOS DataKeeper replicates the local volume from cluster node B to cluster node A
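For step 4 in the failover test above, the node can be restarted with the Az PowerShell module. This is a minimal sketch; the resource group and virtual machine names are placeholders for your environment:

# Restart cluster node A to trigger an automatic failover of the SAP <SID> cluster group
Restart-AzVM -ResourceGroupName "PR1-RG" -Name "pr1-ascs-0"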
Install SAP NetWeaver high availability on a
Windows failover cluster and file share for SAP
ASCS/SCS instances on Azure

This article describes how to install and configure a high-availability SAP system on Azure, with Windows Server
Failover Cluster (WSFC) and Scale-Out File Server as an option for clustering SAP ASCS/SCS instances.

Prerequisites
Before you start the installation, review the following articles:
Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using file share
Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for
SAP ASCS/SCS instances
High availability for SAP NetWeaver on Azure VMs
You need the following executables and DLLs from SAP:
SAP Software Provisioning Manager (SWPM) installation tool version SPS25 or later.
SAP Kernel 7.49 or later

IMPORTANT
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel
7.49 (and later).

We do not describe the Database Management System (DBMS) setup because setups vary depending on the
DBMS you use. However, we assume that high-availability concerns with the DBMS are addressed with the
functionalities that various DBMS vendors support for Azure. Such functionalities include AlwaysOn or database
mirroring for SQL Server, and Oracle Data Guard for Oracle databases. In the scenario we use in this article, we
didn't add more protection to the DBMS.
There are no special considerations when various DBMS services interact with this kind of clustered SAP
ASCS/SCS configuration in Azure.

NOTE
The installation procedures of SAP NetWeaver ABAP systems, Java systems, and ABAP+Java systems are almost identical.
The most significant difference is that an SAP ABAP system has one ASCS instance. The SAP Java system has one SCS
instance. The SAP ABAP+Java system has one ASCS instance and one SCS instance running in the same Microsoft failover
cluster group. Any installation differences for each SAP NetWeaver installation stack are explicitly mentioned. You can
assume that all other parts are the same.

Prepare an SAP global host on the SOFS cluster


Create the following volume and file share on the SOFS cluster:
SAP GLOBALHOST file structure C:\ClusterStorage\Volume1\usr\sap\<SID>\SYS\ on a SOFS cluster shared
volume (CSV)
SAPMNT file share
Set security on the SAPMNT file share and folder with full control for:
The <DOMAIN>\SAP_<SID>_GlobalAdmin user group
The SAP ASCS/SCS cluster node computer objects <DOMAIN>\ClusterNode1$ and
<DOMAIN>\ClusterNode2$
To create a CSV volume with mirror resiliency, execute the following PowerShell cmdlet on one of the SOFS
cluster nodes:

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName SAPPR1 -FileSystem CSVFS_ReFS -Size 5GB -ResiliencySettingName Mirror

To create SAPMNT and set folder and share security, execute the following PowerShell script on one of the SOFS
cluster nodes:
# Create SAPMNT on file share
$SAPSID = "PR1"
$DomainName = "SAPCLUSTER"
$SAPSIDGlobalAdminGroupName = "$DomainName\SAP_" + $SAPSID + "_GlobalAdmin"

# SAP ASCS/SCS cluster nodes


$ASCSClusterNode1 = "ascs-1"
$ASCSClusterNode2 = "ascs-2"

# Define SAP ASCS/SCS cluster node computer objects


$ASCSClusterObjectNode1 = "$DomainName\$ASCSClusterNode1$"
$ASCSClusterObjectNode2 = "$DomainName\$ASCSClusterNode2$"

# Create usr\sap\.. folders on CSV


$SAPGlobalFolder = "C:\ClusterStorage\SAP$SAPSID\usr\sap\$SAPSID\SYS"
New-Item -Path $SAPGlobalFolder -ItemType Directory

$UsrSAPFolder = "C:\ClusterStorage\SAP$SAPSID\usr\sap\"

# Create a SAPMNT file share and set share security


New-SmbShare -Name sapmnt -Path $UsrSAPFolder -FullAccess "BUILTIN\Administrators", $ASCSClusterObjectNode1,
$ASCSClusterObjectNode2 -ContinuouslyAvailable $true -CachingMode None -Verbose

# Get SAPMNT file share security settings


Get-SmbShareAccess sapmnt

# Set file and folder security


$Acl = Get-Acl $UsrSAPFolder

# Add a security object of the clusternode1$ computer object


$Ar = New-Object system.security.accesscontrol.filesystemaccessrule($ASCSClusterObjectNode1,"FullControl",'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Add a security object of the clusternode2$ computer object


$Ar = New-Object system.security.accesscontrol.filesystemaccessrule($ASCSClusterObjectNode2,"FullControl",'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose

Create a virtual host name for the clustered SAP ASCS/SCS instance
Create an SAP ASCS/SCS cluster network name (for example, pr1-ascs [10.0.6.7] ), as described in Create a
virtual host name for the clustered SAP ASCS/SCS instance.

Install an ASCS/SCS and ERS instances in the cluster


Install an ASCS/SCS instance on the first ASCS/SCS cluster node
Install an SAP ASCS/SCS instance on the first cluster node. To install the instance, in the SAP SWPM installation
tool, go to:
<Product> > <DBMS> > Installation > Application Server ABAP (or Java) > High-Availability System
> ASCS/SCS instance > First cluster node.
Add a probe port
Configure an SAP cluster resource, the SAP-SID-IP probe port, by using PowerShell. Execute this configuration on
one of the SAP ASCS/SCS cluster nodes, as described in this article.
Install an ASCS/SCS instance on the second ASCS/SCS cluster node
Install an SAP ASCS/SCS instance on the second cluster node. To install the instance, in the SAP SWPM installation
tool, go to:
<Product> > <DBMS> > Installation > Application Server ABAP (or Java) > High-Availability System
> ASCS/SCS instance > Additional cluster node.

Update the SAP ASCS/SCS instance profile


Update parameters in the SAP ASCS/SCS instance profile <SID>_ASCS/SCS<Nr>_<Host>.

PARAMETER NAME                         PARAMETER VALUE

gw/netstat_once                        0

enque/encni/set_so_keepalive           true

service/ha_check_node                  1

Parameter enque/encni/set_so_keepalive is only needed if using ENSA1.
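For illustration, the parameters above would appear in the instance profile like this (omit the enque/encni/set_so_keepalive line if you are running ENSA2):

gw/netstat_once = 0
enque/encni/set_so_keepalive = true
service/ha_check_node = 1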


Restart the SAP ASCS/SCS instance. To set the KeepAlive parameters on both SAP ASCS/SCS cluster nodes, follow the
instructions in Set registry entries on the cluster nodes of the SAP ASCS/SCS instance.

Install a DBMS instance and SAP application servers


Finalize your SAP system installation by installing:
A DBMS instance.
A primary SAP application server.
An additional SAP application server.

Next steps
Install an ASCS/SCS instance on a failover cluster with no shared disks - Official SAP guidelines for high-
availability file share
Storage Spaces Direct in Windows Server 2016
Scale-Out File Server for application data overview
What's new in storage in Windows Server 2016
High availability for SAP NetWeaver on Azure VMs
on Windows with Azure NetApp Files (SMB) for SAP
applications

This article describes how to deploy and configure the virtual machines, install the cluster framework, and install a
highly available SAP NetWeaver 7.50 system on Windows VMs, using SMB on Azure NetApp Files.
The database layer isn't covered in detail in this article. We assume that the Azure virtual network has already
been created.
Read the following SAP Notes and papers first:
Azure NetApp Files documentation
SAP Note 1928533, which contains:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Note 2287140 lists prerequisites for SAP-supported CA feature of SMB 3.x protocol.
SAP Note 2802770 has troubleshooting information for the slow running SAP transaction AL11 on Windows
2012 and 2016.
SAP Note 1911507 has information about transparent failover feature for a file share on Windows Server with
the SMB 3.0 protocol.
SAP Note 662452 has a recommendation (deactivating 8.3 name generation) to address poor file system
performance/errors during data accesses.
Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS
instances on Azure
Azure Virtual Machines high-availability architecture and scenarios for SAP NetWeaver
Add probe port in ASCS cluster configuration
Installation of an (A)SCS Instance on a Failover Cluster
Create an SMB volume for Azure NetApp Files
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
SAP developed a new approach, and an alternative to cluster shared disks, for clustering an SAP ASCS/SCS
instance on a Windows failover cluster. Instead of using cluster shared disks, one can use an SMB file share to
deploy the SAP global host files. Azure NetApp Files supports SMBv3 (along with NFS) with NTFS ACLs using Active
Directory. Azure NetApp Files is automatically highly available (as it is a PaaS service). These features make Azure
NetApp Files a great option for hosting the SMB file share for the SAP global host files.
Both Azure Active Directory (AD) Domain Services and Active Directory Domain Services (AD DS) are supported.
You can use existing Active Directory domain controllers with Azure NetApp Files. Domain controllers can be in
Azure as virtual machines, or on-premises via ExpressRoute or S2S VPN. In this article, we will use a domain
controller in an Azure VM.
High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving that on
Windows required building either a SOFS cluster or using cluster shared disk software like SIOS. Now it is possible to
achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files. Using Azure NetApp Files
for the shared storage eliminates the need for either SOFS or SIOS.

NOTE
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel
7.49 (and later).

The prerequisites for an SMB file share are:


SMB 3.0 (or later) protocol.
Ability to set Active Directory access control lists (ACLs) for Active Directory user groups and the computer$
computer object.
The file share must be HA-enabled.
The share for the SAP Central services in this reference architecture is offered by Azure NetApp Files:
Create and mount SMB volume for Azure NetApp Files
Perform the following steps, as preparation for using Azure NetApp Files.
1. Follow the steps to Register for Azure NetApp Files
2. Create Azure NetApp account, following the steps described in Create a NetApp account
3. Set up capacity pool, following the instructions in Set up a capacity pool
4. Azure NetApp Files resources must reside in delegated subnet. Follow the instructions in Delegate a subnet
to Azure NetApp Files to create delegated subnet.

IMPORTANT
You need to create Active Directory connections before creating an SMB volume. Review the requirements for Active
Directory connections.

5. Create Active Directory connection, as described in Create an Active Directory connection


6. Create an Azure NetApp Files SMB volume, following the instructions in Add an SMB volume
7. Mount the SMB volume on your Windows Virtual Machine.

TIP
You can find the instructions on how to mount the Azure NetApp Files volume, if you navigate in Azure Portal to the Azure
NetApp Files object, click on the Volumes blade, then Mount Instructions .
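To verify access from a cluster node, you can map the share once with net use. This is only a sketch; replace the host name, domain, and volume name with the values shown in the mount instructions of your volume:

net use Z: \\anfsmb-9562.contoso.com\<volume name>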
Prepare the infrastructure for SAP HA by using a Windows failover
cluster
1. Set the ASCS/SCS load balancing rules for the Azure internal load balancer.
2. Add Windows virtual machines to the domain.
3. Add registry entries on both cluster nodes of the SAP ASCS/SCS instance
4. Set up a Windows Server failover cluster for an SAP ASCS/SCS instance
5. If you are using Windows Server 2016, we recommend that you configure Azure Cloud Witness.

Install SAP ASCS instance on both nodes


You need the following software from SAP:
SAP Software Provisioning Manager (SWPM) installation tool version SPS25 or later.
SAP Kernel 7.49 or later
Create a virtual host name (cluster network name) for the clustered SAP ASCS/SCS instance, as described in
Create a virtual host name for the clustered SAP ASCS/SCS instance.

NOTE
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel
7.49 (and later).

Install an ASCS/SCS instance on the first ASCS/SCS cluster node


1. Install an SAP ASCS/SCS instance on the first cluster node. Start the SAP SWPM installation tool, then
navigate to: Product > DBMS > Installation > Application Server ABAP (or Java) > High-Availability
System > ASCS/SCS instance > First cluster node.
2. Select File Share Cluster as the Cluster share Configuration in SWPM.
3. When prompted at step SAP System Cluster Parameters , enter the host name for the Azure NetApp
Files SMB share you already created as File Share Host Name . In this example, the SMB share host name
is anfsmb-9562 .

IMPORTANT
If Pre-requisite checker Results in SWPM shows Continuous availability feature condition not met, it can be
addressed by following the instructions in Delayed error message when you try to access a shared folder that no
longer exists in Windows.

TIP
If Pre-requisite checker Results in SWPM shows Swap Size condition not met, you can adjust the SWAP size by
navigating to My Computer>System Properties>Performance Settings> Advanced> Virtual memory> Change.

4. Configure an SAP cluster resource, the SAP-SID-IP probe port, by using PowerShell. Execute this
configuration on one of the SAP ASCS/SCS cluster nodes, as described in Configure probe port.
Install an ASCS/SCS instance on the second ASCS/SCS cluster node
1. Install an SAP ASCS/SCS instance on the second cluster node. Start the SAP SWPM installation tool, then
navigate to Product > DBMS > Installation > Application Server ABAP (or Java) > High-Availability System >
ASCS/SCS instance > Additional cluster node.
Install a DBMS instance and SAP application servers
Complete your SAP installation, by installing:
A DBMS instance
A primary SAP application server
An additional SAP application server

Test the SAP ASCS/SCS instance failover


Fail over from cluster node A to cluster node B and back
In this test scenario we will refer to cluster node sapascs1 as node A, and to cluster node sapascs2 as node B.
1. Verify that the cluster resources are running on node A.

2. Restart cluster node A. The SAP cluster resources will move to cluster node B.

Lock entry test


1. Verify that the SAP Enqueue Replication Server (ERS) is active.
2. Log on to the SAP system, execute transaction SU01, and open a user ID in change mode. That will generate an SAP
lock entry.
3. While you are logged on to the SAP system, display the lock entry by navigating to transaction SM12.
4. Fail over the ASCS resources from cluster node A to cluster node B.
5. Verify that the lock entry generated before the SAP ASCS/SCS cluster resources failover is retained.
For more information, see Troubleshooting for Enqueue Failover in ASCS with ERS

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure (large instances), see SAP HANA (large instances) high availability and disaster recovery on
Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
High availability for SAP NetWeaver on Azure VMs
on SUSE Linux Enterprise Server for SAP
applications

This article describes how to deploy and configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system. In the example configurations and
installation commands, ASCS instance number 00, ERS instance number 02, and SAP system ID NW1 are
used. The names of the resources (for example, virtual machines, virtual networks) in the example assume that
you have used the converged template with SAP system ID NW1 to create the resources.
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP
Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides. The guides contain all required information to set up NetWeaver HA
and SAP HANA System Replication on-premises. Use these guides as a general baseline; they provide
much more detailed information.
SUSE High Availability Extension 12 SP3 Release Notes

Overview
To achieve high availability, SAP NetWeaver requires an NFS server. The NFS server is configured in a separate
cluster and can be used by multiple SAP systems.

The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database
use virtual hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address.
We recommend using Standard load balancer. The following list shows the configuration of the (A)SCS and
ERS load balancer.
(A)SCS
Frontend configuration
IP address 10.0.0.7
Probe Port
Port 620<nr>
Load balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.0.0.8
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the
(A)SCS/ERS cluster
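As an alternative to the portal steps in the following sections, the (A)SCS health probe and HA-ports load-balancing rule of a Standard internal load balancer can be scripted with the Az PowerShell module. This is a sketch only; the load balancer, frontend, backend pool, and probe names are placeholders that must already exist in your deployment, and the ERS rule is configured analogously with its own frontend and probe port:

$lb = Get-AzLoadBalancer -Name "nw1-lb" -ResourceGroupName "NW1-RG"

Add-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "nw1-ascs-hp" -Protocol Tcp -Port 62000 -IntervalInSeconds 5 -ProbeCount 2

$feIp = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "nw1-ascs-frontend"
$bePool = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name "nw1-backend"
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "nw1-ascs-hp"

# HA ports rule: Protocol All, frontend and backend port 0, floating IP enabled, 30-minute idle timeout
Add-AzLoadBalancerRuleConfig -LoadBalancer $lb -Name "nw1-lb-ascs" -Protocol All -FrontendPort 0 -BackendPort 0 -FrontendIpConfiguration $feIp -BackendAddressPool $bePool -Probe $probe -IdleTimeoutInMinutes 30 -EnableFloatingIP

Set-AzLoadBalancer -LoadBalancer $lb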

Setting up a highly available NFS server


SAP NetWeaver requires shared storage for the transport and profile directory. Read High availability for NFS
on Azure VMs on SUSE Linux Enterprise Server on how to set up an NFS server for SAP NetWeaver.

Setting up (A)SCS
You can either use an Azure Template from GitHub to deploy all required Azure resources, including the virtual
machines, availability set and load balancer or you can deploy the resources manually.
Deploy Linux via Azure Template
The Azure Marketplace contains an image for SUSE Linux Enterprise Server for SAP Applications 12 that you
can use to deploy new virtual machines. The marketplace image contains the resource agent for SAP
NetWeaver.
You can use one of the quickstart templates on GitHub to deploy all required resources. The template deploys
the virtual machines, the load balancer, availability set etc. Follow these steps to deploy the template:
1. Open the ASCS/SCS Multi SID template or the converged template on the Azure portal. The ASCS/SCS
template only creates the load-balancing rules for the SAP NetWeaver ASCS/SCS and ERS (Linux only)
instances whereas the converged template also creates the load-balancing rules for a database (for
example Microsoft SQL Server or SAP HANA). If you plan to install an SAP NetWeaver based system and
you also want to install the database on the same machines, use the converged template.
2. Enter the following parameters
a. Resource Prefix (ASCS/SCS Multi SID template only)
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. Sap System ID (converged template only)
Enter the SAP system ID of the SAP system you want to install. The ID is used as a prefix for the
resources that are deployed.
c. Stack Type
Select the SAP NetWeaver stack type
d. Os Type
Select one of the Linux distributions. For this example, select SLES 12 BYOS
e. Db Type
Select HANA
f. Sap System Size.
The amount of SAPS the new system provides. If you are not sure how many SAPS the system
requires, ask your SAP Technology Partner or System Integrator
g. System Availability
Select HA
h. Admin Username and Admin Password
A new user is created that can be used to log on to the machine.
i. Subnet ID
If you want to deploy the VM into an existing VNet where you have a subnet defined the VM should
be assigned to, name the ID of that specific subnet. The ID usually looks like
/subscriptions/<subscription ID> /resourceGroups/<resource group
name> /providers/Microsoft.Network/virtualNetworks/<vir tual network
name> /subnets/<subnet name>
Deploy Linux manually via Azure portal
You first need to create the virtual machines for this NFS cluster. Afterwards, you create a load balancer and
use the virtual machines in the backend pool.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set
Set max update domain
4. Create Virtual Machine 1
Use at least SLES4SAP 12 SP1, in this example the SLES4SAP 12 SP1 image
https://portal.azure.com/#create/SUSE.SUSELinuxEnterpriseServerforSAPApplications12SP1PremiumImage-
ARM
SLES For SAP Applications 12 SP1 is used
Select Availability Set created earlier
5. Create Virtual Machine 2
Use at least SLES4SAP 12 SP1, in this example the SLES4SAP 12 SP1 image
https://portal.azure.com/#create/SUSE.SUSELinuxEnterpriseServerforSAPApplications12SP1PremiumImage-
ARM
SLES For SAP Applications 12 SP1 is used
Select Availability Set created earlier
6. Add at least one data disk to both virtual machines
The data disks are used for the /usr/sap/<SAPSID> directory
7. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and
nw1-aers-frontend )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select Virtual Machine
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and
nw1-aers-hp )
d. Load-balancing rules
a. Load-balancing rules for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-ascs )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend , nw1-backend and nw1-ascs-hp )
d. Select HA ports
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example nw1-lb-
ers )
8. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and
nw1-aers-frontend )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and
nw1-aers-hp )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-3200 )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above for ports 3600, 3900, 8100, 50013, 50014, 50016 and TCP
for the ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above for ports 3302, 50213, 50214, 50216 and TCP for the ASCS
ERS

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.

NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow
routing to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for
Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.

IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause
the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
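For example, on each cluster node you can disable TCP timestamps immediately and persist the setting across reboots:

sudo sysctl -w net.ipv4.tcp_timestamps=0

# Persist the setting
echo "net.ipv4.tcp_timestamps = 0" | sudo tee -a /etc/sysctl.conf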

Create Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to create a basic
Pacemaker cluster for this (A)SCS server.
Installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2]
- only applicable to node 2.
1. [A] Install SUSE Connector

sudo zypper install sap-suse-cluster-connector

NOTE
The known issue with using a dash in host names is fixed with version 3.1.1 of package sap-suse-cluster-
connector . Make sure that you are using at least version 3.1.1 of package sap-suse-cluster-connector, if using
cluster nodes with dash in the host name. Otherwise your cluster will not work.

Make sure that you installed the new version of the SAP SUSE cluster connector. The old one was called
sap_suse_cluster_connector and the new one is called sap-suse-cluster-connector .

sudo zypper info sap-suse-cluster-connector

Information for package sap-suse-cluster-connector:


---------------------------------------------------
Repository : SLE-12-SP3-SAP-Updates
Name : sap-suse-cluster-connector
Version : 3.0.0-2.2
Arch : noarch
Vendor : SUSE LLC <https://www.suse.com/>
Support Level : Level 3
Installed Size : 41.6 KiB
Installed : Yes
Status : up-to-date
Source package : sap-suse-cluster-connector-3.0.0-2.2.src
Summary : SUSE High Availability Setup for SAP Products

2. [A] Update SAP resource agents


A patch for the resource-agents package is required to use the new configuration, that is described in
this article. You can check, if the patch is already installed with the following command

sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance

The output should be similar to

<parameter name="IS_ERS" unique="0" required="0">

If the grep command does not find the IS_ERS parameter, you need to install the patch listed on the
SUSE download page

# example for patch for SLES 12 SP1


sudo zypper in -t patch SUSE-SLE-HA-12-SP1-2017-885=1
# example for patch for SLES 12 SP2
sudo zypper in -t patch SUSE-SLE-HA-12-SP2-2017-886=1

3. [A] Setup host name resolution


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use
the /etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your
environment

# IP address of the load balancer frontend configuration for NFS


10.0.0.4 nw1-nfs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
10.0.0.8 nw1-aers
# IP address of the load balancer frontend configuration for database
10.0.0.13 nw1-db

Prepare for SAP NetWeaver installation


1. [A] Create the shared directories

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/NW1/SYS
sudo mkdir -p /usr/sap/NW1/ASCS00
sudo mkdir -p /usr/sap/NW1/ERS02

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/NW1/SYS
sudo chattr +i /usr/sap/NW1/ASCS00
sudo chattr +i /usr/sap/NW1/ERS02

2. [A] Configure autofs

sudo vi /etc/auto.master

# Add the following line to the file, save and exit


+auto.master
/- /etc/auto.direct

Create a file with

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit


/sapmnt/NW1 -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/sapmntsid
/usr/sap/trans -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/trans
/usr/sap/NW1/SYS -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/sidsys

Restart autofs to mount the new shares

sudo systemctl enable autofs


sudo service autofs restart

3. [A] Configure SWAP file


sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

Installing SAP NetWeaver ASCS/ERS


1. [1] Create a virtual IP resource and health-probe for the ASCS instance

IMPORTANT
Recent testing revealed situations, where netcat stops responding to requests due to backlog and its limitation
of handling only one connection. The netcat resource stops listening to the Azure Load balancer requests and
the floating IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we
recommend using azure-lb resource agent, which is part of package resource-agents, with the following package
version requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change will require brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure
Load-Balancer Detection Hardening, there is no requirement to switch immediately to azure-lb resource agent.

sudo crm node standby nw1-cl-1

sudo crm configure primitive fs_NW1_ASCS Filesystem device='nw1-nfs:/NW1/ASCS' \
  directory='/usr/sap/NW1/ASCS00' fstype='nfs4' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW1_ASCS IPaddr2 \
  params ip=10.0.0.7 cidr_netmask=24 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW1_ASCS azure-lb port=62000

sudo crm configure group g-NW1_ASCS fs_NW1_ASCS nc_NW1_ASCS vip_NW1_ASCS \
  meta resource-stickiness=3000

Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
sudo crm_mon -r

# Node nw1-cl-1: standby


# Online: [ nw1-cl-0 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started nw1-cl-0
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0

2. [1] Install SAP NetWeaver ASCS


Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ASCS, for example nw1-ascs , 10.0.0.7 and
the instance number that you used for the probe of the load balancer, for example 00 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to
connect to sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00, try setting the owner and group
of the ASCS00 folder and retry.

chown nw1adm /usr/sap/NW1/ASCS00


chgrp sapsys /usr/sap/NW1/ASCS00

3. [1] Create a virtual IP resource and health-probe for the ERS instance

sudo crm node online nw1-cl-1


sudo crm node standby nw1-cl-0

sudo crm configure primitive fs_NW1_ERS Filesystem device='nw1-nfs:/NW1/ASCSERS' \
  directory='/usr/sap/NW1/ERS02' fstype='nfs4' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW1_ERS IPaddr2 \
  params ip=10.0.0.8 cidr_netmask=24 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW1_ERS azure-lb port=62102

sudo crm configure group g-NW1_ERS fs_NW1_ERS nc_NW1_ERS vip_NW1_ERS

Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
sudo crm_mon -r

# Node nw1-cl-0: standby


# Online: [ nw1-cl-1 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started nw1-cl-1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
# Resource Group: g-NW1_ERS
# fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1

4. [2] Install SAP NetWeaver ERS


Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ERS, for example nw1-aers , 10.0.0.8 and
the instance number that you used for the probe of the load balancer, for example 02 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to
connect to sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

NOTE
Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will
fail.

If the installation fails to create a subfolder in /usr/sap/NW1/ERS02, try setting the owner and group of
the ERS02 folder and retry.

chown nw1adm /usr/sap/NW1/ERS02


chgrp sapsys /usr/sap/NW1/ERS02

5. [1] Adapt the ASCS/SCS and ERS instance profiles


ASCS/SCS profile

sudo vi /sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# Add the keep alive parameter, if using ENSA1


enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP
note 1410736.
ERS profile

sudo vi /sapmnt/NW1/profile/NW1_ERS02_nw1-aers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# remove Autostart from ERS profile


# Autostart = 1

6. [A] Configure Keep Alive


The communication between the SAP NetWeaver application server and the ASCS/SCS is routed
through a software load balancer. The load balancer disconnects inactive connections after a
configurable timeout. To prevent this you need to set a parameter in the SAP NetWeaver ASCS/SCS
profile, if using ENSA1, and change the Linux system keepalive settings on all SAP servers for both
ENSA1/ENSA2. Read SAP Note 1410736 for more information.

# Change the Linux system configuration


sudo sysctl net.ipv4.tcp_keepalive_time=300

7. [A] Configure the SAP users after the installation

# Add sidadm to the haclient group


sudo usermod -aG haclient nw1adm

8. [1] Add the ASCS and ERS SAP services to the sapservice file
Add the ASCS service entry to the second node and copy the ERS service entry to the first node.

cat /usr/sap/sapservices | grep ASCS00 | sudo ssh nw1-cl-1 "cat >>/usr/sap/sapservices"


sudo ssh nw1-cl-1 "cat /usr/sap/sapservices" | grep ERS02 | sudo tee -a /usr/sap/sapservices

9. [1] Create the SAP cluster resources


If using enqueue server 1 architecture (ENSA1), define the resources as follows:
sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
  operations \$id=rsc_sap_NW1_ASCS00-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10

sudo crm configure primitive rsc_sap_NW1_ERS02 SAPInstance \
  operations \$id=rsc_sap_NW1_ERS02-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" \
  AUTOMATIC_RECOVER=false IS_ERS=true \
  meta priority=1000

sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00


sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS02

sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS


sudo crm configure location loc_sap_NW1_failover_to_ers rsc_sap_NW1_ASCS00 rule 2000: runs_ers_NW1 eq 1
sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS02:stop symmetrical=false

sudo crm node online nw1-cl-0


sudo crm configure property maintenance-mode="false"

SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support.
If using enqueue server 2 architecture (ENSA2), define the resources as follows:

sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
  operations \$id=rsc_sap_NW1_ASCS00-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000

sudo crm configure primitive rsc_sap_NW1_ERS02 SAPInstance \
  operations \$id=rsc_sap_NW1_ERS02-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" \
  AUTOMATIC_RECOVER=false IS_ERS=true

sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00


sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS02

sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS


sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS02:stop symmetrical=false

sudo crm node online nw1-cl-0


sudo crm configure property maintenance-mode="false"

If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641019.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.
sudo crm_mon -r

# Online: [ nw1-cl-0 nw1-cl-1 ]


#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started nw1-cl-1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
# Resource Group: g-NW1_ERS
# fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
# rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

SAP NetWeaver application server preparation


Some databases require that the database instance installation is executed on an application server. Prepare
the application server virtual machines to be able to use them in these cases.
The steps below assume that you install the application server on a server different from the ASCS/SCS and
HANA servers. Otherwise some of the steps below (like configuring host name resolution) are not needed.
1. Configure operating system
Reduce the size of the dirty cache. For more information, see Low write performance on SLES 11/12
servers with large RAM.

sudo vi /etc/sysctl.conf

# Change/set the following settings


vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800

2. Setup host name resolution


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use
the /etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your
environment
# IP address of the load balancer frontend configuration for NFS
10.0.0.4 nw1-nfs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.0.0.8 nw1-aers
# IP address of the load balancer frontend configuration for database
10.0.0.13 nw1-db
# IP address of all application servers
10.0.0.20 nw1-di-0
10.0.0.21 nw1-di-1

3. Create the sapmnt directory

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans

4. Configure autofs

sudo vi /etc/auto.master

# Add the following line to the file, save and exit


+auto.master
/- /etc/auto.direct

Create a new file with

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit


/sapmnt/NW1 -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/sapmntsid
/usr/sap/trans -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/trans

Restart autofs to mount the new shares

sudo systemctl enable autofs


sudo service autofs restart

5. Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change


sudo service waagent restart

Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported database for this
installation. For more information on how to install SAP HANA in Azure, see High Availability of SAP HANA on
Azure Virtual Machines (VMs). For a list of supported databases, see SAP Note 1928533.
1. Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the database for example nw1-db and
10.0.0.13 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to
connect to sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.
1. Prepare application server
Follow the steps in the chapter SAP NetWeaver application server preparation above to prepare the
application server.
2. Install SAP NetWeaver application server
Install a primary or additional SAP NetWeaver applications server.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to
connect to sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. Update SAP HANA secure store


Update the SAP HANA secure store to point to the virtual name of the SAP HANA System Replication
setup.
Run the following command to list the entries

hdbuserstore List

This should list all entries and should look similar to


DATA FILE : /home/nw1adm/.hdb/nw1-di-0/SSFS_HDB.DAT
KEY FILE : /home/nw1adm/.hdb/nw1-di-0/SSFS_HDB.KEY

KEY DEFAULT
ENV : 10.0.0.14:30313
USER: SAPABAP1
DATABASE: HN1

The output shows that the IP address of the default entry is pointing to the virtual machine and not to
the load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the
load balancer. Make sure to use the same port (30313 in the output above) and database name (HN1
in the output above)!

su - nw1adm
hdbuserstore SET DEFAULT nw1-db:30313@HN1 SAPABAP1 <password of ABAP schema>

Test the cluster setup


The following tests are a copy of the test cases in the best practices guides of SUSE. They are copied for your
convenience. Always also read the best practices guides and perform all additional tests that might have been
added.
1. Test HAGetFailoverConfig, HACheckConfig and HACheckFailoverConfig
Run the following commands as <sapsid>adm on the node where the ASCS instance is currently
running. If the commands fail with FAIL: Insufficient memory, it might be caused by dashes in your
hostname. This is a known issue and will be fixed by SUSE in the sap-suse-cluster-connector package.
nw1-cl-0:nw1adm 54> sapcontrol -nr 00 -function HAGetFailoverConfig

# 15.08.2018 13:50:36
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: Toolchain Module
# HASAPInterfaceVersion: Toolchain Module (sap_suse_cluster_connector 3.0.1)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode:
# HANodes: nw1-cl-0, nw1-cl-1

nw1-cl-0:nw1adm 55> sapcontrol -nr 00 -function HACheckConfig

# 15.08.2018 14:00:04
# HACheckConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, Redundant ABAP instance configuration, 2 ABAP instances detected
# SUCCESS, SAP CONFIGURATION, Redundant Java instance configuration, 0 Java instances detected
# SUCCESS, SAP CONFIGURATION, Enqueue separation, All Enqueue server separated from application
server
# SUCCESS, SAP CONFIGURATION, MessageServer separation, All MessageServer separated from
application server
# SUCCESS, SAP CONFIGURATION, ABAP instances on multiple hosts, ABAP instances on multiple hosts
detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP SPOOL service configuration, 2 ABAP instances with
SPOOL service detected
# SUCCESS, SAP STATE, Redundant ABAP SPOOL service state, 2 ABAP instances with active SPOOL
service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP SPOOL service on multiple hosts, ABAP instances with
active ABAP SPOOL service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP BATCH service configuration, 2 ABAP instances with
BATCH service detected
# SUCCESS, SAP STATE, Redundant ABAP BATCH service state, 2 ABAP instances with active BATCH
service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP BATCH service on multiple hosts, ABAP instances with
active ABAP BATCH service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP DIALOG service configuration, 2 ABAP instances with
DIALOG service detected
# SUCCESS, SAP STATE, Redundant ABAP DIALOG service state, 2 ABAP instances with active DIALOG
service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP DIALOG service on multiple hosts, ABAP instances
with active ABAP DIALOG service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP UPDATE service configuration, 2 ABAP instances with
UPDATE service detected
# SUCCESS, SAP STATE, Redundant ABAP UPDATE service state, 2 ABAP instances with active UPDATE
service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP UPDATE service on multiple hosts, ABAP instances
with active ABAP UPDATE service on multiple hosts detected
# SUCCESS, SAP STATE, SCS instance running, SCS instance status ok
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version (nw1-ascs_NW1_00), SAPInstance
includes is-ers patch
# SUCCESS, SAP CONFIGURATION, Enqueue replication (nw1-ascs_NW1_00), Enqueue replication enabled
# SUCCESS, SAP STATE, Enqueue replication state (nw1-ascs_NW1_00), Enqueue replication active

nw1-cl-0:nw1adm 56> sapcontrol -nr 00 -function HACheckFailoverConfig

# 15.08.2018 14:04:08
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch

2. Manually migrate the ASCS instance


Resource state before starting the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root to migrate the ASCS instance.

nw1-cl-0:~ # crm resource migrate rsc_sap_NW1_ASCS00 force


# INFO: Move constraint created for rsc_sap_NW1_ASCS00

nw1-cl-0:~ # crm resource unmigrate rsc_sap_NW1_ASCS00


# INFO: Removed migration constraints for rsc_sap_NW1_ASCS00

# Remove failed actions for the ERS that occurred as part of the migration
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

3. Test HAFailoverToNode
Resource state before starting the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following commands as <sapsid>adm to migrate the ASCS instance.


nw1-cl-0:nw1adm 55> sapcontrol -nr 00 -host nw1-ascs -user nw1adm <password> -function
HAFailoverToNode ""

# run as root
# Remove failed actions for the ERS that occurred as part of the migration
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02
# Remove migration constraints
nw1-cl-0:~ # crm resource clear rsc_sap_NW1_ASCS00
#INFO: Removed migration constraints for rsc_sap_NW1_ASCS00

Resource state after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

4. Simulate node crash


Resource state before starting the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following command as root on the node where the ASCS instance is running

nw1-cl-0:~ # echo b > /proc/sysrq-trigger

If you use SBD, Pacemaker should not automatically start on the killed node. The status after the node is
started again should look like this.
Online: [ nw1-cl-1 ]
OFFLINE: [ nw1-cl-0 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Failed Actions:
* rsc_sap_NW1_ERS02_monitor_11000 on nw1-cl-1 'not running' (7): call=219, status=complete,
exitreason='none',
last-rc-change='Wed Aug 15 14:38:38 2018', queued=0ms, exec=0ms

Use the following commands to start Pacemaker on the killed node, clean the SBD messages, and clean
the failed resources.

# run as root
# list the SBD device(s)
nw1-cl-0:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-
36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"

nw1-cl-0:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 -d /dev/disk/by-id/scsi-


36001405034a84428af24ddd8c3a3e9e1 -d /dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3 message
nw1-cl-0 clear
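
Optionally, before starting Pacemaker you can verify that the fencing message was cleared from the node's slot. This is a quick check using the first SBD device from the output above; the slot for nw1-cl-0 should show clear:

nw1-cl-0:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 list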

nw1-cl-0:~ # systemctl start pacemaker


nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ASCS00
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

5. Test manual restart of ASCS instance


Resource state before starting the test:
stonith-sbd (stonith:external/sbd): Started nw1-cl-1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Create an enqueue lock by, for example, editing a user in transaction su01. Run the following commands as
<sapsid>adm on the node where the ASCS instance is running. The commands stop the ASCS
instance and start it again. If using the enqueue server 1 architecture, the enqueue lock is expected to be
lost in this test. If using the enqueue server 2 architecture, the enqueue lock is retained.

nw1-cl-1:nw1adm 54> sapcontrol -nr 00 -function StopWait 600 2

The ASCS instance should now be disabled in Pacemaker

rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Stopped (disabled)

Start the ASCS instance again on the same node.

nw1-cl-1:nw1adm 54> sapcontrol -nr 00 -function StartWait 600 2

The enqueue lock of transaction su01 should be lost and the back-end should have been reset. Resource
state after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

6. Kill message server process


Resource state before starting the test:
stonith-sbd (stonith:external/sbd): Started nw1-cl-1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following commands as root to identify the process of the message server and kill it.

nw1-cl-1:~ # pgrep ms.sapNW1 | xargs kill -9

If you only kill the message server once, it will be restarted by sapstart. If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as
root to clean up the resource state of the ASCS and ERS instance after the test.

nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ASCS00


nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

7. Kill enqueue server process


Resource state before starting the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root on the node where the ASCS instance is running to kill the
enqueue server.
nw1-cl-0:~ # pgrep en.sapNW1 | xargs kill -9

The ASCS instance should immediately fail over to the other node. The ERS instance should also fail
over after the ASCS instance is started. Run the following commands as root to clean up the resource
state of the ASCS and ERS instance after the test.

nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ASCS00


nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

8. Kill enqueue replication server process


Resource state before starting the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.

nw1-cl-0:~ # pgrep er.sapNW1 | xargs kill -9

If you only run the command once, sapstart will restart the process. If you run it often enough, sapstart
will not restart the process and the resource will be in a stopped state. Run the following commands as
root to clean up the resource state of the ERS instance after the test.

nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:


stonith-sbd (stonith:external/sbd): Started nw1-cl-1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

9. Kill enqueue sapstartsrv process


Resource state before starting the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following commands as root on the node where the ASCS is running.

nw1-cl-1:~ # pgrep -fl ASCS00.*sapstartsrv


# 59545 sapstartsrv

nw1-cl-1:~ # kill -9 59545

The sapstartsrv process should always be restarted by the Pacemaker resource agent. Resource state
after the test:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
High availability for SAP NetWeaver on Azure
VMs on SUSE Linux Enterprise Server with Azure
NetApp Files for SAP applications
12/22/2020 • 40 minutes to read

This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system, using Azure NetApp Files. In the
example configurations, installation commands, and so on, the ASCS instance number is 00, the ERS instance
number is 01, the Primary Application Server instance (PAS) is 02, and the Additional Application Server
instance (AAS) is 03. The SAP system ID QAS is used.
This article explains how to achieve high availability for SAP NetWeaver application with Azure NetApp Files.
The database layer isn't covered in detail in this article.
Read the following SAP Notes and papers first:
Azure NetApp Files documentation
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI (https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required
SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides The guides contain all required information to set up Netweaver HA and
SAP HANA System Replication on-premises. Use these guides as a general baseline. They provide much
more detailed information.
SUSE High Availability Extension 12 SP3 Release Notes
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving this on
SUSE Linux required building a separate, highly available NFS cluster.
It is now possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files.
Using Azure NetApp Files for the shared storage eliminates the need for an additional NFS cluster. Pacemaker is
still needed for HA of the SAP NetWeaver central services (ASCS/SCS).

SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual
hostnames and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using the Standard load balancer. The following list shows the configuration of the (A)SCS and ERS
load balancer.
(A)SCS
Frontend configuration
IP address 10.1.1.20
Probe Port
Port 620<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.1.1.21
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the
(A)SCS/ERS cluster

Setting up the Azure NetApp Files infrastructure


SAP NetWeaver requires shared storage for the transport and profile directory. Before proceeding with the
setup of the Azure NetApp Files infrastructure, familiarize yourself with the Azure NetApp Files documentation.
Check whether your selected Azure region offers Azure NetApp Files. The following link shows the availability of
Azure NetApp Files by Azure region: Azure NetApp Files Availability by Azure Region.
Azure NetApp Files is available in several Azure regions. Before deploying Azure NetApp Files, request
onboarding to Azure NetApp Files by following the Register for Azure NetApp Files instructions.
Deploy Azure NetApp Files resources
The steps assume that you have already deployed Azure Virtual Network. The Azure NetApp Files resources
and the VMs, where the Azure NetApp Files resources will be mounted must be deployed in the same Azure
Virtual Network or in peered Azure Virtual Networks.
1. If you haven't done that already, request onboarding to Azure NetApp Files.
2. Create the NetApp account in the selected Azure region, following the instructions to create NetApp
Account.
3. Set up Azure NetApp Files capacity pool, following the instructions on how to set up Azure NetApp
Files capacity pool.
The SAP Netweaver architecture presented in this article uses single Azure NetApp Files capacity pool,
Premium SKU. We recommend Azure NetApp Files Premium SKU for SAP Netweaver application
workload on Azure.
4. Delegate a subnet to Azure NetApp files as described in the instructions Delegate a subnet to Azure
NetApp Files.
5. Deploy Azure NetApp Files volumes, following the instructions to create a volume for Azure NetApp
Files. Deploy the volumes in the designated Azure NetApp Files subnet. The IP addresses of the Azure
NetApp volumes are assigned automatically. Keep in mind that the Azure NetApp Files resources and
the Azure VMs must be in the same Azure Virtual Network or in peered Azure Virtual Networks. In this
example we use two Azure NetApp Files volumes: sapQAS and trans. The file paths that are mounted
to the corresponding mount points are /usrsapqas/sapmntQAS, /usrsapqas/usrsapQASsys, etc.
a. volume sapQAS (nfs://10.1.0.4/usrsapqas/sapmntQAS)
b. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASascs)
c. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASsys)
d. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASers)
e. volume trans (nfs://10.1.0.4/trans)
f. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASpas)
g. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASaas)
In this example, we used Azure NetApp Files for all SAP NetWeaver file systems to demonstrate how Azure
NetApp Files can be used. The SAP file systems that don't need to be mounted via NFS can also be deployed
as Azure disk storage. In this example, a-e must be on Azure NetApp Files and f-g (that is,
/usr/sap/QAS/D02, /usr/sap/QAS/D03) could be deployed as Azure disk storage.
Important considerations
When considering Azure NetApp Files for the SAP Netweaver on SUSE High Availability architecture, be aware
of the following important considerations:
The minimum capacity pool is 4 TiB. The capacity pool size can be increased in 1 TiB increments.
The minimum volume size is 100 GiB.
Azure NetApp Files and all virtual machines where Azure NetApp Files volumes will be mounted must be
in the same Azure Virtual Network or in peered virtual networks in the same region. Azure NetApp Files
access over VNet peering in the same region is supported. Azure NetApp Files access over global peering
is not yet supported.
The selected virtual network must have a subnet delegated to Azure NetApp Files.
Azure NetApp Files offers export policies: you can control the allowed clients and the access type (Read & Write,
Read Only, and so on).
The Azure NetApp Files feature isn't zone aware yet. Currently, Azure NetApp Files isn't deployed in all
availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions.
Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both protocols are supported
for the SAP application layer (ASCS/ERS, SAP application servers).

Deploy Linux VMs manually via Azure portal


First you need to create the Azure NetApp Files volumes. Deploy the VMs. Afterwards, you create a load
balancer and use the virtual machines in the backend pools.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set for ASCS
Set max update domain
4. Create Virtual Machine 1
Use at least SLES4SAP 12 SP3, in this example the SLES4SAP 12 SP3 image is used
Select Availability Set created earlier for ASCS
5. Create Virtual Machine 2
Use at least SLES4SAP 12 SP3, in this example the SLES4SAP 12 SP3 image is used
Select Availability Set created earlier for ASCS
6. Create an Availability Set for the SAP application instances (PAS, AAS)
Set max update domain
7. Create Virtual Machine 3
Use at least SLES4SAP 12 SP3, in this example the SLES4SAP 12 SP3 image is used
Select Availability Set created earlier for PAS/AAS
8. Create Virtual Machine 4
Use at least SLES4SAP 12 SP3, in this example the SLES4SAP 12 SP3 image is used
Select Availability Set created earlier for PAS/AAS

Disable ID mapping (if using NFSv4.1)


The instructions in this section are only applicable, if using Azure NetApp Files volumes with NFSv4.1
protocol. Perform the configuration on all VMs, where Azure NetApp Files NFSv4.1 volumes will be mounted.
1. Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp
Files domain, i.e. defaultv4iddomain.com and the mapping is set to nobody .

IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration
on Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on
the NFS client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for
files on Azure NetApp volumes that are mounted on the VMs will be displayed as nobody .

sudo cat /etc/idmapd.conf


# Example
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
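
If the Domain value does not match and you need to change it, one possible way to apply the change (a sketch; the exact steps can vary by SLES release) is to edit /etc/idmapd.conf and then clear the NFSv4 ID mapping cache:

sudo vi /etc/idmapd.conf
# Set Domain = defaultv4iddomain.com, save and exit

# Clear the kernel ID mapping cache so the new domain takes effect
sudo nfsidmap -c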

2. [A] Verify nfs4_disable_idmapping. It should be set to Y. To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create
the directory under /sys/module, because access is reserved for the kernel and drivers.

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.1.0.4:/sapmnt/qas /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

Setting up (A)SCS
In this example, the resources were deployed manually via the Azure portal .
Deploy Azure Load Balancer manually via Azure portal
First you need to create the Azure NetApp Files volumes. Deploy the VMs. Afterwards, you create a load
balancer and use the virtual machines in the backend pool.
1. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 10.1.1.20 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 10.1.1.20 )
d. Click OK
b. IP address 10.1.1.21 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example
10.1.1.21 and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select Virtual machine
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example
62101 and health.QAS.ERS )
d. Load-balancing rules
a. Create a backend pool for the ASCS
a. Open the load balancer, select Load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created
earlier (for example frontend.QAS.ASCS , backend.QAS and health.QAS.ASCS )
d. Select HA por ts
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example
lb.QAS.ERS )
2. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 10.1.1.20 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 10.1.1.20 )
d. Click OK
b. IP address 10.1.1.21 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example
10.1.1.21 and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier for ASCS
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example
62101 and health.QAS.ERS )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select Load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS.3200 )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created
earlier (for example frontend.QAS.ASCS )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above under "d" for ports 3600, 3900, 8100, 50013, 50014, 50016
and protocol TCP for the ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above under "d" for ports 3201, 3301, 50113, 50114, 50116 and
protocol TCP for the ASCS ERS
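
If you prefer to script the load balancer configuration instead of using the Azure portal, the following Azure CLI sketch shows an equivalent health probe and HA-ports rule for the ASCS on a Standard Load Balancer. The resource group and load balancer names are placeholders; the probe, frontend, backend, and rule names follow the examples used above:

az network lb probe create --resource-group MyResourceGroup --lb-name MyLoadBalancer \
  --name health.QAS.ASCS --protocol tcp --port 62000 --interval 5 --threshold 2

az network lb rule create --resource-group MyResourceGroup --lb-name MyLoadBalancer \
  --name lb.QAS.ASCS --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name frontend.QAS.ASCS --backend-pool-name backend.QAS \
  --probe-name health.QAS.ASCS --idle-timeout 30 --floating-ip true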

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details,
see Azure Load Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address)
Standard Azure load balancer, there will be no outbound internet connectivity, unless additional
configuration is performed to allow routing to public end points. For details on how to achieve
outbound connectivity see Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.

IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP
timestamps will cause the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For
details see Load Balancer health probes.

Create Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to create a basic
Pacemaker cluster for this (A)SCS server.
Installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2]
- only applicable to node 2.
1. [A] Install SUSE Connector

sudo zypper install sap-suse-cluster-connector

NOTE
The known issue with using a dash in host names is fixed with version 3.1.1 of package sap-suse-cluster-
connector . Make sure that you are using at least version 3.1.1 of package sap-suse-cluster-connector, if using
cluster nodes with dash in the host name. Otherwise your cluster will not work.

Make sure that you installed the new version of the SAP SUSE cluster connector. The old one was called
sap_suse_cluster_connector and the new one is called sap-suse-cluster-connector .

sudo zypper info sap-suse-cluster-connector

# Information for package sap-suse-cluster-connector:


# ---------------------------------------------------
# Repository : SLE-12-SP3-SAP-Updates
# Name : sap-suse-cluster-connector
# Version : 3.1.0-8.1
# Arch : noarch
# Vendor : SUSE LLC <https://www.suse.com/>
# Support Level : Level 3
# Installed Size : 45.6 KiB
# Installed : Yes
# Status : up-to-date
# Source package : sap-suse-cluster-connector-3.1.0-8.1.src
# Summary : SUSE High Availability Setup for SAP Products

2. [A] Update SAP resource agents


A patch for the resource-agents package is required to use the new configuration, that is described in
this article. You can check, if the patch is already installed with the following command

sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance

The output should be similar to

<parameter name="IS_ERS" unique="0" required="0">

If the grep command does not find the IS_ERS parameter, you need to install the patch listed on the
SUSE download page

# example for patch for SLES 12 SP1


sudo zypper in -t patch SUSE-SLE-HA-12-SP1-2017-885=1
# example for patch for SLES 12 SP2
sudo zypper in -t patch SUSE-SLE-HA-12-SP2-2017-886=1

3. [A] Setup host name resolution


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use
the /etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your
environment

# IP address of cluster node 1


10.1.1.18 anftstsapcl1
# IP address of cluster node 2
10.1.1.6 anftstsapcl2
# IP address of the load balancer frontend configuration for SAP Netweaver ASCS
10.1.1.20 anftstsapvh
# IP address of the load balancer frontend configuration for SAP Netweaver ERS
10.1.1.21 anftstsapers

4. [1] Create SAP directories in the Azure NetApp Files volume.


Temporarily mount the Azure NetApp Files volume on one of the VMs and create the SAP
directories (file paths).
# mount temporarily the volume
sudo mkdir -p /saptmp
# If using NFSv3
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 10.1.0.4:/sapQAS /saptmp
# If using NFSv4.1
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys,tcp 10.1.0.4:/sapQAS /saptmp
# create the SAP directories
sudo cd /saptmp
sudo mkdir -p sapmntQAS
sudo mkdir -p usrsapQASascs
sudo mkdir -p usrsapQASers
sudo mkdir -p usrsapQASsys
sudo mkdir -p usrsapQASpas
sudo mkdir -p usrsapQASaas
# unmount the volume and delete the temporary directory
sudo cd ..
sudo umount /saptmp
sudo rmdir /saptmp

Prepare for SAP NetWeaver installation


1. [A] Create the shared directories

sudo mkdir -p /sapmnt/QAS


sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/QAS/SYS
sudo mkdir -p /usr/sap/QAS/ASCS00
sudo mkdir -p /usr/sap/QAS/ERS01

sudo chattr +i /sapmnt/QAS


sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/QAS/SYS
sudo chattr +i /usr/sap/QAS/ASCS00
sudo chattr +i /usr/sap/QAS/ERS01

2. [A] Configure autofs

sudo vi /etc/auto.master
# Add the following line to the file, save and exit
/- /etc/auto.direct

If using NFSv3, create a file with:

sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/SYS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASsys

If using NFSv4.1, create a file with:


sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/SYS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASsys

NOTE
Make sure to match the NFS protocol version of the Azure NetApp Files volumes, when mounting the volumes.
If the Azure NetApp Files volumes are created as NFSv3 volumes, use the corresponding NFSv3 configuration. If
the Azure NetApp Files volumes are created as NFSv4.1 volumes, follow the instructions to disable ID mapping
and make sure to use the corresponding NFSv4.1 configuration. In this example the Azure NetApp Files
volumes were created as NFSv3 volumes.

Restart autofs to mount the new shares

sudo systemctl enable autofs


sudo service autofs restart

3. [A] Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of the resource disk varies by virtual machine size. Make sure that you do not
# set a value that is too big. You can check the SWAP space with the command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

Installing SAP NetWeaver ASCS/ERS


1. [1] Create a virtual IP resource and health-probe for the ASCS instance
IMPORTANT
Recent testing revealed situations, where netcat stops responding to requests due to backlog and its limitation
of handling only one connection. The netcat resource stops listening to the Azure Load balancer requests and
the floating IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we
recommend using azure-lb resource agent, which is part of package resource-agents, with the following
package version requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change will require brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure
Load-Balancer Detection Hardening, there is no requirement to switch immediately to azure-lb resource agent.
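
You can check which version of the resource-agents package is installed on the cluster nodes, for example with:

sudo rpm -q resource-agents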

sudo crm node standby anftstsapcl2


# If using NFSv3
sudo crm configure primitive fs_QAS_ASCS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASascs'
directory='/usr/sap/QAS/ASCS00' fstype='nfs' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

# If using NFSv4.1
sudo crm configure primitive fs_QAS_ASCS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASascs'
directory='/usr/sap/QAS/ASCS00' fstype='nfs' options='sec=sys,vers=4.1' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

sudo crm configure primitive vip_QAS_ASCS IPaddr2 \


params ip=10.1.1.20 cidr_netmask=24 \
op monitor interval=10 timeout=20

sudo crm configure primitive nc_QAS_ASCS azure-lb port=62000

sudo crm configure group g-QAS_ASCS fs_QAS_ASCS nc_QAS_ASCS vip_QAS_ASCS \


meta resource-stickiness=3000

Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.

sudo crm_mon -r

# Node anftstsapcl2: standby


# Online: [ anftstsapcl1 ]
#
# Full list of resources:
#
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
# stonith-sbd (stonith:external/sbd): Started anftstsapcl2

2. [1] Install SAP NetWeaver ASCS


Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ASCS, for example anftstsapvh , 10.1.1.20
and the instance number that you used for the probe of the load balancer, for example 00 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to
connect to sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual
hostname.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

If the installation fails to create a subfolder in /usr/sap/QAS/ASCS00, try setting the owner and group
of the ASCS00 folder and retry.

chown qasadm /usr/sap/QAS/ASCS00


chgrp sapsys /usr/sap/QAS/ASCS00

3. [1] Create a virtual IP resource and health-probe for the ERS instance

sudo crm node online anftstsapcl2


sudo crm node standby anftstsapcl1
# If using NFSv3
sudo crm configure primitive fs_QAS_ERS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASers'
directory='/usr/sap/QAS/ERS01' fstype='nfs' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

# If using NFSv4.1
sudo crm configure primitive fs_QAS_ERS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASers'
directory='/usr/sap/QAS/ERS01' fstype='nfs' options='sec=sys,vers=4.1' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

sudo crm configure primitive vip_QAS_ERS IPaddr2 \


params ip=10.1.1.21 cidr_netmask=24 \
op monitor interval=10 timeout=20

sudo crm configure primitive nc_QAS_ERS azure-lb port=62101

sudo crm configure group g-QAS_ERS fs_QAS_ERS nc_QAS_ERS vip_QAS_ERS

Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
sudo crm_mon -r

# Node anftstsapcl1: standby


# Online: [ anftstsapcl2 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
# Resource Group: g-QAS_ERS
# fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2

4. [2] Install SAP NetWeaver ERS


Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ERS, for example anftstsapers , 10.1.1.21
and the instance number that you used for the probe of the load balancer, for example 01 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to
connect to sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual
hostname.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

NOTE
Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will
fail.

If the installation fails to create a subfolder in /usr/sap/QAS/ERS01, try setting the owner and group of
the ERS01 folder and retry.

chown qasadm /usr/sap/QAS/ERS01


chgrp sapsys /usr/sap/QAS/ERS01

5. [1] Adapt the ASCS/SCS and ERS instance profiles


ASCS/SCS profile
sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# Add the keep alive parameter, if using ENSA1


enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP
note 1410736.
ERS profile

sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# remove Autostart from ERS profile


# Autostart = 1

6. [A] Configure Keep Alive


The communication between the SAP NetWeaver application server and the ASCS/SCS is routed
through a software load balancer. The load balancer disconnects inactive connections after a
configurable timeout. To prevent this you need to set a parameter in the SAP NetWeaver ASCS/SCS
profile, if using ENSA1, and change the Linux system keepalive settings on all SAP servers for both
ENSA1/ENSA2. Read SAP Note 1410736 for more information.

# Change the Linux system configuration


sudo sysctl net.ipv4.tcp_keepalive_time=300
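
To make the setting persistent across reboots, you can also add it to the sysctl configuration (a sketch; the file name is an example, adjust it to your conventions):

echo "net.ipv4.tcp_keepalive_time = 300" | sudo tee /etc/sysctl.d/91-sap-keepalive.conf
sudo sysctl --system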

7. [A] Configure the SAP users after the installation

# Add sidadm to the haclient group


sudo usermod -aG haclient qasadm

8. [1] Add the ASCS and ERS SAP services to the sapservice file
Add the ASCS service entry to the second node and copy the ERS service entry to the first node.
cat /usr/sap/sapservices | grep ASCS00 | sudo ssh anftstsapcl2 "cat >>/usr/sap/sapservices"
sudo ssh anftstsapcl2 "cat /usr/sap/sapservices" | grep ERS01 | sudo tee -a /usr/sap/sapservices
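
Afterwards, you can verify that both nodes contain the ASCS00 and ERS01 entries (a quick check):

cat /usr/sap/sapservices
sudo ssh anftstsapcl2 "cat /usr/sap/sapservices"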

9. [1] Create the SAP cluster resources


If using enqueue server 1 architecture (ENSA1), define the resources as follows:

sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \


operations \$id=rsc_sap_QAS_ASCS00-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh"
\
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10

sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \


operations \$id=rsc_sap_QAS_ERS01-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers"
AUTOMATIC_RECOVER=false IS_ERS=true \
meta priority=1000

sudo crm configure modgroup g-QAS_ASCS add rsc_sap_QAS_ASCS00


sudo crm configure modgroup g-QAS_ERS add rsc_sap_QAS_ERS01

sudo crm configure colocation col_sap_QAS_no_both -5000: g-QAS_ERS g-QAS_ASCS


sudo crm configure location loc_sap_QAS_failover_to_ers rsc_sap_QAS_ASCS00 rule 2000: runs_ers_QAS eq
1
sudo crm configure order ord_sap_QAS_first_start_ascs Optional: rsc_sap_QAS_ASCS00:start
rsc_sap_QAS_ERS01:stop symmetrical=false

sudo crm node online anftstsapcl1


sudo crm configure property maintenance-mode="false"

SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support.
If using enqueue server 2 architecture (ENSA2), define the resources as follows:
sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \


operations \$id=rsc_sap_QAS_ASCS00-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh"
\
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000

sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \


operations \$id=rsc_sap_QAS_ERS01-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers"
AUTOMATIC_RECOVER=false IS_ERS=true

sudo crm configure modgroup g-QAS_ASCS add rsc_sap_QAS_ASCS00


sudo crm configure modgroup g-QAS_ERS add rsc_sap_QAS_ERS01

sudo crm configure colocation col_sap_QAS_no_both -5000: g-QAS_ERS g-QAS_ASCS


sudo crm configure order ord_sap_QAS_first_start_ascs Optional: rsc_sap_QAS_ASCS00:start
rsc_sap_QAS_ERS01:stop symmetrical=false

sudo crm node online anftstsapcl1


sudo crm configure property maintenance-mode="false"

If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641019.
Make sure that the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.

sudo crm_mon -r
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
# rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
# Resource Group: g-QAS_ERS
# fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
# rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

SAP NetWeaver application server preparation


Some databases require that the database instance installation is executed on an application server. Prepare
the application server virtual machines to be able to use them in these cases.
The steps below assume that you install the application server on a server different from the ASCS/SCS and
HANA servers. Otherwise, some of the steps below (like configuring host name resolution) are not needed.
The following items are prefixed with either [A] - applicable to both PAS and AAS, [P] - only applicable to PAS
or [S] - only applicable to AAS.
1. [A] Configure operating system
Reduce the size of the dirty cache. For more information, see Low write performance on SLES 11/12
servers with large RAM.

sudo vi /etc/sysctl.conf
# Change/set the following settings
vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800
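
To activate the new values without a reboot, you can reload the settings from /etc/sysctl.conf (assuming you edited that file as shown above):

sudo sysctl -p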

2. [A] Setup host name resolution


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use
the /etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your
environment

# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.1.1.20 anftstsapvh
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.1.1.21 anftstsapers
# IP address of all application servers
10.1.1.15 anftstsapa01
10.1.1.16 anftstsapa02

3. [A] Create the sapmnt directory

sudo mkdir -p /sapmnt/QAS


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/QAS


sudo chattr +i /usr/sap/trans

4. [P] Create the PAS directory

sudo mkdir -p /usr/sap/QAS/D02


sudo chattr +i /usr/sap/QAS/D02

5. [S] Create the AAS directory

sudo mkdir -p /usr/sap/QAS/D03


sudo chattr +i /usr/sap/QAS/D03

6. [P] Configure autofs on PAS

sudo vi /etc/auto.master

# Add the following line to the file, save and exit


/- /etc/auto.direct

If using NFSv3, create a new file with:

sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/D02 -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASpas

If using NFSv4.1, create a new file with:

sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/D02 -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASpas

Restart autofs to mount the new shares

sudo systemctl enable autofs


sudo service autofs restart

7. [S] Configure autofs on AAS

sudo vi /etc/auto.master

# Add the following line to the file, save and exit


/- /etc/auto.direct

If using NFSv3, create a new file with:

sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/D03 -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASaas

If using NFSv4.1, create a new file with:

sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/D03 -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASaas

Restart autofs to mount the new shares

sudo systemctl enable autofs


sudo service autofs restart
8. [A] Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart
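
Optionally, verify that the Azure agent created and enabled the swap file (sizes vary by virtual machine size; this check is not part of the original steps):

sudo swapon -s
free -m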

Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported database for this
installation. For more information on how to install SAP HANA in Azure, see High Availability of SAP HANA on
Azure Virtual Machines (VMs). For a list of supported databases, see SAP Note 1928533.
Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the database.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to
connect to sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.
1. [A] Prepare application server Follow the steps in the chapter SAP NetWeaver application server
preparation above to prepare the application server.
2. [A] Install SAP NetWeaver application server Install a primary or additional SAP NetWeaver
applications server.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to
connect to sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. [A] Update SAP HANA secure store


Update the SAP HANA secure store to point to the virtual name of the SAP HANA System Replication
setup.
Run the following command to list the entries
hdbuserstore List

This should list all entries and should look similar to

DATA FILE : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.DAT


KEY FILE : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.KEY

KEY DEFAULT
ENV : 10.1.1.5:30313
USER: SAPABAP1
DATABASE: QAS

The output shows that the IP address of the default entry is pointing to the virtual machine and not to
the load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the
load balancer. Make sure to use the same port (30313 in the output above) and database name (QAS
in the output above)!

su - qasadm
hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP schema>
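
You can run hdbuserstore List again as qasadm to confirm that the ENV field of the DEFAULT key now shows the virtual hostname qasdb instead of the IP address of an individual HANA node:

hdbuserstore List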

Test the cluster setup


The following tests are a copy of the test cases in the best practices guides of SUSE. They are copied for your
convenience. Always also read the best practices guides and perform all additional tests that might have been
added.
1. Test HAGetFailoverConfig, HACheckConfig, and HACheckFailoverConfig
Run the following commands as <sapsid>adm on the node where the ASCS instance is currently
running. If the commands fail with FAIL: Insufficient memory, it might be caused by dashes in your
hostname. This is a known issue and will be fixed by SUSE in the sap-suse-cluster-connector package.
anftstsapcl1:qasadm 52> sapcontrol -nr 00 -function HAGetFailoverConfig
07.03.2019 20:08:59
HAGetFailoverConfig
OK
HAActive: TRUE
HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP3
HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP3
(sap_suse_cluster_connector 3.1.0)
HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
HAActiveNode: anftstsapcl1
HANodes: anftstsapcl1, anftstsapcl2

anftstsapcl1:qasadm 54> sapcontrol -nr 00 -function HACheckConfig


07.03.2019 23:28:29
HACheckConfig
OK
state, category, description, comment
SUCCESS, SAP CONFIGURATION, Redundant ABAP instance configuration, 2 ABAP instances detected
SUCCESS, SAP CONFIGURATION, Redundant Java instance configuration, 0 Java instances detected
SUCCESS, SAP CONFIGURATION, Enqueue separation, All Enqueue server separated from application
server
SUCCESS, SAP CONFIGURATION, MessageServer separation, All MessageServer separated from application
server
SUCCESS, SAP CONFIGURATION, ABAP instances on multiple hosts, ABAP instances on multiple hosts
detected
SUCCESS, SAP CONFIGURATION, Redundant ABAP SPOOL service configuration, 2 ABAP instances with SPOOL
service detected
SUCCESS, SAP STATE, Redundant ABAP SPOOL service state, 2 ABAP instances with active SPOOL service
detected
SUCCESS, SAP STATE, ABAP instances with ABAP SPOOL service on multiple hosts, ABAP instances with
active ABAP SPOOL service on multiple hosts detected
SUCCESS, SAP CONFIGURATION, Redundant ABAP BATCH service configuration, 2 ABAP instances with BATCH
service detected
SUCCESS, SAP STATE, Redundant ABAP BATCH service state, 2 ABAP instances with active BATCH service
detected
SUCCESS, SAP STATE, ABAP instances with ABAP BATCH service on multiple hosts, ABAP instances with
active ABAP BATCH service on multiple hosts detected
SUCCESS, SAP CONFIGURATION, Redundant ABAP DIALOG service configuration, 2 ABAP instances with
DIALOG service detected
SUCCESS, SAP STATE, Redundant ABAP DIALOG service state, 2 ABAP instances with active DIALOG
service detected
SUCCESS, SAP STATE, ABAP instances with ABAP DIALOG service on multiple hosts, ABAP instances with
active ABAP DIALOG service on multiple hosts detected
SUCCESS, SAP CONFIGURATION, Redundant ABAP UPDATE service configuration, 2 ABAP instances with
UPDATE service detected
SUCCESS, SAP STATE, Redundant ABAP UPDATE service state, 2 ABAP instances with active UPDATE
service detected
SUCCESS, SAP STATE, ABAP instances with ABAP UPDATE service on multiple hosts, ABAP instances with
active ABAP UPDATE service on multiple hosts detected
SUCCESS, SAP STATE, SCS instance running, SCS instance status ok
SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version (anftstsapvh_QAS_00), SAPInstance
includes is-ers patch
SUCCESS, SAP CONFIGURATION, Enqueue replication (anftstsapvh_QAS_00), Enqueue replication enabled
SUCCESS, SAP STATE, Enqueue replication state (anftstsapvh_QAS_00), Enqueue replication active

anftstsapcl1:qasadm 55> sapcontrol -nr 00 -function HACheckFailoverConfig


07.03.2019 23:30:48
HACheckFailoverConfig
OK
state, category, description, comment
SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch

2. Manually migrate the ASCS instance


Resource state before starting the test:
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Starting anftstsapcl1

Run the following commands as root to migrate the ASCS instance.

anftstsapcl1:~ # crm resource migrate rsc_sap_QAS_ASCS00 force


INFO: Move constraint created for rsc_sap_QAS_ASCS00

anftstsapcl1:~ # crm resource unmigrate rsc_sap_QAS_ASCS00


INFO: Removed migration constraints for rsc_sap_QAS_ASCS00

# Remove failed actions for the ERS that occurred as part of the migration
anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

3. Test HAFailoverToNode
Resource state before starting the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Run the following commands as <sapsid>adm to migrate the ASCS instance.


anftstsapcl1:qasadm 53> sapcontrol -nr 00 -host anftstsapvh -user qasadm <password> -function
HAFailoverToNode ""

# run as root
# Remove failed actions for the ERS that occurred as part of the migration
anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
# Remove migration constraints
anftstsapcl1:~ # crm resource clear rsc_sap_QAS_ASCS00
#INFO: Removed migration constraints for rsc_sap_QAS_ASCS00

Resource state after the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

4. Simulate node crash


Resource state before starting the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Run the following command as root on the node where the ASCS instance is running

anftstsapcl2:~ # echo b > /proc/sysrq-trigger

If you use SBD, Pacemaker should not automatically start on the killed node. The status after the node is
started again should look like this.
Online: [ anftstsapcl1 ]
OFFLINE: [ anftstsapcl2 ]

Full list of resources:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Failed Actions:
* rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=166, status=complete,
exitreason='',
last-rc-change='Fri Mar 8 18:26:10 2019', queued=0ms, exec=0ms

Use the following commands to start Pacemaker on the killed node, clean the SBD messages, and clean
the failed resources.

# run as root
# list the SBD device(s)
anftstsapcl2:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405b730e31e7d5a4516a2a697dcf;/dev/disk/by-id/scsi-36001405f69d7ed91ef54461a442c676e;/dev/disk/by-id/scsi-360014058e5f335f2567488882f3a2c3a"

anftstsapcl2:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e11 -d /dev/disk/by-id/scsi-36001405f69d7ed91ef54461a442c676e -d /dev/disk/by-id/scsi-360014058e5f335f2567488882f3a2c3a message anftstsapcl2 clear

anftstsapcl2:~ # systemctl start pacemaker


anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ASCS00
anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

Full list of resources:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

5. Test manual restart of ASCS instance


Resource state before starting the test:
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Create an enqueue lock by, for example, editing a user in transaction su01. Run the following commands as
<sapsid>adm on the node where the ASCS instance is running. The commands will stop the ASCS
instance and start it again. If using enqueue server 1 architecture, the enqueue lock is expected to be
lost in this test. If using enqueue server 2 architecture, the enqueue lock will be retained.

anftstsapcl2:qasadm 51> sapcontrol -nr 00 -function StopWait 600 2

The ASCS instance should now be disabled in Pacemaker

rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Stopped (disabled)

Start the ASCS instance again on the same node.

anftstsapcl2:qasadm 52> sapcontrol -nr 00 -function StartWait 600 2

If using enqueue server 1 architecture, the enqueue lock of transaction su01 should be lost and the
back end should have been reset. Resource state after the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

6. Kill message server process


Resource state before starting the test:
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Run the following commands as root to identify the process of the message server and kill it.

anftstsapcl2:~ # pgrep ms.sapQAS | xargs kill -9

If you only kill the message server once, it will be restarted by sapstart . If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as
root to clean up the resource state of the ASCS and ERS instance after the test.

anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ASCS00


anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

7. Kill enqueue server process


Resource state before starting the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Run the following commands as root on the node where the ASCS instance is running to kill the
enqueue server.

anftstsapcl1:~ # pgrep en.sapQAS | xargs kill -9

The ASCS instance should immediately fail over to the other node. The ERS instance should also fail
over after the ASCS instance is started. Run the following commands as root to clean up the resource
state of the ASCS and ERS instance after the test.

anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ASCS00


anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

8. Kill enqueue replication server process


Resource state before starting the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.

anftstsapcl1:~ # pgrep er.sapQAS | xargs kill -9

If you only run the command once, sapstart will restart the process. If you run it often enough,
sapstart will not restart the process and the resource will be in a stopped state. Run the following
commands as root to clean up the resource state of the ERS instance after the test.

anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

9. Kill enqueue sapstartsrv process


Resource state before starting the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Run the following commands as root on the node where the ASCS is running.

anftstsapcl2:~ # pgrep -fl ASCS00.*sapstartsrv


#67625 sapstartsrv

anftstsapcl2:~ # kill -9 67625
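
Optionally, confirm that a new sapstartsrv process was spawned before checking the cluster state (the PID will differ from the one that was killed; this check is not part of the original test):

anftstsapcl2:~ # pgrep -fl ASCS00.*sapstartsrv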

The sapstartsrv process should always be restarted by the Pacemaker resource agent. Resource state
after the test:

Resource Group: g-QAS_ASCS


fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
stonith-sbd (stonith:external/sbd): Started anftstsapcl1
Resource Group: g-QAS_ERS
fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux

This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system. In the example configurations and
installation commands, ASCS instance number 00, ERS instance number 02, and SAP System ID NW1 are
used. The names of the resources (for example virtual machines, virtual networks) in the example assume that
you have used the ASCS/SCS template with Resource Prefix NW1 to create the resources.
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
Product Documentation for Red Hat Gluster Storage
SAP Netweaver in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP Netweaver with standalone resources in RHEL 7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on
RHEL
Azure specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure

Overview
To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is configured in a separate
cluster and can be used by multiple SAP systems.

SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual
hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using Standard load balancer. The following list shows the configuration of the (A)SCS and ERS
load balancer.
(A)SCS
Frontend configuration
IP address 10.0.0.7
Probe Port
Port 620<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.0.0.8
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the
(A)SCS/ERS cluster

Setting up GlusterFS
SAP NetWeaver requires shared storage for the transport and profile directory. Read GlusterFS on Azure VMs
on Red Hat Enterprise Linux for SAP NetWeaver on how to set up GlusterFS for SAP NetWeaver.

Setting up (A)SCS
You can either use an Azure Template from GitHub to deploy all required Azure resources, including the virtual
machines, availability set and load balancer or you can deploy the resources manually.
Deploy Linux via Azure Template
The Azure Marketplace contains an image for Red Hat Enterprise Linux that you can use to deploy new virtual
machines. You can use one of the quickstart templates on GitHub to deploy all required resources. The template
deploys the virtual machines, the load balancer, availability set etc. Follow these steps to deploy the template:
1. Open the ASCS/SCS template on the Azure portal
2. Enter the following parameters
a. Resource Prefix
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. Stack Type
Select the SAP NetWeaver stack type
c. Os Type
Select one of the Linux distributions. For this example, select RHEL 7
d. Db Type
Select HANA
e. Sap System Count
The number of SAP systems that run in this cluster. Select 1.
f. System Availability
Select HA
g. Admin Username, Admin Password or SSH key
A new user is created that can be used to sign in to the machine.
h. Subnet ID
If you want to deploy the VM into an existing VNet where you have a subnet defined that the VM should
be assigned to, name the ID of that specific subnet. The ID usually looks like
/subscriptions/<subscription ID>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network
name>/subnets/<subnet name>
Deploy Linux manually via Azure portal
You first need to create the virtual machines for this cluster. Afterwards, you create a load balancer and use the
virtual machines in the backend pool.
1. Create a Resource Group
2. Create a Virtual Network
3. Create an Availability Set
Set max update domain
4. Create Virtual Machine 1
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
5. Create Virtual Machine 2
Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM
Select Availability Set created earlier
6. Add at least one data disk to both virtual machines
The data disks are used for the /usr/sap/<SAPSID> directory
7. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and
nw1-aers-frontend )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select Virtual machine.
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and
nw1-aers-hp )
d. Load-balancing rules
a. Load-balancing rules for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-ascs )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend , nw1-backend and nw1-ascs-hp )
d. Select HA ports
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example nw1-lb-ers )
8. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 10.0.0.7 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example nw1-ascs-frontend )
c. Set the Assignment to Static and enter the IP address (for example 10.0.0.7 )
d. Click OK
b. IP address 10.0.0.8 for the ASCS ERS
Repeat the steps above to create an IP address for the ERS (for example 10.0.0.8 and
nw1-aers-frontend )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example nw1-backend )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example nw1-ascs-hp )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62102 for ASCS ERS
Repeat the steps above to create a health probe for the ERS (for example 62102 and
nw1-aers-hp )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select load-balancing rules and click Add
b. Enter the name of the new load balancer rule (for example nw1-lb-3200 )
c. Select the frontend IP address, backend pool, and health probe you created earlier (for
example nw1-ascs-frontend )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above for ports 3600, 3900, 8100, 50013, 50014, 50016 and TCP for
the ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above for ports 3302, 50213, 50214, 50216 and TCP for the ASCS
ERS

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.

NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow
routing to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for
Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.

IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
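A minimal sketch of how the parameter can be set on the cluster VMs, assuming you also want to persist it across reboots in /etc/sysctl.conf (a common practice, not an explicit step in this guide):

# Set the value for the running system
sudo sysctl net.ipv4.tcp_timestamps=0
# Persist the value across reboots
echo "net.ipv4.tcp_timestamps = 0" | sudo tee -a /etc/sysctl.conf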

Create Pacemaker cluster


Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create a basic Pacemaker
cluster for this (A)SCS server.
Prepare for SAP NetWeaver installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
1. [A] Setup host name resolution
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use
the /etc/hosts file. Replace the IP address and the hostname in the following commands
sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP addresses of the GlusterFS nodes


10.0.0.40 glust-0
10.0.0.41 glust-1
10.0.0.42 glust-2
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
10.0.0.8 nw1-aers

2. [A] Create the shared directories

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/NW1/SYS
sudo mkdir -p /usr/sap/NW1/ASCS00
sudo mkdir -p /usr/sap/NW1/ERS02

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/NW1/SYS
sudo chattr +i /usr/sap/NW1/ASCS00
sudo chattr +i /usr/sap/NW1/ERS02

3. [A] Install GlusterFS client and other requirements

sudo yum -y install glusterfs-fuse resource-agents resource-agents-sap

4. [A] Check version of resource-agents-sap


Make sure that the version of the installed resource-agents-sap package is at least 3.9.5-124.el7

sudo yum info resource-agents-sap

# Loaded plugins: langpacks, product-id, search-disabled-repos


# Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
# Installed Packages
# Name : resource-agents-sap
# Arch : x86_64
# Version : 3.9.5
# Release : 124.el7
# Size : 100 k
# Repo : installed
# From repo : rhel-sap-for-rhel-7-server-rpms
# Summary : SAP cluster resource agents and connector script
# URL : https://github.com/ClusterLabs/resource-agents
# License : GPLv2+
# Description : The SAP resource agents and connector script interface with
# : Pacemaker to allow SAP instances to be managed in a cluster
# : environment.

5. [A] Add mount entries


sudo vi /etc/fstab

# Add the following lines to fstab, save and exit


glust-0:/NW1-sapmnt /sapmnt/NW1 glusterfs backup-volfile-servers=glust-1:glust-2 0 0
glust-0:/NW1-trans /usr/sap/trans glusterfs backup-volfile-servers=glust-1:glust-2 0 0
glust-0:/NW1-sys /usr/sap/NW1/SYS glusterfs backup-volfile-servers=glust-1:glust-2 0 0

Mount the new shares

sudo mount -a
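
Optionally, verify that the GlusterFS volumes are mounted on both nodes (an extra check; sizes depend on your volume configuration):

df -h /sapmnt/NW1 /usr/sap/trans /usr/sap/NW1/SYS
mount | grep glusterfs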

6. [A] Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

7. [A] RHEL configuration


Configure RHEL as described in SAP Note 2002167
Installing SAP NetWeaver ASCS/ERS
1. [1] Create a virtual IP resource and health-probe for the ASCS instance

sudo pcs node standby nw1-cl-1

sudo pcs resource create fs_NW1_ASCS Filesystem device='glust-0:/NW1-ascs' \


directory='/usr/sap/NW1/ASCS00' fstype='glusterfs' \
options='backup-volfile-servers=glust-1:glust-2' \
--group g-NW1_ASCS

sudo pcs resource create vip_NW1_ASCS IPaddr2 \


ip=10.0.0.7 cidr_netmask=24 \
--group g-NW1_ASCS

sudo pcs resource create nc_NW1_ASCS azure-lb port=62000 \


--group g-NW1_ASCS

Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
sudo pcs status

# Node nw1-cl-1: standby


# Online: [ nw1-cl-0 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0

2. [1] Install SAP NetWeaver ASCS


Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ASCS, for example nw1-ascs , 10.0.0.7 and
the instance number that you used for the probe of the load balancer, for example 00 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst.

# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
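
Depending on the SWPM version, you can also pass the virtual hostname to sapinst explicitly instead of relying only on host name resolution. The parameter shown below is a commonly used sapinst option and is given here as an illustrative assumption rather than a step from this guide:

sudo <swpm>/sapinst SAPINST_USE_HOSTNAME=nw1-ascs SAPINST_REMOTE_ACCESS_USER=sapadmin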

If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00, try setting the owner and group
of the ASCS00 folder and retry.

sudo chown nw1adm /usr/sap/NW1/ASCS00


sudo chgrp sapsys /usr/sap/NW1/ASCS00

3. [1] Create a virtual IP resource and health-probe for the ERS instance

sudo pcs node unstandby nw1-cl-1


sudo pcs node standby nw1-cl-0

sudo pcs resource create fs_NW1_AERS Filesystem device='glust-0:/NW1-aers' \


directory='/usr/sap/NW1/ERS02' fstype='glusterfs' \
options='backup-volfile-servers=glust-1:glust-2' \
--group g-NW1_AERS

sudo pcs resource create vip_NW1_AERS IPaddr2 \


ip=10.0.0.8 cidr_netmask=24 \
--group g-NW1_AERS

sudo pcs resource create nc_NW1_AERS azure-lb port=62102 \


--group g-NW1_AERS

Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.
sudo pcs status

# Node nw1-cl-0: standby


# Online: [ nw1-cl-1 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1

4. [2] Install SAP NetWeaver ERS


Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ERS, for example nw1-aers , 10.0.0.8 and
the instance number that you used for the probe of the load balancer, for example 02 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst.

# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

If the installation fails to create a subfolder in /usr/sap/NW1/ERS02, try setting the owner and group of
the ERS02 folder and retry.

sudo chown nw1adm /usr/sap/NW1/ERS02


sudo chgrp sapsys /usr/sap/NW1/ERS02

5. [1] Adapt the ASCS/SCS and ERS instance profiles


ASCS/SCS profile

sudo vi /sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the keep alive parameter, if using ENSA1


enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP
note 1410736.
ERS profile
sudo vi /sapmnt/NW1/profile/NW1_ERS02_nw1-aers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# remove Autostart from ERS profile


# Autostart = 1

6. [A] Configure Keep Alive


The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through
a software load balancer. The load balancer disconnects inactive connections after a configurable timeout.
To prevent this, you need to set a parameter in the SAP NetWeaver ASCS/SCS profile, if using ENSA1,
and change the Linux system keepalive settings on all SAP servers for both ENSA1/ENSA2. Read SAP
Note 1410736 for more information.

# Change the Linux system configuration


sudo sysctl net.ipv4.tcp_keepalive_time=300
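
To keep the setting across reboots, you can also persist it in /etc/sysctl.conf (a common practice, not an explicit step in this guide; the value matches the command above):

echo "net.ipv4.tcp_keepalive_time = 300" | sudo tee -a /etc/sysctl.conf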

7. [A] Update the /usr/sap/sapservices file


To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker
must be commented out from /usr/sap/sapservices file. Do not comment out the SAP HANA instance if it
will be used with HANA SR.

sudo vi /usr/sap/sapservices

# On the node where you installed the ASCS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/NW1/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW1/ASCS00/exe/sapstartsrv pf=/usr/sap/NW1/SYS/profile/NW1_ASCS00_nw1-ascs -D -u nw1adm

# On the node where you installed the ERS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/NW1/ERS02/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW1/ERS02/exe/sapstartsrv pf=/usr/sap/NW1/ERS02/profile/NW1_ERS02_nw1-aers -D -u nw1adm

8. [1] Create the SAP cluster resources


If using enqueue server 1 architecture (ENSA1), define the resources as follows:
sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \


InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-NW1_ASCS

sudo pcs resource create rsc_sap_NW1_ERS02 SAPInstance \


InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0
timeout=600 \
--group g-NW1_AERS

sudo pcs constraint colocation add g-NW1_AERS with g-NW1_ASCS -5000


sudo pcs constraint location rsc_sap_NW1_ASCS00 rule score=2000 runs_ers_NW1 eq 1
sudo pcs constraint order g-NW1_ASCS then g-NW1_AERS kind=Optional symmetrical=false

sudo pcs node unstandby nw1-cl-0


sudo pcs property set maintenance-mode=false

SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP
Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support. If
using enqueue server 2 architecture (ENSA2), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or
newer and define the resources as follows:

sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \


InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-NW1_ASCS

sudo pcs resource create rsc_sap_NW1_ERS02 SAPInstance \


InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0
timeout=600 \
--group g-NW1_AERS

sudo pcs constraint colocation add g-NW1_AERS with g-NW1_ASCS -5000


sudo pcs constraint order g-NW1_ASCS then g-NW1_AERS kind=Optional symmetrical=false
sudo pcs constraint order start g-NW1_ASCS then stop g-NW1_AERS symmetrical=false

sudo pcs node unstandby nw1-cl-0


sudo pcs property set maintenance-mode=false

If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641322.

NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.

Make sure that the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.

sudo pcs status

# Online: [ nw1-cl-0 nw1-cl-1 ]


#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
# rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

9. [A] Add firewall rules for ASCS and ERS on both nodes

# Probe Port of ASCS


sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62000/tcp
sudo firewall-cmd --zone=public --add-port=3200/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3200/tcp
sudo firewall-cmd --zone=public --add-port=3600/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3600/tcp
sudo firewall-cmd --zone=public --add-port=3900/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3900/tcp
sudo firewall-cmd --zone=public --add-port=8100/tcp --permanent
sudo firewall-cmd --zone=public --add-port=8100/tcp
sudo firewall-cmd --zone=public --add-port=50013/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50013/tcp
sudo firewall-cmd --zone=public --add-port=50014/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50014/tcp
sudo firewall-cmd --zone=public --add-port=50016/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50016/tcp
# Probe Port of ERS
sudo firewall-cmd --zone=public --add-port=62102/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62102/tcp
sudo firewall-cmd --zone=public --add-port=3302/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3302/tcp
sudo firewall-cmd --zone=public --add-port=50213/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50213/tcp
sudo firewall-cmd --zone=public --add-port=50214/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50214/tcp
sudo firewall-cmd --zone=public --add-port=50216/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50216/tcp
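
Optionally, list the open ports on each node to confirm that all ASCS and ERS ports were added (the output will vary with your instance numbers):

sudo firewall-cmd --zone=public --list-ports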

SAP NetWeaver application server preparation


Some databases require that the database instance installation is executed on an application server. Prepare the
application server virtual machines to be able to use them in these cases.
The steps below assume that you install the application server on a server different from the ASCS/SCS and
HANA servers. Otherwise, some of the steps below (like configuring host name resolution) are not needed.
1. Setup host name resolution
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use
the /etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP addresses of the GlusterFS nodes


10.0.0.40 glust-0
10.0.0.41 glust-1
10.0.0.42 glust-2
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
10.0.0.8 nw1-aers
# IP address of the load balancer frontend configuration for database
10.0.0.13 nw1-db

2. Create the sapmnt directory

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans

3. Install GlusterFS client and other requirements

sudo yum -y install glusterfs-fuse uuidd

4. Add mount entries

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit


glust-0:/NW1-sapmnt /sapmnt/NW1 glusterfs backup-volfile-servers=glust-1:glust-2 0 0
glust-0:/NW1-trans /usr/sap/trans glusterfs backup-volfile-servers=glust-1:glust-2 0 0

Mount the new shares

sudo mount -a

5. Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000
Restart the Agent to activate the change

sudo service waagent restart

Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported database for this
installation. For more information on how to install SAP HANA in Azure, see High availability of SAP HANA on
Azure VMs on Red Hat Enterprise Linux. For a list of supported databases, see SAP Note 1928533.
1. Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the database, for example nw1-db and
10.0.0.13.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.
1. Prepare application server
Follow the steps in the chapter SAP NetWeaver application server preparation above to prepare the
application server.
2. Install SAP NetWeaver application server
Install a primary or additional SAP NetWeaver applications server.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. Update SAP HANA secure store


Update the SAP HANA secure store to point to the virtual name of the SAP HANA System Replication
setup.
Run the following command to list the entries as <sapsid>adm

hdbuserstore List

This should list all entries and should look similar to


DATA FILE : /home/nw1adm/.hdb/nw1-di-0/SSFS_HDB.DAT
KEY FILE : /home/nw1adm/.hdb/nw1-di-0/SSFS_HDB.KEY

KEY DEFAULT
ENV : 10.0.0.14:30313
USER: SAPABAP1
DATABASE: NW1

The output shows that the IP address of the default entry is pointing to the virtual machine and not to
the load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the
load balancer. Make sure to use the same port (30313 in the output above) and database name (NW1 in
the output above)!

su - nw1adm
hdbuserstore SET DEFAULT nw1-db:30313@NW1 SAPABAP1 <password of ABAP schema>

Test the cluster setup


1. Manually migrate the ASCS instance
Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root to migrate the ASCS instance.

[root@nw1-cl-0 ~]# pcs resource move rsc_sap_NW1_ASCS00

[root@nw1-cl-0 ~]# pcs resource clear rsc_sap_NW1_ASCS00

# Remove failed actions for the ERS that occurred as part of the migration
[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
2. Simulate node crash
Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following command as root on the node where the ASCS instance is running

[root@nw1-cl-1 ~]# echo b > /proc/sysrq-trigger

The status after the node is started again should look like this.

Online: [ nw1-cl-0 nw1-cl-1 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Failed Actions:
* rsc_sap_NW1_ERS02_monitor_11000 on nw1-cl-0 'not running' (7): call=45, status=complete,
exitreason='',
last-rc-change='Tue Aug 21 13:52:39 2018', queued=0ms, exec=0ms

Use the following command to clean the failed resources.

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:


rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

3. Kill message server process


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root to identify the process of the message server and kill it.

[root@nw1-cl-0 ~]# pgrep ms.sapNW1 | xargs kill -9

If you only kill the message server once, it will be restarted by sapstart . If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as
root to clean up the resource state of the ASCS and ERS instance after the test.

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ASCS00


[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

4. Kill enqueue server process


Resource state before starting the test:
rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following commands as root on the node where the ASCS instance is running to kill the enqueue
server.

[root@nw1-cl-1 ~]# pgrep en.sapNW1 | xargs kill -9

The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over
after the ASCS instance is started. Run the following commands as root to clean up the resource state of
the ASCS and ERS instance after the test.

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ASCS00


[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

5. Kill enqueue replication server process


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.
[root@nw1-cl-1 ~]# pgrep er.sapNW1 | xargs kill -9

If you only run the command once, sapstart will restart the process. If you run it often enough,
sapstart will not restart the process and the resource will be in a stopped state. Run the following
commands as root to clean up the resource state of the ERS instance after the test.

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

6. Kill enqueue sapstartsrv process


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root on the node where the ASCS is running.

[root@nw1-cl-0 ~]# pgrep -fl ASCS00.*sapstartsrv


# 59545 sapstartsrv

[root@nw1-cl-0 ~]# kill -9 59545

The sapstartsrv process should always be restarted by the Pacemaker resource agent as part of the
monitoring. Resource state after the test:
rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Next steps
HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications
12/22/2020 • 32 minutes to read

This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system, using Azure NetApp Files. In the example
configurations and installation commands, the ASCS instance number is 00, the ERS instance number is 01, the
Primary Application Server (PAS) instance number is 02, and the Additional Application Server (AAS) instance
number is 03. The SAP System ID QAS is used.
The database layer isn't covered in detail in this article.
Read the following SAP Notes and papers first:
Azure NetApp Files documentation
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP Netweaver in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP Netweaver with standalone resources in RHEL 7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on
RHEL
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving this on
Red Hat Linux required building a separate, highly available GlusterFS cluster.
Now it's possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files.
Using Azure NetApp Files for the shared storage eliminates the need for an additional GlusterFS cluster.
Pacemaker is still needed for HA of the SAP NetWeaver central services (ASCS/SCS).

SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual
hostnames and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We
recommend using Standard load balancer. The following list shows the configuration of the load balancer with
separate front-end IPs for (A)SCS and ERS. A CLI sketch of an equivalent setup follows the list.
(A)SCS
Frontend configuration
IP address 192.168.14.9
Probe Port
Port 620<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 192.168.14.10
Probe Port
Port 621<nr>
Load-balancing rules
If using Standard Load Balancer, select HA ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the
(A)SCS/ERS cluster
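
The portal steps for this configuration are described later in this article. As an alternative, the frontend IP,
health probe, and HA-ports load-balancing rule for the ASCS side could be scripted with the Azure CLI roughly as
sketched below. This is only an illustration: the resource group, load balancer, virtual network, subnet, and
backend pool names (MyResourceGroup, lb-QAS, MyVNet, MySubnet, backend.QAS) are placeholder assumptions, and
the same pattern would be repeated for the ERS frontend with probe port 62101.

# Example sketch only - adjust resource group, load balancer, network, and pool names to your environment
az network lb frontend-ip create --resource-group MyResourceGroup --lb-name lb-QAS \
  --name frontend.QAS.ASCS --vnet-name MyVNet --subnet MySubnet --private-ip-address 192.168.14.9

az network lb probe create --resource-group MyResourceGroup --lb-name lb-QAS \
  --name health.QAS.ASCS --protocol tcp --port 62000

# HA-ports rule: protocol All, frontend and backend port 0, floating IP enabled, 30-minute idle timeout
az network lb rule create --resource-group MyResourceGroup --lb-name lb-QAS \
  --name lb.QAS.ASCS --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name frontend.QAS.ASCS --backend-pool-name backend.QAS \
  --probe-name health.QAS.ASCS --floating-ip true --idle-timeout 30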

Setting up the Azure NetApp Files infrastructure


SAP NetWeaver requires shared storage for the transport and profile directory. Before proceeding with the
setup of the Azure NetApp Files infrastructure, familiarize yourself with the Azure NetApp Files documentation.
Check if your selected Azure region offers Azure NetApp Files. The following link shows the availability of Azure
NetApp Files by Azure region: Azure NetApp Files Availability by Azure Region.
Azure NetApp Files is available in several Azure regions. Before deploying Azure NetApp Files, request
onboarding to Azure NetApp Files by following the Register for Azure NetApp Files instructions.
Deploy Azure NetApp Files resources
The steps assume that you have already deployed Azure Virtual Network. The Azure NetApp Files resources and
the VMs, where the Azure NetApp Files resources will be mounted must be deployed in the same Azure Virtual
Network or in peered Azure Virtual Networks.
1. If you haven't done that already, request onboarding to Azure NetApp Files.
2. Create the NetApp account in the selected Azure region, following the instructions to create NetApp
Account.
3. Set up Azure NetApp Files capacity pool, following the instructions on how to set up Azure NetApp Files
capacity pool.
The SAP Netweaver architecture presented in this article uses single Azure NetApp Files capacity pool,
Premium SKU. We recommend Azure NetApp Files Premium SKU for SAP Netweaver application
workload on Azure.
4. Delegate a subnet to Azure NetApp files as described in the instructions Delegate a subnet to Azure
NetApp Files.
5. Deploy Azure NetApp Files volumes, following the instructions to create a volume for Azure NetApp
Files. Deploy the volumes in the designated Azure NetApp Files subnet. The IP addresses of the Azure
NetApp volumes are assigned automatically. Keep in mind that the Azure NetApp Files resources and the
Azure VMs must be in the same Azure Virtual Network or in peered Azure Virtual Networks. In this
example we use two Azure NetApp Files volumes: sapQAS and transSAP. The file paths that are mounted
to the corresponding mount points are /usrsapqas/sapmntQAS, /usrsapqas/usrsapQASsys, and so on. An
Azure CLI sketch of these deployment steps follows the list below.
a. volume sapQAS (nfs://192.168.24.5/usrsapqas/sapmntQAS)
b. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASascs)
c. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASsys)
d. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASers)
e. volume transSAP (nfs://192.168.24.4/transSAP)
f. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASpas)
g. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASaas)
In this example, we used Azure NetApp Files for all SAP NetWeaver file systems to demonstrate how Azure
NetApp Files can be used. The SAP file systems that don't need to be mounted via NFS can also be deployed as
Azure disk storage. In this example a-e must be on Azure NetApp Files and f-g (that is, /usr/sap/QAS/D02 and
/usr/sap/QAS/D03) could be deployed as Azure disk storage.
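
As an illustration only (not part of the original portal-based instructions), the NetApp account, capacity pool,
and one of the volumes could also be created with the Azure CLI along the following lines. The account, pool,
region, size, and network names used here are placeholder assumptions, and the delegated subnet is expected to
exist already.

# Example sketch only - names, region, sizes, and network references are placeholders
az netappfiles account create --resource-group MyResourceGroup --location westeurope --account-name anf-acc-qas

# 4 TiB capacity pool with Premium service level
az netappfiles pool create --resource-group MyResourceGroup --location westeurope \
  --account-name anf-acc-qas --pool-name anf-pool-qas --size 4 --service-level Premium

# NFSv3 volume sapQAS (500 GiB quota) in the subnet delegated to Azure NetApp Files
az netappfiles volume create --resource-group MyResourceGroup --location westeurope \
  --account-name anf-acc-qas --pool-name anf-pool-qas --name sapQAS --service-level Premium \
  --usage-threshold 500 --file-path "sapQAS" --vnet MyVNet --subnet anf-subnet --protocol-types NFSv3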
Important considerations
When considering Azure NetApp Files for the SAP NetWeaver on Red Hat Enterprise Linux High Availability
architecture, be aware of the following important considerations:
The minimum capacity pool is 4 TiB. The capacity pool size can be increased in 1-TiB increments.
The minimum volume size is 100 GiB.
Azure NetApp Files and all virtual machines where Azure NetApp Files volumes will be mounted must be in
the same Azure Virtual Network or in peered virtual networks in the same region. Azure NetApp Files access
over VNet peering in the same region is supported. Azure NetApp Files access over global peering is not yet
supported.
The selected virtual network must have a subnet delegated to Azure NetApp Files.
Azure NetApp Files offers an export policy: you can control the allowed clients and the access type (Read & Write,
Read Only, and so on).
The Azure NetApp Files feature isn't zone aware yet. Currently it isn't deployed in all
Availability Zones in an Azure region. Be aware of the potential latency implications in some Azure regions.
Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both protocols are supported
for the SAP application layer (ASCS/ERS, SAP application servers).
Setting up (A)SCS
In this example, the resources were deployed manually via the Azure portal.
Deploy Linux manually via Azure portal
First you need to create the Azure NetApp Files volumes. Deploy the VMs. Afterwards, you create a load
balancer and use the virtual machines in the backend pool.
1. Create load balancer (internal, standard):
a. Create the frontend IP addresses
a. IP address 192.168.14.9 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 192.168.14.9 )
d. Click OK
b. IP address 192.168.14.10 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example
192.168.14.10 and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select Virtual machine.
e. Select the virtual machines of the (A)SCS cluster and their IP addresses.
f. Click Add
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example
62101 and health.QAS.ERS )
d. Load-balancing rules
a. Load-balancing rules for ASCS
a. Open the load balancer, select Load-balancing rules, and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created
earlier (for example frontend.QAS.ASCS , backend.QAS and health.QAS.ASCS )
d. Select HA ports
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
Repeat the steps above to create load balancing rules for ERS (for example lb.QAS.ERS )
2. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
a. Create the frontend IP addresses
a. IP address 192.168.14.9 for the ASCS
a. Open the load balancer, select frontend IP pool, and click Add
b. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS )
c. Set the Assignment to Static and enter the IP address (for example 192.168.14.9 )
d. Click OK
b. IP address 192.168.14.10 for the ASCS ERS
Repeat the steps above under "a" to create an IP address for the ERS (for example
192.168.14.10 and frontend.QAS.ERS )
b. Create the backend pool
a. Open the load balancer, select backend pools, and click Add
b. Enter the name of the new backend pool (for example backend.QAS )
c. Click Add a virtual machine.
d. Select the Availability Set you created earlier for ASCS
e. Select the virtual machines of the (A)SCS cluster
f. Click OK
c. Create the health probes
a. Port 62000 for ASCS
a. Open the load balancer, select health probes, and click Add
b. Enter the name of the new health probe (for example health.QAS.ASCS )
c. Select TCP as protocol, port 62000 , keep Interval 5 and Unhealthy threshold 2
d. Click OK
b. Port 62101 for ASCS ERS
Repeat the steps above under "c" to create a health probe for the ERS (for example
62101 and health.QAS.ERS )
d. Load-balancing rules
a. 3200 TCP for ASCS
a. Open the load balancer, select Load-balancing rules, and click Add
b. Enter the name of the new load balancer rule (for example lb.QAS.ASCS.3200 )
c. Select the frontend IP address for ASCS, backend pool, and health probe you created
earlier (for example frontend.QAS.ASCS )
d. Keep protocol TCP , enter port 3200
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
b. Additional ports for the ASCS
Repeat the steps above under "d" for ports 3600, 3900, 8100, 50013, 50014, 50016
and protocol TCP for the ASCS
c. Additional ports for the ASCS ERS
Repeat the steps above under "d" for ports 3201, 3301, 50113, 50114, 50116 and protocol TCP
for the ASCS ERS

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details
see Azure Load Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.
NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address)
Standard Azure load balancer, there will be no outbound internet connectivity, unless additional
configuration is performed to allow routing to public end points. For details on how to achieve outbound
connectivity see Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in
SAP high-availability scenarios.

IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP
timestamps will cause the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For
details see Load Balancer health probes.
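
For example, the following commands, run on each cluster VM, would disable TCP timestamps immediately and keep
the setting across reboots; the drop-in file name is just an example.

# Apply immediately
sudo sysctl net.ipv4.tcp_timestamps=0
# Persist across reboots - the file name is an example
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/95-sap-lb.conf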

Disable ID mapping (if using NFSv4.1)


The instructions in this section are only applicable, if using Azure NetApp Files volumes with NFSv4.1 protocol.
Perform the configuration on all VMs, where Azure NetApp Files NFSv4.1 volumes will be mounted.
1. Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp
Files domain, i.e. defaultv4iddomain.com and the mapping is set to nobody .

IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration
on Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the
NFS client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on
Azure NetApp volumes that are mounted on the VMs will be displayed as nobody .

sudo cat /etc/idmapd.conf


# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

2. [A] Verify nfs4_disable_idmapping. It should be set to Y. To create the directory structure where
nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create
the directory under /sys/module, because access is reserved for the kernel and drivers.
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 192.168.24.5:/sapQAS /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
For more details on how to change nfs4_disable_idmapping parameter see
https://access.redhat.com/solutions/1749883.
Create Pacemaker cluster
Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create a basic Pacemaker
cluster for this (A)SCS server.
Prepare for SAP NetWeaver installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
1. [A] Setup host name resolution
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use
the /etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP address of cluster node 1


192.168.14.5 anftstsapcl1
# IP address of cluster node 2
192.168.14.6 anftstsapcl2
# IP address of the load balancer frontend configuration for SAP Netweaver ASCS
192.168.14.9 anftstsapvh
# IP address of the load balancer frontend configuration for SAP Netweaver ERS
192.168.14.10 anftstsapers

2. [1] Create SAP directories in the Azure NetApp Files volume.


Mount temporarily the Azure NetApp Files volume on one of the VMs and create the SAP directories(file
paths).

# mount temporarily the volume
sudo mkdir -p /saptmp
# If using NFSv3
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 192.168.24.5:/sapQAS /saptmp
# If using NFSv4.1
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys,tcp 192.168.24.5:/sapQAS /saptmp
# create the SAP directories
cd /saptmp
sudo mkdir -p sapmntQAS
sudo mkdir -p usrsapQASascs
sudo mkdir -p usrsapQASers
sudo mkdir -p usrsapQASsys
sudo mkdir -p usrsapQASpas
sudo mkdir -p usrsapQASaas
# unmount the volume and delete the temporary directory
cd ..
sudo umount /saptmp
sudo rmdir /saptmp

3. [A] Create the shared directories


sudo mkdir -p /sapmnt/QAS
sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/QAS/SYS
sudo mkdir -p /usr/sap/QAS/ASCS00
sudo mkdir -p /usr/sap/QAS/ERS01

sudo chattr +i /sapmnt/QAS


sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/QAS/SYS
sudo chattr +i /usr/sap/QAS/ASCS00
sudo chattr +i /usr/sap/QAS/ERS01

4. [A] Install NFS client and other requirements

sudo yum -y install nfs-utils resource-agents resource-agents-sap

5. [A] Check version of resource-agents-sap


Make sure that the version of the installed resource-agents-sap package is at least 3.9.5-124.el7

sudo yum info resource-agents-sap

# Loaded plugins: langpacks, product-id, search-disabled-repos


# Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
# Installed Packages
# Name : resource-agents-sap
# Arch : x86_64
# Version : 3.9.5
# Release : 124.el7
# Size : 100 k
# Repo : installed
# From repo : rhel-sap-for-rhel-7-server-rpms
# Summary : SAP cluster resource agents and connector script
# URL : https://github.com/ClusterLabs/resource-agents
# License : GPLv2+
# Description : The SAP resource agents and connector script interface with
# : Pacemaker to allow SAP instances to be managed in a cluster
# : environment.

6. [A] Add mount entries


If using NFSv3:

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit


192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=3
192.168.24.5:/sapQAS/usrsapQASsys /usr/sap/QAS/SYS nfs rw,hard,rsize=65536,wsize=65536,vers=3
192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=3

If using NFSv4.1:

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit


192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
192.168.24.5:/sapQAS/usrsapQASsys /usr/sap/QAS/SYS nfs
rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
NOTE
Make sure to match the NFS protocol version of the Azure NetApp Files volumes, when mounting the volumes. If
the Azure NetApp Files volumes are created as NFSv3 volumes, use the corresponding NFSv3 configuration. If the
Azure NetApp Files volumes are created as NFSv4.1 volumes, follow the instructions to disable ID mapping and
make sure to use the corresponding NFSv4.1 configuration. In this example the Azure NetApp Files volumes were
created as NFSv3 volumes.

Mount the new shares

sudo mount -a
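
To confirm that the shares were mounted as expected, you can, for example, check the mount table:

# Show the mounted SAP file systems
df -h /sapmnt/QAS /usr/sap/QAS/SYS /usr/sap/trans
# Or list all NFS mounts
findmnt -t nfs,nfs4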

7. [A] Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a
value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

8. [A] RHEL configuration


Configure RHEL as described in SAP Note 2002167
Installing SAP NetWeaver ASCS/ERS
1. [1] Create a virtual IP resource and health-probe for the ASCS instance

sudo pcs node standby anftstsapcl2


# If using NFSv3
sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_ASCS

# If using NFSv4.1
sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_ASCS

sudo pcs resource create vip_QAS_ASCS IPaddr2 \


ip=192.168.14.9 cidr_netmask=24 \
--group g-QAS_ASCS

sudo pcs resource create nc_QAS_ASCS azure-lb port=62000 \


--group g-QAS_ASCS
Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.

sudo pcs status

# Node anftstsapcl2: standby


# Online: [ anftstsapcl1 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1

2. [1] Install SAP NetWeaver ASCS


Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ASCS, for example anftstsapvh ,
192.168.14.9 and the instance number that you used for the probe of the load balancer, for example
00 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst.

# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname>

If the installation fails to create a subfolder in /usr/sap/QAS/ASCS00, try setting the owner and group of
the ASCS00 folder and retry.

sudo chown qasadm /usr/sap/QAS/ASCS00


sudo chgrp sapsys /usr/sap/QAS/ASCS00

3. [1] Create a virtual IP resource and health-probe for the ERS instance
sudo pcs node unstandby anftstsapcl2
sudo pcs node standby anftstsapcl1

# If using NFSv3
sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_AERS

# If using NFSv4.1
sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-QAS_AERS

sudo pcs resource create vip_QAS_AERS IPaddr2 \


ip=192.168.14.10 cidr_netmask=24 \
--group g-QAS_AERS

sudo pcs resource create nc_QAS_AERS azure-lb port=62101 \


--group g-QAS_AERS

Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.

sudo pcs status

# Node anftstsapcl1: standby


# Online: [ anftstsapcl2 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
# Resource Group: g-QAS_AERS
# fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2

4. [2] Install SAP NetWeaver ERS


Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ERS, for example anftstsapers ,
192.168.14.10 and the instance number that you used for the probe of the load balancer, for example
01 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst.

# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname>

If the installation fails to create a subfolder in /usr/sap/QAS/ERS01, try setting the owner and group of
the ERS01 folder and retry.
sudo chown qasadm /usr/sap/QAS/ERS01
sudo chgrp sapsys /usr/sap/QAS/ERS01

5. [1] Adapt the ASCS/SCS and ERS instance profiles


ASCS/SCS profile

sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the keep alive parameter, if using ENSA1


enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP
note 1410736.
ERS profile

sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# remove Autostart from ERS profile


# Autostart = 1

6. [A] Configure Keep Alive


The communication between the SAP NetWeaver application server and the ASCS/SCS is routed
through a software load balancer. The load balancer disconnects inactive connections after a configurable
timeout. To prevent this, you need to set a parameter in the SAP NetWeaver ASCS/SCS profile, if using
ENSA1, and change the Linux system keepalive settings on all SAP servers for both ENSA1/ENSA2.
Read SAP Note 1410736 for more information.

# Change the Linux system configuration


sudo sysctl net.ipv4.tcp_keepalive_time=300

7. [A] Update the /usr/sap/sapservices file


To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker
must be commented out from /usr/sap/sapservices file. Do not comment out the SAP HANA instance if it
will be used with HANA SR.

sudo vi /usr/sap/sapservices

# On the node where you installed the ASCS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/QAS/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/QAS/ASCS00/exe/sapstartsrv pf=/usr/sap/QAS/SYS/profile/QAS_ASCS00_anftstsapvh -D -u qasadm

# On the node where you installed the ERS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/QAS/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/QAS/ERS01/exe/sapstartsrv pf=/usr/sap/QAS/ERS01/profile/QAS_ERS01_anftstsapers -D -u qasadm
8. [1] Create the SAP cluster resources
If using enqueue server 1 architecture (ENSA1), define the resources as follows:

sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \


InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-QAS_ASCS

sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \


InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop
interval=0 timeout=600 \
--group g-QAS_AERS

sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000


sudo pcs constraint location rsc_sap_QAS_ASCS00 rule score=2000 runs_ers_QAS eq 1
sudo pcs constraint order g-QAS_ASCS then g-QAS_AERS kind=Optional symmetrical=false

sudo pcs node unstandby anftstsapcl1


sudo pcs property set maintenance-mode=false

SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with
ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server
2 support. If using enqueue server 2 architecture (ENSA2), install resource agent
resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as follows:

sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \


InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-QAS_ASCS

sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \


InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0
timeout=600 \
--group g-QAS_AERS

sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000


sudo pcs constraint order g-QAS_ASCS then g-QAS_AERS kind=Optional symmetrical=false
sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS symmetrical=false

sudo pcs node unstandby anftstsapcl1


sudo pcs property set maintenance-mode=false

If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641322.

NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running.

sudo pcs status

# Online: [ anftstsapcl1 anftstsapcl2 ]


#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
# rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
# Resource Group: g-QAS_AERS
# fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
# nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
# vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
# rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

9. [A] Add firewall rules for ASCS and ERS on both nodes Add the firewall rules for ASCS and ERS on both
nodes.

# Probe Port of ASCS


sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62000/tcp
sudo firewall-cmd --zone=public --add-port=3200/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3200/tcp
sudo firewall-cmd --zone=public --add-port=3600/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3600/tcp
sudo firewall-cmd --zone=public --add-port=3900/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3900/tcp
sudo firewall-cmd --zone=public --add-port=8100/tcp --permanent
sudo firewall-cmd --zone=public --add-port=8100/tcp
sudo firewall-cmd --zone=public --add-port=50013/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50013/tcp
sudo firewall-cmd --zone=public --add-port=50014/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50014/tcp
sudo firewall-cmd --zone=public --add-port=50016/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50016/tcp
# Probe Port of ERS
sudo firewall-cmd --zone=public --add-port=62101/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62101/tcp
sudo firewall-cmd --zone=public --add-port=3301/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3301/tcp
sudo firewall-cmd --zone=public --add-port=50113/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50113/tcp
sudo firewall-cmd --zone=public --add-port=50114/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50114/tcp
sudo firewall-cmd --zone=public --add-port=50116/tcp --permanent
sudo firewall-cmd --zone=public --add-port=50116/tcp

SAP NetWeaver application server preparation


Some databases require that the database instance installation is executed on an application server. Prepare the
application server virtual machines to be able to use them in these cases.
The steps below assume that you install the application server on a server different from the ASCS/SCS and
HANA servers. Otherwise some of the steps below (like configuring host name resolution) are not needed.
The following items are prefixed with either [A] - applicable to both PAS and AAS, [P] - only applicable to PAS
or [S] - only applicable to AAS.
1. [A] Setup host name resolution You can either use a DNS server or modify the /etc/hosts on all nodes.
This example shows how to use the /etc/hosts file. Replace the IP address and the hostname in the
following commands:

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment.

# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
192.168.14.9 anftstsapvh
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
192.168.14.10 anftstsapers
192.168.14.7 anftstsapa01
192.168.14.8 anftstsapa02

2. [A] Create the sapmnt directory Create the sapmnt directory.

sudo mkdir -p /sapmnt/QAS


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/QAS


sudo chattr +i /usr/sap/trans

3. [A] Install NFS client and other requirements

sudo yum -y install nfs-utils uuidd

4. [A] Add mount entries


If using NFSv3:

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit


192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=3
192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=3

If using NFSv4.1:

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit


192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys
192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys

Mount the new shares

sudo mount -a

5. [P] Create and mount the PAS directory


If using NFSv3:
sudo mkdir -p /usr/sap/QAS/D02
sudo chattr +i /usr/sap/QAS/D02

sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,vers=3

# Mount
sudo mount -a

If using NFSv4.1:

sudo mkdir -p /usr/sap/QAS/D02


sudo chattr +i /usr/sap/QAS/D02

sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs
rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys

# Mount
sudo mount -a

6. [S] Create and mount the AAS directory


If using NFSv3:

sudo mkdir -p /usr/sap/QAS/D03


sudo chattr +i /usr/sap/QAS/D03

sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,vers=3

# Mount
sudo mount -a

If using NFSv4.1:

sudo mkdir -p /usr/sap/QAS/D03


sudo chattr +i /usr/sap/QAS/D03

sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs
rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys

# Mount
sudo mount -a

7. [A] Configure SWAP file


sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make sure that you do not set a
value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported database for this
installation. For more information on how to install SAP HANA in Azure, see High availability of SAP HANA on
Azure VMs on Red Hat Enterprise Linux. For a list of supported databases, see SAP Note 1928533.
1. Run the SAP database instance installation
Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the database.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.
1. Prepare application server
Follow the steps in the chapter SAP NetWeaver application server preparation above to prepare the
application server.
2. Install SAP NetWeaver application server
Install a primary or additional SAP NetWeaver applications server.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. Update SAP HANA secure store


Update the SAP HANA secure store to point to the virtual name of the SAP HANA System Replication
setup.
Run the following command to list the entries as <sapsid>adm
hdbuserstore List

This should list all entries and should look similar to

DATA FILE : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.DAT


KEY FILE : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.KEY

KEY DEFAULT
ENV : 192.168.14.4:30313
USER: SAPABAP1
DATABASE: QAS

The output shows that the IP address of the default entry is pointing to the virtual machine and not to
the load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the
load balancer. Make sure to use the same port (30313 in the output above) and database name (QAS in
the output above)!

su - qasadm
hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP schema>

Test the cluster setup


1. Manually migrate the ASCS instance
Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Run the following commands as root to migrate the ASCS instance.

[root@anftstsapcl1 ~]# pcs resource move rsc_sap_QAS_ASCS00

[root@anftstsapcl1 ~]# pcs resource clear rsc_sap_QAS_ASCS00

# Remove failed actions for the ERS that occurred as part of the migration
[root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:


rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

2. Simulate node crash


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Run the following command as root on the node where the ASCS instance is running

[root@anftstsapcl2 ~]# echo b > /proc/sysrq-trigger

The status after the node is started again should look like this.

Online: [ anftstsapcl1 anftstsapcl2 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Failed Actions:
* rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=45, status=complete,
exitreason='',

Use the following command to clean the failed resources.

[root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:


rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

3. Kill message server process


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Run the following commands as root to identify the process of the message server and kill it.

[root@anftstsapcl1 ~]# pgrep ms.sapQAS | xargs kill -9

If you only kill the message server once, it will be restarted by sapstart . If you kill it often enough,
Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as
root to clean up the resource state of the ASCS and ERS instance after the test.

[root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ASCS00


[root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

4. Kill enqueue server process


Resource state before starting the test:
rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1

Run the following commands as root on the node where the ASCS instance is running to kill the
enqueue server.

[root@anftstsapcl2 ~]# pgrep en.sapQAS | xargs kill -9

The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over
after the ASCS instance is started. Run the following commands as root to clean up the resource state of
the ASCS and ERS instance after the test.

[root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ASCS00


[root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

5. Kill enqueue replication server process


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Run the following command as root on the node where the ERS instance is running to kill the enqueue
replication server process.
[root@anftstsapcl2 ~]# pgrep er.sapQAS | xargs kill -9

If you only run the command once, sapstart will restart the process. If you run it often enough,
sapstart will not restart the process and the resource will be in a stopped state. Run the following
commands as root to clean up the resource state of the ERS instance after the test.

[root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ERS01

Resource state after the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

6. Kill enqueue sapstartsrv process


Resource state before starting the test:

rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1


Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Run the following commands as root on the node where the ASCS is running.

[root@anftstsapcl1 ~]# pgrep -fl ASCS00.*sapstartsrv


# 59545 sapstartsrv

[root@anftstsapcl1 ~]# kill -9 59545

The sapstartsrv process should always be restarted by the Pacemaker resource agent as part of the
monitoring. Resource state after the test:
rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
Resource Group: g-QAS_ASCS
fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
Resource Group: g-QAS_AERS
fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2

Next steps
HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
SAP ASCS/SCS instance multi-SID high availability
with Windows server failover clustering and Azure
shared disk


This article focuses on how to move from a single ASCS/SCS installation to an SAP multi-SID configuration by
installing additional SAP ASCS/SCS clustered instances into an existing Windows Server Failover Clustering
(WSFC) cluster with Azure shared disk. When this process is completed, you have configured an SAP multi-SID
cluster.

Prerequisites and limitations


Currently you can use Azure Premium SSD disks as an Azure shared disk for the SAP ASCS/SCS instance. The
following limitations apply:
Azure Ultra Disk is not supported as an Azure shared disk for SAP workloads. Currently it is not possible to place
Azure VMs that use Azure Ultra Disk in an availability set.
Azure shared disk with Premium SSD disks is only supported with VMs in an availability set. It is not supported in
an Availability Zones deployment.
The Azure shared disk value maxShares determines how many cluster nodes can use the shared disk. Typically you
will configure two nodes in the Windows failover cluster for the SAP ASCS/SCS instance, therefore maxShares must
be set to two (see the sketch after this list).
All SAP ASCS/SCS cluster VMs must be deployed in the same Azure proximity placement group (PPG).
Although you can deploy Windows cluster VMs in an availability set with Azure shared disk without a PPG, the PPG
ensures close physical proximity of the Azure shared disk and the cluster VMs, therefore achieving lower latency
between the VMs and the storage layer.
For further details on the limitations of Azure shared disk, review carefully the Limitations section of the Azure
shared disk documentation.
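
A quick way to confirm the maxShares value on an existing Azure shared disk is to query the disk with Azure PowerShell. The following is a minimal sketch only; the resource group and disk names are placeholders and must be replaced with your own values.

# Query the maxShares value of an existing Azure shared disk (names are placeholders)
$disk = Get-AzDisk -ResourceGroupName "MyResourceGroup" -DiskName "PR2ASCSSharedDisk"
$disk.MaxShares
# Expected output for a two-node WSFC cluster: 2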

IMPORTANT
When deploying SAP ASCS/SCS Windows Failover cluster with Azure shared disk, be aware that your deployment will be
operating with a single shared disk in one storage cluster. Your SAP ASCS/SCS instance will be impacted, in case of issues with
the storage cluster, where the Azure shared disk is deployed.

IMPORTANT
The setup must meet the following conditions:
Each database management system (DBMS) SID must have its own dedicated WSFC cluster.
SAP application servers that belong to one SAP system SID must have their own dedicated VMs.
A mix of Enqueue Replication Server 1 and Enqueue Replication Server 2 in the same cluster is not supported.
Supported OS versions
Both Windows Server 2016 and Windows Server 2019 are supported (use the latest data center images).
We strongly recommend using Windows Server 2019 Datacenter, as:
Windows Server 2019 Failover Clustering is Azure aware
There is added integration and awareness of Azure host maintenance and an improved experience through monitoring
of Azure scheduled events.
It is possible to use a distributed network name (the default option). Therefore, there is no need to have a
dedicated IP address for the cluster network name, and there is no need to configure this IP address on the Azure
internal load balancer.

Architecture
Both Enqueue replication server 1 (ERS1) and Enqueue replication server 2 (ERS2) are supported in multi-SID
configuration. A mix of ERS1 and ERS2 is not supported in the same cluster.
1. The first example shows two SAP SIDs, both with ERS1 architecture where:
SAP SID1 is deployed on shared disk, with ERS1. The ERS instance is installed on local host and on
local drive. SAP SID1 has its own (virtual) IP address (SID1 (A)SCS IP1), which is configured on the
Azure Internal Load balancer.
SAP SID2 is deployed on shared disk, with ERS1. The ERS instance is installed on local host and on
local drive. SAP SID2 has own (virtual) IP address (SID2 (A)SCS IP2), which is configured also on the
Azure Internal Load balancer.

2. The second example shows two SAP SIDs, both with ERS2 architecture where:
SAP SID1 with ERS2, which is also clustered and is deployed on local drive.
SAP SID1 has its own (virtual) IP address (SID1 (A)SCS IP1), which is configured on the Azure Internal
Load balancer. SAP ERS2, used by the SAP SID1 system, has its own (virtual) IP address (SID1 ERS2 IP2),
which is configured on the Azure Internal Load balancer.
SAP SID2 with ERS2, which is also clustered and is deployed on local drive.
SAP SID2 has its own (virtual) IP address (SID2 (A)SCS IP3), which is configured on the Azure Internal
Load balancer. SAP ERS2, used by the SAP SID2 system, has its own (virtual) IP address (SID2 ERS2 IP4),
which is configured on the Azure Internal Load balancer.
Here we have a total of four virtual IP addresses:
SID1 (A)SCS IP1
SID1 ERS2 IP2
SID2 (A)SCS IP3
SID2 ERS2 IP4

Infrastructure preparation
We'll install a new SAP SID PR2 , in addition to the existing clustered SAP PR1 ASCS/SCS instance.
Host names and IP addresses
Host name role | Host name | Static IP address | Availability set | Proximity placement group
1st cluster node ASCS/SCS cluster | pr1-ascs-10 | 10.0.0.4 | pr1-ascs-avset | PR1PPG
2nd cluster node ASCS/SCS cluster | pr1-ascs-11 | 10.0.0.5 | pr1-ascs-avset | PR1PPG
Cluster network name | pr1clust | 10.0.0.42 (only for Win 2016 cluster) | n/a | n/a
SID1 ASCS cluster network name | pr1-ascscl | 10.0.0.43 | n/a | n/a
SID1 ERS cluster network name (only for ERS2) | pr1-erscl | 10.0.0.44 | n/a | n/a
SID2 ASCS cluster network name | pr2-ascscl | 10.0.0.45 | n/a | n/a
SID2 ERS cluster network name (only for ERS2) | pr2-erscl | 10.0.0.46 | n/a | n/a

Create Azure internal load balancer


SAP ASCS, SAP SCS, and the new SAP ERS2, use virtual hostname and virtual IP addresses. On Azure a load
balancer is required to use a virtual IP address. We strongly recommend using Standard load balancer.
You will need to add configuration to the existing load balancer for the second SAP SID ASCS/SCS/ERS instance
PR2 . The configuration for the first SAP SID PR1 should be already in place.
(A)SCS PR2 [instance number 02]
Frontend configuration
Static ASCS/SCS IP address 10.0.0.45
Backend configuration
Already in place - the VMs were already added to the backend pool while configuring SAP SID PR1
Probe Port
Port 620<nr> [62002]. Leave the default options for Protocol (TCP), Interval (5), Unhealthy threshold (2)
Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create load balancing rules for the following ports
32<nr> TCP [3202]
36<nr> TCP [3602]
39<nr> TCP [3902]
81<nr> TCP [8102]
5<nr>13 TCP [50213]
5<nr>14 TCP [50214]
5<nr>16 TCP [50216]
Associate with the PR2 ASCS frontend IP, health probe, and the existing backend pool.
Make sure that Idle timeout (minutes) is set to the maximum value of 30, and that Floating IP (direct
server return) is Enabled.
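
If you prefer to script the load balancer changes instead of using the Azure portal, the following Azure PowerShell sketch adds the PR2 ASCS frontend IP, the probe on port 62002, and an HA-ports rule to an existing Standard internal load balancer. The load balancer and resource group names are assumptions; adjust them to your environment, and repeat the same pattern for the ERS2 frontend and probe described in the next section if you use ERS2.

# Minimal sketch (assumed names) - add PR2 ASCS frontend IP, probe port 62002 and an HA-ports rule
$rg  = "MyResourceGroup"
$ilb = Get-AzLoadBalancer -ResourceGroupName $rg -Name "pr1-lb-ascs"

# New frontend IP and health probe for PR2 ASCS
$ilb | Add-AzLoadBalancerFrontendIpConfig -Name "PR2-ascs-frontend" -PrivateIpAddress "10.0.0.45" -SubnetId $ilb.FrontendIpConfigurations[0].Subnet.Id
$ilb | Add-AzLoadBalancerProbeConfig -Name "PR2-ascs-probe" -Protocol Tcp -Port 62002 -IntervalInSeconds 5 -ProbeCount 2
$ilb | Set-AzLoadBalancer

# Re-read the updated load balancer and create the HA-ports load-balancing rule (Standard Load Balancer only)
$ilb   = Get-AzLoadBalancer -ResourceGroupName $rg -Name "pr1-lb-ascs"
$fe    = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $ilb -Name "PR2-ascs-frontend"
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $ilb -Name "PR2-ascs-probe"
$be    = $ilb.BackendAddressPools[0]   # existing backend pool with the cluster VMs

$ilb | Add-AzLoadBalancerRuleConfig -Name "PR2-ascs-haports" -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe -Protocol All -FrontendPort 0 -BackendPort 0 -IdleTimeoutInMinutes 30 -EnableFloatingIP
$ilb | Set-AzLoadBalancer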
ERS2 PR2 [instance number 12]
As Enqueue Replication Server 2 (ERS2) is also clustered, the ERS2 virtual IP address must also be configured on the Azure
ILB, in addition to the SAP ASCS/SCS IP above. This section only applies if you are using the Enqueue Replication Server 2
architecture for PR2.
New Frontend configuration
Static SAP ERS2 IP address 10.0.0.46
Backend configuration
The VMs were already added to the ILB backend pool.
New Probe Port
Port 621<nr> [62112]. Leave the default options for Protocol (TCP), Interval (5), Unhealthy threshold (2)
New Load-balancing rules
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create load balancing rules for the following ports
32<nr> TCP [3212]
33<nr> TCP [3312]
5<nr>13 TCP [51213]
5<nr>14 TCP [51214]
5<nr>16 TCP [51216]
Associate with the PR2 ERS2 frontend IP, health probe, and the existing backend pool.
Make sure that Idle timeout (minutes) is set to the maximum value of 30, and that Floating IP (direct server
return) is Enabled.
Create and attach second Azure shared disk
Run this command on one of the cluster nodes. You will need to adjust the values for your resource group, Azure
region, SAPSID, and so on.
$ResourceGroupName = "MyResourceGroup"
$location = "MyRegion"
$SAPSID = "PR2"
$DiskSizeInGB = 512
$DiskName = "$($SAPSID)ASCSSharedDisk"
$NumberOfWindowsClusterNodes = 2
$diskConfig = New-AzDiskConfig -Location $location -SkuName Premium_LRS -CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount $NumberOfWindowsClusterNodes

$dataDisk = New-AzDisk -ResourceGroupName $ResourceGroupName -DiskName $DiskName -Disk $diskConfig


##################################
## Attach the disk to cluster VMs
##################################
# ASCS Cluster VM1
$ASCSClusterVM1 = "pr1-ascs-10"
# ASCS Cluster VM2
$ASCSClusterVM2 = "pr1-ascs-11"
# next free LUN number
$LUNNumber = 1
# Add the Azure Shared Disk to Cluster Node 1
$vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $ASCSClusterVM1
$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun $LUNNumber
Update-AzVm -VM $vm -ResourceGroupName $ResourceGroupName -Verbose
# Add the Azure Shared Disk to Cluster Node 2
$vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $ASCSClusterVM2
$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun $LUNNumber
Update-AzVm -VM $vm -ResourceGroupName $ResourceGroupName -Verbose

Format the shared disk with PowerShell


1. Get the disk number. Run the PowerShell commands on one of the cluster nodes:

Get-Disk | Where-Object PartitionStyle -Eq "RAW" | Format-Table -AutoSize


# Example output
# Number Friendly Name Serial Number HealthStatus OperationalStatus Total Size Partition Style
# ------ ------------- ------------- ------------ ----------------- ---------- ---------------
# 3 Msft Virtual Disk Healthy Online 512 GB RAW

2. Format the disk. In this example, it is disk number 3.

# Format SAP ASCS Disk number '3', with drive letter 'S'
$SAPSID = "PR2"
$DiskNumber = 3
$DriveLetter = "S"
$DiskLabel = "$SAPSID" + "SAP"

Get-Disk -Number $DiskNumber | Where-Object PartitionStyle -Eq "RAW" | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter $DriveLetter -UseMaximumSize | Format-Volume -FileSystem ReFS -NewFileSystemLabel $DiskLabel -Force -Verbose

# Example output
# DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining Size
# ----------- --------------- ---------- --------- ------------ ----------------- ------------- ---------
# S           PR2SAP          ReFS       Fixed     Healthy      OK                504.98 GB     511.81 GB

3. Verify that the disk is now visible as a cluster disk.


# List all disks
Get-ClusterAvailableDisk -All
# Example output
# Cluster : pr1clust
# Id : c469b5ad-d089-4d8f-ae4c-d834cbbde1a2
# Name : Cluster Disk 2
# Number : 3
# Size : 549755813888
# Partitions : {\\?\GLOBALROOT\Device\Harddisk3\Partition2\}

4. Register the disk in the cluster.

# Add the disk to cluster


Get-ClusterAvailableDisk -All | Add-ClusterDisk
# Example output
# Name State OwnerGroup ResourceType
# ---- ----- ---------- ------------
# Cluster Disk 2 Online Available Storage Physical Disk

Create a virtual host name for the clustered SAP ASCS/SCS instance
1. Create a DNS entry for the virtual host name of the new SAP ASCS/SCS instance in the Windows DNS
manager.
The IP address you assign to the virtual host name in DNS must be the same as the IP address you assigned
in Azure Load Balancer.

Define the DNS entry for the SAP ASCS/SCS cluster virtual name and IP address
2. If using SAP Enqueue Replication Server 2, which is also a clustered instance, you need to reserve a
virtual host name for ERS2 in DNS as well. The IP address you assign to the virtual host name for ERS2 in DNS
must be the same as the IP address you assigned in Azure Load Balancer.
Define the DNS entry for the SAP ERS2 cluster virtual name and IP address
3. To define the IP address that's assigned to the virtual host name, select DNS Manager > Domain.

New virtual name and TCP/IP address for SAP ASCS/SCS and ERS2 cluster configuration
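
As an alternative to the DNS Manager UI, you can create the A records with the DnsServer PowerShell module. This is a sketch only; the zone name is an assumption, and the record for pr2-erscl is only needed if you use ERS2.

# Minimal sketch (assumed zone name) - create the A records for the PR2 virtual host names
Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "pr2-ascscl" -IPv4Address "10.0.0.45"
Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "pr2-erscl" -IPv4Address "10.0.0.46"   # only for ERS2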

SAP Installation
Install the SAP first cluster node
Follow the installation procedure described by SAP. Make sure to select the start installation option “First Cluster Node”
and to choose “Cluster Shared Disk” as the configuration option.
Choose the newly created shared disk.
Modify the SAP profile of the ASCS/SCS instance
If you are running Enqueue Replication Server 1, add SAP profile parameter enque/encni/set_so_keepalive as
described below. The profile parameter prevents connections between SAP work processes and the enqueue
server from closing when they are idle for too long. The SAP parameter is not required for ERS2.
1. Add this profile parameter to the SAP ASCS/SCS instance profile, if using ERS1.

enque/encni/set_so_keepalive = true

For both ERS1 and ERS2, make sure that the keepalive OS parameters are set as described in SAP note
1410736 (a sketch follows these steps).
2. To apply the SAP profile parameter changes, restart the SAP ASCS/SCS instance.
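
For the keepalive OS parameters mentioned in step 1, the following PowerShell sketch shows how the Windows TCP keepalive registry values can be set. The values shown are placeholders; take the recommended values from SAP note 1410736, and note that a reboot is required for the registry changes to take effect.

# Sketch only - set Windows TCP keepalive registry values (verify the values against SAP note 1410736)
$path = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
New-ItemProperty -Path $path -Name "KeepAliveTime" -PropertyType DWord -Value 300000 -Force
New-ItemProperty -Path $path -Name "KeepAliveInterval" -PropertyType DWord -Value 180000 -Force
# Reboot the node (one node at a time) for the changes to become effective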
Configure probe port on the cluster resource
Use the internal load balancer's probe functionality to make the entire cluster configuration work with Azure Load
Balancer. The Azure internal load balancer usually distributes the incoming workload equally between participating
virtual machines.
However, this won't work in some cluster configurations, because only one instance is active. The other instance is
passive and can't accept any of the workload. The probe functionality helps the Azure internal load balancer
detect which instance is active and target only the active instance.

IMPORTANT
In this example configuration, the ProbePort is set to 620<nr>. For an SAP ASCS instance with instance number 02, it is 62002.
You will need to adjust the configuration to match your SAP instance numbers and your SAP SID.

To add a probe port, run this PowerShell module on one of the cluster VMs.
In the case of an SAP ASCS/SCS instance with instance number 02:

Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID PR2 -ProbePort 62002

In the case of a clustered ERS2 instance with instance number 12 (there is no need to configure a probe port for
ERS1, as it is not clustered):

Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID PR2 -ProbePort 62012 -IsSAPERSClusteredInstance $True

The code for function Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource would look like:

function Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource {
<#
.SYNOPSIS
Set-AzureLoadBalancerHealthProbePortOnSAPClusterIPResource will set a new Azure Load Balancer Health Probe
Port on 'SAP $SAPSID IP' cluster resource.

.DESCRIPTION
Set-AzureLoadBalancerHealthProbePortOnSAPClusterIPResource will set a new Azure Load Balancer Health Probe
Port on 'SAP $SAPSID IP' cluster resource.
It will also restart SAP Cluster group (default behavior), to activate the changes.

You need to run it on one of the SAP ASCS/SCS Windows cluster nodes.

Expectation is that SAP group is installed with official SWPM installation tool, which will set default
expected naming convention for:
- SAP Cluster Group: 'SAP $SAPSID'
- SAP Cluster IP Address Resource: 'SAP $SAPSID IP'

.PARAMETER SAPSID
SAP SID - 3 characters staring with letter.

.PARAMETER ProbePort
Azure Load Balancer Health Check Probe Port.

.PARAMETER RestartSAPClusterGroup
Optional parameter. Default value is '$True', so SAP cluster group will be restarted to activate the changes.

.PARAMETER IsSAPERSClusteredInstance
Optional parameter.Default value is '$False'.
If set to $True, then handle the new clustered SAP ERS2 instance.

.EXAMPLE
# Set probe port to 62000, on SAP cluster resource 'SAP AB1 IP', and restart the SAP cluster group 'SAP AB1',
to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62000

.EXAMPLE
# Set probe port to 62000, on SAP cluster resource 'SAP AB1 IP'. SAP cluster group 'SAP AB1' IS NOT
restarted, therefore changes are NOT active.
# To activate the changes you need to manually restart 'SAP AB1' cluster group.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62000 -RestartSAPClusterGroup $False

.EXAMPLE
# Set probe port to 62001, on SAP cluster resource 'SAP AB1 ERS IP'. SAP cluster group 'SAP AB1 ERS' IS
restarted, to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62001 -IsSAPERSClusteredInstance $True

#>

[CmdletBinding()]
param(

[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[ValidateLength(3,3)]
[string]$SAPSID,

[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[int] $ProbePort,

[Parameter(Mandatory=$False)]
[bool] $RestartSAPClusterGroup = $True,

[Parameter(Mandatory=$False)]
[bool] $IsSAPERSClusteredInstance = $False
)

BEGIN{}

PROCESS{
try{

if($IsSAPERSClusteredInstance){
#Handle clustered SAP ERS Instance
$SAPClusterRoleName = "SAP $SAPSID ERS"
$SAPIPresourceName = "SAP $SAPSID ERS IP"
}else{
#Handle clustered SAP ASCS/SCS Instance
$SAPClusterRoleName = "SAP $SAPSID"
$SAPIPresourceName = "SAP $SAPSID IP"
}

$SAPIPResourceClusterParameters = Get-ClusterResource $SAPIPresourceName | Get-ClusterParameter


$IPAddress = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Address" }).Value
$NetworkName = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Network" }).Value
$SubnetMask = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "SubnetMask" }).Value
$OverrideAddressMatch = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq
"OverrideAddressMatch" }).Value
$EnableDhcp = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "EnableDhcp" }).Value
$OldProbePort = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "ProbePort" }).Value

$var = Get-ClusterResource | Where-Object { $_.name -eq $SAPIPresourceName }

#Write-Host "Current configuration parameters for SAP IP cluster resource '$SAPIPresourceName'


are:" -ForegroundColor Cyan
Write-Output "Current configuration parameters for SAP IP cluster resource '$SAPIPresourceName'
are:"

Get-ClusterResource -Name $SAPIPresourceName | Get-ClusterParameter

Write-Output " "


Write-Output "Current probe port property of the SAP cluster resource '$SAPIPresourceName' is
'$OldProbePort'."
Write-Output " "
Write-Output "Setting the new probe port property of the SAP cluster resource
'$SAPIPresourceName' to '$ProbePort' ..."
Write-Output " "

$var | Set-ClusterParameter -Multiple @{"Address"=$IPAddress;"ProbePort"=$ProbePort;"Subnetmask"=$SubnetMask;"Network"=$NetworkName;"OverrideAddressMatch"=$OverrideAddressMatch;"EnableDhcp"=$EnableDhcp}

Write-Output " "

#$ActivateChanges = Read-Host "Do you want to restart SAP cluster role '$SAPClusterRoleName', to activate the changes (yes/no)?"

if($RestartSAPClusterGroup){
Write-Output ""
Write-Output "Activating changes..."

Write-Output " "


Write-Output "Taking SAP cluster IP resource '$SAPIPresourceName' offline ..."
Stop-ClusterResource -Name $SAPIPresourceName
sleep 5

Write-Output "Starting SAP cluster role '$SAPClusterRoleName' ..."


Start-ClusterGroup -Name $SAPClusterRoleName

Write-Output "New ProbePort parameter is active."


Write-Output " "

Write-Output "New configuration parameters for SAP IP cluster resource '$SAPIPresourceName':"


Write-Output " "
Get-ClusterResource -Name $SAPIPresourceName | Get-ClusterParameter
}else
{
Write-Output "SAP cluster role '$SAPClusterRoleName' is not restarted, therefore changes are
not activated."
}
}
catch{
Write-Error $_.Exception.Message
}

}
END {}
}

Continue with the SAP installation


1. Install the database instance, by following the process that's described in the SAP installation guide.
2. Install SAP on the second cluster node by following the steps that are described in the SAP installation guide.
3. Install the SAP Primary Application Server (PAS) instance on the virtual machine that you've designated to host
the PAS.
Follow the process described in the SAP installation guide. There are no dependencies on Azure.
4. Install additional SAP application servers on the virtual machines, designated to host SAP Application server
instances.
Follow the process described in the SAP installation guide. There are no dependencies on Azure.

Test the SAP ASCS/SCS instance failover


For the outlined failover tests, we assume that SAP ASCS is active on node A.
1. Verify that the SAP system can successfully fail over from node A to node B. In this example, the test is done
for SAPSID PR2.
Make sure that each SAPSID can successfully move to the other cluster node.
Choose one of these options to initiate a failover of the SAP <SID> cluster group from cluster node A to
cluster node B:
Failover Cluster Manager
Failover Cluster PowerShell

$SAPSID = "PR2" # SAP <SID>

$SAPClusterGroup = "SAP $SAPSID"


Move-ClusterGroup -Name $SAPClusterGroup

2. Restart cluster node A within the Windows guest operating system. This initiates an automatic failover of
the SAP <SID> cluster group from node A to node B.
3. Restart cluster node A from the Azure portal. This initiates an automatic failover of the SAP <SID> cluster
group from node A to node B.
4. Restart cluster node A by using Azure PowerShell. This initiates an automatic failover of the SAP <SID>
cluster group from node A to node B.
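
For step 4, a minimal Azure PowerShell sketch (the resource group name is a placeholder; the VM name is the example used earlier in this article) would be:

# Restart cluster node A (pr1-ascs-10) from Azure PowerShell to trigger the failover
Restart-AzVM -ResourceGroupName "MyResourceGroup" -Name "pr1-ascs-10"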

Next steps
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for an SAP
ASCS/SCS instance
Install SAP NetWeaver HA on a Windows failover cluster and shared disk for an SAP ASCS/SCS instance
SAP ASCS/SCS instance multi-SID high availability
with Windows Server Failover Clustering and shared
disk on Azure


If you have an SAP deployment, you must use an internal load balancer to create a Windows cluster configuration
for SAP Central Services (ASCS/SCS) instances.
This article focuses on how to move from a single ASCS/SCS installation to an SAP multi-SID configuration by
installing additional SAP ASCS/SCS clustered instances into an existing Windows Server Failover Clustering
(WSFC) cluster with shared disk, using SIOS to simulate shared disk. When this process is completed, you have
configured an SAP multi-SID cluster.

NOTE
This feature is available only in the Azure Resource Manager deployment model.
There is a limit on the number of private front-end IPs for each Azure internal load balancer.
The maximum number of SAP ASCS/SCS instances in one WSFC cluster is equal to the maximum number of private front-
end IPs for each Azure internal load balancer.

For more information about load-balancer limits, see the "Private front-end IP per load balancer" section in
Networking limits: Azure Resource Manager.

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.

NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will
continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM
compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation instructions, see Install Azure
PowerShell.

Prerequisites
You have already configured a WSFC cluster to use for one SAP ASCS/SCS instance by using shared disk, as shown
in this diagram.
IMPORTANT
The setup must meet the following conditions:
The SAP ASCS/SCS instances must share the same WSFC cluster.
Each database management system (DBMS) SID must have its own dedicated WSFC cluster.
SAP application servers that belong to one SAP system SID must have their own dedicated VMs.
A mix of Enqueue Replication Server 1 and Enqueue Replication Server 2 in the same cluster is not supported.

SAP ASCS/SCS multi-SID architecture with shared disk


The goal is to install multiple SAP ABAP ASCS or SAP Java SCS clustered instances in the same WSFC cluster, as
illustrated here:
For more information about load-balancer limits, see the "Private front-end IP per load balancer" section in
Networking limits: Azure Resource Manager.
The complete landscape with two high-availability SAP systems would look like this:
Prepare the infrastructure for an SAP multi-SID scenario
To prepare your infrastructure, you can install an additional SAP ASCS/SCS instance with the following
parameters:

Parameter name | Value
SAP ASCS/SCS SID | PR5
SAP DBMS internal load balancer | pr1-lb-ascs
SAP virtual host name | pr5-sap-cl
SAP ASCS/SCS virtual host IP address (additional Azure load balancer IP address) | 10.0.0.50
SAP ASCS/SCS instance number | 50
ILB probe port for additional SAP ASCS/SCS instance | 62350


NOTE
For SAP ASCS/SCS cluster instances, each IP address requires a unique probe port. For example, if one IP address on an
Azure internal load balancer uses probe port 62300, no other IP address on that load balancer can use probe port 62300.
For our purposes, because probe port 62300 is already reserved, we are using probe port 62350.

You can install additional SAP ASCS/SCS instances in the existing WSFC cluster with two nodes:

Virtual machine role | Virtual machine host name | Static IP address
First cluster node for ASCS/SCS instance | pr1-ascs-0 | 10.0.0.10
Second cluster node for ASCS/SCS instance | pr1-ascs-1 | 10.0.0.9

Create a virtual host name for the clustered SAP ASCS/SCS instance on the DNS server
You can create a DNS entry for the virtual host name of the ASCS/SCS instance by using the following
parameters:

New SAP ASCS/SCS virtual host name | Associated IP address
pr5-sap-cl | 10.0.0.50

The new host name and IP address are displayed in DNS Manager, as shown in the following screenshot:
NOTE
The new IP address that you assign to the virtual host name of the additional ASCS/SCS instance must be the same as the
new IP address that you assigned to the SAP Azure load balancer.
In our scenario, the IP address is 10.0.0.50.

Add an IP address to an existing Azure internal load balancer by using PowerShell


To create more than one SAP ASCS/SCS instance in the same WSFC cluster, use PowerShell to add an IP address to
an existing Azure internal load balancer. Each IP address requires its own load-balancing rules, probe port, front-
end IP pool, and back-end pool.
The following script adds a new IP address to an existing load balancer. Update the PowerShell variables for your
environment. The script creates all the required load-balancing rules for all SAP ASCS/SCS ports.

# Select-AzSubscription -SubscriptionId <xxxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx>


Clear-Host
$ResourceGroupName = "SAP-MULTI-SID-Landscape" # Existing resource group name
$VNetName = "pr2-vnet" # Existing virtual network name
$SubnetName = "Subnet" # Existing subnet name
$ILBName = "pr2-lb-ascs" # Existing ILB name
$ILBIP = "10.0.0.50" # New IP address
$VMNames = "pr2-ascs-0","pr2-ascs-1" # Existing cluster virtual machine names
$SAPInstanceNumber = 50 # SAP ASCS/SCS instance number: must be a unique value for each cluster
[int]$ProbePort = "623$SAPInstanceNumber" # Probe port: must be a unique value for each IP and load balancer

$ILB = Get-AzLoadBalancer -Name $ILBName -ResourceGroupName $ResourceGroupName

$count = $ILB.FrontendIpConfigurations.Count + 1
$FrontEndConfigurationName ="lbFrontendASCS$count"
$LBProbeName = "lbProbeASCS$count"

# Get the Azure virtual network and subnet


$VNet = Get-AzVirtualNetwork -Name $VNetName -ResourceGroupName $ResourceGroupName
$Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $VNet -Name $SubnetName

# Add a second front-end and probe configuration


Write-Host "Adding new front end IP Pool '$FrontEndConfigurationName' ..." -ForegroundColor Green
$ILB | Add-AzLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName -PrivateIpAddress $ILBIP -SubnetId
$Subnet.Id
$ILB | Add-AzLoadBalancerProbeConfig -Name $LBProbeName -Protocol Tcp -Port $Probeport -ProbeCount 2 -
IntervalInSeconds 10 | Set-AzLoadBalancer

# Get a new updated configuration


$ILB = Get-AzLoadBalancer -Name $ILBname -ResourceGroupName $ResourceGroupName

# Get an updated LP FrontendIpConfig


$FEConfig = Get-AzLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName -LoadBalancer $ILB
$HealthProbe = Get-AzLoadBalancerProbeConfig -Name $LBProbeName -LoadBalancer $ILB

# Add a back-end configuration into an existing ILB


$BackEndConfigurationName = "backendPoolASCS$count"
Write-Host "Adding new backend Pool '$BackEndConfigurationName' ..." -ForegroundColor Green
$BEConfig = Add-AzLoadBalancerBackendAddressPoolConfig -Name $BackEndConfigurationName -LoadBalancer $ILB |
Set-AzLoadBalancer

# Get an updated config


$ILB = Get-AzLoadBalancer -Name $ILBname -ResourceGroupName $ResourceGroupName

# Assign VM NICs to the back-end pool


$BEPool = Get-AzLoadBalancerBackendAddressPoolConfig -Name $BackEndConfigurationName -LoadBalancer $ILB
foreach($VMName in $VMNames){
$VM = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $VMName
$NICName = ($VM.NetworkInterfaceIDs[0].Split('/') | select -last 1)
$NIC = Get-AzNetworkInterface -name $NICName -ResourceGroupName $ResourceGroupName
$NIC.IpConfigurations[0].LoadBalancerBackendAddressPools += $BEPool
Write-Host "Assigning network card '$NICName' of the '$VMName' VM to the backend pool '$BackEndConfigurationName' ..." -ForegroundColor Green
Set-AzNetworkInterface -NetworkInterface $NIC
#start-AzVM -ResourceGroupName $ResourceGroupName -Name $VM.Name
}

# Create load-balancing rules


$Ports = "445","32$SAPInstanceNumber","33$SAPInstanceNumber","36$SAPInstanceNumber","39$SAPInstanceNumber","5985","81$SAPInstanceNumber","5$SAPInstanceNumber`13","5$SAPInstanceNumber`14","5$SAPInstanceNumber`16"
$ILB = Get-AzLoadBalancer -Name $ILBname -ResourceGroupName $ResourceGroupName
$FEConfig = get-AzLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName -LoadBalancer $ILB
$BEConfig = Get-AzLoadBalancerBackendAddressPoolConfig -Name $BackEndConfigurationName -LoadBalancer $ILB
$HealthProbe = Get-AzLoadBalancerProbeConfig -Name $LBProbeName -LoadBalancer $ILB

Write-Host "Creating load balancing rules for the ports: '$Ports' ... " -ForegroundColor Green

foreach ($Port in $Ports) {

$LBConfigrulename = "lbrule$Port" + "_$count"


Write-Host "Creating load balancing rule '$LBConfigrulename' for the port '$Port' ..." -
ForegroundColor Green

$ILB | Add-AzLoadBalancerRuleConfig -Name $LBConfigRuleName -FrontendIpConfiguration $FEConfig -


BackendAddressPool $BEConfig -Probe $HealthProbe -Protocol tcp -FrontendPort $Port -BackendPort $Port -
IdleTimeoutInMinutes 30 -LoadDistribution Default -EnableFloatingIP
}

$ILB | Set-AzLoadBalancer

Write-Host "Successfully added new IP '$ILBIP' to the internal load balancer '$ILBName'!" -ForegroundColor
Green

After the script has run, the results are displayed in the Azure portal, as shown in the following screenshot:

Add disks to cluster machines, and configure the SIOS cluster-share disk
You must add a new cluster shared disk for each additional SAP ASCS/SCS instance. For Windows Server 2012 R2,
the WSFC cluster shared disk currently in use is the SIOS DataKeeper software solution.
Do the following:
1. Add an additional disk or disks of the same size (which you need to stripe) to each of the cluster nodes, and
format them.
2. Configure storage replication with SIOS DataKeeper.
This procedure assumes that you have already installed SIOS DataKeeper on the WSFC cluster machines. If you
have installed it, you must now configure replication between the machines. The process is described in detail in
Install SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share disk.

Deploy VMs for SAP application servers and the DBMS cluster
To complete the infrastructure preparation for the second SAP system, do the following:
1. Deploy dedicated VMs for the SAP application servers, and put each in its own dedicated availability group.
2. Deploy dedicated VMs for the DBMS cluster, and put each in its own dedicated availability group.

Install an SAP NetWeaver multi-SID system


For a description of the complete process of installing a second SAP SID2 system, see SAP NetWeaver HA
installation on Windows Failover Cluster and shared disk for an SAP ASCS/SCS instance.
The high-level procedure is as follows:
1. Install SAP with a high-availability ASCS/SCS instance.
In this step, you are installing SAP with a high-availability ASCS/SCS instance on the existing WSFC cluster
node 1.
2. Modify the SAP profile of the ASCS/SCS instance.
3. Configure a probe port.
In this step, you are configuring an SAP cluster resource SAP-SID2-IP probe port by using PowerShell.
Execute this configuration on one of the SAP ASCS/SCS cluster nodes.
4. Install the database instance.
To install the second cluster, follow the steps in the SAP installation guide.
5. Install the second cluster node.
In this step, you are installing SAP with a high-availability ASCS/SCS instance on the existing WSFC cluster
node 2. To install the second cluster, follow the steps in the SAP installation guide.
6. Open Windows Firewall ports for the SAP ASCS/SCS instance and probe port.
On both cluster nodes that are used for SAP ASCS/SCS instances, you are opening all Windows Firewall
ports that are used by SAP ASCS/SCS. These SAP ASCS/SCS instance ports are listed in the chapter SAP
ASCS / SCS Ports.
For a list of all other SAP ports, see TCP/IP ports of all SAP products.
Also open the Azure internal load balancer probe port, which is 62350 in our scenario; it is described earlier in this
article (a firewall sketch follows this list).
7. Install the SAP primary application server on the new dedicated VM, as described in the SAP installation
guide.
8. Install the SAP additional application server on the new dedicated VM, as described in the SAP installation
guide.
9. Test the SAP ASCS/SCS instance failover and SIOS replication.
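
For step 6, the following sketch illustrates how the Windows Firewall ports could be opened with PowerShell on both cluster nodes. The port list is illustrative only, derived from the port pattern for instance number 50; take the complete list from the referenced SAP documentation.

# Sketch only - open SAP ASCS/SCS ports for instance number 50 and the ILB probe port 62350
New-NetFirewallRule -DisplayName "SAP ASCS/SCS instance 50 ports" -Direction Inbound -Protocol TCP -LocalPort 3250,3350,3650,3950,8150,50513,50514,50516 -Action Allow
New-NetFirewallRule -DisplayName "Azure ILB probe port 62350" -Direction Inbound -Protocol TCP -LocalPort 62350 -Action Allow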

Next steps
Networking limits: Azure Resource Manager
Multiple VIPs for Azure Load Balancer
SAP ASCS/SCS instance multi-SID high availability
with Windows Server Failover Clustering and file
share on Azure


You can manage multiple virtual IP addresses by using an Azure internal load balancer.
If you have an SAP deployment, you can use an internal load balancer to create a Windows cluster configuration
for SAP Central Services (ASCS/SCS) instances.
This article focuses on how to move from a single ASCS/SCS installation to an SAP multi-SID configuration by
installing additional SAP ASCS/SCS clustered instances into an existing Windows Server Failover Clustering
(WSFC) cluster with file share . When this process is completed, you have configured an SAP multi-SID cluster.

NOTE
This feature is available only in the Azure Resource Manager deployment model.
There is a limit on the number of private front-end IPs for each Azure internal load balancer.
The maximum number of SAP ASCS/SCS instances in one WSFC cluster is equal to the maximum number of private front-
end IPs for each Azure internal load balancer.
The configuration introduced in this documentation is not yet supported for use with Azure Availability Zones

For more information about load-balancer limits, see the "Private front-end IP per load balancer" section in
Networking limits: Azure Resource Manager. Also consider using the Azure Standard Load Balancer SKU instead of
the basic SKU of the Azure load balancer.

Prerequisites
You have already configured a WSFC cluster to use for one SAP ASCS/SCS instance by using file share , as shown
in this diagram.
Figure 1: An SAP ASCS/SCS instance and SOFS deployed in two clusters

IMPORTANT
The setup must meet the following conditions:
The SAP ASCS/SCS instances must share the same WSFC cluster.
Different SAP Global Hosts file shares belonging to different SAP SIDs must share the same SOFS cluster.
Each database management system (DBMS) SID must have its own dedicated WSFC cluster.
SAP application servers that belong to one SAP system SID must have their own dedicated VMs.
A mix of Enqueue Replication Server 1 and Enqueue Replication Server 2 in the same cluster is not supported.

SAP ASCS/SCS multi-SID architecture with file share


The goal is to install multiple SAP ABAP ASCS or SAP Java SCS
clustered instances in the same WSFC cluster, as illustrated here:
Figure 2: SAP multi-SID configuration in two clusters
The installation of an additional SAP <SID2> system is identical to the installation of one <SID> system. Two
additional preparation steps are required on the ASCS/SCS cluster as well as on the file share SOFS cluster.

Prepare the infrastructure for an SAP multi-SID scenario


Prepare the infrastructure on the domain controller
Create the domain group <Domain>\SAP_<SID2>_GlobalAdmin , for example, with <SID2> = PR2. The
domain group name is <Domain>\SAP_PR2_GlobalAdmin.
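
If you prefer to script this, a minimal sketch with the ActiveDirectory PowerShell module (run on the domain controller or a machine with the RSAT AD tools; the OU placement is left to the default) could look like this:

# Sketch only - create the global admin group for the second SAP SID PR2
Import-Module ActiveDirectory
New-ADGroup -Name "SAP_PR2_GlobalAdmin" -SamAccountName "SAP_PR2_GlobalAdmin" -GroupScope Global -GroupCategory Security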
Prepare the infrastructure on the ASCS/SCS cluster
You must prepare the infrastructure on the existing ASCS/SCS cluster for a second SAP <SID>:
Create a virtual host name for the clustered SAP ASCS/SCS instance on the DNS server.
Add an IP address to an existing Azure internal load balancer by using PowerShell.
These steps are described in Infrastructure preparation for an SAP multi-SID scenario.
Prepare the infrastructure on an SOFS cluster by using the existing SAP Global Host
You can reuse the existing <SAPGlobalHost> and Volume1 of the first SAP <SID1> system.
Figure 3: Multi-SID SOFS is the same as the SAP Global Host name

IMPORTANT
For the second SAP <SID2> system, the same Volume1 and the same <SAPGlobalHost> network name are used.
Because you have already set SAPMNT as the share name for various SAP systems, to reuse the <SAPGlobalHost>
network name, you must use the same Volume1 .
The file path for the <SID2> global host is C:\ClusterStorage\Volume1\usr\sap\<SID2>\SYS.

For the <SID2> system, you must prepare the SAP Global Host ..\SYS.. folder on the SOFS cluster.
To prepare the SAP Global Host for the <SID2> instance, execute the following PowerShell script:
##################
# SAP multi-SID
##################

$SAPSID2 = "PR2"
$DomainName2 = "SAPCLUSTER"
$SAPSIDGlobalAdminGroupName2 = "$DomainName2\SAP_" + $SAPSID2 + "_GlobalAdmin"

# SAP ASCS/SCS cluster nodes


$ASCSCluster2Node1 = "ja1-ascs-0"
$ASCSCluster2Node2 = "ja1-ascs-1"

# Define the SAP ASCS/SCS cluster node computer objects


$ASCSCluster2ObjectNode1 = "$DomainName2\$ASCSCluster2Node1$"
$ASCSCluster2ObjectNode2 = "$DomainName2\$ASCSCluster2Node2$"

# Create usr\sap\.. folders on CSV


$SAPGlobalFolder2 = "C:\ClusterStorage\Volume1\usr\sap\$SAPSID2\SYS"
New-Item -Path $SAPGlobalFolder2 -ItemType Directory

# Add permissions for the SAP SID2 system


Grant-SmbShareAccess -Name sapmnt -AccountName $SAPSIDGlobalAdminGroupName2, $ASCSCluster2ObjectNode1,
$ASCSCluster2ObjectNode2 -AccessRight Full -Force

$UsrSAPFolder = "C:\ClusterStorage\Volume1\usr\sap\"

# Set file and folder security


$Acl = Get-Acl $UsrSAPFolder

# Add the security object of the SAP_<sid>_GlobalAdmin group


$Ar = New-Object system.security.accesscontrol.filesystemaccessrule($SAPSIDGlobalAdminGroupName2,"FullControl", 'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Add the security object of the clusternode1$ computer object


$Ar = New-Object system.security.accesscontrol.filesystemaccessrule($ASCSCluster2ObjectNode1,"FullControl",'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Add the security object of the clusternode2$ computer object


$Ar = New-Object system.security.accesscontrol.filesystemaccessrule($ASCSCluster2ObjectNode2,"FullControl",'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose

Prepare the infrastructure on the SOFS cluster by using a different SAP Global Host
You can configure the second SOFS (for example, the second SOFS cluster role with <SAPGlobalHost2> and a
different Volume2 for the second <SID2> ).
Figure 4: Multi-SID SOFS is the same as SAP GLOBAL host name 2
To create the second SOFS role with <SAPGlobalHost2>, execute this PowerShell script:

# Create SOFS with SAP Global Host Name 2


$SAPGlobalHostName = "sapglobal2"
Add-ClusterScaleOutFileServerRole -Name $SAPGlobalHostName

Create the second Volume2 . Execute this PowerShell script:

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName SAPPR2 -FileSystem CSVFS_ReFS -Size 5GB -ResiliencySettingName Mirror
Figure 5: Second Volume2 in Failover Cluster Manager
Create an SAP Global folder for the second <SID2>, and set file security.
Execute this PowerShell script:
# Create a folder for <SID2> on a second Volume2 and set file security
$SAPSID = "PR2"
$DomainName = "SAPCLUSTER"
$SAPSIDGlobalAdminGroupName = "$DomainName\SAP_" + $SAPSID + "_GlobalAdmin"

# SAP ASCS/SCS cluster nodes


$ASCSClusterNode1 = "ascs-1"
$ASCSClusterNode2 = "ascs-2"

# Define SAP ASCS/SCS cluster node computer objects


$ASCSClusterObjectNode1 = "$DomainName\$ASCSClusterNode1$"
$ASCSClusterObjectNode2 = "$DomainName\$ASCSClusterNode2$"

# Create usr\sap\.. folders on CSV


$SAPGlobalFolder = "C:\ClusterStorage\Volume2\usr\sap\$SAPSID\SYS"
New-Item -Path $SAPGlobalFolder -ItemType Directory

$UsrSAPFolder = "C:\ClusterStorage\Volume2\usr\sap\"

# Set file and folder security


$Acl = Get-Acl $UsrSAPFolder

# Add the file security object of the SAP_<sid>_GlobalAdmin group


$Ar = New-Object system.security.accesscontrol.filesystemaccessrule($SAPSIDGlobalAdminGroupName,"FullControl", 'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Add the security object of the clusternode1$ computer object


$Ar = New-Object system.security.accesscontrol.filesystemaccessrule($ASCSClusterObjectNode1,"FullControl",'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Add the security object of the clusternode2$ computer object


$Ar = New-Object system.security.accesscontrol.filesystemaccessrule($ASCSClusterObjectNode2,"FullControl",'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose

To create a SAPMNT file share on Volume2 with the <SAPGlobalHost2> host name for the second SAP <SID2>,
start the Add File Share wizard in Failover Cluster Manager.
Right-click the sapglobal2 SOFS cluster group, and then select Add File Share.
Figure 6: Start “Add File Share” wizard

Figure 7: Select "SMB Share – Quick"


Figure 8: Select "sapglobalhost2" and specify path on Volume2

Figure 9: Set file share name to "sapmnt"


Figure 10: Disable all settings

Assign Full control permissions to files and sapmnt share for:


The SAP_<SID>_GlobalAdmin domain user group
Computer object of ASCS/SCS cluster nodes ascs-1$ and ascs-2$

Figure 11: Assign "Full control" to user group and computer accounts

Figure 12: Select "Create"


Figure 13: The second sapmnt bound to sapglobal2 host and Volume2 is created

Install SAP NetWeaver multi-SID


Install SAP <SID2> ASCS/SCS and ERS instances
Follow the same installation and configuration steps as described earlier for one SAP <SID>.
Install DBMS and SAP application servers
Install DBMS and SAP application Servers as described earlier.

Next steps
Install an ASCS/SCS instance on a failover cluster with no shared disks: Official SAP guidelines for an HA file
share
Storage spaces direct in Windows Server 2016
Scale-out file server for application data overview
What's new in storage in Windows Server 2016
High availability for SAP NetWeaver on Azure VMs
on SUSE Linux Enterprise Server for SAP
applications multi-SID guide

This article describes how to deploy multiple SAP NetWeaver or S/4HANA highly available systems (that is,
multi-SID) in a two-node cluster on Azure VMs with SUSE Linux Enterprise Server for SAP applications.
In the example configurations, installation commands, and so on, three SAP NetWeaver 7.50 systems are deployed in a
single, two-node high availability cluster. The SAP system SIDs are:
NW1 : ASCS instance number 00 and virtual host name msnw1ascs ; ERS instance number 02 and virtual
host name msnw1ers .
NW2 : ASCS instance number 10 and virtual hostname msnw2ascs ; ERS instance number 12 and virtual
host name msnw2ers .
NW3 : ASCS instance number 20 and virtual hostname msnw3ascs ; ERS instance number 22 and virtual
host name msnw3ers .
The article doesn't cover the database layer and the deployment of the SAP NFS shares. In the examples in this
article, we are using virtual names nw2-nfs for the NW2 NFS shares and nw3-nfs for the NW3 NFS shares,
assuming that NFS cluster was deployed.
Before you begin, refer to the following SAP Notes and papers first:
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP
Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides The guides contain all required information to set up Netweaver HA
and SAP HANA System Replication on-premises. Use these guides as a general baseline. They provide
much more detailed information.
SUSE High Availability Extension 12 SP3 Release Notes
SUSE multi-SID cluster guide for SLES 12 and SLES 15
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
The virtual machines that participate in the cluster must be sized to be able to run all resources, in case a failover
occurs. Each SAP SID can fail over independently from the others in the multi-SID high availability cluster. If
using SBD fencing, the SBD devices can be shared between multiple clusters.
To achieve high availability, SAP NetWeaver requires highly available NFS shares. In this example, we assume
the SAP NFS shares are either hosted on a highly available NFS file server, which can be used by multiple SAP
systems, or the shares are deployed on Azure NetApp Files NFS volumes.

IMPORTANT
The support for multi-SID clustering of SAP ASCS/ERS with SUSE Linux as guest operating system in Azure VMs is limited
to five SAP SIDs on the same cluster. Each new SID increases the complexity. A mix of SAP Enqueue Replication Server 1
and Enqueue Replication Server 2 on the same cluster is not supported. Multi-SID clustering describes the installation
of multiple SAP ASCS/ERS instances with different SIDs in one Pacemaker cluster. Currently multi-SID clustering is only
supported for ASCS/ERS.
TIP
The multi-SID clustering of SAP ASCS/ERS is a solution with higher complexity. It is more complex to implement. It also
involves higher administrative effort, when executing maintenance activities (like OS patching). Before you start the
actual implementation, take time to carefully plan out the deployment and all involved components like VMs, NFS
mounts, VIPs, load balancer configurations and so on.

The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database
use virtual hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address.
We recommend using Standard load balancer.
The following list shows the configuration of the (A)SCS and ERS load balancer for this multi-SID cluster
example with three SAP systems. You will need separate frontend IPs, health probes, and load-balancing rules for
each ASCS and ERS instance for each of the SIDs. Assign all VMs that are part of the ASCS/ERS cluster to one
backend pool.
(A )SCS
Frontend configuration
IP address for NW1: 10.3.1.14
IP address for NW2: 10.3.1.16
IP address for NW3: 10.3.1.13
Probe Ports
Port 620<nr> , therefore for NW1, NW2, and NW3 probe ports 62000 , 62010 and 62020
Load-balancing rules -
create one for each instance, that is, NW1/ASCS, NW2/ASCS and NW3/ASCS.
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr> 13 TCP
5<nr> 14 TCP
5<nr> 16 TCP
ERS
Frontend configuration
IP address for NW1 10.3.1.15
IP address for NW2 10.3.1.17
IP address for NW3 10.3.1.19
Probe Port
Port 621<nr>, therefore for NW1, NW2, and NW3 probe ports 62102, 62112 and 62122
Load-balancing rules - create one for each instance, that is, NW1/ERS, NW2/ERS and NW3/ERS.
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr> 13 TCP
5<nr> 14 TCP
5<nr> 16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the
(A)SCS/ERS cluster

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need additional IP address for the VM, deploy a second NIC.

NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow
routing to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for
Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.

IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause
the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
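
A minimal sketch for persisting the setting on the SLES cluster nodes (the drop-in file name is an assumption) is:

# Disable TCP timestamps and apply the setting immediately on each cluster node
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/91-sap-tcp-timestamps.conf
sudo sysctl --system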

SAP NFS shares


SAP NetWeaver requires shared storage for the transport, profile directory, and so on. For a highly available SAP
system, it is important to have highly available NFS shares. You will need to decide on the architecture for your
SAP NFS shares. One option is to build a highly available NFS cluster on Azure VMs on SUSE Linux Enterprise
Server, which can be shared between multiple SAP systems.
Another option is to deploy the shares on Azure NetApp Files NFS volumes. With Azure NetApp Files, you will
get built-in high availability for the SAP NFS shares.

Deploy the first SAP system in the cluster


Now that you have decided on the architecture for the SAP NFS shares, deploy the first SAP system in the
cluster, following the corresponding documentation.
If using highly available NFS server, follow High availability for SAP NetWeaver on Azure VMs on SUSE
Linux Enterprise Server for SAP applications.
If using Azure NetApp Files NFS volumes, follow High availability for SAP NetWeaver on Azure VMs on
SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications
The documents listed above will guide you through the steps to prepare the necessary infrastructure, build the
cluster, and prepare the OS for running the SAP application.

TIP
Always test the failover functionality of the cluster after the first system is deployed and before adding the additional SAP
SIDs to the cluster. That way you will know that the cluster functionality works, before adding the complexity of
additional SAP systems to the cluster.
Deploy additional SAP systems in the cluster
In this example, we assume that system NW1 was already deployed in the cluster. We will show how to deploy
in the cluster SAP systems NW2 and NW3 .
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
Prerequisites

IMPORTANT
Before following the instructions to deploy additional SAP systems in the cluster, follow the instructions to deploy the
first SAP system in the cluster, as there are steps which are only necessary during the first system deployment.

This documentation assumes that:


The Pacemaker cluster is already configured and running.
At least one SAP system (ASCS / ERS instance) is already deployed and is running in the cluster.
The cluster fail over functionality has been tested.
The NFS shares for all SAP systems are deployed.
Prepare for SAP NetWeaver Installation
1. Add configuration for the newly deployed system (that is, NW2 , NW3 ) to the existing Azure Load
Balancer, following the instructions Deploy Azure Load Balancer manually via Azure portal. Adjust the IP
addresses, health probe ports, load-balancing rules for your configuration.
2. [A] Set up name resolution for the additional SAP systems. You can either use DNS server or modify
/etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Adapt the IP addresses
and the host names to your environment.

sudo vi /etc/hosts
# IP address of the load balancer frontend configuration for NW2 ASCS
10.3.1.16 msnw2ascs
# IP address of the load balancer frontend configuration for NW3 ASCS
10.3.1.13 msnw3ascs
# IP address of the load balancer frontend configuration for NW2 ERS
10.3.1.17 msnw2ers
# IP address of the load balancer frontend configuration for NW3 ERS
10.3.1.19 msnw3ers
# IP address for virtual host name for the NFS server for NW2
10.3.1.31 nw2-nfs
# IP address for virtual host name for the NFS server for NW3
10.3.1.32 nw3-nfs

3. [A] Create the shared directories for the additional NW2 and NW3 SAP systems that you are deploying
to the cluster.
sudo mkdir -p /sapmnt/NW2
sudo mkdir -p /usr/sap/NW2/SYS
sudo mkdir -p /usr/sap/NW2/ASCS10
sudo mkdir -p /usr/sap/NW2/ERS12
sudo mkdir -p /sapmnt/NW3
sudo mkdir -p /usr/sap/NW3/SYS
sudo mkdir -p /usr/sap/NW3/ASCS20
sudo mkdir -p /usr/sap/NW3/ERS22

sudo chattr +i /sapmnt/NW2


sudo chattr +i /usr/sap/NW2/SYS
sudo chattr +i /usr/sap/NW2/ASCS10
sudo chattr +i /usr/sap/NW2/ERS12
sudo chattr +i /sapmnt/NW3
sudo chattr +i /usr/sap/NW3/SYS
sudo chattr +i /usr/sap/NW3/ASCS20
sudo chattr +i /usr/sap/NW3/ERS22

4. [A] Configure autofs to mount the /sapmnt/SID and /usr/sap/SID/SYS file systems for the additional
SAP systems that you are deploying to the cluster. In this example NW2 and NW3 .
Update file /etc/auto.direct with the file systems for the additional SAP systems that you are
deploying to the cluster.
If using NFS file server, follow the instructions here
If using Azure NetApp Files, follow the instructions here
You will need to restart the autofs service to mount the newly added shares.
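
The following sketch shows what the additional /etc/auto.direct entries could look like for the NFS file server scenario and how to restart autofs. The export paths and mount options are assumptions; align them with the entries you created for the first SAP system.

# Illustrative /etc/auto.direct entries for NW2 (adjust export paths and options to your environment)
# /sapmnt/NW2        -nfsvers=4,nosymlink,sync  nw2-nfs:/NW2/sapmntsid
# /usr/sap/NW2/SYS   -nfsvers=4,nosymlink,sync  nw2-nfs:/NW2/sidsys

# Restart autofs to mount the newly added shares
sudo systemctl restart autofs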
Install ASCS / ERS
1. Create the virtual IP and health probe cluster resources for the ASCS instance of the additional SAP
system you are deploying to the cluster. The example shown here is for NW2 and NW3 ASCS, using
highly available NFS server.

IMPORTANT
Recent testing revealed situations, where netcat stops responding to requests due to backlog and its limitation of
handling only one connection. The netcat resource stops listening to the Azure Load balancer requests and the
floating IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we
recommend using azure-lb resource agent, which is part of package resource-agents, with the following package
version requirements:
For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
Note that the change will require brief downtime.
For existing Pacemaker clusters, if the configuration was already changed to use socat as described in Azure
Load-Balancer Detection Hardening, there is no requirement to switch immediately to azure-lb resource agent.
sudo crm configure primitive fs_NW2_ASCS Filesystem device='nw2-nfs:/NW2/ASCS'
directory='/usr/sap/NW2/ASCS10' fstype='nfs4' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW2_ASCS IPaddr2 \


params ip=10.3.1.16 cidr_netmask=24 \
op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW2_ASCS azure-lb port=62010

sudo crm configure group g-NW2_ASCS fs_NW2_ASCS nc_NW2_ASCS vip_NW2_ASCS \


meta resource-stickiness=3000

sudo crm configure primitive fs_NW3_ASCS Filesystem device='nw3-nfs:/NW3/ASCS'


directory='/usr/sap/NW3/ASCS20' fstype='nfs4' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW3_ASCS IPaddr2 \


params ip=10.3.1.13 cidr_netmask=24 \
op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW3_ASCS azure-lb port=62020

sudo crm configure group g-NW3_ASCS fs_NW3_ASCS nc_NW3_ASCS vip_NW3_ASCS \


meta resource-stickiness=3000

As you create the resources, they may be assigned to different cluster nodes. When you group
them, they will migrate to one of the cluster nodes. Make sure the cluster status is ok and that all
resources are started. It is not important on which node the resources are running.
2. [1] Install SAP NetWeaver ASCS
Install SAP NetWeaver ASCS as root, using a virtual hostname that maps to the IP address of the load
balancer frontend configuration for the ASCS. For example, for system NW2 , the virtual hostname is
msnw2ascs , 10.3.1.16 and the instance number that you used for the probe of the load balancer, for
example 10 . For system NW3 , the virtual hostname is msnw3ascs , 10.3.1.13 and the instance number
that you used for the probe of the load balancer, for example 20 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual host name.

sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

If the installation fails to create a subfolder in /usr/sap/SID/ASCSInstance# , try setting the owner to
sidadm and group to sapsys of the ASCSInstance# and retry.
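For example, assuming SAP system NW2 with ASCS instance number 10, the ownership could be corrected like this before retrying:

# Example for SAP system NW2, ASCS instance 10 - adjust the SID and instance number to your system
sudo chown nw2adm /usr/sap/NW2/ASCS10
sudo chgrp sapsys /usr/sap/NW2/ASCS10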
3. [1] Create a virtual IP and health-probe cluster resources for the ERS instance of the additional SAP
system you are deploying to the cluster. The example shown here is for NW2 and NW3 ERS, using
highly available NFS server.
sudo crm configure primitive fs_NW2_ERS Filesystem device='nw2-nfs:/NW2/ASCSERS'
directory='/usr/sap/NW2/ERS12' fstype='nfs4' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW2_ERS IPaddr2 \


params ip=10.3.1.17 cidr_netmask=24 \
op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW2_ERS azure-lb port=62112

sudo crm configure group g-NW2_ERS fs_NW2_ERS nc_NW2_ERS vip_NW2_ERS

sudo crm configure primitive fs_NW3_ERS Filesystem device='nw3-nfs:/NW3/ASCSERS'


directory='/usr/sap/NW3/ERS22' fstype='nfs4' \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW3_ERS IPaddr2 \


params ip=10.3.1.19 cidr_netmask=24 \
op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW3_ERS azure-lb port=62122

sudo crm configure group g-NW3_ERS fs_NW3_ERS nc_NW3_ERS vip_NW3_ERS

As you create the resources, they may be assigned to different cluster nodes. When you group them,
they will migrate to one of the cluster nodes. Make sure the cluster status is ok and that all resources are
started.
Next, make sure that the resources of the newly created ERS group are running on the cluster node
opposite to the cluster node where the ASCS instance for the same SAP system was installed. For
example, if NW2 ASCS was installed on slesmsscl1 , then make sure the NW2 ERS group is running on
slesmsscl2 . You can migrate the NW2 ERS group to slesmsscl2 by running the following command:

crm resource migrate g-NW2_ERS slesmsscl2 force

4. [2] Install SAP NetWeaver ERS


Install SAP NetWeaver ERS as root on the other node, using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ERS. For example for system NW2 , the
virtual host name will be msnw2ers , 10.3.1.17 and the instance number that you used for the probe of
the load balancer, for example 12 . For system NW3 , the virtual host name msnw3ers , 10.3.1.19 and
the instance number that you used for the probe of the load balancer, for example 22 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual host name.

sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

NOTE
Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will
fail.
If the installation fails to create a subfolder in /usr/sap/NW2/ERSInstance# , try setting the owner to
sidadm and the group to sapsys of the ERSInstance# folder and retry.
If it was necessary for you to migrate the ERS group of the newly deployed SAP system to a different
cluster node, don't forget to remove the location constraint for the ERS group. You can remove the
constraint by running the following command (the example is given for SAP systems NW2 and NW3 ).

crm resource unmigrate g-NW2_ERS


crm resource unmigrate g-NW3_ERS

5. [1] Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP system(s). The example
shown below is for NW2. You will need to adapt the ASCS/SCS and ERS profiles for all SAP instances
added to the cluster.
ASCS/SCS profile

sudo vi /sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# Add the keep alive parameter, if using ENSA1


enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in SAP
note 1410736.
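As a minimal sketch, the keepalive parameters can be set with sysctl. The values below are illustrative assumptions only; take the authoritative values from SAP note 1410736 and persist them, for example in a file under /etc/sysctl.d.

# Illustrative values only - refer to SAP note 1410736 for the recommended settings
sudo sysctl -w net.ipv4.tcp_keepalive_time=300
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=75
sudo sysctl -w net.ipv4.tcp_keepalive_probes=9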
ERS profile

sudo vi /sapmnt/NW2/profile/NW2_ERS12_msnw2ers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# remove Autostart from ERS profile


# Autostart = 1

6. [A] Configure the SAP users for the newly deployed SAP systems, in this example NW2 and NW3 .

# Add sidadm to the haclient group


sudo usermod -aG haclient nw2adm
sudo usermod -aG haclient nw3adm

7. Add the ASCS and ERS SAP services for the newly installed SAP systems to the sapservices file. The
example shown below is for SAP systems NW2 and NW3 .
Add the ASCS service entry to the second node and copy the ERS service entry to the first node. Execute
the commands for each SAP system on the node, where the ASCS instance for the SAP system was
installed.

# Execute the following commands on slesmsscl1,assuming the NW2 ASCS instance was installed on
slesmsscl1
cat /usr/sap/sapservices | grep ASCS10 | sudo ssh slesmsscl2 "cat >>/usr/sap/sapservices"
sudo ssh slesmsscl2 "cat /usr/sap/sapservices" | grep ERS12 | sudo tee -a /usr/sap/sapservices
# Execute the following commands on slesmsscl2, assuming the NW3 ASCS instance was installed on
slesmsscl2
cat /usr/sap/sapservices | grep ASCS20 | sudo ssh slesmsscl1 "cat >>/usr/sap/sapservices"
sudo ssh slesmsscl1 "cat /usr/sap/sapservices" | grep ERS22 | sudo tee -a /usr/sap/sapservices

8. [1] Create the SAP cluster resources for the newly installed SAP system.
If using enqueue server 1 architecture (ENSA1), define the resources for SAP systems NW2 and NW3
as follows:
sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_NW2_ASCS10 SAPInstance \


operations \$id=rsc_sap_NW2_ASCS10-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs"
\
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10

sudo crm configure primitive rsc_sap_NW2_ERS12 SAPInstance \


operations \$id=rsc_sap_NW2_ERS12-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers"
AUTOMATIC_RECOVER=false IS_ERS=true \
meta priority=1000

sudo crm configure modgroup g-NW2_ASCS add rsc_sap_NW2_ASCS10


sudo crm configure modgroup g-NW2_ERS add rsc_sap_NW2_ERS12

sudo crm configure colocation col_sap_NW2_no_both -5000: g-NW2_ERS g-NW2_ASCS


sudo crm configure location loc_sap_NW2_failover_to_ers rsc_sap_NW2_ASCS10 rule 2000: runs_ers_NW2
eq 1
sudo crm configure order ord_sap_NW2_first_start_ascs Optional: rsc_sap_NW2_ASCS10:start
rsc_sap_NW2_ERS12:stop symmetrical=false

sudo crm configure primitive rsc_sap_NW3_ASCS20 SAPInstance \


operations \$id=rsc_sap_NW3_ASCS20-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs"
\
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10

sudo crm configure primitive rsc_sap_NW3_ERS22 SAPInstance \


operations \$id=rsc_sap_NW3_ERS22-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW3_ERS22_msnw3ers"
AUTOMATIC_RECOVER=false IS_ERS=true \
meta priority=1000

sudo crm configure modgroup g-NW3_ASCS add rsc_sap_NW3_ASCS20


sudo crm configure modgroup g-NW3_ERS add rsc_sap_NW3_ERS22

sudo crm configure colocation col_sap_NW3_no_both -5000: g-NW3_ERS g-NW3_ASCS


sudo crm configure location loc_sap_NW3_failover_to_ers rsc_sap_NW3_ASCS20 rule 2000: runs_ers_NW3
eq 1
sudo crm configure order ord_sap_NW3_first_start_ascs Optional: rsc_sap_NW3_ASCS20:start
rsc_sap_NW3_ERS22:stop symmetrical=false
sudo crm configure property maintenance-mode="false"

SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with
ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue
server 2 support. If using enqueue server 2 architecture (ENSA2), define the resources for SAP systems
NW2 and NW3 as follows:
sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_NW2_ASCS10 SAPInstance \


operations \$id=rsc_sap_NW2_ASCS10-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs"
\
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000

sudo crm configure primitive rsc_sap_NW2_ERS12 SAPInstance \


operations \$id=rsc_sap_NW2_ERS12-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers"
AUTOMATIC_RECOVER=false IS_ERS=true

sudo crm configure modgroup g-NW2_ASCS add rsc_sap_NW2_ASCS10


sudo crm configure modgroup g-NW2_ERS add rsc_sap_NW2_ERS12

sudo crm configure colocation col_sap_NW2_no_both -5000: g-NW2_ERS g-NW2_ASCS


sudo crm configure order ord_sap_NW2_first_start_ascs Optional: rsc_sap_NW2_ASCS10:start
rsc_sap_NW2_ERS12:stop symmetrical=false

sudo crm configure primitive rsc_sap_NW3_ASCS20 SAPInstance \


operations \$id=rsc_sap_NW3_ASCS20-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs"
\
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000

sudo crm configure primitive rsc_sap_NW3_ERS22 SAPInstance \


operations \$id=rsc_sap_NW3_ERS22-operations \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW3_ERS22_msnw3ers"
AUTOMATIC_RECOVER=false IS_ERS=true

sudo crm configure modgroup g-NW3_ASCS add rsc_sap_NW3_ASCS20


sudo crm configure modgroup g-NW3_ERS add rsc_sap_NW3_ERS22

sudo crm configure colocation col_sap_NW3_no_both -5000: g-NW3_ERS g-NW3_ASCS


sudo crm configure order ord_sap_NW3_first_start_ascs Optional: rsc_sap_NW3_ASCS20:start
rsc_sap_NW3_ERS22:stop symmetrical=false
sudo crm configure property maintenance-mode="false"

If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641019.
Make sure that the cluster status is ok and that all resources are started. It is not important on which
node the resources are running. The following example shows the cluster resources status, after SAP
systems NW2 and NW3 were added to the cluster.
sudo crm_mon -r

# Online: [ slesmsscl1 slesmsscl2 ]

#Full list of resources:

#stonith-sbd (stonith:external/sbd): Started slesmsscl1


# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl2
# Resource Group: g-NW1_ERS
# fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
# rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl1
# Resource Group: g-NW2_ASCS
# fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
# nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
# vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
# rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl1
# Resource Group: g-NW2_ERS
# fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
# nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
# vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
# rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl2
# Resource Group: g-NW3_ASCS
# fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
# nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
# vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
# rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl1
# Resource Group: g-NW3_ERS
# fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
# nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
# vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
# rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl2

The following picture shows how the resources look in the HA Web Konsole (Hawk), with the
resources for SAP system NW2 expanded.

Proceed with the SAP installation


Complete your SAP installation by:
Preparing your SAP NetWeaver application servers
Installing a DBMS instance
Installing a primary SAP application server
Installing one or more additional SAP application instances

Test the multi-SID cluster setup


The following tests are a subset of the test cases in the best practices guides of SUSE. They are included for
your convenience. For the full list of cluster tests, reference the following documentation:
If using highly available NFS server, follow High availability for SAP NetWeaver on Azure VMs on SUSE
Linux Enterprise Server for SAP applications.
If using Azure NetApp Files NFS volumes, follow High availability for SAP NetWeaver on Azure VMs on
SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications
Always read the SUSE best practices guides and perform all additional tests that might have been added.
The tests that are presented are in a two node, multi-SID cluster with three SAP systems installed.
1. Test HAGetFailoverConfig and HACheckFailoverConfig
Run the following commands as the corresponding <sid>adm user on the node where the ASCS instance is currently running. If the
commands fail with FAIL: Insufficient memory, it might be caused by dashes in your hostname. This is a
known issue and will be fixed by SUSE in the sap-suse-cluster-connector package.
slesmsscl1:nw1adm 57> sapcontrol -nr 00 -function HAGetFailoverConfig

# 10.12.2019 21:33:08
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
# HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
(sap_suse_cluster_connector 3.1.0)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode: slesmsscl1
# HANodes: slesmsscl1, slesmsscl2

slesmsscl1:nw1adm 53> sapcontrol -nr 00 -function HACheckFailoverConfig

# 19.12.2019 21:19:58
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch

slesmsscl2:nw2adm 35> sapcontrol -nr 10 -function HAGetFailoverConfig

# 10.12.2019 21:37:09
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
# HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
(sap_suse_cluster_connector 3.1.0)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode: slesmsscl2
# HANodes: slesmsscl2, slesmsscl1

slesmsscl2:nw2adm 52> sapcontrol -nr 10 -function HACheckFailoverConfig

# 19.12.2019 21:17:39
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch

slesmsscl1:nw3adm 49> sapcontrol -nr 20 -function HAGetFailoverConfig

# 10.12.2019 23:35:36
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
# HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
(sap_suse_cluster_connector 3.1.0)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode: slesmsscl1
# HANodes: slesmsscl1, slesmsscl2

slesmsscl1:nw3adm 52> sapcontrol -nr 20 -function HACheckFailoverConfig

# 19.12.2019 21:10:42
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch

2. Manually migrate the ASCS instance. The example shows migrating the ASCS instance for SAP system
NW2.
Resource state, before starting the test:
Full list of resources:
stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl1

Run the following commands as root to migrate the NW2 ASCS instance.

crm resource migrate rsc_sap_NW2_ASCS10 force


# INFO: Move constraint created for rsc_sap_NW2_ASCS10

crm resource unmigrate rsc_sap_NW2_ASCS10


# INFO: Removed migration constraints for rsc_sap_NW2_ASCS10

# Remove failed actions for the ERS that occurred as part of the migration
crm resource cleanup rsc_sap_NW2_ERS12

Resource state after the test:


Full list of resources:
stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl1

3. Test HAFailoverToNode. The test presented here shows migrating the ASCS instance for SAP system
NW2.
Resource state before starting the test:
Full list of resources:
stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl1

Run the following commands as nw2adm to migrate the NW2 ASCS instance.

slesmsscl2:nw2adm 53> sapcontrol -nr 10 -host msnw2ascs -user nw2adm password -function
HAFailoverToNode ""

# run as root
# Remove failed actions for the ERS that occurred as part of the migration
crm resource cleanup rsc_sap_NW2_ERS12
# Remove migration constraints
crm resource clear rsc_sap_NW2_ASCS10
#INFO: Removed migration constraints for rsc_sap_NW2_ASCS10

Resource state after the test:


Full list of resources:
stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl1

4. Simulate node crash


Resource state before starting the test:
Full list of resources:
stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl1

Run the following command as root on the node where at least one ASCS instance is running. In this
example, we executed the command on slesmsscl2 , where the ASCS instances for NW1 and NW3 are
running.

slesmsscl2:~ # echo b > /proc/sysrq-trigger

If you use SBD, Pacemaker should not automatically start on the killed node. The status after the node is
started again should look like this.
Online: [ slesmsscl1 ]
OFFLINE: [ slesmsscl2 ]
Full list of resources:

stonith-sbd (stonith:external/sbd): Started slesmsscl1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl1

Failed Resource Actions:


* rsc_sap_NW1_ERS02_monitor_11000 on slesmsscl1 'not running' (7): call=125, status=complete,
exitreason='',
last-rc-change='Fri Dec 13 19:32:10 2019', queued=0ms, exec=0ms
* rsc_sap_NW2_ERS12_monitor_11000 on slesmsscl1 'not running' (7): call=126, status=complete,
exitreason='',
last-rc-change='Fri Dec 13 19:32:10 2019', queued=0ms, exec=0ms
* rsc_sap_NW3_ERS22_monitor_11000 on slesmsscl1 'not running' (7): call=127, status=complete,
exitreason='',
last-rc-change='Fri Dec 13 19:32:10 2019', queued=0ms, exec=0ms

Use the following commands to start Pacemaker on the killed node, clean the SBD messages, and clean
the failed resources.
# run as root
# list the SBD device(s)
cat /etc/sysconfig/sbd | grep SBD_DEVICE=

# output is like:
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-
36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"

sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 -d /dev/disk/by-id/scsi-


36001405034a84428af24ddd8c3a3e9e1 -d /dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3 message
slesmsscl2 clear

systemctl start pacemaker


crm resource cleanup rsc_sap_NW1_ERS02
crm resource cleanup rsc_sap_NW2_ERS12
crm resource cleanup rsc_sap_NW3_ERS22

Resource state after the test:

Full list of resources:


stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl2

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
High availability for SAP NetWeaver on Azure VMs
on Red Hat Enterprise Linux for SAP applications
multi-SID guide
12/22/2020 • 24 minutes to read

This article describes how to deploy multiple SAP NetWeaver highly available systems (that is, multi-SID) in a two-
node cluster on Azure VMs with Red Hat Enterprise Linux for SAP applications.
In the example configurations, installation commands, and so on, three SAP NetWeaver 7.50 systems are deployed in a
single, two-node high availability cluster. The SAP system SIDs are:
NW1 : ASCS instance number 00 and virtual host name msnw1ascs ; ERS instance number 02 and virtual
host name msnw1ers .
NW2 : ASCS instance number 10 and virtual hostname msnw2ascs ; ERS instance number 12 and virtual
host name msnw2ers .
NW3 : ASCS instance number 20 and virtual hostname msnw3ascs ; ERS instance number 22 and virtual
host name msnw3ers .
The article doesn't cover the database layer and the deployment of the SAP NFS shares. In the examples in this
article, we are using Azure NetApp Files volume sapMSID for the NFS shares, assuming that the volume is
already deployed. We are also assuming that the Azure NetApp Files volume is deployed with NFSv3 protocol
and that the following file paths exist for the cluster resources for the ASCS and ERS instances of SAP systems
NW1, NW2 and NW3:
volume sapMSID (nfs://10.42.0.4/sapmntNW1)
volume sapMSID (nfs://10.42.0.4/usrsapNW1ascs)
volume sapMSID (nfs://10.42.0.4/usrsapNW1sys)
volume sapMSID (nfs://10.42.0.4/usrsapNW1ers)
volume sapMSID (nfs://10.42.0.4/sapmntNW2)
volume sapMSID (nfs://10.42.0.4/usrsapNW2ascs)
volume sapMSID (nfs://10.42.0.4/usrsapNW2sys)
volume sapMSID (nfs://10.42.0.4/usrsapNW2ers)
volume sapMSID (nfs://10.42.0.4/sapmntNW3)
volume sapMSID (nfs://10.42.0.4/usrsapNW3ascs)
volume sapMSID (nfs://10.42.0.4/usrsapNW3sys)
volume sapMSID (nfs://10.42.0.4/usrsapNW3ers)
Before you begin, refer to the following SAP Notes and papers first:
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
Azure NetApp Files documentation
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP Netweaver in pacemaker cluster
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP Netweaver with standalone resources in RHEL 7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on RHEL
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
The virtual machines that participate in the cluster must be sized to run all resources in case a failover
occurs. Each SAP SID can fail over independently of the others in the multi-SID high availability cluster.
To achieve high availability, SAP NetWeaver requires highly available shares. In this documentation, we present
the examples with the SAP shares deployed on Azure NetApp Files NFS volumes. It is also possible to host the
shares on a highly available GlusterFS cluster, which can be used by multiple SAP systems.
IMPORTANT
The support for multi-SID clustering of SAP ASCS/ERS with Red Hat Linux as guest operating system in Azure VMs is
limited to five SAP SIDs on the same cluster. Each new SID increases the complexity. A mix of SAP Enqueue Replication
Server 1 and Enqueue Replication Server 2 on the same cluster is not supported. Multi-SID clustering describes the
installation of multiple SAP ASCS/ERS instances with different SIDs in one Pacemaker cluster. Currently multi-SID clustering
is only supported for ASCS/ERS.

TIP
The multi-SID clustering of SAP ASCS/ERS is a solution with higher complexity. It is more complex to implement and
involves higher administrative effort when executing maintenance activities (like OS patching). Before you start the actual
implementation, take time to carefully plan out the deployment and all involved components like VMs, NFS mounts, VIPs,
load balancer configurations and so on.

SAP NetWeaver ASCS, SAP NetWeaver SCS, and SAP NetWeaver ERS use virtual hostnames and virtual IP
addresses. On Azure, a load balancer is required to use a virtual IP address. We recommend using the Standard load
balancer.
The following list shows the configuration of the (A)SCS and ERS load balancer for this multi-SID cluster example
with three SAP systems. You will need separate frontend IPs, health probes, and load-balancing rules for each
ASCS and ERS instance for each of the SIDs. Assign all VMs that are part of the ASCS/ERS cluster to one
backend pool of a single ILB.
(A)SCS
Frontend configuration
IP address for NW1: 10.3.1.50
IP address for NW2: 10.3.1.52
IP address for NW3: 10.3.1.54
Probe Ports
Port 620<nr> , therefore for NW1, NW2, and NW3 probe ports 62000 , 62010 and 62020
Load-balancing rules - create one for each instance, that is, NW1/ASCS, NW2/ASCS and NW3/ASCS.
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address for NW1 10.3.1.51
IP address for NW2 10.3.1.53
IP address for NW3 10.3.1.55
Probe Port
Port 621<nr> , therefore for NW1, NW2, and NW3 probe ports 62102 , 62112 and 62122
Load-balancing rules - create one for each instance, that is, NW1/ERS, NW2/ERS and NW3/ERS.
If using Standard Load Balancer, select HA ports
If using Basic Load Balancer, create Load balancing rules for the following ports
32<nr> TCP
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster

IMPORTANT
Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see Azure Load
balancer Limitations. If you need an additional IP address for the VM, deploy a second NIC.

NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure
load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing
to public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for Virtual
Machines using Azure Standard Load Balancer in SAP high-availability scenarios.
IMPORTANT
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the
health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer health probes.
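A minimal sketch of how this setting could be applied persistently (the drop-in file name is an assumption):

# Disable TCP timestamps and keep the setting across reboots - the file name is only an example
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/91-azure-lb.conf
sudo sysctl -p /etc/sysctl.d/91-azure-lb.conf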

SAP shares
SAP NetWeaver requires shared storage for the transport, profile directory, and so on. For a highly available SAP
system, it is important to have highly available shares. You will need to decide on the architecture for your SAP
shares. One option is to deploy the shares on Azure NetApp Files NFS volumes. With Azure NetApp Files, you will
get built-in high availability for the SAP NFS shares.
Another option is to build GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver, which can be
shared between multiple SAP systems.

Deploy the first SAP system in the cluster


Now that you have decided on the architecture for the SAP shares, deploy the first SAP system in the cluster,
following the corresponding documentation.
If using Azure NetApp Files NFS volumes, follow Azure VMs high availability for SAP NetWeaver on Red Hat
Enterprise Linux with Azure NetApp Files for SAP applications
If using GlusterFS cluster, follow GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver.
The documents listed above will guide you through the steps to prepare the necessary infrastructure, build the
cluster, prepare the OS for running the SAP application.

TIP
Always test the fail over functionality of the cluster, after the first system is deployed, before adding the additional SAP
SIDs to the cluster. That way you will know that the cluster functionality works, before adding the complexity of additional
SAP systems to the cluster.

Deploy additional SAP systems in the cluster


In this example, we assume that system NW1 was already deployed in the cluster. We will show how to deploy
SAP systems NW2 and NW3 in the cluster.
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
Prerequisites

IMPORTANT
Before following the instructions to deploy additional SAP systems in the cluster, follow the instructions to deploy the first
SAP system in the cluster, as there are steps which are only necessary during the first system deployment.

This documentation assumes that:


The Pacemaker cluster is already configured and running.
At least one SAP system (ASCS / ERS instance) is already deployed and is running in the cluster.
The cluster failover functionality has been tested.
The NFS shares for all SAP systems are deployed.
Prepare for SAP NetWeaver Installation
1. Add configuration for the newly deployed system (that is, NW2 , NW3 ) to the existing Azure Load
Balancer, following the instructions Deploy Azure Load Balancer manually via Azure portal. Adjust the IP
addresses, health probe ports, load-balancing rules for your configuration.
2. [A] Set up name resolution for the additional SAP systems. You can either use a DNS server or modify
/etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Adapt the IP addresses and
the host names to your environment.

sudo vi /etc/hosts
# IP address of the load balancer frontend configuration for NW2 ASCS
10.3.1.52 msnw2ascs
# IP address of the load balancer frontend configuration for NW3 ASCS
10.3.1.54 msnw3ascs
# IP address of the load balancer frontend configuration for NW2 ERS
10.3.1.53 msnw2ers
# IP address of the load balancer frontend configuration for NW3 ERS
10.3.1.55 msnw3ers

3. [A] Create the shared directories for the additional NW2 and NW3 SAP systems that you are deploying
to the cluster.

sudo mkdir -p /sapmnt/NW2


sudo mkdir -p /usr/sap/NW2/SYS
sudo mkdir -p /usr/sap/NW2/ASCS10
sudo mkdir -p /usr/sap/NW2/ERS12
sudo mkdir -p /sapmnt/NW3
sudo mkdir -p /usr/sap/NW3/SYS
sudo mkdir -p /usr/sap/NW3/ASCS20
sudo mkdir -p /usr/sap/NW3/ERS22

sudo chattr +i /sapmnt/NW2


sudo chattr +i /usr/sap/NW2/SYS
sudo chattr +i /usr/sap/NW2/ASCS10
sudo chattr +i /usr/sap/NW2/ERS12
sudo chattr +i /sapmnt/NW3
sudo chattr +i /usr/sap/NW3/SYS
sudo chattr +i /usr/sap/NW3/ASCS20
sudo chattr +i /usr/sap/NW3/ERS22

4. [A] Add the mount entries for the /sapmnt/SID and /usr/sap/SID/SYS file systems for the additional SAP
systems that you are deploying to the cluster. In this example NW2 and NW3 .
Update file /etc/fstab with the file systems for the additional SAP systems that you are deploying to the
cluster.
If using Azure NetApp Files, follow the instructions here
If using GlusterFS cluster, follow the instructions here
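As a hedged sketch, the /etc/fstab entries for NW2 and NW3 on Azure NetApp Files with the NFSv3 protocol could look like the lines below. The mount options are assumptions; use the options recommended in the linked instructions for your scenario.

# Illustrative /etc/fstab entries for NFSv3 on Azure NetApp Files - mount options are assumptions
10.42.0.4:/sapmntNW2 /sapmnt/NW2 nfs rw,hard,rsize=65536,wsize=65536,vers=3 0 0
10.42.0.4:/usrsapNW2sys /usr/sap/NW2/SYS nfs rw,hard,rsize=65536,wsize=65536,vers=3 0 0
10.42.0.4:/sapmntNW3 /sapmnt/NW3 nfs rw,hard,rsize=65536,wsize=65536,vers=3 0 0
10.42.0.4:/usrsapNW3sys /usr/sap/NW3/SYS nfs rw,hard,rsize=65536,wsize=65536,vers=3 0 0

# Mount the newly added file systems
sudo mount -a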
Install ASCS / ERS
1. Create the virtual IP and health probe cluster resources for the ASCS instances of the additional SAP
systems you are deploying to the cluster. The example shown here is for NW2 and NW3 ASCS, using NFS
on Azure NetApp Files volumes with NFSv3 protocol.
sudo pcs resource create fs_NW2_ASCS Filesystem device='10.42.0.4:/sapMSIDR/usrsapNW2ascs' \
directory='/usr/sap/NW2/ASCS10' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-NW2_ASCS

sudo pcs resource create vip_NW2_ASCS IPaddr2 \


ip=10.3.1.52 cidr_netmask=24 \
--group g-NW2_ASCS

sudo pcs resource create nc_NW2_ASCS azure-lb port=62010 \


--group g-NW2_ASCS

sudo pcs resource create fs_NW3_ASCS Filesystem device='10.42.0.4:/sapMSIDR/usrsapNW3ascs' \


directory='/usr/sap/NW3/ASCS20' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-NW3_ASCS

sudo pcs resource create vip_NW3_ASCS IPaddr2 \


ip=10.3.1.54 cidr_netmask=24 \
--group g-NW3_ASCS

sudo pcs resource create nc_NW3_ASCS azure-lb port=62020 \


--group g-NW3_ASCS

Make sure the cluster status is ok and that all resources are started. It is not important on which node the
resources are running.
2. [1] Install SAP NetWeaver ASCS
Install SAP NetWeaver ASCS as root, using a virtual hostname that maps to the IP address of the load
balancer frontend configuration for the ASCS. For example, for system NW2 , the virtual hostname is
msnw2ascs , 10.3.1.52 and the instance number that you used for the probe of the load balancer, for
example 10 . For system NW3 , the virtual hostname is msnw3ascs , 10.3.1.54 and the instance number
that you used for the probe of the load balancer, for example 20 . Note down on which cluster node you
installed ASCS for each SAP SID.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual host name.

# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
command again
sudo firewall-cmd --zone=public --add-port=4237/tcp
sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

If the installation fails to create a subfolder in /usr/sap/SID/ASCSInstance# , try setting the owner to
sidadm and group to sapsys of the ASCSInstance# and retry.
3. [1] Create a virtual IP and health-probe cluster resources for the ERS instance of the additional SAP
system you are deploying to the cluster. The example shown here is for NW2 and NW3 ERS, using NFS
on Azure NetApp Files volumes with NFSv3 protocol.
sudo pcs resource create fs_NW2_AERS Filesystem device='10.42.0.4:/sapMSIDR/usrsapNW2ers' \
directory='/usr/sap/NW2/ERS12' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-NW2_AERS

sudo pcs resource create vip_NW2_AERS IPaddr2 \


ip=10.3.1.53 cidr_netmask=24 \
--group g-NW2_AERS

sudo pcs resource create nc_NW2_AERS azure-lb port=62112 \


--group g-NW2_AERS

sudo pcs resource create fs_NW3_AERS Filesystem device='10.42.0.4:/sapMSIDR/usrsapNW3ers' \


directory='/usr/sap/NW3/ERS22' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
--group g-NW3_AERS

sudo pcs resource create vip_NW3_AERS IPaddr2 \


ip=10.3.1.55 cidr_netmask=24 \
--group g-NW3_AERS

sudo pcs resource create nc_NW3_AERS azure-lb port=62122 \


--group g-NW3_AERS

Make sure the cluster status is ok and that all resources are started.
Next, make sure that the resources of the newly created ERS group are running on the cluster node
opposite to the cluster node where the ASCS instance for the same SAP system was installed. For example,
if NW2 ASCS was installed on rhelmsscl1 , then make sure the NW2 ERS group is running on rhelmsscl2
. You can migrate the NW2 ERS group to rhelmsscl2 by running the following command for one of the
cluster resources in the group:

pcs resource move fs_NW2_AERS rhelmsscl2

4. [2] Install SAP NetWeaver ERS


Install SAP NetWeaver ERS as root on the other node, using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ERS. For example for system NW2 , the virtual
host name will be msnw2ers , 10.3.1.53 and the instance number that you used for the probe of the load
balancer, for example 12 . For system NW3 , the virtual host name msnw3ers , 10.3.1.55 and the instance
number that you used for the probe of the load balancer, for example 22 .
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect
to sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual host name.

# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the
command again
sudo firewall-cmd --zone=public --add-port=4237/tcp
sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

NOTE
Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will fail.

If the installation fails to create a subfolder in /usr/sap/NW2/ERSInstance# , try setting the owner to
sidadm and the group to sapsys of the ERSInstance# folder and retry.
If it was necessary for you to migrate the ERS group of the newly deployed SAP system to a different
cluster node, don't forget to remove the location constraint for the ERS group. You can remove the
constraint by running the following command (the example is given for SAP systems NW2 and NW3 ).
Make sure to remove the temporary constraints for the same resource you used in the command to move
the ERS cluster group.

pcs resource clear fs_NW2_AERS


pcs resource clear fs_NW3_AERS

5. [1] Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP system(s). The example
shown below is for NW2. You will need to adapt the ASCS/SCS and ERS profiles for all SAP instances
added to the cluster.
ASCS/SCS profile

sudo vi /sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the keep alive parameter, if using ENSA1


enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set as described in
SAP note 1410736.
ERS profile

sudo vi /sapmnt/NW2/profile/NW2_ERS12_msnw2ers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# remove Autostart from ERS profile


# Autostart = 1

6. [A] Update the /usr/sap/sapservices file


To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker
must be commented out from /usr/sap/sapservices file. The example shown below is for SAP systems
NW2 and NW3 .

# On the node where ASCS was installed, comment out the lines for the ASCS instances
#LD_LIBRARY_PATH=/usr/sap/NW2/ASCS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW2/ASCS10/exe/sapstartsrv pf=/usr/sap/NW2/SYS/profile/NW2_ASCS10_msnw2ascs -D -u nw2adm
#LD_LIBRARY_PATH=/usr/sap/NW3/ASCS20/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW3/ASCS20/exe/sapstartsrv pf=/usr/sap/NW3/SYS/profile/NW3_ASCS20_msnw3ascs -D -u nw3adm

# On the node where ERS was installed, comment out the lines for the ERS instances
#LD_LIBRARY_PATH=/usr/sap/NW2/ERS12/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW2/ERS12/exe/sapstartsrv pf=/usr/sap/NW2/ERS12/profile/NW2_ERS12_msnw2ers -D -u nw2adm
#LD_LIBRARY_PATH=/usr/sap/NW3/ERS22/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH;
/usr/sap/NW3/ERS22/exe/sapstartsrv pf=/usr/sap/NW3/ERS22/profile/NW3_ERS22_msnw3ers -D -u nw3adm

7. [1] Create the SAP cluster resources for the newly installed SAP system.
If using enqueue server 1 architecture (ENSA1), define the resources for SAP systems NW2 and NW3 as
follows:

sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \


InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-NW2_ASCS

sudo pcs resource create rsc_sap_NW2_ERS12 SAPInstance \


InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0
timeout=600 \
--group g-NW2_AERS

sudo pcs constraint colocation add g-NW2_AERS with g-NW2_ASCS -5000


sudo pcs constraint location rsc_sap_NW2_ASCS10 rule score=2000 runs_ers_NW2 eq 1
sudo pcs constraint order g-NW2_ASCS then g-NW2_AERS kind=Optional symmetrical=false

sudo pcs resource create rsc_sap_NW3_ASCS20 SAPInstance \


InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-NW3_ASCS

sudo pcs resource create rsc_sap_NW3_ERS22 SAPInstance \


InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW3_ERS22_msnw3ers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0
timeout=600 \
--group g-NW3_AERS

sudo pcs constraint colocation add g-NW3_AERS with g-NW3_ASCS -5000


sudo pcs constraint location rsc_sap_NW3_ASCS20 rule score=2000 runs_ers_NW3 eq 1
sudo pcs constraint order g-NW3_ASCS then g-NW3_AERS kind=Optional symmetrical=false

sudo pcs property set maintenance-mode=false

SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with
ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server
2 support. If using enqueue server 2 architecture (ENSA2), define the resources for SAP systems NW2
and NW3 as follows:
sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \


InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-NW2_ASCS

sudo pcs resource create rsc_sap_NW2_ERS12 SAPInstance \


InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0
timeout=600 \
--group g-NW2_AERS

sudo pcs constraint colocation add g-NW2_AERS with g-NW2_ASCS -5000


sudo pcs constraint order g-NW2_ASCS then g-NW2_AERS kind=Optional symmetrical=false
sudo pcs constraint order start g-NW2_ASCS then stop g-NW2_AERS symmetrical=false

sudo pcs resource create rsc_sap_NW3_ASCS20 SAPInstance \


InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-NW3_ASCS

sudo pcs resource create rsc_sap_NW3_ERS22 SAPInstance \


InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW3_ERS22_msnw3ers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0
timeout=600 \
--group g-NW3_AERS

sudo pcs constraint colocation add g-NW3_AERS with g-NW3_ASCS -5000


sudo pcs constraint order g-NW3_ASCS then g-NW3_AERS kind=Optional symmetrical=false
sudo pcs constraint order start g-NW3_ASCS then stop g-NW3_AERS symmetrical=false

sudo pcs property set maintenance-mode=false

If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641019.

NOTE
The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.

Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running. The following example shows the cluster resources status, after SAP systems
NW2 and NW3 were added to the cluster.
sudo pcs status

Online: [ rhelmsscl1 rhelmsscl2 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW2_AERS
fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW3_AERS
fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
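
If you want to double-check the colocation, location, and order constraints that were created for the newly added
SAP systems, you can list the full constraint configuration of the cluster. A minimal sketch:

sudo pcs constraint list --full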

8. [A] Add firewall rules for ASCS and ERS on both nodes. The example below shows the firewall rules for
both SAP systems NW2 and NW3 (a loop-based variant is sketched after the rules).
# NW2 - ASCS
sudo firewall-cmd --zone=public --add-port=62010/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62010/tcp
sudo firewall-cmd --zone=public --add-port=3210/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3210/tcp
sudo firewall-cmd --zone=public --add-port=3610/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3610/tcp
sudo firewall-cmd --zone=public --add-port=3910/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3910/tcp
sudo firewall-cmd --zone=public --add-port=8110/tcp --permanent
sudo firewall-cmd --zone=public --add-port=8110/tcp
sudo firewall-cmd --zone=public --add-port=51013/tcp --permanent
sudo firewall-cmd --zone=public --add-port=51013/tcp
sudo firewall-cmd --zone=public --add-port=51014/tcp --permanent
sudo firewall-cmd --zone=public --add-port=51014/tcp
sudo firewall-cmd --zone=public --add-port=51016/tcp --permanent
sudo firewall-cmd --zone=public --add-port=51016/tcp
# NW2 - ERS
sudo firewall-cmd --zone=public --add-port=62112/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62112/tcp
sudo firewall-cmd --zone=public --add-port=3312/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3312/tcp
sudo firewall-cmd --zone=public --add-port=51213/tcp --permanent
sudo firewall-cmd --zone=public --add-port=51213/tcp
sudo firewall-cmd --zone=public --add-port=51214/tcp --permanent
sudo firewall-cmd --zone=public --add-port=51214/tcp
sudo firewall-cmd --zone=public --add-port=51216/tcp --permanent
sudo firewall-cmd --zone=public --add-port=51216/tcp
# NW3 - ASCS
sudo firewall-cmd --zone=public --add-port=62020/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62020/tcp
sudo firewall-cmd --zone=public --add-port=3220/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3220/tcp
sudo firewall-cmd --zone=public --add-port=3620/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3620/tcp
sudo firewall-cmd --zone=public --add-port=3920/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3920/tcp
sudo firewall-cmd --zone=public --add-port=8120/tcp --permanent
sudo firewall-cmd --zone=public --add-port=8120/tcp
sudo firewall-cmd --zone=public --add-port=52013/tcp --permanent
sudo firewall-cmd --zone=public --add-port=52013/tcp
sudo firewall-cmd --zone=public --add-port=52014/tcp --permanent
sudo firewall-cmd --zone=public --add-port=52014/tcp
sudo firewall-cmd --zone=public --add-port=52016/tcp --permanent
sudo firewall-cmd --zone=public --add-port=52016/tcp
# NW3 - ERS
sudo firewall-cmd --zone=public --add-port=62122/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62122/tcp
sudo firewall-cmd --zone=public --add-port=3322/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3322/tcp
sudo firewall-cmd --zone=public --add-port=52213/tcp --permanent
sudo firewall-cmd --zone=public --add-port=52213/tcp
sudo firewall-cmd --zone=public --add-port=52214/tcp --permanent
sudo firewall-cmd --zone=public --add-port=52214/tcp
sudo firewall-cmd --zone=public --add-port=52216/tcp --permanent
sudo firewall-cmd --zone=public --add-port=52216/tcp
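
The rules above follow the same pattern for every instance, so they can also be added in a loop. The following is a
minimal sketch for the NW2 ASCS ports listed above; adjust the port list for the other instances.

# Same ports as in the "NW2 - ASCS" block above
for port in 62010 3210 3610 3910 8110 51013 51014 51016; do
    sudo firewall-cmd --zone=public --add-port=${port}/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=${port}/tcp
done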

Proceed with the SAP installation


Complete your SAP installation by:
Preparing your SAP NetWeaver application servers
Installing a DBMS instance
Installing a primary SAP application server
Installing one or more additional SAP application instances
Test the multi-SID cluster setup
The following tests are a subset of the test cases in the best practices guides of Red Hat. They are included for
your convenience. For the full list of cluster tests, reference the following documentation:
If using Azure NetApp Files NFS volumes, follow Azure VMs high availability for SAP NetWeaver on RHEL
with Azure NetApp Files for SAP applications
If using highly available GlusterFS, follow Azure VMs high availability for SAP NetWeaver on RHEL for SAP
applications.
Always read the Red Hat best practices guides and perform all additional tests that might have been added.
The tests presented are performed in a two-node, multi-SID cluster with three SAP systems installed.
1. Manually migrate the ASCS instance. The example shows migrating the ASCS instance for SAP system
NW3.
Resource state before starting the test:

Online: [ rhelmsscl1 rhelmsscl2 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW2_AERS
fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW3_AERS
fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl1

Run the following commands as root to migrate the NW3 ASCS instance.

pcs resource move rsc_sap_NW3_ASCS20


# Clear temporary migration constraints
pcs resource clear rsc_sap_NW3_ASCS20

# Remove failed actions for the ERS that occurred as part of the migration
pcs resource cleanup rsc_sap_NW3_ERS22
Resource state after the test:

Online: [ rhelmsscl1 rhelmsscl2 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW2_AERS
fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW3_AERS
fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl2

2. Simulate node crash


Resource state before starting the test:
Online: [ rhelmsscl1 rhelmsscl2 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW2_AERS
fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW3_AERS
fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl2

Run the following command as root on a node where at least one ASCS instance is running. In this
example, the command was executed on rhelmsscl1, where the ASCS instances for NW1, NW2, and NW3
are running.

echo c > /proc/sysrq-trigger
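
While the crashed node is being fenced and restarted, you can follow the resource failover from the surviving node.
A minimal sketch, assuming you are logged on as root on the node that was not crashed:

watch -n 5 pcs status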

After the test, and after the crashed node has started again, the status should look like this.
Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl2


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW2_AERS
fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW3_AERS
fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl1

If there are messages for failed resources, clean up the status of the failed resources. For example:

pcs resource cleanup rsc_sap_NW1_ERS02

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see
High Availability of SAP HANA on Azure Virtual Machines (VMs)
About disaster recovery for on-premises apps

This article describes on-premises workloads and apps you can protect for disaster recovery with the Azure Site
Recovery service.

Overview
Organizations need a business continuity and disaster recovery (BCDR) strategy to keep workloads and data safe
and available during planned and unplanned downtime, and to recover to regular working conditions.
Site Recovery is an Azure service that contributes to your BCDR strategy. Using Site Recovery, you can deploy
application-aware replication to the cloud, or to a secondary site. You can use Site Recovery to manage replication,
perform disaster recovery testing, and run failovers and failback. Your apps can run on Windows or Linux-based
computers, physical servers, VMware, or Hyper-V.
Site Recovery integrates with Microsoft applications such as SharePoint, Exchange, Dynamics, SQL Server, and
Active Directory. Microsoft works closely with leading vendors including Oracle, SAP, and Red Hat. You can
customize replication solutions on an app-by-app basis.

Why use Site Recovery for application replication?


Site Recovery contributes to application-level protection and recovery as follows:
App-agnostic protection that provides replication for any workload running on a supported machine.
Near-synchronous replication, with recovery point objectives (RPO) as low as 30 seconds to meet the needs of
most critical business apps.
App-consistent snapshots, for single or multi-tier applications.
Integration with SQL Server AlwaysOn, and partnership with other application-level replication technologies.
For example, Active Directory replication, SQL AlwaysOn, and Exchange Database Availability Groups (DAGs).
Flexible recovery plans that enable you to recover an entire application stack with a single click, and to include
external scripts and manual actions in the plan.
Advanced network management in Site Recovery and Azure to simplify app network requirements. Network
management such as the ability to reserve IP addresses, configure load-balancing, and integration with Azure
Traffic Manager for low recovery time objectives (RTO) network switchovers.
A rich automation library that provides production-ready, application-specific scripts that can be downloaded
and integrated with recovery plans.

Workload summary
Site Recovery can replicate any app running on a supported machine. We've partnered with product teams to do
additional testing for the apps specified in the following table.

Workload | Replicate Azure VMs to Azure | Replicate Hyper-V VMs to a secondary site | Replicate Hyper-V VMs to Azure | Replicate VMware VMs to a secondary site | Replicate VMware VMs to Azure
Active Directory, DNS | Yes | Yes | Yes | Yes | Yes
Web apps (IIS, SQL) | Yes | Yes | Yes | Yes | Yes
System Center Operations Manager | Yes | Yes | Yes | Yes | Yes
SharePoint | Yes | Yes | Yes | Yes | Yes
SAP (replicate SAP site to Azure for non-cluster) | Yes (tested by Microsoft) | Yes (tested by Microsoft) | Yes (tested by Microsoft) | Yes (tested by Microsoft) | Yes (tested by Microsoft)
Exchange (non-DAG) | Yes | Yes | Yes | Yes | Yes
Remote Desktop/VDI | Yes | Yes | Yes | Yes | Yes
Linux (operating system and apps) | Yes (tested by Microsoft) | Yes (tested by Microsoft) | Yes (tested by Microsoft) | Yes (tested by Microsoft) | Yes (tested by Microsoft)
Dynamics AX | Yes | Yes | Yes | Yes | Yes
Windows File Server | Yes | Yes | Yes | Yes | Yes
Citrix XenApp and XenDesktop | Yes | N/A | Yes | N/A | Yes

Replicate Active Directory and DNS


An Active Directory and DNS infrastructure are essential to most enterprise apps. During disaster recovery, you'll
need to protect and recover these infrastructure components, before you recover workloads and apps.
You can use Site Recovery to create a complete automated disaster recovery plan for Active Directory and DNS. For
example, to fail over SharePoint and SAP from a primary to a secondary site, you can set up a recovery plan that
first fails over Active Directory. Then use an additional app-specific recovery plan to fail over the other apps that
rely on Active Directory.
Learn more about disaster recovery for Active Directory and DNS.

Protect SQL Server


SQL Server provides a data services foundation for many business apps in an on-premises datacenter. Site
Recovery can be used with SQL Server HA/DR technologies, to protect multi-tiered enterprise apps that use SQL
Server.
Site Recovery provides:
A simple and cost-effective disaster recovery solution for SQL Server. Replicate multiple versions and editions of
SQL Server standalone servers and clusters, to Azure or to a secondary site.
Integration with SQL AlwaysOn Availability Groups, to manage failover and failback with Azure Site Recovery
recovery plans.
End-to-end recovery plans for all tiers in an application, including the SQL Server databases.
Scaling of SQL Server for peak loads with Site Recovery, by bursting workloads into larger IaaS virtual machine sizes
in Azure.
Easy testing of SQL Server disaster recovery. You can run test failovers to analyze data and run compliance
checks, without impacting your production environment.
Learn more about disaster recovery for SQL server.

Protect SharePoint
Azure Site Recovery helps protect SharePoint deployments, as follows:
Eliminates the need and associated infrastructure costs for a stand-by farm for disaster recovery. Use Site
Recovery to replicate an entire farm (web, app, and database tiers) to Azure or to a secondary site.
Simplifies application deployment and management. Updates deployed to the primary site are automatically
replicated. The updates are available after failover and recovery of a farm in a secondary site. Lowers the
management complexity and costs associated with keeping a stand-by farm up to date.
Simplifies SharePoint application development and testing by creating a production-like copy on-demand
replica environment for testing and debugging.
Simplifies transition to the cloud by using Site Recovery to migrate SharePoint deployments to Azure.
Learn more about disaster recovery for SharePoint.

Protect Dynamics AX
Azure Site Recovery helps protect your Dynamics AX ERP solution, by:
Managing replication of your entire Dynamics AX environment (Web and AOS tiers, database tiers, SharePoint)
to Azure, or to a secondary site.
Simplifying migration of Dynamics AX deployments to the cloud (Azure).
Simplifying Dynamics AX application development and testing by creating a production-like copy on-demand,
for testing and debugging.
Learn more about disaster recovery for Dynamics AX.

Protect Remote Desktop Services


Remote Desktop Services (RDS) enables virtual desktop infrastructure (VDI), session-based desktops, and
applications, that allow users to work anywhere.
With Azure Site Recovery, you can:
Replicate managed or unmanaged pooled virtual desktops to a secondary site.
Replicate remote applications and sessions to a secondary site or to Azure.
The following table shows the replication options:
RDS | Replicate Azure VMs to Azure | Replicate Hyper-V VMs to a secondary site | Replicate Hyper-V VMs to Azure | Replicate VMware VMs to a secondary site | Replicate VMware VMs to Azure | Replicate physical servers to a secondary site | Replicate physical servers to Azure
Pooled Virtual Desktop (unmanaged) | No | Yes | No | Yes | No | Yes | No
Pooled Virtual Desktop (managed and without UPD) | No | Yes | No | Yes | No | Yes | No
Remote applications and Desktop sessions (without UPD) | Yes | Yes | Yes | Yes | Yes | Yes | Yes

Learn more about disaster recovery for RDS.

Protect Exchange
Site Recovery helps protect Exchange, as follows:
For small Exchange deployments, such as a single or standalone server, Site Recovery can replicate and fail over
to Azure or to a secondary site.
For larger deployments, Site Recovery integrates with Exchange DAGs.
Exchange DAGs are the recommended solution for Exchange disaster recovery in an enterprise. Site Recovery
recovery plans can include DAGs, to orchestrate DAG failover across sites.
To learn more about disaster recovery for Exchange, see Exchange DAGs and Exchange disaster recovery.

Protect SAP
Use Site Recovery to protect your SAP deployment, as follows:
Enable protection of SAP NetWeaver and non-NetWeaver Production applications running on-premises, by
replicating components to Azure.
Enable protection of SAP NetWeaver and non-NetWeaver Production applications running in Azure, by replicating
components to another Azure datacenter.
Simplify cloud migration, by using Site Recovery to migrate your SAP deployment to Azure.
Simplify SAP project upgrades, testing, and prototyping, by creating a production clone on-demand for testing
SAP applications.
Learn more about disaster recovery for SAP.

Protect Internet Information Services


Use Site Recovery to protect your Internet Information Services (IIS) deployment, as follows:
Azure Site Recovery provides disaster recovery by replicating the critical components in your environment to a cold
remote site or a public cloud like Microsoft Azure. Since the virtual machines with the web server and the database
are replicated to the recovery site, there's no requirement for a separate backup for configuration files or
certificates. The application mappings and bindings dependent on environment variables that are changed post
failover can be updated through scripts integrated into the disaster recovery plans. Virtual machines are brought
up on the recovery site only during a failover. Azure Site Recovery also helps you orchestrate the end-to-end
failover by providing the following capabilities:
Sequencing the shutdown and startup of virtual machines in the various tiers.
Adding scripts to allow updates of application dependencies and bindings on the virtual machines after they've
started. The scripts can also be used to update the DNS server to point to the recovery site.
Allocate IP addresses to virtual machines pre-failover by mapping the primary and recovery networks and use
scripts that don't need to be updated post failover.
Ability for a one-click failover for multiple web applications that eliminates the scope for confusion during a
disaster.
Ability to test the recovery plans in an isolated environment for DR drills.
Learn more about disaster recovery for IIS.

Protect Citrix XenApp and XenDesktop


Use Site Recovery to protect your Citrix XenApp and XenDesktop deployments, as follows:
Enable protection of the Citrix XenApp and XenDesktop deployment. Replicate the different deployment layers to
Azure: Active Directory, DNS server, SQL database server, Citrix Delivery Controller, StoreFront server, XenApp
Master (VDA), Citrix XenApp License Server.
Simplify cloud migration, by using Site Recovery to migrate your Citrix XenApp and XenDesktop deployment to
Azure.
Simplify Citrix XenApp/XenDesktop testing, by creating a production-like copy on-demand for testing and
debugging.
This solution only applies to Windows Server virtual desktops and not client virtual desktops. Client virtual
desktops aren't yet supported for licensing in Azure. Learn More about licensing for client/server desktops in
Azure.
Learn more about disaster recovery for Citrix XenApp and XenDesktop deployments. Or, you can refer to the Citrix
whitepaper.

Next steps
Learn more about disaster recovery for an Azure VM.
Azure proximity placement groups for optimal
network latency with SAP applications

SAP applications based on the SAP NetWeaver or SAP S/4HANA architecture are sensitive to network latency
between the SAP application tier and the SAP database tier. This sensitivity is the result of most of the business
logic running in the application layer. Because the SAP application layer runs the business logic, it issues queries
to the database tier at a high frequency, at a rate of thousands or tens of thousands per second. In most cases,
the nature of these queries is simple. They can often be run on the database tier in 500 microseconds or less.
The time spent on the network to send such a query from the application tier to the database tier and receive
the result set back has a major impact on the time it takes to run business processes. This sensitivity to network
latency is why you might want to achieve certain maximum network latency in SAP deployment projects. See
SAP Note #1100926 - FAQ: Network performance for guidelines on how to classify the network latency.
In many Azure regions, the number of datacenters has grown. At the same time, customers, especially for high-
end SAP systems, are using more special VM SKUs of the M or Mv2 family, or HANA Large Instances. These
Azure virtual machine types aren't always available in all the datacenters that complement an Azure region.
These facts can create opportunities to optimize network latency between the SAP application layer and the
SAP DBMS layer.
To give you a possibility to optimize network latency, Azure offers proximity placement groups. Proximity
placement groups can be used to force grouping of different VM types into a single Azure datacenter to
optimize the network latency between these different VM types to the best possible. In the process of deploying
the first VM into such a proximity placement group, the VM gets bound to a specific datacenter. As appealing as
this prospect sounds, the usage of the construct introduces some restrictions as well:
You cannot assume that all Azure VM types are available in every and all Azure datacenters. As a result, the
combination of different VM types within one proximity placement group can be restricted. These
restrictions occur because the host hardware that’s needed to run a certain VM type might not be present in
the datacenter to which the placement group was deployed
As you resize parts of the VMs that are within one proximity placement group, you cannot automatically
assume that in all cases the new VM type is available in the same datacenter as the other VMs that are part
of the proximity placement group
As Azure decommissions hardware it might force certain VMs of a proximity placement group into another
Azure datacenter. For details covering this case, read the document Co-locate resources for improved
latency

IMPORTANT
As a result of the potential restrictions, proximity placement groups should be used:
Only when necessary
Only on granularity of a single SAP system and not for a whole system landscape or a complete SAP landscape
In a way to keep the different VM types and the number of VMs within a proximity placement group to a minimum

What are proximity placement groups?


An Azure proximity placement group is a logical construct. When a proximity placement group is defined, it's
bound to an Azure region and an Azure resource group. When VMs are deployed, a proximity placement group
is referenced by:
The first Azure VM deployed in the datacenter. You can think of the first virtual machine as a "scope VM"
that's deployed in a datacenter based on Azure allocation algorithms that are eventually combined with user
definitions for a specific Availability Zone.
All subsequent VMs deployed that reference the proximity placement group, to place all subsequently
deployed Azure VMs in the same datacenter as the first virtual machine.

NOTE
If there is no host hardware deployed that could run a specific VM type in the datacenter where the first VM was placed,
the deployment of the requested VM type won’t succeed. You’ll get a failure message.

A single Azure resource group can have multiple proximity placement groups assigned to it. But a proximity
placement group can be assigned to only one Azure resource group.

Proximity placement groups with SAP systems that use only Azure
VMs
Most SAP NetWeaver and S/4HANA system deployments on Azure don't use HANA Large Instances. For
deployments that don't use HANA Large Instances, it's important to provide optimal performance between the
SAP application layer and the DBMS tier. To do so, define an Azure proximity placement group just for the
system.
In most customer deployments, customers build a single Azure resource group for SAP systems. In that case,
there's a one-to-one relationship between, for example, the production ERP system resource group and its
proximity placement group. In other cases, customers organize their resource groups horizontally and collect
all production systems in a single resource group. In this case, you'd have a one-to-many relationship between
your resource group for production SAP systems and several proximity placement groups for your production
SAP ERP, SAP BW, and so on.
Avoid bundling several SAP production or non-production systems in a single proximity placement group.
When a small number of SAP systems or an SAP system and some surrounding applications need to have low
latency network communication, you might consider moving these systems into one proximity placement
group. Avoid bundles of systems because the more systems you group in a proximity placement group, the
higher the chances:
That you require a VM type that can't be run in the specific datacenter into which the proximity placement
group was scoped to.
That resources of non-mainstream VMs, like M-Series VMs, could eventually be unfulfilled when you need
more because you're adding software to a proximity placement group over time.
In the ideal configuration described above, single SAP systems are grouped in one resource group each, with one proximity placement group
each. There's no dependency on whether you use HANA scale-out or DBMS scale-up configurations.

Proximity placement groups and HANA Large Instances


If some of your SAP systems rely on HANA Large Instances for the DBMS layer, you can experience
significant improvements in network latency between the HANA Large Instances unit and Azure VMs when
you're using HANA Large Instances units that are deployed in Revision 4 rows or stamps. One improvement is
that HANA Large Instances units, as they're deployed, deploy with a proximity placement group. You can use
that proximity placement group to deploy your application layer VMs. As a result, those VMs will be deployed
in the same datacenter that hosts your HANA Large Instances unit.
To determine whether your HANA Large Instances unit is deployed in a Revision 4 stamp or row, check the
article Azure HANA Large Instances control through Azure portal. In the attributes overview of your HANA
Large Instances unit, you can also determine the name of the proximity placement group because it was
created when your HANA Large Instances unit was deployed. The name that appears in the attributes overview
is the name of the proximity placement group that you should deploy your application layer VMs into.
As compared to SAP systems that use only Azure virtual machines, when you use HANA Large Instances, you
have less flexibility in deciding how many Azure resource groups to use. All the HANA Large Instances units of
a HANA Large Instances tenant are grouped in a single resource group, as described in this article. Unless you
deploy into different tenants to separate, for example, production and non-production systems or other
systems, all your HANA Large Instances units will be deployed in one HANA Large Instances tenant. This tenant
has a one-to-one relationship with a resource group. But a separate proximity placement group will be defined
for each of the single units.
As a result, for a single tenant there's one Azure resource group, with a separate proximity placement group for
each HANA Large Instances unit.
Example of deployment with proximity placement groups
Following are some PowerShell commands that you can use to deploy your VMs with Azure proximity
placement groups.
The first step, after you sign in to Azure Cloud Shell, is to check whether you're in the Azure subscription that
you want to use for the deployment:

Get-AzContext

If you need to change to a different subscription, you can do so by running this command:

Set-AzContext -Subscription "my PPG test subscription"

Create a new Azure resource group by running this command:

New-AzResourceGroup -Name "myfirstppgexercise" -Location "westus2"

Create the new proximity placement group by running this command:


New-AzProximityPlacementGroup -ResourceGroupName "myfirstppgexercise" -Name "letsgetclose" -Location
"westus2"

Deploy your first VM into the proximity placement group by using a command like this one:

New-AzVm -ResourceGroupName "myfirstppgexercise" -Name "myppganchorvm" -Location "westus2" -OpenPorts


80,3389 -ProximityPlacementGroup "letsgetclose" -Size "Standard_DS11_v2"

The preceding command deploys a Windows-based VM. After this VM deployment succeeds, the datacenter
scope of the proximity placement group is defined within the Azure region. All subsequent VM deployments
that reference the proximity placement group, as shown in the preceding command, will be deployed in the
same Azure datacenter, as long as the VM type can be hosted on hardware placed in that datacenter, and
capacity for that VM type is available.

Combine availability sets and Availability Zones with proximity


placement groups
One of the disadvantages to using Availability Zones for SAP system deployments is that you can’t deploy the
SAP application layer by using availability sets within the specific zone. You want the SAP application layer to be
deployed in the same zones as the DBMS layer. Referencing an Availability Zone and an availability set when
deploying a single VM isn't supported. So, previously, you were forced to deploy your application layer by
referencing a zone. You lost the ability to make sure the application layer VMs were spread across different
update and failure domains.
By using proximity placement groups, you can bypass this restriction. Here's the deployment sequence:
Create a proximity placement group.
Deploy your anchor VM, usually the DBMS server, by referencing an Availability Zone.
Create an availability set that references the Azure proximity group. (See the command later in this article.)
Deploy the application layer VMs by referencing the availability set and the proximity placement group.
Instead of deploying the first VM as demonstrated in the previous section, you reference an Availability Zone
and the proximity placement group when you deploy the VM:

New-AzVm -ResourceGroupName "myfirstppgexercise" -Name "myppganchorvm" -Location "westus2" -OpenPorts


80,3389 -Zone "1" -ProximityPlacementGroup "letsgetclose" -Size "Standard_E16_v3"

A successful deployment of this virtual machine would host the database instance of the SAP system in one
Availability Zone. The scope of the proximity placement group is fixed to one of the datacenters that represent
the Availability Zone you defined.
Assume you deploy the Central Services VMs in the same way as the DBMS VMs, referencing the same zone or
zones and the same proximity placement groups. In the next step, you need to create the availability sets you
want to use for the application layer of your SAP system.
Define and create the proximity placement group. The command for creating the availability set requires an
additional reference to the proximity placement group ID (not the name). You can get the ID of the proximity
placement group by using this command:
Get-AzProximityPlacementGroup -ResourceGroupName "myfirstppgexercise" -Name "letsgetclose"

When you create the availability set, you need to consider additional parameters when you're using managed
disks (default unless specified otherwise) and proximity placement groups:

New-AzAvailabilitySet -ResourceGroupName "myfirstppgexercise" -Name "myppgavset" -Location "westus2" -


ProximityPlacementGroupId "/subscriptions/my very long ppg id string" -sku "aligned" -
PlatformUpdateDomainCount 3 -PlatformFaultDomainCount 2

Ideally, you should use three fault domains. But the number of supported fault domains can vary from region
to region. In this case, the maximum number of fault domains possible for the specific regions is two. To deploy
your application layer VMs, you need to add a reference to your availability set name and the proximity
placement group name, as shown here:

New-AzVm -ResourceGroupName "myfirstppgexercise" -Name "myppgavsetappvm" -Location "westus2" -OpenPorts


80,3389 -AvailabilitySetName "myppgavset" -ProximityPlacementGroup "letsgetclose" -Size "Standard_DS11_v2"

The result of this deployment is:


A DBMS layer and Central Services for your SAP system that's located in a specific Availability Zone or
Availability Zones.
An SAP application layer that's located through availability sets in the same Azure datacenters as the DBMS
VM or VMs.

NOTE
Because you deploy one DBMS VM into one zone and the second DBMS VM into another zone to create a high
availability configuration, you'll need a different proximity placement group for each of the zones. The same is true for
any availability set that you use.

Move an existing system into proximity placement groups


If you already have SAP systems deployed, you might want to optimize the network latency of some of your
critical systems and locate the application layer and the DBMS layer in the same datacenter. To move the VMs of
a complete Azure availability set to an existing proximity placement group that is scoped already, you need to
shut down all VMs of the availability set and assign the availability set to the existing proximity placement
group through Azure portal, PowerShell, or CLI. If you want to move a VM that is not part of an availability set
into an existing proximity placement group, you just need to shut down the VM and assign it to an existing
proximity placement group.

Next steps
Check out the documentation:
SAP workloads on Azure: planning and deployment checklist
Preview: Deploy VMs to proximity placement groups using Azure CLI
Preview: Deploy VMs to proximity placement groups using PowerShell
Considerations for Azure Virtual Machines DBMS deployment for SAP workloads
SAP BusinessObjects BI platform planning and
implementation guide on Azure

Overview
The purpose of this guide is to provide guidelines for planning, deploying, and configuring SAP BusinessObjects BI
Platform, also known as SAP BOBI Platform on Azure. This guide is intended to cover common Azure services and
features that are relevant for SAP BOBI Platform. This guide isn't an exhaustive list of all possible configuration
options. It covers solutions common to typical deployment scenarios.
This guide isn't intended to replace the standard SAP BOBI Platform installation and administration guides,
operating system, or any database documentation.

Plan and implement SAP BusinessObjects BI platform on Azure


Microsoft Azure offers a wide range of services including compute, storage, networking, and many others for
businesses to build their applications without lengthy procurement cycles. Azure virtual machines (VM) help
companies to deploy on-demand and scalable computing resources for different SAP applications like SAP
NetWeaver based applications, SAP Hybris, SAP BusinessObjects BI Platform, based on their business need. Azure
also supports the cross-premises connectivity, which enables companies to integrate Azure virtual machines into
their on-premises domains, their private clouds and their SAP system landscape.
This document provides guidance on planning and implementation consideration for SAP BusinessObjects BI
Platform on Azure. It complements the SAP installation documentation and SAP Notes, which represent the
primary resources for installations and deployments of SAP BOBI.
Architecture overview
SAP BusinessObjects BI Platform is a self-contained system that can exist on a single Azure virtual machine or can
be scaled into a cluster of many Azure Virtual Machines that run different components. SAP BOBI Platform consists
of six conceptual tiers: Client Tier, Web Tier, Management Tier, Storage Tier, Processing Tier, and Data Tier. (For more
details on each tier, refer to the Administrator Guide in the SAP BusinessObjects Business Intelligence Platform help portal).
The following are high-level details on each tier:
Client Tier : It contains all desktop client applications that interact with the BI platform to provide different kind
of reporting, analytic, and administrative capabilities.
Web Tier : It contains web applications deployed to JAVA web application servers. Web applications provide BI
Platform functionality to end users through a web browser.
Management Tier : It coordinates and controls all the components that make up the BI Platform. It includes the
Central Management Server (CMS), the Event Server, and associated services.
Storage Tier : It is responsible for handling files, such as documents and reports. It also handles report caching
to save system resources when users access reports.
Processing Tier : It analyzes data, and produces reports and other output types. It's the only tier that accesses
the databases that contain report data.
Data Tier : It consists of the database servers hosting the CMS system databases and Auditing Data Store.
The SAP BI Platform consists of a collection of servers running on one or more hosts. It's essential that you choose
the correct deployment strategy based on sizing, business need, and type of environment. For small installations
like development or test, you can use a single Azure virtual machine for the web application server, database server,
and all BI Platform servers. If you're using a Database-as-a-Service (DBaaS) offering from Azure, the database
server runs separately from the other components. For medium and large installations, you can have servers
running on multiple Azure virtual machines.
The figure below shows the architecture of a large-scale deployment of SAP BOBI Platform on Azure virtual machines,
where each component is distributed and placed in availability sets that can sustain failover in case of a service
disruption.

Architecture details
Load balancer
In SAP BOBI multi-instance deployment, Web application servers (or web tier) are running on two or more
hosts. To distribute user load evenly across web servers, you can use a load balancer between end users and
web servers. In Azure, you can either use Azure Load Balancer or Azure Application Gateway to manage
traffic to your web servers.
Web application servers
The web server hosts the web applications of SAP BOBI Platform like CMC and BI Launch Pad. To achieve
high availability for web server, you must deploy at least two web application servers to manage
redundancy and load balancing. In Azure, these web application servers can be placed either in availability
sets or availability zones for better availability.
Tomcat is the default web application server for SAP BI Platform. To achieve high availability for Tomcat, enable
session replication using the Static Membership Interceptor in Azure. This ensures that users can access the SAP BI web
application even when the Tomcat service is disrupted.

IMPORTANT
By default Tomcat uses multicast IP and Port for clustering which is not supported on Azure (SAP Note 2764907).

BI platform servers
BI Platform servers include all the services that are part of SAP BOBI application (management tier,
processing tier, and storage tier). When a web server receives a request, it detects each BI platform server
(specifically, all CMS servers in a cluster) and automatically load balances requests across them. If one of the
BI Platform hosts fails, the web server automatically sends requests to another host.
To achieve high availability or redundancy for BI Platform, you must deploy the application in at least two
Azure virtual machines. Based on the sizing, you can scale your BI Platform to run on more Azure virtual
machines.
File repository server (FRS)
File Repository Server contains all reports and other BI documents that have been created. In multi-instance
deployment, BI Platform servers are running on multiple virtual machines and each VM should have access
to these reports and other BI documents. So, a filesystem needs to be shared across all BI platform servers.
In Azure, you can either use Azure Premium Files or Azure NetApp Files for File Repository Server. Both of
these Azure services have built-in redundancy.

IMPORTANT
SMB Protocol for Azure Files is generally available, but NFS Protocol support for Azure Files is currently in preview. For
more information, see NFS 4.1 support for Azure Files is now in preview

CMS & audit database


SAP BOBI Platform requires a database to store its system data, which is referred to as the CMS database. It's used
to store BI platform information such as user, server, folder, document, configuration, and authentication
details.
Azure offers Azure Database for MySQL and Azure SQL Database as Database-as-a-Service (DBaaS) offerings that can be
used for the CMS database and Audit database. Because these are PaaS offerings, customers don't have to worry
about operation, availability, and maintenance of the databases. Customers can also choose their own
database for the CMS and Audit repository based on their business need.

Support matrix
This section describes the supportability of different SAP BOBI components like the SAP BusinessObjects BI Platform
version, operating system, and databases in Azure.
SAP BusinessObjects BI platform
Azure Infrastructure as a Service (IaaS) enables you to deploy and configure SAP BusinessObjects BI Platform on
Azure Compute. It supports the following versions of SAP BOBI Platform:
SAP BusinessObjects BI Platform 4.3
SAP BusinessObjects BI Platform 4.2 SP04+
SAP BusinessObjects BI Platform 4.1 SP05+
The SAP BI Platform runs on different operating systems and databases. The supported combinations of operating
system and database version for the SAP BOBI platform can be found in the Product Availability Matrix for SAP BOBI.
Operating system
Azure supports the following operating systems for SAP BusinessObjects BI Platform deployment.
Microsoft Windows Server
SUSE Linux Enterprise Server (SLES)
Red Hat Enterprise Linux (RHEL)
Oracle Linux (OL)
The operating system versions listed in the Product Availability Matrix (PAM) for SAP BusinessObjects BI Platform
are supported as long as they're compatible to run on Azure infrastructure.
Databases
The BI Platform needs a database for the CMS and Auditing Data Store, which can be installed on any of the supported
databases listed in the SAP Product Availability Matrix, including the following:
Microsoft SQL Server
Azure SQL Database (Supported database only for SAP BOBI Platform on Windows)
It's a fully managed SQL Server database engine, based on the latest stable Enterprise Edition of SQL Server.
Azure SQL database handles most of the database management functions such as upgrading, patching, and
monitoring without user involvement. With Azure SQL Database, you can create a highly available and high-
performance data storage layer for the applications and solutions in Azure. For more details, check Azure
SQL Database documentation.
Azure Database for MySQL (Follow same compatibility guidelines as mentioned for MySQL AB in SAP PAM)
It's a relational database service powered by the MySQL community edition. Being a fully managed
Database-as-a-Service (DBaaS) offering, it can handle mission-critical workloads with predictable
performance and dynamic scalability. It has built-in high availability, automatic backups, software patching,
automatic failure detection, and point-in-time restore for up to 35 days, which substantially reduce
operation tasks. For more details, check Azure Database for MySQL documentation.
SAP HANA
SAP ASE
IBM DB2
Oracle (For version and restriction, check SAP Note 2039619)
MaxDB
This document illustrates the guidelines to deploy SAP BOBI Platform on Windows with Azure SQL
Database and SAP BOBI Platform on Linux with Azure Database for MySQL. It's also our recommended
approach for running SAP BusinessObjects BI Platform on Azure.

Sizing
Sizing is the process of determining the hardware requirements to run the application efficiently. For SAP BOBI
Platform, sizing needs to be done using the SAP sizing tool called Quick Sizer. The tool provides the SAPS based on the
input, which then needs to be mapped to certified Azure virtual machine types for SAP. SAP Note 1928533
provides the list of supported SAP products and Azure VM types along with SAPS. For more information on sizing,
check SAP BI Sizing Guide.
For the storage needs of SAP BOBI Platform, Azure offers different types of managed disks. For the SAP BOBI installation
directory, it's recommended to use premium managed disks, and for the database that runs on virtual machines,
follow the guidance that is provided in DBMS deployment for SAP workload.
Azure supports two DBaaS offerings for the SAP BOBI Platform data tier: Azure SQL Database (BI application running
on Windows) and Azure Database for MySQL (BI application running on Linux or Windows). Based on the
sizing result, you can choose the purchasing model that best fits your need.

TIP
For quick sizing reference, consider 800 SAPS = 1 vCPU while mapping the SAPS result of SAP BOBI Platform database tier to
Azure Database-as-a-Service (Azure SQL Database or Azure Database for MySQL).
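For example, under this rule of thumb, a database-tier sizing result of 8,000 SAPS would map to roughly 10 vCores.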

Sizing models for Azure SQL database


Azure SQL Database offers the following three purchasing models:
vCore-based
It lets you choose the number of vCores, amount of memory, and the amount and speed of storage. The
vCore-based purchasing model also allows you to use Azure Hybrid Benefit for SQL Server to gain cost
savings. This model is suited for customer who value flexibility, control, and transparency.
There are three Service Tier Options being offered in vCore model that include - General Purpose, Business
Critical, and Hyperscale. The service tier defines the storage architecture, space, I/O limits, and business
continuity options related to availability and disaster recovery. Following is high-level details on each service
tier option -
1. General Purpose service tier is best suited for Business workloads. It offers budget-oriented, balanced,
and scalable compute and storage options. For more information, refer Resource options and limits.
2. Business Critical service tier offers business applications the highest resilience to failures by using
several isolated replicas, and provides the highest I/O performance per database replica. For more
information, refer Resource options and limits.
3. Hyperscale service tier is best for business workloads with highly scalable storage and read-scale
requirements. It offers higher resilience to failures by allowing configuration of more than one isolated
database replica. For more information, refer Resource options and limits.
DTU-based
The DTU-based purchasing model offers a blend of compute, memory, and I/O resources in three service tiers, to support light and heavy database workloads. Compute sizes within each tier provide a different mix of these resources, to which you can add additional storage resources. It's best suited for customers who want simple, pre-configured resource options.
Service tiers in the DTU-based purchasing model are differentiated by a range of compute sizes with a fixed amount of included storage, a fixed retention period for backups, and a fixed price.
Serverless
The serverless model automatically scales compute based on workload demand, and bills for the amount of
compute used per second. The serverless compute tier automatically pauses databases during inactive
periods when only storage is billed, and automatically resumes databases when activity returns. For more
information, refer Resource options and limits.
It's more suitable for intermittent, unpredictable usage with low average compute utilization over time. So
this model can be used for non-production SAP BOBI deployment.

NOTE
For SAP BOBI, it's convenient to use vCore based model and choose either General Purpose or Business Critical service tier
based on the business need.

Sizing models for Azure database for MySQL


Azure Database for MySQL comes with three different pricing tiers. They're differentiated by the amount of compute in vCores, the memory per vCore, and the storage technology used to store the data. Following are high-level details on the options; for more details on the different attributes, refer to Pricing tiers for Azure Database for MySQL.
Basic
It's used for the target workloads that require light compute and I/O performance.
General Purpose
It's suited for most business workloads that require balanced compute and memory with scalable I/O
throughput.
Memory Optimized
For high-performance database workloads that require in-memory performance for faster transaction
processing and higher concurrency.

NOTE
For SAP BOBI, it is convenient to use General Purpose or Memory Optimized pricing tier based on the business workload.

Azure resources
Choosing regions
An Azure region is one or a collection of datacenters that contain the infrastructure to run and host different Azure services. This infrastructure includes a large number of nodes that function as compute nodes or storage nodes, or run network functionality. Not all regions offer the same services.
SAP BI Platform contains different components that might require specific VM types, storage like Azure Files or Azure NetApp Files, or Database as a Service (DBaaS) for its data tier, which might not be available in certain regions. You can find the exact information on VM types, Azure storage types, or other Azure services on the Products available by region site. If you're already running your SAP systems on Azure, you've probably identified your region. In that case, you first need to verify that the necessary services are available in those regions to decide the architecture of SAP BI Platform.
Availability zones
Availability Zones are physically separate locations within an Azure region. Each Availability Zone is made of one or
more datacenters equipped with independent power, cooling, and networking.
To achieve high availability on each tier for SAP BI Platform, you can distribute VMs across Availability Zone by
implementing high availability framework, which can provide the best SLA in Azure. For Virtual Machine SLA in
Azure, check the latest version of Virtual Machine SLAs.
For the data tier, the Azure Database as a Service (DBaaS) offerings provide a high availability framework by default. You just need to select the region; the service's inherent high availability, redundancy, and resiliency capabilities mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components. For more details on the SLA for the supported DBaaS offerings on Azure, check High availability in Azure Database for MySQL and High availability for Azure SQL Database.
Availability sets
An availability set is a logical grouping capability for isolating virtual machine (VM) resources from each other when they're deployed. Azure makes sure that the VMs you place within an availability set run across multiple physical servers, compute racks, storage units, and network switches. If a hardware or software failure happens, only a subset of your VMs is affected and your overall solution stays operational. When virtual machines are placed in availability sets, the Azure Fabric Controller distributes the VMs over different fault and upgrade domains to prevent all VMs from being inaccessible because of infrastructure maintenance or failure within one fault domain.
SAP BI Platform contains many different components, and while designing the architecture you have to make sure that each of these components is resilient to any disruption. This can be achieved by placing the Azure virtual machines of each component within availability sets. Keep in mind that when you mix VMs of different VM families within one availability set, you may come across problems that prevent you from including a certain VM type in that availability set. So use separate availability sets for the web application and BI application of SAP BI Platform, as highlighted in the Architecture Overview.
Also, the number of update and fault domains that can be used by an Azure availability set within an Azure scale unit is finite. So if you keep adding VMs to a single availability set, two or more VMs will eventually end up in the same fault or update domain. For more information, see the Azure Availability Sets section of the Azure virtual machines planning and implementation for SAP document.
To understand the concept of Azure availability sets and the way availability sets relate to fault and upgrade domains, read the Manage availability article.

IMPORTANT
The concepts of Azure Availability Zones and Azure availability sets are mutually exclusive. That means, you can either deploy
a pair or multiple VMs into a specific Availability Zone or an Azure availability set. But not both.

Virtual machines
Azure Virtual Machines is a service offering that enables you to deploy custom images to Azure as Infrastructure-as-a-Service (IaaS) instances. It simplifies maintaining and operating applications by providing on-demand compute and storage to host, scale, and manage web applications and connected applications.
Azure offers a variety of virtual machines for all your application needs. But for SAP workload, Azure has narrowed the selection to different VM families that are suitable for SAP workload, and for SAP HANA workload more specifically. For more insight, check What SAP software is supported for Azure deployments.
Based on the SAP BI Platform sizing, you need to map your requirement to Azure virtual machine types that are supported in Azure for SAP products. SAP Note 1928533 is a good starting point that lists supported Azure VM types for SAP products on Windows and Linux. Also keep in mind that beyond the selection of supported VM types, you need to check whether those VM types are available in your specific region. You can check the availability of VM types on the Products available by region page (see the sketch below for a command-line way to check this). For choosing the pricing model, you can refer to Azure virtual machines for SAP workload.
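As a quick way to check regional availability from the command line, the following Azure CLI sketch lists E-series VM SKUs offered in a given region. The region name and the size filter are examples only; substitute the region and VM family you're evaluating.

# List the E-series VM SKUs offered in a given region (westeurope is an example).
# SKUs that show subscription restrictions in the output can't be deployed there.
az vm list-skus \
  --location westeurope \
  --size Standard_E \
  --resource-type virtualMachines \
  --output table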
Storage
Azure Storage is an Azure-managed cloud service that provides storage that is highly available, secure, durable, scalable, and redundant. Some of the storage types have limited use for SAP scenarios, but several Azure storage types are well suited or optimized for specific SAP workload scenarios. For more information, refer to the Azure Storage types for SAP workload guide, which highlights the different storage options that are suited for SAP.
Azure Storage has different storage types available for customers, and the details can be read in the article What disk types are available in Azure?. SAP BOBI Platform uses the following Azure storage types to build the application -
Azure-managed disks
It's a block-level storage volume that is managed by Azure. You can use the disks for SAP BOBI Platform application servers and databases, when installed on Azure virtual machines. There are different types of Azure managed disks available, but it's recommended to use Premium SSDs for the SAP BOBI Platform application and database.
In the example below, Premium SSDs are used for the BOBI Platform installation directory. For a database installed on a virtual machine, you can use managed disks for the data and log volumes as per the guidelines. CMS and Audit databases are typically small, and they don't have the same storage performance requirements as other SAP OLTP/OLAP databases.
Azure Premium Files or Azure NetApp Files
In SAP BOBI Platform, the File Repository Server (FRS) refers to the disk directories where contents like reports, universes, and connections are stored, which are used by all application servers of that system. Azure Premium Files or Azure NetApp Files storage can be used as a shared file system for the SAP BOBI application's FRS. As this storage offering isn't available in all regions, refer to the Products available by region site for up-to-date information.
If the service is unavailable in your region, you can create an NFS server from which you share the file system to the SAP BOBI application. But you'll also need to consider its high availability.

Networking
SAP BOBI is a reporting and analytics BI platform that doesn't hold any business data. So the system is connected to other database servers from where it fetches all the data and provides insight to users. Azure provides a network infrastructure that allows the mapping of all scenarios that can be realized with SAP BI Platform, like connecting to on-premises systems, systems in different virtual networks, and others. For more information, check Microsoft Azure Networking for SAP Workload.
For the Database-as-a-Service offerings, any newly created database (Azure SQL Database or Azure Database for MySQL) has a firewall that blocks all external connections. To allow access to the DBaaS service from the BI Platform virtual machines, you need to specify one or more server-level firewall rules to enable access to your DBaaS server. For more information, see Firewall rules for Azure Database for MySQL and the Network Access Controls section for Azure SQL Database.

Next steps
SAP BusinessObjects BI Platform Deployment on Linux
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
SAP BusinessObjects BI platform deployment guide for Linux on Azure

This article describes the strategy to deploy SAP BOBI Platform on Azure for Linux. In this example, two virtual machines with Premium SSD managed disks as their install directories are configured. Azure Database for MySQL is used for the CMS database, and Azure NetApp Files for the File Repository Server is shared across both servers. The default Tomcat Java web application and the BI Platform application are installed together on both virtual machines. To load balance the user requests, Application Gateway is used, which has native TLS/SSL offloading capabilities.
This type of architecture is effective for small deployments or non-production environments. For production or large-scale deployments, you can have separate hosts for the web application and can also have multiple BOBI application hosts, allowing the system to process more information.

In this example, the following product versions and file system layout are used:
SAP BusinessObjects Platform 4.3
SUSE Linux Enterprise Server 12 SP5
Azure Database for MySQL (Version: 8.0.15)
MySQL C API Connector - libmysqlclient (Version: 6.1.11)

File System | Description | Size (GB) | Owner | Group | Storage
---|---|---|---|---|---
/usr/sap | The file system for installation of the SAP BOBI instance, the default Tomcat web application, and database drivers (if necessary) | SAP sizing guidelines | bl1adm | sapsys | Managed Premium Disk - SSD
/usr/sap/frsinput | The mount directory for the shared files across all BOBI hosts that will be used as the Input File Repository Directory | Business need | bl1adm | sapsys | Azure NetApp Files
/usr/sap/frsoutput | The mount directory for the shared files across all BOBI hosts that will be used as the Output File Repository Directory | Business need | bl1adm | sapsys | Azure NetApp Files

Deploy linux virtual machine via Azure portal


In this section, we'll create two virtual machines (VMs) with a Linux operating system (OS) image for SAP BOBI Platform. The high-level steps to create the virtual machines are as follows (an Azure CLI sketch of these steps is shown after the list) -
1. Create a Resource Group.
2. Create a Virtual Network.
Don't use a single subnet for all Azure services in an SAP BI Platform deployment. Based on the SAP BI Platform architecture, you need to create multiple subnets. In this deployment, we'll create three subnets - Application Subnet, File Repository Store Subnet, and Application Gateway Subnet.
In Azure, Application Gateway and Azure NetApp Files always need to be on separate subnets. Check the Azure Application Gateway and Guidelines for Azure NetApp Files Network Planning articles for more details.
3. Create an Availability Set.
To achieve redundancy for each tier in a multi-instance deployment, place the virtual machines for each tier in an availability set. Make sure you separate the availability sets for each tier based on your architecture.
4. Create Virtual Machine 1 (azusbosl1).
You can either use a custom image or choose an image from the Azure Marketplace. Refer to Deploying a VM from the Azure Marketplace for SAP or Deploying a VM with a custom image for SAP based on your need.
5. Create Virtual Machine 2 (azusbosl2).
6. Add one Premium SSD disk. It will be used as the SAP BOBI installation directory.
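The following Azure CLI sketch shows one way to script the steps above. All resource names, address ranges, the image URN, and the VM size are examples only and need to be adapted to your own sizing (SAP Note 1928533) and naming conventions.

# 1. Resource group (example names and region)
az group create --name rg-sapbobi --location westeurope

# 2. Virtual network with three subnets: application, Azure NetApp Files (delegated), Application Gateway
az network vnet create --resource-group rg-sapbobi --name vnet-sapbobi \
  --address-prefixes 10.31.0.0/16 \
  --subnet-name snet-bobi-app --subnet-prefixes 10.31.1.0/24
az network vnet subnet create --resource-group rg-sapbobi --vnet-name vnet-sapbobi \
  --name snet-anf --address-prefixes 10.31.2.0/24 \
  --delegations Microsoft.NetApp/volumes
az network vnet subnet create --resource-group rg-sapbobi --vnet-name vnet-sapbobi \
  --name snet-appgw --address-prefixes 10.31.3.0/24

# 3. Availability set for the BI application tier
az vm availability-set create --resource-group rg-sapbobi --name avset-bobi-app

# 4./5./6. Two SLES VMs, each with a 128-GiB Premium SSD data disk for /usr/sap
# The image URN and VM size are examples; pick a certified size for your SAPS requirement.
for vm in azusbosl1 azusbosl2; do
  az vm create --resource-group rg-sapbobi --name "$vm" \
    --image SUSE:sles-12-sp5:gen2:latest --size Standard_E8s_v3 \
    --vnet-name vnet-sapbobi --subnet snet-bobi-app \
    --availability-set avset-bobi-app \
    --admin-username azureuser --generate-ssh-keys \
    --storage-sku Premium_LRS --data-disk-sizes-gb 128
done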

Provision Azure NetApp Files


Before you continue with the setup for Azure NetApp Files, familiarize yourself with the Azure NetApp Files
documentation.
Azure NetApp Files is available in several Azure regions. Check to see whether your selected Azure region offers
Azure NetApp Files.
Use Azure NetApp Files availability by Azure Region page to check the availability of Azure NetApp Files by region.
Request onboarding to Azure NetApp Files by going to Register for Azure NetApp Files instructions before you
deploy Azure NetApp Files.
Deploy Azure NetApp Files resources
The following instructions assume that you've already deployed your Azure virtual network. The Azure NetApp
Files resources and VMs, where the Azure NetApp Files resources will be mounted, must be deployed in the same
Azure virtual network or in peered Azure virtual networks.
1. If you haven't already deployed the resources, request onboarding to Azure NetApp Files.
2. Create a NetApp account in your selected Azure region by following the instructions in Create a NetApp
account.
3. Set up an Azure NetApp Files capacity pool by following the instructions in Set up an Azure NetApp Files
capacity pool.
The SAP BI Platform architecture presented in this article uses a single Azure NetApp Files capacity pool
at the Premium Service level. For SAP BI File Repository Server on Azure, we recommend using an Azure
NetApp Files Premium or Ultra service Level.
4. Delegate a subnet to Azure NetApp Files, as described in the instructions in Delegate a subnet to Azure
NetApp Files.
5. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS volume for Azure
NetApp Files.
ANF volumes can be deployed as NFSv3 or NFSv4.1, as both protocols are supported for SAP BOBI Platform. Deploy the volumes in the respective Azure NetApp Files subnet. The IP addresses of the Azure NetApp Files volumes are assigned automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual network
or in peered Azure virtual networks. For example, azusbobi-frsinput, azusbobi-frsoutput are the volume names and
nfs://10.31.2.4/azusbobi-frsinput, nfs://10.31.2.4/azusbobi-frsoutput are the file paths for the Azure NetApp Files
Volumes.
Volume azusbobi-frsinput (nfs://10.31.2.4/azusbobi-frsinput)
Volume azusbobi-frsoutput (nfs://10.31.2.4/azusbobi-frsoutput)
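As an alternative to the portal steps above, the NetApp account, capacity pool, and NFS volumes can also be created with the Azure CLI. The sketch below uses example names, the Premium service level, and a 500-GiB quota per volume; adjust these to your own capacity and throughput requirements.

# NetApp account and a Premium capacity pool (minimum pool size is 4 TiB)
az netappfiles account create --resource-group rg-sapbobi --location westeurope \
  --account-name anf-sapbobi
az netappfiles pool create --resource-group rg-sapbobi --location westeurope \
  --account-name anf-sapbobi --pool-name pool-premium --size 4 --service-level Premium

# One volume per file repository (frsinput shown; repeat for frsoutput)
az netappfiles volume create --resource-group rg-sapbobi --location westeurope \
  --account-name anf-sapbobi --pool-name pool-premium --name azusbobi-frsinput \
  --service-level Premium --usage-threshold 500 --file-path azusbobi-frsinput \
  --vnet vnet-sapbobi --subnet snet-anf --protocol-types NFSv4.1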
Important considerations
As you're creating your Azure NetApp Files for the SAP BOBI Platform File Repository Server, be aware of the following considerations:
1. The minimum capacity pool is 4 tebibytes (TiB).
2. The minimum volume size is 100 gibibytes (GiB).
3. Azure NetApp Files and all virtual machines where the Azure NetApp Files volumes will be mounted must be in
the same Azure virtual network or in peered virtual networks in the same region. Azure NetApp Files access
over VNET peering in the same region is supported now. Azure NetApp access over global peering isn't
supported yet.
4. The selected virtual network must have a subnet that is delegated to Azure NetApp Files.
5. With the Azure NetApp Files export policy, you can control the allowed clients, the access type (read-write, read
only, and so on).
6. The Azure NetApp Files feature isn't zone-aware yet. Currently, the feature isn't deployed in all availability zones
in an Azure region. Be aware of the potential latency implications in some Azure regions.
7. Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both protocols are supported for
the SAP BI Platform Applications.

Configure file systems on linux servers


The steps in this section use the following prefixes:
[A] : The step applies to all hosts
Format and mount SAP file system
1. [A] List all attached disks

sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 2M 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
├─sda3 8:3 0 1G 0 part /boot
└─sda4 8:4 0 28.5G 0 part /
sdb 8:16 0 32G 0 disk
└─sdb1 8:17 0 32G 0 part /mnt
sdc 8:32 0 128G 0 disk
sr0 11:0 1 628K 0 rom
# Premium SSD of 128 GB is attached to Virtual Machine, whose device name is sdc

2. [A] Format block device for /usr/sap

sudo mkfs.xfs /dev/sdc

3. [A] Create mount directory

sudo mkdir -p /usr/sap

4. [A] Get UUID of block device

sudo blkid

#It will display information about block device. Copy UUID of the formatted block device

/dev/sdc: UUID="0eb5f6f8-fa77-42a6-b22d-7a9472b4dd1b" TYPE="xfs"

5. [A] Maintain file system mount entry in /etc/fstab

sudo echo "UUID=0eb5f6f8-fa77-42a6-b22d-7a9472b4dd1b /usr/sap xfs defaults,nofail 0 2" >> /etc/fstab

6. [A] Mount file system


sudo mount -a

sudo df -h

Filesystem Size Used Avail Use% Mounted on


devtmpfs 7.9G 8.0K 7.9G 1% /dev
tmpfs 7.9G 82M 7.8G 2% /run
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda4 29G 1.8G 27G 6% /
tmpfs 1.6G 0 1.6G 0% /run/user/1000
/dev/sda3 1014M 87M 928M 9% /boot
/dev/sda2 512M 1.1M 511M 1% /boot/efi
/dev/sdb1 32G 49M 30G 1% /mnt
/dev/sdc 128G 29G 100G 23% /usr/sap

Mount Azure NetApp Files volume


1. [A] Create mount directories

sudo mkdir -p /usr/sap/frsinput


sudo mkdir -p /usr/sap/frsoutput

2. [A] Configure Client OS to support NFSv4.1 Mount (Only applicable if using NFSv4.1)
If you're using Azure NetApp Files volumes with NFSv4.1 protocol, execute following configuration on all
VMs, where Azure NetApp Files NFSv4.1 volumes need to be mounted.
Verify NFS domain settings
Make sure that the domain is configured as the default Azure NetApp Files domain, that is, defaultv4iddomain.com, and that the mapping is set to nobody.

sudo cat /etc/idmapd.conf


# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

IMPORTANT
Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on
Azure NetApp Files: defaultv4iddomain.com . If there's a mismatch between the domain configuration on the NFS
client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure
NetApp volumes that are mounted on the VMs will be displayed as nobody.

Verify nfs4_disable_idmapping. It should be set to Y . To create the directory structure where


nfs4_disable_idmapping is located, execute the mount command. You won't be able to manually create the
directory under /sys/modules, because access is reserved for the kernel / drivers.
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping

# If you need to set nfs4_disable_idmapping to Y


mkdir /mnt/tmp
mount -t nfs -o sec=sys,vers=4.1 10.31.2.4:/azusbobi-frsinput /mnt/tmp
umount /mnt/tmp

echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping

# Make the configuration permanent


echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

3. [A] Add mount entries


If using NFSv3

sudo echo "10.31.2.4:/azusbobi-frsinput /usr/sap/frsinput nfs


rw,hard,rsize=65536,wsize=65536,vers=3" >> /etc/fstab
sudo echo "10.31.2.4:/azusbobi-frsoutput /usr/sap/frsoutput nfs
rw,hard,rsize=65536,wsize=65536,vers=3" >> /etc/fstab

If using NFSv4.1

sudo echo "10.31.2.4:/azusbobi-frsinput /usr/sap/frsinput nfs


rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys" >> /etc/fstab
sudo echo "10.31.2.4:/azusbobi-frsoutput /usr/sap/frsoutput nfs
rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys" >> /etc/fstab

4. [A] Mount NFS volumes

sudo mount -a

sudo df -h

Filesystem Size Used Avail Use% Mounted on


devtmpfs 7.9G 8.0K 7.9G 1% /dev
tmpfs 7.9G 82M 7.8G 2% /run
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda4 29G 1.8G 27G 6% /
tmpfs 1.6G 0 1.6G 0% /run/user/1000
/dev/sda3 1014M 87M 928M 9% /boot
/dev/sda2 512M 1.1M 511M 1% /boot/efi
/dev/sdb1 32G 49M 30G 1% /mnt
/dev/sdc 128G 29G 100G 23% /usr/sap
10.31.2.4:/azusbobi-frsinput 101T 18G 100T 1% /usr/sap/frsinput
10.31.2.4:/azusbobi-frsoutput 100T 512K 100T 1% /usr/sap/frsoutput

Configure CMS database - Azure database for MySQL


This section provides details on how to provision Azure Database for MySQL using Azure portal. It also provides
instructions on how to create CMS and Audit Databases for SAP BOBI Platform and a user account to access the
database.
The guidelines are applicable only if you're using Azure DB for MySQL. For other database(s), refer to SAP or
database-specific documentation for instructions.
Create an Azure database for MySQL
Sign in to the Azure portal and follow the steps mentioned in this quickstart guide for Azure Database for MySQL. A few points to note while provisioning Azure Database for MySQL -
1. Select the same region for Azure Database for MySQL where your SAP BI Platform application servers are running.
2. Choose a supported DB version based on the Product Availability Matrix (PAM) for SAP BI specific to your SAP BOBI version. Follow the same compatibility guidelines as addressed for MySQL AB in the SAP PAM.
3. In "Compute + storage", select Configure server and select the appropriate pricing tier based on your sizing output.
4. Storage Autogrowth is enabled by default. Keep in mind that storage can only be scaled up, not down.
5. By default, the Backup Retention Period is seven days, but you can optionally configure it up to 35 days.
6. Backups of Azure Database for MySQL are locally redundant by default, so if you want server backups in geo-redundant storage, select Geographically Redundant from Backup Redundancy Options.

NOTE
Changing the Backup Redundancy Options after server creation is not supported.
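
If you prefer scripting over the portal, the same server can be provisioned with the Azure CLI. The sketch below uses an example server name, a General Purpose SKU, storage size in MB, and a 35-day retention with geo-redundant backups; adjust these values to your sizing output and compliance needs.

# Provision an Azure Database for MySQL (single server) instance.
# Server name, SKU, storage size, and credentials are examples only.
az mysql server create --resource-group rg-sapbobi --name azusbobi-mysql \
  --location westeurope --admin-user mysqladmin --admin-password '<strong-password>' \
  --sku-name GP_Gen5_4 --version 8.0 --storage-size 102400 \
  --backup-retention 35 --geo-redundant-backup Enabled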

Configure connection security


By default, the server created is protected with a firewall and isn't accessible publicly. To provide access to the virtual network where the SAP BI Platform application servers are running, follow the steps below (a CLI sketch follows the list) -
1. Go to the server resource in the Azure portal and select Connection security from the left-side menu of your server resource.
2. Select Yes for Allow access to Azure services.
3. Under VNET rules, select Adding existing virtual network. Select the virtual network and subnet of the SAP BI Platform application servers. You also need to provide access to the jump box or other servers from where you can connect MySQL Workbench to Azure Database for MySQL. MySQL Workbench will be used to create the CMS and Audit databases.
4. Once the virtual networks are added, select Save.
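The same connection-security settings can also be applied with the Azure CLI, for example as part of a deployment script. The rule names, subnet names, and the jump box IP below are examples; the application subnet needs the Microsoft.Sql service endpoint enabled for the VNet rule to take effect.

# Allow the SAP BI Platform application subnet to reach the MySQL server (service-endpoint-based VNet rule)
az mysql server vnet-rule create --resource-group rg-sapbobi --server-name azusbobi-mysql \
  --name allow-bobi-app-subnet --vnet-name vnet-sapbobi --subnet snet-bobi-app

# Optionally allow a jump box (by its public IP) so MySQL Workbench can connect
az mysql server firewall-rule create --resource-group rg-sapbobi --server-name azusbobi-mysql \
  --name allow-jumpbox --start-ip-address 203.0.113.10 --end-ip-address 203.0.113.10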
Create CMS and audit database
1. Download and install MySQL Workbench from the MySQL website. Make sure you install MySQL Workbench on a server that can access Azure Database for MySQL.
2. Connect to the server by using MySQL Workbench. Follow the instructions mentioned in this article. If the connection test is successful, you'll get a success message.

3. In SQL query tab, run below query to create schema for CMS and Audit database.
# Here cmsbl1 is the database name of CMS database. You can provide the name you want for CMS database.
CREATE SCHEMA `cmsbl1` DEFAULT CHARACTER SET utf8;

# auditbl1 is the database name of the Audit database. You can provide the name you want for the Audit database.
CREATE SCHEMA `auditbl1` DEFAULT CHARACTER SET utf8;

4. Create user account to connect to schema

# Create a user that can connect from any host, use the '%' wildcard as a host part
CREATE USER 'cmsadmin'@'%' IDENTIFIED BY 'password';
CREATE USER 'auditadmin'@'%' IDENTIFIED BY 'password';

# Grant all privileges to a user account over a specific database:


GRANT ALL PRIVILEGES ON cmsbl1.* TO 'cmsadmin'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON auditbl1.* TO 'auditadmin'@'%' WITH GRANT OPTION;

# Following any updates to the user privileges, be sure to save the changes by issuing FLUSH PRIVILEGES
FLUSH PRIVILEGES;

5. To check the privileges and roles of MySQL user account

USE sys;
SHOW GRANTS for 'cmsadmin'@'%';
+------------------------------------------------------------------------+
| Grants for cmsadmin@% |
+------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `cmsadmin`@`%` |
| GRANT ALL PRIVILEGES ON `cmsbl1`.* TO `cmsadmin`@`%` WITH GRANT OPTION |
+------------------------------------------------------------------------+

USE sys;
SHOW GRANTS FOR 'auditadmin'@'%';
+----------------------------------------------------------------------------+
| Grants for auditadmin@% |
+----------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `auditadmin`@`%` |
| GRANT ALL PRIVILEGES ON `auditbl1`.* TO `auditadmin`@`%` WITH GRANT OPTION |
+----------------------------------------------------------------------------+

Install MySQL C API connector (libmysqlclient) on linux server


For the SAP BOBI application server to access the database, it requires database client drivers. The MySQL C API Connector for Linux has to be used to access the CMS and Audit databases. An ODBC connection to the CMS database isn't supported. This section provides instructions on how to set up the MySQL C API Connector on Linux.
1. Refer to MySQL drivers and management tools compatible with Azure Database for MySQL article, which
describes the drivers that are compatible with Azure Database for MySQL. Check for MySQL Connector/C
(libmysqlclient) driver in the article.
2. Refer to this link to download drivers.
3. Select the operating system and download the shared component rpm package of MySQL Connector. In this
example, mysql-connector-c-shared-6.1.11 connector version is used.
4. Install the connector on all SAP BOBI application instances.
# Install rpm package
SLES: sudo zypper install <package>.rpm
RHEL: sudo yum install <package>.rpm

5. Check the path of libmysqlclient.so

# Find the location of libmysqlclient.so file


whereis libmysqlclient

# sample output
libmysqlclient: /usr/lib64/libmysqlclient.so

6. Set LD_LIBRARY_PATH to point to /usr/lib64 directory for user account that will be used for installation.

# This configuration is for bash shell. If you are using any other shell for sidadm, kindly set the environment variable accordingly.
vi /home/bl1adm/.bashrc

export LD_LIBRARY_PATH=/usr/lib64

Server Preparation
The steps in this section use the following prefixes:
[A] : The step applies to all hosts.
1. [A] Based on the flavor of Linux (SLES or RHEL), you need to set kernel parameters and install required
libraries. Refer to System requirements section in Business Intelligence Platform Installation Guide for
Unix.
2. [A] Ensure the time zone on your machine is set correctly. Refer to Additional Unix and Linux requirements
section in Installation Guide.
3. [A] Create a user account (bl1adm) and group (sapsys) under which the software's background processes can run. Use this account to execute the installation and run the software. The account doesn't require root privileges.
4. [A] Set the user account (bl1adm) environment to use a supported UTF-8 locale and ensure that your console software supports UTF-8 character sets. To ensure that your operating system uses the correct locale, set the LC_ALL and LANG environment variables to your preferred locale in your (bl1adm) user environment.

# This configuration is for bash shell. If you are using any other shell for sidadm, kindly set the environment variable accordingly.
vi /home/bl1adm/.bashrc

export LANG=en_US.utf8
export LC_ALL=en_US.utf8

5. [A] Configure the user account (bl1adm).


# Set ulimit for bl1adm to unlimited
root@azusbosl1:~> ulimit -f unlimited bl1adm
root@azusbosl1:~> ulimit -u unlimited bl1adm

root@azusbosl1:~> su - bl1adm
bl1adm@azusbosl1:~> ulimit -a

core file size (blocks, -c) unlimited


data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63936
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

6. Download and extract media for SAP BusinessObjects BI Platform from SAP Service Marketplace.

Installation
Check the locale for the user account bl1adm on the server

bl1adm@azusbosl1:~> locale
LANG=en_US.utf8
LC_ALL=en_US.utf8

Navigate to the media of SAP BusinessObjects BI Platform and run the command below with the bl1adm user -

./setup.sh -InstallDir /usr/sap/BL1

Follow the SAP BOBI Platform Installation Guide for Unix, specific to your version. A few points to note while installing SAP BOBI Platform:
On the Configure Product Registration screen, you can either use a temporary license key for SAP BusinessObjects Solutions from SAP Note 1288121 or generate a license key in the SAP Service Marketplace.
On the Select Install Type screen, select Full installation on the first server (azusbosl1); for the other server (azusbosl2), select Custom / Expand, which will expand the existing BOBI setup.
On the Select Default or Existing Database screen, select Configure an existing database, which will prompt you to select the CMS and Audit databases. Select MySQL for the CMS Database type and Audit Database type.
You can also select No auditing database if you don't want to configure auditing during installation.
Select appropriate options on the Select Java Web Application Server screen based on your SAP BOBI architecture. In this example, we selected option 1, which installs the Tomcat server on the same SAP BOBI Platform.
Enter the CMS database information in Configure CMS Repository Database - MySQL. Example input for the CMS database information for a Linux installation: Azure Database for MySQL is used on the default port 3306.
(Optional) Enter the Audit database information in Configure Audit Repository Database - MySQL. Example input for the Audit database information for a Linux installation.

Follow the instructions and enter required inputs to complete the installation.
For a multi-instance deployment, run the installation setup on the second host (azusbosl2). On the Select Install Type screen, select Custom / Expand, which will expand the existing BOBI setup.
In the Azure Database for MySQL offering, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the SELECT VERSION(); command at the MySQL prompt. So in the Central Management Console (CMC), you'll find a different database version, which is basically the version set on the gateway. Check Supported Azure Database for MySQL server versions for more details.

# Run direct query to the database using MySQL Workbench

select version();

+-----------+
| version() |
+-----------+
| 8.0.15 |
+-----------+

Post installation
Tomcat clustering - session replication
Tomcat supports clustering of two or more application servers for session replication and failover. Because SAP BOBI Platform sessions are serialized, a user session can fail over seamlessly to another instance of Tomcat, even when an application server fails.
For example, a user might be connected to a web server that fails while the user is navigating a folder hierarchy in the SAP BI application. With a correctly configured cluster, the user can continue navigating the folder hierarchy without being redirected to the sign-in page.
In SAP Note 2808640, steps to configure Tomcat clustering using multicast are provided. But in Azure, multicast isn't supported. So to make a Tomcat cluster work in Azure, you must use the StaticMembershipInterceptor (SAP Note 2764907). Check Tomcat Clustering using Static Membership for SAP BusinessObjects BI Platform on the SAP blog to set up the Tomcat cluster in Azure.
Load-balancing web tier of SAP BI platform
In an SAP BOBI multi-instance deployment, Java web application servers (the web tier) are running on two or more hosts. To distribute user load evenly across the web servers, you can use a load balancer between end users and web servers. In Azure, you can use either Azure Load Balancer or Azure Application Gateway to manage traffic to your web application servers. Details about each offering are explained in the following sections.
Azure load balancer (network-based load balancer)
Azure Load Balancer is a high-performance, low-latency layer 4 (TCP, UDP) load balancer that distributes traffic among healthy virtual machines. A load balancer health probe monitors a given port on each VM and only distributes traffic to operational virtual machines. You can choose either a public load balancer or an internal load balancer, depending on whether you want SAP BI Platform accessible from the internet or not. It's zone redundant, ensuring high availability across Availability Zones.
Refer to the Internal Load Balancer section in the figure below, where the web application server runs on port 8080, the default Tomcat HTTP port, which is monitored by the health probe. Any incoming request that comes from end users gets redirected to the web application servers (azusbosl1 or azusbosl2) in the backend pool. The load balancer doesn't support TLS/SSL termination (also known as TLS/SSL offloading). If you are using Azure Load Balancer to distribute traffic across web servers, we recommend using the Standard Load Balancer (a CLI sketch follows the note below).

NOTE
When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load
balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to
public end points. For details on how to achieve outbound connectivity see Public endpoint connectivity for Virtual Machines
using Azure Standard Load Balancer in SAP high-availability scenarios.
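
A minimal sketch of such an internal Standard Load Balancer with a TCP health probe on the default Tomcat port 8080 could look as follows. All names and the subnet are examples, and the backend VMs still have to be added to the backend pool (for example, through their NIC configuration).

# Internal Standard Load Balancer in the application subnet
az network lb create --resource-group rg-sapbobi --name lb-bobi-web --sku Standard \
  --vnet-name vnet-sapbobi --subnet snet-bobi-app \
  --frontend-ip-name fe-bobi-web --backend-pool-name be-bobi-web

# TCP health probe on the default Tomcat HTTP port
az network lb probe create --resource-group rg-sapbobi --lb-name lb-bobi-web \
  --name probe-tomcat-8080 --protocol tcp --port 8080

# Load-balancing rule that forwards port 8080 to the web application servers
az network lb rule create --resource-group rg-sapbobi --lb-name lb-bobi-web \
  --name rule-http-8080 --protocol tcp --frontend-port 8080 --backend-port 8080 \
  --frontend-ip-name fe-bobi-web --backend-pool-name be-bobi-web \
  --probe-name probe-tomcat-8080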
Azure application gateway (web application load balancer)
Azure Application Gateway (AGW) provides an Application Delivery Controller (ADC) as a service, which is used to help the application direct user traffic to one or more web application servers. It offers various layer 7 load-balancing capabilities like TLS/SSL offloading, Web Application Firewall (WAF), cookie-based session affinity, and others for your applications.
In SAP BI Platform, the application gateway directs application web traffic to the specified resources in a backend pool - azusbosl1 or azusbosl2. You assign a listener to a port, create rules, and add resources to a backend pool. In the figure below, the application gateway with a private frontend IP address (10.31.3.20) acts as the entry point for users, handles incoming TLS/SSL (HTTPS - TCP/443) connections, decrypts the TLS/SSL, and passes on the unencrypted request (HTTP - TCP/8080) to the servers in the backend pool. With the built-in TLS/SSL termination feature, you just need to maintain one TLS/SSL certificate on the application gateway, which simplifies operations.
To configure Application Gateway for SAP BOBI Web Server, you can refer to Load Balancing SAP BOBI Web
Servers using Azure Application Gateway on SAP blog.

NOTE
We recommend using Azure Application Gateway to load balance the traffic to the web servers, as it provides features like SSL offloading, centralized SSL management to reduce the encryption and decryption overhead on the servers, a round-robin algorithm to distribute traffic, Web Application Firewall (WAF) capabilities, high availability, and so on.

SAP BusinessObjects BI Platform - back up and restore


Backup and restore is a process of creating periodic copies of data and applications to a separate location, so that they can be restored or recovered to a previous state if the original data or applications are lost or damaged. It's also an essential component of any business disaster recovery strategy.
To develop a comprehensive backup and restore strategy for SAP BOBI Platform, identify the components that lead to system downtime or disruption in the application. In SAP BOBI Platform, backup of the following components is vital to protect the application:
SAP BOBI Installation Directory (Managed Premium Disks)
File Repository Server (Azure NetApp Files or Azure Premium Files)
CMS Database (Azure Database for MySQL or Database on Azure VM)
The following sections describe how to implement a backup and restore strategy for each component of SAP BOBI Platform.
Backup & restore for SAP BOBI installation directory
In Azure, the simplest way to back up application servers and all the attached disks is by using the Azure Backup service. It provides independent and isolated backups to guard against unintended destruction of the data on your VMs. Backups are stored in a Recovery Services vault with built-in management of recovery points. Configuration and scaling are simple, backups are optimized, and they can be restored easily when needed.
As part of the backup process, a snapshot is taken and the data is transferred to the Recovery Services vault with no impact on production workloads. The snapshot provides different levels of consistency, as described in the Snapshot Consistency article. You can also choose to back up a subset of the data disks in a VM by using the selective disk backup and restore functionality. For more information, see the Azure VM Backup document and FAQs - Backup Azure VMs.
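As a sketch, enabling Azure Backup for one of the BI application VMs from the Azure CLI could look like the example below. The vault name is an example, DefaultPolicy is the policy created with the vault, and the same commands would be repeated for each VM you want to protect.

# Create a Recovery Services vault and protect a VM with the default backup policy
az backup vault create --resource-group rg-sapbobi --name rsv-sapbobi --location westeurope
az backup protection enable-for-vm --resource-group rg-sapbobi --vault-name rsv-sapbobi \
  --vm azusbosl1 --policy-name DefaultPolicy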
Backup & restore for file repository server
For Azure NetApp Files, you can create on-demand snapshots and schedule automatic snapshots by using snapshot policies. Snapshot copies provide a point-in-time copy of your ANF volume. For more information, see Manage snapshots by using Azure NetApp Files.
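For example, an on-demand snapshot of the input file repository volume can be created with the Azure CLI as sketched below; the account, pool, and volume names match the earlier examples, and the snapshot name is arbitrary.

# Create an on-demand snapshot of the frsinput volume
az netappfiles snapshot create --resource-group rg-sapbobi --location westeurope \
  --account-name anf-sapbobi --pool-name pool-premium --volume-name azusbobi-frsinput \
  --name "frsinput-$(date +%Y%m%d-%H%M)"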
Azure Files backup is integrated with native Azure Backup service, which centralizes the backup and restore
function along with VMs backup and simplifies operation work. For more information, see Azure File Share backup
and FAQs - Back up Azure Files.
Backup & restore for CMS database
Azure Database of MySQL is DBaaS offering in Azure, which automatically creates server backups and stores them
in user configured locally redundant or geo-redundant storage. Azure Database of MySQL takes backups of the
data files and the transaction log. Depending on the supported maximum storage size, it either takes full and
differential backups (4-TB max storage servers) or snapshot backup (up to 16-TB max storage servers). These
backups allow you to restore a server at any point-in-time within your configured backup retention period. The
default backup retention period is seven days, which you can optionally configure it up to three days. All backups
are encrypted using AES 256-bit encryption.
These backup files aren't user-exposed and cannot be exported. These backups can only be used for restore
operations in Azure Database for MySQL. You can use mysqldump to copy a database. For more information, see
Backup and restore in Azure Database for MySQL.
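A minimal mysqldump sketch for copying the CMS schema out of Azure Database for MySQL is shown below. The server and user names follow the earlier examples (single-server logins use the user@servername format), and --set-gtid-purged=OFF keeps the dump importable into another server.

# Logical export of the CMS database; prompts for the password of cmsadmin
mysqldump --host=azusbobi-mysql.mysql.database.azure.com \
  --user=cmsadmin@azusbobi-mysql --password \
  --single-transaction --set-gtid-purged=OFF \
  --databases cmsbl1 > cmsbl1_backup.sql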
For a database installed on virtual machines, you can use standard backup tools or Azure Backup for SAP HANA databases. Also, if the Azure services and tools don't meet your requirements, you can use other backup tools or scripts to create disk backups.

SAP BusinessObjects BI platform reliability


SAP BusinessObjects BI Platform includes different tiers, which are optimized for specific tasks and operations. When a component from any one tier becomes unavailable, the SAP BOBI application will either become inaccessible or certain functionality of the application won't work. So you need to make sure that each tier is designed to be reliable to keep the application operational without any business disruption.
This section focuses on the following options for SAP BOBI Platform -
High Availability: A highly available platform has at least two of everything within an Azure region to keep the application operational if one of the servers becomes unavailable.
Disaster Recovery: It's the process of restoring your application functionality if there is a catastrophic loss, like an entire Azure region becoming unavailable because of a natural disaster.
Implementation of these solutions varies based on the nature of the system setup in Azure. So customers need to tailor the high availability and disaster recovery solution based on their business requirements.
High availability
High availability refers to a set of technologies that can minimize IT disruptions by providing business continuity of applications/services through redundant, fault-tolerant, or failover-protected components inside the same data center. In our case, the data centers are within one Azure region. The article High-availability Architecture and Scenarios for SAP provides an initial insight into different high availability techniques and recommendations offered on Azure for SAP applications, which complement the instructions in this section.
Based on the sizing result of SAP BOBI Platform, you need to design the landscape and determine the distribution of BI components across Azure virtual machines and subnets. The level of redundancy in the distributed architecture depends on the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) required by the business.
SAP BOBI Platform includes different tiers, and the components on each tier should be designed to achieve redundancy, so that if one component fails, there's little to no disruption to your SAP BOBI application. For example,
Redundant Application Servers like BI Application Servers and Web Server
Unique Components like CMS Database, File Repository Server, Load Balancer
Following section describes how to achieve high availability on each component of SAP BOBI Platform.
High availability for application servers
BI and web application servers, whether they're installed separately or together, don't need a specific high availability solution. You can achieve high availability through redundancy, that is, by configuring multiple instances of BI and web servers on various Azure virtual machines.
To reduce the impact of downtime due to one or more events, it's advisable to follow the high availability practices below for the application servers running on multiple virtual machines.
Use Availability Zones to protect datacenter failures.
Configure multiple Virtual Machines in an Availability Set for redundancy.
Use Managed Disks for VMs in an Availability Set.
Configure each application tier into separate Availability Sets.
For more information, check Manage the availability of Linux virtual machines
High availability for CMS database
If you're using an Azure Database as a Service (DBaaS) offering for the CMS database, a high availability framework is provided by default. You just need to select the region; the service's inherent high availability, redundancy, and resiliency capabilities apply without requiring you to configure any additional components. For more details on the SLA of the supported DBaaS offerings on Azure, check High availability in Azure Database for MySQL and High availability for Azure SQL Database.
For other DBMS deployment for CMS database refer to DBMS deployment guides for SAP Workload, which
provides insight on different DBMS deployment and its approach to attain high availability.
High availability for file repository server
File Repository Server (FRS) refers to the disk directories where contents like reports, universes, and connections are stored. It's shared across all application servers of that system, so you must make sure that it's highly available.
On Azure, you can choose either Azure Premium Files or Azure NetApp Files for the file share; both are designed to be highly available and highly durable in nature. For more information, see the Redundancy section for Azure Files.

NOTE
SMB Protocol for Azure Files is generally available, but NFS Protocol support for Azure Files is currently in preview. For more
information, see NFS 4.1 support for Azure Files is now in preview

As these file share services aren't available in all regions, make sure you refer to the Products available by region site for up-to-date information. If the service isn't available in your region, you can create an NFS server from which you share the file system to the SAP BOBI application. But you'll also need to consider its high availability.
High availability for load balancer
To distribute traffic across the web servers, you can use either Azure Load Balancer or Azure Application Gateway. The redundancy for either load balancer can be achieved based on the SKU you choose for deployment.
For Azure Load Balancer, redundancy can be achieved by configuring the Standard Load Balancer frontend as zone-redundant. For more information, see Standard Load Balancer and Availability Zones.
For Application Gateway, high availability can be achieved based on the type of tier selected during deployment.
The v1 SKU supports high-availability scenarios when you've deployed two or more instances. Azure distributes these instances across update and fault domains to ensure that instances don't all fail at the same time. So with this SKU, redundancy can be achieved within the zone.
The v2 SKU automatically ensures that new instances are spread across fault domains and update domains. If you choose zone redundancy, the newest instances are also spread across availability zones to offer zonal failure resiliency. For more details, refer to Autoscaling and Zone-redundant Application Gateway v2.
Reference high availability architecture for SAP BusinessObjects BI platform
The reference architecture below describes the setup of SAP BOBI Platform using availability sets, which provide VM redundancy and availability within the zone. The architecture showcases the use of different Azure services like Azure Application Gateway, Azure NetApp Files, and Azure Database for MySQL for SAP BOBI Platform that offer built-in redundancy, which reduces the complexity of managing different high availability solutions.
In the figure below, the incoming traffic (HTTPS - TCP/443) is load balanced using the Azure Application Gateway v1 SKU, which is highly available when deployed on two or more instances. Multiple instances of web servers, management servers, and processing servers are deployed in separate virtual machines to achieve redundancy, and each tier is deployed in separate availability sets. Azure NetApp Files has built-in redundancy within the datacenter, so your ANF volumes for the File Repository Server will be highly available. The CMS database is provisioned on Azure Database for MySQL (DBaaS), which has inherent high availability. For more information, see the High availability in Azure Database for MySQL guide.

The above architecture provides insight into how an SAP BOBI deployment on Azure can be done, but it doesn't cover all possible configuration options for SAP BOBI Platform on Azure. Customers can tailor their deployment based on their business requirements, by choosing different products/services for different components like the load balancer, File Repository Server, and DBMS.
Several Azure regions offer Availability Zones, which have independent power supply, cooling, and network. They enable customers to deploy applications across two or three availability zones. Customers who want to achieve high availability across AZs can deploy SAP BOBI Platform across availability zones, making sure that each component in the application is zone redundant.
Disaster recovery
The instructions in this section explain the strategy to provide disaster recovery protection for SAP BOBI Platform. They complement the Disaster Recovery for SAP document, which represents the primary resource for the overall SAP disaster recovery approach.
Reference disaster recovery architecture for SAP BusinessObjects BI platform
This reference architecture runs a multi-instance deployment of SAP BOBI Platform with redundant application servers. For disaster recovery, you should fail over all tiers to a secondary region. Each tier uses a different strategy to provide disaster recovery protection.

Load balancer
A load balancer is used to distribute traffic across the web application servers of SAP BOBI Platform. To achieve DR for Azure Application Gateway, implement a parallel setup of the application gateway in the secondary region.
Virtual machines running web and BI application servers
The Azure Site Recovery service can be used to replicate the virtual machines running the web and BI application servers to the secondary region. It replicates the servers to the secondary region so that when disasters and outages occur, you can easily fail over to your replicated environment and continue working.
File repository servers
Azure NetApp Files provides NFS and SMB volumes, so any file-based copy tool can be used to replicate data between Azure regions. For more information on how to copy an ANF volume to another region, see FAQs About Azure NetApp Files.
You can use Azure NetApp Files Cross Region Replication, which is currently in preview and uses NetApp SnapMirror® technology, so only changed blocks are sent over the network in a compressed, efficient format. This proprietary technology minimizes the amount of data required to replicate across the regions, which saves data transfer costs. It also shortens the replication time, so you can achieve a smaller Recovery Point Objective (RPO). Refer to Requirements and considerations for using cross-region replication for more information.
Azure Premium Files only supports locally redundant storage (LRS) and zone-redundant storage (ZRS). For an Azure Premium Files DR strategy, you can use AzCopy or Azure PowerShell to copy your files to another storage account in a different region. For more information, see Disaster recovery and storage account failover.
CMS database
Azure Database for MySQL provides multiple options to recover the database if there is a disaster. Choose the appropriate option that works for your business.
Enable cross-region read replicas to enhance your business continuity and disaster recovery planning. You can replicate from the source server to up to five replicas. Read replicas are updated asynchronously using MySQL's binary log replication technology. Replicas are new servers that you manage similar to regular Azure Database for MySQL servers. Learn more about read replicas, available regions, restrictions, and how to fail over in the read replicas concepts article.
Use Azure Database for MySQL's geo-restore feature, which restores the server using geo-redundant backups. These backups are accessible even when the region your server is hosted in is offline. You can restore from these backups to any other region and bring your server back online.
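Both options can be driven from the Azure CLI; the sketch below shows an example cross-region read replica and an example geo-restore into a DR region. Resource names, regions, and the source-server resource ID are placeholders.

# Cross-region read replica of the CMS database server
az mysql server replica create --resource-group rg-sapbobi-dr --name azusbobi-mysql-rep \
  --location northeurope \
  --source-server /subscriptions/<subscription-id>/resourceGroups/rg-sapbobi/providers/Microsoft.DBforMySQL/servers/azusbobi-mysql

# Geo-restore from geo-redundant backups into the DR region
az mysql server georestore --resource-group rg-sapbobi-dr --name azusbobi-mysql-dr \
  --location northeurope --source-server azusbobi-mysql --sku-name GP_Gen5_4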

NOTE
Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. Changing the
Backup Redundancy Options after server creation is not supported. For more information, see Backup
Redundancy article.

Following are the recommendations for disaster recovery of each tier used in this example.

SAP BOBI Platform Tiers | Recommendation
---|---
Azure Application Gateway | Parallel setup of Application Gateway in the secondary region
Web Application Servers | Replicate by using Site Recovery
BI Application Servers | Replicate by using Site Recovery
Azure NetApp Files | File-based copy tool to replicate data to the secondary region, or ANF Cross Region Replication (Preview)
Azure Database for MySQL | Cross-region read replicas, or restore backup from geo-redundant backups

Next steps
Set up disaster recovery for a multi-tier SAP app deployment
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
Tutorial: Configure SAP SuccessFactors to Active Directory user provisioning

The objective of this tutorial is to show the steps you need to perform to provision users from SuccessFactors
Employee Central into Active Directory (AD) and Azure AD, with optional write-back of email address to
SuccessFactors.

NOTE
Use this tutorial if the users you want to provision from SuccessFactors need an on-premises AD account and optionally an
Azure AD account. If the users from SuccessFactors only need Azure AD account (cloud-only users), then please refer to the
tutorial on configure SAP SuccessFactors to Azure AD user provisioning.

Overview
The Azure Active Directory user provisioning service integrates with the SuccessFactors Employee Central in order
to manage the identity life cycle of users.
The SuccessFactors user provisioning workflows supported by the Azure AD user provisioning service enable
automation of the following human resources and identity lifecycle management scenarios:
Hiring new employees - When a new employee is added to SuccessFactors, a user account is
automatically created in Active Directory, Azure Active Directory, and optionally Microsoft 365 and other
SaaS applications supported by Azure AD, with write-back of the email address to SuccessFactors.
Employee attribute and profile updates - When an employee record is updated in SuccessFactors (such
as their name, title, or manager), their user account will be automatically updated in Active Directory, Azure
Active Directory, and optionally Microsoft 365 and other SaaS applications supported by Azure AD.
Employee terminations - When an employee is terminated in SuccessFactors, their user account is
automatically disabled in Active Directory, Azure Active Directory, and optionally Microsoft 365 and other
SaaS applications supported by Azure AD.
Employee rehires - When an employee is rehired in SuccessFactors, their old account can be automatically
reactivated or re-provisioned (depending on your preference) to Active Directory, Azure Active Directory,
and optionally Microsoft 365 and other SaaS applications supported by Azure AD.
Who is this user provisioning solution best suited for?
This SuccessFactors to Active Directory user provisioning solution is ideally suited for:
Organizations that desire a pre-built, cloud-based solution for SuccessFactors user provisioning
Organizations that require direct user provisioning from SuccessFactors to Active Directory
Organizations that require users to be provisioned using data obtained from the SuccessFactors Employee
Central (EC)
Organizations that require joining, moving, and leaving users to be synced to one or more Active Directory
Forests, Domains, and OUs based only on change information detected in SuccessFactors Employee Central
(EC)
Organizations using Microsoft 365 for email

Solution Architecture
This section describes the end-to-end user provisioning solution architecture for common hybrid environments.
There are two related flows:
Authoritative HR Data Flow – from SuccessFactors to on-premises Active Directory: In this flow worker events (such as New Hires, Transfers, Terminations) first occur in the cloud SuccessFactors Employee Central, and then the event data flows into on-premises Active Directory through Azure AD and the Provisioning Agent. Depending on the event, it may lead to create/update/enable/disable operations in AD.
Email Writeback Flow – from on-premises Active Directory to SuccessFactors: Once the account creation is complete in Active Directory, it is synced with Azure AD through Azure AD Connect sync, and the email attribute can be written back to SuccessFactors.

End-to-end user data flow


1. The HR team performs worker transactions (Joiners/Movers/Leavers or New Hires/Transfers/Terminations) in
SuccessFactors Employee Central
2. The Azure AD Provisioning Service runs scheduled synchronizations of identities from SuccessFactors EC and
identifies changes that need to be processed for sync with on-premises Active Directory.
3. The Azure AD Provisioning Service invokes the on-premises Azure AD Connect Provisioning Agent with a
request payload containing AD account create/update/enable/disable operations.
4. The Azure AD Connect Provisioning Agent uses a service account to add/update AD account data.
5. The Azure AD Connect Sync engine runs delta sync to pull updates in AD.
6. The Active Directory updates are synced with Azure Active Directory.
7. If the SuccessFactors Writeback app is configured, it writes back email attribute to SuccessFactors, based on the
matching attribute used.

Planning your deployment


Configuring Cloud HR driven user provisioning from SuccessFactors to AD requires considerable planning covering
different aspects such as:
Setup of the Azure AD Connect provisioning agent
Number of SuccessFactors to AD user provisioning apps to deploy
Matching ID, Attribute mapping, transformation and scoping filters
Please refer to the cloud HR deployment plan for comprehensive guidelines around these topics. Please refer to the
SAP SuccessFactors integration reference to learn about the supported entities, processing details and how to
customize the integration for different HR scenarios.

Configuring SuccessFactors for the integration


A common requirement of all the SuccessFactors provisioning connectors is that they require credentials of a
SuccessFactors account with the right permissions to invoke the SuccessFactors OData APIs. This section describes
steps to create the service account in SuccessFactors and grant appropriate permissions.
Create/identify API user account in SuccessFactors
Create an API permissions role
Create a Permission Group for the API user
Grant Permission Role to the Permission Group
Create/identify API user account in SuccessFactors
Work with your SuccessFactors admin team or implementation partner to create or identify a user account in
SuccessFactors that will be used to invoke the OData APIs. The username and password credentials of this account
will be required when configuring the provisioning apps in Azure AD.
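Once the permission role and group described in the rest of this section are in place, it can help to confirm that this account can actually call the OData API before you configure the provisioning apps. The following is a minimal sketch (not part of the official setup steps), assuming a hypothetical API host, company ID, and service account; the username@companyID format and Basic Authentication match what the provisioning apps expect, but the entity queried here (PerPerson) is just one example of an Employee Central OData v2 entity.

import requests

# Hypothetical values - replace with your tenant's API server, company ID,
# and the API user account created for this integration.
API_HOST = "api-server-name.successfactors.com"
USERNAME = "svc.aad.provisioning@COMPANYID"   # username@companyID format
PASSWORD = "********"

# Request a single PerPerson record to verify that Basic Authentication and
# the OData API permissions granted in the next steps are working.
response = requests.get(
    f"https://{API_HOST}/odata/v2/PerPerson",
    params={"$top": "1", "$format": "json"},
    auth=(USERNAME, PASSWORD),
    timeout=30,
)

print(response.status_code)   # 200 means the account can reach the OData API
print(response.json() if response.ok else response.text)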
Create an API permissions role
Log in to SAP SuccessFactors with a user account that has access to the Admin Center.
Search for Manage Permission Roles, then select Manage Permission Roles from the search results.

From the Permission Role List, click Create New .

Add a Role Name and Description for the new permission role. The name and description should indicate
that the role is for API usage permissions.

Under Permission settings, click Permission..., then scroll down the permission list and click Manage
Integration Tools . Check the box for Allow Admin to Access to OData API through Basic
Authentication .
Scroll down in the same box and select Employee Central API . Add permissions as shown below to read
using ODATA API and edit using ODATA API. Select the edit option if you plan to use the same account for the
Writeback to SuccessFactors scenario.
NOTE
For the complete list of attributes retrieved by this provisioning app, please refer to SuccessFactors Attribute
Reference

Click on Done . Click Save Changes .


Create a Permission Group for the API user
In the SuccessFactors Admin Center, search for Manage Permission Groups, then select Manage Permission
Groups from the search results.

From the Manage Permission Groups window, click Create New .

Add a Group Name for the new group. The group name should indicate that the group is for API users.

Add members to the group. For example, you could select Username from the People Pool drop-down menu
and then enter the username of the API account that will be used for the integration.

Click Done to finish creating the Permission Group.


Grant Permission Role to the Permission Group
In SuccessFactors Admin Center, search for Manage Permission Roles, then select Manage Permission Roles
from the search results.
From the Permission Role List , select the role that you created for API usage permissions.
Under Grant this role to..., click Add... button.
Select Permission Group... from the drop-down menu, then click Select... to open the Groups window to
search and select the group created above.

Review the Permission Role grant to the Permission Group.

Click Save Changes .

Configuring user provisioning from SuccessFactors to Active Directory


This section provides steps for user account provisioning from SuccessFactors to each Active Directory domain
within the scope of your integration.
Add the provisioning connector app and download the Provisioning Agent
Install and configure on-premises Provisioning Agent(s)
Configure connectivity to SuccessFactors and Active Directory
Configure attribute mappings
Enable and launch user provisioning
Part 1: Add the provisioning connector app and download the Provisioning Agent
To configure SuccessFactors to Active Directory provisioning:
1. Go to https://portal.azure.com
2. In the left navigation bar, select Azure Active Directory
3. Select Enterprise Applications , then All Applications .
4. Select Add an application , and select the All category.
5. Search for SuccessFactors to Active Directory User Provisioning , and add that app from the gallery.
6. After the app is added and the app details screen is shown, select Provisioning
7. Change the Provisioning Mode to Automatic
8. Click on the information banner displayed to download the Provisioning Agent.

Part 2: Install and configure on-premises Provisioning Agent(s)


To provision to Active Directory on-premises, the Provisioning agent must be installed on a server that has .NET
4.7.1+ Framework and network access to the desired Active Directory domain(s).

TIP
You can check the version of the .NET framework on your server using the instructions provided here. If the server does not
have .NET 4.7.1 or higher installed, you can download it from here.
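As a rough local check (a sketch only, assuming a Windows server with Python available on it), the installed .NET Framework release can be read from the registry; a Release value of 461308 or higher corresponds to .NET Framework 4.7.1 or later. This does not replace the official instructions linked above.

import winreg

# .NET Framework 4.x setup records a "Release" DWORD under this key; 461308
# or higher indicates .NET Framework 4.7.1 or later, which the agent requires.
KEY_PATH = r"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"
MIN_RELEASE_FOR_4_7_1 = 461308

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    release, _ = winreg.QueryValueEx(key, "Release")

if release >= MIN_RELEASE_FOR_4_7_1:
    print(f".NET Framework 4.7.1 or later detected (Release {release}).")
else:
    print(f"Release {release} is older than 4.7.1 - install a newer .NET Framework first.")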

Transfer the downloaded agent installer to the server host and follow the steps given below to complete the agent
configuration.
1. Sign in to the Windows Server where you want to install the new agent.
2. Launch the Provisioning Agent installer, agree to the terms, and click on the Install button.
3. After installation is complete, the wizard will launch and you will see the Connect Azure AD screen. Click
on the Authenticate button to connect to your Azure AD instance.

4. Authenticate to your Azure AD instance using Global Admin Credentials.


NOTE
The Azure AD admin credentials are used only to connect to your Azure AD tenant. The agent does not store the
credentials locally on the server.

5. After successful authentication with Azure AD, you will see the Connect Active Directory screen. In this
step, enter your AD domain name and click on the Add Directory button.

6. You will now be prompted to enter the credentials required to connect to the AD Domain. On the same
screen, you can use the Select domain controller priority to specify domain controllers that the agent
should use for sending provisioning requests.
7. After configuring the domain, the installer displays a list of configured domains. On this screen, you can
repeat step #5 and #6 to add more domains or click on Next to proceed to agent registration.

NOTE
If you have multiple AD domains (e.g. na.contoso.com, emea.contoso.com), then please add each domain individually
to the list. Only adding the parent domain (e.g. contoso.com) is not sufficient. You must register each child domain
with the agent.
8. Review the configuration details and click on Confirm to register the agent.

9. The configuration wizard displays the progress of the agent registration.

10. Once the agent registration is successful, you can click on Exit to exit the Wizard.
11. Verify the installation of the agent and make sure it is running by opening the "Services" snap-in and
looking for the service named "Microsoft Azure AD Connect Provisioning Agent".
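If you prefer to script the check in step 11, a small sketch like the following can query the service by its display name; it simply shells out to PowerShell's Get-Service, and the display name is taken from the step above.

import subprocess

# Query the agent's Windows service by display name via PowerShell.
result = subprocess.run(
    [
        "powershell", "-NoProfile", "-Command",
        "Get-Service -DisplayName 'Microsoft Azure AD Connect Provisioning Agent' "
        "| Select-Object Status, Name, DisplayName | Format-List",
    ],
    capture_output=True,
    text=True,
)

# Expect "Status : Running" in the output when the agent is healthy.
print(result.stdout or result.stderr)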

Part 3: In the provisioning app, configure connectivity to SuccessFactors and Active Directory
In this step, we establish connectivity with SuccessFactors and Active Directory in the Azure portal.
1. In the Azure portal, go back to the SuccessFactors to Active Directory User Provisioning App created in Part
1
2. Complete the Admin Credentials section as follows:
Admin Username – Enter the username of the SuccessFactors API user account, with the company
ID appended. It has the format: username@companyID
Admin password – Enter the password of the SuccessFactors API user account.
Tenant URL – Enter the name of the SuccessFactors OData API services endpoint. Only enter the
host name of the server without http or https. This value should look like: api-server-name.successfactors.com.
Active Directory Forest - The "Name" of your Active Directory domain, as registered with the
agent. Use the dropdown to select the target domain for provisioning. This value is typically a string
like: contoso.com
Active Directory Container - Enter the container DN where the agent should create user accounts
by default. Example: OU=Users,DC=contoso,DC=com

NOTE
This setting only comes into play for user account creations if the parentDistinguishedName attribute is not
configured in the attribute mappings. This setting is not used for user search or update operations. The entire
domain subtree falls within the scope of the search operation.

Notification Email – Enter your email address, and check the "send email if failure occurs"
checkbox.

NOTE
The Azure AD Provisioning Service sends email notification if the provisioning job goes into a quarantine state.

Click the Test Connection button. If the connection test succeeds, click the Save button at the top. If it
fails, double-check that the SuccessFactors credentials and the AD credentials configured on the agent
setup are valid.

Once the credentials are saved successfully, the Mappings section will display the default mapping
Synchronize SuccessFactors Users to On Premises Active Directory
Part 4: Configure attribute mappings
In this section, you will configure how user data flows from SuccessFactors to Active Directory.
1. On the Provisioning tab under Mappings , click Synchronize SuccessFactors Users to On Premises
Active Directory .
2. In the Source Object Scope field, you can select which sets of users in SuccessFactors should be in scope
for provisioning to AD, by defining a set of attribute-based filters. The default scope is "all users in
SuccessFactors". Example filters:
Example: Scope to users with personIdExternal between 1000000 and 2000000 (excluding 2000000)
Attribute: personIdExternal
Operator: REGEX Match
Value: (1[0-9][0-9][0-9][0-9][0-9][0-9])
Example: Only employees and not contingent workers
Attribute: EmployeeID
Operator: IS NOT NULL
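If you want to sanity-check the regex in the first example before saving the scoping filter, its behavior can be approximated locally. This is only a rough sketch with made-up IDs; the anchoring is added for the local test, and the provisioning service's own evaluation remains authoritative.

import re

# The scoping regex from the example above matches personIdExternal values
# from 1000000 through 1999999 (>= 1000000 and < 2000000).
pattern = re.compile(r"1[0-9][0-9][0-9][0-9][0-9][0-9]")

sample_ids = ["0999999", "1000000", "1543210", "1999999", "2000000"]
for person_id in sample_ids:
    in_scope = pattern.fullmatch(person_id) is not None   # fullmatch anchors the check
    print(f"{person_id}: {'in scope' if in_scope else 'out of scope'}")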

TIP
When you are configuring the provisioning app for the first time, you will need to test and verify your attribute
mappings and expressions to make sure that they give you the desired results. Microsoft recommends using the
scoping filters under Source Object Scope to test your mappings with a few test users from SuccessFactors. Once
you have verified that the mappings work, then you can either remove the filter or gradually expand it to include
more users.

Caution

The default behavior of the provisioning engine is to disable/delete users that go out of scope. This may not
be desirable in your SuccessFactors to AD integration. To override this default behavior refer to the article
Skip deletion of user accounts that go out of scope
3. In the Target Object Actions field, you can globally filter what actions are performed on Active Directory.
Create and Update are most common.
4. In the Attribute mappings section, you can define how individual SuccessFactors attributes map to Active
Directory attributes.

NOTE
For the complete list of SuccessFactors attributes supported by the application, please refer to SuccessFactors Attribute
Reference

1. Click on an existing attribute mapping to update it, or click Add new mapping at the bottom of the screen
to add new mappings. An individual attribute mapping supports these properties:
Mapping Type
Direct – Writes the value of the SuccessFactors attribute to the AD attribute, with no changes
Constant - Write a static, constant string value to the AD attribute
Expression – Allows you to write a custom value to the AD attribute, based on one or more
SuccessFactors attributes. For more info, see this article on expressions.
Source attribute - The user attribute from SuccessFactors
Default value – Optional. If the source attribute has an empty value, the mapping will write this
value instead. Most common configuration is to leave this blank.
Target attribute – The user attribute in Active Directory.
Match objects using this attribute – Whether or not this mapping should be used to uniquely
identify users between SuccessFactors and Active Directory. This value is typically set on the Worker
ID field for SuccessFactors, which is typically mapped to one of the Employee ID attributes in Active
Directory.
Matching precedence – Multiple matching attributes can be set. When there are multiple, they are
evaluated in the order defined by this field. As soon as a match is found, no further matching
attributes are evaluated.
Apply this mapping
Always – Apply this mapping on both user creation and update actions
Only during creation - Apply this mapping only on user creation actions
2. To save your mappings, click Save at the top of the Attribute-Mapping section.
Once your attribute mapping configuration is complete, you can now enable and launch the user provisioning
service.

Enable and launch user provisioning


Once the SuccessFactors provisioning app configurations have been completed, you can turn on the provisioning
service in the Azure portal.

TIP
By default when you turn on the provisioning service, it will initiate provisioning operations for all users in scope. If there are
errors in the mapping or SuccessFactors data issues, then the provisioning job might fail and go into the quarantine state. To
avoid this, as a best practice, we recommend configuring Source Object Scope filter and testing your attribute mappings
with a few test users before launching the full sync for all users. Once you have verified that the mappings work and are
giving you the desired results, then you can either remove the filter or gradually expand it to include more users.

1. In the Provisioning tab, set the Provisioning Status to On .


2. Click Save .
3. This operation will start the initial sync, which can take a variable number of hours depending on how many
users are in the SuccessFactors tenant. You can check the progress bar to track the progress of the sync
cycle.
4. At any time, check the Audit logs tab in the Azure portal to see what actions the provisioning service has
performed. The audit logs list all individual sync events performed by the provisioning service, such as
which users are being read out of SuccessFactors and then subsequently added or updated to Active
Directory.
5. Once the initial sync is completed, it will write an audit summary report in the Provisioning tab, as shown
below.
Next steps
Learn more about supported SuccessFactors Attributes for inbound provisioning
Learn how to configure email writeback to SuccessFactors
Learn how to review logs and get reports on provisioning activity
Learn how to configure single sign-on between SuccessFactors and Azure Active Directory
Learn how to integrate other SaaS applications with Azure Active Directory
Learn how to export and import your provisioning configurations
Tutorial: Configure SAP SuccessFactors to Azure AD
user provisioning
12/22/2020 • 11 minutes to read

The objective of this tutorial is to show the steps you need to perform to provision worker data from
SuccessFactors Employee Central into Azure Active Directory, with optional write-back of email address to
SuccessFactors.

NOTE
Use this tutorial if the users you want to provision from SuccessFactors are cloud-only users who don't need an on-premises
AD account. If the users require only an on-premises AD account or both AD and Azure AD accounts, then please refer to the
tutorial on configuring SAP SuccessFactors to Active Directory user provisioning.

Overview
The Azure Active Directory user provisioning service integrates with SuccessFactors Employee Central to manage the
identity life cycle of users.
The SuccessFactors user provisioning workflows supported by the Azure AD user provisioning service enable
automation of the following human resources and identity lifecycle management scenarios:
Hiring new employees - When a new employee is added to SuccessFactors, a user account is
automatically created in Azure Active Directory and optionally Microsoft 365 and other SaaS applications
supported by Azure AD, with write-back of the email address to SuccessFactors.
Employee attribute and profile updates - When an employee record is updated in SuccessFactors (such
as their name, title, or manager), their user account will be automatically updated in Azure Active Directory and
optionally Microsoft 365 and other SaaS applications supported by Azure AD.
Employee terminations - When an employee is terminated in SuccessFactors, their user account is
automatically disabled in Azure Active Directory and optionally Microsoft 365 and other SaaS applications
supported by Azure AD.
Employee rehires - When an employee is rehired in SuccessFactors, their old account can be automatically
reactivated or re-provisioned (depending on your preference) to Azure Active Directory and optionally
Microsoft 365 and other SaaS applications supported by Azure AD.
Who is this user provisioning solution best suited for?
This SuccessFactors to Azure Active Directory user provisioning solution is ideally suited for:
Organizations that desire a pre-built, cloud-based solution for SuccessFactors user provisioning
Organizations that require direct user provisioning from SuccessFactors to Azure Active Directory
Organizations that require users to be provisioned using data obtained from the SuccessFactors Employee
Central (EC)
Organizations using Microsoft 365 for email

Solution Architecture
This section describes the end-to-end user provisioning solution architecture for cloud-only users. There are two
related flows:
Authoritative HR Data Flow – from SuccessFactors to Azure Active Directory: In this flow, worker
events (such as New Hires, Transfers, Terminations) first occur in the cloud SuccessFactors Employee Central,
and then the event data flows into Azure Active Directory. Depending on the event, it may lead to
create/update/enable/disable operations in Azure AD.
Email Writeback Flow – from Azure Active Directory to SuccessFactors: Once the account
creation is complete in Azure Active Directory, the email attribute value or UPN generated in Azure AD can
be written back to SuccessFactors.

End-to-end user data flow


1. The HR team performs worker transactions (Joiners/Movers/Leavers or New Hires/Transfers/Terminations) in
SuccessFactors Employee Central
2. The Azure AD Provisioning Service runs scheduled synchronizations of identities from SuccessFactors EC and
identifies changes that need to be processed for sync with Azure Active Directory.
3. The Azure AD Provisioning Service determines the change and invokes create/update/enable/disable operation
for the user in Azure AD.
4. If the SuccessFactors Writeback app is configured, then the user's email address is retrieved from Azure AD.
5. Azure AD provisioning service writes back email attribute to SuccessFactors, based on the matching attribute
used.

Planning your deployment


Configuring Cloud HR driven user provisioning from SuccessFactors to Azure AD requires considerable planning
covering different aspects such as:
Determining the Matching ID
Attribute mapping
Attribute transformation
Scoping filters
Please refer to the cloud HR deployment plan for comprehensive guidelines around these topics. Please refer to the
SAP SuccessFactors integration reference to learn about the supported entities, processing details and how to
customize the integration for different HR scenarios.

Configuring SuccessFactors for the integration


A common requirement of all the SuccessFactors provisioning connectors is that they require credentials of a
SuccessFactors account with the right permissions to invoke the SuccessFactors OData APIs. This section describes
steps to create the service account in SuccessFactors and grant appropriate permissions.
Create/identify API user account in SuccessFactors
Create an API permissions role
Create a Permission Group for the API user
Grant Permission Role to the Permission Group
Create/identify API user account in SuccessFactors
Work with your SuccessFactors admin team or implementation partner to create or identify a user account in
SuccessFactors that will be used to invoke the OData APIs. The username and password credentials of this account
will be required when configuring the provisioning apps in Azure AD.
Create an API permissions role
Log in to SAP SuccessFactors with a user account that has access to the Admin Center.
Search for Manage Permission Roles, then select Manage Permission Roles from the search results.

From the Permission Role List, click Create New .

Add a Role Name and Description for the new permission role. The name and description should indicate
that the role is for API usage permissions.

Under Permission settings, click Permission..., then scroll down the permission list and click Manage
Integration Tools . Check the box for Allow Admin to Access to OData API through Basic
Authentication .
Scroll down in the same box and select Employee Central API . Add permissions as shown below to read
using ODATA API and edit using ODATA API. Select the edit option if you plan to use the same account for the
Writeback to SuccessFactors scenario.

Click on Done . Click Save Changes .


Create a Permission Group for the API user
In the SuccessFactors Admin Center, search for Manage Permission Groups, then select Manage Permission
Groups from the search results.

From the Manage Permission Groups window, click Create New .

Add a Group Name for the new group. The group name should indicate that the group is for API users.

Add members to the group. For example, you could select Username from the People Pool drop-down menu
and then enter the username of the API account that will be used for the integration.

Click Done to finish creating the Permission Group.


Grant Permission Role to the Permission Group
In SuccessFactors Admin Center, search for Manage Permission Roles, then select Manage Permission Roles
from the search results.
From the Permission Role List , select the role that you created for API usage permissions.
Under Grant this role to..., click Add... button.
Select Permission Group... from the drop-down menu, then click Select... to open the Groups window to
search and select the group created above.
Review the Permission Role grant to the Permission Group.

Click Save Changes .

Configuring user provisioning from SuccessFactors to Azure AD


This section provides steps for user account provisioning from SuccessFactors to Azure AD.
Add the provisioning connector app and configure connectivity to SuccessFactors
Configure attribute mappings
Enable and launch user provisioning
Part 1: Add the provisioning connector app and configure connectivity to SuccessFactors
To configure SuccessFactors to Azure AD provisioning:
1. Go to https://portal.azure.com
2. In the left navigation bar, select Azure Active Directory
3. Select Enterprise Applications , then All Applications .
4. Select Add an application , and select the All category.
5. Search for SuccessFactors to Azure Active Directory User Provisioning , and add that app from the
gallery.
6. After the app is added and the app details screen is shown, select Provisioning
7. Change the Provisioning Mode to Automatic
8. Complete the Admin Credentials section as follows:
Admin Username – Enter the username of the SuccessFactors API user account, with the company
ID appended. It has the format: username@companyID
Admin password – Enter the password of the SuccessFactors API user account.
Tenant URL – Enter the name of the SuccessFactors OData API services endpoint. Only enter the
host name of the server without http or https. This value should look like:
api-server-name.successfactors.com.
Notification Email – Enter your email address, and check the "send email if failure occurs"
checkbox.

NOTE
The Azure AD Provisioning Service sends email notification if the provisioning job goes into a quarantine state.

Click the Test Connection button. If the connection test succeeds, click the Save button at the top. If it
fails, double-check that the SuccessFactors credentials and URL are valid.

Once the credentials are saved successfully, the Mappings section will display the default mapping
Synchronize SuccessFactors Users to Azure Active Directory
Part 2: Configure attribute mappings
In this section, you will configure how user data flows from SuccessFactors to Azure Active Directory.
1. On the Provisioning tab under Mappings , click Synchronize SuccessFactors Users to Azure Active
Directory .
2. In the Source Object Scope field, you can select which sets of users in SuccessFactors should be in scope
for provisioning to Azure AD, by defining a set of attribute-based filters. The default scope is "all users in
SuccessFactors". Example filters:
Example: Scope to users with personIdExternal between 1000000 and 2000000 (excluding 2000000)
Attribute: personIdExternal
Operator: REGEX Match
Value: (1[0-9][0-9][0-9][0-9][0-9][0-9])
Example: Only employees and not contingent workers
Attribute: EmployeeID
Operator: IS NOT NULL

TIP
When you are configuring the provisioning app for the first time, you will need to test and verify your attribute
mappings and expressions to make sure that they give you the desired results. Microsoft recommends using the
scoping filters under Source Object Scope to test your mappings with a few test users from SuccessFactors. Once
you have verified that the mappings work, then you can either remove the filter or gradually expand it to include
more users.

Caution

The default behavior of the provisioning engine is to disable/delete users that go out of scope. This may not
be desirable in your SuccessFactors to Azure AD integration. To override this default behavior refer to the
article Skip deletion of user accounts that go out of scope
3. In the Target Object Actions field, you can globally filter what actions are performed on Azure Active
Directory. Create and Update are most common.
4. In the Attribute mappings section, you can define how individual SuccessFactors attributes map to Azure
Active Directory attributes.

NOTE
For the complete list of SuccessFactors attributes supported by the application, please refer to SuccessFactors Attribute
Reference

1. Click on an existing attribute mapping to update it, or click Add new mapping at the bottom of the screen
to add new mappings. An individual attribute mapping supports these properties:
Mapping Type
Direct – Writes the value of the SuccessFactors attribute to the AD attribute, with no changes
Constant - Write a static, constant string value to the AD attribute
Expression – Allows you to write a custom value to the AD attribute, based on one or more
SuccessFactors attributes. For more info, see this article on expressions.
Source attribute - The user attribute from SuccessFactors
Default value – Optional. If the source attribute has an empty value, the mapping will write this
value instead. Most common configuration is to leave this blank.
Target attribute – The user attribute in Active Directory.
Match objects using this attribute – Whether or not this mapping should be used to uniquely
identify users between SuccessFactors and Active Directory. This value is typically set on the Worker
ID field for SuccessFactors, which is typically mapped to one of the Employee ID attributes in Active
Directory.
Matching precedence – Multiple matching attributes can be set. When there are multiple, they are
evaluated in the order defined by this field. As soon as a match is found, no further matching
attributes are evaluated.
Apply this mapping
Always – Apply this mapping on both user creation and update actions
Only during creation - Apply this mapping only on user creation actions
2. To save your mappings, click Save at the top of the Attribute-Mapping section.
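To make the mapping types in step 1 more concrete, here is a purely illustrative sketch of how Direct, Constant, and Expression mappings would resolve against a sample worker record. The attribute names, the sample expression, and the tiny evaluation loop are assumptions for illustration only; they are not the provisioning service's actual engine or expression syntax.

# Illustrative only: how Direct, Constant, and Expression mapping types resolve.
worker = {"firstName": "Dara", "lastName": "Khosrow", "personIdExternal": "1043218"}

mappings = [
    {"type": "Direct",   "source": "personIdExternal", "target": "employeeId"},
    {"type": "Constant", "value": "Contoso",           "target": "companyName"},
    # Stand-in for an expression such as Join("", firstName, ".", lastName, "@contoso.com")
    {"type": "Expression",
     "expr": lambda w: f"{w['firstName']}.{w['lastName']}@contoso.com".lower(),
     "target": "userPrincipalName"},
]

target_user = {}
for m in mappings:
    if m["type"] == "Direct":
        target_user[m["target"]] = worker.get(m["source"])
    elif m["type"] == "Constant":
        target_user[m["target"]] = m["value"]
    else:  # Expression
        target_user[m["target"]] = m["expr"](worker)

print(target_user)
# {'employeeId': '1043218', 'companyName': 'Contoso',
#  'userPrincipalName': 'dara.khosrow@contoso.com'}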
Once your attribute mapping configuration is complete, you can now enable and launch the user provisioning
service.

Enable and launch user provisioning


Once the SuccessFactors provisioning app configurations have been completed, you can turn on the provisioning
service in the Azure portal.

TIP
By default when you turn on the provisioning service, it will initiate provisioning operations for all users in scope. If there are
errors in the mapping or SuccessFactors data issues, then the provisioning job might fail and go into the quarantine state. To avoid
this, as a best practice, we recommend configuring Source Object Scope filter and testing your attribute mappings with a
few test users before launching the full sync for all users. Once you have verified that the mappings work and are giving you
the desired results, then you can either remove the filter or gradually expand it to include more users.

1. In the Provisioning tab, set the Provisioning Status to On .


2. Click Save .
3. This operation will start the initial sync, which can take a variable number of hours depending on how many
users are in the SuccessFactors tenant. You can check the progress bar to track the progress of the sync
cycle.
4. At any time, check the Audit logs tab in the Azure portal to see what actions the provisioning service has
performed. The audit logs list all individual sync events performed by the provisioning service, such as
which users are being read out of SuccessFactors and then subsequently added or updated in Azure Active Directory.
5. Once the initial sync is completed, it will write an audit summary report in the Provisioning tab, as shown
below.
Next steps
Learn more about supported SuccessFactors Attributes for inbound provisioning
Learn how to configure email writeback to SuccessFactors
Learn how to review logs and get reports on provisioning activity
Learn how to configure single sign-on between SuccessFactors and Azure Active Directory
Learn how to integrate other SaaS applications with Azure Active Directory
Learn how to export and import your provisioning configurations
Tutorial: Configure attribute write-back from Azure
AD to SAP SuccessFactors
12/22/2020 • 12 minutes to read

The objective of this tutorial is to show the steps to write-back attributes from Azure AD to SAP SuccessFactors
Employee Central.

Overview
You can configure the SAP SuccessFactors Writeback app to write specific attributes from Azure Active Directory to
SAP SuccessFactors Employee Central. The SuccessFactors writeback provisioning app supports assigning values
to the following Employee Central attributes:
Work Email
Username
Business phone number (including country code, area code, number, and extension)
Business phone number primary flag
Cell phone number (including country code, area code, number)
Cell phone primary flag
User custom01-custom15 attributes
loginMethod attribute

NOTE
This app does not have any dependency on the SuccessFactors inbound user provisioning integration apps. You can
configure it independently of the SuccessFactors to on-premises AD provisioning app or the SuccessFactors to Azure AD
provisioning app.

Who is this user provisioning solution best suited for?


This SuccessFactors Writeback user provisioning solution is ideally suited for:
Organizations using Microsoft 365 that want to write authoritative attributes managed by IT (such as
email address, phone, and username) back to SuccessFactors Employee Central.

Configuring SuccessFactors for the integration


All SuccessFactors provisioning connectors require credentials of a SuccessFactors account with the right
permissions to invoke the Employee Central OData APIs. This section describes steps to create the service account
in SuccessFactors and grant appropriate permissions.
Create/identify API user account in SuccessFactors
Create an API permissions role
Create a Permission Group for the API user
Grant Permission Role to the Permission Group
Create/identify API user account in SuccessFactors
Work with your SuccessFactors admin team or implementation partner to create or identify a user account in
SuccessFactors that will be used to invoke the OData APIs. The username and password credentials of this account
will be required when configuring the provisioning apps in Azure AD.
Create an API permissions role
1. Log in to SAP SuccessFactors with a user account that has access to the Admin Center.
2. Search for Manage Permission Roles, then select Manage Permission Roles from the search results.

3. From the Permission Role List, click Create New .

4. Add a Role Name and Description for the new permission role. The name and description should indicate
that the role is for API usage permissions.

5. Under Permission settings, click Permission..., then scroll down the permission list and click Manage
Integration Tools . Check the box for Allow Admin to Access to OData API through Basic
Authentication .
6. Scroll down in the same box and select Employee Central API . Add permissions as shown below to read
using ODATA API and edit using ODATA API. Select the edit option if you plan to use the same account for
the write-back to SuccessFactors scenario.

7. Click on Done . Click Save Changes .


Create a Permission Group for the API user
1. In the SuccessFactors Admin Center, search for Manage Permission Groups, then select Manage
Permission Groups from the search results.

2. From the Manage Permission Groups window, click Create New .

3. Add a Group Name for the new group. The group name should indicate that the group is for API users.

4. Add members to the group. For example, you could select Username from the People Pool drop-down
menu and then enter the username of the API account that will be used for the integration.

5. Click Done to finish creating the Permission Group.


Grant Permission Role to the Permission Group
1. In SuccessFactors Admin Center, search for Manage Permission Roles, then select Manage Permission
Roles from the search results.
2. From the Permission Role List , select the role that you created for API usage permissions.
3. Under Grant this role to..., click Add... button.
4. Select Permission Group... from the drop-down menu, then click Select... to open the Groups window to
search and select the group created above.

5. Review the Permission Role grant to the Permission Group.

6. Click Save Changes .

Preparing for SuccessFactors Writeback


The SuccessFactors Writeback provisioning app uses certain code values for setting email and phone numbers in
Employee Central. These code values are set as constant values in the attribute-mapping table and are different for
each SuccessFactors instance. This section provides steps to capture these code values.

NOTE
Please involve your SuccessFactors Admin to complete the steps in this section.
Identify Email and Phone Number picklist names
In SAP SuccessFactors, a picklist is a configurable set of options from which a user can make a selection. The
different types of email and phone number (e.g. business, personal, other) are represented using a picklist. In this
step, we will identify the picklists configured in your SuccessFactors tenant to store email and phone number
values.
1. In SuccessFactors Admin Center, search for Manage business configuration.

2. Under HRIS Elements , select emailInfo and click on the Details for the email-type field.

3. On the email-type details page, note down the name of the picklist associated with this field. By default, it
is ecEmailType . However it may be different in your tenant.

4. Under HRIS Elements , select phoneInfo and click on the Details for the phone-type field.
5. On the phone-type details page, note down the name of the picklist associated with this field. By default, it
is ecPhoneType . However it may be different in your tenant.

Retrieve constant value for emailType


1. In SuccessFactors Admin Center, search and open Picklist Center.
2. Use the name of the email picklist captured from the previous section (e.g. ecEmailType) to find the email
picklist.

3. Open the active email picklist.


4. On the email type picklist page, select the Business email type.

5. Note down the Option ID associated with the Business email. This is the code that we will use with
emailType in the attribute-mapping table.
NOTE
Drop the comma character when you copy over the value. For example, if the Option ID value is 8,448, then set the
emailType in Azure AD to the constant number 8448 (without the comma character).

Retrieve constant value for phoneType


1. In SuccessFactors Admin Center, search and open Picklist Center.
2. Use the name of the phone picklist captured from the previous section to find the phone picklist.

3. Open the active phone picklist.

4. On the phone type picklist page, review the different phone types listed under Picklist Values .

5. Note down the Option ID associated with the Business phone. This is the code that we will use with
businessPhoneType in the attribute-mapping table.
6. Note down the Option ID associated with the Cell phone. This is the code that we will use with
cellPhoneType in the attribute-mapping table.

NOTE
Drop the comma character when you copy over the value. For example, if the Option ID value is 10,606, then set the
cellPhoneType in Azure AD to the constant number 10606 (without the comma character).
Configuring SuccessFactors Writeback App
This section provides steps for
Add the provisioning connector app and configure connectivity to SuccessFactors
Configure attribute mappings
Enable and launch user provisioning
Part 1: Add the provisioning connector app and configure connectivity to SuccessFactors
To configure SuccessFactors Writeback :
1. Go to https://portal.azure.com
2. In the left navigation bar, select Azure Active Directory
3. Select Enterprise Applications , then All Applications .
4. Select Add an application , and select the All category.
5. Search for SuccessFactors Writeback , and add that app from the gallery.
6. After the app is added and the app details screen is shown, select Provisioning
7. Change the Provisioning Mode to Automatic
8. Complete the Admin Credentials section as follows:
Admin Username – Enter the username of the SuccessFactors API user account, with the company
ID appended. It has the format: username@companyID
Admin password – Enter the password of the SuccessFactors API user account.
Tenant URL – Enter the name of the SuccessFactors OData API services endpoint. Only enter the
host name of the server without http or https. This value should look like: api4.successfactors.com.
Notification Email – Enter your email address, and check the "send email if failure occurs"
checkbox.

NOTE
The Azure AD Provisioning Service sends email notification if the provisioning job goes into a quarantine state.

Click the Test Connection button. If the connection test succeeds, click the Save button at the top. If it
fails, double-check that the SuccessFactors credentials and URL are valid.
Once the credentials are saved successfully, the Mappings section will display the default mapping.
Refresh the page, if the attribute mappings are not visible.
Part 2: Configure attribute mappings
In this section, you will configure how user data flows from Azure AD to SuccessFactors for write-back.
1. On the Provisioning tab under Mappings , click Provision Azure Active Directory Users .
2. In the Source Object Scope field, you can select which sets of users in Azure AD should be considered for
write-back, by defining a set of attribute-based filters. The default scope is "all users in Azure AD".

TIP
When you are configuring the provisioning app for the first time, you will need to test and verify your attribute
mappings and expressions to make sure that they give you the desired results. Microsoft recommends using the
scoping filters under Source Object Scope to test your mappings with a few test users from Azure AD. Once you
have verified that the mappings work, then you can either remove the filter or gradually expand it to include more
users.

3. The Target Object Actions field only supports the Update operation.
4. In the mapping table under Attribute mappings section, you can map the following Azure Active
Directory attributes to SuccessFactors. The table below provides guidance on how to map the write-back
attributes.

# | Azure AD attribute | SuccessFactors attribute | Remarks
1 | employeeId | personIdExternal | By default, this attribute is the matching identifier. Instead of employeeId you can use any other Azure AD attribute that may store the value equal to personIdExternal in SuccessFactors.
2 | mail | email | Map the email attribute source. For testing purposes, you can map userPrincipalName to email.
3 | 8448 | emailType | This constant value is the SuccessFactors ID value associated with business email. Update this value to match your SuccessFactors environment. See the section Retrieve constant value for emailType for steps to set this value.
4 | true | emailIsPrimary | Use this attribute to set business email as primary in SuccessFactors. If business email is not primary, set this flag to false.
5 | userPrincipalName | [custom01 – custom15] | Using Add New Mapping, you can optionally write userPrincipalName or any Azure AD attribute to a custom attribute available in the SuccessFactors User object.
6 | on-prem-samAccountName | username | Using Add New Mapping, you can optionally map on-premises samAccountName to the SuccessFactors username attribute.
7 | SSO | loginMethod | If the SuccessFactors tenant is set up for partial SSO, then using Add New Mapping, you can optionally set loginMethod to a constant value of "SSO" or "PWD".
8 | telephoneNumber | businessPhoneNumber | Use this mapping to flow telephoneNumber from Azure AD to the SuccessFactors business / work phone number.
9 | 10605 | businessPhoneType | This constant value is the SuccessFactors ID value associated with business phone. Update this value to match your SuccessFactors environment. See the section Retrieve constant value for phoneType for steps to set this value.
10 | true | businessPhoneIsPrimary | Use this attribute to set the primary flag for business phone number. Valid values are true or false.
11 | mobile | cellPhoneNumber | Use this mapping to flow mobile from Azure AD to the SuccessFactors cell phone number.
12 | 10606 | cellPhoneType | This constant value is the SuccessFactors ID value associated with cell phone. Update this value to match your SuccessFactors environment. See the section Retrieve constant value for phoneType for steps to set this value.
13 | false | cellPhoneIsPrimary | Use this attribute to set the primary flag for cell phone number. Valid values are true or false.
5. Validate and review your attribute mappings.


6. Click Save to save the mappings. Next, we will update the JSON Path API expressions to use the phoneType
codes in your SuccessFactors instance.
7. Select Show advanced options .

8. Click on Edit attribute list for SuccessFactors .

NOTE
If the Edit attribute list for SuccessFactors option does not show in the Azure portal, use the URL
https://portal.azure.com/?Microsoft_AAD_IAM_forceSchemaEditorEnabled=true to access the page.

9. The API expression column in this view displays the JSON Path expressions used by the connector.
10. Update the JSON Path expressions for business phone and cell phone to use the ID value
(businessPhoneType and cellPhoneType) corresponding to your environment.
11. Click Save to save the mappings.
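To reason about what step 10 changes, keep in mind that the JSON Path expressions pick the phone entry whose type code matches your picklist Option ID. The snippet below emulates that filter in plain Python against a made-up phone list; the field names, the sample JSON Path in the comment, and the 10605/10606 codes are illustrative assumptions, not values guaranteed to match your tenant.

# Emulates what a JSON Path filter of the general form
#   ...[?(@.phoneType == '10605')].phoneNumber
# effectively does: select the phone entry whose type code matches the
# picklist Option ID for your environment. Field names are illustrative.
phones = [
    {"phoneType": "10605", "phoneNumber": "+1 425 555 0100"},  # business (example code)
    {"phoneType": "10606", "phoneNumber": "+1 425 555 0199"},  # cell (example code)
]

BUSINESS_PHONE_TYPE = "10605"   # replace with the Option ID from your picklist
CELL_PHONE_TYPE = "10606"

business = [p["phoneNumber"] for p in phones if p["phoneType"] == BUSINESS_PHONE_TYPE]
cell = [p["phoneNumber"] for p in phones if p["phoneType"] == CELL_PHONE_TYPE]

print("business:", business)   # ['+1 425 555 0100']
print("cell:", cell)           # ['+1 425 555 0199']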

Enable and launch user provisioning


Once the SuccessFactors provisioning app configurations have been completed, you can turn on the provisioning
service in the Azure portal.

TIP
By default when you turn on the provisioning service, it will initiate provisioning operations for all users in scope. If there are
errors in the mapping or data issues, then the provisioning job might fail and go into the quarantine state. To avoid this, as a
best practice, we recommend configuring Source Object Scope filter and testing your attribute mappings with a few test
users before launching the full sync for all users. Once you have verified that the mappings work and are giving you the
desired results, then you can either remove the filter or gradually expand it to include more users.

1. In the Provisioning tab, set the Provisioning Status to On .


2. Select Scope . You can select from one of the following options:
Sync all users and groups : Select this option if you plan to write back mapped attributes of all users
from Azure AD to SuccessFactors, subject to the scoping rules defined under Mappings -> Source
Object Scope .
Sync only assigned users and groups : Select this option if you plan to write back mapped attributes
of only users that you have assigned to this application in the Application -> Manage -> Users and
groups menu option. These users are also subject to the scoping rules defined under Mappings ->
Source Object Scope .

NOTE
The SuccessFactors Writeback provisioning app does not support "group assignment". Only "user assignment" is
supported.

3. Click Save .
4. This operation will start the initial sync, which can take a variable number of hours depending on how
many users are in the Azure AD tenant and the scope defined for the operation. You can check the progress
bar to track the progress of the sync cycle.
5. At any time, check the Provisioning logs tab in the Azure portal to see what actions the provisioning
service has performed. The provisioning logs list all individual sync events performed by the provisioning
service.
6. Once the initial sync is completed, it will write an audit summary report in the Provisioning tab, as shown
below.

Supported scenarios, known issues and limitations


Refer to the Writeback scenarios section of the SAP SuccessFactors integration reference guide.

Next steps
Deep dive into Azure AD and SAP SuccessFactors integration reference
Learn how to review logs and get reports on provisioning activity
Learn how to configure single sign-on between SuccessFactors and Azure Active Directory
Learn how to integrate other SaaS applications with Azure Active Directory
Learn how to export and import your provisioning configurations
Tutorial: Configure SAP Cloud Platform Identity
Authentication for automatic user provisioning
12/22/2020 • 5 minutes to read

The objective of this tutorial is to demonstrate the steps to be performed in SAP Cloud Platform Identity
Authentication and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-
provision users and/or groups to SAP Cloud Platform Identity Authentication.

NOTE
This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this
service does, how it works, and frequently asked questions, see Automate user provisioning and deprovisioning to SaaS
applications with Azure Active Directory.
This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview
features, see Supplemental Terms of Use for Microsoft Azure Previews.

Prerequisites
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
An Azure AD tenant
A SAP Cloud Platform Identity Authentication tenant
A user account in SAP Cloud Platform Identity Authentication with Admin permissions.

Assigning users to SAP Cloud Platform Identity Authentication


Azure Active Directory uses a concept called assignments to determine which users should receive access to
selected apps. In the context of automatic user provisioning, only the users and/or groups that have been assigned
to an application in Azure AD are synchronized.
Before configuring and enabling automatic user provisioning, you should decide which users and/or groups in
Azure AD need access to SAP Cloud Platform Identity Authentication. Once decided, you can assign these users
and/or groups to SAP Cloud Platform Identity Authentication by following the instructions here:
Assign a user or group to an enterprise app

Important tips for assigning users to SAP Cloud Platform Identity


Authentication
It is recommended that a single Azure AD user is assigned to SAP Cloud Platform Identity Authentication to
test the automatic user provisioning configuration. Additional users and/or groups may be assigned later.
When assigning a user to SAP Cloud Platform Identity Authentication, you must select any valid application-
specific role (if available) in the assignment dialog. Users with the Default Access role are excluded from
provisioning.

Setup SAP Cloud Platform Identity Authentication for provisioning


1. Sign in to your SAP Cloud Platform Identity Authentication Admin Console. Navigate to Users &
Authorizations > Administrators .

2. Press the +Add button on the left hand panel in order to add a new administrator to the list. Choose Add
System and enter the name of the system.

NOTE
The administrator user in SAP Cloud Platform Identity Authentication must be of type System . Creating a normal
administrator user can lead to unauthorized errors while provisioning.

3. Under Configure Authorizations, switch on the toggle button against Manage Users and Manage Groups .

4. You will receive an email to activate your account and set a password for SAP Cloud Platform Identity
Authentication Service .
5. Copy the User ID and Password . These values will be entered in the Admin Username and Admin
Password fields respectively in the Provisioning tab of your SAP Cloud Platform Identity Authentication
application in the Azure portal.
Add SAP Cloud Platform Identity Authentication from the gallery
Before configuring SAP Cloud Platform Identity Authentication for automatic user provisioning with Azure AD, you
need to add SAP Cloud Platform Identity Authentication from the Azure AD application gallery to your list of
managed SaaS applications.
To add SAP Cloud Platform Identity Authentication from the Azure AD application gallery, perform
the following steps:
1. In the Azure portal , in the left navigation panel, select Azure Active Directory .

2. Go to Enterprise applications , and then select All applications .

3. To add a new application, select the New application button at the top of the pane.

4. In the search box, enter SAP Cloud Platform Identity Authentication , select SAP Cloud Platform
Identity Authentication in the results panel, and then click the Add button to add the application.
Configuring automatic user provisioning to SAP Cloud Platform Identity
Authentication
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and
disable users and/or groups in SAP Cloud Platform Identity Authentication based on user and/or group
assignments in Azure AD.

TIP
You may also choose to enable SAML-based single sign-on for SAP Cloud Platform Identity Authentication, following the
instructions provided in the SAP Cloud Platform Identity Authentication Single sign-on tutorial. Single sign-on can be
configured independently of automatic user provisioning, though these two features complement each other.

To configure automatic user provisioning for SAP Cloud Platform Identity Authentication in Azure AD:
1. Sign in to the Azure portal. Select Enterprise Applications , then select All applications .

2. In the applications list, select SAP Cloud Platform Identity Authentication .


3. Select the Provisioning tab.

4. Set the Provisioning Mode to Automatic .

5. Under the Admin Credentials section, input https://<tenantID>.accounts.ondemand.com/service/scim in


Tenant URL . Input the User ID and Password values retrieved earlier in Admin Username and Admin
Password respectively. Click Test Connection to ensure Azure AD can connect to SAP Cloud Platform
Identity Authentication. If the connection fails, ensure your SAP Cloud Platform Identity Authentication
account has Admin permissions and try again.

6. In the Notification Email field, enter the email address of a person or group who should receive the
provisioning error notifications and check the checkbox - Send an email notification when a failure
occurs .

7. Click Save .
8. Under the Mappings section, select Synchronize Azure Active Directory Users to SAP Cloud
Platform Identity Authentication .

9. Review the user attributes that are synchronized from Azure AD to SAP Cloud Platform Identity
Authentication in the Attribute Mapping section. The attributes selected as Matching properties are used
to match the user accounts in SAP Cloud Platform Identity Authentication for update operations. Select the
Save button to commit any changes.

10. To configure scoping filters, refer to the following instructions provided in the Scoping filter tutorial.
11. To enable the Azure AD provisioning service for SAP Cloud Platform Identity Authentication, change the
Provisioning Status to On in the Settings section.

12. Define the users and/or groups that you would like to provision to SAP Cloud Platform Identity
Authentication by choosing the desired values in Scope in the Settings section.
13. When you are ready to provision, click Save .

This operation starts the initial synchronization of all users and/or groups defined in Scope in the Settings
section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40
minutes as long as the Azure AD provisioning service is running. You can use the Synchronization Details
section to monitor progress and follow links to provisioning activity report, which describes all actions performed
by the Azure AD provisioning service on SAP Cloud Platform Identity Authentication.
For more information on how to read the Azure AD provisioning logs, see Reporting on automatic user account
provisioning.

Connector limitations
SAP Cloud Platform Identity Authentication's SCIM endpoint requires certain attributes to be of specific format.
You can learn more about these attributes and their specific format here.
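If you want to inspect the exact attribute names and formats the SCIM endpoint returns, a quick read-only call can help. This is a minimal sketch, assuming the tenant URL pattern shown earlier and the system user's User ID and Password as placeholders; /Users is the standard SCIM resource, but check SAP's documentation for the specific attribute requirements referenced above.

import requests

# Placeholders - replace <tenantID> and the credentials with the values
# captured when the administrator "system" user was created.
TENANT_URL = "https://<tenantID>.accounts.ondemand.com/service/scim"
USER_ID = "P000000"     # the generated User ID of the system administrator
PASSWORD = "********"

# Fetch a single user entry to see the attribute names and formats the
# SCIM endpoint expects for provisioning operations.
response = requests.get(
    f"{TENANT_URL}/Users",
    params={"count": "1"},
    auth=(USER_ID, PASSWORD),
    timeout=30,
)

print(response.status_code)
print(response.json() if response.ok else response.text)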

Additional resources
Managing user account provisioning for Enterprise Apps
What is application access and single sign-on with Azure Active Directory?

Next steps
Learn how to review logs and get reports on provisioning activity
Tutorial: Azure Active Directory single sign-on (SSO)
integration with SAP Cloud Platform Identity
Authentication
12/22/2020 • 9 minutes to read

In this tutorial, you'll learn how to integrate SAP Cloud Platform Identity Authentication with Azure Active Directory
(Azure AD). When you integrate SAP Cloud Platform Identity Authentication with Azure AD, you can:
Control in Azure AD who has access to SAP Cloud Platform Identity Authentication.
Enable your users to be automatically signed-in to SAP Cloud Platform Identity Authentication with their Azure
AD accounts.
Manage your accounts in one central location - the Azure portal.

Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
SAP Cloud Platform Identity Authentication single sign-on (SSO) enabled subscription.

Scenario description
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
SAP Cloud Platform Identity Authentication supports SP and IDP initiated SSO
Before you dive into the technical details, it's vital to understand the concepts you're going to look at. The SAP
Cloud Platform Identity Authentication and Active Directory Federation Services enable you to implement SSO
across applications or services that are protected by Azure AD (as an IdP) with SAP applications and services that
are protected by SAP Cloud Platform Identity Authentication.
Currently, SAP Cloud Platform Identity Authentication acts as a Proxy Identity Provider to SAP applications. Azure
Active Directory in turn acts as the leading Identity Provider in this setup.
The following diagram illustrates this relationship:
With this setup, your SAP Cloud Platform Identity Authentication tenant is configured as a trusted application in
Azure Active Directory.
All SAP applications and services that you want to protect this way are subsequently configured in the SAP Cloud
Platform Identity Authentication management console.
Therefore, the authorization for granting access to SAP applications and services needs to take place in SAP Cloud
Platform Identity Authentication (as opposed to Azure Active Directory).
By configuring SAP Cloud Platform Identity Authentication as an application through the Azure Active Directory
Marketplace, you don't need to configure individual claims or SAML assertions.

NOTE
Currently only Web SSO has been tested by both parties. The flows that are necessary for App-to-API or API-to-API
communication should work but have not been tested yet. They will be tested during subsequent activities.

Adding SAP Cloud Platform Identity Authentication from the gallery


To configure the integration of SAP Cloud Platform Identity Authentication into Azure AD, you need to add SAP
Cloud Platform Identity Authentication from the gallery to your list of managed SaaS apps. A scripted alternative is
sketched after these steps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
2. On the left navigation pane, select the Azure Active Directory service.
3. Navigate to Enterprise Applications and then select All Applications .
4. To add new application, select New application .
5. In the Add from the gallery section, type SAP Cloud Platform Identity Authentication in the search box.
6. Select SAP Cloud Platform Identity Authentication from results panel and then add the app. Wait a few
seconds while the app is added to your tenant.
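If you would rather script this step, the gallery entry can also be instantiated through the Microsoft Graph applicationTemplates API. The sketch below is an assumption-laden illustration, not part of the official steps: the access token is a placeholder (it needs the Application.ReadWrite.All permission), and the filter assumes the gallery template carries exactly the display name used in the portal.

# Sketch only: add the gallery app with Microsoft Graph instead of the portal UI.
$token   = "<access-token>"
$headers = @{ Authorization = "Bearer $token"; 'Content-Type' = 'application/json' }

# Look up the gallery template by display name.
$name     = "SAP Cloud Platform Identity Authentication"
$filter   = [uri]::EscapeDataString("displayName eq '$name'")
$template = (Invoke-RestMethod -Headers $headers -Method Get `
    -Uri "https://graph.microsoft.com/v1.0/applicationTemplates?`$filter=$filter").value | Select-Object -First 1

# Instantiate it; this creates the application and its service principal in your tenant.
$body = @{ displayName = $name } | ConvertTo-Json
Invoke-RestMethod -Headers $headers -Method Post -Body $body `
    -Uri "https://graph.microsoft.com/v1.0/applicationTemplates/$($template.id)/instantiate"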

Configure and test Azure AD SSO for SAP Cloud Platform Identity
Authentication
Configure and test Azure AD SSO with SAP Cloud Platform Identity Authentication using a test user called
B.Simon . For SSO to work, you need to establish a link relationship between an Azure AD user and the related
user in SAP Cloud Platform Identity Authentication.
To configure and test Azure AD SSO with SAP Cloud Platform Identity Authentication, perform the following steps:
1. Configure Azure AD SSO - to enable your users to use this feature.
a. Create an Azure AD test user - to test Azure AD single sign-on with B.Simon.
b. Assign the Azure AD test user - to enable B.Simon to use Azure AD single sign-on.
2. Configure SAP Cloud Platform Identity Authentication SSO - to configure the single sign-on settings on
application side.
a. Create SAP Cloud Platform Identity Authentication test user - to have a counterpart of B.Simon
in SAP Cloud Platform Identity Authentication that is linked to the Azure AD representation of user.
3. Test SSO - to verify whether the configuration works.

Configure Azure AD SSO


Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the SAP Cloud Platform Identity Authentication application integration page,
find the Manage section and select single sign-on .
2. On the Select a single sign-on method page, select SAML .
3. On the Set up single sign-on with SAML page, click the edit/pen icon for Basic SAML Configuration
to edit the settings.

4. On the Basic SAML Configuration section, if you wish to configure in IDP -initiated mode, perform the
following steps:
a. In the Identifier text box, type a URL using the following pattern: <IAS-tenant-id>.accounts.ondemand.com

b. In the Reply URL text box, type a URL using the following pattern:
https://<IAS-tenant-id>.accounts.ondemand.com/saml2/idp/acs/<IAS-tenant-id>.accounts.ondemand.com

NOTE
These values are not real. Update these values with the actual identifier and Reply URL. Contact the SAP Cloud
Platform Identity Authentication Client support team to get these values. If you don't understand Identifier value,
read the SAP Cloud Platform Identity Authentication documentation about Tenant SAML 2.0 configuration.

5. Click Set additional URLs and perform the following step if you wish to configure the application in SP -
initiated mode:
In the Sign-on URL text box, type a URL using the following pattern: {YOUR BUSINESS APPLICATION URL}

NOTE
This value is not real. Update this value with the actual sign-on URL. Please use your specific business application
Sign-on URL. Contact the SAP Cloud Platform Identity Authentication Client support team if you have any doubt.

6. SAP Cloud Platform Identity Authentication application expects the SAML assertions in a specific format,
which requires you to add custom attribute mappings to your SAML token attributes configuration. The
following screenshot shows the list of default attributes.

7. In addition to the above, the SAP Cloud Platform Identity Authentication application expects a few more
attributes to be passed back in the SAML response, as shown below. These attributes are also pre-populated,
but you can review them per your requirements.

NAME          SOURCE ATTRIBUTE

firstName     user.givenname

8. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, click
Download to download the Metadata XML from the given options as per your requirement and save it on
your computer.

9. On the Set up SAP Cloud Platform Identity Authentication section, copy the appropriate URL(s) as per
your requirement.

Create an Azure AD test user


In this section, you'll create a test user in the Azure portal called B.Simon. A scripted alternative is sketched after
these steps.
1. From the left pane in the Azure portal, select Azure Active Directory , select Users , and then select All users .
2. Select New user at the top of the screen.
3. In the User properties, follow these steps:
a. In the Name field, enter B.Simon .
b. In the User name field, enter the username@companydomain.extension. For example,
B.Simon@contoso.com .
c. Select the Show password check box, and then write down the value that's displayed in the Password
box.
d. Click Create .
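As an alternative to the portal steps above, the same test user can be created with the AzureAD PowerShell module. This is only a sketch: the domain, password, and mail nickname are placeholders to replace with values valid in your tenant.

# Sketch only: create the B.Simon test user with the AzureAD PowerShell module.
Connect-AzureAD

$passwordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$passwordProfile.Password = "<initial-password>"   # note this value down, as in step 3c

New-AzureADUser -DisplayName "B.Simon" `
    -UserPrincipalName "B.Simon@contoso.com" `
    -MailNickName "B.Simon" `
    -AccountEnabled $true `
    -PasswordProfile $passwordProfile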
Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAP Cloud Platform Identity
Authentication. A scripted alternative is sketched after these steps.
1. In the Azure portal, select Enterprise Applications , and then select All applications .
2. In the applications list, select SAP Cloud Platform Identity Authentication .
3. In the app's overview page, find the Manage section and select Users and groups .
4. Select Add user , then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the
bottom of the screen.
6. If you are expecting a role to be assigned to the users, you can select it from the Select a role dropdown. If
no role has been set up for this app, you see "Default Access" role selected.
7. In the Add Assignment dialog, click the Assign button.
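The assignment can also be scripted with the AzureAD PowerShell module. The following sketch is an assumption rather than part of the tutorial: it looks the enterprise application up by display name and assigns the user to the Default Access role by passing an empty role GUID; if you created a specific app role, use its ID instead.

# Sketch only: assign B.Simon to the enterprise application from PowerShell.
$user = Get-AzureADUser -ObjectId "B.Simon@contoso.com"
$app  = Get-AzureADServicePrincipal -SearchString "SAP Cloud Platform Identity Authentication" |
        Select-Object -First 1

# An empty GUID assigns the "Default Access" role.
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId `
    -PrincipalId $user.ObjectId `
    -ResourceId $app.ObjectId `
    -Id ([Guid]::Empty)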

Configure SAP Cloud Platform Identity Authentication SSO


1. To automate the configuration within SAP Cloud Platform Identity Authentication, you need to install the My
Apps Secure Sign-in browser extension by clicking Install the extension .

2. After adding the extension to the browser, clicking Set up SAP Cloud Platform Identity Authentication
directs you to the SAP Cloud Platform Identity Authentication application. From there, provide the admin
credentials to sign into SAP Cloud Platform Identity Authentication. The browser extension automatically
configures the application for you and automates steps 3-7.
3. If you want to set up SAP Cloud Platform Identity Authentication manually, in a different web browser
window, go to the SAP Cloud Platform Identity Authentication administration console. The URL has the
following pattern: https://<tenant-id>.accounts.ondemand.com/admin . Then read the documentation about
SAP Cloud Platform Identity Authentication at Integration with Microsoft Azure AD.
4. In the Azure portal, select the Save button.
5. Continue with the following only if you want to add and enable SSO for another SAP application. Repeat the
steps under the section Adding SAP Cloud Platform Identity Authentication from the gallery .
6. In the Azure portal, on the SAP Cloud Platform Identity Authentication application integration page,
select Linked Sign-on .

7. Save the configuration.

NOTE
The new application leverages the single sign-on configuration of the previous SAP application. Make sure you use the same
Corporate Identity Providers in the SAP Cloud Platform Identity Authentication administration console.

Create SAP Cloud Platform Identity Authentication test user


You don't need to create a user in SAP Cloud Platform Identity Authentication. Users who are in the Azure AD user
store can use the SSO functionality.
SAP Cloud Platform Identity Authentication supports the Identity Federation option. This option allows the
application to check whether users who are authenticated by the corporate identity provider exist in the user store
of SAP Cloud Platform Identity Authentication.
The Identity Federation option is disabled by default. If Identity Federation is enabled, only the users that are
imported in SAP Cloud Platform Identity Authentication can access the application.
For more information about how to enable or disable Identity Federation with SAP Cloud Platform Identity
Authentication, see "Enable Identity Federation with SAP Cloud Platform Identity Authentication" in Configure
Identity Federation with the User Store of SAP Cloud Platform Identity Authentication.

Test SSO
In this section, you test your Azure AD single sign-on configuration with following options.
SP initiated:
Click on Test this application in Azure portal. This will redirect to SAP Cloud Platform Identity
Authentication Sign on URL where you can initiate the login flow.
Go to SAP Cloud Platform Identity Authentication Sign-on URL directly and initiate the login flow from there.
IDP initiated:
Click on Test this application in Azure portal and you should be automatically signed in to the SAP Cloud
Platform Identity Authentication for which you set up the SSO
You can also use Microsoft My Apps to test the application in any mode. When you click the SAP Cloud Platform
Identity Authentication tile in the My Apps, if configured in SP mode you would be redirected to the application
sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to
the SAP Cloud Platform Identity Authentication for which you set up the SSO. For more information about the My
Apps, see Introduction to the My Apps.

Next steps
Once you configure SAP Cloud Platform Identity Authentication, you can enforce session controls, which
protect against exfiltration and infiltration of your organization’s sensitive data in real time. Session controls extend
from Conditional Access. Learn how to enforce session control with Microsoft Cloud App Security.
Tutorial: Azure Active Directory single sign-on (SSO)
integration with SuccessFactors

In this tutorial, you'll learn how to integrate SuccessFactors with Azure Active Directory (Azure AD). When you
integrate SuccessFactors with Azure AD, you can:
Control in Azure AD who has access to SuccessFactors.
Enable your users to be automatically signed-in to SuccessFactors with their Azure AD accounts.
Manage your accounts in one central location - the Azure portal.
To learn more about SaaS app integration with Azure AD, see What is application access and single sign-on with
Azure Active Directory.

Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
SuccessFactors single sign-on (SSO) enabled subscription.

Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
SuccessFactors supports SP initiated SSO.
Once you configure SuccessFactors, you can enforce session controls, which protect against exfiltration and
infiltration of your organization’s sensitive data in real time. Session controls extend from Conditional Access.
Learn how to enforce session control with Microsoft Cloud App Security.

Adding SuccessFactors from the gallery


To configure the integration of SuccessFactors into Azure AD, you need to add SuccessFactors from the gallery to
your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
2. On the left navigation pane, select the Azure Active Directory service.
3. Navigate to Enterprise Applications and then select All Applications .
4. To add new application, select New application .
5. In the Add from the gallery section, type SuccessFactors in the search box.
6. Select SuccessFactors from results panel and then add the app. Wait a few seconds while the app is added to
your tenant.

Configure and test Azure AD SSO for SuccessFactors


Configure and test Azure AD SSO with SuccessFactors using a test user called B.Simon . For SSO to work, you
need to establish a link relationship between an Azure AD user and the related user in SuccessFactors.
To configure and test Azure AD SSO with SuccessFactors, complete the following building blocks:
1. Configure Azure AD SSO - to enable your users to use this feature.
a. Create an Azure AD test user - to test Azure AD single sign-on with B.Simon.
b. Assign the Azure AD test user - to enable B.Simon to use Azure AD single sign-on.
2. Configure SuccessFactors SSO - to configure the Single Sign-On settings on application side.
a. Create SuccessFactors test user - to have a counterpart of B.Simon in SuccessFactors that is linked to
the Azure AD representation of user.
3. Test SSO - to verify whether the configuration works.

Configure Azure AD SSO


Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the SuccessFactors application integration page, find the Manage section and
select Single sign-on .
2. On the Select a Single sign-on method page, select SAML .
3. On the Set up Single Sign-On with SAML page, click the edit/pen icon for Basic SAML Configuration
to edit the settings.

4. On the Basic SAML Configuration section, perform the following steps:


a. In the Sign-on URL textbox, type a URL using the following pattern:
https://<companyname>.successfactors.com/<companyname>
https://<companyname>.sapsf.com/<companyname>
https://<companyname>.successfactors.eu/<companyname>
https://<companyname>.sapsf.eu
b. In the Identifier textbox, type a URL using the following pattern:
https://www.successfactors.com/<companyname>
https://www.successfactors.com
https://<companyname>.successfactors.eu
https://www.successfactors.eu/<companyname>
https://<companyname>.sapsf.com
https://hcm4preview.sapsf.com/<companyname>
https://<companyname>.sapsf.eu
https://www.successfactors.cn
https://www.successfactors.cn/<companyname>
c. In the Reply URL textbox, type a URL using the following pattern:
https://<companyname>.successfactors.com/<companyname>
https://<companyname>.successfactors.com
https://<companyname>.sapsf.com/<companyname>
https://<companyname>.sapsf.com
https://<companyname>.successfactors.eu/<companyname>
https://<companyname>.successfactors.eu
https://<companyname>.sapsf.eu
https://<companyname>.sapsf.eu/<companyname>
https://<companyname>.sapsf.cn
https://<companyname>.sapsf.cn/<companyname>

NOTE
These values are not real. Update these values with the actual Sign-on URL, Identifier and Reply URL. Contact
SuccessFactors Client support team to get these values.

5. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, find
Certificate (Base64) and select Download to download the certificate and save it on your computer.

6. On the Set up SuccessFactors section, copy the appropriate URL(s) based on your requirement.

Create an Azure AD test user


In this section, you'll create a test user in the Azure portal called B.Simon.
1. From the left pane in the Azure portal, select Azure Active Directory , select Users , and then select All users .
2. Select New user at the top of the screen.
3. In the User properties, follow these steps:
a. In the Name field, enter B.Simon .
b. In the User name field, enter the username@companydomain.extension. For example,
B.Simon@contoso.com .
c. Select the Show password check box, and then write down the value that's displayed in the Password
box.
d. Click Create .
Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SuccessFactors.
1. In the Azure portal, select Enterprise Applications , and then select All applications .
2. In the applications list, select SuccessFactors .
3. In the app's overview page, find the Manage section and select Users and groups .

4. Select Add user , then select Users and groups in the Add Assignment dialog.

5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the
bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate
role for the user from the list and then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog, click the Assign button.

Configure SuccessFactors SSO


1. In a different web browser window, log in to your SuccessFactors admin portal as an administrator.
2. Visit Application Security and navigate to the Single Sign On Feature .
3. Place any value in the Reset Token and click Save Token to enable SAML SSO.

NOTE
This value is used as the on/off switch. If any value is saved, SAML SSO is ON. If a blank value is saved, SAML
SSO is OFF.
4. Refer to the screenshot below and perform the following actions:

a. Select the SAML v2 SSO Radio Button


b. Set the SAML Asserting Party Name (for example, SAML issuer + company name).
c. In the Issuer URL textbox, paste the Azure AD Identifier value which you have copied from the Azure
portal.
d. Select Assertion as Require Mandatory Signature .
e. Select Enabled as Enable SAML Flag .
f. Select No as Login Request Signature(SF Generated/SP/RP) .
g. Select Browser/Post Profile as SAML Profile .
h. Select No as Enforce Certificate Valid Period .
i. Copy the content of the downloaded certificate file from Azure portal, and then paste it into the SAML
Verifying Certificate textbox.

NOTE
The certificate content must have begin certificate and end certificate tags.

5. Navigate to SAML V2, and then perform the following steps:


a. Select Yes as Support SP-initiated Global Logout .
b. In the Global Logout Service URL (LogoutRequest destination) textbox, paste the Sign-Out URL
value which you have copied from the Azure portal.
c. Select No as Require sp must encrypt all NameID element .
d. Select unspecified as NameID Format .
e. Select Yes as Enable sp initiated login (AuthnRequest) .
f. In the Send request as Company-Wide issuer textbox, paste Login URL value which you have copied
from the Azure portal.
6. Perform these steps if you want to make the login usernames Case Insensitive.

a. Visit Company Settings (near the bottom).


b. Select the checkbox near Enable Non-Case-Sensitive Username .
c. Click Save .

NOTE
When you try to enable this, the system checks whether it would create a duplicate SAML login name. For example, if the
customer has usernames User1 and user1, removing case sensitivity makes these duplicates. The system gives you an error
message and does not enable the feature. The customer needs to change one of the usernames so it’s spelled
differently.

Create SuccessFactors test user


To enable Azure AD users to sign in to SuccessFactors, they must be provisioned into SuccessFactors. In the case of
SuccessFactors, provisioning is a manual task.
To get users created in SuccessFactors, you need to contact the SuccessFactors support team.
Test SSO
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SuccessFactors tile in the Access Panel, you should be automatically signed in to the
SuccessFactors for which you set up SSO. For more information about the Access Panel, see Introduction to the
Access Panel.

Additional resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is conditional access in Azure Active Directory?
Try SuccessFactors with Azure AD
What is session control in Microsoft Cloud App Security?
How to protect SuccessFactors with advanced visibility and controls
Tutorial: Integrate SAP Analytics Cloud with Azure
Active Directory

In this tutorial, you'll learn how to integrate SAP Analytics Cloud with Azure Active Directory (Azure AD). When you
integrate SAP Analytics Cloud with Azure AD, you can:
Control in Azure AD who has access to SAP Analytics Cloud.
Enable your users to be automatically signed-in to SAP Analytics Cloud with their Azure AD accounts.
Manage your accounts in one central location - the Azure portal.
To learn more about SaaS app integration with Azure AD, see What is application access and single sign-on with
Azure Active Directory.

Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
SAP Analytics Cloud single sign-on (SSO) enabled subscription.

Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
SAP Analytics Cloud supports SP initiated SSO

Adding SAP Analytics Cloud from the gallery


To configure the integration of SAP Analytics Cloud into Azure AD, you need to add SAP Analytics Cloud from the
gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
2. On the left navigation pane, select the Azure Active Directory service.
3. Navigate to Enterprise Applications and then select All Applications .
4. To add new application, select New application .
5. In the Add from the gallery section, type SAP Analytics Cloud in the search box.
6. Select SAP Analytics Cloud from results panel and then add the app. Wait a few seconds while the app is
added to your tenant.

Configure and test Azure AD single sign-on


Configure and test Azure AD SSO with SAP Analytics Cloud using a test user called B.Simon . For SSO to work, you
need to establish a link relationship between an Azure AD user and the related user in SAP Analytics Cloud.
To configure and test Azure AD SSO with SAP Analytics Cloud, complete the following building blocks:
1. Configure Azure AD SSO - to enable your users to use this feature.
2. Configure SAP Analytics Cloud SSO - to configure the Single Sign-On settings on application side.
3. Create an Azure AD test user - to test Azure AD single sign-on with B.Simon.
4. Assign the Azure AD test user - to enable B.Simon to use Azure AD single sign-on.
5. Create SAP Analytics Cloud test user - to have a counterpart of B.Simon in SAP Analytics Cloud that is
linked to the Azure AD representation of user.
6. Test SSO - to verify whether the configuration works.
Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the SAP Analytics Cloud application integration page, find the Manage section and
select Single sign-on .
2. On the Select a Single sign-on method page, select SAML .
3. On the Set up Single Sign-On with SAML page, click the edit/pen icon for Basic SAML Configuration
to edit the settings.

4. On the Basic SAML Configuration section, enter the values for the following fields:
a. In the Sign on URL text box, type a URL using the following pattern:
https://<sub-domain>.sapanalytics.cloud/
https://<sub-domain>.sapbusinessobjects.cloud/

b. In the Identifier (Entity ID) text box, type a URL using the following pattern:
<sub-domain>.sapbusinessobjects.cloud
<sub-domain>.sapanalytics.cloud

NOTE
The values in these URLs are for demonstration only. Update the values with the actual sign-on URL and identifier
URL. To get the sign-on URL, contact the SAP Analytics Cloud Client support team. You can get the identifier URL by
downloading the SAP Analytics Cloud metadata from the admin console. This is explained later in the tutorial.

5. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, find
Federation Metadata XML and select Download to download the certificate and save it on your
computer.
6. On the Set up SAP Analytics Cloud section, copy the appropriate URL(s) based on your requirement.

Configure SAP Analytics Cloud SSO


1. In a different web browser window, sign in to your SAP Analytics Cloud company site as an administrator.
2. Select Menu > System > Administration .

3. On the Security tab, select the Edit (pen) icon.

4. For Authentication Method , select SAML Single Sign-On (SSO) .


5. To download the service provider metadata (Step 1), select Download . In the metadata file, find and copy
the entityID value. In the Azure portal, on the Basic SAML Configuration dialog, paste the value in the
Identifier box.

6. To upload the service provider metadata (Step 2) in the file that you downloaded from the Azure portal,
under Upload Identity Provider metadata , select Upload .

7. In the User Attribute list, select the user attribute (Step 3) that you want to use for your implementation.
This user attribute maps to the identity provider. To enter a custom attribute on the user's page, use the
Custom SAML Mapping option. Or, you can select either Email or USER ID as the user attribute. In our
example, we selected Email because we mapped the user identifier claim with the userprincipalname
attribute in the User Attributes & Claims section in the Azure portal. This provides a unique user email,
which is sent to the SAP Analytics Cloud application in every successful SAML response.

8. To verify the account with the identity provider (Step 4), in the Login Credential (Email) box, enter the
user's email address. Then, select Verify Account . The system adds sign-in credentials to the user account.
9. Select the Save icon.

Create an Azure AD test user


In this section, you'll create a test user in the Azure portal called B.Simon.
1. From the left pane in the Azure portal, select Azure Active Directory , select Users , and then select All users .
2. Select New user at the top of the screen.
3. In the User properties, follow these steps:
a. In the Name field, enter B.Simon .
b. In the User name field, enter the username@companydomain.extension. For example,
B.Simon@contoso.com .
c. Select the Show password check box, and then write down the value that's displayed in the Password
box.
d. Click Create .
Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAP Analytics Cloud.
1. In the Azure portal, select Enterprise Applications , and then select All applications .
2. In the applications list, select SAP Analytics Cloud .
3. In the app's overview page, find the Manage section and select Users and groups .

4. Select Add user , then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the
bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate
role for the user from the list and then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog, click the Assign button.
Create SAP Analytics Cloud test user
Azure AD users must be provisioned in SAP Analytics Cloud before they can sign in to SAP Analytics Cloud. In SAP
Analytics Cloud, provisioning is a manual task.
To provision a user account:
1. Sign in to your SAP Analytics Cloud company site as an administrator.
2. Select Menu > Security > Users .

3. On the Users page, to add new user details, select + .

Then, complete the following steps:


a. In the USER ID box, enter the user ID of the user, like B .
b. In the FIRST NAME box, enter the first name of the user, like B .
c. In the LAST NAME box, enter the last name of the user, like Simon .
d. In the DISPLAY NAME box, enter the full name of the user, like B.Simon .
e. In the E-MAIL box, enter the email address of the user, like B.Simon@contoso.com .
f. On the Select Roles page, select the appropriate role for the user, and then select OK .

g. Select the Save icon.


Test SSO
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP Analytics Cloud tile in the Access Panel, you should be automatically signed in to the SAP
Analytics Cloud for which you set up SSO. For more information about the Access Panel, see Introduction to the
Access Panel.

Additional Resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is conditional access in Azure Active Directory?
Tutorial: Azure Active Directory single sign-on (SSO)
integration with SAP Fiori

In this tutorial, you'll learn how to integrate SAP Fiori with Azure Active Directory (Azure AD). When you integrate
SAP Fiori with Azure AD, you can:
Control in Azure AD who has access to SAP Fiori.
Enable your users to be automatically signed-in to SAP Fiori with their Azure AD accounts.
Manage your accounts in one central location - the Azure portal.
To learn more about SaaS app integration with Azure AD, see What is application access and single sign-on with
Azure Active Directory.

Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
SAP Fiori single sign-on (SSO) enabled subscription.

Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
SAP Fiori supports SP initiated SSO

NOTE
For SAP Fiori initiated iFrame Authentication, we recommend using the IsPassive parameter in the SAML AuthnRequest for
silent authentication. For more details of the IsPassive parameter refer to Azure AD SAML single sign-on information

Adding SAP Fiori from the gallery


To configure the integration of SAP Fiori into Azure AD, you need to add SAP Fiori from the gallery to your list of
managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
2. On the left navigation pane, select the Azure Active Directory service.
3. Navigate to Enterprise Applications and then select All Applications .
4. To add new application, select New application .
5. In the Add from the gallery section, type SAP Fiori in the search box.
6. Select SAP Fiori from results panel and then add the app. Wait a few seconds while the app is added to your
tenant.

Configure and test Azure AD single sign-on for SAP Fiori


Configure and test Azure AD SSO with SAP Fiori using a test user called B.Simon . For SSO to work, you need to
establish a link relationship between an Azure AD user and the related user in SAP Fiori.
To configure and test Azure AD SSO with SAP Fiori, complete the following building blocks:
1. Configure Azure AD SSO - to enable your users to use this feature.
a. Create an Azure AD test user - to test Azure AD single sign-on with B.Simon.
b. Assign the Azure AD test user - to enable B.Simon to use Azure AD single sign-on.
2. Configure SAP Fiori SSO - to configure the single sign-on settings on application side.
a. Create SAP Fiori test user - to have a counterpart of B.Simon in SAP Fiori that is linked to the Azure
AD representation of user.
3. Test SSO - to verify whether the configuration works.

Configure Azure AD SSO


Follow these steps to enable Azure AD SSO in the Azure portal.
1. Open a new web browser window and sign in to your SAP Fiori company site as an administrator.
2. Make sure that the http and https services are active and that the relevant ports are assigned in transaction
code SMICM .
3. Sign in to SAP Business Client for SAP system T01 , where single sign-on is required. Then, activate HTTP
Security Session Management.
a. Go to transaction code SICF_SESSIONS . All relevant profile parameters with current values are
shown. They look like the following example:

login/create_sso2_ticket = 2
login/accept_sso2_ticket = 1
login/ticketcache_entries_max = 1000
login/ticketcache_off = 0
login/ticket_only_by_https = 0
icf/set_HTTPonly_flag_on_cookies = 3
icf/user_recheck = 0
http/security_session_timeout = 1800
http/security_context_cache_size = 2500
rdisp/plugin_auto_logout = 1800
rdisp/autothtime = 60

NOTE
Adjust the parameters based on your organization requirements. The preceding parameters are given only as
an example.

b. If necessary, adjust parameters in the instance (default) profile of the SAP system and restart the SAP
system.
c. Double-click the relevant client to enable an HTTP security session.
d. Activate the following SICF services:

/sap/public/bc/sec/saml2
/sap/public/bc/sec/cdc_ext_service
/sap/bc/webdynpro/sap/saml2
/sap/bc/webdynpro/sap/sec_diag_tool (This is only to enable / disable trace)

4. Go to transaction code SAML2 in Business Client for SAP system [T01/122 ]. The configuration UI opens in
a new browser window. In this example, we use Business Client for SAP system 122.

5. Enter your username and password, and then select Log on .


6. In the Provider Name box, replace T01122 with http://T01122 , and then select Save .

NOTE
By default, the provider name is in the format <sid><client>. Azure AD expects the name in the format
<protocol>://<name>. We recommend that you maintain the provider name as https://<sid><client> so you can
configure multiple SAP Fiori ABAP engines in Azure AD.
7. Select Local Provider tab > Metadata .
8. In the SAML 2.0 Metadata dialog box, download the generated metadata XML file and save it on your
computer.

9. In the Azure portal, on the SAP Fiori application integration page, find the Manage section and select
single sign-on .
10. On the Select a single sign-on method page, select SAML .
11. On the Set up single sign-on with SAML page, click the edit/pen icon for Basic SAML Configuration
to edit the settings.

12. On the Basic SAML Configuration section, if you have Service Provider metadata file , perform the
following steps:
a. Click Upload metadata file .

b. Click on folder logo to select the metadata file and click Upload .

c. When the metadata file is successfully uploaded, the Identifier and Reply URL values are automatically
populated in the Basic SAML Configuration pane. In the Sign on URL box, enter a URL that has the
following pattern: https://<your company instance of SAP Fiori> .

NOTE
A few customers report errors related to incorrectly configured Reply URL values. If you see this error, you can use
the following PowerShell script to set the correct Reply URL for your instance:

Set-AzureADServicePrincipal -ObjectId $ServicePrincipalObjectId -ReplyUrls "<Your Correct Reply URL(s)>"

You can set the ServicePrincipal object ID yourself before running the script, or you can pass it here.
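If you don't have the service principal object ID at hand, one way to look it up is sketched below; the search string and the Reply URL value are placeholders, not prescribed values.

# Sketch only: look up the service principal object ID the script above expects,
# then set the corrected Reply URL on it.
$sp = Get-AzureADServicePrincipal -SearchString "SAP Fiori" | Select-Object -First 1

Set-AzureADServicePrincipal -ObjectId $sp.ObjectId -ReplyUrls @("<Your Correct Reply URL>")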

13. The SAP Fiori application expects the SAML assertions to be in a specific format. Configure the following
claims for this application. To manage these attribute values, in the Set up Single Sign-On with SAML
pane, select Edit .
14. In the User Attributes & Claims pane, configure the SAML token attributes as shown in the preceding
image. Then, complete the following steps:
a. Select Edit to open the Manage user claims pane.
b. In the Transformation list, select ExtractMailPrefix() .
c. In the Parameter 1 list, select user.userprincipalname .
d. Select Save .

15. On the Set up single sign-on with SAML page, in the SAML Signing Certificate section, find
Federation Metadata XML and select Download to download the certificate and save it on your
computer.
16. On the Set up SAP Fiori section, copy the appropriate URL(s) based on your requirement.

Create an Azure AD test user


In this section, you'll create a test user in the Azure portal called B.Simon.
1. From the left pane in the Azure portal, select Azure Active Directory , select Users , and then select All users .
2. Select New user at the top of the screen.
3. In the User properties, follow these steps:
a. In the Name field, enter B.Simon .
b. In the User name field, enter the username@companydomain.extension. For example,
B.Simon@contoso.com .
c. Select the Show password check box, and then write down the value that's displayed in the Password
box.
d. Click Create .
Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAP Fiori.
1. In the Azure portal, select Enterprise Applications , and then select All applications .
2. In the applications list, select SAP Fiori .
3. In the app's overview page, find the Manage section and select Users and groups .

4. Select Add user , then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the
bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate
role for the user from the list and then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog, click the Assign button.

Configure SAP Fiori SSO


1. Sign in to the SAP system and go to transaction code SAML2 . A new browser window opens with the SAML
configuration page.
2. To configure endpoints for a trusted identity provider (Azure AD), select the Trusted Providers tab.

3. Select Add , and then select Upload Metadata File from the context menu.

4. Upload the metadata file that you downloaded in the Azure portal. Select Next .

5. On the next page, in the Alias box, enter the alias name. For example, aadsts . Select Next .
6. Make sure that the value in the Digest Algorithm box is SHA-256 . Select Next .

7. Under Single Sign-On Endpoints , select HTTP POST , and then select Next .

8. Under Single Logout Endpoints , select HTTP Redirect , and then select Next .
9. Under Artifact Endpoints , select Next to continue.

10. Under Authentication Requirements , select Finish .


11. Select Trusted Provider > Identity Federation (at the bottom of the page). Select Edit .

12. Select Add .

13. In the Supported NameID Formats dialog box, select Unspecified . Select OK .
The values for User ID Source and User ID Mapping Mode determine the link between the SAP user and
the Azure AD claim.
Scenario 1 : SAP user to Azure AD user mapping
a. In SAP, under Details of NameID Format "Unspecified" , note the details:

b. In the Azure portal, under User Attributes & Claims , note the required claims from Azure AD.

Scenario 2 : Select the SAP user ID based on the configured email address in SU01. In this case, the email ID
should be configured in SU01 for each user who requires SSO.
a. In SAP, under Details of NameID Format "Unspecified" , note the details:

b. In the Azure portal, under User Attributes & Claims , note the required claims from Azure AD.

14. Select Save , and then select Enable to enable the identity provider.
15. Select OK when prompted.

Create SAP Fiori test user


In this section, you create a user named Britta Simon in SAP Fiori. Work with your in-house SAP team of experts or
your organization SAP partner to add the user in the SAP Fiori platform.

Test SSO
1. After the identity provider Azure AD is activated in SAP Fiori, try to access one of the following URLs to test
single sign-on (you shouldn't be prompted for a username and password):
https://<sapurl>/sap/bc/bsp/sap/it00/default.htm

NOTE
Replace sapurl with the actual SAP host name.

2. The test URL should take you to the following test application page in SAP. If the page opens, Azure AD single
sign-on is successfully set up.
3. If you are prompted for a username and password, enable trace to help diagnose the issue. Use the
following URL for the trace: https://<sapurl>/sap/bc/webdynpro/sap/sec_diag_tool?sap-client=122&sap-
language=EN#.

Additional resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is conditional access in Azure Active Directory?
Try SAP Fiori with Azure AD
Tutorial: Azure Active Directory single sign-on (SSO)
integration with SAP Qualtrics

In this tutorial, you'll learn how to integrate SAP Qualtrics with Azure Active Directory (Azure AD). When you
integrate SAP Qualtrics with Azure AD, you can:
Control in Azure AD who has access to SAP Qualtrics.
Enable your users to be automatically signed in to SAP Qualtrics with their Azure AD accounts.
Manage your accounts in one central location: the Azure portal.
To learn more about software as a service (SaaS) app integration with Azure AD, see What is application access and
single sign-on with Azure Active Directory.

Prerequisites
To get started, you need:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
A SAP Qualtrics subscription enabled for single sign-on (SSO).

Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
SAP Qualtrics supports SP and IDP initiated SSO.
SAP Qualtrics supports Just In Time user provisioning.
After you configure SAP Qualtrics, you can enforce session control, which protects against exfiltration and infiltration
of your organization’s sensitive data in real time. Session control extends from conditional access. For more
information, see Learn how to enforce session control with Microsoft Cloud App Security.

Add SAP Qualtrics from the gallery


To configure the integration of SAP Qualtrics into Azure AD, you need to add SAP Qualtrics from the gallery to your
list of managed SaaS apps.
1. Sign in to the Azure portal by using either a work or school account, or a personal Microsoft account.
2. On the left pane, select Azure Active Directory .
3. Go to Enterprise Applications , and then select All Applications .
4. To add a new application, select New application .
5. In the Add from the gallery section, type SAP Qualtrics in the search box.
6. Select SAP Qualtrics from results, and then add the app. Wait a few seconds while the app is added to your
tenant.

Configure and test Azure AD single sign-on for SAP Qualtrics


Configure and test Azure AD SSO with SAP Qualtrics, by using a test user called B.Simon . For SSO to work, you
need to establish a linked relationship between an Azure AD user and the related user in SAP Qualtrics.
To configure and test Azure AD SSO with SAP Qualtrics, complete the following building blocks:
1. Configure Azure AD SSO to enable your users to use this feature.
a. Create an Azure AD test user to test Azure AD single sign-on with B.Simon.
b. Assign the Azure AD test user to enable B.Simon to use Azure AD single sign-on.
2. Configure SAP Qualtrics SSO to configure the single sign-on settings on the application side.
a. Create a SAP Qualtrics test user to have a counterpart of B.Simon in SAP Qualtrics, linked to the Azure AD
representation of the user.
3. Test SSO to verify whether the configuration works.

Configure Azure AD SSO


Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the SAP Qualtrics application integration page, find the Manage section. Select
single sign-on .
2. On the Select a single sign-on method page, select SAML .
3. On the Set up single sign-on with SAML page, select the pencil icon for Basic SAML Configuration to
edit the settings.

4. On the Set up single sign-on with SAML page, if you want to configure the application in IDP initiated
mode, enter the values for the following fields:
a. In the Identifier text box, type a URL that uses the following pattern:
https://<DATACENTER>.qualtrics.com

b. In the Reply URL text box, type a URL that uses the following pattern:
https://<DATACENTER>.qualtrics.com/login/v1/sso/saml2/default-sp

c. In the Relay State text box, type a URL that uses the following pattern:
https://<brandID>.<DATACENTER>.qualtrics.com

5. Select Set additional URLs , and perform the following step if you want to configure the application in SP
initiated mode:
In the Sign-on URL textbox, type a URL that uses the following pattern:
https://<brandID>.<DATACENTER>.qualtrics.com

NOTE
These values are not real. Update these values with the actual Sign-on URL, Identifier, Reply URL, and Relay State. To
get these values, contact the Qualtrics Client support team. You can also refer to the patterns shown in the Basic
SAML Configuration section in the Azure portal.

6. On the Set up single sign-on with SAML page, in the SAML Signing Certificate section, select the
copy icon to copy App Federation Metadata Url and save it on your computer. A sketch for downloading the
metadata follows these steps.
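As a side note (not part of the official steps), you can also download the metadata document behind that URL with PowerShell so you have a local copy to share with the Qualtrics support team; the URL placeholder and file name below are only examples.

# Sketch only: save the federation metadata referenced by the copied URL to a local file.
$metadataUrl = "<App Federation Metadata Url copied from the Azure portal>"
Invoke-WebRequest -Uri $metadataUrl -OutFile ".\SAPQualtrics-FederationMetadata.xml"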

Create an Azure AD test user


In this section, you create a test user in the Azure portal called B.Simon.
1. From the left pane in the Azure portal, select Azure Active Directory > Users > All users .
2. Select New user at the top of the screen.
3. In the User properties, follow these steps:
a. In the Name field, enter B.Simon .
b. In the User name field, enter the username@companydomain.extension. For example,
B.Simon@contoso.com .
c. Select the Show password check box, and then write the password down.
d. Select Create .
Assign the Azure AD test user
In this section, you enable B.Simon to use Azure single sign-on by granting access to SAP Qualtrics.
1. In the Azure portal, select Enterprise Applications > All applications .
2. In the applications list, select SAP Qualtrics .
3. In the app's overview page, find the Manage section, and select Users and groups .

4. Select Add user . Then in the Add Assignment dialog box, select Users and groups .

5. In the Users and groups dialog box, select B.Simon from the list of users. Then choose Select at the
bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog box, select the
appropriate role for the user from the list. Then choose Select at the bottom of the screen.
7. In the Add Assignment dialog box, select Assign .

Configure SAP Qualtrics SSO


To configure single sign-on on the SAP Qualtrics side, send the copied App Federation Metadata Url from the
Azure portal to the SAP Qualtrics support team. The support team ensures that the SAML SSO connection is set
properly on both sides.
Create SAP Qualtrics test user
SAP Qualtrics supports just-in-time user provisioning, which is enabled by default. There is no additional action for
you to take. If a user doesn't already exist in SAP Qualtrics, a new one is created after authentication.

Test SSO
In this section, you test your Azure AD single sign-on configuration by using Access Panel.
When you select the SAP Qualtrics tile in Access Panel, you're automatically signed in to the SAP Qualtrics for which
you set up SSO. For more information, see Sign in and start apps from the My Apps portal.

Additional resources
Tutorials for integrating SaaS applications with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is conditional access in Azure Active Directory?
Try SAP Qualtrics with Azure AD
What is session control in Microsoft Cloud App Security?
Protect SAP Qualtrics with advanced visibility and controls
Tutorial: Azure Active Directory integration with Ariba

In this tutorial, you learn how to integrate Ariba with Azure Active Directory (Azure AD). Integrating Ariba with
Azure AD provides you with the following benefits:
You can control in Azure AD who has access to Ariba.
You can enable your users to be automatically signed-in to Ariba (Single Sign-On) with their Azure AD accounts.
You can manage your accounts in one central location - the Azure portal.
If you want to know more details about SaaS app integration with Azure AD, see What is application access and
single sign-on with Azure Active Directory. If you don't have an Azure subscription, create a free account before you
begin.

Prerequisites
To configure Azure AD integration with Ariba, you need the following items:
An Azure AD subscription. If you don't have an Azure AD environment, you can get a one-month trial here
Ariba single sign-on enabled subscription

Scenario description
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
Ariba supports SP initiated SSO
Once you configure Ariba, you can enforce session control, which protects against exfiltration and infiltration of your
organization’s sensitive data in real time. Session control extends from Conditional Access. Learn how to
enforce session control with Microsoft Cloud App Security.

Adding Ariba from the gallery


To configure the integration of Ariba into Azure AD, you need to add Ariba from the gallery to your list of managed
SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
2. On the left navigation pane, select the Azure Active Directory service.
3. Navigate to Enterprise Applications and then select All Applications .
4. To add new application, select New application .
5. In the Add from the gallery section, type Ariba in the search box.
6. Select Ariba from results panel and then add the app. Wait a few seconds while the app is added to your tenant.

Configure and test Azure AD SSO


In this section, you configure and test Azure AD single sign-on with Ariba based on a test user called Britta Simon .
For single sign-on to work, a link relationship between an Azure AD user and the related user in Ariba needs to be
established.
To configure and test Azure AD single sign-on with Ariba, you need to complete the following building blocks:
1. Configure Azure AD SSO - to enable your users to use this feature.
2. Configure Ariba SSO - to configure the Single Sign-On settings on application side.
3. Create an Azure AD test user - to test Azure AD single sign-on with Britta Simon.
4. Assign the Azure AD test user - to enable Britta Simon to use Azure AD single sign-on.
5. Create Ariba test user - to have a counterpart of Britta Simon in Ariba that is linked to the Azure AD
representation of user.
6. Test SSO - to verify whether the configuration works.
Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the Ariba application integration page, find the Manage section and select single
sign-on .
2. On the Select a single sign-on method page, select SAML .
3. On the Set up single sign-on with SAML page, click the edit/pen icon for Basic SAML Configuration
to edit the settings.

4. On the Basic SAML Configuration section, perform the following steps:

a. In the Sign on URL text box, type a URL using the following pattern:

https://<subdomain>.sourcing.ariba.com
https://<subdomain>.supplier.ariba.com

b. In the Identifier (Entity ID) text box, type a URL using the following pattern:
http://<subdomain>.procurement-2.ariba.com

c. For Reply URL , enter one of the following URL patterns:


REPLY URL

https://<subdomain>.ariba.com/CUSTOM_URL

https://<subdomain>.procurement-eu.ariba.com/CUSTOM_URL

https://<subdomain>.procurement-eu.ariba.com

https://<subdomain>.procurement-2.ariba.com

https://<subdomain>.procurement-2.ariba.com/CUSTOM_URL

NOTE
These values are not real. Update these values with the actual Sign-on URL, Identifier and Reply URL. We suggest
that you use a unique string value for the Identifier. Contact the Ariba Client support team at 1-866-218-2155 to get
these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure
portal.

5. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, click
Download to download the Certificate (Base64) from the given options as per your requirement and
save it on your computer.

Create an Azure AD test user


In this section, you'll create a test user named B.Simon in the Azure portal.
1. In the left pane of the Azure portal, select Azure Active Directory , select Users , and then select All users .
2. At the top of the screen, select New user .
3. In the User properties, follow these steps:
a. In the Name field, enter B.Simon .
b. In the User name field, enter <username>@<companydomain>.<extension> . For example:
B.Simon@contoso.com .
c. Select the Show password check box, and then make note of the value that's displayed in the Password
box.
d. Select Create .
Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Ariba.
1. In the Azure portal, select Enterprise Applications , and then select All applications .
2. In the applications list, select Ariba .
3. In the app's overview page, find the Manage section and select Users and groups .

4. Select Add user , then select Users and groups in the Add Assignment dialog.

5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the
bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate
role for the user from the list and then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog, click the Assign button.

Configure Ariba SSO


To get SSO configured for your application, call the Ariba support team at 1-866-218-2155, and they'll assist you
with providing them the downloaded Certificate (Base64) file.
Create Ariba test user
In this section, you create a user called Britta Simon in Ariba. Work with Ariba support team at 1-866-218-2155
to add the users in the Ariba platform. Users must be created and activated before you use single sign-on.

Test SSO
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the Ariba tile in the Access Panel, you should be automatically signed in to the Ariba for which you
set up SSO. For more information about the Access Panel, see Introduction to the Access Panel.

Additional Resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is Conditional Access in Azure Active Directory?
Tutorial: Azure Active Directory single sign-on (SSO)
integration with Concur Travel and Expense

In this tutorial, you'll learn how to integrate Concur Travel and Expense with Azure Active Directory (Azure AD).
When you integrate Concur Travel and Expense with Azure AD, you can:
Control in Azure AD who has access to Concur Travel and Expense.
Enable your users to be automatically signed-in to Concur Travel and Expense with their Azure AD accounts.
Manage your accounts in one central location - the Azure portal.

Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
Concur Travel and Expense subscription.
A "Company Administrator" role under your Concur user account. You can test if you have the right access by
going to the Concur SSO Self-Service Tool. If you do not have access, contact Concur support or your
implementation project manager.

Scenario description
In this tutorial, you configure and test Azure AD SSO.
Concur Travel and Expense supports IDP and SP initiated SSO
Concur Travel and Expense supports testing SSO in both production and implementation environment

NOTE
The Identifier of this application is a fixed string value for each of the three regions: US, EMEA, and China, so only one
instance can be configured for each region in one tenant.

Adding Concur Travel and Expense from the gallery


To configure the integration of Concur Travel and Expense into Azure AD, you need to add Concur Travel and
Expense from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
2. On the left navigation pane, select the Azure Active Directory service.
3. Navigate to Enterprise Applications and then select All Applications .
4. To add new application, select New application .
5. In the Add from the gallery section, type Concur Travel and Expense in the search box.
6. Select Concur Travel and Expense from results panel and then add the app. Wait a few seconds while the app
is added to your tenant.

Configure and test Azure AD SSO for Concur Travel and Expense
Configure and test Azure AD SSO with Concur Travel and Expense using a test user called B.Simon . For SSO to
work, you need to establish a link relationship between an Azure AD user and the related user in Concur Travel and
Expense.
To configure and test Azure AD SSO with Concur Travel and Expense, perform the following steps:
1. Configure Azure AD SSO - to enable your users to use this feature.
a. Create an Azure AD test user - to test Azure AD single sign-on with B.Simon.
b. Assign the Azure AD test user - to enable B.Simon to use Azure AD single sign-on.
2. Configure Concur Travel and Expense SSO - to configure the single sign-on settings on application side.
a. Create Concur Travel and Expense test user - to have a counterpart of B.Simon in Concur Travel and
Expense that is linked to the Azure AD representation of user.
3. Test SSO - to verify whether the configuration works.

Configure Azure AD SSO


Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the Concur Travel and Expense application integration page, find the Manage
section and select single sign-on .
2. On the Select a single sign-on method page, select SAML .
3. On the Set up single sign-on with SAML page, click the edit/pen icon for Basic SAML Configuration
to edit the settings.

4. On the Basic SAML Configuration section the application is pre-configured in IDP initiated mode and the
necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking
the Save button.

NOTE
Identifier (Entity ID) and Reply URL (Assertion Consumer Service URL) are region specific. Please select based on the
datacenter of your Concur entity. If you do not know the datacenter of your Concur entity, please contact Concur
support.

5. On the Set up Single Sign-On with SAML page, click the edit/pen icon for User Attribute to edit the
settings. The Unique User Identifier needs to match Concur user login_id. Usually, you should change
user.userprincipalname to user.mail .
6. On the Set up single sign-on with SAML page, in the SAML Signing Certificate section, find
Federation Metadata XML and select Download to download the metadata and save it on your
computer.
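If you prefer to fetch the Federation Metadata XML with a script instead of using the portal download, the sketch below uses the Azure AD federation metadata endpoint. This is only an illustration; the tenant ID and application (client) ID values are placeholders you must replace, and the output file name is arbitrary.

# Sketch: download the app federation metadata from the Azure AD endpoint.
# <your-tenant-id> and <application-id> are placeholders for your tenant ID and
# the application (client) ID of the Concur Travel and Expense enterprise application.
$tenantId = "<your-tenant-id>"
$appId = "<application-id>"
$uri = "https://login.microsoftonline.com/$tenantId/federationmetadata/2007-06/federationmetadata.xml?appid=$appId"
Invoke-WebRequest -Uri $uri -OutFile .\ConcurFederationMetadata.xml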

Create an Azure AD test user


In this section, you'll create a test user in the Azure portal called B.Simon.
1. From the left pane in the Azure portal, select Azure Active Directory , select Users , and then select All users .
2. Select New user at the top of the screen.
3. In the User properties, follow these steps:
a. In the Name field, enter B.Simon .
b. In the User name field, enter the username@companydomain.extension. For example,
B.Simon@contoso.com .
c. Select the Show password check box, and then write down the value that's displayed in the Password
box.
d. Click Create .
Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Concur Travel and Expense.
1. In the Azure portal, select Enterprise Applications , and then select All applications .
2. In the applications list, select Concur Travel and Expense .
3. In the app's overview page, find the Manage section and select Users and groups .
4. Select Add user , then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the
bottom of the screen.
6. If you are expecting a role to be assigned to the users, you can select it from the Select a role dropdown. If
no role has been set up for this app, you see "Default Access" role selected.
7. In the Add Assignment dialog, click the Assign button.

Configure Concur Travel and Expense SSO


1. To automate the configuration within Concur Travel and Expense, you need to install My Apps Secure
Sign-in browser extension by clicking Install the extension .

2. After adding the extension to the browser, click on Set up Concur Travel and Expense , which will direct you to the
Concur Travel and Expense application. From there, provide the admin credentials to sign in to Concur Travel
and Expense. The browser extension will automatically configure the application for you and automate steps
3-7.

3. If you want to set up Concur Travel and Expense manually, open a different web browser window, sign in to your
Concur Travel and Expense company site as an administrator, and upload the downloaded Federation Metadata
XML to the Concur SSO Self-Service Tool.
4. Click Add .
5. Enter a custom name for your IdP, for example "Azure AD (US)".
6. Click Upload XML File and attach Federation Metadata XML you downloaded previously.
7. Click Add Metadata to save the change.

Create Concur Travel and Expense test user


In this section, you create a user called B.Simon in Concur Travel and Expense. Work with Concur support team to
add the users in the Concur Travel and Expense platform. Users must be created and activated before you use single
sign-on.

NOTE
B.Simon's Concur login ID needs to match B.Simon's unique identifier in Azure AD. For example, if B.Simon's Azure AD unique
identifier is B.Simon@contoso.com , B.Simon's Concur login ID needs to be B.Simon@contoso.com as well.

Configure Concur Mobile SSO


To enable Concur mobile SSO, you need to give the Concur support team the User access URL . Follow the steps below to get
the User access URL from Azure AD:
1. Go to Enterprise applications
2. Click Concur Travel and Expense
3. Click Properties
4. Copy the User access URL and give this URL to Concur support

NOTE
The self-service option to configure SSO is not available, so work with the Concur support team to enable mobile SSO.

Test SSO
In this section, you test your Azure AD single sign-on configuration with following options.
SP initiated:
Click on Test this application in Azure portal. This will redirect to Concur Travel and Expense Sign on URL
where you can initiate the login flow.
Go to Concur Travel and Expense Sign-on URL directly and initiate the login flow from there.
IDP initiated:
Click on Test this application in Azure portal and you should be automatically signed in to the Concur Travel
and Expense for which you set up the SSO
You can also use Microsoft My Apps to test the application in any mode. When you click the Concur Travel and
Expense tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for
initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Concur Travel
and Expense for which you set up the SSO. For more information about the My Apps, see Introduction to the My
Apps.

Next steps
Once you configure Concur Travel and Expense you can enforce session control, which protects exfiltration and
infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. Learn
how to enforce session control with Microsoft Cloud App Security.
Tutorial: Azure Active Directory integration with SAP
Cloud Platform
11/2/2020 • 8 minutes to read

In this tutorial, you learn how to integrate SAP Cloud Platform with Azure Active Directory (Azure AD). Integrating
SAP Cloud Platform with Azure AD provides you with the following benefits:
You can control in Azure AD who has access to SAP Cloud Platform.
You can enable your users to be automatically signed-in to SAP Cloud Platform (Single Sign-On) with their
Azure AD accounts.
You can manage your accounts in one central location - the Azure portal.
If you want to know more details about SaaS app integration with Azure AD, see What is application access and
single sign-on with Azure Active Directory. If you don't have an Azure subscription, create a free account before you
begin.

Prerequisites
To configure Azure AD integration with SAP Cloud Platform, you need the following items:
An Azure AD subscription. If you don't have an Azure AD environment, you can get a one-month trial here
SAP Cloud Platform single sign-on enabled subscription
After completing this tutorial, the Azure AD users you have assigned to SAP Cloud Platform will be able to single
sign into the application using the Access Panel.

IMPORTANT
You need to deploy your own application or subscribe to an application on your SAP Cloud Platform account to test single
sign on. In this tutorial, an application is deployed in the account.

Scenario description
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
SAP Cloud Platform supports SP initiated SSO

Adding SAP Cloud Platform from the gallery


To configure the integration of SAP Cloud Platform into Azure AD, you need to add SAP Cloud Platform from the
gallery to your list of managed SaaS apps.
To add SAP Cloud Platform from the galler y, perform the following steps:
1. In the Azure portal, on the left navigation panel, click the Azure Active Directory icon.
2. Navigate to Enterprise Applications and then select the All Applications option.

3. To add a new application, click the New application button at the top of the dialog.

4. In the search box, type SAP Cloud Platform , select SAP Cloud Platform from the result panel, then click the Add
button to add the application.

Configure and test Azure AD single sign-on


In this section, you configure and test Azure AD single sign-on with SAP Cloud Platform based on a test user called
Britta Simon . For single sign-on to work, a link relationship between an Azure AD user and the related user in SAP
Cloud Platform needs to be established.
To configure and test Azure AD single sign-on with SAP Cloud Platform, you need to complete the following
building blocks:
1. Configure Azure AD Single Sign-On - to enable your users to use this feature.
2. Configure SAP Cloud Platform Single Sign-On - to configure the Single Sign-On settings on application
side.
3. Create an Azure AD test user - to test Azure AD single sign-on with Britta Simon.
4. Assign the Azure AD test user - to enable Britta Simon to use Azure AD single sign-on.
5. Create SAP Cloud Platform test user - to have a counterpart of Britta Simon in SAP Cloud Platform that is
linked to the Azure AD representation of user.
6. Test single sign-on - to verify whether the configuration works.
Configure Azure AD single sign-on
In this section, you enable Azure AD single sign-on in the Azure portal.
To configure Azure AD single sign-on with SAP Cloud Platform, perform the following steps:
1. In the Azure portal, on the SAP Cloud Platform application integration page, select Single sign-on .

2. On the Select a Single sign-on method dialog, select SAML/WS-Fed mode to enable single sign-on.

3. On the Set up Single Sign-On with SAML page, click Edit icon to open Basic SAML Configuration
dialog.
4. On the Basic SAML Configuration section, perform the following steps:

a. In the Sign On URL textbox, type the URL used by your users to sign into your SAP Cloud Platform
application. This is the account-specific URL of a protected resource in your SAP Cloud Platform application.
The URL is based on the following pattern:
https://<applicationName><accountName>.<landscape host>.ondemand.com/<path_to_protected_resource>

NOTE
This is the URL in your SAP Cloud Platform application that requires the user to authenticate.

https://<subdomain>.hanatrial.ondemand.com/<instancename>
https://<subdomain>.hana.ondemand.com/<instancename>
b. In the Identifier textbox, type a URL using one of the
following patterns:
https://hanatrial.ondemand.com/<instancename>
https://hana.ondemand.com/<instancename>
https://us1.hana.ondemand.com/<instancename>
https://ap1.hana.ondemand.com/<instancename>

c. In the Reply URL textbox, type a URL using the following pattern:
https://<subdomain>.hanatrial.ondemand.com/<instancename>
https://<subdomain>.hana.ondemand.com/<instancename>
https://<subdomain>.us1.hana.ondemand.com/<instancename>
https://<subdomain>.dispatcher.us1.hana.ondemand.com/<instancename>
https://<subdomain>.ap1.hana.ondemand.com/<instancename>
https://<subdomain>.dispatcher.ap1.hana.ondemand.com/<instancename>
https://<subdomain>.dispatcher.hana.ondemand.com/<instancename>

NOTE
These values are not real. Update these values with the actual Sign-On URL, Identifier, and Reply URL. Contact SAP
Cloud Platform Client support team to get Sign-On URL and Identifier. Reply URL you can get from trust
management section which is explained later in the tutorial.

5. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, click
Download to download the Federation Metadata XML from the given options as per your requirement
and save it on your computer.

Configure SAP Cloud Platform Single Sign-On


1. In a different web browser window, sign on to the SAP Cloud Platform Cockpit at
https://account.<landscape host>.ondemand.com/cockpit (for example:
https://account.hanatrial.ondemand.com/cockpit).
2. Click the Trust tab.

3. In the Trust Management section, under Local Service Provider , perform the following steps:
a. Click Edit .
b. As Configuration Type , select Custom .
c. As Local Provider Name , leave the default value. Copy this value and paste it into the Identifier field in
the Azure AD configuration for SAP Cloud Platform.
d. To generate a Signing Key and a Signing Certificate key pair, click Generate Key Pair .
e. As Principal Propagation , select Disabled .
f. As Force Authentication , select Disabled .
g. Click Save .
4. After saving the Local Service Provider settings, perform the following to obtain the Reply URL:
a. Download the SAP Cloud Platform metadata file by clicking Get Metadata .
b. Open the downloaded SAP Cloud Platform metadata XML file, and then locate the
ns3:AssertionConsumerService tag.
c. Copy the value of the Location attribute, and then paste it into the Reply URL field in the Azure AD
configuration for SAP Cloud Platform. A short script sketch for extracting this value follows these steps.
5. Click the Trusted Identity Provider tab, and then click Add Trusted Identity Provider .

NOTE
To manage the list of trusted identity providers, you need to have chosen the Custom configuration type in the Local
Service Provider section. For Default configuration type, you have a non-editable and implicit trust to the SAP ID
Service. For None, you don't have any trust settings.

6. Click the General tab, and then click Browse to upload the downloaded metadata file.
NOTE
After uploading the metadata file, the values for Single Sign-on URL , Single Logout URL , and Signing
Cer tificate are populated automatically.

7. Click the Attributes tab.


8. On the Attributes tab, perform the following step:

a. Click Add Assertion-Based Attribute , and then add the following assertion-based attributes:

Assertion Attribute | Principal Attribute
firstname | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname
lastname | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname
email | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress

NOTE
The configuration of the Attributes depends on how the application(s) on SCP are developed, that is, which
attribute(s) they expect in the SAML response and under which name (Principal Attribute) they access this attribute in
the code.

b. The Default Attribute in the screenshot is just for illustration purposes. It is not required to make the
scenario work.
c. The names and values for Principal Attribute shown in the screenshot depend on how the application is
developed. It is possible that your application requires different mappings.
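If you would rather extract the Reply URL from the downloaded SAP Cloud Platform metadata with a script (see step 4 above), the following PowerShell sketch shows one possible approach. The file name scp-metadata.xml is a placeholder for wherever you saved the metadata file.

# Sketch: print the AssertionConsumerService Location values from the downloaded
# SAP Cloud Platform metadata; one of these is the value to paste into Reply URL.
[xml]$metadata = Get-Content -Path .\scp-metadata.xml -Raw
$metadata.EntityDescriptor.SPSSODescriptor.AssertionConsumerService |
    ForEach-Object { "{0}  {1}" -f $_.Binding, $_.Location }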
Assertion-based groups
As an optional step, you can configure assertion-based groups for your Azure Active Directory Identity Provider.
Using groups on SAP Cloud Platform allows you to dynamically assign one or more users to one or more roles in
your SAP Cloud Platform applications, determined by values of attributes in the SAML 2.0 assertion.
For example, if the assertion contains the attribute "contract=temporary", you may want all affected users to be
added to the group "TEMPORARY". The group "TEMPORARY" may contain one or more roles from one or more
applications deployed in your SAP Cloud Platform account.
Use assertion-based groups when you want to simultaneously assign many users to one or more roles of
applications in your SAP Cloud Platform account. If you want to assign only a single or small number of users to
specific roles, we recommend assigning them directly in the “Authorizations” tab of the SAP Cloud Platform
cockpit.
Create an Azure AD test user
The objective of this section is to create a test user in the Azure portal called Britta Simon.
1. In the Azure portal, in the left pane, select Azure Active Directory , select Users , and then select All users .

2. Select New user at the top of the screen.

3. In the User properties, perform the following steps.


a. In the Name field enter BrittaSimon .
b. In the User name field type brittasimon@yourcompanydomain.extension .
For example, BrittaSimon@contoso.com
c. Select Show password check box, and then write down the value that's displayed in the Password box.
d. Click Create .
Assign the Azure AD test user
In this section, you enable Britta Simon to use Azure single sign-on by granting access to SAP Cloud Platform.
1. In the Azure portal, select Enterprise Applications , select All applications , then select SAP Cloud
Platform .

2. In the applications list, type and select SAP Cloud Platform .


3. In the menu on the left, select Users and groups .

4. Click the Add user button, then select Users and groups in the Add Assignment dialog.

5. In the Users and groups dialog select Britta Simon in the Users list, then click the Select button at the
bottom of the screen.
6. If you are expecting any role value in the SAML assertion then in the Select Role dialog select the
appropriate role for the user from the list, then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog click the Assign button.
Create SAP Cloud Platform test user
In order to enable Azure AD users to log in to SAP Cloud Platform, you must assign roles in the SAP Cloud Platform
to them.
To assign a role to a user, perform the following steps:
1. Log in to your SAP Cloud Platform cockpit.
2. Perform the following:
a. Click Authorization .
b. Click the Users tab.
c. In the User textbox, type the user’s email address.
d. Click Assign to assign the user to a role.
e. Click Save .
Test single sign-on
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP Cloud Platform tile in the Access Panel, you should be automatically signed in to the SAP
Cloud Platform for which you set up SSO. For more information about the Access Panel, see Introduction to the
Access Panel.

Additional Resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is Conditional Access in Azure Active Directory?
Tutorial: Azure Active Directory Single sign-on (SSO)
integration with SAP NetWeaver
11/2/2020 • 11 minutes to read

In this tutorial, you'll learn how to integrate SAP NetWeaver with Azure Active Directory (Azure AD). When you
integrate SAP NetWeaver with Azure AD, you can:
Control in Azure AD who has access to SAP NetWeaver.
Enable your users to be automatically signed-in to SAP NetWeaver with their Azure AD accounts.
Manage your accounts in one central location - the Azure portal.

Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
SAP NetWeaver single sign-on (SSO) enabled subscription.
SAP NetWeaver V7.20 or later is required

Scenario description
SAP NetWeaver supports both SAML (SP initiated SSO ) and OAuth . In this tutorial, you configure and test
Azure AD SSO in a test environment.

NOTE
Identifier of this application is a fixed string value so only one instance can be configured in one tenant.

NOTE
Configure the application either in SAML or in OAuth as per your organizational requirement.

Adding SAP NetWeaver from the gallery


To configure the integration of SAP NetWeaver into Azure AD, you need to add SAP NetWeaver from the gallery to
your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
2. On the left navigation pane, select the Azure Active Directory service.
3. Navigate to Enterprise Applications and then select All Applications .
4. To add new application, select New application .
5. In the Add from the gallery section, type SAP NetWeaver in the search box.
6. Select SAP NetWeaver from results panel and then add the app. Wait a few seconds while the app is added to
your tenant.

Configure and test Azure AD SSO for SAP NetWeaver


Configure and test Azure AD SSO with SAP NetWeaver using a test user called B.Simon . For SSO to work, you
need to establish a link relationship between an Azure AD user and the related user in SAP NetWeaver.
To configure and test Azure AD SSO with SAP NetWeaver, perform the following steps:
1. Configure Azure AD SSO to enable your users to use this feature.
a. Create an Azure AD test user to test Azure AD single sign-on with B.Simon.
b. Assign the Azure AD test user to enable B.Simon to use Azure AD single sign-on.
2. Configure SAP NetWeaver using SAML to configure the SSO settings on application side.
a. Create SAP NetWeaver test user to have a counterpart of B.Simon in SAP NetWeaver that is linked to
the Azure AD representation of user.
3. Test SSO to verify whether the configuration works.
4. Configure SAP NetWeaver for OAuth to configure the OAuth settings on application side.

Configure Azure AD SSO


In this section, you enable Azure AD single sign-on in the Azure portal.
To configure Azure AD single sign-on with SAP NetWeaver, perform the following steps:
1. Open a new web browser window and sign into your SAP NetWeaver company site as an administrator
2. Make sure that http and https services are active and appropriate ports are assigned in SMICM T-Code.
3. Sign on to the business client of the SAP system (T01), where SSO is required, and activate HTTP security session
management.
a. Go to Transaction code SICF_SESSIONS . It displays all relevant profile parameters with current values.
They look like the following:

login/create_sso2_ticket = 2
login/accept_sso2_ticket = 1
login/ticketcache_entries_max = 1000
login/ticketcache_off = 0
login/ticket_only_by_https = 0
icf/set_HTTPonly_flag_on_cookies = 3
icf/user_recheck = 0
http/security_session_timeout = 1800
http/security_context_cache_size = 2500
rdisp/plugin_auto_logout = 1800
rdisp/autothtime = 60

NOTE
Adjust the above parameters as per your organization's requirements; they are given here as an indication only.

b. If necessary, adjust the parameters in the instance/default profile of the SAP system and restart the SAP system.
c. Double-click on the relevant client to enable the HTTP security session.
d. Activate the SICF services below:

/sap/public/bc/sec/saml2
/sap/public/bc/sec/cdc_ext_service
/sap/bc/webdynpro/sap/saml2
/sap/bc/webdynpro/sap/sec_diag_tool (This is only to enable / disable trace)

4. Go to Transaction code SAML2 in the business client of the SAP system [T01/122]. It will open a user interface in a
browser. In this example, we assumed 122 as the SAP business client.

5. Provide your username and password to sign in to the user interface, and click Edit .
6. Replace the Provider Name from T01122 to http://T01122 and click Save .

NOTE
By default, the provider name comes in the <sid><client> format, but Azure AD expects the name in the format
<protocol>://<name> . We recommend maintaining the provider name as https://<sid><client> to allow multiple
SAP NetWeaver ABAP engines to be configured in Azure AD.
7. Generating Service Provider Metadata : Once we are done configuring the Local Provider and
Trusted Providers settings in the SAML 2.0 user interface, the next step is to generate the service
provider's metadata file (which contains all the settings, authentication contexts, and other
configurations in SAP). Once this file is generated, we need to upload it in Azure AD.

a. Go to Local Provider tab .


b. Click on Metadata .
c. Save the generated Metadata XML file on your computer and upload it in Basic SAML Configuration
section to autopopulate the Identifier and Reply URL values in Azure portal.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the SAP NetWeaver application integration page, find the Manage section and
select Single sign-on .
2. On the Select a Single sign-on method page, select SAML .
3. On the Set up Single Sign-On with SAML page, click the edit/pen icon for Basic SAML Configuration
to edit the settings.

4. On the Basic SAML Configuration section, if you wish to configure the application in IDP initiated mode,
perform the following step:
a. Click Upload metadata file to upload the Service Provider metadata file , which you have obtained
earlier.
b. Click on folder logo to select the metadata file and click Upload .
c. After the metadata file is successfully uploaded, the Identifier and Reply URL values get auto populated
in Basic SAML Configuration section textbox as shown below:
d. In the Sign-on URL text box, type a URL using the following pattern:
https://<your company instance of SAP NetWeaver>

NOTE
We have seen a few customers report an error of an incorrect Reply URL configured for their instance. If you receive any
such error, you can use the following PowerShell script as a workaround to set the correct Reply URL for your instance:

Set-AzureADServicePrincipal -ObjectId $ServicePrincipalObjectId -ReplyUrls "<Your Correct Reply URL(s)>"

Set the ServicePrincipal Object ID yourself first, or pass it here as well. A fuller sketch showing how to look up the service principal follows these steps.

5. SAP NetWeaver application expects the SAML assertions in a specific format, which requires you to add
custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the
list of default attributes. Click Edit icon to open User Attributes dialog.
6. In the User Claims section on the User Attributes dialog, configure SAML token attribute as shown in the
image above and perform the following steps:
a. Click Edit icon to open the Manage user claims dialog.

b. From the Transformation list, select ExtractMailPrefix() .


c. From the Parameter 1 list, select user.userprincipalname .
d. Click Save .
7. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, find
Federation Metadata XML and select Download to download the certificate and save it on your
computer.
8. On the Set up SAP NetWeaver section, copy the appropriate URL(s) based on your requirement.
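As a fuller illustration of the Reply URL workaround mentioned in the note under step 4, the sketch below first looks up the service principal of the enterprise application and then sets its reply URL(s). It assumes the AzureAD PowerShell module is installed and that the application display name is SAP NetWeaver; the Reply URL value is a placeholder you must replace.

# Sketch (AzureAD PowerShell module assumed): find the SAP NetWeaver service
# principal, then overwrite its reply URL(s) with the correct value(s).
Connect-AzureAD
$sp = Get-AzureADServicePrincipal -Filter "displayName eq 'SAP NetWeaver'"
Set-AzureADServicePrincipal -ObjectId $sp.ObjectId -ReplyUrls "<Your Correct Reply URL(s)>"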

Create an Azure AD test user


In this section, you'll create a test user in the Azure portal called B.Simon.
1. From the left pane in the Azure portal, select Azure Active Directory , select Users , and then select All users .
2. Select New user at the top of the screen.
3. In the User properties, follow these steps:
a. In the Name field, enter B.Simon .
b. In the User name field, enter the username@companydomain.extension. For example,
B.Simon@contoso.com .
c. Select the Show password check box, and then write down the value that's displayed in the Password
box.
d. Click Create .
Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAP NetWeaver.
1. In the Azure portal, select Enterprise Applications , and then select All applications .
2. In the applications list, select SAP NetWeaver .
3. In the app's overview page, find the Manage section and select Users and groups .
4. Select Add user , then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the bottom
of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate role
for the user from the list and then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog, click the Assign button.

Configure SAP NetWeaver using SAML


1. Sign in to the SAP system and go to transaction code SAML2. It opens a new browser window with the SAML
configuration screen.
2. To configure the endpoints for the trusted identity provider (Azure AD), go to the Trusted Providers tab.

3. Press Add and select Upload Metadata File from the context menu.
4. Upload metadata file, which you have downloaded from the Azure portal.

5. On the next screen, type the Alias name (for example, aadsts) and press Next to continue.

6. Make sure that your Digest Algorithm is SHA-256 , which doesn't require any changes, and press
Next .
7. On Single Sign-On Endpoints , use HTTP POST and click Next to continue.

8. On Single Logout Endpoints select HTTPRedirect and click Next to continue.

9. On Artifact Endpoints , press Next to continue.


10. On Authentication Requirements , click Finish .

11. Go to the Trusted Provider > Identity Federation tab (at the bottom of the screen). Click Edit .
12. Click Add under the Identity Federation tab (bottom window).

13. From the pop-up window, select Unspecified from the Supported NameID formats and click OK.

14. Note that user ID Source and user ID mapping mode values determine the link between SAP user and
Azure AD claim.
Scenario: SAP User to Azure AD user mapping.
a. NameID details screenshot from SAP.
b. Screenshot mentioning Required claims from Azure AD.

Scenario: Select SAP user ID based on configured email address in SU01. In this case email ID should be configured in su01 for
each user who requires SSO.
a. NameID details screenshot from SAP.

b. screenshot mentioning Required claims from Azure AD.

15. Click Save and then click Enable to enable identity provider.

16. Click OK once prompted.


Create SAP NetWeaver test user
In this section, you create a user called B.Simon in SAP NetWeaver. Please work with your in-house SAP expert
team or with your organization's SAP partner to add the users to the SAP NetWeaver platform.

Test SSO
1. Once the identity provider Azure AD is activated, try accessing the URL below to check SSO (there will be no
prompt for username & password):

https://<sapurl>/sap/bc/bsp/sap/it00/default.htm

NOTE
Replace sapurl with actual SAP hostname.

2. The above URL should take you to the screen mentioned below. If you are able to reach the page below, the
Azure AD SSO setup was done successfully.

3. If username & password prompt occurs, please diagnose the issue by enable the trace using below URL
https://<sapurl>/sap/bc/webdynpro/sap/sec_diag_tool?sap-client=122&sap-language=EN#

Configure SAP NetWeaver for OAuth


1. The SAP documented process is available at: NetWeaver Gateway Service Enabling and OAuth 2.0
Scope Creation
2. Go to SPRO and find Activate and Maintain services .
3. In this example, we want to connect the OData service DAAG_MNGGRP with OAuth to Azure AD SSO. Use the
technical service name to search for the service DAAG_MNGGRP and activate it if it is not yet active (look for a
green status under the ICF nodes tab). Ensure that the system alias (the connected backend system, where the service
is actually running) is correct.

Then click the OAuth pushbutton on the top button bar and assign the scope (keep the default name as offered).
4. For our example, the scope is DAAG_MNGGRP_001 ; it is generated from the service name by automatically
adding a number. The report /IWFND/R_OAUTH_SCOPES can be used to change the name of the scope or to create it manually.
NOTE
The message "soft state status is not supported" can be ignored; it is not a problem. For more details, refer here

Create a service user for the OAuth 2.0 Client


1. OAuth2 uses a service ID to get the access token for the end user on its behalf. An important restriction by
OAuth design: the OAuth 2.0 Client ID must be identical to the username that the OAuth 2.0 client uses for
login when requesting an Access Token. Therefore, for our example, we are going to register an OAuth 2.0
client with the name CLIENT1. As a prerequisite, a user with the same name (CLIENT1) must exist in the SAP
system, and we will configure that user to be used by the referred application.
2. When registering an OAuth Client we use the SAML Bearer Grant type .

NOTE
For more details, refer to OAuth 2.0 Client Registration for the SAML Bearer Grant Type here

3. In transaction SU01 , create the user CLIENT1 as a System type user and assign a password. Save it, as you need to provide the
credential to the API programmer, who should embed it with the username in the calling code. No profile or
role should be assigned.
Register the new OAuth 2.0 Client ID with the creation wizard
1. To register a new OAuth 2.0 client, start transaction SOAUTH2 . The transaction will display an overview
of the OAuth 2.0 clients that were already registered. Choose Create to start the wizard for the new
OAuth client, named CLIENT1 in this example.
2. Go to T-Code SOAUTH2 , provide the description, and then click Next .
3. Select the already added SAML2 IdP – Azure AD from the dropdown list and save.
4. Click on Add under scope assignment to add the previously created scope: DAAG_MNGGRP_001
5. Click finish .

Next Steps
Once you configure SAP NetWeaver, you can enforce Session Control, which protects exfiltration and
infiltration of your organization’s sensitive data in real time. Session Control extends from Conditional Access.
Learn how to enforce session control with Microsoft Cloud App Security
Tutorial: Azure Active Directory integration with SAP
Business ByDesign
11/2/2020 • 7 minutes to read

In this tutorial, you learn how to integrate SAP Business ByDesign with Azure Active Directory (Azure AD).
Integrating SAP Business ByDesign with Azure AD provides you with the following benefits:
You can control in Azure AD who has access to SAP Business ByDesign.
You can enable your users to be automatically signed-in to SAP Business ByDesign (Single Sign-On) with their
Azure AD accounts.
You can manage your accounts in one central location - the Azure portal.
If you want to know more details about SaaS app integration with Azure AD, see What is application access and
single sign-on with Azure Active Directory. If you don't have an Azure subscription, create a free account before you
begin.

Prerequisites
To configure Azure AD integration with SAP Business ByDesign, you need the following items:
An Azure AD subscription. If you don't have an Azure AD environment, you can get a free account
SAP Business ByDesign single sign-on enabled subscription

Scenario description
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
SAP Business ByDesign supports SP initiated SSO

Adding SAP Business ByDesign from the gallery


To configure the integration of SAP Business ByDesign into Azure AD, you need to add SAP Business ByDesign from
the gallery to your list of managed SaaS apps.
To add SAP Business ByDesign from the galler y, perform the following steps:
1. In the Azure portal, on the left navigation panel, click the Azure Active Directory icon.

2. Navigate to Enterprise Applications and then select the All Applications option.
3. To add a new application, click the New application button at the top of the dialog.

4. In the search box, type SAP Business ByDesign , select SAP Business ByDesign from the result panel, then
click the Add button to add the application.

Configure and test Azure AD single sign-on


In this section, you configure and test Azure AD single sign-on with SAP Business ByDesign based on a test user
called Britta Simon . For single sign-on to work, a link relationship between an Azure AD user and the related user
in SAP Business ByDesign needs to be established.
To configure and test Azure AD single sign-on with SAP Business ByDesign, you need to complete the following
building blocks:
1. Configure Azure AD Single Sign-On - to enable your users to use this feature.
2. Configure SAP Business ByDesign Single Sign-On - to configure the Single Sign-On settings on
application side.
3. Create an Azure AD test user - to test Azure AD single sign-on with Britta Simon.
4. Assign the Azure AD test user - to enable Britta Simon to use Azure AD single sign-on.
5. Create SAP Business ByDesign test user - to have a counterpart of Britta Simon in SAP Business ByDesign
that is linked to the Azure AD representation of user.
6. Test single sign-on - to verify whether the configuration works.
Configure Azure AD single sign-on
In this section, you enable Azure AD single sign-on in the Azure portal.
To configure Azure AD single sign-on with SAP Business ByDesign, perform the following steps:
1. In the Azure portal, on the SAP Business ByDesign application integration page, select Single sign-on .

2. On the Select a Single sign-on method dialog, select SAML/WS-Fed mode to enable single sign-on.

3. On the Set up Single Sign-On with SAML page, click Edit icon to open Basic SAML Configuration
dialog.

4. On the Basic SAML Configuration section, perform the following steps:


a. In the Sign on URL text box, type a URL using the following pattern:
https://<servername>.sapbydesign.com

b. In the Identifier (Entity ID) text box, type a URL using the following pattern:
https://<servername>.sapbydesign.com

NOTE
These values are not real. Update these values with the actual Sign on URL and Identifier. Contact SAP Business
ByDesign Client support team to get these values. You can also refer to the patterns shown in the Basic SAML
Configuration section in the Azure portal.

5. SAP Business ByDesign application expects the SAML assertions in a specific format. Configure the following
claims for this application. You can manage the values of these attributes from the User Attributes section
on application integration page. On the Set up Single Sign-On with SAML page, click Edit button to
open User Attributes dialog.

6. Click on the Edit icon to edit the Name identifier value .


7. On the Manage user claims section, perform the following steps:

a. Select Transformation as a Source .


b. In the Transformation dropdown list, select ExtractMailPrefix() .
c. In the Parameter 1 dropdown list, select the user attribute you want to use for your implementation. For
example, if you want to use the EmployeeID as unique user identifier and you have stored the attribute value
in the ExtensionAttribute2, then select user.extensionattribute2.
d. Click Save .
8. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, click
Download to download the Federation Metadata XML from the given options as per your requirement
and save it on your computer.
9. On the Set up SAP Business ByDesign section, copy the appropriate URL(s) as per your requirement.

a. Login URL
b. Azure AD Identifier
c. Logout URL
Configure SAP Business ByDesign Single Sign-On
1. Sign on to your SAP Business ByDesign portal with administrator rights.
2. Navigate to Application and User Management Common Task and click the Identity Provider tab.
3. Click New Identity Provider and select the metadata XML file that you have downloaded from the Azure
portal. By importing the metadata, the system automatically uploads the required signature certificate and
encryption certificate.

4. To include the Assertion Consumer Service URL into the SAML request, select Include Assertion
Consumer Service URL .
5. Click Activate Single Sign-On .
6. Save your changes.
7. Click the My System tab.

8. In the Azure AD Sign On URL textbox, paste Login URL value, which you have copied from the Azure
portal.

9. Specify whether the employee can manually choose between logging on with user ID and password or SSO
by selecting Manual Identity Provider Selection .
10. In the SSO URL section, specify the URL that should be used by the employee to sign on to the system. In
the URL Sent to Employee dropdown list, you can choose between the following options:
Non-SSO URL
The system sends only the normal system URL to the employee. The employee cannot sign on using SSO,
and must use password or certificate instead.
SSO URL
The system sends only the SSO URL to the employee. The employee can sign on using SSO. Authentication
request is redirected through the IdP.
Automatic Selection
If SSO is not active, the system sends the normal system URL to the employee. If SSO is active, the system
checks whether the employee has a password. If a password is available, both SSO URL and Non-SSO URL
are sent to the employee. However, if the employee has no password, only the SSO URL is sent to the
employee.
11. Save your changes.
Create an Azure AD test user
The objective of this section is to create a test user in the Azure portal called Britta Simon.
1. In the Azure portal, in the left pane, select Azure Active Directory , select Users , and then select All users .
2. Select New user at the top of the screen.

3. In the User properties, perform the following steps.

a. In the Name field enter BrittaSimon .


b. In the User name field type brittasimon@yourcompanydomain.extension . For example,
BrittaSimon@contoso.com
c. Select Show password check box, and then write down the value that's displayed in the Password box.
d. Click Create .
Assign the Azure AD test user
In this section, you enable Britta Simon to use Azure single sign-on by granting access to SAP Business ByDesign.
1. In the Azure portal, select Enterprise Applications , select All applications , then select SAP Business
ByDesign .
2. In the applications list, select SAP Business ByDesign .

3. In the menu on the left, select Users and groups .

4. Click the Add user button, then select Users and groups in the Add Assignment dialog.

5. In the Users and groups dialog select Britta Simon in the Users list, then click the Select button at the
bottom of the screen.
6. If you are expecting any role value in the SAML assertion then in the Select Role dialog select the
appropriate role for the user from the list, then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog click the Assign button.
Create SAP Business ByDesign test user
In this section, you create a user called Britta Simon in SAP Business ByDesign. Please work with SAP Business
ByDesign Client support team to add the users in the SAP Business ByDesign platform.

NOTE
Please make sure that the NameID value matches the username field in the SAP Business ByDesign platform.
Test single sign-on
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP Business ByDesign tile in the Access Panel, you should be automatically signed in to the
SAP Business ByDesign for which you set up SSO. For more information about the Access Panel, see Introduction to
the Access Panel.

Additional Resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is Conditional Access in Azure Active Directory?
Tutorial: Azure Active Directory integration with SAP
HANA
11/2/2020 • 8 minutes to read

In this tutorial, you learn how to integrate SAP HANA with Azure Active Directory (Azure AD). Integrating SAP
HANA with Azure AD provides you with the following benefits:
You can control in Azure AD who has access to SAP HANA.
You can enable your users to be automatically signed-in to SAP HANA (Single Sign-On) with their Azure AD
accounts.
You can manage your accounts in one central location - the Azure portal.
If you want to know more details about SaaS app integration with Azure AD, see What is application access and
single sign-on with Azure Active Directory. If you don't have an Azure subscription, create a free account before you
begin.

Prerequisites
To configure Azure AD integration with SAP HANA, you need the following items:
An Azure AD subscription
A SAP HANA subscription that's single sign-on (SSO) enabled
A HANA instance that's running on any public IaaS, on-premises, Azure VM, or SAP large instances in Azure
The XSA Administration web interface, as well as HANA Studio installed on the HANA instance

NOTE
We do not recommend using a production environment of SAP HANA to test the steps in this tutorial. Test the integration
first in the development or staging environment of the application, and then use the production environment.

To test the steps in this tutorial, follow these recommendations:


An Azure AD subscription. If you don't have an Azure AD environment, you can get a one-month trial here
SAP HANA single sign-on enabled subscription

Scenario description
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
SAP HANA supports IDP initiated SSO
SAP HANA supports just-in-time user provisioning

Adding SAP HANA from the gallery


To configure the integration of SAP HANA into Azure AD, you need to add SAP HANA from the gallery to your list of
managed SaaS apps.
To add SAP HANA from the galler y, perform the following steps:
1. In the Azure portal, on the left navigation panel, click the Azure Active Directory icon.
2. Navigate to Enterprise Applications and then select the All Applications option.

3. To add a new application, click the New application button at the top of the dialog.

4. In the search box, type SAP HANA , select SAP HANA from the result panel, then click the Add button to add the
application.

Configure and test Azure AD single sign-on


In this section, you configure and test Azure AD single sign-on with SAP HANA based on a test user called Britta
Simon . For single sign-on to work, a link relationship between an Azure AD user and the related user in SAP HANA
needs to be established.
To configure and test Azure AD single sign-on with SAP HANA, you need to complete the following building blocks:
1. Configure Azure AD Single Sign-On - to enable your users to use this feature.
2. Configure SAP HANA Single Sign-On - to configure the Single Sign-On settings on application side.
3. Create an Azure AD test user - to test Azure AD single sign-on with Britta Simon.
4. Assign the Azure AD test user - to enable Britta Simon to use Azure AD single sign-on.
5. Create SAP HANA test user - to have a counterpart of Britta Simon in SAP HANA that is linked to the Azure
AD representation of user.
6. Test single sign-on - to verify whether the configuration works.
Configure Azure AD single sign-on
In this section, you enable Azure AD single sign-on in the Azure portal.
To configure Azure AD single sign-on with SAP HANA, perform the following steps:
1. In the Azure portal, on the SAP HANA application integration page, select Single sign-on .

2. On the Select a Single sign-on method dialog, select SAML/WS-Fed mode to enable single sign-on.

3. On the Set up Single Sign-On with SAML page, click Edit icon to open Basic SAML Configuration
dialog.
4. On the Set up Single Sign-On with SAML page, perform the following steps:

a. In the Identifier text box, type the following: HA100

b. In the Reply URL text box, type a URL using the following pattern:
https://<Customer-SAP-instance-url>/sap/hana/xs/saml/login.xscfunc

NOTE
These values are not real. Update these values with the actual Identifier and Reply URL. Contact SAP HANA Client
support team to get these values. You can also refer to the patterns shown in the Basic SAML Configuration
section in the Azure portal.

5. SAP HANA application expects the SAML assertions in a specific format. Configure the following claims for
this application. You can manage the values of these attributes from the User Attributes section on
application integration page. On the Set up Single Sign-On with SAML page, click Edit button to open
User Attributes dialog.

6. In the User attributes section on the User Attributes & Claims dialog, perform the following steps:
a. Click Edit icon to open the Manage user claims dialog.
b. From the Transformation list, select ExtractMailPrefix() .
c. From the Parameter 1 list, select user.mail .
d. Click Save .
7. On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, click
Download to download the Federation Metadata XML from the given options as per your requirement
and save it on your computer.

Configure SAP HANA Single Sign-On


1. To configure single sign-on on the SAP HANA side, sign in to your HANA XSA Web Console by going to
the respective HTTPS endpoint.
NOTE
In the default configuration, the URL redirects the request to a sign-in screen, which requires the credentials of an
authenticated SAP HANA database user. The user who signs in must have permissions to perform SAML
administration tasks.

2. In the XSA Web Interface, go to SAML Identity Provider . From there, select the + button on the bottom of
the screen to display the Add Identity Provider Info pane. Then take the following steps:

a. In the Add Identity Provider Info pane, paste the contents of the Metadata XML (which you
downloaded from the Azure portal) into the Metadata box.

b. If the contents of the XML document are valid, the parsing process extracts the information that's required
for the Subject, Entity ID, and Issuer fields in the General data screen area. It also extracts the
information that's necessary for the URL fields in the Destination screen area, for example, the Base URL
and SingleSignOn URL (*) fields.
c. In the Name box of the General Data screen area, enter a name for the new SAML SSO identity provider.

NOTE
The name of the SAML IDP is mandatory and must be unique. It appears in the list of available SAML IDPs that is
displayed when you select SAML as the authentication method for SAP HANA XS applications to use. For example,
you can do this in the Authentication screen area of the XS Artifact Administration tool.

3. Select Save to save the details of the SAML identity provider and to add the new SAML IDP to the list of
known SAML IDPs.

4. In HANA Studio, within the system properties of the Configuration tab, filter the settings by saml . Then
adjust the assertion_timeout from 10 sec to 120 sec .
Create an Azure AD test user
The objective of this section is to create a test user in the Azure portal called Britta Simon.
1. In the Azure portal, in the left pane, select Azure Active Directory , select Users , and then select All users .

2. Select New user at the top of the screen.

3. In the User properties, perform the following steps.


a. In the Name field enter BrittaSimon .
b. In the User name field type brittasimon@yourcompanydomain.extension .
For example, BrittaSimon@contoso.com
c. Select Show password check box, and then write down the value that's displayed in the Password box.
d. Click Create .
Assign the Azure AD test user
In this section, you enable Britta Simon to use Azure single sign-on by granting access to SAP HANA.
1. In the Azure portal, select Enterprise Applications , select All applications , then select SAP HANA .

2. In the applications list, type and select SAP HANA .


3. In the menu on the left, select Users and groups .

4. Click the Add user button, then select Users and groups in the Add Assignment dialog.

5. In the Users and groups dialog select Britta Simon in the Users list, then click the Select button at the
bottom of the screen.
6. If you are expecting any role value in the SAML assertion then in the Select Role dialog select the
appropriate role for the user from the list, then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog click the Assign button.
Create SAP HANA test user
To enable Azure AD users to sign in to SAP HANA, you must provision them in SAP HANA. SAP HANA supports
just-in-time provisioning , which is enabled by default.
If you need to create a user manually, take the following steps:
NOTE
You can change the external authentication that the user uses. They can authenticate with an external system such as
Kerberos. For detailed information about external identities, contact your domain administrator.

1. Open the SAP HANA Studio as an administrator, and then enable the DB-User for SAML SSO.

2. Select the invisible check box to the left of SAML , and then select the Configure link.
3. Select Add to add the SAML IDP. Select the appropriate SAML IDP, and then select OK .
4. Add the External Identity (in this case, BrittaSimon) or choose Any . Then select OK .

NOTE
If the Any check box is not selected, then the user name in HANA needs to exactly match the name of the user in the
UPN before the domain suffix. (For example, BrittaSimon@contoso.com becomes BrittaSimon in HANA; a short sketch illustrating this follows these steps.)

5. For testing purposes, assign all XS roles to the user.


TIP
You should give permissions that are appropriate for your use cases only.

6. Save the user.
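As a quick illustration of the matching rule described in the note under step 4: the ExtractMailPrefix() transformation configured earlier, and the user-name matching when Any is not selected, both use the part of the Azure AD identifier before the @ sign. The sketch below is illustrative only, and the sample value is a placeholder.

# Sketch: what ExtractMailPrefix() effectively yields from the source attribute.
$sourceValue = "BrittaSimon@contoso.com"    # placeholder value
$mailPrefix = ($sourceValue -split "@")[0]
$mailPrefix                                 # -> BrittaSimon, the name the HANA user must carry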


Test single sign-on
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP HANA tile in the Access Panel, you should be automatically signed in to the SAP HANA for
which you set up SSO. For more information about the Access Panel, see Introduction to the Access Panel.

Additional Resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is Conditional Access in Azure Active Directory?
Tutorial: Azure Active Directory single sign-on (SSO)
integration with SAP Cloud for Customer
11/2/2020 • 6 minutes to read

In this tutorial, you'll learn how to integrate SAP Cloud for Customer with Azure Active Directory (Azure AD). When
you integrate SAP Cloud for Customer with Azure AD, you can:
Control in Azure AD who has access to SAP Cloud for Customer.
Enable your users to be automatically signed-in to SAP Cloud for Customer with their Azure AD accounts.
Manage your accounts in one central location - the Azure portal.
To learn more about SaaS app integration with Azure AD, see What is application access and single sign-on with
Azure Active Directory.

Prerequisites
To get started, you need the following items:
An Azure AD subscription. If you don't have a subscription, you can get a free account.
SAP Cloud for Customer single sign-on (SSO) enabled subscription.

Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
SAP Cloud for Customer supports SP initiated SSO

Adding SAP Cloud for Customer from the gallery


To configure the integration of SAP Cloud for Customer into Azure AD, you need to add SAP Cloud for Customer
from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
2. On the left navigation pane, select the Azure Active Directory service.
3. Navigate to Enterprise Applications and then select All Applications .
4. To add new application, select New application .
5. In the Add from the gallery section, type SAP Cloud for Customer in the search box.
6. Select SAP Cloud for Customer from results panel and then add the app. Wait a few seconds while the app is
added to your tenant.

Configure and test Azure AD single sign-on for SAP Cloud for Customer
Configure and test Azure AD SSO with SAP Cloud for Customer using a test user called B.Simon . For SSO to work,
you need to establish a link relationship between an Azure AD user and the related user in SAP Cloud for Customer.
To configure and test Azure AD SSO with SAP Cloud for Customer, complete the following building blocks:
1. Configure Azure AD SSO - to enable your users to use this feature.
a. Create an Azure AD test user - to test Azure AD single sign-on with B.Simon.
b. Assign the Azure AD test user - to enable B.Simon to use Azure AD single sign-on.
2. Configure SAP Cloud for Customer SSO - to configure the single sign-on settings on application side.
a. Create SAP Cloud for Customer test user - to have a counterpart of B.Simon in SAP Cloud for
Customer that is linked to the Azure AD representation of user.
3. Test SSO - to verify whether the configuration works.

Configure Azure AD SSO


Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the SAP Cloud for Customer application integration page, find the Manage
section and select single sign-on .
2. On the Select a single sign-on method page, select SAML .
3. On the Set up single sign-on with SAML page, click the edit/pen icon for Basic SAML Configuration
to edit the settings.

4. On the Basic SAML Configuration section, enter the values for the following fields:
a. In the Sign on URL text box, type a URL using the following pattern:
https://<server name>.crm.ondemand.com

b. In the Identifier (Entity ID) text box, type a URL using the following pattern:
https://<server name>.crm.ondemand.com

NOTE
These values are not real. Update these values with the actual Sign on URL and Identifier. Contact SAP Cloud for
Customer Client support team to get these values. You can also refer to the patterns shown in the Basic SAML
Configuration section in the Azure portal.

5. SAP Cloud for Customer application expects the SAML assertions in a specific format, which requires you to
add custom attribute mappings to your SAML token attributes configuration. The following screenshot
shows the list of default attributes. Click Edit icon to open User Attributes dialog.

6. In the User Attributes section on the User Attributes & Claims dialog, perform the following steps:
a. Click Edit icon to open the Manage user claims dialog.
b. Select Transformation as a source.
c. From the Transformation list, select ExtractMailPrefix(). This transformation returns the portion of the source value before the @ sign.
d. From the Parameter 1 list, select the user attribute you want to use for your implementation. For
example, if you want to use the EmployeeID as the unique user identifier and you have stored the attribute value
in ExtensionAttribute2, select user.extensionattribute2.
e. Click Save .
7. On the Set up single sign-on with SAML page, in the SAML Signing Certificate section, find
Federation Metadata XML and select Download to download the certificate and save it on your
computer.
8. On the Set up SAP Cloud for Customer section, copy the appropriate URL(s) based on your requirement.

Create an Azure AD test user


In this section, you'll create a test user in the Azure portal called B.Simon.
1. From the left pane in the Azure portal, select Azure Active Directory, select Users, and then select All users.
2. Select New user at the top of the screen.
3. In the User properties, follow these steps:
a. In the Name field, enter B.Simon .
b. In the User name field, enter the username@companydomain.extension. For example,
B.Simon@contoso.com.
c. Select the Show password check box, and then write down the value that's displayed in the Password
box.
d. Click Create .
Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAP Cloud for Customer.
1. In the Azure portal, select Enterprise Applications , and then select All applications .
2. In the applications list, select SAP Cloud for Customer .
3. In the app's overview page, find the Manage section and select Users and groups .

4. Select Add user , then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select B.Simon from the Users list, then click the Select button at the
bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate
role for the user from the list and then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog, click the Assign button.

Configure SAP Cloud for Customer SSO


1. Open a new web browser window and sign into your SAP Cloud for Customer company site as an
administrator.
2. From the left side of menu, click on Identity Providers > Corporate Identity Providers > Add and on
the pop-up add the Identity provider name like Azure AD , click Save then click on SAML 2.0
Configuration .

3. On the SAML 2.0 Configuration section, perform the following steps:


a. Click Browse to upload the Federation Metadata XML file, which you have downloaded from Azure portal.
b. Once the XML file is successfully uploaded, the values are populated automatically. Then click Save.
Create SAP Cloud for Customer test user
To enable Azure AD users to sign in to SAP Cloud for Customer, they must be provisioned into SAP Cloud for
Customer. In SAP Cloud for Customer, provisioning is a manual task.
To provision a user account, perform the following steps:
1. Sign in to SAP Cloud for Customer as a Security Administrator.
2. From the left side of the menu, click on Users & Authorizations > User Management > Add User .

3. On the Add New User section, perform the following steps:

a. In the First Name text box, enter the first name of the user, such as B.
b. In the Last Name text box, enter the last name of the user, such as Simon.
c. In the E-Mail text box, enter the email address of the user, such as B.Simon@contoso.com.
d. In the Login Name text box, enter the login name of the user, such as B.Simon.
e. Select User Type as per your requirement.
f. Select Account Activation option as per your requirement.

Test SSO
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP Cloud for Customer tile in the Access Panel, you should be automatically signed in to the
SAP Cloud for Customer for which you set up SSO. For more information about the Access Panel, see Introduction
to the Access Panel.

Additional resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
What is Conditional Access in Azure Active Directory?
Try SAP Cloud for Customer with Azure AD
Azure Monitor for SAP Solutions (preview)
12/22/2020 • 5 minutes to read

Overview
Azure Monitor for SAP Solutions is an Azure-native monitoring product for customers running their SAP
landscapes on Azure. The product works with both SAP on Azure Virtual Machines and SAP on Azure Large
Instances. With Azure Monitor for SAP Solutions, customers can collect telemetry data from Azure infrastructure
and databases in one central location and visually correlate telemetry data for faster troubleshooting.
Azure Monitor for SAP Solutions is offered through Azure Marketplace. It provides a simple, intuitive setup
experience and takes only a few clicks to deploy the resource for Azure Monitor for SAP Solutions (known as SAP
monitor resource ).
Customers can monitor different components of an SAP landscape such as Azure Virtual Machines, High-
availability cluster, SAP HANA database and so on, by adding the corresponding provider for that component.
Supported infrastructure:
Azure Virtual Machine
Azure Large Instance
Supported databases:
SAP HANA Database
Microsoft SQL Server
Azure Monitor for SAP Solutions leverages the power of existing Azure Monitor capabilities such as Log Analytics
and Workbooks to provide additional monitoring capabilities. Customers can create custom visualizations by
editing the default Workbooks provided by Azure Monitor for SAP Solutions, write custom queries and create
custom alerts by using the Azure Log Analytics workspace, take advantage of the flexible retention period, and connect
monitoring data with their ticketing system.

What data does Azure Monitor for SAP solutions collect?


Data collection in Azure Monitor for SAP Solutions depends on the providers that are configured by customers.
During Public Preview, the following data is being collected.
High-availability Pacemaker cluster telemetry:
Node, resource, and SBD device status
Pacemaker location constraints
Quorum votes and ring status
Others
SAP HANA telemetry:
CPU, memory, disk, and network utilization
HANA System Replication (HSR)
HANA backup
HANA host status
Index server and Name server roles
Microsoft SQL server telemetry:
CPU, memory, disk utilization
Hostname, SQL Instance name, SAP System ID
Batch Requests, Compilations, and page Life Expectancy over time
Top 10 most expensive SQL statements over time
Top 12 largest table in the SAP system
Problems recorded in the SQL Server Error logs
Blocking processes and SQL Wait Statistics over time

Data sharing with Microsoft


Azure Monitor for SAP Solutions collects system metadata to provide improved support for our SAP on Azure
customers. No PII/EUII is collected. Customers can enable data sharing with Microsoft at the time of creating Azure
Monitor for SAP Solutions resource by choosing Share from the drop-down. It is highly recommended that
customers enable data sharing, as it gives Microsoft support and engineering teams more information about
customer environment and provides improved support to our mission-critical SAP on Azure customers.

Architecture overview
At a high level, the following diagram explains how Azure Monitor for SAP Solutions collects telemetry from SAP
HANA database. The architecture is agnostic to whether SAP HANA is deployed on Azure Virtual Machines or
Azure Large Instances.

The key components of the architecture are:


Azure portal – the starting point for customers. Customers can navigate to marketplace within Azure portal and
discover Azure Monitor for SAP Solutions
Azure Monitor for SAP Solutions resource – a landing place for customers to view monitoring telemetry
Managed resource group – deployed automatically as part of the Azure Monitor for SAP Solutions resource
deployment. The resources deployed within managed resource group help in collection of telemetry. Key
resources deployed and their purpose are:
Azure Virtual Machine: Also known as collector VM. This is a Standard_B2ms VM. The main purpose of
this VM is to host the Monitoring Payload. Monitoring payload refers to the logic of collecting telemetry
from the source systems and transferring the collected data to the monitoring framework. In the above
diagram, the monitoring payload contains the logic to connect to SAP HANA database over SQL port.
Azure Key Vault: This resource is deployed to securely hold SAP HANA database credentials and to store
information about providers.
Log Analytics Workspace: the destination where the telemetry data resides.
Visualization is built on top of telemetry in Log Analytics using Azure Workbooks. Customers can
customize visualization. Customers can also pin their Workbooks or specific visualization within
Workbooks to Azure dashboard for autorefresh capability with lowest granularity of 30 minutes.
Customers can use their existing workspace within the same subscription as SAP monitor
resource by choosing this option at the time of deployment.
Customers can use Kusto Query Language (KQL) to run queries against the raw tables inside the Log
Analytics workspace (see Custom Logs); a minimal query sketch follows this list.
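As a minimal sketch, such a query can also be run from PowerShell. The workspace name and the custom-log table name below are placeholders; check the Custom Logs section of your own workspace for the actual table names created by your providers.

# Query the Log Analytics workspace that backs the SAP monitor resource (names are placeholders).
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName 'myResourceGroup' -Name 'sapmonitor-test'
# Hypothetical custom-log table name; replace with a table that exists in your workspace.
$query = 'SapHana_HostConfig_CL | take 10'
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspace.CustomerId -Query $query
$result.Results | Format-Table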

NOTE
Customers are responsible for patching and maintaining the VM, deployed in the managed resource group.

TIP
Customers can choose to use an existing Log Analytics workspace for telemetry collection, if it is deployed within the same
Azure subscription as the resource for Azure Monitor for SAP Solutions.

Architecture Highlights
Following are the key highlights of the architecture:
Multi-instance - Customers can create a monitor for multiple instances of a given component type (for
example, HANA DB, HA cluster, Microsoft SQL server) across multiple SAP SIDs within a VNET with a single
resource of Azure Monitor for SAP Solutions.
Multi-provider - The above architecture diagram shows the SAP HANA provider as an example. Similarly,
customers can configure additional providers for corresponding components (for example, HANA DB, HA
cluster, Microsoft SQL server) to collect data from those components.
Open source - The source code of Azure Monitor for SAP Solutions is available in GitHub. Customers can refer
to the provider code and learn more about the product, contribute or share feedback.
Extensible query framework - SQL queries to collect telemetry data are written in JSON. Additional SQL
queries to collect more telemetry data can be easily added. Customers can request specific telemetry data to be
added to Azure Monitor for SAP Solutions, by leaving feedback through link in the end of this document or
contacting their account team.

Pricing
Azure Monitor for SAP Solutions is a free product (no license fee). Customers are responsible for paying the cost
for the underlying components in the managed resource group.

Next steps
Learn about providers and create your first Azure Monitor for SAP Solutions resource.
Learn more about Providers
Deploy Azure Monitor for SAP solutions with Azure PowerShell
Do you have questions about Azure Monitor for SAP Solutions? Check the FAQ section
Azure Monitor for SAP Solutions providers (preview)
12/22/2020 • 4 minutes to read

Overview
In the context of Azure Monitor for SAP Solutions, a provider type refers to a specific provider, for example SAP
HANA, which is configured for a specific component within the SAP landscape, like the SAP HANA database. A provider
contains the connection information for the corresponding component and helps to collect telemetry data from
that component. One Azure Monitor for SAP Solutions resource (also known as SAP monitor resource) can be
configured with multiple providers of the same provider type or multiple providers of multiple provider types.
Customers can choose to configure different provider types to enable data collection from corresponding
component in their SAP landscape. For Example, customers can configure one provider for SAP HANA provider
type, another provider for High-availability cluster provider type and so on.
Customers can also choose to configure multiple providers of a specific provider type to reuse the same SAP
monitor resource and the associated managed resource group. Learn more about the managed resource group. For public preview,
the following provider types are supported:
SAP HANA
High-availability cluster
Microsoft SQL Server

We recommend that customers configure at least one provider from the available provider types at the time of
deploying the SAP Monitor resource. By configuring a provider, customers initiate data collection from the
corresponding component for which the provider is configured.
If customers don't configure any providers at the time of deploying the SAP monitor resource, the resource will still be
deployed successfully, but no telemetry data will be collected. Customers have the option to
add providers after deployment through the SAP monitor resource within the Azure portal. Customers can add or delete
providers from the SAP monitor resource at any time.

TIP
If you would like Microsoft to implement a specific provider, please leave feedback through the link at the end of this document
or reach out to your account team.

Provider type SAP HANA


Customers can configure one or more providers of provider type SAP HANA to enable data collection from SAP
HANA database. The SAP HANA provider connects to the SAP HANA database over SQL port, pulls telemetry data
from the database, and pushes it to the Log Analytics workspace in the customer subscription. The SAP HANA
provider collects data every 1 minute from the SAP HANA database.
In public preview, customers can expect to see the following data with SAP HANA provider: Underlying
infrastructure utilization, SAP HANA Host status, SAP HANA System Replication, and SAP HANA Backup telemetry
data. To configure the SAP HANA provider, the host IP address, HANA SQL port number, and SYSTEMDB username and
password are required. We recommend configuring the SAP HANA provider against SYSTEMDB; however,
additional providers can be configured against other database tenants.
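As a sketch, the same provider can also be created with the Az.HanaOnAzure PowerShell module shown in the PowerShell quickstart later in this document. The resource names, host name, port, and credentials below are placeholders.

# Register an SAP HANA provider on an existing SAP monitor resource (all values are placeholders).
$SapHanaProviderParams = @{
    ResourceGroupName    = 'myResourceGroup'
    SapMonitorName       = 'ps-sapmonitor-t01'      # existing Azure Monitor for SAP Solutions resource
    Name                 = 'hana-systemdb-provider'
    ProviderType         = 'SapHana'
    HanaHostname         = 'hdb1-0'
    HanaDatabaseName     = 'SYSTEMDB'
    HanaDatabaseSqlPort  = '30013'
    HanaDatabaseUsername = 'AMSMONITOR'
    HanaDatabasePassword = (ConvertTo-SecureString '<password>' -AsPlainText -Force)
}
New-AzSapMonitorProviderInstance @SapHanaProviderParams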

Provider type High-availability cluster


Customers can configure one or more providers of provider type High-availability cluster to enable data collection
from Pacemaker clusters within the SAP landscape. The High-availability cluster provider connects to Pacemaker
using the ha_cluster_exporter endpoint, pulls telemetry data from the cluster, and pushes it to the Log Analytics
workspace in the customer subscription. The High-availability cluster provider collects data every 60 seconds from
Pacemaker.
In public preview, customers can expect to see the following data with High-availability cluster provider:
Cluster status represented as roll-up of node and resource status
others

To configure a High-availability cluster provider, two primary steps are involved:


1. Install ha_cluster_exporter in each node within the Pacemaker cluster.
You have two options for installing ha_cluster_exporter:
Use Azure Automation scripts to deploy a High-availability cluster. The scripts install ha_cluster_exporter
on each cluster node.
Do a manual installation.
2. Configure a High-availability cluster provider for each node within the Pacemaker cluster.
To configure the High-availability cluster provider, the following information is required:
Name . A name for this provider. It should be unique for this Azure Monitor for SAP solutions instance.
Prometheus Endpoint. Usually http://<servername or ip address>:9664/metrics. A quick reachability check is sketched after this list.
SID . For SAP systems, use the SAP SID. For other systems (for example, NFS clusters), use a three-
character name for the cluster. The SID must be distinct from other clusters that are monitored.
Cluster name . The cluster name used when creating the cluster. The cluster name can be found in the
cluster property cluster-name .
Hostname . The Linux hostname of the VM.
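Before adding the provider, it can help to confirm that the exporter endpoint on each node is reachable from within the VNET. The following is a minimal check from PowerShell; the hostname is a placeholder.

# Verify that ha_cluster_exporter answers on a cluster node (hostname is a placeholder).
$endpoint = 'http://hana-cl-node1:9664/metrics'
$response = Invoke-WebRequest -Uri $endpoint -UseBasicParsing
$response.StatusCode                                        # 200 indicates the exporter is up
($response.Content -split "`n") | Select-Object -First 5    # first few Prometheus metric lines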

Provider type Microsoft SQL Server


Customers can configure one or more providers of provider type Microsoft SQL Server to enable data collection
from SQL Server on Virtual Machines. SQL Server provider connects to Microsoft SQL Server over the SQL port,
pulls telemetry data from the database, and pushes it to the Log Analytics workspace in the customer subscription.
SQL Server must be configured for SQL authentication, and a SQL Server login with the SAP database as its default
database must be created for the provider. The SQL Server provider collects data at intervals between every 60 seconds
and every hour from SQL Server.
In public preview, customers can expect to see the following data with SQL Server provider: underlying
infrastructure utilization, top SQL statements, top largest table, problems recorded in the SQL Server error logs,
blocking processes and others.
To configure the Microsoft SQL Server provider, the SAP System ID, the host IP address, the SQL Server port number,
and the SQL Server login name and password are required.
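As a sketch, the SQL authentication prerequisites can be verified from a machine in the same VNET before adding the provider, for example from Windows PowerShell. The server address, database name, and credentials below are placeholders.

# Confirm that the monitoring login can connect with SQL authentication (all values are placeholders).
$connectionString = 'Server=10.0.0.10,1433;Database=PRD;User Id=AMS;Password=<password>;'
$connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
$connection.Open()
$connection.State       # 'Open' confirms the network path and the SQL login
$connection.Close()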

Next steps
Create your first Azure Monitor for SAP solutions resource.
Do you have questions about Azure Monitor for SAP Solutions? Check the FAQ section
Deploy Azure Monitor for SAP Solutions with Azure
portal
12/22/2020 • 2 minutes to read

Azure Monitor for SAP Solutions resources can be created through the Azure portal. This method provides a
browser-based user interface to deploy Azure Monitor for SAP Solutions and configure providers.

Sign in to Azure portal


Sign in to the Azure portal at https://portal.azure.com.

Create monitoring resource


1. Select Azure Monitor for SAP Solutions from Azure Marketplace .

2. In the Basics tab, provide the required values. If applicable, you can use an existing Log Analytics workspace.

3. When selecting a virtual network, ensure that the systems you want to monitor are reachable from within
that VNET.
IMPORTANT
Selecting Share for Data sharing with Microsoft enables our support teams to provide additional support.

Configure providers
SAP HANA provider
1. Select the Provider tab to add the providers you want to configure. You can add multiple providers one
after another or add them after deploying the monitoring resource.

2. Select Add provider and choose SAP HANA from the drop down.

IMPORTANT
Ensure that SAP HANA provider is configured for SAP HANA 'master' node.

3. Input the Private IP for the HANA server.


4. Input the name of the Database tenant you want to use. You can choose any tenant; however, we recommend
using SYSTEMDB, as it enables a wider array of monitoring areas.
5. Input the SQL port number associated with your HANA database. The port number follows the format
[3] + [instance#] + [13]; for example, 30013 (see the sketch after these steps).
6. Input the Database username you want to use. Ensure that database user has the monitoring and catalog
read roles assigned.
7. When finished, select Add provider . Continue to add additional providers as needed or select Review +
create to complete the deployment.
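The port arithmetic from step 5 can be expressed as a small sketch; the instance number is a placeholder, and the comment restates the role requirement from step 6.

# Derive the HANA SQL port from the two-digit instance number ([3] + [instance#] + [13]).
$instanceNumber = '00'                   # placeholder: your HANA instance number
$sqlPort = "3{0}13" -f $instanceNumber   # 30013 for instance 00
$sqlPort
# The database user entered in step 6 needs the monitoring and catalog read roles in the
# target database; grant them with your usual SQL tool before adding the provider.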
High-availability cluster (Pacemaker) provider
1. Select High-availability cluster (Pacemaker) from the drop down.

IMPORTANT
To configure the High-availability cluster (Pacemaker) provider, ensure that ha_cluster_provider is installed in each
node. For more information see HA cluster exporter

2. Input the Prometheus endpoint in the form of http://IP:9664/metrics.


3. Input the System ID (SID), hostname and cluster name.
4. When finished, select Add provider . Continue to add additional providers as needed or select Review +
create to complete the deployment.
Microsoft SQL Server provider
1. Prior to adding the Microsoft SQL Server provider, you should run the following script in SQL Server
Management Studio to create a user with the appropriate permissions needed to configure the provider.

-- Remove any existing AMS user and login, then recreate the monitoring login.
USE [<Database to monitor>]
DROP USER [AMS]
GO
USE [master]
DROP USER [AMS]
DROP LOGIN [AMS]
GO
CREATE LOGIN [AMS] WITH PASSWORD=N'<password>', DEFAULT_DATABASE=[<Database to monitor>],
DEFAULT_LANGUAGE=[us_english], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
CREATE USER AMS FOR LOGIN AMS
-- Grant read-only access plus the server-level permissions the provider needs.
ALTER ROLE [db_datareader] ADD MEMBER [AMS]
ALTER ROLE [db_denydatawriter] ADD MEMBER [AMS]
GRANT CONNECT TO AMS
GRANT VIEW SERVER STATE TO AMS
GRANT VIEW ANY DEFINITION TO AMS
GRANT EXEC ON xp_readerrorlog TO AMS
GO
-- Map the login into the monitored database with read-only access.
USE [<Database to monitor>]
CREATE USER [AMS] FOR LOGIN [AMS]
ALTER ROLE [db_datareader] ADD MEMBER [AMS]
ALTER ROLE [db_denydatawriter] ADD MEMBER [AMS]
GO

2. Select Add provider and choose Microsoft SQL Server from the drop-down.
3. Fill out the fields using information associated with your Microsoft SQL Server.
4. When finished, select Add provider . Continue to add additional providers as needed or select Review +
create to complete the deployment.

Next steps
Learn more about Azure Monitor for SAP Solutions
Quickstart: Deploy Azure Monitor for SAP Solutions
with Azure PowerShell
12/22/2020 • 3 minutes to read

This article describes how you can create Azure Monitor for SAP Solutions resources using the Az.HanaOnAzure
PowerShell module.
Caution

Azure Monitor for SAP Solutions is currently in public preview. This preview version is provided without a service
level agreement. It's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure
Previews.

Requirements
If you don't have an Azure subscription, create a free account before you begin.
If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect
to your Azure account using the Connect-AzAccount cmdlet. For more information about installing the Az
PowerShell module, see Install Azure PowerShell. If you choose to use Cloud Shell, see Overview of Azure Cloud
Shell for more information.

IMPORTANT
While the Az.HanaOnAzure PowerShell module is in preview, you must install it separately using the Install-Module
cmdlet. Once this PowerShell module becomes generally available, it becomes part of future Az PowerShell module releases
and available natively from within Azure Cloud Shell.

Install-Module -Name Az.HanaOnAzure

If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be
billed. Select a specific subscription using the Set-AzContext cmdlet.

Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000

Create a resource group


Create an Azure resource group using the New-AzResourceGroup cmdlet. A resource group is a logical container in
which Azure resources are deployed and managed as a group.
The following example creates a resource group with the specified name and in the specified location.

New-AzResourceGroup -Name myResourceGroup -Location westus2

SAP monitor
To create an SAP monitor, you use the New-AzSapMonitor cmdlet. The following example creates an SAP monitor for
the specified subscription, resource group, and resource name.

$Workspace = New-AzOperationalInsightsWorkspace -ResourceGroupName myResourceGroup -Name sapmonitor-test -Location westus2 -Sku Standard

$WorkspaceKey = Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName myResourceGroup -Name sapmonitor-test

$SapMonitorParams = @{
Name = 'ps-sapmonitor-t01'
ResourceGroupName = 'myResourceGroup'
Location = 'westus2'
EnableCustomerAnalytic = $true
MonitorSubnet = '/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/vnet-
sap/subnets/mysubnet'
LogAnalyticsWorkspaceSharedKey = $WorkspaceKey.PrimarySharedKey
LogAnalyticsWorkspaceId = $Workspace.CustomerId
LogAnalyticsWorkspaceResourceId = $Workspace.ResourceId
}
New-AzSapMonitor @SapMonitorParams

To retrieve the properties of an SAP monitor, you use the Get-AzSapMonitor cmdlet. The following example gets
properties of an SAP monitor for the specified subscription, resource group, and resource name.

Get-AzSapMonitor -ResourceGroupName myResourceGroup -Name ps-sapmonitor-t01

Provider instance
To create a provider instance, you use the New-AzSapMonitorProviderInstance cmdlet. The following example
creates a provider instance for the specified subscription, resource group, and resource name.

$SapProviderParams = @{
ResourceGroupName = 'myResourceGroup'
Name = 'ps-sapmonitorins-t01'
SapMonitorName = 'ps-sapmonitor-t01'
ProviderType = 'SapHana'
HanaHostname = 'hdb1-0'
HanaDatabaseName = 'SYSTEMDB'
HanaDatabaseSqlPort = '30015'
HanaDatabaseUsername = 'SYSTEM'
HanaDatabasePassword = (ConvertTo-SecureString 'Manager1' -AsPlainText -Force)
}
New-AzSapMonitorProviderInstance @SapProviderParams

To retrieve properties of a provider instance, you use the Get-AzSapMonitorProviderInstance cmdlet. The following
example gets properties of a provider instance for the specified subscription, resource group, SapMonitor name,
and resource name.

Get-AzSapMonitorProviderInstance -ResourceGroupName myResourceGroup -SapMonitorName ps-sapmonitor-t01 -Name ps-sapmonitorins-t01

Clean up resources
If the resources created in this article aren't needed, you can delete them by running the following examples.
Delete the provider instance
To remove a provider instance, you use the Remove-AzSapMonitorProviderInstance cmdlet. The following example
deletes a provider instance for the specified subscription, resource group, SapMonitor name, and resource name.

Remove-AzSapMonitorProviderInstance -ResourceGroupName myResourceGroup -SapMonitorName ps-sapmonitor-t01 -Name ps-sapmonitorins-t01

Delete the SAP monitor


To remove an SAP monitor, you use the Remove-AzSapMonitor cmdlet. The following example deletes an SAP
monitor for the specified subscription, resource group, and monitor name.

Remove-AzSapMonitor -ResourceGroupName myResourceGroup -Name ps-sapmonitor-t01

Delete the resource group


Caution

The following example deletes the specified resource group and all resources contained within it. If resources
outside the scope of this article exist in the specified resource group, they will also be deleted.

Remove-AzResourceGroup -Name myResourceGroup

Next steps
Learn more about Azure Monitor for SAP Solutions.
Azure Monitor for SAP solutions FAQ (preview)
12/22/2020 • 2 minutes to read

Frequently asked questions


This article provides answers to frequently asked questions (FAQ) about Azure Monitor for SAP solutions.
Do I have to pay for Azure Monitor for SAP Solutions?
There is no licensing fee for Azure Monitor for SAP Solutions.
However, customers are responsible for the cost of the managed resource group components.
In which regions is this service available for public preview?
For public preview, this service will be available in East US 2, West US 2, East US and West Europe.
Do I need to provide permissions to allow the deployment of managed resource group in my
subscription?
No, explicit permissions are not required.
Where does the collector VM reside?
At the time of deploying Azure Monitor for SAP Solutions resource, we recommend that customers choose
the same VNET for monitoring resource as their SAP HANA server. Therefore, collector VM is recommended
to reside in the same VNET as SAP HANA server. If customers are using non-HANA database, the collector
VM will reside in the same VNET as non-HANA database.
Which versions of HANA are supported?
HANA 1.0 SPS 12 (Rev. 120 or higher) and HANA 2.0 SPS03 or higher
Which HANA deployment configurations are supported?
The following configurations are supported:
Single node (scale-up) and multi-node (scale-out)
Single database container (HANA 1.0 SPS 12) and multiple database containers (HANA 1.0 SPS 12 or
HANA 2.0)
Auto host failover (n+1) and HSR
Which SQL Server versions are supported?
SQL Server 2012 SP4 or higher.
Which SQL Server configurations are supported?
The following configurations are supported:
Default or named standalone instances in a virtual machine
Clustered instances or instances in an AlwaysOn configuration when either using the virtual name of the
clustered resource or the AlwaysOn listener name. Currently no cluster or AlwaysOn specific metrics are
collected
Azure SQL Database (PaaS) is currently not supported
What happens if I accidentally delete the managed resource group?
The managed resource group is locked by default. Therefore, the chances of accidental deletion of the
managed resource group by customers are minuscule.
If a customer deletes the managed resource group, Azure Monitor for SAP Solutions will stop working. The
customer will have to deploy a new Azure Monitor for SAP Solutions resource and start over.
Which roles do I need in my Azure subscription to deploy Azure Monitor for SAP Solutions
resource?
Contributor role.
What is the SLA on this product?
Previews are excluded from service level agreements. Please read the full license terms in the Azure
Monitor for SAP Solutions marketplace image.
Can I monitor my entire landscape through this solution?
You can currently monitor HANA database, the underlying infrastructure, High-availability cluster, and
Microsoft SQL server in public preview.
Does this service replace SAP Solution Manager?
No. Customers can still use SAP Solution manager for Business process monitoring.
What is the value of this service over traditional solutions like SAP HANA Cockpit/Studio?
Azure Monitor for SAP Solutions is not HANA database specific. Azure Monitor for SAP Solutions also supports
AnyDB.

Next steps
Create your first Azure Monitor for SAP solutions resource.
